Title XIX of the Social Security Act established Medicaid as a federal-state partnership that finances health care for low-income individuals, including children, families, the aged, and the disabled. Medicaid is an open-ended entitlement program and provided health coverage for an estimated 53.9 million individuals in 2010. Within broad federal requirements, each state administers and operates its Medicaid program in accordance with a state Medicaid plan, which must be approved by CMS. A state Medicaid plan details the populations that are served, the categories of services that are covered (such as inpatient hospital services, nursing facility services, and physician services), and the methods for calculating payments to providers. The state Medicaid plan also describes the supplemental payments established by the state and specifies which providers are eligible to receive supplemental payments and what categories of service are covered. Any changes a state wishes to make in its Medicaid plan, such as establishing new payments or changing methods for developing provider payment rates, must be submitted to CMS for review and approval as a state plan amendment. States may also receive approval from CMS for a waiver from certain Medicaid requirements in order to conduct a Medicaid demonstration, and these demonstrations may include supplemental payments. These demonstrations allow states to test new approaches to deliver or pay for health services through Medicaid. Under certain demonstrations, a state may cover populations or services that would not otherwise be eligible for federal Medicaid funding under federal rules. Some states, including California and Massachusetts, have also in recent years been allowed to make supplemental payments under Medicaid demonstrations. The terms and conditions governing such demonstrations are specific to each demonstration. All states make supplemental Medicaid payments to certain providers. DSH payments are made to hospitals and cannot exceed the unreimbursed cost of furnishing inpatient and outpatient services to Medicaid beneficiaries and the uninsured. Non-DSH supplemental payments can be made to hospitals or other providers (such as nursing homes or groups of physicians) for any category of service provided on a fee-for-service basis. For example, a state might make non-DSH supplemental payments on a quarterly basis to county-owned nursing facilities that serve low-income populations to fill the gap between what regular Medicaid rates pay toward the cost of services and higher payments permitted through the UPL. Supplemental payments are typically made for services provided on a fee-for-service basis, rather than those provided through Medicaid managed care contracts. Non-DSH supplemental payments need to be approved by CMS. To obtain the federal matching funds for Medicaid payments made to providers, each state files a quarterly expenditure report to CMS—the CMS-64. This form compiles state payments in over 20 categories of medical services, such as inpatient hospital services and outpatient hospital services. States are required to report total DSH payments made to hospitals and mental health facilities separately from other Medicaid payments in order to receive reimbursement for them. From 2001 through 2009, when completing the CMS-64 to obtain federal matching funds for non-DSH supplemental payments, states combined their non-DSH supplemental payments with their regular payments—those made using states’ regular Medicaid payment rates. 
During this period, CMS requested that states report their non-DSH supplemental payments in a separate informational section of the CMS-64 that was not the basis for states’ receipt of federal matching funds. Instead, states received federal matching funds based on their reports of expenditure totals that included both regular and non-DSH supplemental payments. In 2008, we found that states reported making $6.3 billion in non-DSH supplemental payments during fiscal year 2006, but that not all states reported their non-DSH supplemental payments separately from other expenditures of the same type. Starting with the first quarter of fiscal year 2010, CMS’s new reporting procedures requested that states report certain non-DSH supplemental payments separately from their regular payments on the section of the CMS-64 used to claim federal matching funds. CMS continues to provide federal matching funds to states that report these payments in combination with regular payments on this form. The data CMS finalized for fiscal year 2010 show that states and the federal government spent at least $32 billion for DSH and non-DSH supplemental payments during fiscal year 2010, with the federal share of these payments totaling at least $19.8 billion. States reported $17.6 billion in DSH payments and $14.4 billion in non-DSH supplemental payments during fiscal year 2010, but state reporting of non-DSH supplemental payments separately from regular payments was incomplete, so the exact amount of non-DSH supplemental payments is unknown. States reported $17.6 billion in DSH payments during fiscal year 2010, with the federal government reimbursing states $9.9 billion for its share of these payments. Fifty of the 51 states reported making DSH payments during fiscal year 2010, with total reported payments ranging from about $650,000 for South Dakota to over $3.1 billion for New York. The 10 states reporting the largest total DSH payments in fiscal year 2010 accounted for more than 70 percent of the $17.6 billion nationwide total, and the 4 states with the largest total DSH payments—New York, California, Texas, and New Jersey—accounted for almost half (47 percent) of the nationwide total. In assessing the contribution of DSH payments to each state’s overall spending, we found that DSH payments as a percentage of states’ reported Medicaid payments varied considerably among the states. Among states that reported DSH payments, the percentage ranged from less than 1 percent (Arizona, Delaware, North Dakota, South Dakota, Wisconsin, and Wyoming) to 17 percent (New Hampshire). Figure 1 provides information on the amount of each state’s DSH payments and each state’s DSH payments as a percentage of its total Medicaid expenditures. (App. II lists each state’s reported DSH payments during fiscal year 2010, the federal share of those payments, the state’s total Medicaid payments, and each state’s total reported DSH payment as a percentage of the state’s total Medicaid payments and of total nationwide DSH payments.) The majority of DSH payments were to hospitals for traditional inpatient and outpatient services. During fiscal year 2010, 83 percent of the nationwide total of reported DSH payments ($14.7 billion) was paid to hospitals for traditional inpatient and outpatient services and 17 percent of the total ($2.9 billion) was paid to mental health facilities for inpatient and outpatient mental health services. 
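To make the report's recurring arithmetic concrete, the short sketch below works through the two calculations used throughout this section: a payment type's share of a state's total Medicaid payments, and the federal share implied by a state's federal matching rate (FMAP). The function name and all figures are hypothetical illustrations, not GAO's data or method.

```python
# Minimal sketch of the two recurring calculations in this section, using
# hypothetical figures rather than actual CMS-64/FMR data.

def payment_summary(payments: float, total_medicaid: float, fmap: float) -> dict:
    """Return a payment type's share of total Medicaid payments and the
    federal/state split implied by the state's Federal Medical Assistance
    Percentage (FMAP), expressed here as a fraction between 0 and 1."""
    return {
        "pct_of_total_medicaid": 100 * payments / total_medicaid,
        "federal_share": payments * fmap,
        "state_share": payments * (1 - fmap),
    }

# Hypothetical state: $1.2 billion in DSH payments out of $10 billion in
# total Medicaid payments, matched at a 56 percent FMAP.
print(payment_summary(1.2e9, 10e9, 0.56))
# DSH would be 12 percent of total payments, with a $672 million federal share.
```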
During fiscal year 2010, states separately reported making $14.4 billion in non-DSH supplemental payments (of which the federal share was $9.9 billion), primarily for inpatient hospital services. States were to report non-DSH supplemental payments on the CMS-64 separately from their regular payments for six categories of service. Thirty states separately reported non-DSH payments during fiscal year 2010, with reported payments ranging from $125,000 for Vermont to $3.1 billion for Texas. In assessing the contribution of non-DSH supplemental payments to each state’s overall spending, we found that non-DSH supplemental payments as a percentage of states’ Medicaid spending also varied considerably across the 30 states that separately reported these payments, ranging from 1 percent for Vermont to over 17 percent for Illinois. Figure 2 provides information on the amount of each state’s non-DSH supplemental payments and each state’s non-DSH supplemental payments as a percentage of its total Medicaid expenditures. (App. II lists each state’s reported non-DSH supplemental payments during fiscal year 2010, the federal share of those payments, the state’s total Medicaid payments, and each state’s total reported non-DSH supplemental payments as a percentage of the state’s total Medicaid payments and of total nationwide non-DSH supplemental payments.) Of the six categories of service for which states reported making non-DSH supplemental payments, states reported the largest amount of payments for inpatient hospital services. States reported $11 billion in non-DSH supplemental payments for inpatient services (with a federal share of $7.7 billion). States reported $1.8 billion in non-DSH supplemental payments for outpatient services (with a federal share of $1.15 billion). (See fig. 3.) The proportion of a state’s reported expenditures that were non-DSH supplemental payments varied across states and categories of service. In some states, non-DSH supplemental payments represented very little of the state’s reported expenditures for a category of service, while in other states, non-DSH supplemental payments represented more than one-third of the state’s reported expenditures for a category of service. For example, 27 states separately reported non-DSH supplemental payments for inpatient hospital services, and the percentage of their expenditures for inpatient hospital services that were non-DSH supplemental payments ranged from less than 1 percent (Virginia and Washington) to 48 percent (Tennessee); 13 states separately reported non-DSH supplemental payments for outpatient hospital services, and the percentage of their expenditures for outpatient hospital services that were non-DSH supplemental payments ranged from less than 1 percent (Texas) to 57 percent (Illinois); and 16 states separately reported non-DSH supplemental payments for physician and surgical services, and the percentage of their expenditures for physician and surgical services that were non-DSH supplemental payments ranged from less than 1 percent (Oklahoma) to 34 percent (West Virginia). See appendix II for more information about each state’s reported total and non-DSH supplemental payments for inpatient hospital services, outpatient hospital services, nursing facility services, physician and surgical services, other practitioners’ services, and intermediate care facility services. 
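The category-of-service percentages cited above follow the same pattern. A minimal sketch, again assuming made-up figures rather than the appendix II data:

```python
# For each category of service, compute what share of a state's reported
# expenditures were non-DSH supplemental payments. All figures are hypothetical.

categories = {
    # category: (total reported expenditures, non-DSH supplemental portion)
    "inpatient hospital": (4.0e9, 1.9e9),
    "outpatient hospital": (1.0e9, 0.20e9),
    "physician and surgical": (0.8e9, 0.05e9),
}

for name, (total, supplemental) in categories.items():
    share = 100 * supplemental / total
    print(f"{name}: {share:.0f}% of reported expenditures were supplemental")
```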
The exact amount of non-DSH supplemental payments nationwide is unknown, in part because not all states that made non-DSH supplemental payments in 2010 reported them on the CMS-64 separately from regular payments, and some states separately reported some but not all of their non-DSH supplemental payments. For example, Georgia reported $0 for non-DSH supplemental payments during fiscal year 2010, but according to CMS, it made non-DSH supplemental payments of $120.6 million for nursing home services during 2010. CMS officials told us that they are aware that some states did not separately report all of their non-DSH supplemental payments. Officials stated that they have taken, and are taking, steps to improve states’ reporting of non-DSH supplemental payments for the six categories of service. They told us that after revising the form CMS-64 to include lines for separate reporting of certain non-DSH supplemental payments, they monitored states’ reports of these payments and, as a result, they learned that some states had not reported these payments separately. They then took steps to improve states’ reporting of these payments, for example, by training state staff in the use of the revised form CMS-64 and asking regional CMS staff to work with states to identify and resolve reporting problems. CMS officials also noted, however, that some states encountered technical difficulties with their state databases. For example, CMS officials told us that the data systems used by some states in 2010 did not permit them to separate the non-DSH supplemental payments from their regular payments. CMS officials confirmed that states did not separately report all non-DSH supplemental payments in 2010 and acknowledged that CMS cannot definitively determine the extent to which reporting is incomplete. The 39 states that separately reported non-DSH supplemental payments during either 2006 or 2010 (or both) reported an increase of $8.1 billion in non-DSH supplemental payments during this period. Most of this increase was from 15 states that reported during both years, and most of the reported increase was for inpatient hospital services. However, because of the potential for underreporting of supplemental payments for one or both years, the extent of the actual increase and the contributing factors cannot be quantified. Our examination of information from CMS and from public sources about changes in 11 judgmentally selected states indicates that some states were making new and modified non-DSH supplemental payments during this period, contributing to the reported increase. Changes in reporting also contributed to the increase. The 39 states that separately reported non-DSH supplemental payments during either 2006 or 2010, or during both years, together reported $8.1 billion more in non-DSH supplemental payments during 2010 than 2006. Nineteen states separately reported some non-DSH supplemental payments during both years, with 15 of those states reporting more for these payments during 2010 and 4 of those states reporting less for these payments during 2010. Eleven states reported non-DSH supplemental payments separately only during 2010, and 9 states reported non-DSH supplemental payments separately only during 2006. (See fig. 4 and table 1.) Most of the $8.1 billion increase was from the 15 states that separately reported non-DSH supplemental payments during both years, with higher non-DSH supplemental payments reported during 2010. 
The increase in these states ranged from $1 million (Washington) to $2.3 billion (Texas). For the 4 states that separately reported lower non-DSH supplemental payments during 2010, the decrease ranged from $12 million (Oklahoma) to $608 million (North Carolina). The largest change in non-DSH supplemental payments separately reported during 2006 and 2010 was for inpatient hospital services. The net increase in these separately reported payments was $6.3 billion, and the number of states that separately reported non-DSH supplemental payments for inpatient hospital services increased from 23 during 2006 to 27 during 2010. (App. III lists the amounts each state reported separately for non-DSH supplemental payments during 2006 and 2010 and the categories of service for which they reported these payments.) Because of the potential underreporting of non-DSH supplemental payments during one or both of the years examined, the extent of the actual increase cannot be quantified. On the basis of some reports, Medicaid spending on hospital services is increasing, and growth in non-DSH supplemental payments has been cited as a contributing factor. A January 2012 article on the growth in U.S. health spending found that while overall Medicaid spending growth slowed in 2010, Medicaid spending growth on hospital services increased in 2010 compared to 2009. The researchers attributed the growth in Medicaid spending for hospital services, in part, to a large amount of non-DSH supplemental payments reported during the last quarter of calendar year 2010. A March 2012 report by the Medicaid and CHIP Payment and Access Commission found that states reported over $23 billion in non-DSH supplemental payments for hospital services during 2011. Information from CMS and from public sources about changes in 11 judgmentally selected states suggests that some increases from 2006 to 2010 in reported non-DSH supplemental payments were due to increases in payments states made after establishing new non-DSH supplemental payments or increasing their existing non-DSH supplemental payments. The available information suggests that changes to existing payments also resulted in some decreases from 2006 to 2010 in reported non-DSH supplemental payments, including, for example, when states terminated non-DSH supplemental payments. In recent years, states have submitted and received approval to implement new non-DSH supplemental payments, according to CMS officials. Available information, maintained by CMS and derived from state Medicaid plans from 11 selected states, indicates that new or modified supplemental payments made by states contributed to increased non-DSH supplemental payments. For example: Illinois reported $1.4 billion more for non-DSH supplemental payments for inpatient hospital services during 2010 than during 2006. From 2006 through 2010, Illinois established new non-DSH supplemental payments for inpatient hospital services and also modified several existing payments for these services. Taken together, these new and modified payments were estimated to result in an increase in Illinois’s supplemental payments for inpatient services by about $1.2 billion during fiscal year 2010. Colorado reported $411 million more for non-DSH supplemental payments for inpatient and outpatient hospital services during 2010 than during 2006. Colorado established a set of new non-DSH supplemental payments for inpatient and outpatient hospital services. 
These supplemental payments were to a variety of hospital types, including rural hospitals, hospitals with neonatal intensive care units, and state teaching hospitals. Effective on July 1, 2009, these new payments were estimated to result in an increase in payments of about $300 million during fiscal year 2010. Arkansas reported $173 million more for non-DSH supplemental payments for inpatient hospital services during 2010 than during 2006. Arkansas made new non-DSH supplemental payments for inpatient hospital services provided by private hospitals, and it also modified existing non-DSH supplemental payments for inpatient hospital services. Arkansas’s new supplemental payments, effective July 1, 2009, were estimated to increase the state’s supplemental payments for inpatient services by about $110 million during fiscal year 2010. South Carolina reported $39 million for non-DSH supplemental payments for nursing facility services during 2010, but did not report making such payments during 2006. South Carolina had suspended certain non-DSH supplemental payments for nursing facility services prior to fiscal year 2006, but it reinstated these payments, effective on October 1, 2008, with a slight change to payment qualification criteria. Reinstating these payments was estimated to increase payments by about $25 million during fiscal year 2010. According to the available information about changes in these 11 judgmentally selected states, some states’ non-DSH supplemental payments decreased from 2006 to 2010 because they terminated supplemental payments or made changes to their Medicaid programs that reduced supplemental payments. For example: North Carolina reported $607 million less in non-DSH supplemental payments for inpatient and outpatient hospital services in 2010 than in 2006. According to CMS, North Carolina discontinued making non-DSH supplemental payments to non-state government hospitals, effective on October 1, 2006. Discontinuation of these payments for inpatient and outpatient hospitals would have resulted in a reduction in payments. Georgia reported $221 million less in non-DSH supplemental payments for inpatient and outpatient hospital services in 2010 than in 2006. Georgia implemented a managed care program in 2007, and according to CMS, the state estimated that its supplemental payments (which generally can only be made for services provided on a fee-for-service basis) were reduced by more than $100 million per year as a result. Missouri reported paying $70 million in non-DSH supplemental payments for inpatient hospital services during 2006 and did not report making such payments during fiscal year 2010. According to CMS, the state reported that it did not make non-DSH supplemental payments for inpatient services during fiscal year 2010 because the State General Assembly did not approve funding for such payments. Changes in state reporting of non-DSH supplemental payments also contributed to differences in amounts between 2006 and 2010. In some cases, an apparent increase in non-DSH supplemental payments was due, at least in part, to more complete reporting of non-DSH supplemental payments in 2010 than in 2006. For example: Pennsylvania reported $0 for non-DSH supplemental payments during 2006 and $410 million for non-DSH supplemental payments for nursing facility services during 2010, for an apparent increase of $410 million in supplemental payments. However, Pennsylvania made, but did not separately report, non-DSH supplemental payments for nursing home services during 2006. 
South Carolina reported $0 for non-DSH supplemental payments for physician and surgical services during 2006 and $46 million for non-DSH supplemental payments for these services during 2010, for an apparent increase of $46 million in supplemental payments. However, CMS told us that South Carolina paid $43 million for non-DSH supplemental payments for physician and surgical services during 2006, so the actual increase in payments for these services from 2006 to 2010 was $3 million, not $46 million. In contrast, an apparent decrease in some states’ non-DSH supplemental payments was due, at least in part, to not reporting these payments separately during 2010. For example, as noted above, Georgia made, but did not separately report, non-DSH supplemental payments during 2010. States that did not separately report payments during 2010, but did separately report them in 2006, created the appearance of decreases in non-DSH supplemental payments. Medicaid supplemental payments can help ensure that providers make important services available to Medicaid beneficiaries. However, the transparency and accountability of these often very large payments have been lacking. Although CMS has instituted new reporting procedures for, and more complete reporting of, non-DSH supplemental payments, the exact amount of these payments is still not known because not all states have provided complete information as CMS requested during 2010. Nevertheless, as reporting of non-DSH supplemental payments becomes more complete, the significance of these payments, in terms of cost, growth, and contribution to total Medicaid payments for those providers receiving them, is becoming clearer. Identifying and monitoring Medicaid supplemental payments and ensuring that they, along with regular Medicaid payments, are consistent with federal requirements are complex tasks that will require continued vigilance by CMS. Ongoing federal efforts to improve the completeness of reporting of Medicaid supplemental payments are important for effective oversight and to better understand these payments’ role in financing Medicaid services. We provided a draft of this report to HHS for review. HHS stated that HHS and CMS will continue their ongoing efforts to improve states’ reporting of Medicaid supplemental payments. HHS’s letter is reprinted in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. This appendix provides information about our analyses of reported supplemental Medicaid expenditures and our analyses of information from selected states. Supplemental Medicaid payments include Disproportionate Share Hospital (DSH) payments to hospitals and other supplemental payments to hospitals or other providers, which we refer to as non-DSH supplemental payments. 
To determine what supplemental Medicaid payments states reported during fiscal year 2010 and how non-DSH supplemental payments reported during 2010 compared with those reported during 2006, we obtained and analyzed data about the Medicaid expenditures states reported during these 2 years. To examine reasons for differences between 2006 and 2010 in reported non-DSH supplemental payments, and to obtain additional information about states’ reports of these payments, we obtained information from the Centers for Medicare & Medicaid Services (CMS) and public sources about non-DSH supplemental payments in a nongeneralizable, judgmental sample of 11 states. In addition, we reviewed relevant federal laws, regulations, and guidance; our prior work on supplemental Medicaid payments; and other relevant documentation. We also interviewed officials from CMS. To determine what Medicaid payments states reported, we examined data from the standardized expenditure reports states submit to CMS on a quarterly basis using form CMS-64. States have 30 days after the end of a quarter to submit this form and must certify that the data are correct to the best of their knowledge. CMS reviews these reports and works with states to resolve any questions before certifying them as final. CMS transfers the certified, finalized data into a Financial Management Report (FMR) and makes annual data available on its website. CMS allows states to make adjustments to their prior CMS-64 submissions for up to 2 years. The annual FMR incorporates adjustments reported by the states by applying reported adjustments to the fiscal year during which they are reported, even if an adjustment corrects expenditures reported during an earlier fiscal year. Expenditures reported during fiscal year 2010. We obtained fiscal year 2010 FMR data from CMS on December 22, 2011. These data reflected adjustments to expenditures reported by states on the quarterly reports filed during fiscal year 2010 and included states’ reported total Medicaid expenditures, DSH expenditures, and any non-DSH expenditures the states reported separately from their other expenditures. Fiscal year 2010 was the most recent year for which certified data were available from all 50 states and the District of Columbia. To assess the reliability of the fiscal year 2010 data, we reviewed the steps CMS took to ensure the accuracy of expenditure data and we examined the data for outliers or other unusual values, which we discussed with CMS officials. We determined that the data were sufficiently reliable to describe the expenditures reported by the states during fiscal year 2010, although as discussed in this report, we found that states’ separate reporting of non-DSH supplemental payments was incomplete. Expenditures reported during fiscal year 2006. We obtained data about expenditures reported during fiscal year 2006 as part of work we reported in 2008. For that work, we obtained reported DSH payments from CMS’s FMR for fiscal year 2006, which included adjustments reported by states for prior years. To obtain data about non-DSH supplemental payments, we extracted expenditure data that had been reported on a section of the CMS expenditure report—the form CMS-64.9I—that states were to use for informational purposes during 2006 to identify non-DSH supplemental payments made under Medicaid’s Upper Payment Limit regulations. We adjusted these data to reflect adjustments states reported through their CMS-64.9I entries during fiscal year 2006 and through October 5, 2007. 
As described in our 2008 report, our assessment of the reliability of the fiscal year 2006 data included review of the steps CMS took to ensure the accuracy of expenditure data submitted to the Medicaid Budget and Expenditure System, comparison to data we obtained from a nongeneralizable sample of 5 states, and comparison to similar data published by the Urban Institute. We determined that the data were sufficiently reliable to describe the expenditures identified by the states during fiscal year 2006, although as we discussed in our earlier report, we concluded that states’ separate reporting of non-DSH supplemental payments was incomplete. Our analyses of reported DSH and non-DSH payments included identification of state-by-state and nationwide expenditures for DSH and non-DSH supplemental payments in both absolute (dollar amount) and relative (percentage) terms. DSH payments can be made to hospitals for traditional inpatient and outpatient services and to mental health facilities for inpatient and outpatient mental health services; we examined reported payments to both types of facilities. Non-DSH supplemental payments can be made for various categories of service (such as inpatient hospital services or physician and surgical services) provided by hospitals or other types of providers (such as nursing homes or intermediate care facilities); we examined payments for specific categories of services. Some states may not have separately reported all of their non-DSH supplemental payments during 2006 or 2010. We did not quantify the extent to which states did not separately report their supplemental payments. Therefore, we may not be capturing the full amount of states’ non-DSH supplemental payments or the degree to which these payments have changed over time. We did not examine whether changes in non-DSH supplemental payments were associated with changes in states’ regular Medicaid payments. To examine reasons for differences between 2006 and 2010 in reported non-DSH supplemental payments, and to obtain additional information about states’ reports of these payments, we obtained information from CMS and public sources about non-DSH supplemental payments in a judgmental sample of 11 states selected to include a mix of relevant characteristics. We selected a nongeneralizable sample of states, including some that separately reported non-DSH supplemental payments (1) in fiscal year 2006, but not 2010 (Georgia and Missouri); (2) in fiscal year 2010, but not 2006 (Maine, Massachusetts, and Pennsylvania); and (3) in both years (Arkansas, Colorado, Illinois, North Carolina, South Carolina, and Texas). These states differed in absolute and relative changes in reported non-DSH supplemental payments and changes in categories of service for which payments were reported. The information we used to select states included published information as well as preliminary information from CMS. (For more information about non-DSH supplemental payments made by these states in 2006 and 2010, see app. III.) For each of our selected states, we asked CMS to provide us with documentation, such as state plan amendments, that could shed light on observed differences from 2006 to 2010 in reported non-DSH supplemental payments. We reviewed this information, along with information from other public sources (such as states’ websites), to identify possible reasons for changes in reported payments and to develop rough estimates of the financial impact of planned changes. 
The state plan amendments states submit when proposing new supplemental payments, or modifications to existing payments, include an estimate of the financial impact of the state plan amendment. This estimate is intended to reflect the impact of the state plan amendment as a whole, even if the amendment covers several changes. CMS officials told us that these estimates are the best available estimates of the financial impact of changes states make to their state plans. We did not attempt to develop a full, dollar-by-dollar explanation of any state’s changes from 2006 to 2010 in reported amounts of non-DSH supplemental payments. We did not determine the accuracy of states’ estimates of the financial impact of their state plan amendments. Information from our judgmental sample of 11 states cannot be generalized to other states. This appendix provides state-by-state and nationwide information about the DSH and non-DSH supplemental Medicaid payments reported during fiscal year 2010 by the states and the District of Columbia. Table 2 shows states’ reported Medicaid payments, their DSH and non-DSH supplemental payments, the federal share of DSH and non-DSH supplemental payments, and the percentage of the state Medicaid payments that was for DSH and non-DSH supplemental payments. Table 3 shows states’ reported DSH payments, including payments for traditional and mental health hospitals (as dollar amounts and as a percentage of the state DSH payments), total DSH payments, and total DSH payments as a percentage of the national total for DSH payments. Table 4 shows states’ reported non-DSH supplemental payments, including the amounts states reported for certain categories of service (as dollar amounts and as a percentage of the state non-DSH supplemental payments), total non-DSH supplemental payments, and total non-DSH supplemental payments as a percentage of the national total for non-DSH supplemental payments. The six categories of service listed are those for which CMS requested information— inpatient hospital services, outpatient hospital services, nursing facility services, physician and surgical services, other practitioners’ services, and intermediate care facility services. Tables 5 through 10 provide additional information about states’ reported Medicaid payments for the six categories of service for which CMS obtained information about non-DSH supplemental payments— inpatient hospital services, outpatient hospital services, nursing facility services, physician and surgical services, other practitioners’ services, and intermediate care facility services. For each of these six categories, the tables provide the states’ reported Medicaid payments, non-DSH supplemental payments, the federal share of the non-DSH payments, and the percentage of the state Medicaid payments for this category that was for non-DSH supplemental payments. This appendix provides state-by-state and nationwide information about non-DSH supplemental Medicaid payments reported during 2006 by the states and District of Columbia in comparison to similar payments reported during 2010. Table 11 shows the total amount of non-DSH supplemental payments states reported during 2006 and 2010 and the change from 2006 to 2010 in these amounts, both as a dollar amount and as a percentage of the 2006 total. Table 12 shows states’ reported non-DSH supplemental payments for specific categories of service during 2006 and 2010. 
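For readers who want to reproduce the kind of 2006-to-2010 comparison summarized in tables 11 and 12, the sketch below classifies each state's reporting pattern and computes the apparent change; it also illustrates why an apparent increase or decrease can reflect a reporting change rather than a payment change. The state names and figures are hypothetical.

```python
# Classify each state's 2006-vs-2010 reporting pattern and compute the
# apparent change in separately reported non-DSH supplemental payments.
# All figures are hypothetical; 0.0 means no payments reported separately.

reported = {
    "State A": (0.5e9, 1.2e9),  # reported during both years
    "State B": (0.0, 0.4e9),    # reported during 2010 only
    "State C": (0.3e9, 0.0),    # reported during 2006 only
}

for state, (fy2006, fy2010) in reported.items():
    change = fy2010 - fy2006
    if fy2006 and fy2010:
        note = "reported both years"
    elif fy2010:
        note = "2010 only; apparent increase may reflect new separate reporting"
    else:
        note = "2006 only; apparent decrease may reflect nonreporting in 2010"
    print(f"{state}: change ${change / 1e9:+.1f} billion ({note})")
```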
In addition to the contact named above, Tim Bushfield, Assistant Director; Kristen Joan Anderson; Helen Desaulniers; Sandra George; Giselle Hicks; Roseanne Price; and Jessica C. Smith made key contributions to this report.

Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
Medicaid: Ongoing Federal Oversight of Payments to Offset Uncompensated Hospital Care Costs Is Warranted. GAO-10-69. Washington, D.C.: November 20, 2009.
Medicaid: CMS Needs More Information on the Billions of Dollars Spent on Supplemental Payments. GAO-08-614. Washington, D.C.: May 30, 2008.
Medicaid Financing: Long-Standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-650T. Washington, D.C.: April 3, 2008.
Medicaid Financing: Long-Standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-255T. Washington, D.C.: November 1, 2007.
Medicaid Financing: Federal Oversight Initiative Is Consistent with Medicaid Payment Principles but Needs Greater Transparency. GAO-07-214. Washington, D.C.: March 30, 2007.
Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006.
Medicaid Financing: States’ Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. GAO-05-748. Washington, D.C.: June 28, 2005.
Medicaid: States’ Efforts to Maximize Federal Reimbursements Highlight Need for Improved Federal Oversight. GAO-05-836T. Washington, D.C.: June 28, 2005.
Medicaid: Intergovernmental Transfers Have Facilitated State Financing Schemes. GAO-04-574T. Washington, D.C.: March 18, 2004.
Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004.
Major Management Challenges and Program Risks: Department of Health and Human Services. GAO-03-101. Washington, D.C.: January 2003.
Medicaid: HCFA Reversed Its Position and Approved Additional State Financing Schemes. GAO-02-147. Washington, D.C.: October 30, 2001.
Medicaid: State Financing Schemes Again Drive Up Federal Payments. GAO/T-HEHS-00-193. Washington, D.C.: September 6, 2000.
Medicaid: Disproportionate Share Payments to State Psychiatric Hospitals. GAO/HEHS-98-52. Washington, D.C.: January 23, 1998.
Medicaid: Disproportionate Share Hospital Payments to Institutions for Mental Diseases. GAO/HEHS-97-181R. Washington, D.C.: July 15, 1997.
Medicaid: States Use Illusory Approaches to Shift Program Costs to Federal Government. GAO/HEHS-94-133. Washington, D.C.: August 1, 1994.
GAO designated Medicaid a high-risk program because of concerns about its size, growth, and inadequate fiscal oversight. The program cost the federal government and states an estimated $383 billion in fiscal year 2010. In addition to regular Medicaid payments to providers, states make supplemental payments, including DSH payments, which are intended to offset the uncompensated costs of care provided to uninsured individuals and Medicaid beneficiaries. States also make other supplemental payments, which we refer to as non-DSH supplemental payments, to hospitals and other providers, for example, to help offset the costs of care provided to Medicaid beneficiaries. GAO and others have raised concerns about the transparency of states’ Medicaid supplemental payments. GAO was asked to provide information on supplemental payments. GAO examined (1) how much states reported paying in supplemental Medicaid payments during fiscal year 2010 and (2) how non-DSH supplemental payments reported during 2010 compared with those reported during 2006 and reasons for differences. GAO analyzed CMS’s Medicaid expenditure data for all states and information from CMS and other sources about non-DSH supplemental payments in a nongeneralizable sample of 11 states selected to capture a mix of relevant characteristics. In its comments on a draft of GAO’s report, HHS stated that HHS and CMS will continue their ongoing efforts to improve states’ reporting of supplemental Medicaid payments. States reported $32 billion in Medicaid supplemental payments during fiscal year 2010, but the exact amount of supplemental payments is unknown because state reporting was incomplete. On expenditure reports used to obtain federal funds filed with the Department of Health and Human Services’ (HHS) Centers for Medicare & Medicaid Services (CMS), states reported the following:

A total of $17.6 billion in Disproportionate Share Hospital (DSH) payments. The 10 states reporting the largest total DSH payments in fiscal year 2010 accounted for more than 70 percent of the nationwide total, with 4 states—New York, California, Texas, and New Jersey—accounting for almost half (47 percent). DSH payments as a percentage of total Medicaid payments varied considerably—ranging from 1 to 17 percent—among the 50 states that reported DSH payments.

A total of $14.4 billion in non-DSH supplemental payments to hospitals and other providers. Because not all states reported these payments separately, complete information is not available. Like DSH payments, non-DSH supplemental payments as a percentage of total state Medicaid spending varied considerably—also ranging from 1 to 17 percent—among the 30 reporting states. These payments can also constitute a large portion of states’ expenditures for particular categories of services, such as inpatient or outpatient hospital, nursing facility, or physician and surgical services. For example, non-DSH supplemental payments for inpatient hospital services ranged from 1 to 48 percent of state expenditures for these services among reporting states.

CMS officials told GAO that they were taking steps to improve states’ reporting of non-DSH supplemental payments, including working with states to train staff on reporting of payments and on identifying and resolving reporting problems. States’ reported non-DSH supplemental payments were more than $8 billion higher during 2010 than during 2006, the year for which GAO previously reported on the amount of these payments. 
More complete state reporting of payments and new and modified supplemental payments were factors in this increase. The information available to identify changes from 2006 to 2010 came from 39 states that separately reported non-DSH supplemental payments during either 2006 or 2010 or both. Most of the increase was from the 15 states that reported some payments in both years and reported higher non-DSH supplemental payments during 2010 than 2006. In addition, most of the reported increase was for inpatient hospital services. In 11 selected states, GAO found that new and modified supplemental payments contributed to some increases. For example, new and modified supplemental payments for hospital services in Colorado and Illinois are estimated to increase the states’ non-DSH supplemental payments by about $300 million and $1 billion per year, respectively. However, data limitations prevented GAO from quantifying the full extent to which the increase was attributable to new and modified payments. In light of the apparent increase in non-DSH supplemental payments, ongoing federal efforts to improve the completeness of reporting are important for effective oversight and to better understand the payments’ role in financing Medicaid services.
According to the State Department, no country poses a more immediate narcotics threat to the United States than Mexico. For over 20 years, the United States has supported the Mexican government in its counternarcotics efforts and has provided assistance to develop and strengthen Mexican law enforcement efforts to stop the flow of illegal drugs from Mexico into the United States. However, from 1993 to 1995, the government of Mexico decided to combat drug-trafficking activities with reduced assistance from the United States. This policy remained in effect until 1995, when the Mexican government recognized the increased threat posed by drug traffickers and again agreed to accept U.S. counternarcotics assistance for both law enforcement and military organizations involved in counternarcotics activities. In October 1995, the U.S. Secretary of Defense visited Mexico in an effort to strengthen military-to-military relationships between the two countries. As a result of this visit, the Mexican military agreed to accept U.S. counternarcotics assistance as part of the Mexican President’s decision to expand the role of the military in counternarcotics activities. During fiscal years 1996 and 1997, the Department of Defense (DOD) provided the Mexican military with $76 million worth of equipment and training from its inventories. Table 1 summarizes the types of counternarcotics assistance provided to or planned for delivery to the Mexican military for counternarcotics purposes during fiscal years 1996 and 1997. All of the helicopters and the C-26 aircraft were delivered to the Mexican military during 1996 and 1997. Mexico has also received some logistics and training support; however, DOD officials were unable to provide us with the exact level of support given because the data were not readily available. In fiscal year 1998, DOD plans to provide about $13 million worth of counternarcotics training assistance under section 1004 of the National Defense Authorization Act for Fiscal Year 1991, as amended, to Mexico’s military. In addition to the counternarcotics assistance provided by DOD, the Mexican military used its own funds to purchase two Knox-class frigates from the United States through the Foreign Military Sales program. These frigates were valued at about $7 million and were delivered to Mexico in August 1997. According to U.S. embassy officials, the Mexican Navy plans to use these frigates for performing various missions, including counternarcotics activities. Finally, during the same period, the State Department provided about $11 million to support Mexican law enforcement efforts. It plans to provide another $5 million in fiscal year 1998. The State Department, through its Bureau of International Narcotics and Law Enforcement Affairs, is responsible for formulating and implementing the international narcotics control policy, as well as coordinating the narcotics control assistance of all U.S. agencies overseas, including DOD. U.S. and Mexican counternarcotics objectives include (1) reducing the flow of drugs into the United States, (2) disrupting and dismantling narco-trafficking organizations, (3) bringing fugitives to justice, (4) making progress in criminal justice and anticorruption reform, (5) improving money-laundering and chemical diversion control, and (6) increasing mutual cooperation between the governments. In February 1998, the United States and Mexico issued a joint U.S.-Mexican drug strategy that addressed these objectives. 
In February 1998, the President certified that Mexico was fully cooperating with the United States in its counternarcotics efforts. Mexico is the principal transit country for cocaine entering the United States and, despite the Mexican government’s attempts to eradicate marijuana and opium poppy, Mexico remains a major source country for marijuana and heroin used in the United States. According to the State Department’s March 1998 International Narcotics Control Strategy Report, about 650 metric tons of cocaine were produced in South America in 1997. Mexico serves as the transshipment point for between 50 and 60 percent of the cocaine bound for the United States. Furthermore, DEA estimates that the majority of the methamphetamine available in the United States is either produced in Mexico and transported to the United States or manufactured in the United States by Mexican drug traffickers. In recent years, drug-trafficking organizations in Mexico have expanded their cocaine and methamphetamine operations. According to DEA, Mexican trafficking groups were once solely transporters for Colombian groups. However, in the early 1990s, major Mexican groups began receiving payment in product for their services. Thus, major Mexican organizations emerged as wholesale distributors of cocaine within the United States, significantly increasing their profit margin. According to DEA, Mexican drug-trafficking organizations are becoming stronger. DEA reports indicate that Mexican organizations have billions of dollars in assets and have at their disposal airplanes, boats, vehicles, radar, communications equipment, and weapons that rival the capabilities of some legitimate governments. One such Mexican organization generates tens of millions of dollars in profits per week. Profits of such magnitude enable the drug traffickers to pay enormous bribes—estimated for one organization to be as much as $1 million per week—to Mexican law enforcement officials at the federal, state, and local levels. DEA has reported that, because of the traffickers’ willingness to murder and intimidate witnesses and public officials, they are a growing threat to citizens within the United States and Mexico. According to the Justice Department, there has also been an increase in the number of threats to U.S. law enforcement officials in Mexico. Since our 1996 report, Mexico has undertaken actions intended to enhance its counternarcotics efforts and improve law enforcement and other capabilities. Some of the actions include (1) eradicating and seizing illegal drugs; (2) increasing counternarcotics cooperation with the United States; (3) initiating efforts to extradite Mexican criminals to the United States; (4) passing an organized crime law, as well as other legislation to enhance Mexico’s authority to prevent money laundering and the illegal use and diversion of precursor and essential chemicals; and (5) implementing measures aimed at reducing corruption within law enforcement organizations and increasing the role of Mexico’s military forces in law enforcement activities. Although these are positive efforts, the results of these actions are yet to be realized because (1) many of them have just been put in place and (2) some have not been broadly applied. The government of Mexico faces continuing challenges in trying to implement these efforts. 
These challenges include dealing with the lack of adequately trained and trustworthy law enforcement and judicial personnel, overcoming the lack of support for operations, coping with the inability of U.S. agents stationed in the United States to cross the border with firearms, and combating extensive corruption. During this decade, Mexico has eradicated large amounts of marijuana and opium poppy and has seized significant amounts of cocaine. Since 1990, Mexico has eradicated about 82,600 hectares (one hectare equals 2.47 acres) of marijuana. As figure 1 shows, there has also been a substantial decline in the amount of marijuana under cultivation—from a high of 41,800 hectares in 1990 to a low of 15,300 hectares in 1997. Despite Mexico’s success at reducing the amount of marijuana under cultivation, Mexico has not been as successful in reducing the amount of opium poppy cultivation. During 1990 through 1997, Mexico eradicated about 56,800 hectares of opium poppy. However, as figure 2 shows, the amount of opium poppy under cultivation in 1997 was almost 2,000 hectares greater than in the early 1990s. Mexico has also increased the amount of cocaine seized from 1994 to 1997—from 22.1 metric tons to 34.9 metric tons. However, as figure 3 shows, despite this increase, cocaine seizures are still substantially below the levels of 1990-93. Despite these eradication and seizure efforts, U.S. embassy documents indicate, and U.S. law enforcement and U.S. embassy officials in Mexico stated, that the amount of drugs flowing into the United States from Mexico remains essentially unchanged, and no major drug-trafficking organization has been dismantled. U.S. embassy officials estimated that Mexican cocaine seizures represent less than 10 percent of the total amount of cocaine flowing through Mexico. In 1996, we reported that cooperation between the United States and Mexico on counternarcotics activities was beginning to occur. Cooperation was taking place through actions such as the establishment of a high-level contact group to review drug control policies, enhance cooperation, develop new strategies, and devise a new action plan. Since then, a number of activities have been under way. For example, the high-level contact group on drug control, composed of senior officials from both governments responsible for drug control, has met five times. Results of these meetings include the following:

A U.S.-Mexico Binational Drug Threat Assessment was issued in May 1997 that addressed illegal drug demand and production, drug trafficking, money laundering, and other drug-related issues.

A joint U.S.-Mexico Declaration of the Alliance Against Drugs was issued in May 1997 that included pledges from both governments to work toward reducing demand, production, and distribution; improving interdiction capacity; and controlling essential and precursor chemicals, among other issues.

A joint U.S.-Mexican binational drug strategy was issued in February 1998 that identified 16 objectives that both countries seek to achieve in their efforts to reduce illegal drug-trafficking activities.

In September 1997, ONDCP reported that the progress made by the high-level contact group is largely attributable to cooperative efforts that frequently occur within lower-level working groups. One such effort is the senior law-enforcement plenary group that meets about three times annually and is composed of senior law enforcement personnel from each country. These groups have addressed a variety of issues. For example, senior-level U.S. 
law enforcement agency officials worked closely with Mexican officials in providing technical assistance during the drafting of Mexico’s anti-money-laundering and chemical control laws. The Mexican government has taken a number of legislative and executive actions to strengthen Mexican counternarcotics activities. These involve starting extradition initiatives, passing various laws designed to strengthen Mexico’s ability to reduce various illegal drug-related activities, and instituting several anticorruption activities such as reorganizing law enforcement agencies and instituting a screening process for law enforcement personnel. However, the government of Mexico faces numerous challenges in implementing these actions. The United States and Mexico have had a mutual extradition treaty since 1980. Although no Mexican national has ever been surrendered to the United States on drug-related charges, since 1996 Mexico has approved the extradition of 4 of 27 Mexican nationals charged with drug-related offenses to the United States. Two of these are currently serving criminal sentences in Mexico, and the other two are appealing their convictions in Mexico. The remaining drug-related extradition requests include 5 persons currently under prosecution in Mexico, 14 persons still at large, and 4 others. According to U.S. embassy officials, it is not clear whether any Mexican national will be surrendered on such charges before the end of 1998 because of Mexico’s lengthy legal processes. Another example of bilateral extradition efforts is the November 1997 signing of a U.S.-Mexico “temporary extradition protocol.” This protocol will allow suspected criminals who are charged in both countries to be temporarily surrendered for trial in either country while evidence is current and witnesses are available. The protocol is not yet in effect because it requires legislative approval in both the United States and Mexico. U.S. officials from the Departments of State and Justice stated that they do not know when this protocol will be sent to the countries’ Congresses for ratification. In November 1996, Mexico passed an organized crime law that represents a major step in Mexico’s law enforcement capabilities by providing legal authority for Mexican law enforcement organizations to employ modern techniques to combat crime. These include provisions to use sentencing concessions that equate to plea bargaining to obtain information on other suspects, provide rewards and protection to persons who give information to law enforcement officials, establish witness secrecy and protection, allow undercover operations, and permit court-authorized wiretaps. The law also has some provisions for asset seizures and forfeitures. Although the law provides the law enforcement community with the tools necessary to fight organized crime, including drug trafficking, it has no provisions allowing the seizure of assets of a suspected criminal who has either died or fled Mexico. Thus, in some instances, Mexican law enforcement agencies are limited in their ability to fully pursue suspected drug traffickers. Furthermore, according to U.S. and Mexican officials, Mexico needs to develop a cadre of competent and trustworthy judges and prosecutors that law enforcement organizations can rely on to effectively carry out the provisions of the organized crime law. For example, DEA reported that the lack of judicial support has frustrated implementation of the wire-tapping aspect of the law. 
The impact of the organized crime law is not likely to be fully evident for some time. Mexican and U.S. officials told us that the process of conducting investigations is inherently lengthy and that the capabilities of many Mexican personnel who are implementing and enforcing the law are currently inadequate. At present, agencies within the Mexican government are in the early stages of carrying out and enforcing the law. Mexican agencies have initiated some cases and are currently conducting a number of investigations under the new law. In addition, the Department of Justice reported that, by using Mexico’s organized crime law in conjunction with the U.S.-Mexico Mutual Legal Assistance Treaty, cooperating witnesses have been transferred from prisons in Mexico to the United States to testify in U.S. criminal proceedings. Although some guidelines and policies have been established, additional ones still need to be developed. For example, some units of the Mexican Attorney General’s Office are unable to use important investigative tools such as plea bargaining and court-authorized wiretaps because guidelines and policies have not yet been established. Several U.S. agencies are assisting Mexico with training and technical assistance to implement the law and improve institutional capabilities. For example, the Justice Department is providing assistance designed to strengthen the investigative capabilities of Mexican police and prosecutors. In addition, the U.S. Agency for International Development has judicial exchange programs and conducts seminars and training courses for Mexican federal and state judges. Also, the State Department plans to spend a total of about $3 million during fiscal years 1997 and 1998 to train judges and other law enforcement personnel and to procure computers and other equipment for law enforcement and judicial institutions. According to the State Department, Mexico has become a major money-laundering center. Drug cartels launder the proceeds of crime in legitimate businesses in both the United States and Mexico, favoring transportation and other industries that can be used to facilitate drug, cash, and arms smuggling and other illegal activities. Mexico has taken actions to enhance its capacity to combat money laundering. In May 1996, money laundering was made a criminal offense punishable by up to 22 years in prison. Prior to May 1996, money laundering was a tax offense—a civil violation—punishable by only a fine. In March 1997, Mexico issued regulations requiring the reporting of transactions over $10,000 (in U.S. dollars) and of suspicious voluntary transactions, as well as the obtaining and retaining of information about customers’ financial institution accounts. However, U.S. and Mexican officials are concerned that the law lacks some important provisions. For example, financial institutions are not required to obtain and retain account holders’ information for transactions below the $10,000 level, thus providing no protection against “structuring.” In addition, there is no requirement for reporting outbound currency leaving the country. As of December 1997, the Mexican government had initiated 27 money-laundering cases since the new requirements went into effect. One of these cases was prosecuted under the organized crime law, and the remaining 26 cases are still under investigation. 
In the one case that was prosecuted, the charges were dismissed because a federal judge ruled that there was inadequate evidence linking the money to an illegal activity. The Mexican government has appealed the judge's decision. The United States is assisting Mexico's money-laundering control efforts. For example, the State Department will spend a total of about $500,000 during fiscal years 1997 and 1998 to provide computer systems and training for personnel responsible for enforcing the money-laundering control requirements. Mexico established trafficking in precursor and essential chemicals as a criminal offense in May 1996. These chemicals can be used in the production of heroin, cocaine, or synthetic drugs of abuse. Although some chemicals that the United Nations recommends be controlled were not included in the May 1996 law, Mexico passed additional legislation in December 1997 to cover them. The new legislation brought Mexico into compliance with the 1988 United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. (See app. I for a list of these chemicals.) In addition, Mexico has taken further action to control chemicals by limiting their legal importation to eight ports of entry and by imposing regulatory controls over the machinery used to manufacture tablets or capsules. The impact of the December 1997 chemical control law is not yet evident because of its recent passage. Implementation of the law, including the drafting of implementing regulations and the development of an administrative infrastructure for enforcement, is under way. The United States has provided technical assistance and training to Mexico for establishing and carrying out the law. In addition, the State Department plans to spend about $400,000 during fiscal years 1997 and 1998 to train government personnel in the safe handling and disposal of seized chemicals. In September 1996, the President of Mexico publicly acknowledged that corruption is deeply rooted in Mexican institutions and in the general social conduct of the nation. He added that the creation of a new culture of respect for law must start with public officials. He also affirmed his administration's intent to gradually eliminate official corruption by temporarily increasing the role of the military in civilian law enforcement matters and by implementing anticorruption reforms in law enforcement. Mexico has initiated several actions intended to reduce corruption and reform civilian law enforcement agencies. In 1996, Mexico's Office of the Attorney General began a reorganization to reduce corruption in Mexican law enforcement agencies. As part of this action, the State Department reported that over 1,250 officials had been dismissed for incompetence and/or corruption. In February 1997, the Mexican general who headed the National Institute for Combat Against Drugs, the Mexican equivalent of DEA, was arrested for corruption. Subsequently, in April 1997, Mexico's Attorney General dissolved the National Institute for Combat Against Drugs, dismissed a number of its employees, and established a new organization known as the Special Prosecutor for Crimes Against Health to replace the Institute. Within the Special Prosecutor's Office, there are two special units: the Organized Crime Unit and the Bilateral Task Forces.
The Organized Crime Unit, with an authorized strength of 300, was established under the organized crime law to conduct investigations and prosecutions aimed at criminal organizations, including drug-trafficking activities. The Bilateral Task Forces, with an authorized strength of 70, are responsible for investigating and dismantling the most significant drug-trafficking organizations along the U.S.-Mexican border. The Bilateral Task Forces have offices in Tijuana, Ciudad Juarez, and Monterrey, with suboffices in several other locations within Mexico. Also beginning in 1997, Mexico's Attorney General instituted a screening process that is supposed to cover all Attorney General personnel, including those who work for the Special Prosecutor, the Organized Crime Unit, and the Bilateral Task Forces. This process consists of conducting personal background and financial checks, performing medical and psychological screening, requiring urinalysis, and conducting regular polygraph testing. U.S. agencies are supporting this initiative by providing equipment, training, and technical assistance. However, U.S. embassy officials stated that the screening requirements do not apply to judges, most units of the military, and other key law enforcement organizations involved in counternarcotics-related activities. Finally, the Mexican President expanded the role of the Mexican military in undertaking some counternarcotics activities. The Mexican military, in addition to eradicating marijuana and opium poppy, has also taken over some law enforcement functions. For example, in 1997, airmobile special forces units became operational to assist and enhance the Mexican government's counternarcotics capabilities. These units have been used to patrol streets in certain Mexican cities and search for drug kingpins. Although officials from the Departments of State and Justice and the U.S. embassy believe these actions show Mexico's commitment to disrupting and dismantling drug-related activities in Mexico, there remain unresolved operational and resource issues that hamper counternarcotics efforts. These include the following. U.S. embassy and Mexican officials stated that the Special Prosecutor's Office and the special units suffer from a shortage of trained and appropriately screened personnel. In December 1997, DEA reported that 796 of the Special Prosecutor's Office's authorized strength of 3,000 (about 27 percent) had passed the screening process and that 84 of the special units' authorized strength of 370 (about 23 percent) had done so. Mexican officials stated that some personnel who failed the screening process are still working in the Special Prosecutor's Office but have been placed in nonsensitive positions. U.S. embassy officials expressed concern about having such personnel in the office. In addition, according to the State Department, personnel who have passed the screening process often lack law enforcement experience. The special units also face operational and support problems. These problems include inadequate Mexican government funding for equipment, fuel, and salary supplements for personnel assigned to the special units and a lack of standard operating procedures. The Bilateral Task Forces have yet to complete any successful investigation of a major trafficking group. DEA has reported that the operations of the Bilateral Task Forces have been hampered because U.S.-based law enforcement agents assigned to the Task Forces cannot carry firearms into Mexico.
According to the Justice Department, this exposes DEA agents to a higher level of danger because of the significant threat posed by Mexican drug trafficking organizations. Attracting and retaining competent and trustworthy law enforcement personnel is also difficult. Low salaries of law enforcement officers increase their susceptibility to corruption, and many Mexican law enforcement officers have little job security. According to U.S. embassy officials, most officers are essentially political appointees who are replaced after each election because Mexico has no career "civil service" within law enforcement organizations. In addition, Mexico lacks a cadre of judges and prosecutors that law enforcement organizations can rely on to effectively carry out the provisions of the organized crime law. Moreover, the establishment of screening procedures and the involvement of the military cannot ensure that corruption will not continue to be a significant impediment to U.S. and Mexican counternarcotics efforts. For example, in February 1998, the U.S. embassy reported that three officials who had passed the screening process had been arrested for illegal drug-related activities. This report also noted that five Mexican generals had been arrested during the past year for illegal drug-related activities. One of these generals was arrested for offering another general about $1.5 million per month on behalf of a major drug-trafficking organization, according to DEA. Between 1996 and 1997, the United States provided the Mexican military with $76 million worth of assistance, including 73 UH-1H helicopters, spare parts, 4 C-26 aircraft, and Navy training, to enhance the counternarcotics capabilities of Mexico's military. In addition, the Mexican Navy purchased two Knox-class frigates under the U.S. Foreign Military Sales Program. The usefulness of the 73 UH-1H helicopters is limited because they cannot perform some counternarcotics missions and lack adequate logistical support. Available evidence also suggests that there was inadequate planning and coordination associated with the C-26 aircraft and Knox-class frigates. Neither the aircraft nor the frigates are currently being used. In September 1996, the President approved the transfer of 73 UH-1H helicopters and 2 years' worth of spare parts under the section 506(a)(2) drawdown to enhance the mobility of 12 special Mexican Army units involved in interdicting drug-trafficking activities. However, the extent to which the helicopters can assist the Mexican government in its counternarcotics efforts is not clear. No information was available on the extent to which the helicopters were being used to support the special Army units. We also found that the UH-1Hs have limited capability to conduct certain types of operations. The U.S. embassy reported in August 1997 that the UH-1Hs are of limited utility because the helicopters' operational capability is significantly reduced at altitudes above 5,000 feet. Except for Mexico's coastal areas, almost all of the country's land area, including the areas where most drug-trafficking activities take place and most opium poppy is cultivated, lies above this level. Available information indicates that the Mexican military has used the helicopters primarily for other counternarcotics missions such as troop transport for interdiction and manual eradication forces, logistics support, and aerial reconnaissance. DOD included supplies valued at $12 million under the 506(a)(2) drawdown authority to provide logistical support for the helicopters.
This package was based on a U.S. Army assumption that the Mexican military would follow the U.S. Army flight standard of 14.5 hours per month. DOD and U.S. embassy officials stated that their goal is to achieve an operational rate of 70 percent for the 73 helicopters. Since the helicopters were delivered to Mexico, however, their operational rates have been low because of overuse of the available aircraft and maintenance problems. According to U.S. embassy reports, the Mexican military's operational rates for the UH-1H helicopters varied between 35 percent and 58 percent from February 1997 through January 1998. Key elements of the logistical support package were not provided on a timely basis. According to the U.S. embassy, the U.S. Army delivered six aviation tool kits in January 1997. However, in April 1997, the U.S. embassy reported that the kits and their contents were incomplete and useless. In December 1997, we visited the Mexican air base where the kits are located and found that they still lacked a number of the tools needed to make them useful for maintaining the UH-1H helicopters. According to U.S. embassy military officials, many of the spare parts contained in the support packages are now being delivered. Moreover, end-use monitoring reports from the U.S. embassy and information supplied by the Defense Security Assistance Agency showed that the Mexican military was flying the operational helicopters an average of 50 hours per month, more than three times the 14.5-hour planning standard. As a result, the Mexican military used up the spare parts that had been provided more rapidly than intended. In 1997, the Mexican military requested, and DOD subsequently approved, $8 million in additional counternarcotics assistance authorized under section 1031 of the National Defense Authorization Act for Fiscal Year 1997 for additional spare parts because of the UH-1Hs' heavy use. Even with this additional support, U.S. embassy officials stated that the amount of spare parts is not adequate to maintain the fleet of helicopters for any significant length of time. DOD and U.S. embassy officials are concerned that once U.S. logistical assistance is exhausted, the Mexican military will be unable to provide any additional support because of budgetary constraints. A Mexican Air Force official also stated that the Mexican military has no plans to provide the large sums of funding needed to support the helicopters and is counting on the United States to do so. The U.S. embassy has estimated that it will take about $25 million annually to support the UH-1H fleet and that the Mexican military has no plans to provide this level of support. In June 1998, U.S. embassy military officials told us that, due to the costly operational expenses and Mexican funding constraints, the UH-1H program has a high potential for complete mission failure. DOD policy is to ensure that countries receiving assistance are made aware of and given the opportunity to plan for and obtain all support items, services, and training needed to operate, maintain, and sustain any equipment. This approach is aimed at ensuring that all material, training, and services offered to a recipient country are scheduled and delivered in a logical sequence. We found that DOD's policy was not followed in providing the C-26 aircraft. Moreover, DOD fell short in planning and coordinating the delivery and training support for the two Knox-class frigates with the U.S. embassy and the Mexican Navy.
The four C-26 aircraft were originally included as part of the September 1996 506(a)(2) drawdown package to enhance Mexico's surveillance capabilities. The C-26 aircraft were added to the package for Mexico by the National Security Council only 3 days before the package was provided to the U.S. President for his approval. The aircraft were delivered to Mexico in September and October 1997. As a result of this short time frame, DOD and the U.S. embassy did not have adequate time to plan and coordinate the provision of the C-26 aircraft. DOD officials stated that they had no input into the decision to provide these aircraft prior to their inclusion by the National Security Council. They indicated that, at the time, the Mexican military had not identified a need for the C-26 aircraft. These officials also stated that they did not identify the level of operational support needed to use the aircraft. Although the C-26 aircraft were originally intended to provide Mexico with a surveillance-capable aircraft, no C-26 aircraft with this capability were available under the drawdown. DOD noted that the Mexican military was aware that it would receive the C-26 aircraft without the sophisticated surveillance equipment. DOD and the State Department estimate that it will cost the Mexican military about $3 million to reconfigure each aircraft and as much as $2 million annually to operate and maintain the aircraft. According to DOD, the Mexican military has indicated that it has no plans to invest in U.S. surveillance equipment. Further, U.S. embassy military officials stated that, as of June 1998, the Mexican Air Force had not used these aircraft for any purposes, including counternarcotics, because the Mexican military has not obtained the contractor support needed to maintain the aircraft. The United States provided the Mexican Navy with two Knox-class frigates that arrived in Mexico in August 1997. The Mexican Navy procured these ships, using its own funds, through the Foreign Military Sales program. The value of these ships was about $7 million. According to U.S. embassy officials, the Mexican Navy plans to use these ships to perform a variety of missions, including counternarcotics operations. We found that there was limited understanding among the U.S. Navy, DOD, the U.S. embassy, and the Mexican Navy regarding the condition that the two frigates would be in when they were delivered. U.S. Navy policy states that ships are to be transferred to foreign countries through the Foreign Military Sales program in "as-is, where-is" condition. U.S. Navy officials said that the two ships purchased by the Mexican Navy had been deactivated and in dry dock for 6 years before the Mexican Navy inspected the ships and sought their subsequent transfer. DOD said that the Mexican Navy was aware that certain equipment would not be provided and that the ships would not be operational when delivered. However, our review of U.S. embassy reports and discussions with U.S. embassy military officials indicate that the Mexican Navy believed that certain types of equipment would be provided when the ships were delivered. These reports indicate that the frigates could not be activated when they were delivered because they lacked the test kits needed to ensure safe operation of the propulsion systems. U.S. Navy officials estimate that the Mexican Navy will have to pay about $400,000 to procure these kits and that it will take the Mexican Navy about 2 years to obtain the kits once a procurement action is initiated.
These officials also stated that other parts of the ships will have to be refurbished before the ships can be reactivated. According to DOD, the Mexican Navy informed the department on April 6, 1998, that it planned to reactivate the two ships during the summer of 1998. Part of the Mexican plan includes the purchase of a third Knox-class frigate, which the Mexican Navy intends to use as a source of parts and spares for the first two ships. On June 3, 1998, a U.S. embassy military official told us that the reactivation date for the two Knox-class frigates had slipped to at least the fall of 1998. We also found that the training was not well coordinated between the U.S. Navy and DOD. In 1997, DOD provided about $1.3 million worth of training to about 110 Mexican Navy personnel on how to operate and maintain the Knox-class frigates. These personnel will be used to train additional Mexican Navy personnel who will be assigned to the vessels. According to U.S. embassy military and DOD officials, the training occurred between February 1997 and March 1998. In commenting on a draft of this report, DOD acknowledged that the training was scheduled even though there was no clear commitment on the part of the Mexican Navy as to when the ships would be activated. Furthermore, DOD officials told us that they agreed to provide the training without knowing that the U.S. Navy had delivered ships that were not operational. U.S. embassy military officials stated that the Mexican Navy will reassign these personnel, thus making them potentially unavailable if and when the ships finally are activated. DOD noted that, in its view, the training was not a wasted effort because it provided the Mexican Navy with a cadre of trained naval personnel and expanded cooperation with their U.S. counterparts. Without performance measures of effectiveness, it is difficult for decisionmakers to evaluate the progress that the United States and Mexico are making to reduce the flow of illegal drugs into the United States. We have previously noted the need for ONDCP to develop drug control plans that include measures to allow it to assess the effectiveness of antidrug programs. While the United States and Mexico issued a joint antidrug strategy in February 1998, it does not contain performance measures. It does have 16 general objectives, such as reducing the production and distribution of illegal drugs in both countries and focusing law enforcement efforts against criminal organizations. Although this strategy is indicative of increased U.S.-Mexican cooperation, it lacks specific, quantifiable performance measures and milestones for assessing progress toward achieving these objectives. State Department officials said that the bilateral process of establishing performance measures and milestones is incremental and will be addressed during 1998. ONDCP officials said that they plan to issue specific performance measures and milestones for the binational strategy by the end of this year. The effectiveness of some U.S. counternarcotics assistance to the Mexican military was limited because of inadequate planning and coordination, an issue that we have reported on in the past. We continue to believe that counternarcotics assistance, particularly that provided under section 506(a)(2), should be better planned and coordinated.
Thus, we recommend that the Secretary of State, in close coordination with the Secretary of Defense and the National Security Council, take steps to ensure that future counternarcotics assistance provided to Mexico, to the maximum extent possible, meets the needs of the Mexican military and that adequate support resources are available to maximize the benefits of the assistance. In written comments on a draft of this report (see app. II), DOD generally concurred with the report and our recommendation. However, DOD stated that the report's representation of DOD's counternarcotics assistance provided to the Mexican military required clarification. Where appropriate, we have added information on DOD's roles and the circumstances surrounding the provision of the helicopters, aircraft, and frigates. DOD noted that these initiatives and all other DOD-provided counterdrug activities are the result of careful planning and coordination between DOD, its federal counterparts, the U.S. embassy in Mexico, and Mexican government and military officials. It further stated that while each case has some aspects that could have been better coordinated, the overall results of the transactions and the broader U.S.-Mexican military-to-military coordination are very beneficial to building trust and confidence between the two countries engaged in the fight against drugs. We agree that U.S.-Mexico cooperation and U.S. counterdrug assistance have been beneficial as the two countries strive to combat drug-trafficking activities. However, our analysis shows that weaknesses in planning and coordination adversely affected the usefulness of certain key items in the specific assistance transactions we examined. The equipment provided did not meet a specific counternarcotics need, could not perform required missions, was inoperable, or lacked adequate logistical support. Moreover, DOD's position is not supported by the events surrounding the provision of training to the Mexican Navy. While this training may be valuable in improving the military-to-military relationship between the United States and Mexico, its value in improving the counternarcotics capabilities of the Mexican Navy is clearly limited. We continue to believe that improvements in planning and coordination are necessary to ensure that the Mexican military realizes the full benefits of this assistance. The Departments of Justice and State provided oral comments to clarify information contained in the report. We have incorporated these as appropriate. To examine the nature of Mexico's drug threat, we received briefings from U.S. law enforcement, intelligence, and military officials and reviewed and analyzed documentation in Washington, D.C., and at the U.S. embassy in Mexico. To address Mexico's progress in improving its counternarcotics efforts, we met with officials from U.S. agencies in Washington, D.C., and at the U.S. embassy in Mexico. Specifically, in Washington, D.C., we reviewed and analyzed strategic and operational planning documents, cables, and correspondence at the Departments of State, the Treasury, and Justice; the U.S. Customs Service; DEA; the Federal Bureau of Investigation; the U.S. Coast Guard; and ONDCP. In addition, at the U.S. embassy in Mexico City, we interviewed U.S. embassy officials, including the Chargé d'Affaires, and personnel from the Narcotics Affairs Section, DEA, the Federal Bureau of Investigation, the U.S. Customs Service, and the Department of the Treasury.
We reviewed and analyzed planning documents, cables, and correspondence regarding the progress that Mexico was making in improving its counternarcotics efforts. To assess the issues related to the provision of U.S. counternarcotics assistance to the Mexican military, we met with DOD officials from the Office of the Coordinator for Drug Enforcement Policy and Support; the Defense Security Assistance Agency; and the Departments of the Army, Navy, and Air Force. At the U.S. embassy in Mexico City, we interviewed U.S. military personnel from the Military Liaison Office and the Office of the Defense Attaché. We reviewed and analyzed all reports, cables, and correspondence provided by the U.S. embassy and DOD regarding how U.S.-provided counternarcotics assistance was being used and problems associated with maintaining this assistance. To determine how the U.S. government plans to assess the effectiveness of U.S. and Mexican counternarcotics efforts, we interviewed officials from ONDCP and the Department of State. We reviewed and analyzed documents and correspondence related to the status of developing performance measures for evaluating the effectiveness of counternarcotics efforts with Mexico. While in Mexico, we also interviewed Mexican officials from the Ministries of Treasury and Foreign Affairs and the Office of the Attorney General to obtain their views on the issues discussed in this report. We also visited with Mexican police officials at their maintenance facility in Mexico City and with Mexican Air Force personnel at their maintenance facility in Culiacan, Mexico, to determine how the police and Air Force were maintaining the UH-1H helicopters. Finally, we analyzed Mexican reports and other documents relating to the progress that Mexico was making to reduce the flow of drugs into the United States and Mexican military reports addressing operational readiness and issues relating to the delivery of U.S.-provided assistance. We conducted our review between September 1997 and April 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other congressional committees; the Secretaries of State, Defense, and Treasury; the U.S. Attorney General; the Administrator, DEA; and the Directors of ONDCP and the Federal Bureau of Investigation. Copies will also be made available to other interested parties upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-4128. This report was prepared under the direction of Jess Ford. The major contributors to this report were Ronald Kushner, Allen Fleener, Ronald Hughes, José Peña, and George Taylor. Numerous precursor and essential chemicals are used in the illicit production of illegal drugs. Under the chemical control legislation enacted in December 1997, Mexico controls these precursor and essential chemicals.
Drug Control: Status of Counternarcotics Efforts in Mexico (GAO/T-NSIAD-98-129, Mar. 18, 1998).
Drug Control: Observations on Counternarcotics Activities in Mexico (GAO/T-NSIAD-96-239, Sept. 12, 1996).
Drug Control: Counternarcotics Efforts in Mexico (GAO/NSIAD-96-163, June 12, 1996).
Drug Control: Observations on Counternarcotics Efforts in Mexico (GAO/T-NSIAD-96-182, June 12, 1996).
Drug War: Observations on U.S. International Drug Control Efforts (GAO/T-NSIAD-95-194, Aug. 1, 1995).
Drug War: Observations on the U.S. International Drug Control Strategy (GAO/T-NSIAD-95-182, June 27, 1995).
Drug Control: Revised Drug Interdiction Approach Is Needed in Mexico (GAO/NSIAD-93-152, May 10, 1993).
Drug Control: U.S.-Mexican Opium Poppy and Marijuana Aerial Eradication Program (GAO/NSIAD-88-73, Jan. 11, 1988).
Gains Made in Controlling Illegal Drugs, Yet the Drug Trade Flourished (GAO/GGD-80-8, Oct. 25, 1979).
Opium Eradication Efforts in Mexico (GAO/GGD-77-6, Feb. 18, 1977).
Pursuant to a congressional request, GAO provided an update on the status of counternarcotics activities in Mexico, focusing on: (1) the nature of the drug threat from Mexico; (2) the progress that Mexico has made in improving its counternarcotics efforts; (3) issues related to the provision of U.S. counternarcotics assistance to the Mexican military; and (4) the plans that the U.S. government has to assess the effectiveness of U.S. and Mexican counternarcotics efforts. GAO noted that: (1) Mexico continues to be the primary transit country for cocaine entering the United States from South America, as well as a major source country for heroin, marijuana, and methamphetamines; (2) according to the Drug Enforcement Administration, drug-trafficking organizations are increasing their activities, posing a threat to citizens in the United States and Mexico; (3) Mexico, with U.S. assistance, has taken steps to improve its capacity to reduce the flow of illegal drugs into the United States by: (a) increasing the eradication of marijuana and opium poppy and seizing significant amounts of cocaine; (b) enhancing its counternarcotics cooperation with the United States; (c) initiating efforts to extradite Mexican criminals to the United States; (d) passing new laws on organized crime, money laundering, and chemical control; and (e) instituting reforms in law enforcement agencies and expanding the role of the military in counternarcotics activities to reduce corruption; (4) the results of these actions have yet to be realized because many of them are in the early stages of implementation and some are limited in scope; (5) also, the Mexican government faces a shortage of trained personnel, a lack of adequate funding to support operations, and extensive corruption; (6) U.S. counternarcotics assistance has enhanced the ability of the Mexican military to conduct counternarcotics missions by allowing it to perform reconnaissance, increase eradication missions, and bolster the air mobility of its ground troops; (7) however, key elements of the Department of Defense's counternarcotics assistance were of limited usefulness or could have been better planned and coordinated by U.S. and Mexican military officials; (8) although the Mexican government has agreed to a series of actions to enhance its counternarcotics capacity and the United States has begun to provide a larger level of assistance, no performance measures have been established to assess the effectiveness of these efforts; and (9) the Office of National Drug Control Policy has recognized the need to develop such measures and has indicated that it plans to devise methods for evaluating U.S. and Mexican counternarcotics performance by the end of 1998 as part of the binational drug control strategy.
From the start of the CAARS program in 2005 until the course correction in December 2007, DNDO planned the acquisition and deployment of CAARS machines without understanding that they would not fit within existing primary inspection lanes at CBP ports of entry. This occurred because during the first year or more of the program DNDO and CBP had few discussions about operating requirements for primary inspection lanes at ports of entry. In addition, the CAARS program was among numerous acquisition programs about which we previously reported that appropriate DHS oversight was lacking. Furthermore, the development of the CAARS algorithms—a key part of the machine needed to identify shielded nuclear materials automatically—did not mature at a rapid enough pace to warrant acquisition and deployment. Moreover, the description of the progress of the CAARS program used to support funding requests in DNDO's budget justifications for fiscal years 2009 through 2011 was misleading because it did not reflect the actual status of the program. From the inception of the CAARS program until the decision in December 2007 to cancel its acquisition, DNDO and CBP had few, if any, in-depth discussions about CBP's requirements for using radiography in primary inspection lanes. According to DNDO officials, they requested information from CBP on its user requirements for the CAARS system, but CBP was slow to respond to these requests. DNDO continued with its plans to develop CAARS machines because, according to DNDO officials, at the time it was thought that a solution was urgently needed for detecting shielded nuclear materials in primary inspection lanes. Senior CBP officials told us that DNDO officials did not attempt to meet with them at the beginning of the CAARS program. When CBP and DNDO officials met, shortly before the course correction, CBP officials said they made it clear to DNDO that they did not want the CAARS machines because the machines would not fit in primary inspection lanes and would slow the flow of commerce through these lanes, causing significant delays. In our view, had CBP and DNDO officials met early in the development of the program to discuss CBP's needs and operational requirements, as called for in DHS's acquisition policy at the time, it is unlikely that DNDO would have found reason to move forward with its plan to develop and acquire CAARS technology. Nonetheless, in September 2006, DNDO awarded contracts to three CAARS vendors. In December 2007, DNDO decided to cancel the acquisition of CAARS and limit any further work to a research and development effort. In recent joint discussions, CBP and DNDO officials acknowledged to us that communication between the two agencies could have been improved during the early part of the CAARS program. They said they communicate much more routinely now and that, in their view, it would be unlikely that the communication problems associated with the CAARS program would recur. DNDO did not follow DHS acquisition protocols for the CAARS program. Specifically, in 2008, we reported that CAARS was among numerous major DHS acquisition programs that did not have a mission needs statement—a required DHS acquisition document that formally acknowledges that the need for an acquisition is justified and supported. DHS policy also called for programmatic reviews at key decision points and required certain analytical documents.
However, CAARS did not undergo annual department-level reviews as called for, nor did DNDO program officials obtain or prepare basic analytical documents. For example, one of these documents, a concept of operations (CONOPS), was intended to demonstrate how CBP would use CAARS machines in primary inspection areas at the ports. However, as a result of the inadequate communication and collaboration between CBP and DNDO discussed earlier, no CONOPS was developed during the early phase of the CAARS program. Ultimately, according to DNDO officials, once DNDO made the decision to cancel the acquisition portion of CAARS in December 2007, a CONOPS was no longer required. According to DNDO officials, at the inception of the CAARS program, there was a widespread view within DNDO that something had to be done to provide CBP with the capability to detect highly shielded nuclear material in primary inspection lanes. DNDO officials acknowledged that the agency decided to move forward with the CAARS program despite the fact that automatic detection, a key feature of CAARS, depended on the rapid development of algorithms that were technologically immature. The algorithms are critical because they provide the capability for CAARS to automatically detect highly shielded nuclear material in primary inspection areas without the need for extensive operator review and interpretation of an image—two factors that could delay the flow of commerce and reduce CBP's overall effectiveness in detecting highly shielded nuclear material. Although the algorithms supporting the CAARS technology were technologically immature, DNDO created an aggressive production and deployment schedule that was to begin in August 2008, the end of DNDO's planned 2-year development period for the CAARS program. DNDO officials said that, at the time this production milestone was set, it seemed likely that the algorithms would be developed in time to meet the start of planned production. However, the technology did not develop as expected, which contributed to DNDO's decision to cancel the acquisition phase of CAARS. For fiscal years 2009 through 2011, DHS justified annual budget requests to Congress by citing significant plans and accomplishments of the CAARS program, including that CAARS technology development and deployment was feasible, even though DNDO had decided in December 2007 to cancel the acquisition of CAARS. For example, in its fiscal year 2009 budget justification, DHS stated that a preliminary DNDO/CBP CAARS production and deployment program had been successfully developed and that CAARS machines would be developed that would detect both contraband and shielded nuclear material with little or no impact on CBP operations. The fiscal years 2010 and 2011 DHS budget justifications both stated that an ongoing testing campaign would lead to a cost-benefit analysis, followed by rapid development of a prototype that would lead to a pilot deployment at a CBP port of entry. Furthermore, the fiscal year 2010 budget justification stated that while the CAARS technology was less mature than originally estimated, successful development was still feasible. However, DHS's description and assessment of the CAARS program in its budget justifications did not reflect the actual progress of the program.
Specifically, DNDO officials told us that when they made the course correction and cancelled the acquisition part of the program in 2007, they also decided not to conduct a cost-benefit analysis because such analyses are generally needed to justify going forward with acquisitions. In addition, DNDO completed CAARS testing in March 2010; however, as of today, the final test results for two of the three CAARS machines are not yet available. Currently, no CAARS machines have been deployed. CAARS machines from various vendors have either been disassembled or sit idle without being tested in a port environment, and CBP is considering whether to allow DNDO to collect operational data in a port environment. In recent discussions, DNDO officials agreed that the language in the budget justifications lacked clarity and stated that they are not planning to complete a cost-benefit analysis, since such analyses are generally associated with acquisition programs. Based on our review of the CAARS program and our reports on DNDO's efforts to develop an advanced RPM called the advanced spectroscopic portal (ASP), we have identified lessons learned for DHS to consider in its continuing efforts to develop the next generation of radiography imaging technology. Despite the importance of coordinating crosscutting program efforts, we have reported that weak coordination of those efforts has been a long-standing problem in the federal government and has proven difficult to resolve. We have also reported that agencies can enhance and sustain their collaborative efforts. One way agencies can enhance coordination, we reported, is to agree on roles and responsibilities and establish mutually reinforcing or joint strategies. As discussed, DNDO did not coordinate and collaborate with CBP early in the development of the CAARS program to identify CBP's needs and requirements. According to DHS budget documents, in fiscal year 2011, the responsibility for research and development of advanced radiography will shift from DNDO to S&T. Leading up to this transition, there is confusion about roles and responsibilities among DNDO, S&T, and CBP. For example, DNDO officials said they have requested permission from CBP to collect operational data in a port environment on an enhanced radiography machine. However, CBP officials stated that they had already purchased, operationally tested, and deployed 11 of these machines in secondary inspection areas. We recently discussed this issue at a joint meeting with DNDO and CBP officials. CBP and DNDO officials acknowledged the confusion over this issue; both agencies agreed on the need to collect operational data on this enhanced radiography machine, and CBP has begun making arrangements to do so. Also, S&T officials said that they are about to contract for radiography imaging technology for CBP that will improve imaging capabilities. DNDO officials told us that S&T's efforts will include development of radiography capabilities to detect shielded nuclear material, while S&T officials told us that this is not an area of their focus. As DHS transitions its research and development of radiography, DHS officials said that a draft memorandum of agreement intended to clarify roles and responsibilities for cooperation and coordination among DNDO, CBP, and S&T has not been finalized.
Completing the memorandum of agreement to clarify roles and responsibilities before proceeding with the research, development, and deployment of radiography technology could give DHS reasonable assurance that problems resulting from a lack of clearly defined roles and responsibilities in the CAARS program do not recur. Senior officials from DHS, DNDO, CBP, and S&T all agreed with the need for the memorandum and said that they intend to work toward finalizing the draft memorandum of agreement. DNDO officials said that they were aware of the DHS draft management directive in 2006 that was intended to guide management and oversight of acquisition programs like CAARS but did not follow it. DHS policy officials acknowledged that at the time CAARS was in its early stages, DHS was still organizing and unifying its many disparate components and did not exercise strong oversight over its major programs, including CAARS. Policy officials told us the oversight review process is more robust today. However, we reported in June 2010 that DHS acquisitions need further improvement and sustained management attention. For example, while DHS's current management directive includes more detailed guidance than the previous 2006 management directive for programs to use in preparing key documentation to support component and departmental decision making, it is not applied consistently, and most major programs have not been reviewed. DNDO was simultaneously engaged in a research and development phase while planning for an acquisition phase of the CAARS program. In this regard, we have previously reported that separating technology development from product development and acquisition is a best practice that can help reduce costs and deliver a product on time and within budget because separating the technology development phase from production helps to ensure that (1) a sound business case is made for the product, (2) the product design is stable, and (3) production processes are mature and the design is reliable. At the time the CAARS program was in its early stages, DHS and DNDO did not have clearly defined ways to assess and communicate the maturity of technology leading to acquisition. We have previously reported on the need for a disciplined and knowledge-based approach to assessing technology maturity, such as using technology readiness levels. In that report, we recommended that technologies reach a high readiness level before an agency commits to production. DNDO officials acknowledged that the CAARS algorithms' readiness level was not high enough to warrant entering the acquisition phase. As we testified in June 2009 on DNDO's testing of ASPs, a primary lesson regarding testing is that the push to replace existing equipment with the new portal monitors led to an ASP testing program that lacked the necessary rigor. We reported that testing programs designed to validate a product's performance against increasing standards at different stages of product development are a best practice for acquisition strategies for new technologies and, if properly implemented, would provide rigor to DHS's testing of other advanced technologies. For further information about this statement, please contact Gene Aloise at (202) 512-3841 or [email protected]; or Stephen L. Caldwell at (202) 512-9610 or [email protected]. Dr.
Timothy Persons (Chief Scientist), Ned Woodward (Assistant Director), Mike Harmond, Jonathan Kucskar, Linda Miller, Ron Salo, Kiki Theodoropoulos, and Franklyn Yao also made key contributions to this testimony.
Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010.
Combating Nuclear Smuggling: DHS Has Made Some Progress but Not Yet Completed a Strategic Plan for Its Global Nuclear Detection Efforts or Closed Identified Gaps. GAO-10-883T. Washington, D.C.: June 30, 2010.
Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: Oct. 30, 2009.
Combating Nuclear Smuggling: Lessons Learned from DHS Testing of Advanced Radiation Detection Portal Monitors. GAO-09-804T. Washington, D.C.: June 25, 2009.
Combating Nuclear Smuggling: DHS Improved Testing of Advanced Radiation Detection Portal Monitors, but Preliminary Results Show Limits of the New Technology. GAO-09-655. Washington, D.C.: May 21, 2009.
Nuclear Detection: Domestic Nuclear Detection Office Should Improve Planning to Better Address Gaps and Vulnerabilities. GAO-09-257. Washington, D.C.: Jan. 29, 2009.
Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: Nov. 18, 2008.
Combating Nuclear Smuggling: DHS's Program to Procure and Deploy Advanced Radiation Detection Portal Monitors Is Likely to Exceed the Department's Previous Cost Estimates. GAO-08-1108R. Washington, D.C.: Sept. 22, 2008.
Supply Chain Security: CBP Works with International Entities to Promote Global Customs Security Standards and Initiatives, but Challenges Remain. GAO-08-538. Washington, D.C.: Aug. 15, 2008.
Maritime Security: National Strategy and Supporting Plans Were Generally Well-Developed and Are Being Implemented. GAO-08-672. Washington, D.C.: June 20, 2008.
Supply Chain Security: Challenges to Scanning 100 Percent of U.S.-Bound Cargo Containers. GAO-08-533T. Washington, D.C.: June 12, 2008.
Supply Chain Security: U.S. Customs and Border Protection Has Enhanced Its Partnership with Import Trade Sectors, but Challenges Remain in Verifying Security Practices. GAO-08-240. Washington, D.C.: Apr. 25, 2008.
Supply Chain Security: Examinations of High-Risk Cargo at Foreign Seaports Have Increased, but Improved Data Collection and Performance Measures Are Needed. GAO-08-187. Washington, D.C.: Jan. 25, 2008.
Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: Oct. 30, 2007.
Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: Sept. 18, 2007.
Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: Aug. 17, 2007.
International Trade: Persistent Weaknesses in the In-Bond Cargo System Impede Customs and Border Protection's Ability to Address Revenue, Trade, and Security Concerns. GAO-07-561. Washington, D.C.: Apr. 17, 2007.
Combating Nuclear Smuggling: DHS's Decision to Procure and Deploy the Next Generation of Radiation Detection Equipment Is Not Supported by Its Cost-Benefit Analysis. GAO-07-581T. Washington, D.C.: Mar. 14, 2007.
Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories' Test Results on Radiation Portal Monitors in Support of DNDO's Testing and Development Program. GAO-07-347R. Washington, D.C.: Mar. 9, 2007.
Combating Nuclear Smuggling: DHS's Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors' Costs and Benefits. GAO-07-133R. Washington, D.C.: Oct. 17, 2006.
Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: Mar. 30, 2006.
Combating Nuclear Smuggling: Challenges Facing U.S. Efforts to Deploy Radiation Detection Equipment in Other Countries and in the United States. GAO-06-558T. Washington, D.C.: Mar. 28, 2006.
Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: Mar. 22, 2006.
The Department of Homeland Security's (DHS) Domestic Nuclear Detection Office (DNDO) is charged with developing and acquiring equipment to detect nuclear and radiological materials to support federal efforts to combat nuclear smuggling. Also within DHS, Customs and Border Protection (CBP) has the lead for operating systems to detect nuclear and radiological materials entering the country at U.S. ports of entry. In 2005, DNDO began working on the cargo advanced automated radiography system (CAARS), intending that it be used by CBP to detect certain nuclear materials in vehicles and containers at U.S. ports of entry. However, in 2007 DNDO decided to cancel the acquisition phase of the program and convert it to a research and development program. GAO was asked to examine events that led to DNDO's decision to cancel the acquisition phase of the program and provide lessons learned from DNDO's experience. This statement is based on prior GAO reports from March 2006 through July 2010 and ongoing work reviewing DHS efforts to develop radiography technology. For ongoing work, GAO reviewed CAARS planning documents and interviewed DHS, DNDO, and CBP officials. GAO provided a draft of the information in this testimony to DHS and component agencies, which provided technical comments that were incorporated as appropriate. From the start of the CAARS program in 2005 until DNDO cancelled the acquisition phase of the program in December 2007, DNDO pursued the acquisition and deployment of CAARS machines without fully understanding that they would not fit within existing primary inspection lanes at CBP ports of entry. This occurred because during the first year or more of the program DNDO and CBP had few discussions about operating requirements at ports of entry. When CBP and DNDO officials met, shortly before DNDO's decision to cancel the acquisition phase of the program, CBP officials said they made it clear to DNDO that they did not want the CAARS machines because they would not fit in primary inspection lanes and would slow down the flow of commerce through these lanes and cause significant delays. Also, the CAARS program was among numerous DHS acquisition programs about which GAO reported in 2008 that appropriate oversight was lacking. Further, the development of the CAARS algorithms (software)--a key part of the machine needed to identify shielded nuclear materials automatically--did not mature at a rapid enough pace to warrant acquisition and deployment. Also, the description of the progress of the CAARS program used to support funding requests in DNDO's budget justifications was misleading because it did not reflect the actual status of the program. For example, the fiscal years 2010 and 2011 DHS budget justifications both stated that an ongoing CAARS testing campaign would lead to a cost-benefit analysis. However, DNDO officials told GAO that when they cancelled the acquisition part of the program in 2007, they also decided not to conduct any associated cost-benefit analysis. In recent discussions, DNDO officials agreed that the language in the budget justifications lacked clarity, and they said they have no plans to prepare a cost-benefit analysis. Based on GAO's review of the CAARS program and its prior reports on DHS development and acquisition efforts, GAO identified lessons learned for DHS to consider in its continuing efforts to develop the next generation of radiography imaging technology.
For example, GAO previously reported that agencies can enhance coordination by agreeing on roles and responsibilities. In this regard, a draft memorandum of agreement among DHS agencies, intended to clarify roles and responsibilities in developing technologies and to help ensure effective coordination, has not been finalized. Completing this memorandum could give DHS reasonable assurance that problems associated with the CAARS program do not recur. Senior officials from DHS, DNDO, CBP, and S&T all agreed with the need for the memorandum and said that they intend to work toward finalizing it. Other lessons GAO identified include (1) engage in a robust departmental oversight review process, (2) separate research and development functions from acquisition functions, (3) determine technology readiness levels before moving forward to acquisition, and (4) rigorously test devices using actual agency operational tactics before making acquisition decisions.
In recent years, steep declines have occurred in some of IRS's compliance programs for individual taxpayers, as have broad declines in its efforts to collect delinquent taxes. These trends have triggered concerns that taxpayers' motivation to voluntarily comply with their tax obligations could be adversely affected. Taxpayers' willingness to voluntarily comply with the tax laws depends in part on their confidence that their friends, neighbors, and business competitors are paying their fair share of taxes. IRS's compliance programs, including audits, document matching, and other efforts, are viewed by many as critical to maintaining the public's confidence in our tax system. Looking across all four of IRS's major enforcement programs between fiscal years 1993 and 2002 reveals a somewhat mixed picture, as shown in figure 1. The four programs and their trends are as follows. The math error program identifies obvious errors, such as mathematical errors, omitted data, or other inconsistencies, on the filed tax return. Using only the math error count, which is consistent throughout the 10 years, the math error contact rate rose 33 percent (from 3.59 to 4.79 percent). The document matching program identifies unreported income using information returns filed by third parties such as employers and banks. Document matching rates went up and down at various times but ended 45 percent lower in 2002 (1.30 percent) than in 1993 (2.37 percent). The nonfiler program identifies potential nonfilers of tax returns by using information return and historical filing data. The nonfiler rates also went up and down but ended in 2002 about where they were in 1993. The audit program checks compliance in reporting income, deductions, credits, and other issues on tax returns through correspondence or in face-to-face meetings with an IRS auditor. Comparing 1993 to 2002, the audit contact rate dropped 38 percent (from 0.92 to 0.57 percent), even though it rose significantly between 1993 and 1995. Figure 2 shows compliance program trends based on income ranges. Although audit rates for individual taxpayers in the higher and middle income ranges rose slightly in 2002, overall they were significantly lower in 2002 than in 1993; the rate for the lowest income range was virtually the same in 2002 as in 1993. The audit contact rates for the highest and lowest income individuals essentially converged at around 0.8 percent in fiscal years 2001 and 2002. Most of the audits of the lowest income individuals dealt with EIC claims. Document matching contact rates for all three income ranges rose somewhat between 2001 and 2002. However, rates for all three income ranges ended significantly lower in 2002 than in 1993, following similar patterns of change over the years. Data on contact rates by income level were not available for the math error and nonfiler programs. As we reported in May 2002, between fiscal years 1996 and 2001, trends in the collection of delinquent taxes showed almost universal declines in collection program performance, in terms of coverage of workload, cases closed, direct staff time used, productivity, and dollars of unpaid taxes collected. Although two of those indicators, workload and cases closed, generally declined, the collection work IRS completed declined more rapidly, creating an increasing gap, as shown in figure 3. During that 6-year period, the gap between the new collection workload and collection cases closed grew at an average annual rate of about 31 percent.
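The percentage changes quoted above follow directly from the contact rates. As a quick illustration only (a minimal sketch we constructed from the rates in this statement; the percent_change helper is ours, not an IRS calculation), the arithmetic can be reproduced as follows:

```python
def percent_change(old_rate: float, new_rate: float) -> float:
    """Percentage change from the fiscal year 1993 rate to the 2002 rate."""
    return (new_rate - old_rate) / old_rate * 100

# Contact rates (percent of individual returns) quoted in this statement.
rates_1993_2002 = {
    "math error": (3.59, 4.79),         # prints +33 percent
    "document matching": (2.37, 1.30),  # prints -45 percent
    "audit": (0.92, 0.57),              # prints -38 percent
}

for program, (rate_1993, rate_2002) in rates_1993_2002.items():
    print(f"{program}: {percent_change(rate_1993, rate_2002):+.0f} percent")
```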
The increasing gap between collection workload and collection work completed led IRS in March 1999 to start deferring collection action on billions of dollars in delinquencies. IRS's inventory of delinquent accounts was growing and aging, and the gap between its workload and capacity to complete work was increasing. Officials recognized that they could not work all collection cases, and they believed that they needed to be able to deal with taxpayers more quickly, particularly taxpayers who were still in business and owed employment taxes. By the end of fiscal year 2002, after the deferral policy had been in place for about 3-1/2 years, IRS had deferred taking collection action on about $15 billion in unpaid taxes, interest, and penalties. IRS's deferral of collection action has declined somewhat since the deferral policy was adopted. Although the deferral rate has declined from 45 percent in 2000 to about 32 percent in 2002, IRS is still deferring collection action on about one out of three collection cases. Many parties have expressed concern about these trends in IRS's compliance and collection programs. Since the mid-1990s, we have issued six reports on IRS compliance and collection trends in response to congressional concerns. During annual oversight hearings on IRS, Congress often raises questions about the declining audit rate and possible effects on compliance. In recent years, congressional concerns as well as IRS's requests have resulted in efforts to augment IRS's staffing levels. In fact, the former IRS Commissioner's report to the IRS Oversight Board in September 2002 made what was perhaps the most explicit case for additional staffing to address IRS's compliance and collection performance. Although the Commissioner recognized that IRS needed to improve the productive use of its current resources, he cited a need for an annual 2 percent staffing increase, on top of planned productivity increases, over 5 years to help reverse the trends. In terms of the collection of tax debts, the IRS Commissioner estimated that an almost 60 percent gap exists between IRS's collection workload and the work it has completed. Closing this gap, according to the Commissioner, would require 5,450 new full-time staff. IRS also has been looking for ways to free resources for compliance programs by boosting productivity or reducing workload in other areas. In our recent Performance and Accountability Series report on the Treasury Department, we cite the collection of unpaid taxes as one of the management challenges facing IRS. In that report, we state that IRS is in various stages of planning and implementing management improvements, including reengineering compliance and collection practices. However, as of September 30, 2002, IRS's inventory of known unpaid taxes totaled $249 billion, of which $112 billion has some collection potential and is at risk. Since 2001, IRS's budget requests have made increasing its compliance and collection staff one of several key priorities. For example, in its 2001 budget request IRS asked for funding for the Staffing Tax Administration for Balance and Equity initiative, which was designed to provide additional staff for examination, collection, and other enforcement activities. However, as shown in figures 4 and 5, staffing in two key compliance and collection occupations was lower in 2002 than in 2000. This continued a general trend of declining staffing in these occupations over a number of years.
The declines in compliance and collection staffing occurred for several reasons, including increased workload and unbudgeted costs. In September 2002, the Commissioner attributed the decline in compliance staffing to increases in workload in other essential operations, such as processing returns, issuing refunds, and answering taxpayer mail. In the most recently completed fiscal year, 2002, IRS faced unbudgeted cost increases, such as rent and pay increases, of about $106 million. As a result, IRS had to delay hiring revenue agents and officers. IRS noted in its 2002 budget request that any major negative changes in the agency's financial posture, such as unbudgeted salary increases, would have a negative effect on staffing levels. For fiscal year 2004, IRS is requesting $10.4 billion, an increase of 5.3 percent over fiscal year 2003 requested levels, and 100,043 full-time equivalents (FTE). Also, IRS's 2004 budget request is its second in a row to propose increased spending for higher priority areas that would be funded, in part, with internal savings redirected from other areas. Specifically, IRS proposes to devote an additional $454 million and 3,033 more FTEs to enhance programs, including compliance and some customer service areas. As shown in figure 6, $166 million of the enhancements would be funded from internal savings, with the remainder funded from the budget increase. We commend IRS for identifying savings to be reinvested in operations to improve IRS performance. This approach implements a key principle of IRS's long-term modernization effort. Under this approach, the reengineering of IRS's work processes, much of which depends on investments in computer modernization, would automate or eliminate work, improve productivity, and free staff time that could then be redirected to higher priority customer service and compliance activities. Some caution is appropriate, however, in considering whether the additional FTEs will be realized. In addition to the potential that some cost increases may not be funded, as in prior years, revised projections developed since the 2004 budget request was prepared raise questions about IRS's ability to achieve all the savings projected and shift resources to compliance as planned. IRS has revised the savings associated with several reengineering efforts identified in the 2004 budget request. Revisions this far in advance of the start of the fiscal year are not a surprise, but they do indicate some uncertainty associated with the budget request's savings projections. For example, most of the significant reengineering efforts planned for fiscal year 2004, in terms of FTEs and dollars to be saved, will not achieve all of their projected savings because the efforts were based on assumptions that will not be realized, according to IRS data and officials. IRS's effort to improve the efficiency of compliance support activities, the single most significant effort, depended partially on IRS implementing individual compliance savings projects in 2003. This effort was projected to save 394 FTEs and almost $26 million. However, due in part to delays until 2004 to allow for additional testing, this effort is now expected to achieve about 30 percent of the original projections through the end of fiscal year 2004. IRS now projects that the seven most significant efforts will save 1,073 FTEs and $60.5 million, down from original projections of 1,356 FTEs and $77.7 million.
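As a rough check on the budget figures above, the sketch below works through the funding split for the proposed enhancements and the share of the original savings projections that survives the revision. All inputs come from the text; the variable names and the arithmetic itself are ours, for illustration only.

```python
# Funding split for the proposed FY2004 program enhancements (from the text).
enhancements_musd = 454    # total proposed enhancements, in $ millions
from_savings_musd = 166    # portion funded from internal savings
from_increase_musd = enhancements_musd - from_savings_musd
print(f"Funded from the budget increase: ${from_increase_musd}M")  # $288M

# Revised versus original savings projections for the seven most
# significant reengineering efforts.
orig_ftes, orig_musd = 1356, 77.7
revised_ftes, revised_musd = 1073, 60.5
print(f"FTE savings retained: {revised_ftes / orig_ftes:.0%}")      # ~79%
print(f"Dollar savings retained: {revised_musd / orig_musd:.0%}")   # ~78%
```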
Reengineering efforts may not achieve all of their savings goals, in part, because of the long time lag between when IRS begins developing its budget request and when the fiscal year begins. As with most other federal agencies, IRS usually begins formulating its budget request about 18 months before the start of the fiscal year and about 10 months before the President submits his budget to Congress. With planning beginning so far ahead of the budget's actual execution, inevitable intervening events, such as delays in implementing computer systems, make the assumptions upon which projections are based no longer realistic. In addition to lower current estimates of the potential savings from the seven most significant reengineering efforts, some of the other reengineering efforts listed in the 2004 budget request are not well defined. This raises questions about whether they will achieve their savings goals. For example, IRS is still reviewing its procedures to identify ways to make tax return processing more efficient. Although IRS projected this effort to save 203 FTEs and $6.9 million, it has not yet identified the operational areas that will be reengineered. IRS officials said that the projected savings are based on a 2 percent efficiency increase, but they are still determining how to achieve that goal. According to IRS budget officials, IRS uses its budget formulation process to establish productivity goals, although the responsible business units may not know specifically how savings will be achieved. Officials said that this approach encourages innovation in meeting performance goals while identifying ways to save FTEs and budget dollars. IRS's 2004 budget submission requests $100 million and 650 FTEs for a new initiative to improve compliance in one area in which noncompliance is known to be a concern: the EIC. Although Treasury and IRS have made progress in defining the scope and nature of the initiative, many details about its implementation are still to be settled as planning for and implementation of the initiative proceed simultaneously. IRS hopes that this effort will reduce EIC noncompliance without unnecessarily burdening or discouraging those who are eligible for and claim the EIC. Given its scope, potential effects on EIC claimants, and planned rapid expansion, the success of the initiative will depend on careful planning and close management attention as IRS implements it. Begun in 1975, the EIC is a refundable tax credit available to certain low-income working taxpayers. Two stated long-term objectives of Congress have been (1) to offset the impact of Social Security taxes on low-income individuals and (2) to encourage these same individuals to seek employment rather than depend on welfare benefits. Researchers have reported that the EIC has been a generally successful incentive-based antipoverty program, as Congress intended. For tax year 2001, about $31 billion was paid to about 19 million EIC claimants. However, in addition to its successes, the EIC program has historically experienced high rates of noncompliance, including both overclaims and underclaims of benefits. For over a decade we have reported on IRS's efforts to reduce EIC noncompliance. Due to persistently high noncompliance rates, we have identified the EIC program as a high-risk area for IRS since 1995. An IRS study of 1985 tax returns estimated that the EIC overclaim rate in that year was 39.1 percent.
The results of subsequent EIC compliance studies conducted by IRS are shown in table 1. In 1997, Congress instructed IRS to improve EIC compliance through expanded customer service and outreach, strengthened compliance efforts, and enhanced research. For these efforts Congress authorized a 5-year, EIC-specific appropriation of $716 million. Although the 5-year period elapsed in fiscal year 2002, Congress appropriated $145 million specifically for EIC compliance for fiscal year 2003. For fiscal year 2004, IRS is requesting $153 million for this appropriation; the $100 million request for the new EIC initiative is separate. Early in 2002, when the results of IRS's most recent study of EIC compliance, covering tax year 1999, were released, the Assistant Secretary of the Treasury and the IRS Commissioner established a joint task force to seek new approaches to reduce EIC noncompliance. The task force sought to develop an approach to validate EIC claimants' eligibility before refunds are made, while minimizing claimants' burden and any impact on the EIC's relatively high participation rate. Through this initiative, administration of the EIC program would become more like that of a social service program, such as Food Stamps or Social Security Disability, where proof of eligibility is required before any benefit is received. Based on its various studies of EIC noncompliance, IRS determined that three specific areas account for a substantial portion of EIC noncompliance. These three areas (qualifying child eligibility, improper filing status, and income misreporting, also called "underreporting") account for nearly 70 percent of all EIC refund errors, according to IRS. The joint Treasury/IRS task force designed an initiative that would address each of these sources of EIC noncompliance. Filers who improperly claim qualifying children represent the single largest area of EIC overclaims on a dollar basis. Under the proposed initiative, IRS will attempt to verify all taxpayers' claims for EIC-qualifying children against two criteria: residency and relationship. IRS plans to use third-party databases and other means to verify qualifying children for an estimated 80 percent of EIC claimants. All other EIC claimants will be asked to provide additional eligibility documentation prior to the filing season. Those who do not respond or are unable to document their eligibility will have the EIC portion of their returns frozen. If taxpayers do not provide documentation before the filing season, IRS plans to require them to provide it during or after the filing season. When and if they document their eligibility, the EIC portion of their returns will be released. Initially, beginning in the summer of 2003, IRS intends to select 45,000 EIC claimants whose qualifying child residency or relationship requirements could not be verified from available databases. Under its initiative, IRS plans to contact these taxpayers and give them the opportunity to provide verifying documentation for the child or children claimed to qualify for the EIC. The two components of establishing qualifying child eligibility, the claimant's relationship to the child and the child's residency with the claimant for more than 6 months, will be treated somewhat differently. Taxpayers who establish their qualifying child relationship will not have to do so in future years, but all taxpayers will have to show annually that the children lived with them for the required time.
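The qualifying child verification flow described above can be read as a simple decision sequence. The sketch below is our schematic rendering of that flow, not IRS's actual system; the function and the outcome strings are invented for illustration.

```python
# Schematic sketch of the qualifying child precertification flow described
# above. This is our reading of the text, not IRS's actual system or terms.
def precertification_outcome(verified_by_databases: bool,
                             documented_before_filing: bool,
                             documented_later: bool) -> str:
    if verified_by_databases:
        # Roughly 80 percent of claimants, per the text.
        return "claim processed normally"
    if documented_before_filing:
        return "claim processed normally"
    if documented_later:
        return "EIC portion released once eligibility is documented"
    return "EIC portion of the return frozen"

# A claimant not matched in third-party data who never responds:
print(precertification_outcome(False, False, False))
# EIC portion of the return frozen
```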
IRS expects to expand the program in July 2004 by starting to contact approximately 2 million taxpayers; in another planned expansion in July 2005, IRS would contact 2.5 million taxpayers. The other two parts of the initiative will cover an additional 180,000 high-risk filers for tax year 2003: 5,000 to verify their filing status and 175,000 to verify their income. Criteria for selecting the 5,000 cases in the filing status category have not yet been determined; they will be drawn from tax year 2003 cases. For the 175,000-case income verification effort, IRS will use document matching to identify EIC filers who have a history of misreporting income in order to increase (or receive) the EIC. Based on that history, these taxpayers' returns are to be flagged when their 2003 EIC claims are filed. Any EIC refund portion of each return is then to be frozen until IRS can verify the taxpayers' income through document matching or audit procedures in the fall of 2003. These filers will be identified from tax year 2002 and 2003 cases. Table 2 shows IRS's projections for future casework in all three initiative areas. Although the Treasury/IRS task force and now IRS have made progress in defining the scope and nature of this initiative, many details about its implementation are still to be settled. IRS expects to learn lessons from the initial sample of cases it will work in 2003 and to incorporate those lessons into the planned expansions of the effort later in 2003 and in 2004. IRS officials said that estimates of the number of new employees that will be needed and their training requirements are evolving. For example, although IRS's 2004 budget submission identifies 650 FTEs for this initiative, current plans call for a much lower staffing level at this point. In addition, cost and FTE estimates are based on historical data that may not be directly comparable to the staffing and technological demands of the initiative. Based on these estimates, IRS has proposed budgeting just under $55 million of the $100 million 2004 request for direct casework in the three compliance areas. The remaining $45 million is allocated to technology improvements and to management, development, and implementation costs related to the three targeted compliance areas. Fundamental to the precertification of qualifying children is the development of clear forms that identify the specific types of documentation IRS will accept to substantiate that a qualifying child meets the relationship and residency tests for the EIC. IRS is currently working with others, such as the Taxpayer Advocate, to develop these new forms, which are to be used beginning this summer. Recently, concerns have been expressed about IRS's intention to request marriage certificates as proof of relationship to the qualifying child. We have not looked at this specific issue. However, our 2002 report noted that the EIC forms and instructions that IRS used for similar attempts to determine qualifying child eligibility could be confusing to taxpayers and required documents that EIC claimants had difficulty obtaining. When taxpayers have been disallowed the EIC through an IRS audit, they are required to substantiate their qualification for the EIC (that is, "recertify") before they can receive the credit again. As part of this process, IRS's forms indicated that EIC claimants could, for example, use medical records to prove a child's residency with them. However, EIC claimants faced difficulty in providing such records.
Low-income working families are less likely to have stable relationships with medical service providers, and their children are less likely to have routine medical care. IRS officials said that they plan to pretest the proposed precertification forms both to determine whether they are clear and understandable to EIC claimants and to determine whether claimants can provide the required information. This is a critical step in implementing the initiative; we recently reported that IRS seldom tests new and revised individual tax forms and instructions. Ensuring consistent interpretation of documentation gathered in the new initiative will also be important. In our 2002 report, we noted that IRS examiners did not consistently assess documentation for qualifying children. For example, we asked 21 examiners to evaluate five EIC scenarios. The 21 examiners did not agree on any of the scenarios, and, in some cases, the examiners reached widely varying judgments about whether the evidence was sufficient to support an EIC claim. To better ensure consistent and accurate decisions based on the documentation submitted, we recommended that IRS provide training to its examiners. Administering the EIC is not an easy task for IRS. IRS has to balance its efforts to help ensure that all qualified persons claim the credit with its efforts to protect the integrity of the tax system and guard against fraud and other forms of noncompliance associated with the EIC. This initiative is a substantial undertaking with a relatively aggressive implementation schedule. Although it appears to be targeted to address known compliance issues, its success will depend on careful planning and close management attention. Any one of many challenges could put the initiative at risk, including, for instance, whether the proposed new forms will result in evidence that IRS can use to verify relationship and residency requirements. Further, IRS must determine whether lessons from the first attempts to verify the eligibility of relatively small numbers of EIC claimants can be learned and incorporated before the substantial expansion of the initiative in fiscal years 2004 and 2005. IRS's 2004 budget requests $2 million and legislative authorization for the use of private collection agencies (PCA) to assist IRS in collecting certain types of delinquent tax debt. IRS proposes to fund continuing use of PCAs from a to-be-established revolving fund that would receive a portion of the taxes collected through the use of PCAs. As with its EIC initiative, IRS has defined the parameters for its use of PCAs, but many key details for implementing the initiative remain to be resolved. Resolving these implementation details, such as identifying delinquent debt cases suitable for PCAs to pursue and protecting taxpayer rights, will be critical if the initiative is to succeed. If the PCA initiative is authorized, IRS will need to put focused management attention on planning and monitoring its implementation to ensure proper handling of these issues. As previously noted, IRS's inventory of delinquent debt is growing and aging, with the gap between its workload and capacity to complete work increasing. As a consequence, IRS has been deferring about one in three new delinquency cases without pursuing any collection action. This practice is contrary to the experience of Treasury and IRS, which indicates that referring eligible debts for collection as early as possible greatly enhances the success of collection efforts.
The former IRS Commissioner estimated in 2002 that it would take 5,450 FTEs and $296.4 million for IRS to bridge the gap between its workload and its capacity to complete the collection casework. To help bridge this gap, Treasury has proposed an initiative to reach taxpayers and obtain payment on delinquent debt with the assistance of PCAs. Under this initiative, IRS would give PCAs specific, limited information on outstanding tax liabilities, such as the types of tax, the amount of the outstanding liabilities, the tax years affected, and prior payments. Based on the information provided by IRS, PCAs would then be permitted to locate and contact taxpayers, requesting payment of liabilities (i.e., tax, interest, or penalty) in full or in installments (within 3 years, as specified by IRS). If a taxpayer's last known address is incorrect, the PCAs would search public records for the correct address. PCAs are not to be permitted to contact third parties to locate taxpayers. PCAs would not be allowed to accept payments; all payments must be made to IRS. PCAs would generally have 12 to 24 months to attempt collection. Afterwards, uncollected accounts are to be redistributed to other PCAs for additional collection efforts. Because PCAs would have no enforcement power, the initiative would allow IRS to focus its own enforcement resources on more complex cases and issues. Other procedural conditions under the proposal include requiring PCAs to inform taxpayers of the availability of assistance from the Taxpayer Advocate. Furthermore, PCAs would not be permitted to take any enforcement action against a taxpayer, such as seizing assets to satisfy the debt. To ensure that taxpayer rights and privacy would be safeguarded, PCAs would be governed by the same rules that govern IRS. IRS plans to stagger implementation of the initiative. For the first 6 months, IRS will place collection cases with no more than five PCAs, with a total volume estimated not to exceed 50,000 cases per month for the first 3 months. IRS will contract with additional PCAs at 6-month intervals, with an anticipated rate of 2.6 million total cases annually by the time all agencies have been operational for 1 full year. Assigned case inventory rates will depend on a number of factors, including PCA performance, ability to manage new inventory, quality control, the volume of cases referred back to IRS for review, and the readiness of IRS to supply cases. Based on this implementation framework, Treasury has projected revenue estimates of $46 million in 2004 and $476 million from 2004 through 2008. In order to implement the PCA initiative, IRS must ensure that it will have the capacity to fulfill its responsibilities under the proposed contracts and to oversee the PCAs. Further, it must make some difficult design decisions. One significant capacity issue concerns whether IRS will be able to identify those delinquent debts with the highest probability of resolution through PCA contacts. Earlier pilot efforts to study the use of PCAs in 1996 and 1997 were hindered, in part, because IRS was unable to do this. For example, we reported that the numbers and types of collection cases sent to PCAs during those pilots were significantly different from those anticipated in the pilot program's original design. As a result, PCAs received substantially fewer collection cases than they needed to work productively and make the effort cost-effective. IRS realizes that identifying appropriate cases for referral to PCAs is a key issue.
While IRS proposes using "case selection analytics" to identify appropriate cases, the analytical model has not been developed. Another IRS capacity issue relates to the cases that PCAs, for several reasons, will refer back to IRS. For instance, under the proposed arrangement, if a taxpayer was unable to fully pay the delinquent debt, the case would be referred back to IRS. Some cases would then go to a different PCA; the success of that PCA's efforts in these cases would therefore depend on how well IRS reprocesses the cases. Until some experience is gained under the proposed program, it will be difficult to reliably estimate the number of cases that will be referred back to IRS and the amount of resources IRS will need to devote to them. Other IRS capacity issues concern, for instance, how many staff and other resources it will take to administer the contracts and to oversee the PCAs' performance. IRS expects to have up to 12 contractors, 2 of which would be small businesses, and proposed procedures call for on-site visits and some direct observation of PCAs' collection efforts. IRS would also need to ensure that PCAs have adequate procedures to safeguard taxpayer information before and after the contracts are awarded, and that both it and the PCAs have secure computer systems to manage the work flow. How the PCAs will be compensated is a key design decision that must be finalized. On the one hand, IRS needs to give the PCAs an incentive to be efficient in collecting the delinquent debts; on the other hand, it must ensure that the incentive does not lead to inappropriate performance pressure on PCA staff. IRS intends to make part of PCAs' compensation dependent on factors such as quality of service, taxpayer satisfaction, and case resolution, in addition to collection results. Both the law and IRS policy prohibit IRS managers from using records of tax enforcement results, such as dollars collected and taxes assessed, to evaluate employee performance or set production goals. IRS and Treasury report that existing taxpayer protections would be fully preserved under the PCA initiative. Specifically, as with IRS employees, PCA employees could not be evaluated based on tax enforcement results. IRS is considering using a "balanced scorecard" to measure contractors' performance but has not proposed specifically how this compensation balance will be struck. Finally, although the revolving fund mechanism presents potential advantages to IRS in better ensuring that it can pursue delinquent tax debts, IRS has not done a cost analysis comparing implementation of the PCA initiative with expanded use of traditional IRS collection activities, and we have not seen any plans to do so. Some IRS officials believe that because IRS telephone collection staff have a broader scope of authority (e.g., the ability to levy a bank account to satisfy the debt) and greater experience with collecting delinquent taxes, IRS telephone staff are likely to be more effective and cost-efficient than PCAs. PCAs, however, may have advantages that IRS lacks. A number of factors would need to be considered in such a cost analysis, and a comparison may not be possible without some experience in using PCAs to collect this type of debt. Although IRS has received increases in its budgets since fiscal year 2001, in part to increase staffing in its compliance and collection programs, IRS has been unable to achieve the desired staffing levels.
Based on past experience and uncertainty regarding some expected internal savings that would enable IRS to reallocate staff to these programs, fiscal year 2004 staff increases might not fully materialize. Today's hearing provides a useful venue for the Subcommittee to explore these funding issues and how IRS should prioritize its efforts. IRS has defined the scope and nature of its proposed new initiatives to address known sources of EIC noncompliance and to use private collection agencies to assist in collecting certain delinquent taxes. However, in both cases IRS faces significant challenges in moving forward to successfully implement the proposals. In commenting on a GAO report on IRS's National Research Program (IRS's ongoing effort to measure the level of taxpayers' compliance while minimizing the burden on taxpayers selected for the study), the former Commissioner said that IRS would not compromise the quality of the program in order to meet the program's target date. We believe this is a sound standard for these efforts as well. Careful planning for, and testing of, key implementation steps can help ensure the initiatives' success. This completes my prepared statement. I would be pleased to respond to any questions. For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or [email protected]. Individuals making key contributions to this testimony include Leon Green, Demian Moore, Neil Pinney, and Tom Short. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Taxpayers' willingness to voluntarily comply with tax laws depends in part on their confidence that friends, neighbors, and business competitors are paying their fair share of taxes. The Internal Revenue Service's (IRS) programs to ensure compliance and to collect delinquent taxes are viewed by many as critical for maintaining the public's confidence in our tax system. Congress asked GAO to present information on trends in IRS's compliance and collection programs and to discuss issues related to IRS's efforts to increase staffing for these programs. GAO was also asked to discuss IRS's plans to launch new initiatives to reduce noncompliance with the Earned Income Tax Credit (EIC) and to use private collection agencies to assist in collecting delinquent taxes. From fiscal years 1993 through 2002, IRS's four major compliance programs for individual taxpayers had mixed trends in the portion of taxpayers they contacted, with two declining, one increasing, and one staying relatively the same. Among the programs, IRS's often-cited audit rate declined about 38 percent. From fiscal years 1996 through 2001, IRS's collection program experienced almost universal declines in workload coverage, cases closed, direct staff time used, productivity, and dollars of unpaid taxes collected. Many parties have expressed concern about the compliance trends (especially for audits) and the collection trends for their potential to undermine taxpayers' motivation to fulfill their tax obligations. Since 2001, IRS has sought more resources, including increased staffing, for compliance and collections. Despite receiving requested budget increases, staffing levels in key occupations were lower in 2002 than in 2000. These declines occurred for reasons such as unbudgeted expenses consuming budget increases and other operational workload increases. Based on past experience and uncertainty regarding some expected internal savings, anticipated fiscal year 2004 staff increases might not fully materialize. IRS's 2004 budget proposes a substantial initiative to address known sources of EIC noncompliance. IRS intends to ramp up the initiative rapidly, with planning and implementation proceeding simultaneously. If it is to succeed, the initiative will require careful planning and close management attention. IRS also proposes to use private collection agencies to assist in collecting certain delinquent tax debt. IRS is seeking authority to retain some tax receipts in a revolving fund to pay the private collectors. A pilot effort to use private collectors in 1996 was unsuccessful, in part because IRS was unable to identify and channel appropriate collection cases to the private collectors. Key implementation details for this proposal must be worked out, and it, too, will require careful planning and close management attention.
The Federal Service Labor-Management Relations Statute (the Statute) provides the legal basis for the current federal labor-management relations program and establishes two sources of official time. Official time for both collective bargaining and Federal Labor Relations Authority (FLRA)-related activities, such as negotiations, attendance at impasse proceedings, and participation in proceedings before the FLRA, is provided as a statutory right. Official time for other purposes must be negotiated between the agency and the union in an agreed-upon amount deemed reasonable, necessary, and in the public interest. However, activities that relate to internal union business, such as the solicitation of members or the election of union officials, must be performed while in a non-duty status; that is, not on official time. In a 1979 report, we recommended that OPM (1) clarify its recordkeeping requirements then in effect for capturing time spent on representational activities and (2) direct agencies to comply with those requirements. Following our report, in 1981, OPM issued Federal Personnel Manual Letter 711-161. The letter directed federal agencies to activate, no later than January 1, 1982, a recordkeeping system to capture official time charged to representational activities. But the letter did not require agencies to report the yearly time charges to OPM, as we had recommended. As a result, OPM never consolidated the amount of time charged government-wide to union activities and has no information on agencies' compliance with the recordkeeping requirement. When the Federal Personnel Manual was abolished in 1994, all recordkeeping requirements regarding time spent on union activities were rescinded. In a 1997 report accompanying an appropriations bill, the House Appropriations Committee requested that OPM provide a one-time report on the total hours of official time spent on representational activities, the number of employees who used official time, and the related costs (salary, office space, equipment, and telephone) covering the first 6 months of calendar year 1998. In response, OPM reported that a total of 23,965 federal employees used approximately 2.2 million hours during the 6-month sample period. OPM estimated the cost of this time at about $48 million. OPM also reported that 946 of these employees (or 4 percent) worked 100 percent of the time in a representational capacity. OPM has prepared reports on official time usage since fiscal year 2002, most recently for the period covering fiscal year 2012. Seven of the 10 selected agencies reported lower official time rates in fiscal year 2013 than in fiscal year 2006, as shown in table 2 below. Official time rates indicate the number of official time hours expended per bargaining unit (BU) employee and allow for meaningful comparison of official time usage over time. For those seven agencies, declines in official time charges per BU employee ranged from about 30 minutes or less at several agencies to 2-1/2 fewer hours per BU employee at one agency. The remaining three agencies (DHS, DOT, and SSA) reported increased official time rates. An analysis of the average annual rate of official time use was somewhat higher but showed a similar pattern, with the same seven agencies showing annual declines and three agencies showing annual increases. Overall, the total number of official time hours charged as reported by the 10 selected agencies was higher in fiscal year 2013 than in fiscal year 2006, as shown in table 3 below.
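As a concrete illustration of how these rates are derived, the sketch below computes official time rates from total hours charged and BU employee counts. The agency figures are hypothetical, chosen only to show why the per-employee rate, rather than total hours, supports comparisons across years.

```python
# Official time rate = official time hours charged / BU employees.
# The figures below are hypothetical, for illustration only.
def official_time_rate(hours_charged: int, bu_employees: int) -> float:
    """Official time hours expended per bargaining unit employee."""
    return hours_charged / bu_employees

fy2006_rate = official_time_rate(hours_charged=100_000, bu_employees=40_000)
fy2013_rate = official_time_rate(hours_charged=120_000, bu_employees=60_000)
print(f"FY2006: {fy2006_rate:.2f} hours per BU employee")  # 2.50
print(f"FY2013: {fy2013_rate:.2f} hours per BU employee")  # 2.00
# Total hours rose 20 percent here, yet the per-employee rate fell,
# which is why the rate is the comparable measure across years.
```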
In fiscal year 2013, the 10 selected agencies in our review reported that BU employees charged a total of 2,485,717 hours to official time, an increase of 25 percent over the 1,991,089 hours these agencies reported for fiscal year 2006. We found that half of the agencies reported using more official time hours in fiscal year 2013 than in fiscal year 2006 (see figure 1, which presents each agency's official time rate and hours reported for fiscal years 2006 through 2013). OPM and agencies attributed changes in usage to several factors. According to OPM, factors that have contributed to changes in official time use in previous years include an emphasis by agencies on accurately documenting official time hours; changes in the number of BU employees; changes in the amount of mid-term and term collective bargaining; and variation in the use of labor-management forums. A number of agencies cited similar factors. For example, RRB attributed changes in usage to the age of its collective bargaining agreement (CBA) with the American Federation of Government Employees labor union, which is almost 30 years old. Thus, the agency has not had any nationwide negotiations during the period that might have required a large number of official time hours. NSF reported lower numbers of charged official time hours in mid-term negotiations (284 to 110), dispute resolution (203 to 93), and general labor-management relations (978 to 691). NSF officials informed us that their tally of official time hours was incomplete for fiscal years 2012 and 2013 because the agency transitioned to a different time and attendance system, which we explain more fully later in this report. Other agencies cited factors such as increases in the amount of negotiations or in general labor-management relations activity as affecting their use of official time. For example, DOT officials pointed out that the increased official time charged per BU employee was spent on improving labor-management relations and internal business processes, not on litigating disputes. They noted that the agency's spike in its official time rate between fiscal years 2006 and 2007 may be related to possible underreporting in fiscal year 2006, which made a subsequent return to better accuracy appear as a sharp increase in fiscal year 2007. Most of DOT's increased reporting of official time was also in the general labor-management relations category: the agency reported 66,736 hours in that category in fiscal year 2006 compared to 230,080 hours in fiscal year 2013. According to DOT, the agency's increase since fiscal year 2006 in the use of official time in the general labor-management relations category resulted in turn from increased collaboration between the Federal Aviation Administration (FAA) and its unions, primarily the National Air Traffic Controllers Association (NATCA). NATCA is FAA's largest BU and accounts for the majority of official time used by FAA's union representatives. In 2009, FAA and NATCA renegotiated their 2006 CBA. DHS, which had the highest percentage increase in official time hours charged, also had the biggest percentage increase in BU employees. DHS reported its largest increases in official time hours in the general labor-management relations category, from 25,785 hours in fiscal year 2006 to 185,509 hours in fiscal year 2013, and in the mid-term negotiations category, from 3,416 to 11,045 hours.
According to DHS, several factors contributed to the agency's increased use of official time hours during the period. The first-time recognition of a BU within the Transportation Security Administration increased the overall number of DHS BU employees by more than 40,000 from fiscal year 2011 to fiscal year 2012. In addition, DHS officials said that the establishment of labor-management forums contributed to fluctuations in official time usage during the period. Agency officials explained that as more forums were established and became more active, the hours expended grew. DHS also cited budget reductions, sequestration, and furloughs as factors that led to increases in the general labor-management relations hours reported, because briefings and meetings with the unions were necessary to keep them informed of how DHS components would address shortfalls and avoid or mitigate planned furloughs, as well as of contingency plans for a potential lapse of future appropriations. In addition, DHS explained that there was a corresponding increase in mid-term bargaining hours reported as unions exercised their right to negotiate based on the notices they received regarding these matters. Agency officials told us of instances in which agencies may have underreported the number of official time hours. Several agencies described particular internal circumstances that affected their ability to accurately record the number of official time hours charged. For example, NSF officials told us that the agency transferred its official time reporting to a different time and attendance system during the middle of fiscal year 2012. Because of the transition, it did not capture all official time charges for parts of fiscal years 2012 and 2013, and NSF does not have a mechanism to retroactively collect the incomplete official time data for these years. A Commerce official told us that one of its components does not report official time using the same transactional codes that other components use. As a result, the component had more than 24,000 hours of official time for fiscal year 2013 that were not accounted for in OPM's Enterprise Human Resources Integration (EHRI) database. According to the official, Commerce is negotiating a change in the CBAs with the three affected unions to report official time using the same transactional codes that the other components use. In addition, a recent GAO report found that official time activities at VA were recorded as administrative leave because the agency's current time and attendance system does not have a code to capture official time separately. VA officials told us that the agency is implementing a new time and attendance system, the Veterans Affairs Time and Attendance System (VATAS), which will capture official time usage. According to a VA official, the agency has not yet collected official time data through VATAS because of system issues it is addressing, and VA does not have a time frame for when VATAS will be in use department-wide. We also found that some agencies, such as DHS, SSA, and Commerce, vary in how they report hours charged to labor-management forum meetings conducted under Executive Order 13522. Executive Order 13522 was designed to establish a cooperative and productive form of labor-management relations but does not specify how agencies should treat labor-management forum meetings for time and attendance purposes. Some agencies consider this time official time and others consider it duty time.
For example, DHS reported that it advises its components that time used in connection with these meetings is to be reported as official time under the general labor-management relations category. On the other hand, an SSA official told us that SSA considers time spent on labor-management forum meetings to be duty time. Commerce reported that time spent at labor-management forum meetings, depending on the particular agency component, is sometimes charged to official time and other times charged as regular duty time. In total, the 10 selected agencies reported that less than 2 percent of their BU employees charged official time hours in fiscal year 2013. As shown in table 4, the percentage of BU employees who charged official time at the 10 agencies ranged from less than 0.01 percent at VA to 7.5 percent at DOT. As shown in table 5 below, 8 of our 10 selected agencies reported that a small number of employees charged 100 percent of their duty time to official time in fiscal year 2013. We found that each of these eight agencies has CBAs in place that authorize certain union officials to charge 100 percent of their time to official time. VA, the largest of our 10 selected agencies with about 265,000 BU employees spread among 18 unions and approximately 200 facilities, reported the highest number of employees, 259, who charged 100 percent of their time to official time in fiscal year 2013. Treasury and DHS were next, with 44 of 2,046 and 43 of 2,960 total official time users, respectively, charging 100 percent official time. NSF and SSA reported that no employees charged 100 percent of their duty time to official time in fiscal year 2013. OPM did not implement key practices needed to develop a reliable cost estimate of official time. Specifically, OPM's cost estimate is not reliable because it lacks assurance of accuracy and adequate documentation. OPM could have greater assurance of the accuracy of its cost estimate if it cross-checked its results using an alternative methodology to determine whether the results are similar. Since OPM had not published a cost estimate for fiscal year 2013, we replicated OPM's methodology for fiscal year 2012 and applied it to fiscal year 2013 EHRI salary data to facilitate a comparison of cost estimates for fiscal year 2013. Basing estimates on an assessment of most likely costs enhances accuracy, and best practices for high-quality cost estimates incorporate cross-checking with an alternative methodology to see if the results are similar. If the results are not similar, the methodologies should be reconciled. As described below, our comparison of the cost estimates generated by the two methodologies revealed different results. OPM has historically estimated annual official time costs using a simple computation: multiplying each agency's average salary (as reported in EHRI) for BU employees covered by official time activities by the agency's total reported official time hours. We computed our own cost estimate for the 6 of our 10 selected agencies that report data through EHRI using an alternative methodology: we used the actual salary of each BU employee who charged official time and multiplied it by the total reported official time hours that employee used. We found that our cost estimate for the 6 agencies was about $5 million more than the estimate using OPM's methodology ($61 million versus $56 million, a difference of about 9 percent).
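To make the difference between the two computations concrete, here is a minimal sketch of both estimates over a toy payroll. The salary and hours data are invented, and the function names are ours, but the structure follows the two methodologies as described above.

```python
# Two ways to estimate official time salary costs, per the text:
#   OPM-style: (average BU salary) x (agency total official time hours)
#   GAO-style: sum over users of (that user's salary x that user's hours)
# All figures below are invented for illustration.

HOURS_PER_YEAR = 2087  # federal convention for annual-to-hourly conversion

# (annual_salary, official_time_hours) for each BU employee; most BU
# employees charge no official time at all.
bu_employees = [(60_000, 0), (55_000, 0), (90_000, 500), (85_000, 1_200)]

def opm_style_estimate(employees):
    avg_salary = sum(s for s, _ in employees) / len(employees)
    total_hours = sum(h for _, h in employees)
    return (avg_salary / HOURS_PER_YEAR) * total_hours

def gao_style_estimate(employees):
    return sum((s / HOURS_PER_YEAR) * h for s, h in employees if h > 0)

print(f"OPM-style estimate: ${opm_style_estimate(bu_employees):,.0f}")
print(f"GAO-style estimate: ${gao_style_estimate(bu_employees):,.0f}")
# The GAO-style figure is higher here because the employees who actually
# charged official time earn more than the BU-wide average salary, the
# same effect that can produce differences like the $61M versus $56M
# comparison described above.
```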
Further, cost estimates using GAO's methodology at 4 of the 6 agencies were higher by 15 percent or more than the estimates using OPM's methodology (see table 6). As a result, OPM's cost estimate for government-wide use of official time could be higher or lower if this methodology were applied to all reporting agencies rather than only the 6 agencies used here. OPM officials said that reporting on official time is not a priority at this time and that they have used the same methodology for preparing the estimate since fiscal year 2002. According to these officials, the publication of reports on official time is affected by available resources, such as staff time, and the consideration of other mission priorities. OPM told us it produces the official time reports as a resource to help inform agencies, unions, and the public on the granting and use of official time. DOL and SSA officials reported that OPM's reports were useful because they provide a perspective on agency usage levels. One agency said it uses the reports to support negotiations with unions. Other agencies may benefit similarly from OPM reporting on official time, and the FLRA referenced OPM's reports in a recent case. Use of other methodologies by OPM may result in more representative estimates of actual costs, and OPM may be able to provide better information to help Congress oversee the use of official time and help agencies manage this activity. OPM's cost estimate for official time also lacked adequate documentation: OPM could not initially provide enough documentation of its methodology for producing the cost estimate that a cost analyst unfamiliar with the program could quickly replicate the process and produce the same results. A credible cost estimate is supported by detailed documentation that describes how it was derived, and the methodology used should be thoroughly documented so that the results are replicable. We requested documentation, but the agency was unable to produce it. For example, we submitted several requests to OPM to understand significant assumptions behind the cost estimate, but OPM was unable to provide documentation that guides its estimation process. Accordingly, we developed a summary of our understanding of OPM's steps for producing the estimate based on discussions and e-mails between us and OPM. For example, after several inquiries about its methodology, OPM provided information about the filters it applies when computing the number of BU employees used to compute salary costs. The filters OPM uses could affect the average salary and the total count of BU employees, which are key factors in computing agency total salary costs. We recognize that the methodology OPM uses can be considered a relatively straightforward and reasonable labor equation. However, that is all the more reason that OPM should have its methodology readily available so that an independent analyst could quickly recreate its results. Four of our 10 selected agencies reported that they collected data on non-payroll costs such as travel, office space, telephone service, or related costs. Among these four agencies, the type of data collected varied by agency. The other six agencies said they did not collect or track data on non-payroll costs.
SSA is required to report on non-payroll costs related to official time to its appropriations committee. Each year since 1998, SSA has reported official time costs (hours, dollar value of payroll costs, travel and per diem, office space, telephones and supplies, associated interest, and arbitration expenses) to the House Appropriations Committee. For fiscal year 2013, SSA reported that its unions' representational activity costs were $14.6 million, of which $12.6 million was for salary and benefits, $700,000 for travel and per diem, and $1.1 million for office space, telephones, and supplies, with the remainder split among interest and arbitration expenses. DOL reported that it tracks non-payroll costs for its unions; however, the specific types of costs tracked vary by union. For example, for fiscal year 2013 DOL reported annual office rent ($54,000) for one union and travel ($268,000) and communication ($6,000) costs for another union. Another agency, Treasury, reported that IRS, the agency's largest bureau with approximately 100,000 employees, has different needs and practices than some of Treasury's smaller bureaus and finds it useful to track administrative costs attributable to official time (union office space and travel costs) to support agency proposals when negotiating with the union and to respond to outside inquiries. HHS reported that it has systems enabling it to track travel costs related to official time. Further, the organizational units within HHS maintain records and can generate reports for costs such as office space rentals and services such as computers, telephones, and copiers. According to OPM, the agency issues reports on agency use of official time on its own initiative to assist agencies with ensuring accountability in labor-management relations. Specifically, in a memorandum to agency and department heads on June 17, 2002, OPM requested that each agency report, by the end of each fiscal year, the number of hours of official time used by employees to perform representational activities. The first agency submissions were due to OPM by October 31, 2002, covering fiscal year 2002. Since fiscal year 2004, OPM has asked agencies to report official time hours used in the four predefined categories of term negotiating, mid-term negotiating, dispute resolution, and general labor-management relations. In addition, fiscal year 2009 was the first time OPM relied upon agency official time usage data extracted from EHRI. OPM officials told us that they expected to publish reports for fiscal years 2012 and 2013 by the end of fiscal year 2014 to the extent that the data were available and validated by agencies during this period. Subsequently, however, OPM informed us that fiscal year 2013 data had not been available and validated for all agencies and that, accordingly, it released a report for fiscal year 2012 on October 3, 2014. EHRI collects data on official time from the various payroll providers for the agencies they service. However, according to OPM, some agencies have not transitioned to reporting official time via the categories included in electronic payroll systems and must still provide their official time data to OPM manually. Four of our 10 selected agencies provided fiscal year 2011 official time data to OPM manually: VA, DOL, HHS, and SSA. OPM produces reports on government-wide use despite having no reporting requirement for official time.
OPM prepares to report on official time data by asking agencies to verify data that the agencies have previously provided to OPM through the EHRI database. Between fiscal years 2009 and 2012, OPM relied on data extracted from EHRI to prepare its annual reports on official time but took an additional step in the process by asking agencies to verify the data reported through EHRI. As mentioned earlier, EHRI collects agency data on official time from the various payroll providers. Agencies transmit payroll data that include information on official time hours to payroll providers based upon agencies' time and attendance data. According to OPM officials, the verification is a time- and labor-intensive process. OPM asks agencies to verify information such as the number of hours used in each of the four categories of official time use and total hours. Agencies may confirm OPM's numbers or make changes based on their own data. When there are differences, OPM relies on the data verified and provided by the agencies to prepare its report. OPM does not follow up with individual agencies that submitted revised usage data to (1) determine the source of the differences or (2) identify steps for improving future reporting through EHRI. As shown in table 7, we found differences between OPM's EHRI data and the agency data reported to us on total official time hours charged in fiscal year 2013 for the 6 of our 10 selected agencies that report through EHRI. As mentioned earlier, 4 of our 10 agencies provide official time data to OPM manually: VA, DOL, HHS, and SSA. Internal control standards dictate that management obtain relevant data from reliable internal and external sources on a timely basis. Federal financial accounting standards stress that reliable information on the costs of federal programs and activities is crucial for effective management of government operations. The standards explain that Congress needs cost information to evaluate program performance, to make program authorization decisions, and to compare alternative courses of action. Moreover, OPM's guidelines instruct the agency on the importance of pursuing high-quality data and reliable information on program costs. Specifically, according to OPM's Information Quality Guidelines, the agency is to maximize the quality of the information it disseminates. According to OPM officials, OPM does not know whether agencies' reported official time hours are accurate. The officials told us that, generally, at least half of the roughly 50 agencies that report official time data through EHRI revise their official time hours through the report validation process. However, OPM does not know why agencies submit such changes and does not request explanatory information. Several of our selected agencies that report through EHRI provided reasons why there may be differences. For example, DOT officials explained that DOT collects official time data by pay period, using pay codes entered by employees on their timecards, and that the data reflect amendments to previous pay periods. They explained that because pay periods do not begin and end on the first and last days of the fiscal year, the numbers DOT provides may not match those provided by OPM, and that unless OPM and DOT collect the data over exactly the same time frame, there is potential for differences.
Commerce told us that the amount of official time reported through EHRI is not as accurate as what the agency reports because EHRI includes official time that should not be reported (e.g., official time for employees not covered by title 5 U.S.C., specifically foreign service employees). To date, OPM has not sought to determine the reasons for discrepancies between EHRI and agency-reported data. By not following up with agencies on data differences, OPM may be missing an opportunity to improve the quality of agency reporting through EHRI and to enable a less labor-intensive and more efficient process. CBAs contain provisions by which agencies manage official time. Typically, an agreement outlines the approach, the types of activities that are and are not allowed, and internal controls, such as the supervisory approval process and practices for verifying the authorized employees who perform representational duties. Since agencies and unions can negotiate at the department, component, bureau, operating administration, facility, or local level, there can be variations in how official time is managed within an agency. For example, within VA there are 18 unions with 18 CBAs representing about 265,000 BU employees, and VA has several components that encompass more than 200 facilities. NSF, on the other hand, has one union with one CBA representing more than 900 BU employees located at a single facility. Our review of 173 CBAs from the 10 agencies found that agencies manage official time using three different approaches or a combination of two or more of them:

- Bank of hours: a specified number of hours, or a limit (i.e., not-to-exceed amount) on the number of hours, authorized for representational activities;

- Designated positions: a specified percentage or number of hours authorized for a designated position, such as the president, vice president, secretary, or treasurer, typically characterized as a percentage of an employee's total time, such as 50 or 100 percent; and

- Reasonable time: no specified number or percentage of hours for representational activities (i.e., an agreement may state that a reasonable amount of time will be granted to a union representative to accomplish representational duties).

Official time for certain representational activities is provided as a statutory right. Therefore, if a BU has exhausted its allotted bank of hours of official time for representational activities before the calendar or fiscal year ends, it may negotiate additional time with the agency, or otherwise receive additional time, as appropriate. DHS officials told us that if their unions used up their allotted bank of hours, additional time would be granted for union representatives to attend FLRA-mandated hearings. In addition, one of DOT's CBAs includes language providing that additional time may be requested and approved on a case-by-case basis. A majority of CBAs at 8 of the 10 agencies contained provisions directing agencies to use the "reasonable time" approach, one that is not defined in terms of specific hours, to manage official time for representational duties. As shown in table 8, 141 of the 173 CBAs we reviewed, or 82 percent, contained provisions for using the reasonable time approach. Of the 141 CBAs that specified the reasonable time approach, 64 used reasonable time exclusively, while the remaining 77 used it in combination with another approach, such as a bank of hours, designated positions, or both.
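Because a single CBA can combine approaches, the category counts above overlap. The sketch below shows one way to tally overlapping approach categories; the CBA records are made up, standing in for the 173 agreements actually reviewed.

```python
# Tallying CBA official time approaches, which overlap because one
# agreement may combine approaches. These records are made up; the
# report's actual review covered 173 CBAs.
cbas = [
    {"reasonable_time"},                                           # exclusive
    {"reasonable_time", "bank_of_hours"},                          # combination
    {"bank_of_hours", "designated_positions"},                     # combination
    {"reasonable_time", "bank_of_hours", "designated_positions"},  # combination
]

for approach in ("reasonable_time", "bank_of_hours", "designated_positions"):
    using = [c for c in cbas if approach in c]
    exclusive = sum(1 for c in using if len(c) == 1)
    print(f"{approach}: {len(using)} total "
          f"({exclusive} exclusive, {len(using) - exclusive} in combination)")
# Category totals sum to more than the number of CBAs, which is why the
# report's counts of 141, 93, and 49 together exceed 173.
```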
For example, Commerce, DHS, DOL, DOT, HHS, Treasury, and VA have CBAs that contained all three approaches to manage official time. Some of them included reasonable time for union representatives to conduct representational activities, designated percentages or hours of official time for union officers, and a separate bank of hours for travel or training activities. The second most frequently used approach to manage official time was through a bank of hours. Our review found that 93 of the 173 CBAs, or 54 percent, at nine agencies contained a provision for using a bank of hours to conduct representational activities. Of the 93 CBAs that used a bank of hours, 16 specified using a bank of hours exclusively while 77 created a bank of hours in combination with other approaches. Depending on the size of the agency and BU, the number of hours allotted to the bank can vary. For instance, a smaller agency, NSF, included a provision for a bank of 1,040 hours per year. Larger agencies have a wide range of hours allotted to the bank. For instance, one of DHS's CBAs included a provision for a bank of 30,000 hours per fiscal year, while one of SSA's CBAs allotted a bank of 250,000 hours per fiscal year for all representational activities. The least often used approach involved designated positions with authorized percentages or hours of official time. Of the 49 CBAs that contained a provision for designated positions, 1 CBA at Treasury specified using the designated positions approach exclusively and 48 CBAs at eight other agencies used it in combination with other approaches. We found 27 CBAs at nine agencies that provided for at least one union official to charge up to 100 percent of their duty hours to official time. These agencies are Commerce, DHS, DOL, DOT, HHS, RRB, SSA, Treasury, and VA. All agencies we reviewed reported that immediate supervisors generally have the primary responsibility for approving official time requests and for monitoring use when they sign off on their designated employees' timecards. For example, DOL and HHS require immediate supervisors to monitor and verify official time use for employees under their supervision and also to submit official time hours to their human resources offices periodically, which are then compiled for OPM's Official Time Reports. One of DHS's components, the United States Coast Guard, provides labor-management relations program guidance and training to educate immediate supervisors on official time procedures, rights, and responsibilities to ensure that the provisions for official time are administered appropriately as specified in relevant CBAs. NSF also provides training sessions and best practice discussions with all supervisors responsible for approving official time. In addition to the supervisory process, some of the agencies' labor relations offices have a responsibility to monitor official time. For example, the labor relations office at DOT's Federal Railroad Administration receives official time requests and also monitors and verifies official time usage. Similarly, DOT's Federal Transit Administration requires union representatives to seek approval from immediate supervisors and the labor relations officer to use official time. NSF's Labor Relations Officer monitors official time usage quarterly to determine whether it is being used within the confines of the CBA. Eight of 10 agencies reported taking additional steps to monitor official time.
Similar to agency approaches for managing official time, agency internal control practices for monitoring official time varied at the eight agencies because they are negotiated at the exclusive level of recognition, such as components, bureaus, operating administrations, and facilities. As shown in table 9, agency practices may include: (1) comparing authorized individuals against actual individuals charging official time; (2) comparing requests for official time against actual official time used; (3) verifying that actual official time use does not exceed authorized amounts through internal reports used by agency management to monitor usage; and (4) verifying the accuracy of official time usage by sharing internal reports with authorized individuals, such as union representatives. DHS and VA reported that they do not use any additional practices besides the monitoring performed by the immediate supervisor. Of the four practices, agencies we reviewed most often compared the list of authorized union representatives against those who charged official time. For example, DOT, HHS, NSF, and Treasury reported that they provide a list of authorized official time users to supervisors, who are responsible for ensuring that their employees are authorized to charge official time prior to approving timesheets. SSA's internal official time tracking system has built-in capabilities that allow only authorized union representatives to request official time and enter the actual amount used. Commerce partially addressed this practice because only some of its bureaus reported that they used the list to cross-verify. For example, Census reported that officials pull reports each pay period to verify whether an employee should have charged the official time category, while the National Institute of Standards and Technology's Labor Relations Manager spot-checks time and attendance records of union representatives, using the most recent list of authorized employees on file with the agency. Internal reports used to verify that authorized individuals did not exceed their authorized amounts were the second most often used practice reported by agencies to monitor official time use. For example, NSF used internal reports to ensure that the total amount of official time hours was appropriately credited toward the bank as outlined in its CBA. SSA used internal reports generated from its official time tracking system, which was programmed to ensure that the time requested by union representatives and approved by immediate supervisors matches the actual time used. In addition, the system does not allow users to exceed their authorized amounts of official time as negotiated in the CBAs. Commerce and DOT used this practice as well, but not all of their bureaus or operating administrations reported that they used internal reports for cross-verification. For example, one of Commerce's bureaus, the United States Patent and Trademark Office, reported that it periodically runs internal reports on usage and tracks overall use through the official time categories. Unions that have an allotted bank of hours typically authorize who can use official time and the amount. According to DOT, only one of its operating administrations reported using internal reports to verify that authorized individuals did not exceed their authorized amounts because official time is drawn from a bank of hours. DOT reported that internal reports were unnecessary for other operating administrations that use the reasonable time approach.
Regardless of the approaches used, having internal reports would enable agencies to gauge overall usage, ensure that individuals did not exceed what they were authorized to use, and provide reasonable assurance that use of official time is as intended. OPM is a member of a forum of agencies that exchange information on issues related to labor-management relations. According to OPM officials, the Employee Labor Relations (ELR) network is an informal group of agency headquarters labor and employee relations practitioners who have ongoing communication through face-to-face meetings and e-mail distribution. OPM said it uses the ELR network to share information on policies, significant third-party decisions, and best practices. According to one agency official, the ELR network plans to discuss official time reporting as an agenda item. This network could be an avenue for OPM to work with agencies on reporting issues for agency use of official time. While informal, the ELR network presents an opportunity for OPM to share information on monitoring and reporting practices for agency use of official time. Internal control guidance prescribes that management perform ongoing monitoring through regular management and supervisory activities, comparisons, and reconciliations. Monitoring is essential for assessing the extent of performance over time. OPM officials have stated that matters relating to official time use are governed by the law and negotiated between agencies and unions. Consistent with the Federal Service Labor-Management Relations Statute, OPM has no statutory or regulatory role for monitoring or enforcing agencies' use of official time. Consequently, OPM officials said they do not share information on monitoring practices. By not sharing monitoring practices among agencies, OPM may be missing an opportunity to help agencies strengthen their internal controls for monitoring the use of official time and increase transparency and accountability. While we described earlier in this report costs associated with official time, agency management and union officials also cited what they considered to be some benefits of official time. Specifically, agency management and union officials at three selected agencies—SSA, Treasury, and VA—told us about several benefits related to official time, such as (1) improving labor-management relations, and (2) reducing agency costs. Similar benefits were also cited in our September 1997 report, which surveyed 30 federal agencies on how resources were used for employee union activities. First, according to both management and union officials, official time has helped improve labor-management relations because management and unions work jointly to develop solutions or improvements to address workplace challenges. For example, some of the Treasury union officials we met with said that management involved their unions early in the process when making suggestions to streamline or fine-tune workplace processes, such as installing a new performance management system and updating existing procedures. In addition, they also told us that official time has helped to create an environment where the workforce can be more engaged and have their voices heard. Treasury officials told us that official time improves the agency's efficiency and accomplishment of the mission because union officials communicate goals to the organization.
SSA management officials told us that allowing official time provides a stable work environment for SSA employees, while SSA union officials said that official time has played a critical role in improving SSA as a workplace. For example, they explained that SSA unions were able to negotiate "flexi-place" arrangements with agency management using official time to allow employees to work from home. VA union officials told us that official time has allowed them to help agency management establish workforce policies related to telework. Second, according to both management and union officials, the use of official time by union representatives to address issues, such as potential unfair labor practices, equal employment opportunity complaints, and grievances with employees, has led to agency cost savings. For example, management and union officials at Treasury and VA told us that having official time has resulted in fewer unfair labor practices and grievances filed by employees because they are usually resolved at the lowest level of management. Specifically, VA union officials told us that a VA union conducted a study of its 22 local chapters and found reductions in grievances and unfair labor practices because of official time. In addition, VA management officials said that having on-site union representation and support helps lessen and resolve disputes more quickly, thereby assisting the department in moving forward with its mission. Similarly, SSA union officials also said that official time has helped to resolve employee issues before they escalate to formal grievances or equal employment opportunity complaints. The use of official time is granted in statute as being in the public interest and established in practice by federal agencies. OPM has produced reports on agencies' use of official time and estimated government-wide costs on its own initiative for most years since 2002 while emphasizing that agency labor and management are both accountable for ensuring official time is used appropriately. There has been longstanding congressional interest in official time usage as well as some concern about the amount, type, accuracy, and timeliness of information available to help ensure an appropriate level of congressional oversight. The scope and level of official time use reinforce the need for oversight and accountability, with more than 1.2 million BU employees eligible to use official time and over 3.4 million hours charged for representational activities in fiscal year 2012, the latest year for which OPM has reported this information. Within this overall context, it is important that sufficient controls, processes, and guidance are in place for reporting and monitoring to provide reasonable assurance that official time is used as intended; is consistent with the statute and applicable agency policies and procedures; enables congressional oversight; informs management and labor decision making; and provides public transparency. OPM has historically estimated official time costs using a methodology based on the average salary of all employees in a BU. An alternative methodology using the actual salaries of BU employees who charged official time would yield a different estimate from OPM's. The use of alternative cost estimation methodologies may result in a more representative estimate of actual costs. Because OPM recognizes weaknesses in data collected through its EHRI database, it must expend additional resources to validate official time data.
OPM reports that in any given year, about half of the roughly 50 agencies reporting through EHRI change their submissions during the validation process. OPM's attempt to improve the reliability of official time data by having agencies validate their data is noteworthy but labor-intensive and time-consuming. By not following up with agencies on data differences, OPM may be missing an opportunity to improve data quality on agency reporting through EHRI and enable a less labor-intensive and more efficient process. In addition, Congress may not have the most accurate information on the use of official time at agencies to support its oversight activities. Because agencies most often manage the use of official time using an approach that has no specified number of hours, they could be at greater risk for abuse. The risk may increase within agencies with multiple collective bargaining agreements at the department, component, and operating administration levels that have differences in how official time is managed. Hence, agencies may need to implement additional actions to monitor the use of official time to help mitigate the risk of abuse. Agencies that use a reasonable time approach and rely exclusively on immediate supervisors for monitoring could benefit from the experience of other agencies that use a number of techniques to monitor the use of official time. By not considering whether it would be useful for agencies to share information on monitoring practices, OPM may be missing an opportunity to assist agencies in strengthening internal controls and increasing transparency and accountability. To help ensure that OPM and agencies collect, track, and report reliable data on the use of official time, we recommend that the Director of OPM take the following three actions:

Consider other approaches to developing its cost estimate.

Work with agencies to identify opportunities to increase efficiency of data collection and reporting through EHRI.

Consider whether it would be useful to share agencies' practices on monitoring use of official time through existing forums such as the ELR network.

We provided a draft of this report to the Director of OPM for review and comment. OPM commented on our three recommendations and partially concurred with all three. OPM also provided technical comments, which we incorporated as appropriate. OPM's written comments are reprinted in appendix IV. We also provided an abridged draft laying out key facts and information to the 10 selected agencies we reviewed and incorporated comments where appropriate. OPM partially concurred with our first recommendation that the agency should consider other approaches to developing its cost estimate. OPM agreed to consider other approaches to developing its cost estimates in addition to considering whether to continue using its current methodology. OPM stated that its cost estimates have been based on (1) official time and average salary data provided to OPM through EHRI; (2) official time data manually provided directly to OPM by certain agencies; and (3) official time data manually updated by a number of agencies. OPM said that the approach we used in the report, linking official time hours taken by specific individuals to those individuals' actual salaries, is not possible using EHRI in all instances and is a more labor-intensive, and thus more costly, process to undertake for the entire executive branch. The methodology we used was intended as an example of an alternative method for producing a cost estimate.
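In spirit, the two computations differ as in the following minimal sketch; all salaries, hours, and the fringe rate are hypothetical assumptions, not figures from either OPM's or our actual estimates:

```python
# Minimal sketch contrasting OPM's agency-average method with the
# actual-salary alternative. All figures are hypothetical assumptions.
FRINGE = 0.365          # assumed fringe-benefit rate
HOURS_PER_YEAR = 2087   # standard federal divisor for annualized salary

bu_employees = [  # (annual salary, official time hours charged)
    (120_000, 1800),  # e.g., a designated union officer at a senior grade
    (95_000, 200),
    (60_000, 0),      # most BU employees charge no official time
    (55_000, 0),
    (50_000, 0),
]

total_hours = sum(hours for _, hours in bu_employees)

# Agency-average method: average salary of ALL BU employees times hours.
avg_hourly = (sum(s for s, _ in bu_employees) / len(bu_employees)) / HOURS_PER_YEAR
average_method = total_hours * avg_hourly * (1 + FRINGE)

# Actual-salary method: each user's own salary times his or her hours.
actual_method = sum(
    hours * (salary / HOURS_PER_YEAR) * (1 + FRINGE)
    for salary, hours in bu_employees
)

print(f"agency-average method: ${average_method:>10,.0f}")
print(f"actual-salary method:  ${actual_method:>10,.0f}")
```

When the employees who charge the most official time earn more than the BU average, as with full-time union officers at senior grades, the actual-salary method produces the higher figure, consistent with the direction of the difference reported above.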
OPM reported to us on October 15, 2014, that 52 of the 62 agencies that reported fiscal year 2012 official time data to OPM did so using EHRI; thus, OPM would be able to link official time hours used by specific individuals to those individuals' actual salaries for the overwhelming majority of reporting agencies. Although our approach may be slightly more labor intensive, it provides greater assurance that the cost reported is more representative of actual cost and, ultimately, more useful for oversight purposes. OPM partially concurred with our second recommendation that the agency should work with other agencies to identify opportunities to increase the efficiency of data collection and reporting through EHRI. OPM stated that it will work with agencies to identify opportunities, which they may wish to consider, in order to increase the efficiency of data collection and reporting of official time through EHRI. However, OPM stated that it has no authority to direct agency actions regarding official time, including how official time data are collected and reported. It added that any opportunities to increase efficiency of data collection and reporting of official time are ultimately dependent upon individual agency determinations subject to local collective bargaining obligations. We agree that agencies are ultimately responsible for making changes to their data collection, but OPM plays an important role via its reporting of official time. By following up with agencies that report discrepancies during the verification process, OPM could determine whether there are less resource-intensive alternatives for agencies to pursue that would yield more accurate data. We continue to believe that by following up with agencies on data differences, OPM has an opportunity to help improve the data quality on agency reporting through EHRI. OPM partially concurred with our third recommendation that the agency consider whether it would be useful to share agencies' practices on monitoring use of official time through existing forums such as the ELR network. OPM stated that it will consider whether it would be useful to share agencies' practices on monitoring use of official time through existing forums such as the ELR network, but that, ultimately, implementation of any identified practices is subject to each agency's policies and collective bargaining obligations. We continue to believe that OPM has an opportunity to strengthen its assistance to agencies by sharing techniques and approaches on monitoring official time in a collaborative manner through its membership in the ELR network. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director of OPM and other interested parties. In addition, the report will be available at no charge on the GAO website at www.gao.gov. If you have any questions about this report, please contact me at 202-512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of this engagement were to review the use of official time by federal agencies and the federal rules relating to the use of official time by federal employees.
Specifically, this report (1) describes the extent to which 10 selected agencies reported using official time; (2) assesses the extent to which OPM's cost estimate for official time aligns with leading cost estimation practices; (3) examines OPM reporting on official time; and (4) determines the extent to which selected agencies vary in their approach for managing official time and related internal control practices, and describes reported benefits. We included available information on both costs and benefits to be consistent with standard economic principles for evaluating federal programs and generally accepted government auditing standards. For purposes of this review, "use of official time" constitutes time charged to an official time and attendance code. To address these objectives, we selected a nongeneralizable sample of 10 of the 61 agencies that reported official time data covering fiscal year 2011 to OPM. We selected the 10 agencies using the following factors: (1) the number of bargaining unit (BU) employees, (2) agency size, (3) rate of official time use, (4) the number of BUs and unions represented at the agency, and (5) the amount of reported agency salary costs associated with official time (see table 10 for agencies and data on selected criteria). In fiscal year 2011, the 10 agencies accounted for approximately 47 percent of BU employees. To describe the extent to which the 10 selected agencies reported using official time, we used OPM's published reports on official time, which included official time data for each of the 10 selected agencies and covered fiscal years 2002 through 2011. We provided a structured document request to the 10 selected agencies to collect official time usage data for fiscal years 2012 and 2013. We reviewed relevant agency documentation and interviewed agency officials charged with administering agency official time processes. We examined the data provided for obvious errors and inconsistencies, and we queried each of the 10 agencies to better understand the data systems each agency used to collect and report official time usage data, as well as the quality of data entered into those systems. We determined that agency official time usage data for fiscal years 2012 and 2013 are sufficiently reliable for the purposes of the report. To further support our analysis, we used OPM's Enterprise Human Resources Integration (EHRI) Statistical Data Mart, which contains information on personnel actions and payroll data for most federal civilian employees, including employees of our 10 selected agencies. We assessed the reliability of EHRI data through electronic testing to identify missing data, out-of-range values, and logical inconsistencies. We also reviewed our prior work assessing the reliability of these data and interviewed OPM officials knowledgeable about the data to discuss the data's accuracy and the steps OPM takes to ensure reliability. On the basis of this assessment, we believe the EHRI data we used are sufficiently reliable for the purpose of this report. We began our analyses with fiscal year 2006 because that is the first year in which OPM consistently reported all data elements for each of our 10 selected agencies. We selected fiscal year 2013 as the endpoint because it was the most recent, complete fiscal year of data available during our review.
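The kinds of electronic tests described above (identifying missing data, out-of-range values, and logical inconsistencies) can be illustrated with a minimal sketch; the records and column names below are hypothetical stand-ins, not actual EHRI fields:

```python
import pandas as pd

# Illustrative reliability checks: missing data, out-of-range values,
# and logical inconsistencies. Column names are hypothetical, not EHRI's.
records = pd.DataFrame({
    "agency": ["A", "A", "B", "B"],
    "official_time_hours": [120.0, None, -5.0, 80.0],
    "total_duty_hours": [2080.0, 2080.0, 2080.0, 40.0],
})

missing = records[records["official_time_hours"].isna()]    # missing data
negative = records[records["official_time_hours"] < 0]      # out of range
exceeds = records[                                           # inconsistent:
    records["official_time_hours"] > records["total_duty_hours"]
]                                                            # hours > duty time

print(f"missing: {len(missing)}, out-of-range: {len(negative)}, "
      f"inconsistent: {len(exceeds)}")
```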
To assess whether OPM's cost estimate for agency use of official time aligned with leading cost estimation practices, we compared OPM's method and approach for preparing its estimate with GAO's Cost Estimating and Assessment Guide. For this guide, GAO cost experts assessed measures consistently applied by cost-estimating organizations throughout the federal government and industry, and considered best practices for the development of reliable cost estimates. We assessed whether OPM's estimate met the four desired characteristics for sound cost estimating: well documented, comprehensive, accurate, and credible. We performed a limited analysis of the cost estimating practices used by OPM against these characteristics. Our application of the cost estimating best practices criteria was limited because OPM did not develop a life-cycle cost estimate. OPM collects statistics on agency use of official time, including hours per year and estimated costs of prior years, and applies a straightforward labor equation: total cost equals the wage rate (plus a fringe rate) multiplied by the hours used. For the wage rate, OPM uses an agency average of salaries for all employees who belong to a BU. As part of our assessment of the reliability of OPM's cost estimate, we cross-checked OPM's methodology with an alternative methodology. Using fiscal year 2013 salary data from EHRI, we developed a methodology that uses an alternative wage rate—salaries of employees who charged official time. To calculate the total cost, we calculated hourly costs plus the fringe rate for individuals who charged greater than zero hours of official time in any category. Our approach included using the same filters and merges as OPM used, according to its responses to our queries. We conducted interviews with knowledgeable OPM officials and provided OPM with a description of our analysis to ensure our assumptions were consistent with their approach. To examine the extent of OPM reporting on the use of official time, we used OPM's published reports that included government-wide official time data from federal agencies for fiscal years 2002 through 2011. We reviewed relevant agency documentation, interviewed agency officials responsible for producing government-wide reports on official time, and reviewed documentation to better understand OPM's role in collecting and reporting on use of official time. To determine the extent to which selected agencies varied in their approach for managing official time and related internal control practices, we reviewed active collective bargaining agreements (CBA) and related agency documentation provided by the 10 selected agencies in response to a structured document request. We identified 173 active CBAs in the 10 selected agencies, representing the universe for this review. We also reviewed agency documentation and interviewed agency officials knowledgeable about internal control practices used to monitor use of official time. We do not generalize the results of our analysis to agencies outside of this review. We performed a content analysis of the 173 CBAs covering active BUs at the 10 selected agencies to create a unique database of official time provisions. To ensure that we received the appropriate CBAs for all active BUs, we cross-verified them using information, such as bargaining unit status (BUS) codes, from OPM's FLIS and a list of active BUs provided by OPM.
We also followed up with all of our selected agencies to verify that we correctly matched their CBAs to active BUs using the BUS codes. In addition, to ensure consistency and accuracy of our analysis of various agency approaches, analysts independently analyzed CBAs and then compared their results through a double-blind review of all 173 CBAs. In cases where there were discrepancies, analysts reconciled their differences for a final determination of an agency's approach to managing official time. To describe reported benefits of official time, we interviewed agency management and union officials from 3 of our 10 selected agencies (SSA, Treasury, and VA) to obtain their viewpoints. These agencies reflected a large proportion of BU employees and also utilized different approaches for capturing and reporting official time. Because the cited benefits are not tangible, we could not independently verify them. We conducted this performance audit from August 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. OPM reported an overall decrease in government-wide official time hours between fiscal years 2002 and 2012, with a slight rise between fiscal years 2006 and 2012 (see figure 12). According to OPM, official time costs in fiscal year 2012 represented less than 0.1 percent of the civilian personnel budget for federal civil service BU employees. In addition to the contact named above, Signora J. May (Assistant Director), Leslie Ashton, Lee Clark, Clifton G. Douglas Jr., Sara Daleski, Barbara Lancaster, Jason Lee, Andrea Levine, Robert Robinson, Susan Sato, Cynthia Saunders, Rebecca Shea, and Stewart Small made key contributions to this report.
Official time is time spent by federal employees performing certain union representational activities, such as negotiations and handling grievances. Employees on official time are treated as if they are in a duty status and are paid accordingly. OPM's estimate of total payroll costs (salary and benefits) for fiscal year 2012 official time hours was over $156 million and covered more than 1.2 million employees. GAO was asked to review federal rules relating to the use of official time. This report (1) describes the extent of official time use by 10 selected agencies; (2) assesses OPM's cost estimate for official time; and (3) examines OPM's reporting on official time. GAO obtained usage data from agencies and OPM's annual reports. For this study, GAO selected 10 agencies (National Science Foundation, Railroad Retirement Board, Social Security Administration, and the Departments of Commerce, Health and Human Services, Homeland Security, Labor, Treasury, Transportation, and Veterans Affairs) representing 47 percent of BU employees covered by OPM's report. GAO's selection was based on factors such as agency size, number of BU employees, and official time rate. The 10 agencies GAO reviewed reported using 2.5 million official time hours in fiscal year 2013 compared to about 2 million hours in fiscal year 2006. Although the total number of hours charged increased by 25 percent, 7 of the 10 selected agencies reported lower official time rates in fiscal year 2013 than in fiscal year 2006. Three agencies reported increased official time rates over the same period. Official time rates indicate the number of official time hours expended per bargaining unit (BU) employee and allow for meaningful comparisons over time. Declines in official time rates per BU employee ranged from about 30 minutes or less at several agencies to 2-1/2 fewer hours per BU employee at one agency. The Office of Personnel Management (OPM) attributed changes in the number of hours, in part, to changes in the number of BU employees and the amount of collective bargaining negotiations. In total for fiscal year 2013, the 10 selected agencies reported that less than 2 percent of employees charged official time. During the same year, 8 of the 10 agencies reported having employees who charged 100 percent of their duty time to official time, a total of 386 employees combined. Two agencies reported having no employees who charged 100 percent official time in fiscal year 2013. OPM has historically estimated annual official time costs using a simple computation: multiplying each agency's average salary for BU employees covered by official time activities, as reported in its Enterprise Human Resources Integration (EHRI) database, by the agency's total reported official time hours. GAO computed its own cost estimate using an alternative methodology that used the actual salary data of BU employees who in fact charged official time, multiplying each individual's salary by the official time hours that individual reported using. GAO computed this cost estimate for the 6 of the 10 selected agencies that report through EHRI. GAO found that its cost estimate for these 6 agencies was about $5 million more than the estimate using OPM's methodology ($61 million versus $56 million, or a difference of about 9 percent). Further, cost estimates using GAO's methodology at 4 of the 6 agencies were higher by 15 percent or more than the estimates using OPM's methodology.
A government-wide cost estimate could be higher or lower if this methodology were applied to all agencies. OPM said reporting on official time is not a priority at this time and that it has used the same methodology for preparing its cost estimate since fiscal year 2002. Use of other methodologies may result in a more representative estimate of actual cost. OPM issues reports on official time to assist agencies with ensuring accountability in labor-management relations. It reports on official time usage government-wide. OPM asks agencies to verify data that OPM obtains through its EHRI database. According to OPM, at least half of the about 50 agencies that report official time data through EHRI report differences with the EHRI data and provide revised official time data to OPM. While OPM reports the corrected data, it does not follow up with agencies to determine the source of data differences. Its guidelines state the importance of pursuing high-quality data and reliable information on program costs. By not following up with agencies on data differences, OPM may be missing an opportunity to improve data quality on agency reporting through EHRI and enable a less labor-intensive and more efficient process. GAO recommends, among other things, that OPM (1) consider other approaches to developing its cost estimate and (2) work with agencies to identify opportunities to increase efficiency of data collection and reporting through EHRI. OPM partially concurred but raised questions about implementation costs and limits to its authority. GAO continues to believe the recommendations are valid.
The FBI was founded in 1908 to serve as the primary investigative bureau of the Department of Justice. Its mission includes upholding the law by investigating serious federal crimes; protecting the nation from foreign intelligence and terrorist threats; providing leadership and assistance to federal, state, local, and international law enforcement agencies; and being responsive to the public in the performance of these duties. Approximately 11,000 special agents and 16,000 professional support personnel are located at the bureau’s Washington, D.C., headquarters and at more than 400 offices throughout the United States and 44 offices in foreign countries. Mission responsibilities at the bureau are divided among five major organizational components: Criminal Investigations, Law Enforcement Services, Counterterrorism and Counterintelligence, Intelligence, and Administration. Criminal Investigations, for example, investigates serious federal crimes, including those associated with organized crime, violent offenses, white-collar crime, government and business corruption, and civil rights infractions. It also probes federal statutory violations involving exploitation of the Internet and computer systems for criminal, foreign intelligence, and terrorism purposes. (The major components and their associated mission responsibilities are shown in table 1.) Each component is headed by an Executive Assistant Director who reports to the Deputy Director, who in turn reports to the Director. To execute its mission responsibilities, the FBI relies on the use of IT. For example, it develops and maintains computerized IT systems such as the Combined DNA Index System to support forensic examinations, the Digital Collection System to electronically collect information on known and suspected terrorists and criminals, and the National Crime Information Center and the Integrated Automated Fingerprint Identification System to help state and local law enforcement agencies identify criminals. According to FBI estimates, the bureau manages hundreds of systems, networks, databases, applications, and associated tools such as these at an average annual cost of about $800 million. Several prior reviews of the FBI’s existing IT environment have revealed that it is antiquated and not integrated. Specifically, the Department of Justice Inspector General reported that as of September 2000, the FBI had over 13,000 desktop computers that were 4 to 8 years old and could not run basic software packages. Moreover, it reported that some communications networks were 12 years old and obsolete, and that many end-user applications existed that were neither Web-enabled nor user-friendly. In addition, a December 2001 review initiated by the Department of Justice found that FBI’s IT environment was disparate. In particular, it identified 234 nonintegrated (“stove-piped”) applications, residing on 187 different servers, each of which had its own unique databases and did not share information with other applications or with other government agencies. Moreover, in June 2002, we reported that IT has been a long-standing problem for the bureau, involving outdated hardware, outdated software, and the lack of a fully functional E-mail system. We also reported that these deficiencies served to significantly hamper the FBI’s ability to share important and time-sensitive information internally and externally with other intelligence and law enforcement agencies. 
Following the terrorist attacks of September 11, 2001, the FBI refocused its efforts to investigate the events and to detect and prevent possible future attacks. To do this, the bureau changed its priorities and accelerated modernization of its IT systems. Collectively, the FBI’s many modernization efforts involve 51 initiatives that the FBI reported will cost about $1.5 billion between fiscal years 2002 and 2004. For example, the Trilogy project, which is to introduce new systems infrastructure and applications, includes establishing an enterprisewide network to enable communications between hundreds of FBI locations domestically and abroad, upgrading 20,000 desktop computers, and providing 2,400 printers and 1,200 scanners. In addition, a new investigative data warehousing initiative called Secure Counterterrorism Operational Prototype Environment is to (1) aggregate voluminous counterterrorism files obtained from both internal and external sources and (2) acquire analytical capabilities to improve the FBI’s ability to analyze these files. Another initiative, called the FBI Administrative Support System, is to integrate the bureau’s financial management and administrative systems with the Department of Justice’s new financial management system. Beyond the scope and size of the FBI’s modernization effort is the need to ensure that the modernized systems effectively support information sharing within the bureau and among its law enforcement and intelligence community partners. This means that the modernized FBI systems will, in many cases, have to interface with existing (legacy) systems to obtain data to accomplish their functions, which bureau officials said will be challenging, given the nonstandard and disparate nature of the existing IT environment. Moreover, bureau staff will have to be trained on the new systems and business processes modified to accommodate their use. The development, maintenance, and implementation of enterprise architectures (EA) are recognized hallmarks of successful public and private organizations and as such are an IT management best practice. EAs are essential to effectively managing large and complex system modernization programs, such as the FBI’s. Our experience with federal agencies has shown that attempting a major modernization effort without a well-defined and enforceable EA results in systems that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. The Congress and the Office of Management and Budget have recognized the importance of agency EAs. The Clinger-Cohen Act, for example, requires that agency Chief Information Officers (CIO) develop, maintain, and facilitate the implementation of architectures as a means of integrating business processes and agency goals with IT. In response to the act, the Office of Management and Budget, in collaboration with us and others, has issued guidance on the development and implementation of these architectures. It has also issued guidance that requires agency investments in information systems to be consistent with agency architectures. An EA is a systematically derived snapshot—in useful models, diagrams, and narrative—of a given entity’s operations (business and systems), including how its operations are performed, what information and technology are used to perform the operations, where the operations are performed, who performs them, and when and why they are performed. 
The architecture describes the entity in both logical terms (e.g., interrelated functions, information needs and flows, work locations, systems, and applications) and technical terms (e.g., hardware, software, data, communications, and security). EAs provide these perspectives for both the entity's current (or "as-is") environment and for its target (or "to-be") environment; they also provide a high-level capital investment roadmap for moving from one environment to the other. Among others, the Office of Management and Budget, the National Institute of Standards and Technology, and the federal CIO Council have issued frameworks that define the scope and content of architectures. For example, the federal CIO Council issued a framework, known as the Federal Enterprise Architecture Framework, in 1999. While the various frameworks differ in their nomenclatures and modeling approaches, they consistently provide for defining an enterprise's operations in both logical and technical terms and for providing these perspectives for both the "as-is" and "to-be" environments, as well as the investment roadmap. Managed properly, an enterprise architecture can clarify and help optimize the interdependencies and relationships among a given entity's business operations and the underlying systems and technical infrastructure that support these operations. Over the past few years, several reviews related to the FBI's management of its IT have focused on enterprise architecture efforts and needs. For example, in July 2001, the Department of Justice hired a consulting firm to review the FBI's IT management. Among other things, the consultant recommended that the bureau develop a comprehensive EA to help reduce the proliferation of disparate, noncommunicating applications. The next year, in February 2002, we reported as part of a governmentwide survey of the state of EA maturity that the FBI was one of a number of federal agencies that were not effectively managing their architecture efforts, and we made recommendations to the Office of Management and Budget for advancing the state of architecture maturity across the federal government. In this report, we noted that while the FBI was attempting to lay the management foundation for developing an architecture, the bureau had not yet established certain basic management structures and controls, such as a steering committee or group with responsibility for directing and overseeing the development of the architecture. Later, our June 2002 testimony recommended that the FBI significantly upgrade its IT management capabilities, including developing an architecture, in order to successfully change its mission and effectively transform itself. Subsequently, in December 2002, the Department of Justice Inspector General reported that the FBI needed to complete an architecture to complement its IT investment management processes. According to guidance published by the federal CIO Council, effective architecture management consists of a number of key practices and conditions (e.g., establishing a governance structure, developing policy, defining management plans, and developing and issuing an architecture). In April 2003, we published a maturity framework that arranges these key practices and conditions (i.e., core elements) of the council's guide into five hierarchical stages, with Stage 1 representing the least mature and Stage 5 the most mature.
The framework provides an explicit benchmark for gauging the effectiveness of EA management and provides a roadmap for making improvements. Each of the five stages is described below.

1. Creating EA awareness. The organization does not have plans to develop and use an architecture, or it has plans that do not demonstrate an awareness of the value of having and using an architecture. While Stage 1 agencies may have initiated some EA activity, these agencies' efforts are ad hoc and unstructured, lack institutional leadership and direction, and do not provide the management foundation necessary for successful EA development.

2. Building the EA management foundation. The organization recognizes that the EA is a corporate asset by vesting accountability for it in an executive body that represents the entire enterprise. At this stage, an organization assigns EA management roles and responsibilities and establishes plans for developing EA products and for measuring program progress and product quality; it also commits the resources necessary for developing an architecture—people, processes, and tools.

3. Developing the EA. The organization focuses on developing architecture products according to the selected framework, methodology, tool, and established management plans. Roles and responsibilities assigned in the previous stage are in place, and resources are being applied to develop actual EA products. The scope of the architecture has been defined to encompass the entire enterprise, whether organization-based or function-based.

4. Completing the EA. The organization has completed its EA products, meaning that the products have been approved by the EA steering committee or an investment review board, and by the CIO. Further, an independent agent has assessed the quality (i.e., completeness and accuracy) of the EA products. Additionally, evolution of the approved products is governed by a written EA maintenance policy approved by the head of the organization.

5. Leveraging the EA to manage change. The organization has secured senior leadership approval of the EA products and has a written institutional policy stating that IT investments must comply with the architecture, unless granted an explicit compliance waiver. Further, decision makers are using the architecture to identify and address ongoing and proposed IT investments that are conflicting, overlapping, not strategically linked, or redundant. Also, the organization tracks and measures EA benefits or return on investment, and adjustments are continuously made to both the EA management process and the EA products.

The FBI has yet to develop an EA, and it does not have the requisite means in place to effectively develop, maintain, and implement one. The state of the bureau's architecture efforts is attributable to the level of management priority and commitment that the bureau has assigned to this effort. Unless this changes, it is unlikely the FBI will produce a complete and useful architecture, and without the architecture, the bureau will be severely challenged in its ability to implement a set of modernized systems that optimally support critical mission needs. An EA is an essential tool for effectively and efficiently engineering business operations (e.g., processes, work locations, and information needs and flows) and defining, implementing, and evolving IT systems in a way that best supports these operations.
As mentioned earlier, an EA provides systematically derived and captured structural descriptions—in useful models, diagrams, tables, and narrative—of how a given entity operates today and how it plans to operate in the future, and it includes a roadmap for transitioning from today to tomorrow. The nature and content of these descriptions vary among organizations depending on the EA framework selected. The FBI has selected the federal CIO Council's Federal Enterprise Architecture Framework as the basis for defining its EA. At the highest level of component content description, the Federal Enterprise Architecture Framework requires an "as-is" architectural description, a "to-be" architectural description, and a transition plan. For the "as-is" and "to-be" descriptions, this framework also requires the following major architecture products: business, information/data, applications, and technical components. The FBI has yet to develop any of these architectural components. In response to our requests for all EA products, FBI officials, including the chief architect and the deputy chief information officer, told us that they do not yet exist. They added that they are currently in the process of developing an inventory of the FBI's existing (legacy) systems, which is a first step toward creating "as-is" architectural descriptions. They also stated that their goal is to develop and issue an initial bureau EA by the fall of 2003. The FBI lacks an architecture largely because it is not treating the development and use of one as a management priority. According to the FBI's chief architect, although the FBI launched its architecture effort 32 months ago, resources allocated to this effort have been limited to about $1 million annually and four staff. In contrast, our research of successful architecture efforts in other federal agencies shows that their resource needs are considerably greater than those that the FBI has committed. Similarly, the Justice Inspector General reported in December 2002 that limited funding and resources contributed to the immature state of the bureau's EA efforts. Additionally, assignment of responsibility and accountability for developing the architecture has not been stable over the last 32 months. For example, the chief architect has changed three times in the past 12 months. As our prior reviews of federal agencies and research on architecture best practices show, attempting to modernize systems without an architecture, as the FBI is doing, increases the risk that large sums of money and much time and effort will be invested in technology solutions that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. In the FBI's case, there are indications that this is occurring. For example, the director of the modernization program management office told us that the office recently assumed responsibility for managing three system modernization initiatives and found that they will require rework in order to be integrated. Such integration—which an EA would have provided for—was not previously factored into their development. To allow for a more coordinated and integrated approach to pursuing its other 48 modernization initiatives, the FBI has started holding informal meetings among top managers to discuss related systems.
However, such meetings are not a sufficient surrogate for an explicitly defined architectural blueprint that provides a commonly understood, accepted frame of reference against which to effectively and efficiently acquire and implement well-integrated systems. Because the task of developing, maintaining, and implementing an EA is an important, complex, and difficult endeavor, doing so effectively and efficiently requires that rigorous, disciplined management practices be adopted. Such practices form the basis of our EA management maturity framework, which specifies by stages the key architecture management structures, processes, and controls that are embodied in federal guidance and best practices. For example, Stage 2 specifies nine key practices or core elements that are necessary to provide the management foundation for successfully launching and sustaining an architecture effort. Five of the nine Stage 2 core elements are described below.

Establish an architecture steering committee representing the enterprise and make the committee responsible for directing, overseeing, or approving the EA. This committee should include executive-level representatives from each line of business, and these representatives should have the authority to commit resources and enforce decisions within their respective organizational units. By establishing this enterprisewide responsibility and accountability, the agency demonstrates its commitment to building the management foundation and obtaining buy-in from across the organization.

Appoint a chief architect who is responsible and accountable for the EA, and who is supported by the EA program office and overseen by the architecture steering committee. The chief architect, in collaboration with the Chief Information Officer, the architecture steering committee, and the organizational head, is instrumental in obtaining organizational buy-in for the EA, including support from the business units, as well as in securing resources to support architecture management functions, such as risk management, configuration management, quality assurance, and security management.

Use an architecture development framework, methodology, and automated tool to develop and maintain the EA. These are important because they provide the means for developing the architecture in a consistent and efficient manner. The framework provides a formal structure for representing the EA, while the methodology is the common set of procedures that the enterprise is to follow in developing the EA products. The automated tool serves as a repository where architectural products are captured, stored, and maintained.

Develop an architecture program management plan. This plan specifies how and when the architecture is to be developed. It includes a detailed work breakdown structure, resource estimates (e.g., funding, staffing, and training), performance measures, and management controls for developing and maintaining the architecture. The plan demonstrates the organization's commitment to managing EA development and maintenance as a formal program.

Allocate adequate resources to the EA effort. An organization needs to have the resources (funding, people, tools, and technology) to establish and effectively manage its architecture. This includes, among other things, identifying and securing adequate funding to support EA activities, hiring and retaining the right people, and selecting and acquiring the right tools and technology to support activities.
Our framework similarly identifies key architecture management practices associated with later stages of EA management maturity. For example, at Stage 3, the stage at which organizations focus on architecture development activities, organizations need to satisfy six core elements. Two of the six are discussed below.

Issue a documented architecture policy, approved by the organization's head, governing the development of the EA. The policy defines the scope of the architecture, including the requirement for a description of the baseline and target architecture, as well as an investment roadmap or sequencing plan specifying the move between the two. This policy is an important means for ensuring enterprisewide commitment to developing an EA and for clearly assigning responsibility for doing so.

Ensure that EA products are under configuration management. This involves ensuring that changes to products are identified, tracked, monitored, documented, reported, and audited. Configuration management maintains the integrity and consistency of products, which is key to enabling effective integration among related products and for ensuring alignment between architecture artifacts.

At Stage 4, during which organizations focus on architecture completion activities, organizations need to satisfy eight core elements. Two of the eight are described below.

Ensure that EA products and management processes undergo independent verification and validation. This core element involves having an independent third party—such as an internal audit function or contractor that is not involved with any of the architecture development activities—verify and validate that the products were developed in accordance with EA processes and product standards. Doing so provides organizations with needed assurance of the quality of the architecture.

Ensure that business, performance, information/data, application/service, and technology descriptions address security. An organization should explicitly and consistently address security in its business, performance, information/data, application/service, and technology EA products. Because security permeates every aspect of an organization's operations, the nature and substance of institutionalized security requirements, controls, and standards should be captured in EA products.

At Stage 5, during which the focus is on architecture maintenance and implementation activities, organizations need to satisfy eight core elements. Two of the eight are described below.

Make EA an integral component of IT investment decision-making processes. Because the roadmap defines the IT systems that an organization plans to invest in as it transitions from the "as-is" to the "to-be" environment, the EA is a critical frame of reference for making IT investment decisions. Using the EA when making such decisions is important because organizations should approve only those investments that move the organization toward the "to-be" environment, as specified in the roadmap.

Measure and report return on EA investment. Like any investment, the EA should produce a return on investment (i.e., a set of benefits), and this return should be measured and reported in relation to costs. Measuring return on investment is important to ensure that expected benefits from the EA are realized and to share this information with executive decision makers, who can then take corrective action to address deviations from expectations.
Effective EA management is generally not achieved until an organization has a completed and approved architecture that is being effectively maintained and implemented, which is equivalent to having satisfied many Stage 4 and 5 core elements. Table 2 summarizes our framework's five stages and the associated core elements for each.

The FBI is currently at Stage 1 of our maturity framework. Of the nine foundational stage core elements (Stage 2), the FBI has fully satisfied one element by designating a chief architect. Additionally, the bureau has partially satisfied two other elements. First, it has established an architecture governance board as its steering committee. However, the bureau has not included all relevant FBI stakeholders on the board, such as representatives from its counterterrorism and counterintelligence organizational component. Second, the bureau has selected the Federal Enterprise Architecture Framework as the framework to guide its architecture development. However, it has not yet selected a development methodology or automated tool (a repository for architectural products). The FBI has not satisfied the six remaining Stage 2 core elements. For example, the bureau has not established a program office. In addition, it has not developed a program management plan that provides for describing (1) the bureau's "as-is" and "to-be" environments, as well as a sequencing plan for transitioning from the "as-is" to the "to-be" and (2) the enterprise in terms of business, data, applications, and technology, including how security will be addressed in each. With respect to Stages 3, 4, and 5, the FBI has not satisfied any of the associated core elements. (The detailed results of our assessment of the FBI's satisfaction of each of the stages and associated core elements are provided in app. II.)

The state of the FBI's EA management maturity is attributable to a lack of management commitment to having and using an architecture and to giving it priority. Indeed, several of the core elements cited above as not being satisfied, such as having EA policies and allocating adequate resources, are indicators of an organization's architectural commitment. According to FBI officials, including the chief architect, EA management has not been an agency priority, and thus has not received needed attention and resources. Without effective EA management structures, processes, and controls, it is unlikely that the bureau will be able to produce a complete and enforceable enterprise architecture and thus be able to implement modernized systems in a way that minimizes overlap and duplication and maximizes integration and mission support.

The bureau's ongoing and planned system modernization efforts are at risk of not being defined and implemented in a way that best supports institutional mission needs and operations. Effectively mitigating this risk will require swift development and use of a modernization blueprint, or enterprise architecture; up to now, the FBI has not adequately demonstrated a commitment to developing such an architecture. In reversing this pattern, it is important that architecture development and use be made an agency priority, and that it be managed in a way that satisfies the practices embodied in our architecture management maturity framework. To do less will continue to expose the bureau's system modernization efforts, and ultimately the effectiveness and efficiency of its mission performance, to unnecessary risk.
We recommend that the FBI Director immediately designate EA development, maintenance, and implementation as an agency priority and manage it as such. To this end, we recommend that the Director ensure that appropriate steps are taken to develop, maintain, and implement an EA in a manner consistent with our architecture management framework. This includes first laying an effective EA management foundation by (1) ensuring that all business partners are represented on the architecture governance board; (2) adopting an architecture development methodology and automated tool; (3) establishing an EA program office that is accountable for developing the EA; (4) tasking the program office with developing a management plan that specifies how and when the EA is to be developed and issued; (5) ensuring that the management plan provides for the bureau's "as-is" and "to-be" environments, as well as a sequencing plan for transitioning from the "as-is" to the "to-be"; (6) ensuring that the management plan also describes the enterprise in terms of business, data, applications, and technology; (7) ensuring that the plan also calls for describing the security related to the business, data, and technology; (8) ensuring that the plan establishes metrics for measuring EA progress, quality, compliance, and return on investment; and (9) allocating the necessary funding and personnel to EA activities.

Next, we recommend that the Director ensure that steps to develop the architecture products include (1) establishing a written and approved policy for EA development; (2) placing EA products under configuration management; (3) ensuring that EA products describe the enterprise's business, as well as the data, applications, and technology that support it; (4) ensuring that EA products describe the "as-is" environment, the "to-be" environment, and a sequencing plan; (5) ensuring that business, performance, data, application, and technology descriptions address security; and (6) ensuring that progress against EA plans is measured and reported.

In addition, we recommend that the Director ensure that steps to complete architecture products include (1) establishing a written and approved policy for EA maintenance; (2) ensuring that EA products and management processes undergo independent verification and validation; (3) ensuring that EA products describe the enterprise's business and the data, applications, and technology that support it; (4) ensuring that EA products describe the "as-is" environment, the "to-be" environment, and a sequencing plan; (5) ensuring that business, performance, data, application, and technology descriptions address security; (6) ensuring that the Chief Information Officer approves the EA; (7) ensuring that the steering committee and/or the investment review board has approved the current version of the EA; and (8) measuring and reporting on the quality of EA products.

Further, we recommend that the Director ensure that steps taken to use the EA to manage modernization efforts include (1) establishing a written and approved policy for IT investment compliance with EA, (2) establishing processes to formally manage EA changes, (3) ensuring that EA is an integral component of IT investment management processes, (4) ensuring that EA products are periodically updated, (5) ensuring that IT investments comply with the EA, (6) obtaining Director approval of the current EA version, (7) measuring and reporting EA return on investment, and (8) measuring and reporting on EA compliance.
Finally, we recommend that the Director ensure that the bureau develops and implements an agency strategy for mitigating the risks associated with continued investment in modernized systems before it has an EA and controls for implementing it.

We discussed our findings with the FBI's Chief Architect and later transmitted a draft of this report to the bureau on August 22, 2003, for its review and comment, requesting that any comments be provided by September 18, 2003. However, none were provided in time to be included in this printed report.

We are sending copies of this report to the Chairman and Vice Chairman of the Senate Select Committee on Intelligence and the Ranking Minority Member of the House Permanent Select Committee on Intelligence. We are also sending copies to the Attorney General; the Director, FBI; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available without charge on GAO's Web site at http://www.gao.gov. Should you have any questions about matters discussed in this report, please contact me at (202) 512-3439 or by E-mail at [email protected]. Key contributors to this report are listed in appendix III.

To evaluate whether the Federal Bureau of Investigation (FBI) has a modernization blueprint, commonly called an enterprise architecture (EA), to guide and constrain its modernization efforts, we requested that the bureau provide us with all of its EA products. We also interviewed FBI officials, including the chief architect, to verify the status of and plans for developing bureau EA products, the reasons why none had been completed to date, and the effects of proceeding with modernization initiatives without an EA.

To assess whether the FBI was effectively managing its architecture activities, we compared bureau EA management practices to our EA management maturity framework. This framework is based on A Practical Guide to Federal Enterprise Architecture, published by the federal Chief Information Officers (CIO) Council. To do this, we first reviewed bureau EA plans and products, and we interviewed FBI officials to verify and clarify our understanding of bureau EA efforts. Next, we compared the information that we had collected against our EA management maturity framework practices to determine the extent to which the FBI was employing such effective management practices. In addition, we interviewed FBI's chief architect and other bureau officials to determine, among other things, the cause of differences between what is specified in the framework and the condition at the FBI. We also reviewed past FBI information technology (IT) management studies and Department of Justice Inspector General reports to understand the state of FBI management practices, including their strengths and weaknesses, underlying causes for improvements, and open recommendations. Further, we interviewed FBI division officials to understand the extent of their participation in the bureau's architecture efforts. Finally, to verify our findings and validate our assessment, we discussed with the chief architect our analysis of the state of FBI's EA practices against our maturity framework. We performed our work at FBI headquarters in Washington, D.C., from September 2002 until August 2003, in accordance with generally accepted government auditing standards.

Agency is aware of EA: The FBI has acknowledged the need for an EA.

Adequate resources exist: The FBI has allocated four architects and approximately $1 million annually for the development, implementation, and maintenance of its EA.
Committee or group representing the enterprise is responsible for directing, overseeing, or approving EA: The FBI has established the architecture governance board to direct, oversee, and approve the EA. However, not all FBI components are represented on the board.

Program office responsible for EA development and maintenance exists: The FBI does not have a program office responsible for the development, maintenance, or implementation of its EA.

Chief architect exists: The FBI has designated a chief architect.

EA is being developed using a framework, methodology, and an automated tool: The FBI plans to use the Federal Enterprise Architecture Framework. However, FBI officials reported that they are not using a methodology or automated tool.

EA plans call for describing "as-is" environment, "to-be" environment, and sequencing plan: No EA plans exist.

EA plans call for describing the enterprise in terms of business, data, applications, and technology: No plans exist.

EA plans call for business, performance, data, application, and technology descriptions to address security: No plans exist.

EA plans call for developing metrics for measuring EA progress, quality, compliance, and return on investment: No plans exist.

Written/approved policy exists for EA development: The FBI does not have a written and approved policy for EA development.

EA products are under configuration management: The FBI has not developed its EA products; thus no products are under configuration management.

EA products describe or will describe the enterprise's business and the data, applications, and technology that support it: The FBI plans to describe its enterprise's business and the data, applications, and technology that support it. However, no completion date has been established.

EA products describe or will describe the "as-is" environment, the "to-be" environment, and a sequencing plan: The FBI plans to describe its "as-is" and "to-be" environments, as well as a sequencing plan. However, no completion date has been established.

Business, performance, data, application, and technology descriptions address or will address security: No plans exist.

Progress against EA plans is measured and reported: No plans exist.

Written/approved policy exists for EA maintenance: According to FBI officials, there is no written and approved policy for EA maintenance.

EA products and management processes undergo independent verification and validation: The FBI has not developed EA products, and management processes do not undergo independent verification and validation.

EA products describe the enterprise's business and the data, applications, and technology that support it: The FBI has not developed these products.

EA products describe the "as-is" environment, the "to-be" environment, and a transitioning plan: The FBI has not developed these products.

Business, performance, data, application, and technology descriptions address security: No plans exist.

Organization chief information officer has approved EA: There is no approved version of the FBI's EA.

Committee or group representing the enterprise or the investment review board has approved current version of EA: The FBI has not developed an EA.

Quality of EA products is measured and reported: The FBI has not developed an EA.

Written/approved policy exists for IT investment compliance with EA: The FBI has no written and approved policy addressing IT investment compliance with EA.
Process exists to formally manage EA change: No management plans exist.

EA is integral component of IT investment management process: The FBI has not developed an EA.

EA products are periodically updated: The FBI has not developed an EA.

IT investments comply with EA: The FBI has not developed an EA.

Organization head has approved current version of EA: The organization head has not approved the EA.

Return on EA investment is measured and reported: The FBI does not have an EA to determine return on investment.

Compliance with EA is measured and reported: The FBI does not have an EA to measure and report compliance.

In addition to the individual named above, key contributors to this report included Nabajyoti Barkakati, Katherine I. Chu-Hickman, Barbara Collier, Michael Fruitman, David Hinchman, Mary Beth McClanahan, Paula Moore, and Megan Secrest.
The Federal Bureau of Investigation (FBI) is in the process of modernizing its information technology (IT) systems. By replacing much of its 1980s-based technology with modern system applications and a robust technical infrastructure, this modernization is intended to enable the FBI to take an integrated approach, coordinated agencywide, to performing its critical missions, such as federal crime investigation and terrorism prevention. GAO was requested to conduct a series of reviews of the FBI's modernization management. The objective of this first review was to determine whether the FBI has an enterprise architecture to guide and constrain modernization investments.

About 2 years into its ongoing systems modernization efforts, the FBI does not yet have an enterprise architecture. An enterprise architecture is an organizational blueprint that defines, in logical or business terms and in technology terms, how an organization operates today, intends to operate in the future, and intends to invest in technology to transition to this future state. GAO's research has shown that attempting to modernize an IT environment without a well-defined and enforceable enterprise architecture risks, among other things, building systems that do not effectively and efficiently support mission operations and performance. The FBI acknowledges the need for an enterprise architecture and has committed to developing one by the fall of 2003. However, it currently lacks the means for effectively reaching this end. For example, while the bureau did recently designate a chief architect and select an architecture framework to use, it does not yet have an agency architecture policy, an architecture program management plan, or an architecture development methodology, all of which are necessary components of effective architecture management. Given the state of the FBI's enterprise architecture management efforts, the bureau is at Stage 1 of GAO's enterprise architecture management maturity framework. Organizations at Stage 1 are characterized by architecture efforts that are ad hoc and unstructured, lack institutional leadership and direction, and do not provide the management foundation necessary for successful architecture development and use as a tool for informed IT investment decision making. A key for an organization to advance beyond this stage is to treat architecture development, maintenance, and implementation as an institutional management priority, which the FBI has yet to do. To do less will expose the bureau's ongoing and planned modernization efforts to unnecessary risk.
The IQA directed OMB to issue guidelines to federal agencies covered by the Paperwork Reduction Act designed to ensure the "quality, objectivity, utility, and integrity" of information disseminated to the public. The IQA also directed OMB to include in its guidelines requirements for agencies to (1) develop their own information quality guidelines, (2) establish administrative mechanisms for affected persons to seek correction of information that does not comply with OMB's guidelines, and (3) annually report to OMB the number and nature of complaints they receive regarding the accuracy of the information they disseminate.

Prior to the IQA, there were several governmentwide actions aimed at improving agency data. For example, Statistical Policy Directive No. 2, first issued in 1952, required statistical agencies to inform users of conceptual or other limitations of the data, including how the data compare with similar statistics. In 1996, the Federal Committee on Statistical Methodology—an OMB-sponsored interagency committee dedicated to improving the quality of federal statistics—established a subcommittee to review the measurement and reporting of data quality in federal data collection programs. The results of the subcommittee's work were published in a 2001 report that addressed such issues as what information on sources of error federal data collection programs should provide, and how they should provide it. For all federal government information collections, the 1995 amendments to the Paperwork Reduction Act called on federal agencies to manage information resources with the goal of improving "the integrity, quality, and utility of information to all users within and outside the agency."

OMB's IQA guidelines were issued in final form in February 2002. They required agencies subject to the IQA to take such steps as:
issue information quality guidelines designed to ensure the quality, objectivity, utility, and integrity of information disseminated to the public;
establish administrative mechanisms for affected persons to seek correction of information they believe is not in compliance with the guidelines;
report annually to the Director of OMB on the number and nature of complaints received regarding compliance with the guidelines and how the agencies handled those complaints; and
designate an official responsible for ensuring compliance with OMB's guidelines.

The OMB guidelines defined quality as an encompassing term comprising utility, which is the usefulness of the information to its intended users; integrity, which refers to the security of information and its protection from unauthorized access or revision; and objectivity, which addresses both presentation (i.e., whether the information is being presented in an accurate, clear, complete, and unbiased manner) and substance (i.e., whether the information is accurate, reliable, and unbiased). In addition, OMB addresses transparency within the definition of objectivity and utility. As recognized in OMB's guidelines, agencies that disseminate influential scientific, financial, or statistical information must demonstrate a high degree of transparency about data and methods. This transparency is intended to enable an outside party to reproduce the information or to reanalyze an agency's results.
The National Research Council of the National Academies considers transparency a key principle for federal statistical agencies, and stated in a recent report that transparency, which it defines as "an openness about the sources and limitations of the data," is particularly important for instilling credibility and trust among data users and providers. As an agency within USDA, NASS is required to comply with the IQA.

One statistical program administered by NASS is the quinquennial Census of Agriculture. According to NASS, the census provides a detailed picture of U.S. farms and ranches every 5 years and is the only source of uniform, comprehensive agricultural data at the county level. The results are published in 18 reports divided among three categories: Geographic Area Series, Census Quick Stats, and Specialty Products and Special Studies. Users of this information include federal agencies (for program and statistical purposes), farm organizations, businesses, universities, state departments of agriculture, elected representatives, legislative bodies at all levels of government, and academia. The next Census of Agriculture is scheduled for 2007.

Our objectives were to (1) review how NASS met OMB's guidelines covering the IQA and (2) examine the transparency of the documentation behind the Census of Agriculture's processes and products, including the recently completed work on the 2002 Census and the efforts currently underway for the 2007 Census. To achieve both of these objectives, we reviewed OMB's and NASS's information quality guidelines, Census of Agriculture reports, submissions to OMB, and other relevant documents. We also interviewed NASS officials about how NASS conducted the 2002 Census and how it is planning for the 2007 Census. The officials included the NASS Administrator, Associate Administrator, and Deputy Administrator for Programs and Products.

In addition, to evaluate the transparency of Census of Agriculture products, we reviewed eight census reports and the Frequently Asked Questions area of the 2002 Census Web site to determine the extent to which NASS followed its own procedures for ensuring the transparency of its information products. NASS's IQA guidelines define transparency as "a clear description of the methods, data sources, assumptions, outcomes, and related information that allows a data user to understand how an information product was designed and produced." NASS's guidelines state that its survey activities include sample design, questionnaire design, pre-testing, analysis of sampling, and imputation of missing data. However, the guidelines were not clear as to the specific activities to be documented. Consequently, we reviewed the practices employed by such organizations as the National Academies of Sciences, the International Monetary Fund, and the U.S. Census Bureau, and developed a set of 20 practices associated with transparent documentation that encompassed the items NASS laid out in its own guidelines. The practices include such actions as defining data items, discussing sample design, and describing how the content of the survey differs from past iterations (see app. II). We looked for the presence or absence of these practices in 9 out of the 18 census reports and related forms of data that NASS disseminates, and verified the results with a second, independent analysis.
In instances where a report did not include a particular documentation practice, we reviewed whether the report instead informed data users where to obtain this information. We chose these 9 reports because they all stem from the original census data collection, represent different product categories, and were available on the census Web site as of February 1, 2005.

To obtain an external perspective on how NASS processes and products address the IQA guidelines, we interviewed six data users from different types of agricultural and research organizations. We selected these data users from lists of registrants for USDA and NASS outreach meetings within the past 5 years. We selected these six data users because they use information from the census on a regular basis. Moreover, these data users attended the most recent NASS outreach meeting, which specifically addressed the 2002 and 2007 Censuses. Some data users had also provided NASS with feedback on the content of the agricultural census. Their views cannot be projected to the larger population of census data users.

We requested comments on a draft of this report from the Secretary of Agriculture. On September 8, 2005, we received the NASS Administrator's written comments and have reprinted them in appendix I. They are addressed in the Agency Comments and Our Evaluation section of this report.

NASS fulfilled the various procedural responsibilities and reporting requirements under OMB's guidelines. For example, NASS released its own IQA guidelines for public comment on March 27, 2002. NASS officials stated they received no substantive comments on them, and OMB approved the guidelines with only minimal changes. The officials also noted that no revisions have been made since then. Table 1 shows in greater detail how NASS addressed OMB's guidelines.

NASS's IQA guidelines define transparency as "a clear description of the methods, data sources, assumptions, outcomes, and related information that allows a data user to understand how an information product was designed and produced." NASS's guidelines also note that "NASS will make the methods used to produce information as transparent as possible" and that its "internal guidelines call for clear documentation of data and methods used in producing estimates and forecasts. . . ." To assess the extent to which NASS processes help ensure the transparency of the information it publishes, we examined key publications from the 2002 Census of Agriculture. Census reports vary in terms of scope and intended audience (see table 2). On the one hand, the United States Summary and State Data report contains over 100 data tables, an introduction, and four appendices. On the other hand, County Profile reports summarize each county's agricultural situation on two pages. Overall, we assessed eight census reports within three product categories, as well as the Frequently Asked Questions (FAQ) section of the 2002 Census Web site, to determine the extent to which NASS followed its own guidelines for ensuring the transparency of its products. As shown in table 2, the transparency of the data documentation in the reports we reviewed varied between the Geographic Area Series reports—which are the most comprehensive of NASS's products and addressed 15 of the 20 data documentation practices—and the Specialty Products and Special Studies, which, depending on the specific product, addressed no more than 1 of the practices.
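This kind of assessment reduces to a presence-or-absence tally of practices across reports. The sketch below illustrates the idea; the practice and report names are hypothetical stand-ins for illustration only, not NASS's actual 20 practices or the specific reports we reviewed.

```python
# Hypothetical illustration of a presence/absence transparency tally;
# the practice and report names below are invented stand-ins.
PRACTICES = ["defines data items", "discusses sample design",
             "describes content changes", "reports response rates"]

# For each report, record which practices its documentation addresses.
reports = {
    "comprehensive_report": {"defines data items", "discusses sample design",
                             "describes content changes"},
    "county_profile": set(),  # short summary products may address none
}

for name, addressed in reports.items():
    count = sum(1 for practice in PRACTICES if practice in addressed)
    print(f"{name}: addressed {count} of {len(PRACTICES)} practices")
```

Counts produced this way are what populate a summary such as table 2, with one row per report and one score per product category.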
All eight reports and the FAQ Web site lacked a discussion of four documentation practices, including the following:

1. Questionnaire testing. NASS produced a separate, internal report that discusses questionnaire testing in detail; however, publicly available census publications do not address this topic.

2. Limitations of the data. NASS does not discuss data limitations in the census reports we reviewed.

3. Impact of imputations, by item. When a statistical agency receives a report form with missing values, it normally estimates or "imputes" those values based on comparable data sources such as a similar farm operation. Although NASS uses a complex editing and imputation process to estimate missing values, and describes this process in the United States Summary and State Data report appendix, it does not quantify the impact of imputations by item in reports.

4. Whether any of the collected data have been suppressed for data quality reasons. Without information on whether any of the data had been suppressed because of poor quality, data users must assume that all data items collected in the census met agency publication standards and are included in the reports.

Although NASS appropriately recognizes the variation in data user needs by publishing several types of specialized reports, none of the reports we reviewed direct data users where to find either a complete set of documentation or additional documentation. For example, given the short length and summary format of the County Profile reports, it is not surprising that they lack documentation. However, in order for users to assess the quality of the data contained in the reports, it is important for NASS to at least provide links on its Web site or to other publications where users can access definitions, response rates, and other relevant information.

NASS has two methods for handling data correction requests, depending on how they are submitted: a formal approach prescribed by OMB for correction requests filed under IQA, and an informal approach that NASS uses to address correction requests that are not filed under IQA. NASS's informal correction procedures lack transparency because they are not documented and individual cases are not tracked. As a result, we could not determine the nature of these correction requests or whether or how they were addressed.

Consistent with OMB's guidelines, NASS detailed its procedures to request corrections under IQA on its Web site, and posted appropriate Federal Register notices. For example, NASS's Web site explains that to seek a correction under IQA, petitioners must, among other steps: (1) state that their request for correction is being submitted under IQA, (2) clearly identify the information they believe to be in error, and (3) describe which aspects of NASS's IQA guidelines were not followed or were insufficient. According to the instructions posted on its Web site, NASS's IQA procedures are triggered only when petitioners explicitly state they are submitting a correction request under IQA. To date, none have done so. NASS addresses all other correction requests using informal, undocumented procedures that were in place before IQA was enacted. NASS officials explained that such requests are forwarded to the agency official responsible for preparing the report containing the information in question. That official, in turn, determines if the request can be resolved by clarifying the data, or whether a correction is needed.
If a data item needs to be corrected, NASS has a set of procedures for documenting errors and issuing errata reports that are detailed in its Policy and Standards Memorandum No. 38. The memorandum describes the circumstances under which errata reports will be printed, and provides a mechanism for NASS staff to describe the nature of the error, its cause, and the action taken to resolve it. According to the Administrator, Associate Administrator, and other senior NASS officials we interviewed, the requests NASS has handled from the 2002 Census have so far been resolved to the petitioners' satisfaction, and none resulted in any corrections to the data from the 2002 Census. However, because NASS does not document its informal procedures for handling inquiries and data correction requests, and lacks a recordkeeping system to log and track them, NASS could not provide us with firm information on the number of inquiries it has handled, the nature of those inquiries, and whether and how they were addressed. This is not to say that all complaints should follow the same procedures required by the IQA mechanism. For efficiency's sake, it is important for agencies to respond to complaints in accordance with the magnitude of the problem. However, to provide a more complete picture of the questions NASS receives about its data and how those questions were handled, it will be important for NASS to better document its approach for handling correction requests not filed under IQA, and track their disposition.

The 2002 Census of Agriculture was the first in which NASS developed the questionnaire (the 1997 Census of Agriculture was moved from the Census Bureau to NASS after the content had been determined). In doing so, NASS went to great lengths to obtain input from data users on what questions to ask, and evaluated their suggestions using a documented set of criteria. In preparing for the 2007 Census, NASS sought feedback on the questionnaire content from a broader spectrum of data users, in part because NASS solicited suggestions via the Internet. However, unlike the 2002 cycle, the criteria NASS used to assess the feedback were not initially documented, which is contrary to NASS's IQA guidelines. As a result of our review, however, NASS has developed documented criteria similar to those used during the previous census.

Under the Paperwork Reduction Act, agencies must obtain OMB's approval prior to collecting information from the public. As part of this process, agencies must certify to OMB that, among other things, the effort is necessary for the proper performance of agency functions, avoids unnecessary duplication, and reduces burden on small entities. Agencies must also provide an estimate of the burden the information collection would place on respondents. For the 2002 Census, NASS submitted its request for approval—a form called "OMB 83-I"—in August 2001, and OMB approved it in October 2001. NASS estimated that the census would require a cumulative total of more than 1.3 million hours for respondents to complete and would cost them, in terms of their time, in excess of $21 million. OMB's approval process also requires agencies to solicit input from external sources. NASS obtained input on the 2002 Agricultural Census content through a Federal Register notice, meetings with data users, and by contacting federal and state agencies that use census statistics to discuss data needs. Likewise, NASS is obtaining input on the content of the 2007 Census through a variety of channels.
According to an agency official, the process began around June 2004, when NASS began releasing publications from the 2002 Census. NASS sent an evaluation form to its state offices requesting feedback on the census, including their suggestions for changing the content. NASS also asked the state offices to identify users from whom it could obtain additional feedback. NASS solicited further input by reaching out to data users within USDA and other federal agencies, querying organizations included in a list of "typical" data users maintained by NASS's Marketing and Information Services Office, and holding periodic regional meetings with data users. NASS also has a "hot button" on its Web site where visitors are asked what items, if any, should be added or deleted from the census. In all, NASS obtained input on the 2007 Census through 10 distinct conduits. Moreover, compared to the process used to develop the content of the 2002 Census, its 2007 efforts were open to a wider spectrum of customers, and involved more direct contact with data users during the planning phase. Indeed, as shown in table 3, NASS's outreach via the Internet, regional meetings, and queries to data users was over and above the steps it took when developing the 2002 Census. This openness was reflected in the comments of the six data users we interviewed. Five of the six users said NASS's approach to eliciting input was adequate, while three of the six had requested new content items for the 2007 Census to better meet the needs of their organizations. The content evaluation process began in December 2004, and NASS is currently testing the questionnaire content. Following any refinements, mail-out of the actual census is scheduled for December 2007.

For both the 2002 and 2007 Census cycles, the solicitation, review, and ultimate determination of the questionnaire content was led by the Census Content Team, a group consisting of experienced NASS statisticians representing different segments of the agency such as livestock, crops, and marketing. The 2002 Content Team used specific, documented criteria to inform its decisions. Specifically, suggestions were assessed according to the following factors, which were also made available to data users:
items directly mandated by Congress or items that had strong congressional interest;
items proposed by other federal agencies where legislation called for that agency to provide data for Congress;
items needed for evaluation of existing federal programs;
items which, if omitted, would result in additional respondent burden and cost for a new survey for other agencies or users;
items required for classification of farms by historical groupings;
items needed for improving coverage in the census; and
items that would provide data on current agricultural issues.

However, the criteria the 2007 Team used to assess input on the questionnaire content were not initially documented. According to agency officials we interviewed, NASS largely relied on professional judgment to evaluate the feedback it received, considering such factors as the need to keep the data comparable to past censuses and not increase the length of the questionnaire. Although a certain amount of professional judgment will invariably be used in making determinations on questionnaire content, the absence of documented assessment criteria is inconsistent with NASS's guidelines.
Indeed, these guidelines note that transparent documentation "allows a data user to understand how an information product was designed and produced." Moreover, without documented criteria, it is not clear whether members of the Content Team are considering the same set of factors, or even if they are weighing those factors in the same manner. According to NASS, the shift in approach stemmed from staff turnover and reassignments of members of the 2002 Team and, as a result, the 2007 Team was not aware of the criteria used in 2002. Our review made the 2007 Team aware of the earlier set of criteria, and the Team has since developed similar documentation. NASS noted that all future content teams will use and update these criteria when developing the content of subsequent censuses. It will be important for NASS to continue with this approach because it is more consistent with its own IQA guidelines, and will also help NASS to do the following:

Ensure the utility and relevance of information. A key principle for federal statistical agencies is to provide information relevant to issues of public policy. However, the nation's information needs are constantly evolving, and it is important for statistical agencies to adapt accordingly. This is particularly true with agriculture, where a variety of factors such as changing technology and agricultural trends can affect what information should be collected. Rigorous content selection criteria could help NASS methodically evaluate the needs of different users, establish priorities, and keep the census synchronized with changing public policy requirements.

Maximize cost-effectiveness and reduce public burden. As with all federal surveys, there are financial and nonfinancial costs to conducting the Census of Agriculture. These costs include the direct expenditures related to planning, implementing, and analyzing the census, as well as disseminating the information. There is also a cost to respondents in terms of the time they take to complete the questionnaire. Additionally, there are opportunity costs in that for every question that is included in the census, another question might need to be excluded so as not to increase the length of the census. Rigorous, consistently applied criteria can help promote cost-effectiveness because they can ensure that only those questions that meet a particular, previously identified need are included in the census. Applying such criteria also helps inform decisions on the appropriate role of the federal government in collecting the data, and whether a particular question might be more appropriately addressed by a different survey, a different government organization, or the private sector.

Maintain credibility. Content selection criteria provide a basis for consistent decision making on what to include in the census and what to exclude. This is especially important for maintaining NASS's credibility given the input it receives from various sources. Without documented criteria, NASS's actions could be perceived as arbitrary or disproportionately swayed by one particular interest or another; with documented criteria, NASS's decisions are more defensible. Further, documented criteria will guard against the loss of institutional memory to the extent there is further turnover in Content Team membership.

NASS satisfied the procedural responsibilities and reporting requirements under OMB's IQA guidelines.
Moreover, to the extent that NASS continues to use the documented criteria it developed to inform future decisions on the content of the Census of Agriculture, it could help establish a closer alignment between the questions included in the census and evolving agricultural policy requirements, resulting in a more cost-effective data collection program. Building on these efforts, the transparency of census data products could be improved with more robust documentation. NASS's procedures for addressing correction requests not filed under IQA could be more transparent as well. More than just a paperwork issue, greater transparency will help enhance NASS's accountability to public data users and increase the credibility of census information.

To help enhance the transparency of the Census of Agriculture's processes and products, we recommend that the Secretary of Agriculture direct NASS to take the following two steps:

1. Ensure that census products fully address NASS's own guidelines for data documentation or at least contain links to such information. The list of 20 documentation practices that we developed, while not necessarily exhaustive, represents sound actions used by other statistical agencies and could form a starting point for NASS.

2. Document and post on NASS's Web site its procedures for handling data correction requests not filed under IQA, and track the disposition of those requests.

The NASS Administrator provided written comments on a draft of this report on September 8, 2005, which are reprinted in appendix I. NASS noted that our "report and recommendations are insightful and will be used to further strengthen the transparency of NASS methods and procedures." In particular, NASS concurred with our finding that the methods and procedures in its specialized reports should be better documented and, consistent with our recommendation, stated that these products "will now provide links to this information." NASS's efforts, if fully implemented, should make it easier for data users to understand how these products were designed and produced, and NASS should be commended for its actions to continually improve its products and better meet the needs of its customers. While NASS's more comprehensive products were better documented, our analysis found that they could also benefit from more robust documentation. Thus, in keeping with our recommendation, it will be important for NASS to ensure that all of its census products—its larger reports and more focused studies—fully address NASS's own guidelines for data documentation.

In commenting on our recommendation for NASS to document and post on its Web site its procedures for handling data correction requests not filed under IQA, NASS concurred with our view that this information would provide it with a better sense of the questions it receives about its data, but added that "a detailed recordkeeping system to log and track every inquiry" would not be the best use of its resources. Instead, NASS plans to "compile a listing of the more common issues" and make them available on its Web site in the form of frequently asked questions. NASS believes this approach would be useful for future planning, as well as provide answers to questions most likely to arise among other data users. As noted in our report, our recommendation stemmed from our finding that NASS could not provide us with information on the number of inquiries not filed under IQA, the characteristics of those inquiries, and how they were addressed.
Although the details remain to be seen, NASS's proposed approach could provide this information and, consistent with the intended outcome of our recommendation, address the need for greater transparency. NASS's efforts will be further strengthened if, consistent with our recommendation, it posts on its Web site its procedures for handling correction requests not filed under IQA.

We will send copies of this report to other interested congressional parties, the Secretary of Agriculture, and the NASS Administrator. Copies will be made available to others on request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

In addition to the contact named above, Robert Goldenkoff, Assistant Director; David Bobruff; Jennifer Cook; Richard Donaldson; Andrea Levine; Robert Parker; John Smale; and Michael Volpe made key contributions to this report.
The Information Quality Act (IQA) required the Office of Management and Budget to issue guidelines for ensuring the quality, objectivity, utility, and integrity of information disseminated by federal agencies. As part of our long-term examination of the quality of federal information, under the Comptroller General's authority, we reviewed how the act was implemented by the National Agricultural Statistics Service (NASS), and assessed the transparency of the documentation supporting its Census of Agriculture. NASS is part of the U.S. Department of Agriculture (USDA).

NASS fulfilled its various procedural responsibilities and reporting requirements under the Office of Management and Budget's (OMB) guidelines for implementing the act. For example, NASS drafted its own implementation guidance, and developed a mechanism allowing affected parties to request the correction of information they believe is of poor quality. As a result of our review, NASS has also taken steps to better document the criteria it uses to evaluate data users' input on the content of the Census of Agriculture. Building on these efforts, better documentation could improve the transparency of census data products. For example, the nine key products from the 2002 Census we examined lacked, among other things, discussions of any data limitations. This is contrary to NASS's own guidelines for ensuring transparency, which stress the importance of describing the methods, data sources, and other items to help users understand how the information was designed and produced. Although NASS complied with OMB's requirement to establish a mechanism under IQA to address requests to correct information, NASS has not documented its approach for handling correction requests not filed under IQA (NASS handles these correction requests using an existing, informal method). Agency officials told us that data users have been satisfied with the way NASS had responded to these requests. However, because NASS does not document its informal procedures for handling correction requests and lacks a recordkeeping system to log and track them, NASS could not provide us with specific data on the number of such requests it has handled, the nature of those requests, and whether and how they were addressed.
In December 2012, we reported on Border Patrol's evolving approach for deploying agents along the southwest border. In that report, we found that Border Patrol's 2004 Strategy provided for increasing resources and deploying these resources using an approach that provided for several layers of Border Patrol agents at the immediate border and in other areas 100 miles or more away from the border (referred to as defense in depth). According to the CBP officials we interviewed for our report, as resources increased, Border Patrol sought to move enforcement closer to the border over time to better position the agency to ensure the arrest of those trying to enter the country illegally. Additionally, headquarters and field officials said station supervisors determined (1) whether to deploy agents in border zones or interior zones, and (2) the types of enforcement or nonenforcement activities agents were to perform. Similarly, Border Patrol officials from the five sectors we visited stated that they used similar factors in making deployment decisions, such as intelligence showing the presence of threat across locations, the nature of the threat, and environmental factors including terrain and weather.

We reported in December 2012 on Border Patrol data from fiscal year 2011 that showed how agent workdays were scheduled, and we found differences across sectors in the percentage of agent workdays scheduled for border zones and interior zones and across enforcement and nonenforcement activities. Specifically, we found that while Tucson sector scheduled 43 percent of agent workdays to border zones in fiscal year 2011, agent workdays scheduled for border zones by other southwest border sectors ranged from 26 percent in the Yuma sector to 53 percent in the El Centro sector. Our analysis also showed that the percentage of agent workdays scheduled for enforcement, as compared with nonenforcement, activities ranged from 66 percent in Yuma sector to 81 percent in Big Bend sector. Border Patrol officials we interviewed attributed the variation in scheduling border zone deployment in fiscal year 2011 to differences in geographical factors among the southwest border sectors—such as varying topography, ingress and egress routes, and land access issues—and structural factors, such as technology and infrastructure deployments, and stated that these factors affect how sectors operate and may preclude closer deployment to the border. Additionally, we found that many southwest border sectors have interior stations that are responsible for operations at some distance from the border, such as at interior checkpoints generally located 25 miles or more from the border, which could have affected their percentage of agent workdays scheduled for border zones. We have planned work to assess Border Patrol deployment and management of agents across the southwest border beginning later this year.

We also reported in December 2012 that Border Patrol sector management used changes in various data over time to help inform assessment of its efforts to secure the border against the threats of illegal migration, smuggling of drugs and other contraband, and terrorism. These data showed changes in the (1) percentage of estimated known illegal entrants who are apprehended, (2) number of seizures of drugs and other contraband, and (3) number of apprehensions of persons from countries at an increased risk of sponsoring terrorism.
In addition, apprehension and seizure data could be analyzed in terms of where they occurred relative to distance from the border as an indicator of progress in Border Patrol enforcement efforts. Border Patrol officials at sectors we visited, and our review of fiscal years 2010 and 2012 sector operational assessments, indicated that sectors historically used these types of data to inform tactical deployment of personnel and technology to address cross-border threats. Our analysis showed that in most southwest border sectors less than half of Border Patrol's apprehensions and seizures were made within five miles of the border in fiscal year 2011. In Tucson sector, for example, 47 percent of Border Patrol's apprehensions of illegal entrants, 38 percent of the drugs and contraband seizures, and 8 percent of the apprehensions of aliens from special interest countries were within five miles of the border. However, our analysis also showed that Border Patrol had moved overall enforcement efforts closer to the border since the prior fiscal year.

Further, we reported that Border Patrol sectors and stations tracked changes in their overall effectiveness as a tool to determine if the appropriate mix and placement of personnel and assets were being deployed and used effectively and efficiently, according to officials from Border Patrol headquarters. Border Patrol calculated an overall effectiveness rate using a formula in which it added the number of apprehensions and "turn backs" in a specific sector and divided this total by the total estimated known illegal entries—determined by adding the number of apprehensions, turn backs, and "got aways" for the sector. Border Patrol views its border security efforts as increasing in effectiveness if the number of turn backs as a percentage of estimated known illegal entries has increased and the number of got aways as a percentage of estimated known illegal entries has decreased.

In our December 2012 report, we analyzed apprehension, turn back, and got away data from fiscal years 2006 through 2011 for the Tucson sector and found that while apprehensions remained fairly constant at about 60 percent of estimated known illegal entries, the percentage of reported turn backs increased from about 5 percent to about 23 percent, while the percentage of reported got aways decreased from about 33 percent to about 13 percent. As a result of these changes in the mix of turn backs and got aways, our analysis of Border Patrol data using Border Patrol methodology for our report showed that the enforcement effort, or the overall effectiveness rate for Tucson sector, improved 20 percentage points from fiscal year 2006 to fiscal year 2011, from 67 percent to 87 percent. Border Patrol data showed that the effectiveness rate for eight of the nine sectors on the southwest border also improved from fiscal years 2006 through 2011, using Border Patrol methodology.

At the time of our review in 2012, Border Patrol headquarters officials said that differences in how sectors defined, collected, and reported turn back and got away data used to calculate the overall effectiveness rate precluded comparing performance results across sectors. They stated that each Border Patrol sector decided how it would collect and report turn back and got away data, and as a result, practices for collecting and reporting the data varied across sectors and stations based on differences in agent experience and judgment, resources, and terrain.
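To make the arithmetic of Border Patrol's overall effectiveness rate concrete, the minimal sketch below applies the formula described above to hypothetical counts; the counts are invented solely to mirror the approximate Tucson sector percentages cited in this statement and are not Border Patrol's actual data.

```python
def effectiveness_rate(apprehensions: int, turn_backs: int, got_aways: int) -> float:
    """Border Patrol's overall effectiveness rate: (apprehensions + turn backs)
    divided by estimated known illegal entries, where entries are the sum of
    apprehensions, turn backs, and got aways."""
    entries = apprehensions + turn_backs + got_aways
    return (apprehensions + turn_backs) / entries

# Hypothetical counts mirroring the approximate Tucson sector mix:
# FY2006: ~60% apprehensions, ~5% turn backs, ~33% got aways of known entries
# FY2011: ~60% apprehensions, ~23% turn backs, ~13% got aways
fy2006 = effectiveness_rate(apprehensions=60_000, turn_backs=5_000, got_aways=33_000)
fy2011 = effectiveness_rate(apprehensions=60_000, turn_backs=23_000, got_aways=13_000)
print(f"FY2006: {fy2006:.0%}  FY2011: {fy2011:.0%}")
# Prints roughly 66% and 86%, in line with the "about 67 percent" and
# "about 87 percent" rates computed from Border Patrol's actual data.
```

As the sketch shows, the rate rises when turn backs replace got aways in the mix, even if apprehensions hold constant, which is why the shift in these two categories drove the 20 percentage point improvement reported for Tucson sector.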
The ability to obtain accurate or consistent turn back and got away data depends on various factors, such as terrain and weather, according to Border Patrol officials. As a result of these data limitations, Border Patrol headquarters officials said that while they considered turn back and got away data sufficiently reliable to assess each sector's progress toward border security and to inform sector decisions regarding resource deployment, they did not consider the data sufficiently reliable to compare—or externally report—results across sectors at the time we issued our report in December 2012. Border Patrol headquarters officials issued guidance in September 2012 to provide a more consistent, standardized approach for the collection and reporting of turn back and got away data by Border Patrol sectors. As we reported in 2012, Border Patrol officials expected that once the guidance was implemented, data reliability would improve. Since that time, DHS has reported the effectiveness rate in its Fiscal Year 2015-2017 Annual Performance Report as a performance measure and method to publicly report results of its border security efforts on the southwest border.

In March 2014 and April 2015, we reported that CBP had made progress in deploying programs under the Arizona Border Surveillance Technology Plan, but that CBP could take additional action to strengthen its management of the Plan and its various programs. The Plan's acquisition programs include fixed and mobile surveillance systems, agent portable devices, and ground sensors. Specifically, we reported in March 2014 that CBP had identified the mission benefits of its surveillance technologies, as we recommended in November 2011. CBP had identified mission benefits of surveillance technologies to be deployed under the Plan, such as improved situational awareness and agent safety. However, we also reported that the agency had not developed key attributes for performance metrics for all surveillance technology to be deployed as part of the Plan, as we recommended in November 2011. As of May 2015, CBP had identified a set of potential key attributes for performance metrics for all technologies to be deployed under the Plan; however, CBP officials stated that this set of measures was under review as the agency continued to refine the measures to better inform the nature of the contributions and impacts of surveillance technology on its border security mission. While CBP had yet to apply these measures, CBP had established a time line for developing performance measures for each technology.

In November 2014, CBP officials stated that baselines for each performance measure were to be developed, at which time the agency was to begin using the data to evaluate the individual and collective contributions of specific technology assets deployed under the Plan. Moreover, CBP plans to establish a tool by the end of fiscal year 2016 that explains the qualitative and quantitative impacts of technology and tactical infrastructure on situational awareness in specific areas of the border environment. While these are positive steps, until CBP completes its efforts to address our recommendation and fully develop and apply key attributes for performance metrics for all technologies to be deployed under the Plan, it will not be able to fully assess its progress in implementing the Plan and determine when mission benefits have been fully realized.
Further, in March 2014, we found that CBP did not capture complete data on the contributions of these technologies, data that, in combination with other relevant performance metrics or indicators, could be used to better determine the contributions of CBP's surveillance technologies and inform resource allocation decisions. Although CBP had a field within its Enforcement Integrated Database for recording whether technological assets, such as SBInet surveillance towers, and nontechnological assets, such as canine teams, assisted or contributed to the apprehension of illegal entrants and the seizure of drugs and other contraband, Border Patrol agents were not required to record these data, according to CBP officials. This limited CBP's ability to collect, track, and analyze available data on asset assists to help monitor the contribution of surveillance technologies, including its SBInet system, to Border Patrol apprehensions and seizures and to inform resource allocation decisions. We made two recommendations: (1) that CBP require data on asset assists to be recorded and tracked within its database, and (2) that, once these data were recorded and tracked, CBP analyze available data on apprehensions and technological assists, in combination with other relevant performance metrics or indicators, as appropriate, to determine the contribution of surveillance technologies to CBP's border security efforts. CBP concurred with our recommendations and has implemented one of them. In June 2014, in response to our first recommendation, CBP issued guidance informing Border Patrol agents that the asset assist data field within its database was now mandatory; agents are required to enter any assisting surveillance technology or other equipment before proceeding. As we testified in May 2015, to fully address our second recommendation, CBP needs to analyze data on apprehensions and seizures, in combination with other relevant performance metrics, to determine the contribution of surveillance technologies to its border security mission.

In addition, with regard to fencing and other tactical infrastructure, CBP reported that from fiscal year 2005 through May 2015, the total miles of vehicle and pedestrian fencing along the nearly 2,000-mile U.S.-Mexico border increased from approximately 120 miles to 652 miles. With the completion of the new fencing and other tactical infrastructure, DHS is now responsible for maintaining this infrastructure, including repairing breached sections of fencing.

We have previously reported on CBP's efforts to assess the impact of tactical infrastructure on border security. Specifically, in our September 2009 and May 2010 reports, we found that CBP had not accounted for the impact of its investment in border fencing and infrastructure on border security. CBP had reported an increase in control of southwest border miles but could not account separately for the impact of the border fencing and other infrastructure. In September 2009, we recommended that CBP determine the contribution of border fencing and other infrastructure to border security. DHS concurred with our recommendation, and in response, CBP contracted with the Homeland Security Studies and Analysis Institute to conduct an analysis of the impact of tactical infrastructure on border security. We have ongoing work for this subcommittee and others assessing CBP's deployment and management of tactical infrastructure, and we plan to report on the results of this work later this year.
Our March 2012 report on AMO assets highlighted several areas the agency could address to better ensure that the mix and placement of assets is effective and efficient. These areas included (1) documentation clearly linking deployment decisions to mission needs and threats, (2) documentation of the assessments and analyses used to support decisions on the mix and placement of assets, and (3) consideration of how the deployment of border technology will affect customer requirements for air and marine assets across locations.

Specifically, we found that AMO had not documented significant events, such as its analyses to support its asset mix and placement across locations, and as a result lacked a record to help demonstrate that its decisions to allocate assets were the most effective ones in fulfilling customer needs and addressing threats, among other things. While AMO's Fiscal Year 2010 Aircraft Deployment Plan stated that AMO deployed aircraft and maritime vessels to ensure its forces were positioned to best meet the needs of CBP field commanders and respond to the latest intelligence on emerging threats, AMO did not have documentation that clearly linked the deployment decisions in the plan to mission needs or threats. We also found that AMO did not provide higher rates of support to locations Border Patrol identified as high priority, indicating that a reassessment of AMO's resource mix and placement could help ensure that it meets mission needs, addresses threats, and mitigates risk. AMO officials stated that while they deployed a majority of assets to high-priority sectors, budgetary constraints, other national priorities, and the need to maintain a presence across border locations limited overall increases in assets and the amount of assets they could redeploy from lower-priority sectors. While we recognized AMO's resource constraints, the agency did not have documentation of analyses assessing the impact of these constraints and whether actions could be taken to improve the mix and placement of assets within them. Thus, it was unclear to what extent the deployment of AMO assets and personnel, including those assigned to the southwest border, made the most effective use of AMO's constrained assets to meet mission needs and address threats.

We further found in March 2012 that AMO did not document the assessments and analyses supporting the agency's decisions on the mix and placement of assets. DHS's 2005 aviation management directive requires operating entities to use their aircraft in the most cost-effective way to meet requirements. Although AMO officials stated that they factored in cost-effectiveness considerations, AMO did not have documentation of the analyses it performed to make these decisions. AMO headquarters officials stated that they made deployment decisions during formal discussions and ongoing meetings in close collaboration with Border Patrol and considered a range of factors, such as operational capability, mission priorities, and threats. AMO officials said that while they generally documented final decisions affecting the mix and placement of assets, they did not document the assessments and analyses supporting these decisions.

Finally, we reported that CBP and DHS had ongoing interagency efforts under way to increase air and marine domain awareness across U.S. borders through the deployment of technology that may decrease Border Patrol's use of AMO assets for air and marine domain awareness.
However, at the time of our review, AMO was not planning to assess how technology capabilities could affect the mix and placement of air and marine assets until the technology had been deployed. Specifically, we concluded that Border Patrol, CBP, and DHS had strategic and technological initiatives under way that would likely affect customer requirements for air and marine support and the mix and placement of assets across locations, yet AMO officials stated that they would consider how technology capabilities affect the mix and placement of air and marine assets only once such technology had been deployed.

To address the findings of our March 2012 report, we recommended that CBP, to the extent that benefits outweigh the costs, reassess the mix and placement of AMO's air and marine assets, taking into account mission requirements, performance results, and anticipated CBP strategic and technological changes. DHS concurred with this recommendation and responded that it planned to address some of these actions as part of the Fiscal Year 2012-2013 Aircraft Deployment Plan. In September 2014, CBP provided us this Plan, which was approved in May 2012, and updated information on its subsequent efforts to address this recommendation, including a description of actions taken to reassess the mix and placement of AMO's assets. According to AMO, after consulting with DHS and CBP officials and receiving approval from the Secretary of Homeland Security in May 2013, the office began a realignment of personnel, aircraft, and vessels from the northern border to the southern border based on its evaluation of the utilization and efficiency of current assets and the funding available to accomplish the transfers. In September 2015, AMO officials provided us with data and analysis documenting that personnel, aircraft, and vessels were in the process of being moved to support the realignment of assets, which addressed the intent of our recommendation.

In December 2012, we reported on Border Patrol's efforts to develop performance goals and measures for assessing the progress of its efforts to secure the border between ports of entry and for informing the identification and allocation of resources needed to secure the border. We found that through the end of fiscal year 2010, DHS used Border Patrol's goal and performance measure of operational control as the publicly reported DHS goal and outcome measure for border security and to assess resource needs to accomplish this goal. We had previously testified in February 2011 that at the time this goal and measure was discontinued, at the end of fiscal year 2010, Border Patrol reported having achieved varying levels of operational control for 873 miles (44 percent) of the nearly 2,000 southwest border miles. Border Patrol officials attributed the uneven progress across sectors to multiple factors, including terrain, transportation infrastructure on both sides of the border, and a need to prioritize resource deployment to sectors deemed to have a greater risk of illegal activity. DHS transitioned from using operational control as its goal and outcome measure for border security in its Fiscal Year 2010-2012 Annual Performance Report.
Specifically, citing a need to establish a new border security goal and measure that reflected a more quantitative methodology as well as the department's evolving vision for border control, DHS established the interim performance goal and measure of the number of apprehensions between the land border ports of entry until a new border control goal and measure could be developed. We testified in May 2012 that the interim goal and measure provided information on activity levels but did not inform program results or resource identification and allocation decisions, and that, until new goals and measures were developed, DHS and Congress could therefore experience reduced oversight and accountability. Further, studies commissioned by CBP documented that the number of apprehensions bore little relationship to effectiveness because agency officials did not compare these numbers with the amount of cross-border illegal activity.

In our December 2012 report, we found that Border Patrol was in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between ports of entry and for informing the identification and allocation of resources needed to secure the border, but had not identified milestones and time frames for developing and implementing them. According to Border Patrol officials, establishing milestones and time frames for the development of performance goals and measures was contingent on the development of key elements of the 2012-2016 Strategic Plan, such as a risk assessment tool, and the agency's time frames for implementing these key elements—targeted for fiscal years 2013 and 2014—were subject to change. Specifically, under the 2012-2016 Strategic Plan, Border Patrol planned to continuously evaluate border security—and resource needs—by comparing changes in risk levels against available resources across border locations. Border Patrol officials stated that the agency was identifying performance goals and measures that could be linked to the new risk assessment tools, that would show progress and status in securing the border between ports of entry, and that would help determine needed resources.

We recommended in our December 2012 report that Border Patrol establish milestones and time frames for developing (1) a performance goal, or goals, for border security between the ports of entry that defines how border security is to be measured and (2) a performance measure, or measures—linked to a performance goal or goals—for assessing progress made in securing the border between ports of entry and informing resource identification and allocation efforts. DHS agreed with these recommendations and, since our December 2012 report, has added performance measures for border security to its Annual Performance Report. In its Fiscal Year 2015-2017 Annual Performance Report, these measures included the percentage of people apprehended multiple times on the southwest border and the rate of effectiveness in responding to illegal activity.
Further, as part of its efforts to revise the Border Patrol strategic plan, Border Patrol has developed outcome measures for each of 14 objectives, and according to officials, Border Patrol continues to work toward the development of goals and measures to support its overarching performance goal of low-risk borders. Until these new goals and measures are in place, it is unknown to what extent they will address our past findings and provide DHS and Congress with information on the results of CBP efforts to secure the border between ports of entry and on the extent to which existing resources and capabilities are appropriate and sufficient.

Chairman McSally, Ranking Member Vela, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you or members of the committee may have.

For questions about this statement, please contact Rebecca Gambler at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included David Alexander, Cindy Ayers, Tom Lombardi, Krista Mantsch, Jon Najmi, and Edith Sohna.

Border Security: Progress and Challenges in DHS's Efforts to Implement and Assess Infrastructure and Technology, GAO-15-595T (Washington, D.C.: May 13, 2015).

Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability, GAO-15-171SP (Washington, D.C.: Apr. 22, 2015).

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness, GAO-14-411T (Washington, D.C.: Mar. 12, 2014).

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness, GAO-14-368 (Washington, D.C.: Mar. 4, 2014).

Border Security: Progress and Challenges in DHS Implementation and Assessment Efforts, GAO-13-653T (Washington, D.C.: June 27, 2013).

Border Security: DHS's Progress and Challenges in Securing U.S. Borders, GAO-13-414T (Washington, D.C.: Mar. 14, 2013).

Border Patrol: Goals and Measures Not Yet in Place to Inform Border Security Status and Resource Needs, GAO-13-330T (Washington, D.C.: Feb. 26, 2013).

Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs, GAO-13-25 (Washington, D.C.: Dec. 10, 2012).

Border Patrol Strategy: Progress and Challenges in Implementation and Assessment Efforts, GAO-12-688T (Washington, D.C.: May 8, 2012).

Border Security: Opportunities Exist to Ensure More Effective Use of DHS's Air and Marine Assets, GAO-12-518 (Washington, D.C.: Mar. 30, 2012).

U.S. Customs and Border Protection's Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan, GAO-12-106R (Washington, D.C.: Nov. 17, 2011).

Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding, GAO-12-22 (Washington, D.C.: Nov. 4, 2011).

Border Security: Preliminary Observations on the Status of Key Southwest Border Technology Programs, GAO-11-448T (Washington, D.C.: Mar. 15, 2011).

Border Security: Preliminary Observations on Border Control Measures for the Southwest Border, GAO-11-374T (Washington, D.C.: Feb. 15, 2011).

Secure Border Initiative: DHS Has Faced Challenges Deploying Technology and Fencing Along the Southwest Border, GAO-10-651T (Washington, D.C.: May 4, 2010).
Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed, GAO-09-896 (Washington, D.C.: Sept. 9, 2009).
The southwest border continues to be vulnerable to cross-border illegal activity, with DHS apprehending over 331,000 illegal entrants and making over 14,000 seizures of drugs in fiscal year 2015. DHS has employed a variety of resources to help secure the border, including personnel; technology, such as cameras and sensors; tactical infrastructure, such as fencing and roads; and air and marine assets.

This statement discusses (1) DHS efforts to deploy resources on the southwest border and measure the effectiveness of these resources in securing the border, and (2) DHS efforts to develop performance goals and measures for achieving situational awareness and border security. This statement is based on GAO reports and testimonies issued from September 2009 through May 2015, with selected updates through February 2016 on DHS enforcement efforts and actions to address prior GAO recommendations. To conduct the updates, GAO interviewed agency officials and reviewed related documentation.

U.S. Customs and Border Protection (CBP), within the Department of Homeland Security (DHS), has taken action to deploy various resources—including agents and technology—along the southwest border and to assess those resources' contributions to border security. For example, in December 2012, GAO reported that CBP's Border Patrol scheduled agents for deployment differently across southwest border locations, and that although in most locations less than half of Border Patrol apprehensions were made within five miles of the border in fiscal year 2011, Border Patrol had moved overall enforcement efforts closer to the border since the prior fiscal year. GAO also reported in December 2012 that Border Patrol tracked changes in the effectiveness rate for response to illegal activity across border locations to determine if the appropriate mix and placement of personnel and assets were deployed and used effectively, and that it took steps to address the data quality issues that had precluded comparing performance results across locations at the time of GAO's review. For example, Border Patrol issued guidance in September 2012 for collecting and reporting data with a more standardized and consistent approach. DHS has reported the effectiveness rate as a performance measure in its Fiscal Year 2015-2017 Annual Performance Report.

Further, in March 2014, GAO reported that CBP had made progress in deploying programs under the Arizona Border Surveillance Technology Plan, but that CBP could strengthen its management and assessment of the plan's programs. GAO reported that while CBP had identified mission benefits of technologies to be deployed under the plan, the agency had not developed key attributes for performance metrics to identify the technologies' individual and collective contributions, as GAO had recommended in 2011. GAO also reported in 2014 that CBP officials stated that baselines for each performance measure would be developed and that, by the end of fiscal year 2016, CBP would establish a tool to explain the impact of technology and infrastructure on situational awareness in the border environment. CBP should complete these actions in order to fully assess its progress in implementing the plan and determine when mission benefits have been fully realized.

In December 2012, GAO reported on Border Patrol's efforts to develop performance goals and measures for assessing the progress of efforts to secure the border between ports of entry and informing the identification and allocation of border security resources.
GAO reported that DHS had transitioned from a goal and measure related to the capability to detect, respond to, and address cross-border illegal activity to an interim performance goal and measure of apprehensions between the land border ports of entry beginning in fiscal year 2011. GAO reported that this interim goal and measure did not inform program results or resource identification and allocation decisions, limiting DHS and congressional oversight and accountability. DHS concurred with GAO's recommendation that CBP establish milestones and time frames for the development of border security goals and measures, and Border Patrol continues to work to define a new overarching performance goal for achieving a low-risk border and to develop associated performance measures. CBP should complete these actions in order to fully assess its capabilities and progress in securing the border.

GAO previously made recommendations for DHS to, among other things, (1) strengthen its management of technology plans and programs and (2) establish milestones and time frames for the development of border security goals and measures. DHS generally agreed and has actions under way to address the recommendations.
The U.S. agricultural sector benefits our economy and the health of our nation. However, if not properly managed, agricultural activities can impair the nation's water, air, and soil; disrupt habitat for endangered species; and strain groundwater resources. For example, sediment produced during routine agricultural activities may run off the land and reach surface waters, including rivers and lakes. Sediment can destroy or degrade aquatic habitat and can further impair water quality by transporting into area waters both the pesticides applied to cropland and the nutrients found in fertilizers and animal waste. These and other water quality issues are of concern in a number of U.S. agriculture-producing regions, including the Midwest and along the Mississippi River. Agriculture is also a major user of groundwater and surface water, which has led to water resource concerns across the country, particularly in the West; in 2000, irrigation accounted for 65 percent of the nation's consumption of fresh water. Agricultural production can also impair air quality when wind carries eroded soil, odors, and smoke, and it may lead to the loss of wetlands, which provide wildlife habitat, filter pollutants, retain sediment, and moderate hydrologic extremes.

EQIP is one of a number of USDA conservation programs designed to mitigate agriculture's potentially negative environmental effects. EQIP provides cost-share funds and incentive payments for land used for agricultural production and supports around 190 conservation practices, including constructing facilities to temporarily store animal waste, planting rows of trees or shrubs to reduce wind erosion and provide food for wildlife, and planning the amount, form, placement, and timing of the application of plant nutrients. EQIP is designed to fund conservation practices in a manner that helps the program achieve the following national priorities identified by NRCS: reducing nonpoint source pollution (nutrients, sediment, pesticides, or excess salinity), groundwater contamination, and pollution from point sources (such as concentrated animal feeding operations); conserving groundwater and surface water resources; reducing emissions that contribute to air quality impairment; reducing soil erosion on agricultural land from unacceptable levels; and promoting at-risk species habitat conservation.

The Federal Agriculture Improvement and Reform Act of 1996 created EQIP by combining four existing conservation programs into a single program. The Farm Security and Rural Investment Act of 2002, commonly known as the farm bill, reauthorized EQIP and increased its authorized funding from about $200 million in 1997 to current levels of over $1 billion. The 2002 act required that at least 60 percent of EQIP funds be made available for conservation practices relating to livestock production. In addition, it authorized EQIP funds for two specific conservation purposes: (1) funds for producers to install water conservation practices to improve groundwater and surface water conservation (the Ground and Surface Water Conservation component of EQIP) and (2) funds for water conservation practices in the Klamath Basin located on the California/Oregon border (the Klamath Basin component of EQIP). Annually, NRCS headquarters officials determine the amount of funding each state receives, while state and local NRCS officials decide what conservation practices to fund in their state and local communities.
The total amount of EQIP funding a state receives is the sum of that state's funding across all categories. Table 1 describes the different categories of funding that states received for fiscal year 2006 and NRCS's process for allocating that funding. As the table shows, each category of EQIP funding is allocated to the states using a different process. For the general financial assistance formula, the availability of natural resources accounts for approximately half of the funds allocated, and the presence of environmental concerns or problems accounts for the remainder. Table 2 shows the factors and weights used in the financial assistance formula for fiscal year 2006.

In fiscal year 2006, approximately $652 million was divided among the states through the general financial assistance formula. For example, according to the formula, EQIP funding for nonirrigated cropland (accounting for 3.2 percent of financial assistance) totaled $20.9 million. The state with the most acres of nonirrigated cropland received $1.7 million of the funds associated with this factor, and the state with the fewest acres of nonirrigated cropland received approximately $1,100. A state's total allocation is composed of the funds it receives for each of the 31 factors (the sketch below illustrates this arithmetic for a single factor).

Although about 65 percent of EQIP funds are provided through the general financial assistance formula, other categories of funding can have a significant effect on the total amount of funds an individual state receives. For example, 35 percent of Utah's fiscal year 2006 allocation was from general financial assistance, while the largest category of EQIP funds Utah received—38 percent—was Colorado Salinity funds. Appendix II provides additional information on the fiscal year 2006 funding allocation formulas for the general financial assistance, Ground and Surface Water Conservation, performance incentive bonus, and Klamath Basin funding categories. Figure 1 shows the initial distribution of NRCS's fiscal year 2006 EQIP allocations to the states in November 2005. States had to return any unused funds by June 2006 for redistribution to states with a need for additional funds. Appendix IV describes the amount of funding each state initially received in fiscal year 2006.

NRCS's process for providing EQIP funds to the states is not clearly linked to the program's purpose of optimizing environmental benefits. In particular, NRCS's general financial assistance formula, which accounts for approximately two-thirds of funding provided to the states, does not have a specific, documented rationale for each of the formula's factors and weights. In addition, the financial assistance formula relies on some questionable and outdated data. As a result, NRCS may not be directing EQIP funds to the states with the most significant environmental concerns arising from agricultural production.

Although the 31 factors and weights used in the general financial assistance formula give it an appearance of precision, NRCS does not have a clearly documented rationale for including each factor in the formula or for assigning or modifying each weight. The original EQIP formula was created in 1997 by an interagency task force that modified the formula created for a different conservation program—the Conservation Technical Assistance Program.
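As a concrete illustration of the allocation arithmetic described above, the sketch below works through a single factor. The $652 million pool and the 3.2 percent nonirrigated cropland weight come from the report; the state names and acreage figures are hypothetical and serve only to show the proportional split.

```python
# Illustrative sketch of EQIP general financial assistance allocation
# for one factor. Pool size and factor weight are from the report;
# the state acreage figures below are hypothetical.

TOTAL_POOL = 652_000_000   # fiscal year 2006 general financial assistance
WEIGHT = 0.032             # nonirrigated cropland factor weight

factor_pool = TOTAL_POOL * WEIGHT   # about $20.9 million tied to this factor

state_acres = {"State A": 40_000_000, "State B": 9_000_000, "State C": 21_000}

total_acres = sum(state_acres.values())
for state, acres in state_acres.items():
    share = factor_pool * acres / total_acres  # proportional to acreage share
    print(f"{state}: ${share:,.0f}")

# A state's total allocation sums such shares across all 31 factors.
# Sensitivity: moving 1 percentage point of weight between factors shifts
# about $6.5 million (0.01 * $652 million) between the factor pools.
```

The same proportional logic applies to every factor, which is why the small weight changes discussed below can move millions of dollars between states.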
The task force added and deleted factors and adjusted factor weights so that the EQIP formula better corresponded to the Federal Agriculture Improvement and Reform Act of 1996's requirement that 50 percent of funds be targeted at livestock-related practices. Since the creation of the financial assistance formula, NRCS has periodically modified factors and weights to emphasize different program elements and national priorities, most recently in fiscal year 2004 following the passage of the 2002 Farm Security and Rural Investment Act. Furthermore, NRCS officials stated that they meet annually to review the allocation of funds to states. However, throughout this process, NRCS has not documented the basis for its decisions to modify factors and weights or documented how changes to its formula achieve the program's purpose of optimizing environmental benefits. Thus, it is not always clear whether the formula factors and weights guide funds to the states as effectively as possible.

For example, it is unclear why NRCS includes a factor in the formula that addresses the waste management costs of small animal feeding operations but not a factor that addresses such costs for large operations—large operations can also damage the environment and are eligible for EQIP funding. By not including the costs of the larger operations in its financial assistance formula, some states may not be receiving funds to address their specific environmental concerns. In addition, NRCS has not demonstrated that it has the most appropriate water quality factors in its formula. For example, the formula includes a factor addressing river and stream impairment but no factor for impaired lakes and other bodies of water. Moreover, it is not certain whether the impaired rivers and streams factor results in funds being awarded on the basis of general water quality concerns or of water pollution specifically caused by agricultural production. As a result, it is not certain whether the formula allocates funds as effectively as possible to states with water quality concerns arising from agricultural production.

While the factors in the EQIP general financial assistance formula determine what resource and environmental characteristics are considered when allocating funds, the weights associated with these factors directly affect how much total funding is provided for each factor and, thus, the amount of money each state receives. Factors and weights are key to ensuring that states with the greatest environmental problems receive funding to address those problems. Small differences in the weights of the factors can shift the amount of financial assistance directed at a particular resource concern and, ultimately, the amount of money provided to a state. In 2006, if the weight of any of the 31 factors had increased by 1 percentage point, $6.5 million would have been allocated on the basis of that factor at the expense of one or more other factors. Such a shift could affect the amount of financial assistance received by each state. For example, a 1 percentage point increase in the weight of the specialty cropland factor with a corresponding decrease in the American Indian tribal land factor could result in large changes to the distribution of EQIP general financial assistance.
According to our analysis, the state benefiting most from such a change would receive $2.6 million more (a 7.2 percent increase in that state's general financial assistance), and the state losing the most would receive $2.7 million less (a 13.5 percent decrease in that state's general financial assistance). The potential for the weights to significantly affect the amount of funding a state receives underscores the importance of having a well-founded rationale for assigning them. To date, NRCS has not documented its rationale for choosing the weights.

Some stakeholders we spoke with questioned NRCS's assignment of weights to certain factors in the financial assistance formula because they did not believe the formula adequately reflected the states' environmental priorities. For example, NRCS's general financial assistance formula allocates 6.3 percent of EQIP funds to the states based on factors specifically associated with animal feeding operations. However, states spent proportionally more of their EQIP financial assistance on related practices, which suggests that the weights in the financial assistance formula may not reflect states' priorities. In fiscal year 2005, states spent a total of 11 percent of EQIP financial assistance, or $91.1 million, on one such practice alone—the construction of waste storage facilities for animal feeding operations. (App. VI outlines the practices funded in fiscal year 2005, including other practices to control pollution from animal feeding operations.) More generally, other stakeholders said that, as the program develops, NRCS should give additional weight to factors related to the presence of environmental concerns in a state and place less emphasis on factors related to the natural resources in a state. They believed this reassignment of weights would better ensure that states contending with the most significant environmental problems receive the most funding. Currently, factors related to the presence of environmental concerns account for approximately half of the total funding, while factors related to the availability of natural resources account for the remainder. Factors related to the availability of natural resources provide states that have significant amounts of a particular type of land—such as grazing land or cropland—with more funds, regardless of whether that land is impaired.

Although NRCS has stated that it meets annually to review its allocation of funds to states, officials told us they had not conducted any statistical analysis to examine the influence of the factors on funding outcomes. Statistical analyses can provide information on how the factors in the allocation formula have affected the distribution of funds and thereby inform improvements to program implementation. To better understand the effect of the factors on the allocations to states, we used two types of statistical analysis to assess the effects of the EQIP financial assistance formula on state funding: (1) regression analysis, to show which factors are the most influential in determining funding levels, and (2) factor analysis, to understand how factors can be grouped and identified with program priorities.
Our regression analysis for the fiscal year 2006 funding allocation shows that the factors that were most important in explaining the distribution of general financial assistance to states were acres of fair and poor rangeland, acres of nonfederal grazing lands, livestock animal units, acres of irrigated cropland, acres of American Indian tribal lands, and wind erosion above T (the tolerable rate of soil loss). This analysis suggests that regions of the country with these types of characteristics are more likely to benefit from the current formula. On the other hand, a few factors, such as acres of forestlands, potential for pesticide and nitrogen leaching, and air quality nonattainment areas, were not significantly related to the allocation, indicating that they had little or no impact on the formula. Our factor analysis, which groups the data into a smaller number of categories that actually drive the formula, found that the largest grouping, with the greatest amount of correlation, included acres of nonfederal grazing land, acres of fair and poor rangeland, livestock animal units, and wind erosion above T—all indicative of dryland agriculture and of livestock feeding and ranching. These results correspond with those of our regression analysis and help to show how the current national allocation formula prioritizes money to states. A complete explanation of both analyses is included in appendix III.

Weaknesses in the financial assistance formula are compounded by NRCS's use of questionable and outdated data. Accurate data are key to ensuring that funds are distributed to states as intended. However, we identified several methodological weaknesses in the data sources: (1) data that were used more than once in the formula, (2) data sources whose accuracy could not be verified, and (3) data that were not as current as possible.

First, 5 of the 29 data sources behind the factors in the financial assistance formula were used more than once, potentially causing NRCS to overemphasize some environmental concerns at the expense of others. Specifically, NRCS uses the same data to estimate pesticide and nitrogen runoff and phosphorous runoff in its formula. According to NRCS, because data measuring the potential for phosphorous runoff were unavailable, it substituted data measuring the potential for pesticide and nitrogen runoff, believing that similar characteristics cause both types of runoff. However, an NRCS official responsible for deriving the runoff and leaching indicators commented that the substitution of one type of runoff data for another was problematic because the mechanisms through which pesticides and nitrogen are transported off-site to cause environmental problems are different from those of phosphorous. A 2006 NRCS cropland report estimates that the intensity of nitrogen and phosphorous losses may differ geographically. For example, nitrogen dissolved in surface water runoff in the upper Midwest accounts for 28 percent of the national total, while phosphorous dissolved in surface water runoff in the same region accounts for 45 percent of the national total. This difference in the effect of these two pollutants in the same region raises questions about the appropriateness of substituting one type of data for the other. Until adequate data are available for a given factor, it may not be appropriate to include that factor in the general financial assistance formula.
NRCS’s formula uses nonirrigated cropland, federal grazing land, nonfederal grazing land, and forestland once for estimating acreage and then again for estimating carbon sequestration. According to NRCS, the agency did not have good source data to measure potential areas where management practices could improve levels of carbon sequestration so it substituted these other data sources. While we could not fully assess the soundness of NRCS’s estimate of carbon sequestration, some academic stakeholders we spoke with questioned whether NRCS had estimated carbon sequestration as effectively as possible and noted that alternate data sources were available. In discussing these alternate sources with NRCS, the EQIP Manager said the agency had not previously considered using these sources for the EQIP formula, but that they could prove relevant. Using the same data for multiple factors may result in factors being indirectly weighted higher than intended. For example, the effective weight of the pesticide nitrogen runoff factor is 5.6 percent—the sum of the original pesticide nitrogen runoff weight (1.7 percent) and the phosphorous runoff weight (3.9 percent). Using data created for one factor for a second factor also makes the formula less transparent and potentially less reliable for allocating state funding. Second, NRCS could not confirm the source of data used in 10 factors in the formula; as such, we could not determine the accuracy of the data, verify how NRCS generated the data, or fully understand the basis on which the agency allocates funding. Specifically, we could not confirm the source of data for acres of federal grazing land, livestock animal units, animal waste generation, acres of cropland eroding above T, acres of forestlands eroding above T, ratio of animal units to cropland, miles of impaired rivers and streams, ratio of commercial fertilizers to cropland, riparian areas, and coastal zone land. For example, we could not verify how data for the livestock animal units and animal waste factors were generated, and NRCS said it had not retained documentation of how the data for these factors were calculated. As a result, it was uncertain whether NRCS had chosen the most appropriate data as its basis for allocating funds to states with pollution problems from livestock and animal waste or whether the data were accurately calculated. EQIP officials told us that, in most cases, the data sources had been chosen and incorporated into the formula before they were involved with EQIP and that documentation had not been kept to identify how data sources were used. In addition, for one factor— the number of limited resource producers in a state—we found that the data did not measure what its factor name indicated. NRCS defines a limited resource producer as one who had, for the last 2 years, (1) farm sales not more than $100,000 and (2) a household income at or below the poverty level, or less than 50 percent of the county median household income. However, the data NRCS uses in the general financial assistance formula only captures farms with low sales, which does not necessarily indicate whether producers on those farms have limited means. As a result, NRCS may not be directing funds to states having farmers with the most limited resources. A description of each factor in the fiscal year 2006 general financial assistance formula can be found in appendix II. 
Third, NRCS does not use the most current data for six factors in the formula—livestock animal units, animal waste generation, number of limited resource producers, miles of impaired rivers and streams, ratio of livestock animal units to cropland, and ratio of commercial fertilizers to cropland. According to NRCS, the source of data on the ratio of commercial fertilizers to cropland was a 1995 report by the Association of American Plant Food Control Officials; we found a 2005 version of the same report with more current data. In other cases, we identified more current, alternate sources of data. For example, the formula currently uses 1996 EPA data for its waste management capital cost factor but could use 2003 NRCS data that estimate waste management costs. Not using recent data raises questions about whether the formula allocates funds to the areas of the country that currently have the greatest environmental needs, because recent changes in a state's agricultural or environmental status may not be reflected. According to our analysis, by using more current data for the number of limited resource producers factor, one state would have received approximately $151,000 more in fiscal year 2006 (a 0.2 percent increase in that state's general financial assistance), and another state would have received approximately $138,000 less (a 1.3 percent decrease in that state's general financial assistance). Because we were unable to determine how NRCS used the data for developing the remaining five factors, we could not determine what impact using more current data for those factors would have on the financial assistance provided to states. According to NRCS, the alternate sources we identified appeared to be acceptable for use in the formula, and the agency is in the process of updating the formula's livestock data. In addition to these six factors, the data used to measure acres of riparian areas, fair and poor rangeland, and forestland eroding above T are about 20 years old and will likely become less appropriate over time.

When we brought our concerns to NRCS's attention, officials agreed that the formula, including its weights and data sources, needed to be reexamined. NRCS subsequently announced plans to issue a request for proposals soliciting comments and suggested revisions to NRCS's formulas for allocating conservation funds, including the EQIP financial assistance formula. In addition, according to NRCS's EQIP Manager, the agency is in the process of consolidating the data used in the financial assistance formulas for its conservation programs into a single database. As a part of this process, the agency plans to review its data sources for the formula factors and update them with more relevant and current data when possible.

NRCS has recently begun to develop program-specific, long-term measures to monitor EQIP's outcomes. In 2000, we reported that performance measures tied to outcomes would better communicate the results NRCS intended its conservation programs to achieve. As part of its 2005 strategic planning effort, NRCS developed outcome-based, long-term measures to assess changes to the environment resulting from the installation of EQIP conservation practices. These measures include such things as reduced sediment delivery from farms, improved soil condition on working cropland, and increased water conservation. Previously, in 2002, NRCS established annual measures that primarily assess program outputs—the number and type of conservation practices installed.
Table 3 outlines NRCS’s seven annual performance measures for fiscal year 2006, and table 4 describes its seven long-term EQIP performance measures approved in 2005. According to NRCS, it has developed baselines for its long-term, outcome- based performance measures and plans to assess and report on them once computer models and other data collection methods that estimate environmental change are completed. The Director of the NRCS Strategic Planning and Performance Division said NRCS expects to assess and report on the status of all measures by 2010 but will be able to assess the results of some measures, such as improved soil condition on working land, sooner. In the meantime, the agency will continue to utilize its existing annual measures to assess performance. The Director of NRCS’s Strategic Planning and Performance Division acknowledged that the long-term measures were not as comprehensive as needed but represented measures NRCS could reasonably assess using modeling and data collection methods that would soon become available. NRCS plans to continue to improve its performance measures going forward. Although we did not assess the comprehensiveness of the EQIP performance measures, the additional information they provide about the results of EQIP outcomes should allow NRCS to better gauge program performance. Such information could also help the agency refine its process for allocating funds to the states via its financial assistance formula by directing funds toward practices that address unrealized performance measures and areas of the country that need the most improvement. The Chief of NRCS’s Environmental Improvement Programs Branch agreed that information about program performance might eventually be linked back to the EQIP funding allocation process. However, the agency does not yet have plans to do so. As a key NRCS conservation program with over $1 billion in annual funding, EQIP was designed to help producers mitigate the potentially negative environmental impacts of agricultural production. However, the program may not be fully optimizing the environmental benefits resulting from practices installed using EQIP dollars because of weaknesses in NRCS’s process for allocating funds to the states. Moreover, outdated and duplicate formula data sources may further compromise EQIP’s effectiveness in allocating funds. Currently, it is not clear that factors, weights, and data sources in the general financial assistance formula help the agency direct funding to the areas of the nation with the greatest environmental threats arising from agricultural production. NRCS has an opportunity to address this issue as it moves forward on its plans to reexamine its conservation funding formulas. Furthermore, the agency may be able to use information gathered from the results of its outcome-based performance measures to refine the financial assistance formula, making it easier for NRCS to direct EQIP funds at the most pressing environmental problems related to agriculture production. 
To achieve EQIP’s purpose of optimizing environmental benefits, we recommend that the Secretary of Agriculture direct the Chief of the Natural Resources Conservation Service to take the following two actions: ensure that the rationale for the factors and weights used in the general financial assistance formula are documented and linked to program priorities, and data sources used in the formula are accurate and current; and continue to analyze current and newly developed long-term performance measures for the EQIP program and use this information to make any further revisions to the financial assistance formula to ensure funds are directed to areas of highest priority. We provided USDA with a draft of this report for review and comment. USDA agreed that the EQIP allocation formula needs review. USDA did not agree with our assessment that NRCS’s funding process lacks a clear link to the program’s purpose of optimizing environmental benefits. The agency stated that its use of factors related to the natural resource base and condition of those resources shows the general financial assistance formula is tied to the program’s purpose of optimizing environmental benefits. USDA stated that, while some formula data sources and weights will be updated, the types of factors used would be needed in any process that attempts to inventory and optimize environmental benefits. While this may in fact be the case, USDA needs to document this connection—that is, why factors were chosen and weights assigned. USDA could make the connection between the formula and the program’s purpose of optimizing environmental benefits more evident if it provided additional information describing its reasons for including or excluding factors in the formula and its rationale for assigning and modifying weights. Appendix VII presents USDA’s comments. We are sending copies of this report to interested congressional committees, the Secretary of Agriculture, the Director of the Office of Management and Budget, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov . If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. At the request of the Ranking Democratic Member, Senate Committee on Agriculture, Nutrition, and Forestry, we reviewed the extent to which (1) the U.S. Department of Agriculture’s (USDA) process for allocating Environmental Quality Incentives Program (EQIP) funds to states is consistent with the program’s purpose of optimizing environmental benefits and (2) USDA has developed measures to monitor program performance. To review the Natural Resources Conservation Service’s (NRCS) process for allocating EQIP funding to the states, we examined EQIP funding documents and spoke with NRCS officials from the Financial Assistance Program Division, Budget Planning and Analysis Division, and Financial Management Division. 
Our analysis considered each of the different categories of EQIP funding, including EQIP general financial assistance, EQIP technical assistance, regional equity funds, performance bonuses, Conservation Innovation Grants, Colorado Salinity funds, Ground and Surface Water Conservation funds, and Klamath Basin funds. We gathered comments from stakeholders about the strengths and weaknesses of NRCS’s EQIP funding approach. We selected stakeholders from environmental and farm organizations to get a broad set of views on the effectiveness of the formula in allocating funds. Specifically, we spoke with representatives from environmental organizations, including Environmental Defense, the National Association of Conservation Districts, the Soil and Water Conservation Society, and the Sustainable Agriculture Coalition, as well as farm organizations, including the American Farm Bureau and the National Pork Producers Council. We also discussed the EQIP funding allocation process with selected participants on state technical committees—the Iowa Department of Natural Resources, Iowa Farm Bureau, and Nebraska Department of Environmental Quality; academic stakeholders; and former NRCS employees who participated in the development of the original formula. We examined the factors and weights in the financial assistance formula and discussed their purpose with EQIP program officials. We performed statistical analysis of the financial assistance formula to determine what impact the different factors had on overall funding. A discussion of the analysis we performed can be found in appendix III. We searched for information about the source of data for each factor in the formula in order to formulate an understanding of what each factor measured and verify the accuracy of the data being used by NRCS. NRCS did not retain documentation of the source data for 10 factors and, as a result, we were unable to verify all data used in the financial assistance formula. To estimate the number of factors using outdated data, we searched for more updated versions of the same data sources NRCS said it used in its formula. We did not include more updated, but different, sources of data in our count. To understand Congress’s and NRCS’s goals for EQIP, we reviewed the Federal Agriculture Improvement and Reform Act of 1996, Farm Security and Rural Investment Act of 2002, associated regulations, and related appropriations laws. We reviewed program documentation describing the purpose and priorities of EQIP and discussed the documentation with EQIP officials. To understand agency conservation priorities, we analyzed a 2005 database of conservation practices funded using EQIP, Ground and Surface Water Conservation, and Klamath Basin funds. To determine how the factors and weights in the formula aligned with resource concerns across the nation, we conducted research on the impact agricultural production has on the environment. We spoke with NRCS officials from selected states—Iowa, Maryland, Mississippi, Missouri, Montana, Nebraska, New Mexico, Rhode Island, and Texas—to better understand resource concerns important to their state and how they used funds received from headquarters to address those concerns. We also spoke with officials from three county offices within these states. This geographically diverse group included states that received varying amounts of EQIP funding and engaged in a range of types of agricultural production. 
To review what measures are in place to monitor EQIP program performance, we spoke with representatives from the NRCS teams responsible for strategic planning and oversight activities—the Operations Management and Oversight Division, Oversight and Evaluation staff, and Strategic and Performance Planning Division—and representatives from the Financial Assistance Program Division. We examined agency strategic planning and performance documents. We reviewed documentation of agency and EQIP goals and performance measures and reviewed the Web-based NRCS Performance Results System. We also spoke with representatives from NRCS and nongovernmental organizations working on the Conservation Effects Assessment Project and reviewed related documentation to determine how that initiative might influence the development of future EQIP goals. Our analysis did not include an independent verification of NRCS's compliance with internal controls. We performed our work between December 2005 and August 2006 in accordance with generally accepted government auditing standards.

Tables 5, 6, 7, and 8, respectively, describe the formulas for allocating general financial assistance, Ground and Surface Water Conservation funds, performance bonuses, and Klamath Basin funds. In the case of the general financial assistance formula, we have identified the source of data for each factor and described what each factor measures.

Using statistical techniques—that is, principal components regression and factor analysis—we analyzed the Environmental Quality Incentives Program (EQIP) formula used to allocate fiscal year 2006 financial assistance to the states to identify the environmental factors that most influenced the allocations. Sixty-five percent of the total EQIP funds for 2006 were based on the allocation formula for financial assistance. In order to determine the relationships between the allocation and the environmental factors (variables), we would ordinarily apply regression techniques to a model expressed as

(1) \( y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_j x_{ij} + \varepsilon_i \), for \( i = 1, \ldots, 48 \).

In equation (1), the dependent variable \( y_i \) is the funding allocation for state i, the x's are the j factors in the allocation formula, \( \beta_0, \beta_1, \ldots, \beta_j \) are the regression coefficients, and \( \varepsilon_i \) is the model error for the ith state. When we used this model, however, standard regression techniques were not possible because many of the environmental factors used in the allocation formula were highly collinear. We used both the variance inflation factor (VIF) and an inspection of the eigenvalues to determine the extent of multicollinearity in the model. Many of the eigenvalues were close to zero, indicating a serious problem with multicollinearity. Following recommendations in the statistical literature (1984) and elsewhere, we used principal components regression analysis, since this technique is recommended when there is multicollinearity in the data.

Before running the regression analysis, we performed the principal components analysis. This procedure generates a set of latent variables, called principal components—uncorrelated linear transformations of the original variables. At this stage, even though the new variables are not collinear, the same magnitude of variance is retained. Therefore, the elimination of small principal components reduces the total variance and substantially improves the diagnostic capability of the model. In order to eliminate these small principal components, various selection procedures are used.
Following Fekedulegn (2002), we chose the cumulative eigenvalue product rule, which keeps the first principal components whose combined product is greater than 1.00 (Guiot et al., 1982). The principal components themselves are expressed as

(2) Z = X*V.

In equation (2), Z is an (i x j) matrix of principal components, X is an (i x j) matrix of standardized environmental factors, and V is a (j x j) matrix of eigenvectors. After the principal components analysis and the elimination of smaller principal components as described above, we used the data in a cross-sectional multivariate regression expressed as

(3) y = 1α_0 + Zα + ε.

In equation (3), y is the (i x 1) vector of state allocations, 1 is an (i x 1) vector for the intercept terms, Z is an (i x j) matrix of principal components, and α is a (j x 1) vector of new coefficients of the principal components. However, this procedure will usually leave some principal components that are not statistically significant. Therefore, to further eliminate the nonsignificant principal components, we used the SAS stepwise regression procedure. Specifically, we eliminated "r" principal components in the analysis, which consisted of the (1) number eliminated using the eigenvalue product rule and (2) number eliminated from the stepwise regression. We were then left with (j – r) principal components estimators or coefficients, and the reduced form of equation (3) becomes

(4) y = 1α_0 + Z_(j–r)α_(j–r) + ε.

The standardized principal components estimators of the regression coefficients are then recovered as

(5) b_pc = V_(j x (j–r)) α_((j–r) x 1).

In equation (5), b_pc (the subscript pc stands for principal components) is the vector of j standardized principal components estimators of the regression coefficients of the environmental factors, V is the (j x (j – r)) matrix of eigenvectors, and α is the reduced ((j – r) x 1) vector of estimated coefficients as in equation (4). Once we have the standardized coefficients of the principal components estimators of the factors, we can transform them back into the coefficients of the original environmental factors. For the standardized estimators, the method for this transformation is expressed as

(6) b_j,pc = b*_j,pc / S_j.

In equation (6), S_j is the standard deviation of the original jth environmental factor x_j, b*_j,pc is the jth standardized estimator, and b_j,pc is the coefficient of the original environmental factor. While we can obtain the regression coefficients of the original environmental factors (the b_j,pc's) that have been corrected for multicollinearity, we cannot directly compare them because most have different units. For instance, some environmental and resource factors used in the formula are measured in acres, while others may be measured in terms of animal units. In other words, the largest coefficient may not be the most influential in the regression. Therefore, when comparing the relative importance of the factors (variables) in the regression, we mainly discuss the standardized estimators of the environmental factors used in the allocation formula.

For the 48 contiguous states, we used a cross-section of data for the dependent variable—the allocation variable—and the independent variables—the environmental variables (factors). We could not incorporate Alaska or Hawaii because we lacked complete data. We excluded two factors—independent variables—from the regression analysis because they were linear combinations of factors already included in the data. For instance, we could not include the carbon sequestration factor because it is the sum of four factors already included in the formula allocation model: acres of nonirrigated cropland, forestland, federal grazing land, and nonfederal grazing land.
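The sequence in equations (2) through (6) can be expressed as a short sketch, shown below. This is a simplified stand-in rather than a reproduction of our SAS procedures: it assumes the raw factors are in X and the state allocations in y, and it selects components by a simple count (the keep argument) in place of the eigenvalue product rule and the stepwise elimination described above.

```python
import numpy as np

def pcr_coefficients(X, y, keep):
    """Principal components regression following equations (2)-(6):
    regress y on the largest principal components of the standardized
    factors, then map the component coefficients back to one
    standardized estimator per original factor."""
    S = X.std(axis=0, ddof=1)
    Xs = (X - X.mean(axis=0)) / S                  # standardize the factors
    eigvals, V = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
    V = V[:, np.argsort(eigvals)[::-1]][:, :keep]  # retained eigenvectors
    Z = Xs @ V                                     # equation (2): Z = X*V
    design = np.column_stack([np.ones(len(y)), Z]) # intercept term, eq. (3)
    alpha = np.linalg.lstsq(design, y, rcond=None)[0]
    b_std = V @ alpha[1:]                          # equation (5): b_pc = V*alpha
    b_orig = b_std / S                             # equation (6): original units
    return b_std, b_orig
```

Because the standardized estimators in b_std are unit-free, they, rather than the coefficients in original units, are the values compared across factors in the discussion below.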
We also excluded the factor for pesticide and nitrogen runoff because it contains the same data as the phosphorous runoff potential factor. Although the U.S. Department of Agriculture (USDA) weights these factors differently, they are still linear combinations and, for regression analysis, must be excluded. In all, we ran the regression using the 2006 state allocations for the 48 states as our dependent variable and the 29 environmental and resource factors in the formula as our independent variables. A standardized coefficient of a factor measures the expected change in the dependent variable for a one-unit change in the standardized independent variable—in this case, the jth factor—all other things being equal. The variables that had the largest standardized coefficients and were also highly statistically significant were acres of fair and poor rangeland, acres of nonfederal grazing land, acres of irrigated cropland, acres of American Indian tribal lands, wind erosion above T, and livestock animal units. As table 9 shows, as one would expect with a formula, most of the factors in the regression were highly significant and positively related to the allocation, except for four factors: acres of forestlands, potential for pesticide and nitrogen leaching, air quality nonattainment areas, and acres of federal grazing lands.

We used the factor analysis technique to reduce the original set of variables (environmental factors) in the EQIP formula to a smaller set of underlying factors that actually drive the variables and the relationships among these variables. Factor analysis has been used previously by researchers to identify, group, and interpret various environmental concerns, such as soil quality, that cannot be measured directly but must be inferred by measuring other attributes that serve as indicators. For this formula, the underlying factors should mimic, in some sense, the underlying environmental concerns, such as water quality and quantity, soil productivity, and wildlife habitat preservation. Factor analysis is a technique used to explain the correlations between variables and to derive a new set of underlying variables, called "factors," that give a better understanding of the data being analyzed. Using this technique allows us to determine what smaller number of factors accounts for the correlation in the larger set of variables in the formula. In factor analysis, each observed variable, x, can be expressed as a weighted composite of a set of underlying, latent variables (f's) such that

(7) x = w_1f_1 + w_2f_2 + … + w_kf_k + e.

In equation (7), the w's are the weights (loadings) on the k latent factors, and e is the portion of x unique to that variable; the correlation between the observed variables, the x's, can thus be explained in terms of the underlying (latent) factors. These latent factors explain the common variance between the variables. For example, given a set of observed variables, factor analysis forms a set of factors that are as independent from each other as possible, while the observed variables within each factor are as highly correlated as possible. To perform the factor analysis, we used the SAS PROC FACTOR procedure, choosing the principal factors method to extract the factors. One part of the analysis was to determine the number of factors to extract. Hypothetically, there can be one factor for every variable, but the goal is to reduce this number to a subset of factors that drive, or control, the values of the variables being measured.
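The extraction step, along with the rotation and 0.4 loading cutoff discussed next, can be illustrated with the sketch below. It is a simplification, not our actual procedure: it uses the principal components solution (unit communalities) as a stand-in for PROC FACTOR's iterated principal factors method, together with a textbook varimax rotation, and assumes the standardized variables are in a NumPy array X.

```python
import numpy as np

def unrotated_loadings(X, n_factors):
    """Initial loadings: leading eigenvectors of the correlation
    matrix, scaled by the square roots of their eigenvalues."""
    eigvals, vecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    top = np.argsort(eigvals)[::-1][:n_factors]
    return vecs[:, top] * np.sqrt(eigvals[top])

def varimax(L, max_iter=100, tol=1e-6):
    """Rotate a (variables x factors) loading matrix to maximize the
    varimax criterion, yielding a more interpretable pattern."""
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < crit * (1 + tol):
            break
        crit = s.sum()
    return L @ R

# Usage sketch: extract four factors, rotate, and flag "salient"
# loadings with absolute value >= 0.4, the cutoff shaded in table 10.
# rotated = varimax(unrotated_loadings(X, 4))
# salient = np.abs(rotated) >= 0.4
```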
We postulated that the underlying factors should mimic, in some sense, the underlying environmental concerns, such as water quality and quantity, soil productivity, and wildlife habitat. However, since the data contain certain variables such as acres of nonirrigated cropland, acres of nonfederal grazing land, or acres of American Indian tribal lands, the latent factors may be different in character. To determine the number of factors, analysts can use several computational methods as well as more subjective criteria, such as the ease of interpretability of the factors. We used both the ease of interpretability of the factors and the "scree test." As is typically done to achieve a more meaningful and interpretable solution, we applied a rotation technique to the initial factor pattern matrix. We used the rotated factor pattern matrix to interpret the meaning of the latent factors, which we identified through their correlations with the environmental factors (variables), as shown in table 10. The factor loadings that have an absolute value equal to or greater than 0.4 are shaded, and several variables are significantly correlated with more than one factor—called a "split loading." The factor analysis technique also calculates the amount of common variance explained by each latent factor. For these data, the variances are: factor 1—6.44, factor 2—5.49, factor 3—3.56, and factor 4—3.00, accounting for about 71 percent of the common variance in the data. Overall, the four factors (1) all relate to environmental concerns, as well as agricultural resources, and (2) each latent factor contributes a decreasing amount of common variance to the total variation among all of the variables. We interpreted the EQIP data that went into the factor analysis to represent (1) dryland agriculture and cattle feeding, (2) water quality concerns relating to concentrated livestock feeding operations and nonirrigated cropland, (3) wildlife habitat preservation, and (4) specialty crops/intensive agriculture and water quality/quantity concerns. Specifics of the factor analysis follow:

Factor 1: This factor contributes the most variation to the factor analysis and seems to be associated with dryland agriculture and cattle grazing and feeding. The variables—acres of nonfederal grazing lands, acres of fair and poor rangeland, wind erosion above T, acres of cropland eroding above T, and acres of irrigated cropland—are all descriptors of this type of agriculture. In addition, factor 1 is also strongly correlated with the livestock animal units variable, although it has a split loading with factor 2. While the number of limited resource producers variable has a split loading between this factor and factor 2, it is most heavily loaded with this factor.

Factor 2: This factor, like factor 1, has to do with livestock operations, as well as with other important livestock-related variables that affect water quality. Here, the highest loading is with the variable for the number of concentrated animal feeding operations/animal feeding operations (CAFOs) (0.88), although it has the split loading with livestock animal units (0.52). In addition, factor 2 showed high loadings for phosphorous runoff potential and potential for pesticide and nitrogen leaching, which may be related to sediment losses from both animal and cropland agriculture. Moreover, as cropland and CAFOs are usually in the same location, one would expect the variable for acres of nonirrigated cropland to also have a high loading, which it does (0.72).
Factor 3: This factor seems to be related to environmental concerns about wildlife habitat, with the highest loading going to acres of wetland and at-risk species habitat (0.87), as well as to acres of bodies of water (0.81), coastal zone land (0.80), and acres of forestlands (0.70). Potential for pesticide and nitrogen leaching (0.46) showed a split loading with factor 2.

Factor 4: This factor seems to represent variables relating to specialty crop and intensive agriculture, with high loadings for acres of specialty crops, ratio of commercial fertilizer to cropland, and acres of irrigated cropland (which had a split loading with factor 1). Also, acres of cropland and pastureland affected by saline and/or sodic conditions, a soil condition that often accompanies irrigated soils, loads just below the 0.4 threshold on factor 4 (0.39). This factor also loads highly with miles of impaired rivers and streams, which may be an indication of water quality and quantity concerns associated with soils that require irrigation. Factor 4 is also highly associated with acres of forestlands eroding above T, many of which are found in the same areas that contain acres of irrigated cropland.

The two variables—air quality nonattainment areas and acres of American Indian tribal lands—did not load onto any of the latent factors. When this happens, the variable has a unique variance that is not explained by the common factors.

(Appendix table notes: Totals for Puerto Rico also include funding provided to the U.S. Virgin Islands, and totals may not add due to rounding. An interim state practice, rather than a nationally approved practice, is tested by NRCS for 2 years, after which it is approved for national use, extended for further testing, added to an existing state standard, or cancelled.)

In addition to the individual named above, Ronald E. Maxon, Jr., Assistant Director; William Bates; Thomas Cook; Barbara El Osta; Paige Gilbreath; Lynn Musser; Omari Norman; and Carol Herrnstadt Shulman made key contributions to this report.
The Environmental Quality Incentives Program (EQIP) assists agricultural producers who install conservation practices, such as planting vegetation along streams and installing waste storage facilities, to address impairments to water, air, and soil caused by agriculture or to conserve water. EQIP is a voluntary program managed by the U.S. Department of Agriculture's (USDA) Natural Resources Conservation Service (NRCS). NRCS allocates about $1 billion in financial and technical assistance funds to states annually. About $650 million of the funds are allocated through a general financial assistance formula. As requested, GAO reviewed whether USDA's process for allocating EQIP funds to states is consistent with the program's purposes and whether USDA has developed outcome-based measures to monitor program performance. To address these issues, GAO, in part, examined the factors and weights in the general financial assistance formula.

NRCS's process for providing EQIP funds to states is not clearly linked to the program's purpose of optimizing environmental benefits; as such, NRCS may not be directing funds to states with the most significant environmental concerns arising from agricultural production. To allocate most EQIP funds, NRCS uses a general financial assistance formula that consists of 31 factors, including such measures as acres of cropland, miles of impaired rivers and streams, and acres of specialty cropland. However, this formula has several weaknesses. In particular, while the 31 factors in the financial assistance formula and the weights associated with each factor give the formula an appearance of precision, NRCS does not have a specific, documented rationale for (1) why it included each factor in the formula, (2) how it assigns and adjusts the weight for each factor, and (3) how each factor contributes to accomplishing the program's purpose of optimizing environmental benefits. Factors and weights are important because a small adjustment can shift the amount of funding allocated to each state on the basis of that factor and, ultimately, the amount of money each state receives. For example, in 2006, a 1 percent increase in the weight of any factor would have resulted in $6.5 million more allocated on the basis of that factor and a reduction of 1 percent in money allocated for other factors. In addition to weaknesses in documenting the design of the formula, some data NRCS uses in the formula to make financial decisions are questionable or outdated. For example, the formula does not use the most recent data available for 6 of the 31 factors, including commercial fertilizers applied to cropland. As a result, any recent changes in a state's agricultural or environmental status are not reflected in the funding for these factors. During the course of GAO's review, NRCS announced plans to reassess its EQIP financial assistance formula.

NRCS recently developed a set of long-term, outcome-based performance measures to assess changes to the environment resulting from EQIP practices. The agency is also in the process of developing computer models and other data collection methods that will allow it to assess these measures. Thus, over time, NRCS should ultimately have more complete information with which to gauge program performance and better direct EQIP funds to areas of the country that need the most improvement.
We initially designated the transformation of DHS as high risk because the department had to transform 22 agencies—several with major management challenges—into one department. Failure to effectively address DHS's management and mission risks could have serious consequences for U.S. national and economic security. Given the significant effort required to build and integrate a department as large and complex as DHS, our initial high-risk designation addressed the department's initial transformation and subsequent implementation efforts, to include associated management and programmatic challenges. At that time, we reported that the creation of DHS was an enormous undertaking that would take time to achieve, and that the successful transformation of large organizations, even those undertaking less strenuous reorganizations, could take years to implement. As DHS continued to mature, and as we reported in our assessment of DHS's progress and challenges 10 years after the terrorist attacks of September 11, 2001, we found that the department implemented key homeland security operations and achieved important goals in many areas to create and strengthen a foundation to reach its potential. As a result, we narrowed the scope of the high-risk area and changed the name from Implementing and Transforming the Department of Homeland Security to Strengthening the Department of Homeland Security Management Functions. Recognizing DHS's progress in transformation and mission implementation, our 2011 high-risk update focused on the continued need to strengthen DHS's management functions (acquisition, information technology, financial management, and human capital) and integrate those functions within and across the department, as well as the impact of these challenges on the department's ability to effectively and efficiently carry out its missions. (See GAO, Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11, GAO-11-881 (Washington, D.C.: Sept. 7, 2011).)

TSA deploys screening technologies, including AIT, to screen passengers before they board passenger aircraft. In response to the December 25, 2009, attempted terrorist attack on Northwest Airlines Flight 253, TSA revised its procurement and deployment strategy for AIT, commonly referred to as full-body scanners, increasing the number of AIT units it planned to procure and deploy. TSA stated that AIT provides enhanced security benefits compared with walk-through metal detectors, such as enhanced detection capabilities for identifying nonmetallic threat objects and liquids. In July 2011, TSA began installing ATR software on deployed AIT systems, which is designed to address privacy concerns by eliminating passenger-specific images; AIT systems equipped with ATR software instead display anomalies that could pose a threat using a generic figure for all passengers. As of May 2013, TSA had deployed about 750 AIT systems to more than 200 airports, most of which were equipped with ATR software. In January 2012, we issued a classified report on TSA's procurement and deployment of AIT that addressed the extent to which (1) TSA followed DHS acquisition guidance when procuring AIT and (2) deployed AIT units are effective at detecting threats. Pursuant to the FAA Modernization and Reform Act of 2012, TSA was mandated to ensure that all AIT systems used to screen passengers are equipped with and employ ATR software by June 1, 2012. Consistent with provisions of the law, TSA subsequently extended this deadline to June 1, 2013.
While TSA has taken some steps and is taking additional steps to address challenges related to developing, testing, and delivering screening technologies for selected aviation security programs, additional challenges remain.

AIT. In January 2012, we issued a classified report on TSA's procurement and deployment of AIT at airport checkpoints. Among other things, we found that TSA did not fully follow DHS acquisition policy, which required TSA to notify DHS's Acquisition Review Board (ARB) if AIT could not meet any of TSA's five key performance parameters or if TSA changed a key performance parameter during qualification testing. Senior TSA officials acknowledged that TSA did not comply with the directive's requirements, but stated that TSA still reached a "good decision" in procuring AIT and that the ARB was fully informed of the program's changes to its key performance parameters. Further, TSA officials stated that the program was not bound by AD 102 because it was a new acquisition process and they believed that the ARB was not fully functioning at the time. DHS officials stated that the ARB discussed the changed key performance parameter but did not see the documents related to the change and determined that TSA must update the program's key acquisition document, the Acquisition Program Baseline, before TSA could deploy AIT systems. However, we concluded that, according to a February 2010 acquisition decision memorandum from DHS, the ARB gave approval to TSA for full-scale production without reviewing the changed key performance parameter. DHS officials stated that the ARB should have formally reviewed changes made to the key performance parameter to ensure that TSA did not change it arbitrarily. According to TSA, it should have submitted its revised requirements for approval, but it did not because there was confusion as to whether DHS should be informed of all changes. Acquisition best practices state that programs procuring new technologies with fluctuating requirements pose challenges to agencies ensuring that the acquisition fully meets program needs. DHS acquisition oversight officials agreed that changing key requirements is not a best practice for system acquisitions already under way. As a result, we found that TSA procured and deployed a technology that met evolving requirements, but not the initial requirements included in its key acquisition requirements document that the agency initially determined were necessary to enhance aviation security. We recommended that TSA develop a road map that specifies development milestones for AIT and have DHS acquisition officials approve the road map. DHS agreed with our recommendation and has taken actions to address it, which we discuss below.

EDS. In July 2011, we found that TSA revised its EDS requirements to better address current threats and had plans to implement these requirements in a phased approach. However, we found that some of the EDS machines in TSA's checked baggage screening fleet were configured to detect explosives at the levels established in 2005 and that the remaining EDS machines were configured to detect explosives at levels established in 1998. When TSA established the 2005 requirements, it did not have a plan with the appropriate time frames needed to deploy EDS machines that meet the requirements. To help ensure that TSA's checked baggage-screening machines are operating most effectively, we recommended that TSA develop a plan to deploy EDSs that meet the most recent explosive detection requirements established in 2010 and ensure that new machines, as well as machines already deployed in airports, will be operated at the levels established in those requirements.
DHS concurred with our recommendation and has begun taking action to address it. Specifically, in April 2012, TSA reported that it had awarded contracts to vendors to implement detection upgrades across the currently deployed EDS fleet to meet the 2010 requirements. In March 2013, TSA reported that it plans to complete upgrading the currently deployed fleet by the end of fiscal year 2013. However, our recommendation is intended to ensure that EDS machines in use at airports meet the most recent detection requirements—both previously deployed units as well as newly procured machines. Until TSA develops such a plan, it will be difficult for the agency to provide reasonable assurance that its upgrade approach is feasible or cost-effective.

As we have reported in the past few years, TSA has not always resolved problems discovered during testing, which has led to costly redesign and rework at a later date, as shown in the following examples. We concluded that addressing such problems before moving to the acquisition phase can help agencies better manage costs. Specifically:

Canines. In January 2013, we found that TSA began deploying passenger screening canine teams to airport terminals in April 2011 prior to determining the teams' operational effectiveness. According to TSA officials, operational assessments did not need to be conducted prior to deployment because canines were being used to screen passengers by other entities, such as airports in the United Kingdom. In June 2012, the DHS Science and Technology Directorate (S&T) and TSA began conducting operational assessments to help demonstrate the effectiveness of passenger screening canine teams. We recommended that, on the basis of the results of DHS's assessments, TSA expand and complete operational assessments of passenger screening canine teams, including a comparison with conventional explosives detection canine teams, before deploying passenger screening canine teams on a nationwide basis, to determine whether they are an effective method of screening passengers in the U.S. airport environment, particularly since they cost the federal government more than TSA's conventional canine teams. Additionally, we found that TSA began deploying passenger screening canine teams before it had completed an assessment to determine where within the airport (i.e., the public, checkpoint, or sterile areas) the teams would be most effectively utilized. TSA leadership focused on initially deploying passenger screening canine teams to a single location within the airport—the sterile area—because it thought it would be the best way to foster stakeholders' acceptance of the teams. However, aviation stakeholders we interviewed at the time raised concerns about this deployment strategy, stating that passenger screening canine teams would be more effectively utilized in nonsterile areas of the airport, such as curbside or in the lobby areas. DHS concurred with our recommendation to expand and complete testing to assess the effectiveness of the teams in areas of the airport deemed appropriate. As of April 2013, TSA had concluded testing with DHS S&T of passenger screening canine teams in the sterile areas of airports, and TSA is still in the process of conducting its own testing of the teams in the sterile and public areas of the airports.
In addition, TSA reported plans to establish a qualified products list for its acquisition of EDS, which would separate the need for explosives data from future procurements and would require that EDS be certified to meet detection requirements prior to beginning acquisitions of EDS to meet those requirements.

According to best practices established in prior work on major acquisitions, realistic program baselines with stable requirements for cost, schedule, and performance are important to delivering capabilities within schedule and cost estimates. Our prior work has found that program performance metrics for cost and schedule can provide useful indicators of program health and can be valuable tools for improving oversight of individual programs. According to DHS's acquisition guidance, the program baseline is the contract between the program and departmental oversight officials and must be established at program start to document the program's expected cost, deployment schedule, and technical performance. Best practices guidance states that reliable and realistic cost, schedule, and performance estimates help ensure that a program will deliver capabilities on time and within budget. However, as we have reported in the past few years and on the basis of our preliminary observations from our ongoing work, TSA has not always developed accurate baselines for establishing cost, schedule, and performance estimates.

AIT. In January 2012, we found that TSA did not have clear plans to require AIT vendors to meet milestones used during the AIT acquisition. On the basis of our findings, we recommended that TSA develop a road map that outlines vendors' progress in meeting all key performance parameters, because it is important that TSA convey vendors' progress in meeting those requirements and the full costs of the technology to decision makers when making deployment and funding decisions. While TSA reported that it hoped vendors would be able to gradually improve meeting key performance parameters for AIT over time, we concluded that TSA would have more assurance that limited taxpayer resources are used effectively by developing a road map that specifies development milestones for the technology and having DHS acquisition officials approve this road map. DHS agreed with our recommendation and has taken actions to address it. For example, in February 2012, TSA developed a road map that specifies development and deployment milestones, including the addition of ATR to existing deployed systems, continued development of enhanced detection capabilities, and acquisition plans for the next generation of AIT systems (AIT-2). In July 2012, DHS acquisition officials reviewed the AIT road map. However, on the basis of our preliminary observations from our ongoing work conducted in March 2013, we found that TSA has fallen behind the schedule outlined in the AIT road map to install ATR software upgrades on existing deployed AIT systems because one of the vendors was unable to develop this software in time for the installation of ATR software on all units by June 2013. TSA subsequently decided to terminate its contract with this vendor and remove all deployed units from airports. TSA has also fallen behind the schedule outlined in the AIT road map to acquire and test AIT-2 systems because of vendors' inability to provide required documentation verifying that contractual requirements have been met and that the units are ready to begin testing. Although TSA updated the AIT road map in October 2012, it subsequently missed some of the key deadlines specified in the updated version as well.
We currently have ongoing work related to this area, and we plan to report the results in the fall of 2013.

EDS. In July 2011, we found that TSA had established a schedule for the acquisition of EDS machines, but the schedule did not fully comply with leading practices, and TSA had not developed a plan to upgrade its EDS fleet to meet the current explosives detection requirements. These leading practices state that the success of a large-scale system acquisition, such as TSA's EDS acquisition, depends in part on having a reliable schedule that identifies, among other things, when the program's set of work activities and milestone events will occur. However, we reported that the schedule for the EDS acquisition was not reliable because it did not reflect all planned program activities and did not include a timeline to deploy EDSs or plans to procure EDSs to meet subsequent phases of explosive detection requirements. On the basis of our findings, we concluded that developing a reliable schedule would help TSA better monitor and oversee the progress of the EDS acquisition. DHS concurred with our recommendation to develop and maintain a schedule for the entire Electronic Baggage Screening Program in accordance with the leading practices we identified for preparing a schedule. In July 2011, DHS commented that TSA had already begun working with key stakeholders to develop and define requirements for a schedule and to ensure that the schedule aligns with the best practices we outlined. TSA reported in March 2013 that it plans to have an updated integrated master schedule by September 2013. In April 2012, we also reported that TSA could strengthen its cost estimates for checked baggage screening so that they can be used to support DHS funding and budget decisions. (See GAO, Checked Baggage Screening: TSA Has Deployed Optimal Systems at the Majority of TSA-Regulated Airports, but Could Strengthen Cost Estimates, GAO-12-266 (Washington, D.C.: Apr. 27, 2012).) In April 2013, TSA reported that it plans to have an updated integrated master schedule and revised life cycle cost estimate by September 2013, which, when completed, will allow it to update its cost estimate for the Electronic Baggage Screening Program.

In part because of the challenges we have highlighted in DHS's acquisition process, strengthening DHS's management functions remains on our high-risk list. However, DHS has efforts under way to strengthen its oversight of component acquisition processes. We found in September 2012 that while DHS has initiated efforts to address the department's acquisition management challenges, most of the department's major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised. We identified 42 programs that experienced cost growth, schedule slips, or both, with 16 of the programs' costs increasing from a total of $19.7 billion in 2008 to $52.2 billion in 2011—an aggregate increase of 166 percent. Moreover, we reported that DHS leadership has authorized and continued to invest in major acquisition programs even though the vast majority of those programs lack foundational documents demonstrating the knowledge needed to help manage risks and measure performance. For example, we found that DHS leadership—through the Investment Review Board or its predecessor body, the ARB—had formally reviewed 49 of the 71 major programs. We found that DHS permitted 43 of those programs to proceed with acquisition activities without verifying that the programs had developed the knowledge in key acquisition documents as required by AD 102.
DHS officials reported that DHS's culture has emphasized the need to rapidly execute missions more than sound acquisition management practice and that DHS could not approve the documents in a timely manner. On the basis of our findings, we concluded that DHS recognized the need to implement its acquisition policy more consistently, but that significant work remains. We recommended that DHS modify acquisition policy to better reflect key program and portfolio management practices and ensure acquisition programs fully comply with DHS acquisition policy. DHS concurred with our recommendations and reported taking actions to address some of them. For example, in September 2012, DHS stated that it was in the process of revising its policy to more fully reflect key program management practices to enable DHS to more rapidly respond to programs' needs by facilitating the development, approval, and delivery of more specific guidance for programs.

In March 2012, we found that to enhance the department's ability to oversee major acquisition programs, DHS realigned the acquisition management functions previously performed by two divisions within the Office of the Chief Procurement Officer to establish the Office of Program Accountability and Risk Management (PARM) in October 2011. PARM, which is responsible for program governance and acquisition policy, serves as the Management Directorate's executive office for program execution and works with DHS leadership to assess the health of major acquisitions and investments. To help with this effort, PARM is developing a database, known as the Decision Support Tool, intended to improve the flow of information from component program offices to the Management Directorate to support its oversight and management efforts. However, we reported in March 2012 that DHS executives were not confident enough in the data to use the Decision Support Tool to help make acquisition decisions. On the basis of our findings, we concluded that DHS had limited plans to improve the quality of the data because PARM planned to check the data quality only in preparation for key milestone meetings in the acquisition process. We reported that this could significantly diminish the Decision Support Tool's value because users cannot confidently identify and take action to address problems meeting cost or schedule goals prior to program review meetings.

In February 2013, we reported that DHS updated its Integrated Strategy for High Risk Management in June 2012, which includes management initiatives and corrective actions to address acquisition management challenges, among other management areas. In the June 2012 update, DHS included, for the first time, performance measures and progress ratings for all of the management initiatives. The June 2012 update also identified the resources needed to implement most of its corrective actions, although we found that DHS needs to further identify its resource needs and communicate and mitigate critical gaps. On the basis of our findings, we concluded that the strategy, if implemented and sustained, will provide a path for DHS to be removed from our high-risk list. Going forward, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes, including those related to acquisition management. We will continue to monitor DHS's efforts to determine if the actions and outcomes are achieved.
Chairman Hudson, Ranking Member Richmond, and members of the committee, this concludes my prepared statement. I look forward to responding to any questions that you may have.

For questions about this statement, please contact Steve Lord at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Dave Bruno, Assistant Director; Carissa Bryant; Susan Czachor; Emily Gunn; and Tom Lombardi. Key contributors for the previous work that this testimony is based on are listed within each individual product.

High-Risk Series: Government-wide 2013 Update and Progress Made by the Department of Homeland Security. GAO-13-444T. Washington, D.C.: March 21, 2013.

High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.

TSA Explosives Detection Canine Program: Actions Needed to Analyze Data and Ensure Canine Teams Are Effectively Utilized. GAO-13-239. Washington, D.C.: January 31, 2013.

Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs. GAO-12-833. Washington, D.C.: September 18, 2012.

Homeland Security: DHS and TSA Face Challenges Overseeing Acquisition of Screening Technologies. GAO-12-644T. Washington, D.C.: May 9, 2012.

Checked Baggage Screening: TSA Has Deployed Optimal Systems at the Majority of TSA-Regulated Airports, but Could Strengthen Cost Estimates. GAO-12-266. Washington, D.C.: April 27, 2012.

Transportation Security Administration: Progress and Challenges Faced in Strengthening Three Key Security Programs. GAO-12-541T. Washington, D.C.: March 26, 2012.

Aviation Security: TSA Has Made Progress, but Additional Efforts Are Needed to Improve Security. GAO-11-938T. Washington, D.C.: September 16, 2011.

Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011.

Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011.

Aviation Security: TSA Has Taken Actions to Improve Security, but Additional Efforts Remain. GAO-11-807T. Washington, D.C.: July 13, 2011.

Aviation Security: TSA Has Enhanced Its Explosives Detection Requirements for Checked Baggage, but Additional Screening Actions Are Needed. GAO-11-740. Washington, D.C.: July 11, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.

Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.

Defense Acquisitions: Managing Risk to Achieve Better Outcomes. GAO-10-374T. Washington, D.C.: January 20, 2010.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
TSA acquisition programs represent billions of dollars in life cycle costs and support a range of aviation security programs, including technologies used to screen passengers and checked baggage. Within DHS, TSA is responsible for establishing requirements for testing and deploying transportation system technologies. Since 2010, GAO has reported that DHS and TSA faced challenges in managing acquisition efforts, including deploying technologies that did not meet requirements and were not appropriately tested and evaluated. As requested, this testimony discusses (1) the extent to which TSA addressed challenges relating to developing and meeting program requirements, testing new screening technologies, and delivering capabilities within cost and schedule estimates for selected programs, and (2) DHS efforts to strengthen oversight of component acquisition processes. This testimony is based on GAO products issued from January 2010 through January 2013, including selected updates conducted in March 2013 on TSA's efforts to implement GAO's prior recommendations and preliminary observations from ongoing work. To conduct the updates and ongoing work, GAO analyzed documents, such as the AIT road map, and interviewed TSA officials.

The Transportation Security Administration (TSA) has taken and is taking steps to address challenges related to developing, testing, and delivering screening technologies for selected aviation security programs, but challenges remain. For example, in January 2012, GAO reported that TSA faced challenges developing and meeting key performance requirements for the acquisition of advanced imaging technology (AIT)—i.e., full-body scanners. Specifically, GAO found that TSA did not fully follow Department of Homeland Security (DHS) acquisition policies when acquiring AIT, which resulted in DHS approving nationwide AIT deployment without full knowledge of TSA's revised specifications. DHS required TSA to notify DHS's Acquisition Review Board (ARB) if AIT could not meet any of TSA's five key performance parameters or if TSA changed a key performance parameter during testing. However, GAO found that the ARB approved TSA for full-scale production without reviewing the changed parameter. DHS officials said that the ARB should have formally reviewed this change to ensure that TSA did not change it arbitrarily. GAO recommended that TSA develop a road map that outlines vendors' progress in meeting all key performance parameters. DHS agreed, and developed a road map to address the recommendation, but faces challenges implementing it—e.g., due to vendor delays. Additionally, in January 2013, GAO reported that TSA faced challenges related to testing and deploying passenger screening canine teams. Specifically, GAO concluded that TSA began deploying these canine teams to airport terminals in April 2011 prior to determining the canine teams' operational effectiveness. In June 2012, DHS and TSA began conducting operational assessments to help demonstrate canine teams' effectiveness. Also, TSA began deploying teams before it had completed an assessment to determine where within the airport the canine teams would be most effectively utilized. GAO recommended that on the basis of DHS assessment results, TSA expand and complete testing to assess the effectiveness of canine teams in areas of the airport deemed appropriate.
DHS agreed, and officials said that as of April 2013, TSA had concluded testing in collaboration with DHS of canine teams in airport sterile areas—in general, areas of an airport for which access is controlled through screening of persons and property—and is testing teams on its own in airport sterile and public areas.

DHS has some efforts under way to strengthen its oversight of component investment and acquisition processes, but additional actions are needed. In September 2012, GAO reported that while DHS had initiated efforts to address the department's acquisition management challenges, most of DHS's major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised. GAO identified 42 DHS programs that experienced cost growth, schedule slips, or both, with 16 of the programs' costs increasing from a total of $19.7 billion in 2008 to $52.2 billion in 2011—an aggregate increase of 166 percent. GAO concluded that DHS recognized the need to implement its acquisition policy more consistently, but that significant work remained. GAO recommended that DHS modify acquisition policy to better reflect key program and portfolio management practices and ensure acquisition programs fully comply with DHS acquisition policy. DHS agreed, and in September 2012 officials stated that DHS was in the process of revising its policy to more fully reflect key program management practices.

GAO has made recommendations to DHS and TSA in prior reports to help strengthen their acquisition processes and oversight. DHS and TSA generally concurred and are taking actions to address them.
This report presents the results of our survey of the background and training of key financial management personnel at 34 of the largest private corporations and 19 of the largest state governments in the United States. We asked surveyed organizations for information on the education, work experience, training, and professional certifications of their key financial management personnel—chief financial officers (CFO), controllers, and managers and supervisors—working in financial reporting, financial analysis, and accounting operations positions. In addition, we asked for information on training and qualification requirements for these personnel. Overall, our survey results provide information on about 4,900 private sector and state government financial management personnel. Qualified personnel can play a variety of important roles in establishing and maintaining a strong, successful financial management organization. Specifically, qualified personnel can provide leadership in the efficient use of an organization’s financial resources by promoting effective general and financial management practices; serve as an integral part of an organization’s decision-making by providing timely and reliable financial and performance information and by analyzing the implications of this information for the organization’s goals and objectives; and help ensure that the organization’s assets are safeguarded from fraud, waste, and abuse by improving its accounting systems and internal controls. While the accounting profession has focused on the first and last roles for many years, a number of studies indicate that financial personnel are increasingly being asked to take on the second of their potential roles, that of a “business partner” in organizational decision-making. In the past, the accounting function was paper-driven, human resource intensive, and clerical in nature. In many organizations today, recent advances in information technology, as well as competitive pressures and corporate restructuring, have combined to dramatically change the accounting function from a clerical to an analytical and consultative focus. According to a 1996 report by the Institute of Management Accountants (IMA), the management accounting profession has been in transition for the past 5 to 10 years. The study found that management accountants are increasingly being asked to supplement their traditional accounting role with more financial analysis and management consulting. Specifically, the IMA study reported that accountant work activities most critical to company success now include not only traditional financial management skills—those associated with accounting systems and financial reporting—but also strategic planning, internal consulting, and short-term budgeting processes. The IMA study characterized this change as a “. . . shift from number cruncher and corporate cop to decision-support specialist.” A recent study by a major public accounting firm also underscored the need for financial management personnel to have financial expertise, augmented by interpersonal and communication skills, an enterprise perspective, initiative, and overall organizational savvy. These evolving expectations for accountants parallel a similar movement in the auditing profession. 
As a result of technological innovations, coupled with complex business structures and other economic forces, auditors are being asked to provide a wide range of services that go beyond the traditional audit of historical financial statements, such as management consulting services. Auditors are increasingly being asked to be substantially more involved with the functioning of business systems than just attesting to the reliability of reported financial data. Major change is also underway in the federal financial management arena. The Congress has taken various steps to help ensure that federal agencies improve their financial management. One of the key pieces of legislation was the Chief Financial Officers Act of 1990. The CFO Act spelled out an ambitious agenda for financial management reform, including expectations for the (1) deployment of modern systems to replace existing antiquated, often manual, processes, (2) development of better performance and cost measures, and (3) design of results-oriented reports on the government’s financial condition and operating performance by integrating budget, accounting, and program information. The Government Management Reform Act of 1994 expanded and made permanent the requirement in the CFO Act for audited financial statements to the 24 largest federal departments and agencies and mandated annual audited governmentwide financial statements. The CFO Act also established chief financial officers throughout government to provide needed leadership. One of the key responsibilities assigned to agency CFOs is overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions. The development of highly qualified financial managers will be crucial to successfully implementing the CFO Act. We have reported many instances in which the federal government’s ability to produce accurate financial data was undermined simply because personnel with financial management responsibilities did not follow rudimentary policies and procedures, such as accurate transaction processing and routine account reconciliations. Further, the requirements of the Government Performance and Results Act of 1993 call for federal managers to fundamentally shift their focus from a preoccupation with rigid adherence to prescribed processes to assessing the extent to which federal programs have achieved desired outcomes and results. Accordingly, agency financial personnel are increasingly being asked to draw on new sets of skills to produce cost and other performance-based financial data. Such data are essential if congressional and executive branch decisionmakers are to make well-informed decisions on the relative efficiency and effectiveness of federal programs. While these financial management improvement efforts may be new to many financial personnel in the federal government, similar requirements have been in place for personnel in the private sector and in state governments for many years. The disciplined process required to generate reliable, accurate financial data has been in place in the private sector for over 60 years following the 1929 stock market crash, and in state governments since the early 1980s. The financial personnel in these organizations have also had extensive experience in developing and implementing meaningful financial performance measures. 
The objectives for this report were to identify (1) the background and training profiles of key financial management personnel working at large private sector corporations and state governments and (2) the qualification requirements applicable to personnel in these positions. To accomplish these objectives, we surveyed the organizations closest in size and complexity to federal agencies. Accordingly, we requested information on the qualifications of key financial management personnel in the 100 largest private corporations in the United States, commonly referred to as the “Fortune 100,” and the 25 largest state governments. To collect profile information on key corporate and state financial management personnel, we designed a questionnaire which was sent to Fortune 100 and selected state CFO/controller offices and their five largest divisions or departments. The design of the questionnaire used in our study was based on a framework for measuring the quality of the federal workforce presented in a previous GAO report. That framework identified education, work experience, training, and professional certifications as quantifiable factors for assessing the qualifications of federal government personnel. In using this framework, we asked surveyed organizations for information on the education, work experience, and professional certifications of their key financial management personnel: chief financial officers, controllers, and managers and supervisors working in financial reporting, financial analysis, and accounting operations positions. We also asked for information on training and qualification requirements for the above mentioned managers and supervisors. To help ensure that the questionnaire was clear and that the respondents’ information would be most relevant to the federal CFO community, we obtained comments from a variety of interested parties and pretested the questionnaire. Specifically, we requested and incorporated, as appropriate, comments on our questionnaire received from representatives of the Private Sector Council; the National Association of State Auditors, Comptrollers, and Treasurers; and the federal CFO Council Human Resources Committee. In addition, an academic consultant from the University of Denver School of Accountancy with expertise in this area reviewed the questionnaire and provided comments. The survey instrument was also pretested at one Fortune 100 company and two state governments. The pretests were conducted through interviews to observe respondents as they completed the questionnaire and to debrief them immediately afterward. On the basis of the advisors’ comments and pretesting results, the questionnaire was revised. Appendix III provides a copy of the final survey instrument. Responses were received from 34 Fortune 100 companies and from 19 of the 25 largest state governments. The 34 Fortune 100 companies from which we received responses represent all major industry groupings except agriculture. Ten of the companies were finance, insurance, or real estate companies, such as BankAmerica Corporation, Citibank, and the Metropolitan Life Insurance Company. Fifteen manufacturing and mining companies responded to our survey, including the Lockheed Martin Corporation, the Hewlett-Packard Company, AlliedSignal, and the Mobil Corporation. We also received responses from nine transportation, communication, and wholesale/retail trade companies, including AT&T, MCI Communications, United Airlines, and SuperValu. 
The 1995 revenues of the Fortune 100 respondents ranged from $12.7 billion to $79.6 billion. Appendix I lists the Fortune 100 companies, divisions, and subsidiaries responding to our survey. The state government comptroller offices and operational departments responding to our survey were located throughout the country. On average, more than four major organizations responded in each responding state, with the number ranging from one to six per state, and the respondents included the largest states. For example, we received responses from California, Florida, Illinois, Michigan, New York, Virginia, and Washington. The revenues of the state government respondents ranged from $10.8 billion to $108.2 billion. The state government comptroller offices and other departments that responded to our survey are listed in appendix II. We did not verify the accuracy of the information provided by the Fortune 100 and state government respondents. However, we provided a draft of this report to the parties commenting on the initial survey instrument and have incorporated their comments as appropriate. We conducted our work from June 1996 through December 1997 in accordance with generally accepted government auditing standards.

Overall, the survey respondents provided information on 4,930 financial management personnel: 3,621 (73 percent) in Fortune 100 companies and 1,309 (27 percent) in state governments. Table 1 shows the positions held by the financial management personnel about whom information was provided. The Fortune 100 personnel about whom information was provided worked in 1 of 34 corporate offices or 54 corporate divisions or subsidiaries, as listed in appendix I. The state government personnel about whom information was provided worked in 1 of 18 state comptroller offices (or the equivalent) or 67 operational departments, as listed in appendix II.

The following sections present information on the educational backgrounds and related education requirements for key financial management personnel at the Fortune 100 companies and state governments responding to our survey. In the Fortune 100 companies, more than 90 percent of financial management personnel held undergraduate degrees, with about 75 percent holding either accounting or other business degrees. Accounting degrees were more commonly held by managers and supervisors of financial reporting and accounting operations. CFOs, controllers, and managers and supervisors of financial analysis commonly held either accounting or other business degrees. Senior executives were more likely than managers and supervisors to hold nonbusiness degrees. Figure 1 shows, by position, the undergraduate degrees attained by Fortune 100 financial management personnel.

Overall, about 40 percent of the Fortune 100 personnel held advanced degrees. The percentage of personnel with advanced degrees ranged from over 60 percent of CFOs and controllers to about 24 percent of supervisors of accounting operations. For example, about 39 percent of the Fortune 100 managers of accounting operations held advanced degrees. In addition, managers and supervisors of financial analysis were more likely to hold an advanced degree than were other managers and supervisors. In the Fortune 100 companies, the majority of advanced degrees held were MBAs. Figure 2 shows, by position, the advanced degrees attained by Fortune 100 financial management personnel.
Almost 60 percent of Fortune 100 respondents required bachelor’s degrees—in either accounting or another business field—for manager and supervisor positions in financial reporting, financial analysis, and accounting operations. For example, about 45 percent of the respondents required their managers of financial reporting to have accounting degrees and another 23 percent required such managers to have either accounting or other business degrees. About 34 percent of the respondents required their managers of accounting operations to have accounting degrees, and another 20 percent required such managers to have either accounting or other business degrees. In addition, several of the organizations without any formal bachelor’s degree requirements for their financial management personnel said that they preferred hiring personnel with bachelor’s degrees for these positions. In some cases, Fortune 100 organizations required advanced degrees for their managerial and supervisory financial management positions. Overall, about 12 percent of the respondents required advanced degrees, most commonly MBAs, for the financial management positions examined. For example, 11 percent of respondents required managers of financial reporting and accounting operations to have advanced degrees, while about 19 percent of respondents required their managers of financial analysis to have advanced degrees. Further, 18 respondents added that advanced degrees, while not formally required, were preferred for these positions. Also, 43 Fortune 100 respondents said that they had recently upgraded or planned to upgrade their education requirements. For example, one Fortune 100 respondent had established a new requirement that all its financial management personnel have CPAs and MBAs or other advanced financial degrees. Another respondent told us that it recently had established a policy encouraging its financial management personnel to obtain advanced degrees and professional certifications because of the increased knowledge they had to possess to report the organization’s financial results in accordance with generally accepted accounting principles and because of the sophisticated nature of its business, including diverse products and markets. Yet another respondent indicated that its vision for upgrading the qualifications of its financial personnel focused on building broader business awareness and related analytic skills. On average, about 78 percent of the state government financial management personnel held bachelor’s degrees. The percentage of personnel holding bachelor’s degrees varied by position, ranging from about 96 percent of CFOs and controllers to 58 percent of supervisors of accounting operations. Depending on the position, from 50 to 80 percent of personnel held either accounting or other business degrees. CFOs were more likely than controllers, managers, and supervisors to hold nonbusiness degrees. About one-third of the CFOs held nonbusiness degrees. Figure 3 shows, by position, the undergraduate degrees attained by state government personnel. About 16 percent of the financial management personnel working for state government respondents held advanced degrees. The percentage of personnel with advanced degrees ranged from about 41 percent of CFOs to about 6 percent of supervisors of accounting operations. For instance, about 11 percent of managers of accounting operations held advanced degrees. 
In addition, CFOs, controllers, and managers and supervisors of financial analysis were more likely to hold advanced degrees other than MBAs or master's degrees in accounting. For other positions, MBAs were the most commonly held advanced degrees. Figure 4 shows, by position, the advanced degrees attained by state government personnel. About 44 percent of the state government respondents required either accounting or other business degrees for manager and supervisor positions in financial reporting, financial analysis, and accounting operations. For example, about 27 percent of respondents required their managers of financial analysis to have accounting degrees, and another 18 percent of respondents required these managers to have either accounting degrees or other business degrees. About 35 percent of respondents required their managers of accounting operations to have accounting degrees, and another 9 percent of the respondents required such managers to have either accounting or other business degrees. One state government respondent described the trend toward stronger education requirements this way: "The evolution of accounting functions has resulted in increased need for personnel with four year accounting degrees. The typical make-up of office staff over the past 15 years has changed mostly from clerical individuals to individuals with accounting degrees. The increased use of computers requires a high degree of computer skills and analytical capabilities." Further, several of the state organizations that said they did not have formal bachelor's degree requirements for their financial management personnel said they preferred that these personnel have bachelor's degrees. One state department informed us that it had raised the number of college accounting hours needed for all its professional-level accounting positions from 12 to 24 approximately 2 years earlier. In only a few cases did state government organizations require advanced degrees for their manager and supervisor positions in financial management. On average, less than 3 percent of state government respondents required advanced degrees—either MBAs or other master's degrees—for these positions. For example, while 4 percent of respondents required their managers of financial reporting to have advanced degrees, none of the respondents to our study required their supervisors of financial reporting or accounting operations to have advanced degrees. Financial management personnel in the Fortune 100 companies responding to the survey had, on average, about 14 years of total experience in corporate accounting, public accounting, internal auditing, or accounting systems design and maintenance. This overall experience included an average of 2.5 years of combined experience in public accounting, internal auditing, or accounting systems design and maintenance. These three areas of experience are particularly noteworthy because they often provide exposure to a wide variety of accounting issues and decision-making processes throughout an organization. The years of work experience in corporate accounting and the other three areas varied by position. Controllers and CFOs averaged about 19 and 17 years of work experience, respectively, while managers and supervisors averaged from 12 to 16 years of experience, depending on position. Figure 5 shows, by position, the average years of work experience in these four areas for financial management personnel in the Fortune 100 companies surveyed.
Overall, state government personnel had about 20 years of work experience in government accounting, public accounting, internal auditing, and accounting systems design and maintenance. The state governments' CFOs and controllers averaged 20 and 21 years of work experience, respectively, in these areas. Managers and supervisors averaged 16 to 24 years, depending on the position. This total experience included an average of 4 years of combined experience in public accounting, internal auditing, or systems design and maintenance, fields that often provide exposure to a broad base of accounting issues throughout an organization. Figure 6 shows, by position, the average years of work experience in these four areas for financial management personnel in the state governments surveyed. The following sections present information on the training attained and required for financial management personnel for those Fortune 100 and state government organizations responding to our survey. Overall, Fortune 100 financial management personnel completed an average of 26 hours of training in 1996. The number of training hours ranged from about 20 to 40, depending on financial management position. Most of the hours completed were in technical accounting subjects. For example, one respondent told us that over the past few years the company had strongly encouraged managers throughout the organization to increase their technical skills by taking classes, becoming certified, and working toward advanced degrees. Another respondent stressed the importance of employee development programs that not only emphasize customer and market knowledge but also broaden and upgrade financial skills. In addition, a number of Fortune 100 respondents cited the need to tailor their CPE programs so that their financial management personnel could maintain their professional certifications, such as CPAs. For example, one company subsidiary stated that its CPE requirements are tailored toward the requirements of the professional certifications that its financial management personnel possess. The subsidiary also indicated that it planned to greatly increase its training curriculum and requirements for all its financial management personnel in the near future. Figure 7 shows, by position, the average number of continuing professional education hours completed in 1996 by financial management personnel in the Fortune 100 companies surveyed. About 70 percent of Fortune 100 respondents set aside between 1 and 2 percent of their budget for financial management salaries and benefits for training financial management personnel. In addition, another 15 percent of the Fortune 100 respondents set aside more than 2 percent. However, while all Fortune 100 respondents set aside some portion of their budgets for training, 15 percent set aside less than 1 percent. Few of the Fortune 100 respondents had any financial management training requirements. However, those respondents with such requirements had, on average, 31 total hours of required training in 1996, including 18 hours in technical accounting. The total average number of hours of training required of financial management personnel in these corporations ranged from 15 to 45 hours, depending on position. In addition, 36 respondents commented that they encouraged their employees to obtain additional training and that employees, particularly those with professional certifications, tend to seek out training on their own.
In order to maintain their CPA certifications, employees are generally required to complete at least 80 hours of continuing professional education every 2 years, an average of 40 hours per year. State government financial management personnel completed, on average, about 31 hours of training in 1996. (The number of hours ranged from about 25 to 35, depending on position.) Most of the hours completed were in technical accounting subjects. Several state government respondents also stressed that their CPE training programs were, in part, driven by the CPE requirements needed to maintain the various professional certifications held by their financial management personnel. One respondent noted that its policy for financial management personnel at the manager and supervisor level who are not certified was to develop individual training plans tailored to the individual's area of expertise, with a goal of 24 hours of training a year. Another state indicated that it provided training to its financial management personnel on an as-needed basis so that personnel could successfully perform their job requirements. It further informed us that it encourages its financial personnel to attend CPE training courses by allowing administrative time off and, to the extent that funds are available, paying for the cost of such training. Figure 8 shows, by position, the average number of continuing professional education hours completed in 1996 by financial management personnel in state governments surveyed. In addition, over half of the state government organizations set aside 1 percent or more of their financial management salaries and benefits budget for training. Forty-five percent of respondents set aside from 1 to 2 percent of their budgets for training, with another 8 percent setting aside more than 2 percent. However, 47 percent of the state government respondents set aside less than 1 percent of their budgets for training, including 15 respondents (21 percent) who said that they did not set aside any funds. Few state government respondents had any financial management training requirements. However, those states with such requirements had, on average, 36 hours of required training in 1996, including 26 hours in technical accounting. Total required training for financial management personnel in these state organizations ranged from 31 to 40 hours, depending on position. In addition, similar to many Fortune 100 respondents, 24 state government respondents commented that they encouraged their employees to obtain training, even though it was not required. In order to maintain their CPA and CGFM certifications, employees are generally required to complete at least 80 hours of continuing professional education every 2 years. The following sections describe the certifications attained and required for Fortune 100 and state government financial management personnel. Among Fortune 100 respondents, the CPA was the most commonly held professional certification. Overall, about 25 percent of Fortune 100 financial managers were CPAs. Specifically, this included about 42 percent of the controllers, 43 percent of managers of financial reporting, and 41 percent of supervisors of financial reporting. For other positions, the percentage of CPAs ranged from 32 percent of the CFOs to 16 percent of the supervisors of financial analysis. Few financial management personnel were certified management accountants (CMA) or certified internal auditors (CIA).
Figure 9 shows, by position, the professional certifications held by financial management personnel in the Fortune 100 companies surveyed. Fortune 100 organizations generally did not require professional certifications for the financial management positions examined in our study, although 13 respondents said that they preferred that their managers and supervisors be CPAs. On average, about 18 percent of respondents required a CPA for the manager and supervisor positions examined in our survey. For example, about 31 percent of Fortune 100 respondents required a CPA for their managers of financial reporting, and 11 percent to 21 percent required a CPA for other positions. Requirements for other certifications (CMA and CIA) were rare. Two types of certifications were common among state government financial managers—CPA and certified government financial manager (CGFM). On average, about 21 percent of state government financial management personnel were CPAs. About 30 percent of CFOs, controllers, and managers and supervisors of financial reporting held CPAs. The percentage of personnel in other positions holding CPAs ranged from 21 percent of managers of accounting operations to 10 percent of supervisors of accounting operations. One state also informed us that certifications, such as the CPA or CGFM, have replaced a bachelor's degree as a preference item in its hiring and promotion programs. In addition, the percentage of personnel across all positions that held CGFM certificates ranged from 3 percent to 18 percent. Figure 10 shows, by position, the professional certifications held by financial management personnel in the state governments surveyed. Few state organizations required professional certifications for the financial management positions examined in the study. For example, about 13 percent of state government respondents required a CPA for their managers of financial reporting. For other positions, a lower percentage of respondents required CPAs, although five respondents said that they preferred that their managers and supervisors be CPAs. One state department told us that for the past 8 years, it has required all its financial management personnel to be CPAs. Another indicated that it now required all financial reporting and accounting operations managers to be CPAs and that, because of a perceived increase in personnel with CPAs available in recruitment pools, it has established a CPA as a desired credential for all professional positions in the accounting, financial analysis, and financial reporting areas. Requirements for other certifications—CGFM and CMA—were rare. Like Fortune 100 companies and large state governments, federal agencies must respond creatively to the challenges posed by new technologies, downsizing and restructuring, and increased reporting requirements. Consequently, the experiences of the nonfederal organizations in our review may provide important lessons for future federal efforts to improve the qualifications and professionalism of the federal financial management workforce in response to the challenge of moving from a strict accounting role to that of a "business partner." These lessons include upgrading requirements for hiring personnel and ensuring that personnel on board acquire the appropriate training needed to effectively carry out their evolving responsibilities.
We are sending copies of this report to the Ranking Minority Member of the House Committee on Government Reform and Oversight, CFOs and inspectors general for the 24 largest federal agencies and departments, the Directors of the Office of Management and Budget and the Office of Personnel Management, and the Human Resource Committee of the Chief Financial Officers' Council. We will make copies available to others on request. Please contact me at (202) 512-9095 if you or your staffs have any questions. Major contributors to this report are listed in appendix IV. We received survey responses from the corporate-level CFO offices of 34 Fortune 100 companies and from 54 divisions or subsidiaries of these companies. While no corporate offices requested anonymity, one subsidiary requested not to be listed, and we honored that request. Respondents agreeing to be listed as participants in our study are the following: AlliedSignal Corporate-level CFO Office Engineered Materials Division American Airlines Corporate-level CFO Office The SABRE Group Holdings, Inc. American Express Company Corporate-level CFO Office AMOCO Corporation Corporate-level CFO Office Petroleum Products Energy Group North America International Operations Group AT&T Corporate-level CFO Office BankAmerica Corporation Financial Accounting Shared Services Retail Business Finance Support Groups Business Finance Wholesale Business Finance Commercial Wealth Management Business Finance BellSouth Corporation Corporate-level CFO Office Telecommunications Advertising and Publishing Corporation Cellular Corporation International The Boeing Company Corporate-level CFO Office Commercial Airplane Group Defense and Space Group Information Support Services Group Bristol-Myers Squibb Corporation Financial Shared Services Worldwide Medicines Group Clairol, Inc. ConvaTec Zimmer, Inc. Mead Johnson Nutritionals Chase Manhattan Corporate-level CFO Office Chevron Corporation Corporate-level CFO Office Chrysler Corporation Corporate-level CFO Office Citibank, N.A. Corporate-level CFO Office E.I. du Pont de Nemours Corporate-level CFO Office Federal National Mortgage Association Corporate-level CFO Office General Electric Company Corporate-level CFO Office Hewlett-Packard Company Corporate-level CFO Office Test and Measurement Measurement Systems Organization Consumer Products Group International Business Machines Corporate-level CFO Office J.C. Penney Company Corporate-level CFO Office Eckerd Corporation Insurance Group Catalog Division J.P. Morgan Corporate-level CFO Office Johnson & Johnson Corporate-level CFO Office Lehman Brothers Holdings, Inc. Corporate-level CFO Office Lockheed Martin Corporation Corporate-level CFO Office Tactical Aircraft Systems Astronautics Missiles and Space Aeronautical Systems Electronics and Missiles MCI Communications Corporate-level CFO Office Telecommunications Business Services Division Mass Markets Division MCI International, Inc. Metropolitan Life Insurance Corporate-level CFO Office Institutional Financial Management Division Individual Business Division Capital Corporation Canadian Operations Division Property and Casualty Division Mobil Corporation Corporate-level CFO Office NationsBank Corporation Finance Group New York Life Insurance Company Corporate-level CFO Office SBC Communications, Inc.
Corporate-level CFO Office Southwestern Bell Telephone Southwestern Bell Yellow Pages Southwestern Bell Wireless Southwestern Bell Mobile Systems Sprint Corporate-level CFO Office Long Distance Division Local Service Division SuperValu, Inc. Corporate-level CFO Office Cub Foods Midwest Region Northern Region Save-A-Lot, Ltd. In addition to the above individuals, the contributions of the following individuals and organizations are acknowledged: Thomas Fritz, President, Private Sector Council, Washington, D.C.; Relmond Van Daniker, Executive Director, and Patricia O'Connor, Program Manager, National Association of State Auditors, Comptrollers, and Treasurers, Lexington, Kentucky; and James Sorensen, Professor of Accounting, School of Accountancy, University of Denver. These individuals reviewed and commented on drafts of the survey instrument and the report, organized pretests, and/or assisted with the survey distribution. In addition, the Colorado State Auditor's Office implemented an early version of the survey with State of Colorado agencies and departments and provided valuable input to us on the results. Association of Government Accountants. A Blueprint for Attracting and Retaining Financial Management Personnel. A Report by a Blue Ribbon Task Force of the Association of Government Accountants. Gary Siegel Organization, Incorporated. The Practice Analysis of Management Accounting. A research project of the Institute of Management Accountants. Montvale, New Jersey: 1996. Holdman, John B., Jeffrey M. Aldridge, and David Jackson. "How to Hire Ms./Mr. Right." Journal of Accountancy, August 1996, pp. 55-57. Jablonsky, Stephen F., and Patrick J. Keating. "Financial Managers: Business Advocates or Corporate Cops?" Management Accounting, Vol. 76, No. 8 (February 1995), p. 21. Joint Financial Management Improvement Program. Continuing Professional Education: Federal GS-510 Accountants' Report. Washington, D.C.: December 1990. __________. Framework for Core Competencies for Financial Management Personnel in the Federal Government. A joint project of the Human Resources Committee of the Chief Financial Officers Council and the Joint Financial Management Improvement Program. Washington, D.C.: November 1995. Siegel, Gary, and James E. Sorensen. What Corporate America Wants in Entry-Level Accountants. A joint research project of the Institute of Management Accountants and the Financial Executives Institute. Montvale, New Jersey: August 1994. Siegel, Gary, C.S. Kulesza, and James E. Sorensen. "Are You Ready for the New Accounting?" Journal of Accountancy, August 1997, pp. 42-46. U.S. General Accounting Office. Developing and Using Questionnaires. GAO/PEMD-10.1.7, October 1993. __________. Federal Workforce: A Framework for Studying Its Quality Over Time. GAO/PEMD-88-27, August 1988. __________. Financial Management: Challenges Facing DOD in Meeting the Goals of the Chief Financial Officers Act. GAO/T-AIMD-96-1, November 14, 1995.
Pursuant to a congressional request, GAO reviewed the background and training of key financial personnel at 34 of the largest private corporations and 19 of the largest state governments in the United States, focusing on: (1) education, work experience, training, and professional certifications of their key management personnel; and (2) training and qualification requirements for these personnel. GAO noted that: (1) while a majority of Fortune 100 and state government financial management personnel held undergraduate degrees in accounting or other business fields, personnel in chief financial officer (CFO) and controller positions were more likely to also hold advanced degrees; (2) in both sectors, managers and supervisors of financial analysis were more likely to hold advanced degrees than their counterparts in financial reporting and accounting operations; (3) in the Fortune 100 companies, the most common advanced degree was a Master of Business Administration (MBA); (4) in state governments, MBAs and other master's degrees were both prevalent; (5) accounting, auditing, and systems experience of financial management personnel averaged about 14 years for Fortune 100 companies and about 20 years for state government organizations; (6) for each sector, the majority of the work experience was in corporate or governmental accounting and finance, respectively; (7) combined experience in public accounting, internal auditing, and accounting systems design and maintenance averaged 2.5 years for the Fortune 100 respondents and about 4 years for the state government respondents; (8) these fields often provide personnel with a broad base of experience with accounting and other organizationwide issues; (9) continuing professional education training was encouraged in Fortune 100 and state government organizations responding to GAO's survey; (10) on average, Fortune 100 and state government financial management personnel completed about 26 and 31 hours of training, respectively, in 1996; (11) respondents from both groups received the majority of their training in technical accounting subjects; (12) about 70 percent of Fortune 100 respondents and 45 percent of state government respondents set aside from 1 to 2 percent of their budgets for financial management staff salaries and benefits to train these staff each year; (13) as for professional certifications, over 40 percent of the Fortune 100 and about 30 percent of the state controllers and managers and supervisors of financial reporting were certified public accountants; and (14) in addition, about 10 percent of state government personnel, across positions, were certified government financial managers.
The creation of DHS presents enormous leadership challenges and opportunities in multiple management areas. Sustained and inspired political and career leadership will be essential to successfully implementing the transformation of DHS. Success will also largely depend on the department's ability to attract and retain the right people; set the appropriate priorities for the department; and build effective partnerships with the appropriate public, private, and not-for-profit sector entities. In establishing the new department, the Congress articulated a seven-point mission for DHS: Prevent terrorist attacks within the United States. Reduce the vulnerability of the United States to terrorism. Minimize the damage and assist in the recovery from terrorist attacks. Carry out all functions of entities transferred to the department, including by acting as a focal point regarding natural and man-made crises and emergency planning. Ensure that the functions of the agencies within the department that are not directly related to securing the homeland are not diminished or neglected. Ensure that the overall economic security of the United States is not diminished by efforts aimed at securing the homeland. Monitor connections between illegal drug trafficking and terrorism, coordinate efforts to sever such connections, and otherwise contribute to efforts to interdict illegal drug trafficking. DHS is generally organized into four mission-related directorates: Border and Transportation Security, Emergency Preparedness and Response, Science and Technology, and Information Analysis and Infrastructure Protection. The Border and Transportation Security directorate consolidates the major border security and transportation operations under one roof, including the U.S. Customs Service, parts of the Immigration and Naturalization Service (INS), the Transportation Security Administration (TSA), the Federal Law Enforcement Training Center (FLETC), the Federal Protective Service, the Office for Domestic Preparedness from the Department of Justice (DOJ), and part of the Animal and Plant Health Inspection Service (APHIS). The Emergency Preparedness and Response directorate integrates domestic disaster preparedness training and government disaster response and includes the Federal Emergency Management Agency (FEMA), the Strategic National Stockpile and the National Disaster Medical System, the Nuclear Incident Response Team, the Domestic Emergency Support Teams from DOJ, and the National Domestic Preparedness Office from the Federal Bureau of Investigation (FBI). The Science and Technology directorate coordinates efforts to apply scientific and technological advantages to securing the homeland and will include CBRN Countermeasures Programs, the Environmental Measurements Laboratory, the National Bio-Weapons Defense Analysis Center, and the Plum Island Animal Disease Center. The Information Analysis and Infrastructure Protection directorate accesses and analyzes intelligence, law enforcement data, and other information involving threats to homeland security and evaluates vulnerabilities, drawing on information from state and local agencies, the private sector, and federal agencies such as the Central Intelligence Agency (CIA), the FBI, and the National Security Agency (NSA). It includes the Critical Infrastructure Assurance Office, the Federal Computer Incident Response Center, the National Communications System, the National Infrastructure Protection Center, and the energy security and assurance program activities of the Department of Energy.
In addition to the four mission-related directorates, the U.S. Secret Service and the U.S. Coast Guard remain intact as distinct entities in DHS; INS adjudications and benefits programs report directly to the Deputy Secretary as the Bureau of Citizenship and Immigration Services; and the Management Directorate is responsible for budget, human capital, and other general management issues. DHS has approximately 155,000 civilian positions and 54,000 military positions in the U.S. Coast Guard, for a total of just over 209,000. (See table 1.) Of the civilian employees, a vast majority transferred from seven organizations: TSA, INS, Customs, FEMA, the U.S. Coast Guard, the U.S. Secret Service, and APHIS. Of the civilian employees who transferred from these seven organizations, approximately 90 percent are stationed outside the Washington, D.C. metropolitan area. DHS employees work in over 300 metropolitan statistical areas. These employees serve in positions ranging from inspectors, investigators, police, and intelligence to attorneys and administrative services. DHS employees are compensated under multiple pay and benefits systems, are hired using varied authorities, and undergo performance appraisals with different rating scales and factors. According to OPM, just over 49,000, or just under one-third, of DHS civilian employees are represented by unions. This includes 16 different unions divided into 75 separate bargaining units. The three unions representing the largest numbers of employees are AFGE, NTEU, and NAAE. AFGE represents almost 33,000 employees who were transferred from INS, the U.S. Coast Guard, FEMA, and others. NTEU represents over 12,000 employees who were transferred largely from Customs. NAAE represents just over 2,000 employees who were transferred from APHIS. DHS's and OPM's effort to design a new human capital system is collaborative and facilitates participation of employees from all levels of the department. The process is divided into three stages: research, outreach, and drafting of initial personnel system options; review of the options; and development of proposed regulations. First, the Core Design Team conducted research on human capital approaches, communicated with and gathered feedback from employees, and developed options. Second, the Senior Review Advisory Committee will review these options and forward its recommendations to the DHS Secretary and OPM Director. Third, the Secretary and Director will then propose draft regulations for the human capital system, engage in the statutory collaboration period, and issue final regulations by early 2004. The stages include employees from DHS and OPM, as well as representatives from the department's three largest unions. This process is described in further detail in appendix II. As figure 2 shows, the Core Design Team, the first stage of the design process, is responsible for research, outreach, and drafting initial options for the personnel system. This group is led by an equal number of DHS and OPM executives. Members of the Core Design Team, which includes employees from headquarters, the field, and unions, are full-time participants who work on one of two subgroups: (1) pay and performance or (2) labor and employee relations—reflecting the areas of Title 5 from which DHS may deviate. The work of the Core Design Team is expected to result in a broad range of options for the Senior Review Advisory Committee by late September 2003. The second stage of the design process consists of the Senior Review Advisory Committee.
The committee's members include top executives from DHS, OPM, and the three major unions, and they are advised by a team of external human capital experts. The committee has less than a month to review the system options and forward its version of them for the Secretary and Director to consider. The committee's time frame for completing this task is October 2003. During its public deliberations, the committee may choose to eliminate, create, or prioritize the options, or it may recommend implementation strategies. In the third stage of the design process, once the Secretary and Director receive the list of options from the Senior Review Advisory Committee, they may edit, remove, or develop alternatives to the proposed options. They expect to announce the proposed regulations in November 2003, which will trigger the statutory collaboration process so final regulations can be issued in early 2004. As called for in the legislation, employee representatives have 30 calendar days to comment and make recommendations. The Secretary and Director are then to follow the provisions of the statutory reconciliation process for no less than 30 days. DHS and OPM leaders have consistently underscored their personal commitment to the design process and speak openly in support of it. When the DHS legislation was under consideration, we testified that the single most important element of successful reorganizations is the sustained commitment of top leaders. In our report that describes the key practices for successful mergers and transformations, we note that top leadership that is clearly and personally involved provides stability and an identifiable source for employees to rally around during tumultuous times. The role of top leaders is also to ensure that transformation efforts stay on course by setting priorities, focusing on critical issues, and demonstrating a commitment to change. DHS and OPM leaders are fulfilling these critical roles. For example, the DHS Under Secretary for Management and OPM's Senior Advisor for Homeland Security cochair the Senior Review Advisory Committee. Other committee members are officials in key leadership positions at both OPM and DHS and the presidents of the three major unions. Senior officials from DHS, OPM, and DHS's three largest unions are directly involved in the workings of the Core Design Team. Top leaders of DHS and OPM addressed employees at the Town Hall meetings, expressing their support for the transformation, and solicited feedback from those employees. Specific examples include the Under Secretary for Management writing to DHS employees in April and May 2003 to express her support of the design process and participating in a Town Hall meeting. Additionally, the Under Secretary for Border and Transportation Security participated in several Town Hall meetings to express his ongoing support of the design process and to respond to questions from DHS employees. The Under Secretary for Emergency Preparedness and Response and the Commandant of the U.S. Coast Guard also participated in Town Hall meetings. At these meetings, union leaders have stood next to the agency leadership to express their support for the process, according to agency officials. Similarly, OPM's Associate Director for Strategic Human Resources Policy and OPM's Senior Advisor for Homeland Security also addressed DHS employees at Town Hall meetings and responded to their questions.
DHS will need to ensure that the development of the human capital policy options by the Core Design Team is integrated with the accomplishment of DHS programmatic goals as defined in the forthcoming strategic plan. Agency officials indicate that it is their intention that the personnel system design will be consistent with the strategic plan. We have reported, and the President's Management Agenda reiterates, that leading organizations develop their workforce approaches as part of a strategic human capital plan as strategies for accomplishing their mission and programmatic goals. In light of this, we previously stated that the success of the DHS transformation requires the department to link its human capital strategy with its homeland security strategy. DHS is currently developing a strategic plan. This effort began in mid-June and is expected to be completed by the end of September 2003, a target set by the Office of Management and Budget (OMB). As explained previously, the Core Design Team began its work in late April 2003 and expected to report its proposed options in late September 2003. According to a DHS official leading the strategic planning effort, human capital officials are engaged in drafting the strategic plan. DHS human capital officials confirmed that they have reviewed drafts of the strategic plan. Moving forward, it is critical that the Senior Review Advisory Committee, the Secretary, and the Director make the link between the new human capital system and the accomplishment of DHS's goals as outlined in the DHS strategic plan. Once a strategic plan is in place, DHS can then develop a strategic human capital plan that, in part, identifies core competencies for staff as a tool for attracting, developing, and rewarding contributions to mission accomplishment. For example, these competencies will be critical to creating a performance management system (a key task of the Core Design Team) that aligns daily operations with organizational goals and creates a "line of sight" showing how team, unit, and individual performance can contribute to organizational results. In December 2002, we recommended that DHS, in conjunction with OPM and OMB, create an effective performance management system. Furthermore, if DHS decides to design and implement a pay-for-performance system, a set of strategic goals and validated competencies will be required so that DHS can identify the outcomes and results that employees are to be rewarded for accomplishing. The Secretary and Director outlined four principles to serve as a framework for the Core Design Team during the team's first meeting in April: The system has to support both the mission and the people charged with implementing the mission. Design Team members must leave preconceived notions at the door; they have an opportunity and responsibility to create a 21st century personnel system that is fair, performance based, and flexible. DHS must preserve and protect basic civil service principles. DHS must hold people at all levels accountable for performance; the agency will link individual performance to organizational goals, with the ability to identify and reward exceptional service and deal with chronic poor performance. In doing so, DHS can be a department that stands as a model of excellence. These principles can serve as core values for human capital management at DHS, values that define the attributes that are intrinsically important to what the new organization does and how it will do it.
Furthermore, they represent the institutional beliefs and boundaries that are essential to building a new culture for the organization. Finally, they appropriately identify the need to support the mission and employees of the department, protect basic civil service principles, and hold employees accountable for performance. On July 25, 2003, the Core Design Team presented a set of five principles to the Senior Review Advisory Committee as a guide for developing the options to be presented in late September. These principles were drafted by the Core Design Team and reviewed by the field team, using the original four principles proposed by the Secretary and Director as a guide. The five principles are that the options developed should (1) be mission centered, (2) be performance focused, (3) be contemporary and excellent, (4) generate respect and trust, and (5) be based on merit system principles and fairness. Consistent with the principles outlined by the Secretary and Director and those presented to the Senior Review Advisory Committee, our interviews with the human resource leaders in the five largest DHS components identified two areas that they would like the new human capital system to address: the new DHS personnel system should provide for competitive, performance-based pay and should give managers the ability to quickly hire the right people with the skills the agency needs. First, individuals we interviewed hoped that the new system would address their concerns about the disparities in pay rates across DHS and expressed an interest in implementing performance-based pay, linked to the accomplishment of DHS's mission, so that employees are more accountable. Two indicated that they would like the Core Design Team to propose legislation to address the differences in premium pay that currently exist. Second, and beyond the immediate task of the Core Design Team, there was an overwhelming interest in simplifying the hiring process. Officials in one component expressed their discontent with the amount of time between when a position is announced and when it is actually filled. One executive expressed an interest in more flexibility in hiring because of the perception that the current hiring process is understandable only to those already in the federal government. DHS and OPM established a 9- to 10-month timeline for completing the design process with the expectation that the final regulations will be issued in early 2004. Agency officials have publicized this timeline at Town Hall meetings across the country. Our reports on the successful practices of mergers and transformations have noted that the establishment of a timeline with specific milestones allows stakeholders to track the organization's progress toward its goals. Publicizing the timeline and meeting its milestones can build momentum and demonstrate that real progress is being made. The design process officially began in early April 2003 when the Core Design Team convened for a 2-week leadership conference to learn about the various human capital management systems within the component agencies as well as those in other federal agencies and private firms. The Core Design Team began its research full time in late April. This team is expected to present its broad range of options to the Senior Review Advisory Committee in late September 2003. The Senior Review Advisory Committee is allotted less than a month to develop its set of options in October 2003.
The Secretary and Director will then select the options that will be submitted as officially proposed regulations available for comment. They expect to announce the proposed regulations in November 2003, which will trigger the statutory collaboration process so final regulations can be issued in early 2004. Although the establishment of a clear timeline is positive, a majority of DHS stakeholders we interviewed expressed concerns about its compressed schedule. There is some understanding that the timeline reflects an effort to take into account the final regulations in preparing the fiscal year 2005 budget that is submitted to the Congress in early 2004. However, a number of human resource directors said the "self-imposed, short" timeline would pose significant challenges for the Design Team. One director commented that the timeline was "ambitious" considering the amount of information that needs to be collected and analyzed. Most directors agreed that the lack of sufficient time to perform these tasks could prevent the Design Team from completing its work or cause it to propose options that had not been thoroughly researched. Furthermore, another stakeholder suggested that the timeline appears to allocate too much time to the development of options and not enough time to the consideration of which options to adopt. On the other hand, DHS and OPM leaders of the design effort agree that the timeline is aggressive, but said that a shorter time frame will serve to minimize employee anxiety. In addition, they said a tight design time frame is needed to provide adequate time for implementation, evaluation, and modification within the 5-year statutory window available for establishing the new system. While it is appropriate to develop and integrate the human capital systems within the department in a quick and seamless manner so that the department can begin to function as a cohesive entity, moving too quickly or prematurely can significantly raise the risk of doing it wrong. Having an ambitious timeline is reasonable only insofar as it does not compromise the quality of the human capital system that is created. Overall, the members of the Core Design Team represent multiple organizational components and the three major unions. The composition of the team is important because of the signal it sends regarding which components are dominant and which are subordinate, or whether the new organization is a "merger of equals." It also helps employees see that they are being represented and that their views are being considered in the decision-making process. The 48 participants on the Core Design Team include personnel experts from OPM and from DHS and its component agencies; line employees and managers from DHS headquarters and field offices; and professional staff from the three major unions. Specifically, the Core Design Team is composed of 24 DHS employees, 16 employees from OPM, and 8 professional staff from the unions. This includes 27 staff members, 5 supervisors, 12 managers, and 3 executives. Additionally, just over 60 percent of the members consider themselves human capital professionals, and about two-thirds have experience outside headquarters. (See figs. 3 and 4.) The majority of human resource officials we interviewed consider themselves to be adequately represented on the Core Design Team. Other characteristics of the team members are described in appendix III.
According to DHS officials, DHS-specific slots on the Core Design Team were filled by individuals chosen by agency executives after determining the number of seats to be allocated to the different agency components. In selecting team members, officials sought representation from across the organizational components of the department, individuals with field experience, and individuals with some expertise in human resources management. Race, gender, and occupational diversity were other factors considered when selecting participants. Additionally, NAAE selected one DHS employee to participate on the team, and AFGE and NTEU each selected four professional staff members to participate. DHS recently completed a noteworthy communications strategy that provides a structured and planned approach to communicating with DHS stakeholders regarding the human capital system. The objectives of the plan are to raise awareness, disseminate information, and promote a clear understanding of the new human capital system; manage stakeholder expectations and address their concerns; and provide opportunities for a two-way dialogue. We have recently reported that organizations undergoing a transformation should establish a communication strategy that ensures a consistent message is delivered and seeks to genuinely involve stakeholders in the process. The communications plan, completed in June 2003, represents an important and substantive effort and contains four broad pieces that are consistent with the key practices we have identified as important to successful communication during transformations. First, the plan identifies internal and external stakeholders, the concerns of each stakeholder group, and the specific communication channels to be used to communicate with that stakeholder group. Second, the plan articulates the key messages to be delivered to each stakeholder group. Third, an action plan identifies the communication channel to be used, the timeline for its use, and the DHS and OPM staff responsible for implementation. Finally, the plan identifies the feedback mechanisms to be used to ensure there is a two-way dialogue. Moving forward, DHS faces some challenges in successfully implementing its communications plan. First, in addition to the key messages articulated in the plan, DHS will need to provide information to clarify areas of confusion that were identified during our interviews. These include: the roles OPM, DHS, and the Senior Review Advisory Committee have in the process; the factors that will influence the Secretary and Director's final decisions on which options to propose; the role of the contractor in the design process; the likelihood of the Core Design Team drafting legislative proposals for areas DHS does not have authority to change (i.e., premium pay and hiring); the possibility of there being multiple personnel systems instead of one; and the implementation process. A second challenge will be to ensure that preexisting communication channels within each departmental component deliver a message that is consistent in tone and content with the central communication strategy. We learned from three of the five components we interviewed that they use additional vehicles for providing information to, and receiving information from, employees. It may be appropriate to coordinate the messages sent to employees through these additional vehicles to minimize the perception that certain groups of employees are getting the "real" story.
Building on the current effort, DHS will need to provide adequate opportunities for feedback once the options are released, including providing an adequate level of detail on how the new system will impact employees. The feedback mechanisms identified in the communications plan focus on gathering employee feedback prior to the options being released. For example, two of the three feedback mechanisms outlined in the communications plan will be completed before the system options are publicized. DHS also needs to ensure effective communication to employees and stakeholders after the options are released. For example, DHS should consider describing to employees how the comments collected during the Town Hall meetings and focus groups informed the design process. Furthermore, once options are selected, DHS will be faced with communicating how the changes will impact specific jobs, rights and protections, and daily responsibilities. DHS may find it necessary to further tailor and customize the details of the new human capital system to meet the specific needs of employees. Employee perspectives on the design of the DHS human capital system are sought through many mechanisms, including the Core Design Team with its members from multiple DHS components, Town Hall meetings, focus groups, the field team, and an e-mail mailbox for employee comments. This reflects the Congress' desire that employees be allowed to participate in a meaningful way in the creation of the new human capital system. Involving employees in planning helps to develop agency goals and objectives that incorporate insights about operations from a front-line perspective. It can also serve to increase employees' understanding and acceptance of organizational goals and improve motivation and morale. The design process attempts to include employees by creating multiple opportunities for them to provide feedback. In addition to activity updates provided in the DHS weekly newsletter and an e-mail mailbox for employees to submit their suggestions and comments, multiple Town Hall meetings and focus groups were held in ten cities across the United States between the end of May and the beginning of July 2003. According to DHS and OPM officials, these cities were chosen to ensure adequate representation of major DHS components and geographic diversity. The goal of the events was to promote two-way communication between management and employees and to gather employee perspectives on the personnel practices that exist in their agency and any proposed changes they would like to see. Each meeting hosted up to 200 DHS employees from the surrounding cities. At a typical Town Hall meeting, there was a general question and answer segment in which local employees had the opportunity to ask questions about the new system and express their overall concerns about DHS. If participants' questions could not be addressed during the meeting due to time constraints, they could write their questions on note cards and give them to cognizant DHS and OPM officials in attendance. After the meeting, the Core Design Team held a series of six focus group sessions in each city to obtain employees' input and suggestions for the new human resource system. In most cities, five of the six sessions were devoted to hearing employees' views, while the remaining session heard the views of supervisors and managers. Participants in the focus groups included both Town Hall meeting attendees and those who were not able to attend the Town Hall session.
The degree to which the information gathered in these sessions was used to inform the design process is not yet evident. On one hand, the Town Hall meetings and focus groups gathered suggestions and concerns from large numbers of employees from multiple organizational components in geographically diverse locations. However, once options for the human capital system are proposed, it will be particularly important that employees have adequate opportunities to make a worthwhile contribution. In addition to the Town Hall meetings and focus groups, a field team made up of 32 front-line DHS managers and staff, some of whom were selected by the major unions, was formed. During the design process, the field team provided insights about the department's human capital challenges from a front-line perspective. These insights were gathered during the group's three meetings: the field team was convened during the first week of the 2-week April leadership conference, for 2 days in July to react to the subgroups' research, and for 2 days again in mid-September to react to the draft personnel system options before their submission to the Senior Review Advisory Committee in late September. According to documents drafted before the April leadership conference, provided by AFGE and NAAE, it was originally expected that the field team would review the work of the Core Design Team on a "regular basis" and then be used to "test the options against workplace realities." One stakeholder added that it was his initial impression that the field team would serve as an "extension of the Core Design Team," empowered to provide input throughout the entire design process. However, over time, the expected role of the field team evolved to that of a recurring focus group that had no formal decision-making role in the design process. Likewise, as the role of the field team evolved, so did its membership: additional nonunionized DHS employees were added to the team. One DHS official acknowledged that the field team has not had a great deal of involvement in the process and that the expected role of the team changed over time. Officials in NTEU, AFGE, and NAAE additionally confirmed that the role of the field team changed over time. One union president described the diminished role as a "missed opportunity." This official added that the lack of involvement and minimal communication with the Core Design Team has made it difficult for the field team to make a worthwhile contribution. DHS and OPM have developed a process to design the new personnel system that is stimulated and supported by top leadership in both organizations and is generally inclusive, both in terms of the membership of the Core Design Team and the multiple opportunities to provide input. The process is also guided by core principles and an ambitious timeline. Our research shows that these key attributes are indispensable to successful transformations. This design process provides a model for DHS to consider as it makes other important decisions about the implementation and transformation of the department. Building on this progress, DHS will need to ensure that the development of the human capital policy options by the Core Design Team is integrated with the accomplishment of DHS programmatic goals as defined in the forthcoming strategic plan. Such a linkage can ensure that the new human capital approaches support and facilitate the accomplishment of DHS's goals and objectives, a fundamental principle of the human capital idea.
It will also assist the Core Design Team in identifying human capital programs that support the DHS mission, including the development of a performance management system that creates a "line of sight" showing how team, unit, and individual performance can contribute to overall organizational goals. Additionally, DHS has acknowledged that work lies ahead in implementing better, more effective ways to communicate with and receive feedback from its employees. The development of the communications plan is an important and positive step. As DHS implements this plan, it will need to provide information on areas of confusion that were identified during our interviews, including clarifying the role of DHS versus OPM in the system's development. DHS will also need to ensure that a consistent message is communicated across DHS components. Finally, effective communication, characterized by a two-way dialogue, will be central to engaging employees in the remainder of the design process and ensuring it is transparent. Ultimately, an effective two-way communication strategy can ease implementation efforts. Once options for the human capital system are proposed, it will be particularly important that employees have adequate opportunities to make a worthwhile contribution. Substantial involvement of field staff in the development and implementation of the new human capital system is essential given that over 90 percent of DHS civilian employees are in the field. Continued employee involvement will help to strengthen employee buy-in to the new human capital system. Once solicited employee feedback is received, it is important that DHS consider it and make any appropriate changes.

DHS has developed an effective process to begin the formation of its new human capital system. Moving forward, it is critical that the new human capital system be linked to the DHS strategic plan and that DHS continue to communicate with and involve its employees. Accordingly, we are recommending that, once the strategic plan is completed, the Secretary of DHS and the Director of OPM ensure that the options selected for the new human capital system support and facilitate the accomplishment of the department's strategic goals and objectives, as identified in the new strategic plan. In addition, we recommend that the Secretary of DHS clarify the role of the participants in the design effort and address other areas of confusion identified by stakeholders during our interviews. Furthermore, consistent with the DHS communications plan, we recommend that the Secretary ensure that the message communicated across DHS components is consistent, and that the Secretary maximize opportunities for two-way communication and employee involvement through the completion of the design process, the release of the system options, and implementation, with special emphasis placed on seeking the feedback and buy-in of front-line employees in the field.

OPM provided written comments on a draft of this report, which are printed in appendix IV. DHS provided technical comments by e-mail. DHS and OPM generally agreed with the contents of the report. However, both DHS and OPM expressed concern that we misunderstood the role of the field team in the design process. Each described the role of the field team as more limited than our original understanding.
While gathering additional information from DHS, NTEU, AFGE, and NAAE to clarify the role and activities of the field team, we learned that its role evolved over the course of the design effort, that it had no decision-making role in the design process, and that it was used as a recurring focus group. Accordingly, we changed the draft to reflect the field team's current role. DHS and OPM also provided a number of technical suggestions that have been incorporated where appropriate.

We are sending copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; the Chairman and Ranking Minority Member, House Committee on Government Reform; the Chairman and Ranking Minority Member, House Select Committee on Homeland Security; and other interested congressional parties. We will also send copies to the Secretary of the Department of Homeland Security and the Director of the Office of Personnel Management. Copies will be made available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Ed Stephenson at (202) 512-6806. Key contributors to this report are listed in appendix V.

Implementing large-scale change management initiatives, such as mergers and organizational transformations, is not a simple endeavor and requires the concentrated efforts of both leadership and employees to realize intended synergies and to accomplish new organizational goals. At the center of any serious change management initiative are people—people define the organization's culture, drive its performance, and embody its knowledge base. Experience shows that failure to adequately address—and often even consider—a wide variety of people and cultural issues is at the heart of unsuccessful mergers and transformations. Recognizing the "people" element in these initiatives and implementing strategies to help individuals maximize their full potential in the new organization, while simultaneously managing the risk of reduced productivity and effectiveness that often occurs as a result of the changes, is the key to a successful merger and transformation. Thus, mergers and transformations that incorporate strategic human capital management approaches will help to sustain agency efforts and improve the efficiency, effectiveness, and accountability of the federal government.

GAO convened a forum on September 24, 2002, to identify and discuss useful practices and lessons learned from major private and public sector organizational mergers, acquisitions, and transformations, in order to help federal agencies, including DHS, implement successful cultural transformations. The invited participants were a cross section of leaders who have had experience managing large-scale organizational mergers, acquisitions, and transformations, as well as academics and others who have studied these efforts. We reported the key practices participants identified that can serve as the basis for subsequent consideration as federal agencies seek to transform their cultures in response to governance challenges. Since convening the forum, our additional work has identified specific implementation steps for these practices. (See fig. 5.)

The process for creating a DHS human capital management system, jointly developed by DHS and OPM, calls for a design team made up of DHS and OPM employees and union representatives.
The process is divided into three stages: research, outreach, and drafting of initial personnel system options; review of the options; and development of proposed regulations. The personnel system's final regulations are expected to be issued in early 2004.

As the first stage of the design process, the Core Design Team engaged in efforts that serve as the basis for the work of the other two components. The 48 team participants included personnel experts from OPM, DHS, and its component agencies; line employees and managers from DHS headquarters and field offices; and professional staff from the three major unions. Members were assigned to one of two subgroups focusing on (1) pay and performance or (2) labor and employee relations. The management consulting firm Booz Allen Hamilton assisted the teams in their efforts. Each subgroup had two coleaders, one from OPM and one from DHS, to guide it. The subgroups performed their duties both collectively and separately. They convened jointly when there were common issues to discuss or to listen to presentations on human capital systems. For example, the teams heard presentations on the performance management and performance-based pay system at the Internal Revenue Service (IRS); the human capital management systems at the FBI and NSA; and the performance management, pay banding, and employee appeals process used at GAO.

The pay and performance subgroup focused its work on the three chapters of Title 5 covering performance appraisal, classification, and pay rates and systems. According to the subgroup's leaders, they identified 25 researchable areas and assigned small teams to explore each. Subgroup members were assigned to work on multiple teams. Research areas included, for example, the structure of pay ranges, methods for categorizing types of work, and different appraisal and rating methods. When asked about the initial findings of their research, the leaders of the pay and performance subgroup indicated they identified many pay systems to consider and evaluate.

The labor and employee relations subgroup focused on the three chapters of Title 5 covering labor-management relations, adverse actions, and appeals, to narrow its research. To gain a better understanding of these issues, the group invited agencies such as the Merit Systems Protection Board and the Federal Labor Relations Authority to make presentations. Areas that were researched included, for example, different levels of employee, union, and management rights; negotiation models; and how the success of labor relations programs, adverse action systems, and appeals systems is evaluated. According to the subgroup leaders, they also researched both leading and failed practices in their subject areas. The group created interview guides to collect information in a consistent format. When asked about the initial findings of the research, the subgroup reported difficulty in identifying innovative labor relations models that can be applied to the federal system.

To help facilitate its efforts in the design of the personnel system, DHS contracted with the management consulting firm Booz Allen Hamilton to provide support in project management, research, writing, staff support, and communications and publicity. In addition, the contractor was responsible for planning the Town Hall meetings and facilitating the focus groups. According to the subgroup leaders, the contractor was expected to help design the format for the option papers but would not likely be involved in drafting the substance of the options.
The Senior Review Advisory Committee, the second stage of the design process, will receive the broad set of options from the Core Design Team. From this set of options the committee is expected to develop its final list of options for the Secretary and Director to consider. Committee members are permitted to eliminate, create, or prioritize the options. In communicating its list of options to the Secretary and Director, it may present the strengths and weaknesses of each. This committee could potentially make recommendations related to implementation strategies. Meetings of the Senior Review Advisory Committee will be governed by the Federal Advisory Committee Act, which requires meetings to be open to the public. The Under Secretary for Management at DHS and the OPM Senior Advisor for Homeland Security cochair the Senior Review Advisory Committee. Committee members are officials in key leadership positions at both OPM and DHS. OPM representatives include the Senior Advisor for Homeland Security, the Associate Director for Strategic Human Resources Policy, the Associate Director for Human Capital Leadership and Merit System Accountability, and the Senior Policy Advisor to the Director and Chief Human Capital Officer. DHS representatives include the Commissioner of Customs and Border Protection, the Director of TSA, the Director of the U.S. Secret Service, the Director of the Bureau of Citizenship and Immigration Services, and the Director of Administration. Union representatives are the presidents of AFGE, NTEU, and NAAE. External experts with particular knowledge and experience in human capital management will serve as advisors.

The Secretary of DHS and the OPM Director make up the final stage of the design process. Once they receive the list of options from the Senior Review Advisory Committee, they may edit, remove, or develop alternatives to the proposed options. The Secretary and the Director will then issue proposed personnel rules for the department. As called for in the DHS legislation, individuals affected by the proposed rules have 30 calendar days to comment and make recommendations. The Secretary and Director are then to follow the provisions of the statutory reconciliation process for no less than 30 days.

Characteristics of the 48 members of the Core Design Team are described in further detail in tables 2 through 6 below. The tables summarize data for those members on board as of July 11, 2003. Since that date, membership of the Core Design Team has changed.

In addition to the persons named above, Ellen V. Rubin, Tina Smith, Eric Mader, and Lou V.B. Smith made key contributions to this report.
The success of the transformation and implementation of the Department of Homeland Security (DHS) is based largely on the degree to which human capital management issues are addressed. Recognizing this, the legislation creating DHS provided it with significant flexibility to design a modern human capital management system. Congressional requesters asked GAO to describe the process DHS has in place to design its human capital system and involve employees, and to analyze the extent to which this process reflects elements of successful transformations.

The effort to design a human capital management system for DHS generally reflects important elements of effective transformations.

(1) Leadership: One of the strengths of the effort to transform the culture of organizations going into DHS has been the ongoing commitment of both DHS and Office of Personnel Management (OPM) leaders to stimulate and support the effort to design a human capital system.

(2) Strategic Goals: DHS is currently developing a strategic plan. Although DHS human resource leaders are included on the strategic planning team, the plan will not be complete until the end of September 2003. Consequently, DHS will need to ensure that the development of the human capital policy options is integrated with the accomplishment of DHS programmatic goals as defined in the forthcoming strategic plan. Such integration is important to ensure that the human capital system enables the department to acquire, develop, and retain the core competencies necessary for DHS to accomplish its programmatic goals.

(3) Key Principles: The DHS Secretary and OPM Director outlined four principles to serve as a critical framework for the human capital system. These principles appropriately identify the need to support the mission and employees of the department, protect basic civil service principles, and hold employees accountable for performance.

(4) Timeline: Agency officials established an ambitious 9- to 10-month timeline for completing the design process, aiming to issue final regulations in early 2004. Some DHS stakeholders we interviewed expressed concerns about the compressed schedule. Officials leading the design effort report that the aggressive schedule is necessary to relieve employee anxiety and maximize the time available for implementation.

(5) Design Team: The design team includes staff from multiple organizational units within DHS, OPM, and the three major unions.

(6) Communication: DHS recently finalized a communications plan that provides a structured and planned approach to communicating with DHS stakeholders regarding the human capital system. Moving forward, DHS will need to provide adequate opportunities for feedback once the options are released.

(7) Employee Involvement: Employees are provided multiple opportunities to be included in the design process, including participation in the Core Design Team, the Town Hall meetings, the field team, the focus groups, and an e-mail mailbox for employee comments.

Experience has shown that in making major changes in the cultures of organizations, how the changes are made, when they are made, and the basis on which they are made can make all the difference in whether they are ultimately successful. The analysis of DHS's effort to design a human capital system can be particularly instructive in light of legislative requests for agency-specific human capital flexibilities at the Department of Defense and the National Aeronautics and Space Administration.
Nonappropriated fund instrumentalities (NAFI) are federal government entities whose funding does not come from congressional appropriations but rather from their own activities, such as the sales of goods and services. Their receipts and expenditures are not reflected in the federal budget. NAFIs were established to provide services and items for the morale, welfare, and recreational needs of government employees. Some NAFIs, for example, the exchange system run by the Department of Defense, are created by statute. Others are created and regulated by government agencies. Some NAFIs, such as the Department of Agriculture's Graduate School, are established by an agency and may subsequently receive congressional approval. No single federal statute establishes the overall authority to create NAFIs or defines how they are to operate. The Department of Defense military exchange system—which includes general retail stores, specialty stores, and consumer services at military installations—is the largest NAFI program. Some other agencies that use NAFIs are the Department of Veterans Affairs, the Coast Guard, and the State Department.

Procurement laws and regulations applicable to federal government agencies generally do not apply to NAFIs. For example, the Federal Acquisition Regulation, the governmentwide regulation prescribing procedures for federal procurements and acquisitions with appropriated funds, does not apply to NAFI procurements made with nonappropriated funds. Even though NAFIs generally are not covered under federal procurement regulations, the government agency that establishes a NAFI generally has financial oversight or control of its operations. For example, the Department of Defense established financial oversight procedures for its NAFI activities. The Secretary of Defense is required by statute to prescribe regulations governing (1) the purposes for which nonappropriated funds of a NAFI may be expended and (2) the financial management of such funds to prevent waste, loss, or unauthorized use (10 U.S.C. §2783).

The Graduate School was established by the Secretary of Agriculture on September 2, 1921, to provide continuing education for research scientists within the department. Over the years the Graduate School has expanded as a center for professional training of government employees at the federal, state, and local levels. A major expansion in its training curriculum occurred on May 16, 1995, when the Office of Personnel Management transferred eight of its training centers to the Graduate School. The Graduate School currently provides more than 1,500 courses annually and is open to all adults regardless of their place of employment or educational background. Training is offered in a variety of subject areas, including computer science, leadership development, and government auditing. Courses are available during the day, in the evening, and on weekends. In addition, the Graduate School offers a distance learning program in which courses can be taken by correspondence and online. The Graduate School also provides training services such as conference and meeting management. The Graduate School's stated purpose is, through education, training, and related services, to improve the performance of government and to provide opportunities for individual lifelong learning. The Graduate School awards certificates of accomplishment to encourage participants to complete planned programs in their fields of study.
Some courses receive college credit recommendations from the American Council on Education's College Credit Recommendation Service. The credit recommendations guide colleges and universities when they consider awarding credit to participants who have successfully completed courses at the Graduate School.

The Graduate School is under the general direction of the Secretary of Agriculture. Regulations issued by the Secretary require the Graduate School to be governed by a General Administration Board that sets the school's policies, employs its director, and oversees its operations. These regulations also require that the board and most of the board's leadership positions be filled by employees of the department. The Graduate School employs more than 1,200 part-time faculty who are drawn from government, academia, and the private sector. Graduate School employees are not part of the civil service system.

The Graduate School receives no appropriated funds but operates on revenue derived from providing training services. The school's revenue comes primarily from three sources: (1) training services provided through interagency agreements with federal agencies, (2) training services provided on a contractual basis, and (3) individual tuition, otherwise known as "open enrollment." The school uses three categories to define and report its revenue each year: (1) interagency agreement revenue, (2) contractual revenue, and (3) open enrollment revenue. The school's board reviews an annual financial audit conducted by a private sector accounting firm.

In 1984, the Comptroller General ruled that the Graduate School, because it was not a federal agency, could not enter into interagency agreements with federal agencies under the Economy Act (31 U.S.C. §1535). After this ruling, the Graduate School's revenue decreased by about one-third, according to officials. In 1990, Congress authorized federal agencies to enter into interagency agreements with the Graduate School for training and other related services (7 U.S.C. §5922). That authority permits federal agencies to enter into such agreements without regard to competition requirements mandated in the Federal Property and Administrative Services Act of 1949 (40 U.S.C. 471, et seq.) or other procurement laws. The statute also gives the Comptroller General authority to conduct audits of the Graduate School's financial records relating to interagency agreements entered into under this provision.

In 1996, Congress passed legislation that made it clear that as of April 4, 1996, the Graduate School would continue to operate as a NAFI (7 U.S.C. §2279b). Any fees collected by the Graduate School are not considered federal funds and are not required to be deposited in the United States Treasury. That statute also provides that the Graduate School is exempt from various other federal provisions generally applicable to federal agencies, including the Freedom of Information Act, the Privacy Act, and the Federal Tort Claims Act. Consequently, the Graduate School does not have to respond to Freedom of Information Act requests, nor does it have any legal obligation to release any of its training materials, such as specific curriculum outlines, to the general public. The law authorizes the Graduate School's use of Department of Agriculture facilities and resources on a cost-reimbursable basis. According to financial reports for fiscal year 1999, the Graduate School reimbursed the department approximately $1.5 million for use of office space and other facilities.
To get information regarding the extent to which agencies use the Graduate School, we reviewed seven executive branch agencies. Collectively, the seven federal agencies we surveyed used private companies more frequently in fiscal year 1999 than the Graduate School. As table 1 indicates, these agencies had far more contracts with the private sector than interagency agreements with the Graduate School (531 vs. 20, respectively) for training. In examining the funding received by the Graduate School, we found a similar result: the agencies surveyed spent more on contracts with private companies—about $29 million—than on interagency agreements with the Graduate School—about $5.7 million. The selected agencies accounted for approximately one-third of the total interagency agreement revenue earned by the Graduate School in fiscal year 1999. We also surveyed these agencies regarding their total level of training supported by the Graduate School (rather than training acquired only through interagency agreements). Officials at the agencies reported that the Graduate School's share of their overall training budgets was minimal, ranging from 0 percent to 11 percent of their total annual training budgets.

Agency officials we surveyed told us they followed specific internally established policies, practices, and procedures for making decisions on vendors and procurement methods. While only two of the seven agencies had documented these policies, officials from all seven told us their policies were consistently applied and required that at least two other vendors be considered for any procurement. For example, the U.S. Customs Service has written policies governing the acquisition of training that are published in a directive and a memorandum from the Assistant Commissioner. All Customs external training purchases require the advance approval of the appropriate management officials with delegated authority to approve training. NASA officials said they follow the external training guidelines in the Federal Acquisition Regulation. In addition, NASA officials at the Goddard Space Flight Center told us that a "Request for Training Quote Information" form is required for each potential vendor considered for a training requirement. This form is used to collect information regarding each potential vendor and to assess its ability to provide the training services required.

All of the agency officials said they consider a number of factors before deciding which vendor and contracting approach to use. The most commonly cited factors were cost and the ability of the vendor to provide the requested training. Additional factors agencies consider include customer needs, timeliness, past experience with the vendor, and quality of the product. Some agencies, such as the Internal Revenue Service and the Census Bureau, indicated that they conduct "market analysis" studies to identify all potential vendors. In addition, some agencies identified potential vendors using the General Services Administration's Federal Supply Schedule and governmentwide acquisition contracts. If the USDA Graduate School is selected as the vendor to provide training to an agency, that training can be acquired through a variety of procurement methods, including interagency agreements and contracts. In general, agencies indicated that an interagency agreement can be easier to use because it is faster to put into place.
Several of the agencies we surveyed also had formal written guidance addressing the use of interagency agreements, which also applies to agreements established with the Graduate School. For example, Census Bureau guidance specifically mentions that interagency agreements are entered into under the authority of the Economy Act, including agreements with the Graduate School. Even though the Customs Service had no interagency agreements with the Graduate School in fiscal year 1999, its 1998 Interagency Agreement Guide makes direct reference to the Economy Act and its provisions and is applicable to agreements established with the Graduate School.

The interagency agreement line item in the Graduate School's fiscal year 1999 financial statements did not include all of the revenue received from interagency agreements. The statements reported interagency agreement revenue of about $7.1 million. Graduate School officials acknowledged that the school's reported revenue by type was imprecise. We independently estimated interagency agreement revenue at $14.9 million using a stratified random sample of revenue billings made in fiscal year 1999. The $7.8 million difference between the reported interagency agreement revenue and our estimate occurred because the Graduate School included certain revenue under contract training that should have been reported as interagency agreement revenue. According to Graduate School officials, only revenue earned under cost-reimbursable arrangements is reported as interagency agreement revenue. All fixed-price revenue is recorded as contract training, even though a significant amount of this revenue was provided through interagency agreements. Had management labeled these two line items as cost-reimbursable and fixed-price, rather than contract training and interagency agreements, the financial statements would have correctly disclosed the sources of revenue. However, mislabeling these line items caused the fiscal year 1999 financial statements to be misleading with regard to the procurement method used to generate revenue. Graduate School officials said they chose their method of accumulating revenue as fixed-price because it provides the information needed to support certain management decisions. For example, the school has a goal of avoiding heavy reliance on cost-reimbursable interagency agreement revenue. Further, the Graduate School Board of Directors monitors the composition of fixed-price revenue versus cost-reimbursable revenue.

In the course of our work, we noted two other matters regarding transaction processing and records retention. First, based on our sample results, we estimated that the Graduate School misclassified as interagency agreement revenue $563,416 in fiscal year 1999 revenue that was generated under contracts. These misclassification errors resulted from inaccurate manual coding of revenue transactions. Second, the Graduate School could not locate five billing invoices and documentation supporting one cash receipt in our sample. We considered these six items to be complete errors and classified them as interagency agreements.

The USDA Graduate School is a nonappropriated fund instrumentality whose purpose is to improve the performance of government through training of its employees. The operating structure of the Graduate School includes oversight by the Secretary of Agriculture and a governing board. Employees of the Graduate School are not part of the civil service.
Our examination of the training received by selected federal agencies in fiscal year 1999 showed that the majority of this training was provided by sources other than the Graduate School. The Graduate School's revenue line items need to be consistent with the revenue sources for the school's financial statements to be meaningful. The interagency agreement line item in the school's fiscal year 1999 financial statements was not clearly and accurately reported because approximately half of the interagency agreement revenue was classified as contract revenue. As a result, a reader could not determine the actual amount of total revenue derived from interagency agreements. We recommend that the Executive Director of the USDA Graduate School revise the Graduate School's current financial reporting policy to ensure that the revenue line items are properly presented in the school's financial statements.

We requested comments on a draft of this report from the Executive Director of the USDA Graduate School. In its written comments, the Graduate School agreed with the report's content, conclusions, and recommendation. The Graduate School comments are reprinted in appendix I.

Our work was done at the USDA Graduate School headquarters and at selected federal agencies: the Department of Labor, the Internal Revenue Service, the Department of Agriculture's Rural Development, the Federal Deposit Insurance Corporation, the U.S. Customs Service, the Census Bureau, and the National Aeronautics and Space Administration. To provide information on the purpose and operating framework of the Graduate School, we interviewed Graduate School officials and reviewed documentation, including legislation, policies, and strategic plans governing the Graduate School. We determined the extent of training services that selected federal agencies obtained from the Graduate School and private contractors and—by interviewing agency officials and reviewing appropriate documents, including agency procurement regulations and listings of agreements and contracts—we learned how such decisions were made. Six agencies covered by our work were nonstatistically selected based on their having interagency agreements and contracts with the Graduate School during fiscal year 1999. The seventh agency in our sample, the Department of Labor, had no interagency agreements with the Graduate School during this period. Our work was performed using fiscal year 1999 data because this was the last year for which a complete data set was available when we initiated our evaluation. The seven nonstatistically selected agencies accounted for approximately one-third of the total interagency agreement revenue earned by the Graduate School in fiscal year 1999.

To assess the reasonableness of interagency agreement revenue reported in the Graduate School's fiscal year 1999 financial statements, we met with Graduate School officials and the school's external auditors; read the audited financial statements for fiscal years 1999 and 1998; and read the school's policies and procedures governing the classification of revenue and contracting. Further, we independently estimated interagency agreement revenue by selecting a stratified random probability sample of 145 transactions from 2,439 interagency agreement revenue billings made during fiscal year 1999. We stratified the population into four strata on the basis of the total amount of revenue billings for fiscal year 1999.
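To make the estimation approach concrete, the sketch below expands a stratified random sample of billings to a population total. This is an illustrative reconstruction, not the analysis code actually used: the stratum counts sum to the 2,439 billings and 145 sampled transactions described here, but the dollar boundaries, allocation across strata, and billing amounts are all hypothetical.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical population of interagency agreement billings, grouped into
# four strata by billing amount. Counts sum to 2,439 billings, matching
# the report; the dollar ranges themselves are invented for illustration.
population_strata = {
    "under $1,000":     [random.uniform(100, 1_000) for _ in range(1_500)],
    "$1,000-$10,000":   [random.uniform(1_000, 10_000) for _ in range(700)],
    "$10,000-$100,000": [random.uniform(10_000, 100_000) for _ in range(200)],
    "over $100,000":    [random.uniform(100_000, 500_000) for _ in range(39)],
}

# Hypothetical allocation of the 145 sampled transactions across strata.
sample_sizes = {"under $1,000": 40, "$1,000-$10,000": 40,
                "$10,000-$100,000": 40, "over $100,000": 25}

estimated_total = 0.0
for stratum, billings in population_strata.items():
    n = sample_sizes[stratum]
    sample = random.sample(billings, n)
    # Weight each sampled billing by N/n so that the sample statistically
    # accounts for every billing in its stratum, selected or not.
    weight = len(billings) / n
    estimated_total += weight * sum(sample)

print(f"Estimated interagency agreement revenue: ${estimated_total:,.0f}")
```

Stratifying by billing size and sampling the high-dollar stratum at a heavier rate keeps the variance of the weighted estimate down, which is the usual reason large billings are sampled more intensively than small ones.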
In addition, we independently estimated contract revenue by selecting a stratified random probability sample of 185 transactions from 3,523 contract revenue billings made during fiscal year 1999. We stratified the population into five strata on the basis of the total amount of revenue billings for fiscal year 1999. Each sample element was subsequently weighted in the analysis to account statistically for all members of the respective populations, including those that were not selected. Transactions selected in the sample were tested for accuracy and for correct classification. The confidence level used for estimating the value of misclassified amounts was 95 percent, and the expected tolerable amount in error (test materiality) was $545,950. We also tested the reliability of the $6.1 million in fixed-price interagency agreement revenue identified by Graduate School officials that was reported as contract training revenue. We did not audit the Graduate School's financial statements or review the other auditor's workpapers. Furthermore, we are not expressing an opinion on the Graduate School's financial statements or on whether their auditors followed professional standards. We conducted our review in the Washington, D.C., metropolitan area from November 2000 to June 2001 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Executive Director of the Graduate School.

We are sending copies of this report to the Chairman of the House Committee on Agriculture; the Director of the USDA Graduate School; the Secretaries of the Departments of Agriculture, Labor, Treasury, and Commerce; the Directors of the Federal Deposit Insurance Corporation and U.S. Census Bureau; the Administrator of the National Aeronautics and Space Administration; the Commissioners of the Internal Revenue Service and U.S. Customs Service; the Deputy Under Secretary for Rural Development; and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix II. If you have any questions, please call me at (202) 512-9490.

George H. Stalcup, (202) 512-9490. Key contributors to this report were Charlesetta Bailey, Jeffrey Bass, Sharon Byrd, Brandon Haller, Jeffrey Isaacs, Casandra Joseph, Boris Kachura, Carla Lewis, Delois Richardson, Sylvia Shanks, Alana Stanfield, Michael Volpe, McCoy Williams, and Gregory Wilmoth.
The U.S. Department of Agriculture's Graduate School provides extensive training opportunities to government employees and others. As a nonappropriated fund instrumentality, the Graduate School relies solely on income from the training it offers. During fiscal year 1999, the federal agencies GAO reviewed had 20 interagency agreements with the Graduate School totaling about $5.7 million. The agencies also had 531 contracts, totaling $29 million, with private companies for training and related services. The Graduate School's financial statements for fiscal year 1999 incorrectly identified the portion of revenue that was earned through interagency agreements. This misclassification occurred primarily because of the Graduate School's reporting policies.
The principal source of federal funding for career and technical education (CTE), Perkins IV authorizes federal grant funds for the enhancement of CTE for secondary and postsecondary students. In fiscal year 2008, Congress appropriated $1.2 billion for the improvement of local CTE programs. Education's Office of Vocational and Adult Education allocates the funds to states, which retain up to 15 percent of the funds for administration and state leadership of CTE programs before passing at least 85 percent of the funds on to local recipients, such as local school districts and community colleges. States determine the percentage of funds that will be allocated to the secondary and postsecondary levels. The majority of funds allocated to the secondary level are passed on to local recipients based on the school district's share of students from families below the poverty level for the preceding fiscal year. Postsecondary funds are primarily allocated based on the institution's share of Pell Grant recipients.

Perkins IV established six student performance measures at the secondary level and five performance measures at the postsecondary level. These measures represent a range of student outcomes, such as attainment of technical skills and placement in employment or further education following the completion of CTE programs. In addition, the measures include the nontraditional participation and completion of students from an underrepresented gender in programs with significant gender disparities (such as women participating in auto repair), among others (see tables 1 and 2 for a description of the Perkins IV performance measures). To ease states' transition to the new provisions in Perkins IV, Education permitted states to submit a 1-year transition plan that covered only the first program year of Perkins IV implementation, 2007-2008. Accordingly, states were required to implement and report performance on only two secondary performance measures for the 2007-2008 program year: academic attainment and student graduation rates. These two measures are based on the same academic attainment and student graduation rate measures required by Title I of the Elementary and Secondary Education Act. Beginning in the 2008-2009 program year, states are required to report on student outcomes for all of the performance measures. States will report these outcomes to Education in December 2009.

Perkins IV requires states to negotiate specific performance targets with Education and to annually report their performance to Education. It also requires local recipients to negotiate performance targets with the states and to annually report to the state their progress toward meeting these targets. Perkins IV established additional accountability requirements for states and local recipients, including actions to address states that do not meet all of their performance targets. Under Perkins IV, if a state does not meet at least 90 percent of its target for one or more of the performance measures, it is required to develop and implement a program improvement plan that describes how it will address its failing performance targets. Prior to Perkins IV, states were required to develop and implement a program improvement plan only if they failed to meet their targets in all of their performance measures, not just one measure. States can also face financial sanctions.
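The 90 percent threshold operates measure by measure, not across the portfolio of measures as a whole. The following minimal sketch shows that logic; the measure names, targets, and outcomes are hypothetical, not actual state data.

```python
# Illustrative only: measure names, targets, and outcomes are invented.
# Under Perkins IV, a state (or local recipient) whose reported outcome
# falls below 90 percent of its negotiated target on any single measure
# must develop and implement a program improvement plan.
THRESHOLD = 0.90

negotiated_targets = {   # negotiated performance targets (percent)
    "academic_attainment": 75.0,
    "graduation_rate": 80.0,
    "technical_skill_attainment": 70.0,
}
actual_outcomes = {      # reported student outcomes (percent)
    "academic_attainment": 74.0,
    "graduation_rate": 68.0,
    "technical_skill_attainment": 71.5,
}

failing = [
    measure
    for measure, target in negotiated_targets.items()
    if actual_outcomes[measure] < THRESHOLD * target
]

if failing:
    print("Program improvement plan required for:", ", ".join(failing))
else:
    print("All measures within 90 percent of their targets.")
```

A state whose plan fails to produce improvement can then face the financial sanctions noted above.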
For example, Education can withhold all or a portion of funds if a state does not implement a program improvement plan, show improvement in meeting its failing performance measure, or meet the target for the same performance measure for 3 consecutive years. Local recipients that do not meet at least 90 percent of their performance targets have the same program improvement requirements as the state and face similar sanctions from the state. In the event of financial sanctions, Education is required to use the withheld funds to provide technical assistance to the state for improving its performance on the measures and the state is to use funds withheld from local recipients to provide CTE services and activities to students. In order to implement the performance measurement requirements of Perkins IV, states must define which students will be included in the measures and collect data for each of the performance measures at the secondary and postsecondary levels. For example, states define the minimum requirements, such as a certain number of CTE credits that a student would need to obtain in order to be identified as a student concentrating in CTE. Education has taken a range of actions to help states with these activities. For example, in January 2007, Education began issuing nonregulatory guidance to states to help them develop their student definitions and data collection approaches for the performance measures. Education also issued guidance to states on the information states must include in their state Perkins plans and in the annual reports that they submit to Education. In the state plans, states must detail how they intend to implement the performance measures, and in the annual reports states must describe their progress in meeting the negotiated performance targets. In addition to implementing performance measures, states are required to evaluate programs, services, and activities supported with Perkins funds and to report to Education in their state plans how they intend to conduct these evaluations. To meet this requirement, states describe the approaches, such as the use of state-developed standards, they will use to evaluate local CTE programs. In addition, Education requires states to include a description of how they used Perkins funds to evaluate their local CTE programs in their annual reports. A key feature of Perkins IV—to enhance state and local flexibility in developing, implementing, and improving career and technical education—allows for considerable variation in how states implement some performance measures. While Perkins IV was designed to strengthen accountability for results at the state and local levels, it also allows states to establish their own accountability systems, including their own data collection methods for the performance measures. Of the 11 performance measures, the secondary and postsecondary levels have 3 measures in common: technical skill attainment, student placement, and participation in and completion of nontraditional programs (see fig. 1). States may also include additional, state-developed performance measures in their accountability systems. For example, Washington state added three performance measures—earnings, employer satisfaction, and CTE student satisfaction—to its accountability system. Consistent with Perkins IV, Education’s guidance to states also allows for flexibility. 
Education issued nonregulatory guidance that proposed specific definitions that could be adopted by states to develop each of the secondary and postsecondary performance measures. It also identified preferred approaches for collecting data for certain measures, such as student technical skill attainment. However, Education noted that in accordance with Perkins IV, states could propose other definitions and approaches to collect data for the required performance measures if they meet the requirements of the law.

We found through our surveys of state CTE directors that states vary considerably in the extent to which they plan to follow Education's guidance—specifically with regard to the technical skill attainment and secondary school completion measures. As a result, Education will collect student outcome data that vary across states for the same measures. This can create challenges for Education to aggregate student outcomes at the national level. For example, a majority of states reported that they will use technical assessments—the approach recommended in Education's guidance—to measure student attainment of skills at the secondary and postsecondary levels. These include assessments leading to industry-based certificates or state licenses. However, a number of states will rely on other approaches to collect data for the performance measure, including grade point average (GPA), program completion, or other methods (see table 3).

Officials in the states we visited provided a variety of reasons for their use of alternate methods to measure students' attainment of technical skills. For example, postsecondary state officials in California said that a CTE instructor's overall evaluation of a student's technical skill proficiency, in the form of a final grade, is a better measure of technical skill attainment than third-party technical assessments and can more effectively guide program improvement. They questioned the value of technical assessments, in part because assessments often cannot keep pace with technology and changing CTE program curricula, such as curricula for digital animation. A Washington state official told us that the state plans to use program completion to measure technical skills at the postsecondary level, noting that each postsecondary CTE program incorporates industry-recognized standards into the curriculum. He noted that a national system of third-party assessments may not be adequate or appropriate, because it would not necessarily incorporate the same standards. Local school officials in Minnesota said that they will report on CTE course completion for this measure. Because CTE courses undergo curriculum review by teachers as well as industry advisors, and align with relevant postsecondary programs in the area, school officials told us course completion is sufficient to satisfy the definition of technical skill attainment.

Education's guidance also allows for considerable variation in the types of technical assessments states can use and when they can administer them. Most states at the secondary level reported in our survey that they plan to use industry-developed certificates or credentials, most often administered at the end of a program, such as a certificate awarded for an automotive technician.
At the postsecondary level, states plan most often to rely upon the results of assessments for state licenses, such as state nursing licenses, to measure technical skills (see fig. 2). However, we found that while a majority of states plan to use assessments to report to Education, the assessments are not currently in widespread use. For example, more than half of states at the secondary and postsecondary levels reported that they plan to use these assessments to report on few to none of their state-approved CTE programs in the 2008-2009 program year. Some states at the secondary level reported they will use a combination of methods—including GPA or program completion—to report on technical skill attainment.

We also found that states differ in whether they plan to report student data on GED credentials, part of the secondary school completion measure. Thirty states reported through our survey that they do not plan to report GED data to Education for the 2008-2009 program year, while 18 reported that they would. About one-third of all states cited their ability to access accurate GED data as a great or very great challenge. For example, state officials we interviewed said states face difficulty tracking the students that leave secondary education and return, sometimes several years later, to earn a GED credential. An Education official said that the agency is aware of the challenges and limitations states face in collecting GED data and that the agency may need to provide technical assistance to states on ways to collect these data.

States reported in our surveys that they face the most difficulty in collecting student data for two of the performance measures: technical skill attainment and student placement (see fig. 3 and fig. 4). Thirty-eight states at the secondary level reported that they face great or very great challenges in collecting data on student technical skill attainment, while 14 said they face similar challenges in collecting data on student placement. The results were similar at the postsecondary level: 39 states reported great or very great challenges with the technical skill attainment measure and 11 cited a similar level of difficulty with student placement.

States reported that the technical skill attainment measure at the secondary and postsecondary levels was most challenging to implement because of costs and the states' ability to collect accurate and complete student data. Specifically, states reported that the costs of state-developed assessments and third-party technical assessments—such as those for industry certifications—are high and often too expensive for many districts, institutions, or students. Several state CTE directors commented in our surveys that their Perkins funds are inadequate to pay for these assessments and that additional funds would be necessary to cover the costs. Another CTE director stated that economically disadvantaged students cannot afford the cost of assessments. In addition to challenges due to cost, states are limited in their ability to access accurate and complete data. For example, a state official said that Washington state does not have data-sharing agreements with assessment providers to receive the results of student assessments. As a result, the state will have to rely largely on students to self-report the results of their assessments, which raises concerns about data quality.
Challenges such as these likely contribute to some states' use of other data—such as GPA or program completion—to collect and report information for this key student performance measure. Some states also reported difficulty collecting data on CTE students after they leave the school system. States at the secondary and postsecondary levels reported that their greatest challenge with the student placement measure is collecting data on students that are employed out of state. As we previously reported, state wage records, such as Unemployment Insurance data, track employment-related outcomes only within a state, not across states. A number of states commented in our surveys on challenges in tracking students because of the lack of data sharing across states. We found that states face challenges in tracking students employed out of state regardless of the method they most commonly use to collect student placement data. Thirty-eight states at the secondary level will use student survey data from the state, school district, or a third party to track student placement and report to Education, while 41 states at the postsecondary level will rely on state wage record data, despite potential gaps in student data (see fig. 5).

States also cited other challenges in obtaining data on student placement for CTE students. At the secondary level, states reported that their next greatest challenge is linking secondary and postsecondary data systems in order to track students that pursue higher education after graduation. To help overcome this challenge, Minnesota—one of the states we visited—recently passed legislation to allow data sharing between the secondary and postsecondary levels. Our survey also found that states' next greatest challenge at the postsecondary level was collecting data on students who are self-employed after leaving postsecondary institutions. Community college officials in California said that while they rely on Unemployment Insurance wage record data, the data are incomplete and do not capture information on the self-employed, a group that is important for the measurement of CTE outcomes at the postsecondary level.

States face similar challenges of cost and access to accurate data for the remaining performance measures. For example, states at the secondary level commented on data challenges for the academic attainment and student graduation rate measures. Specifically, several states cited problems in obtaining data from separate student data systems containing academic and CTE information. This can be particularly challenging for states that are trying to match student data from different systems in order to track required CTE student outcomes. In addition, at the postsecondary level, states cited challenges in tracking student retention in postsecondary education or student transfer to a baccalaureate degree program. In particular, accessing student data from out-of-state and private institutions and the high costs required to track these students were identified as the most challenging issues. States most often reported that they will track these students through their state postsecondary data systems.

As we have previously reported, effective monitoring is a critical component of grant management. The Domestic Working Group's suggested grant practices state that financial and performance monitoring is important to ensure accountability and attainment of performance goals.
Additionally, GAO recently reported on the importance of using a risk-based strategy to monitor grants, noting the need to identify, prioritize, and manage potentially at-risk grant recipients, given the large number of grants awarded by federal agencies. Education's approach to monitoring Perkins is consistent with these suggested grant practices. According to its Perkins monitoring plan, Education selects which states to monitor based on a combination of risk factors and monitors states in two ways: through on-site visits and through off-site reviews of state plans, budgets, and annual reports for those states not visited in a given year. To determine which states it will visit for on-site monitoring, Education uses a combination of risk factors, such as grant award size, issues identified through reviews of state Perkins plans, and time elapsed since Education's last monitoring visit. Education officials told us that their goal is to visit each state at least once every 5 years and reported that they have conducted on-site monitoring visits to 28 states since 2006. Education officials also told us that the same monitoring team performs both on-site and off-site reviews, which officials said helps to ensure continuity between the reviews. Furthermore, when conducting the off-site reviews, the monitoring team looks for trends in state data and for any problems with state data validity and reliability. The team uses a checklist to match performance data to the data states report in their required annual reports.

According to Education's inventory of open monitoring findings, as of May 2009, 9 of the 28 open findings were related to accountability and states failing to submit complete or reliable data. For example, in a February 2008 monitoring visit, Education found that the monitored state's data system had design limitations that affected the state's ability to collect and assess data on career and technical education students. Specifically, Education found that the various data systems across the local secondary and postsecondary levels did not share data with the state-level CTE data system. This data-sharing issue raised doubts about the validity and reliability of the state's Perkins data. Education tracks the findings from each state's monitoring visit in a database and reviews the findings in an internal report that is updated monthly. Additionally, if a state has open findings, the state may be required to report corrective actions to Education in the state's annual report. Officials told us that the amount of time it takes for a state to close out a finding depends upon the nature of the finding. For example, a finding related to accountability may take up to a year to resolve because a state may have to undertake extensive actions to address the deficiency. Education officials reported that their monitoring process emphasizes program improvement rather than focusing solely on compliance issues and that they use monitoring findings to guide the technical assistance they provide to the states.

To evaluate its monitoring process, Education sends a survey to the CTE directors of states that were monitored that year and asks them to rate the format and content of Education's Perkins monitoring process.
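Education's monitoring plan names the risk factors it weighs in selecting states for on-site visits, but this report does not describe how they are combined. Purely as a hypothetical sketch, a composite score over the three factors named above might rank states for visits as follows; the weights, scaling, and data are assumptions, not Education's actual method.

```python
# Hypothetical sketch: Education's actual scoring method, weights, and
# state data are not described in this report; everything is illustrative.
states = [
    # (state, grant award in $ millions, open plan-review issues,
    #  years since last on-site monitoring visit)
    ("State A", 45.0, 2, 4),
    ("State B", 12.5, 0, 1),
    ("State C", 30.0, 5, 6),
]

def risk_score(award_millions, open_issues, years_since_visit):
    """Combine the three risk factors named in Education's monitoring
    plan into one illustrative score; larger means higher priority."""
    return (
        0.4 * (award_millions / 50.0)              # larger grants carry more exposure
        + 0.4 * (open_issues / 5.0)                # unresolved plan-review issues
        + 0.2 * min(years_since_visit / 5.0, 1.0)  # reflects the 5-year visit goal
    )

ranked = sorted(states, key=lambda s: risk_score(*s[1:]), reverse=True)
for name, award, issues, years in ranked:
    print(f"{name}: score = {risk_score(award, issues, years):.2f}")
```

As for evaluating the monitoring process itself, the survey Education sends to monitored states covers the mechanics of each review.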
For example, the survey asks states to report on whether they received sufficient notice that the site visit was going to take place, whether the monitoring team provided on-site technical assistance, and whether the state received a written report within a reasonable time frame following the visit. We reviewed Education’s summaries of the state surveys and found that for 2004 and 2005, the results of these surveys were generally positive. For example, in a 2004 monitoring evaluation report, the 10 states that were surveyed all reported that they had received sufficient notice about the monitoring visit and that Education staff provided on-site technical assistance. According to our survey of secondary-level CTE directors, about half of states have had a monitoring visit within the last 3 years, and almost all of the states whose monitoring visit resulted in findings said that Education worked with them to ensure that the findings were addressed. Education provides states with guidance, technical assistance, and a variety of other resources and is taking actions to meet states’ need for additional help. Since Perkins IV was enacted, Education has issued guidance to states on topics such as instructions for developing the state Perkins plans and annual reports, as well as guidance related to the performance measures. For example, Education’s guidance provides clarification to states on what information each state has to submit to Education before it can receive its grant award for the next program year, such as any revisions a state wants to make to its definitions of student populations, measurement approaches, and proposed performance levels for each of the measures. Some of the guidance resulted from Education’s collaborative efforts with states. For example, Education’s guidance to states on student definitions and measurement approaches incorporated the input given by state CTE directors during national conference calls between states and Education. Other guidance addresses questions raised by states during national Perkins IV meetings, such as how a state should negotiate performance levels with its local recipients. In addition to guidance, Education offers states technical assistance from Education staff—called Regional Accountability Specialists—and through a private contractor. Education officials told us that each Regional Accountability Specialist works with a specific group of states to negotiate state data collection approaches for the performance measures. In addition, each specialist maintains regular contact with his or her states throughout the year and provides assistance on other issues, such as reporting requirements and program improvement plans. In addition to the Regional Accountability Specialists, Education also provides states with technical assistance by using MPR Associates, a private contractor. MPR Associates provides technical assistance that generally includes on-site visits and follow-up discussions to help states improve their CTE programs and facilitate data collection for the performance measures. For example, MPR Associates met with one state to assist with developing population definitions and measurement approaches that aligned with Education’s guidance and helped another state with developing a plan for implementing secondary and postsecondary technical skill assessments. 
After providing technical assistance to a state, MPR Associates develops a summary report, which is then published on Education's information-sharing Web site, the Peer Collaborative Resource Network. Education also offers states a range of other resources, including data work groups and monthly conference calls. See table 4 for a description of the various ways in which Education provides assistance to states.

Most states reported that the assistance provided by Education has helped them implement the performance measures, but that more assistance in the area of technical skill attainment would be helpful. In our survey, states responded positively about their Regional Accountability Specialist and all of Education's other forms of assistance, including the Data Quality Institute and the Next Steps Work Group. States also reported that more nonregulatory guidance and more individual technical assistance would improve their ability to implement the performance measures. Of the states that provided additional information on the areas in which they wanted assistance, 4 of 16 states at the secondary level and 9 of 20 states at the postsecondary level said that they wanted assistance on the technical skill attainment measure. Specifically, some of the states that provided additional information said they would like Education to clarify its expectations for this measure, to provide states with a library of technical assessments, and to provide state-specific assistance with developing low-cost, effective technical assessments. States also raised issues regarding the performance measures and their state's data collection challenges. For example, one state reported that it was unsure how a state should report technical skill attainment as a single measure for over 400 distinct CTE programs.

We found that Education officials were aware of states' need for additional assistance and that Education has taken some actions to address these needs, particularly in the area of technical assessments. For example, through the Next Steps Work Group, Education facilitated a technical skill attainment subgroup that is led by state officials and a national research organization. The subgroup reviewed state Perkins plans and annual reports for technical skill assessment strategies that states reported to Education, for consideration in upcoming guidance. Education also collaborated with MPR Associates to conduct a study on the feasibility of a national technical assessment clearinghouse and test item bank. The study, conducted with several CTE research organizations and state-level consortia, proposed national clearinghouse models for technical assessments. MPR Associates concluded that clarifying ownership, such as who is responsible for the development and management of the system, and securing start-up funding were the two most likely impediments to creating such a system. The report was presented to states at the October 2008 Data Quality Institute seminar, and Education officials reported that they are working with organizations such as the National Association of State Directors of Career and Technical Education Consortium and the Council of Chief State School Officers to implement next steps.

In addition to helping states with the technical skill attainment measure, Education also has taken actions to improve its information-sharing Web site, the Peer Collaborative Resource Network.
Specifically, a Next Steps Work Group subcommittee surveyed states for suggested ways to improve the Web site and reported that states wanted to see the information on the site kept more current. The subcommittee reported in December 2008 that Education would use the survey results to develop a work plan to update the Web site. In May 2009, Education officials reported that they had implemented the work plan and were piloting the revamped site with selected state CTE directors before the department finalizes and formally launches it.

State performance measures are the primary source of data available to Education for determining the effectiveness of CTE programs, and Education relies on student outcomes reported through these measures to gauge the success of states' programs. While Perkins IV requires states to evaluate their programs supported with Perkins funds, it only requires states to report to Education—through their state plans—how they intend to evaluate the effectiveness of their CTE programs. It does not require states to report on the findings of their evaluations and does not provide any specific guidance on how states should evaluate their programs. Because only 2 of 11 measures have been implemented and reported on thus far, Education has little information to date on program outcomes. In program year 2007-2008, Education required states to implement and report only the academic skill attainment and graduation rate measures. States are required to provide Education with outcome data for the remaining 9 secondary and postsecondary measures in December 2009. According to Education's annual report for program year 2007-2008, 43 states met their targets for the academic attainment in reading/language arts measure, 38 states met their targets for the academic attainment in mathematics measure, and 46 states met their targets for the graduation rate measure.

We analyzed the state plans of all 50 states and the District of Columbia and found that, as required by Perkins IV, states describe to Education how they are evaluating their CTE programs. The type of information that states provided varied. For example, some states described the databases they use to capture key data, and others explained how they use state-developed performance measures to evaluate their programs. Perkins IV does not require that states include information on what their evaluations may have found regarding the success of a program.

In our surveys of state CTE directors, nearly half of states (23 states at the secondary level and 21 states at the postsecondary level) responded that they have conducted or sponsored a study, in the past 5 years, to examine the effectiveness of their CTE programs. In response to these survey results, we collected seven studies that states identified as evaluations of their program effectiveness. We developed an instrument for reviewing these studies and determined the type of evaluation and the methodology each used. We determined that four of the studies were outcome evaluations and the remaining three studies were not outcome, impact, or process evaluations. For example, one state found in its outcome evaluation that high school graduates who completed a CTE program of study entered postsecondary institutions directly after high school at the same rate as all graduates.
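The distinctions among evaluation types that guided our review can be expressed as a simple decision rule. The sketch below is purely illustrative and is not the instrument we used; the study attributes and rules are assumptions based on standard evaluation terminology, under which an impact evaluation attributes outcomes to a program through a comparison design, an outcome evaluation measures results without establishing causality, and a process evaluation examines how a program operates.

```python
# Illustrative decision rule for classifying a study by evaluation type.
# This is not GAO's actual review instrument; the attributes and rules
# below are simplified assumptions based on standard evaluation terms.

from dataclasses import dataclass

@dataclass
class Study:
    measures_participant_outcomes: bool  # e.g., postsecondary enrollment rates
    has_comparison_group: bool           # design that supports causal claims
    examines_implementation: bool        # fidelity, delivery, operations

def classify(study: Study) -> str:
    """Return the evaluation type a study most resembles."""
    if study.measures_participant_outcomes and study.has_comparison_group:
        return "impact evaluation"    # outcomes attributed to the program
    if study.measures_participant_outcomes:
        return "outcome evaluation"   # outcomes measured, causality not shown
    if study.examines_implementation:
        return "process evaluation"   # examines how the program operates
    return "not an outcome, impact, or process evaluation"

# Example: a state study comparing CTE completers' college-going rate with
# that of all graduates, without a matched comparison design.
print(classify(Study(measures_participant_outcomes=True,
                     has_comparison_group=False,
                     examines_implementation=False)))
# -> outcome evaluation
```

Under a rule of this kind, the state study cited above, which measured college-going rates without a matched comparison design, would be tagged an outcome evaluation.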
Perkins IV provides states with considerable flexibility in how they implement the required performance measures and how they evaluate the effectiveness of their CTE programs. While this flexibility enables states to structure and evaluate their programs in ways that work best for them, it may hinder Education's ability to gain a broader perspective on the success of state CTE programs. Specifically, differences in how states collect data for some performance measures may challenge Education's ability to aggregate student outcomes at a national level and compare student outcomes on a state-by-state basis. Further, Education is limited in what it knows about the effectiveness of state CTE programs beyond what states report through the performance measures. Perkins IV requires only that states report on how they are evaluating their programs; it does not provide any guidance on how states should evaluate their programs or require that states report on the outcomes of their evaluations. Education is working with states to help them overcome challenges they face in collecting and reporting student outcomes, and over time, states may collect more consistent data for measures such as technical skill attainment. As states become more adept at implementing the Perkins performance measures, they will be better positioned to conduct more rigorous evaluations of their CTE programs. However, this information may not be reported to Education. If policymakers are interested in obtaining information on state evaluations, they will need to weigh the benefits of Education obtaining this information against the burden of additional reporting requirements.

We provided a draft of this report and the electronic supplement to the Department of Education for review and comment. Education provided technical comments on the report, which we incorporated as appropriate. Education had no comments on the electronic supplement.

We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

To obtain national-level information on states' implementation of Perkins IV, we designed and administered two Web-based surveys, at the secondary and postsecondary levels, to state directors of career and technical education (CTE) in the 50 states and the District of Columbia. The surveys were conducted between January and April 2009, with 100 percent of state CTE directors responding to each survey. The surveys included questions about the types of data states collect for the student performance measures and challenges they face; the various kinds of technical assistance, guidance, and monitoring states received from Education; and how states evaluate their CTE programs. The surveys and a more complete tabulation of the results can be viewed at GAO-09-737SP. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses.
We took steps to minimize nonsampling errors, including pretesting draft survey instruments and using a Web-based administration system. Specifically, during survey development, we pretested draft instruments with officials in Minnesota, Washington state, and Vermont in December 2008. We also conducted expert reviews with officials from the National Association of State Directors of Career and Technical Education Consortium and MPR Associates, who provided comments on the survey. In the pretests and expert reviews, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure that terms used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section were appropriate. On the basis of the pretests and expert reviews, the Web instruments underwent some revisions. A second step we took to minimize nonsampling errors was using Web-based surveys. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for and the errors associated with a manual data entry process. When the survey data were analyzed, a second, independent analyst checked all computer programs to further minimize error. While we did not fully validate all of the information that state officials reported through our surveys, we reviewed the survey responses overall to determine that they were complete and reasonable. We also validated select pieces of information by corroborating the information with other sources. For example, we compared select state responses with information submitted to Education in state Perkins plans. On the basis of our checks, we believe our survey data are sufficiently reliable for the purposes of our work. To better understand Perkins IV implementation at the state and local levels, we conducted site visits to three states—California, Minnesota, and Washington state—between September 2008 and February 2009. In each state we spoke with secondary and postsecondary officials at the state level with CTE and Perkins responsibilities. We also interviewed officials from local recipients of Perkins funds—that is, school districts and postsecondary institutions. Through our interviews with state and local officials, we collected information on efforts to implement the Perkins performance measures and uses of Perkins funding, experiences with Education’s monitoring and technical assistance, and methods for CTE program evaluation. The states we selected represent variation across characteristics such as the type of state agency (i.e., state educational agencies or state college and university systems) eligible to receive Perkins funds, the amount of Perkins IV funds received in fiscal year 2008, and type of approach used to measure student attainment of technical skills. The localities selected for site visits provided further variation in geographic location (urban versus rural), number of CTE students served, and amount of Perkins funding received. We conducted this performance audit from August 2008 to July 2009, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Elizabeth Morrison (Assistant Director), Avani Locke, Robin Nye, Charlotte Gamble, Stephen Steigleder, Jessica Orr, Jean McSween, Christine San, and Jessica Botsford made key contributions to this report.
The Carl D. Perkins Career and Technical Education Act of 2006 (Perkins IV) supports career and technical education (CTE) in high schools and postsecondary institutions, such as community colleges. Perkins IV established student performance measures at the secondary and postsecondary levels for state agencies, such as state educational agencies, and local recipients, such as school districts, eligible to receive funds.

GAO examined (1) how states have implemented the Perkins IV performance measures and what, if any, challenges they have faced in implementing the measures; (2) to what extent the Department of Education (Education) has ensured that states are implementing the new performance measures and supported states in their efforts; and (3) what Education knows about the effectiveness of CTE programs. To collect national-level data, GAO surveyed state CTE directors in the 50 states and the District of Columbia between January and April 2009 and received responses from all of them. To view survey results, click on http://www.gao.gov/special.pubs/gao-09-737sp/index.html.

We provided a draft copy of this report to Education for comment. We received technical comments, which we incorporated into the draft where appropriate.

States are implementing some of the Perkins IV performance measures using different approaches and report that the greatest challenge is collecting data on technical skill attainment and student placement. Flexibility in Perkins IV and Education's guidance permits differences in how states implement the measures. According to our surveys, 34 states at the secondary level and 29 at the postsecondary level intend to adopt Education's recommended use of assessments, such as those for industry certifications, to measure technical skills. States reported that collecting data for the technical skill attainment and student placement measures poses the greatest challenge because of cost and concerns about their ability to access complete and accurate data.

Education ensures states are implementing the Perkins IV accountability requirements through on-site monitoring and off-site document reviews, and supports states through technical assistance and guidance. Monitoring findings were most often related to states failing to submit complete or reliable data, and Education uses its findings to guide the technical assistance it provides to states. States reported that Education's assistance has helped them implement the performance measures, but that more assistance with technical skill attainment would be helpful. Education is aware of states' need for additional assistance and has taken actions to address this, including facilitating a state-led committee looking at technical assessment approaches.

State performance measures are the primary source of data available to Education for determining the effectiveness of CTE programs, and Education relies on student outcomes reported through these measures to gauge the success of states' programs. Because only 2 of 11 measures (secondary and postsecondary have 3 measures in common) have been implemented and reported on thus far, Education has little information to date on program outcomes. In addition, Perkins IV does not require states to report to Education the findings of their program evaluations. In our surveys of state CTE directors, nearly half of states responded that they have conducted or sponsored a study to examine the effectiveness of their CTE programs.
We reviewed 7 of these studies and found that only 4 were outcome evaluations.
Created in 1789, Customs is one of the federal government's oldest agencies. Customs is responsible for collecting revenue from imports and enforcing customs and related laws. It also processes persons, carriers, cargo, and mail into and out of the United States. In fiscal year 1997, Customs collected about $19 billion in revenues and processed about 18 million import entries; about 128 million vehicles; and about 446 million air, land, and sea passengers entering the country. Customs performs its mission with a workforce of about 19,500 personnel at its headquarters in Washington, D.C., and at 20 Customs Management Centers (CMC), 20 Special Agent-in-Charge offices, and 301 ports of entry around the country. Of these 301 ports, 24 are located along the Southwest border and—through 39 crossing points (such as bridges)—handle both passengers and commercial cargo entering the United States. At the end of fiscal year 1997, Customs had deployed about 28 percent of its inspectors and about 62 percent of its canine enforcement officers at ports along the Southwest border. This compared with about 24 percent of its inspectors and about 50 percent of its canine enforcement officers deployed at the Southwest border in fiscal year 1992, the earliest year for which complete data were available. This deployment represented an increase of about 36 percent in the number of inspectors and about 67 percent in the number of canine enforcement officers at the Southwest border over the fiscal year 1992 level.

A major goal of Customs is to prevent the smuggling of drugs into the country by attempting to create an effective drug interdiction, intelligence, and investigation capability that disrupts and dismantles smuggling organizations. The Commissioner of Customs has designated this goal as the highest priority within Customs. Specifically, as 1 of more than 50 federal agencies involved in the national drug control effort, Customs is responsible for stopping the flow of illegal drugs through the nation's ports of entry. In addition to routine inspections to search passengers, cargo, and conveyances (these include cars, buses, trucks, aircraft, and marine vessels) for illegal drugs moving through the ports, Customs' drug interdiction efforts include investigations and the use of contraband enforcement teams and canine enforcement officers.

In February 1995, Customs initiated Operation Hard Line along the Southwest border to address drug smuggling, including port running (the practice of racing drug-laden conveyances through a Customs inspection point), and related border violence through increased and intensified inspections, improved facilities, and the use of technology. According to Customs officials, port running had increased in part as a result of enforcement operations conducted by the Immigration and Naturalization Service between the ports of entry along the Southwest border. Customs has expanded its anti-smuggling initiative, called Operation Gateway, beyond the Southwest border to the southern tier of the United States to include the Caribbean and Puerto Rico.

According to Customs officials, in keeping with the need to perform a multifaceted mission, Customs does not generally allocate inspectors to ports of entry exclusively to perform drug enforcement. Accordingly, while it is the highest priority, drug enforcement is only one of many functions that inspectors are expected to perform when inspecting goods and persons.
However, in an effort to enhance its drug enforcement operations, Customs has been using more specialized drug enforcement units, such as contraband enforcement teams, and assigning inspectors to such units on a rotational basis.

The Results Act was enacted to improve the efficiency and effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. Under the Results Act, executive agencies were to develop, by September 1997, strategic plans in which they defined their missions, established results-oriented goals, and identified the strategies they would use to achieve those goals for the period covering at least fiscal years 1997 through 2002. These plans are to be updated at least every 3 years. Beginning in fiscal year 1999, agencies are also to develop annual performance plans. The Results Act requires that these plans (1) identify annual performance goals and measures for each of an agency's program activities, (2) discuss the strategies and resources needed to achieve the performance goals, and (3) explain the procedures the agency will use to verify and validate its performance data.

Customs' Strategic Plan for fiscal years 1997 through 2002 established a goal and a number of objectives designed to continue Customs' multipronged drug enforcement effort and increase the risk of detection for those smuggling illegal drugs into the country. The plan also included measures, such as the number and amounts of narcotics seizures, to gauge the success of the enforcement efforts, and it proposed conducting internal evaluations of specific components of the strategy, such as narcotics interdiction. Customs' fiscal year 1999 Annual Performance Plan detailed performance goals and measures for each of its operational activities. The plan also discussed the strategies and proposed resources that would be used to achieve the goals.

Customs does not have an agencywide process for annually determining its need for inspectional personnel—such as inspectors and canine enforcement officers—and for allocating these personnel to commercial cargo ports of entry nationwide. Customs officials were not aware of any such process to determine inspectional personnel needs prior to 1995. While Customs has moved in this direction by conducting three assessments to determine its need for additional inspectional personnel since 1995, these assessments (1) focused exclusively on the need for additional personnel to implement its anti-drug smuggling initiatives, such as Operation Hard Line; (2) were limited to land ports along the Southwest border and certain sea and air ports at risk from drug smuggling; (3) were conducted each year using different assessment and allocation factors; and (4) were conducted with varying degrees of involvement by Customs headquarters and field units. Focusing on only a single aspect of its operations (i.e., countering drug smuggling); not consistently including the key field components (i.e., CMCs and ports) in the decisionmaking process; and using different assessment and allocation factors from year to year could prevent Customs from accurately estimating the need for inspectional personnel and then allocating them to ports.

According to Customs officials, they were not aware of any agencywide efforts prior to 1995 to determine the need for additional inspectional personnel at commercial cargo ports of entry.
Rather, CMCs (then called districts) requested additional personnel primarily when new ports were established. For example, when the new port at Otay Mesa, California, was established, the Southern California CMC (then called the San Diego District) requested from headquarters, and was allocated, some additional personnel to staff the port. Separately, according to Customs officials, as part of the annual budget request development process, CMCs can also submit requests for inspectional personnel to fill vacancies in existing positions created by attrition.

On a broader basis, according to officials at Customs' Anti-Smuggling Division (ASD), beginning in the late 1980s, Customs redeployed some existing inspectional personnel in response to the increasing workload and drug smuggling threat along the Southwest border. For example, as shown in figures 1 and 2 (see also app. II), prior to the Hard Line buildup, there was an increase in inspectional personnel—inspectors and canine enforcement officers—at Southwest border ports between fiscal years 1993 and 1994. According to ASD officials, this was done in preparation for the implementation of the North American Free Trade Agreement and the anticipated increase in related workload. According to these officials, Customs accomplished the pre-Hard Line buildup by reallocating positions that had become vacant through attrition from ports around the country—such as those on the border with Canada—to the Southwest border.

Customs' personnel needs assessment process for fiscal years 1997 through 1999 focused exclusively on its anti-drug smuggling initiatives, namely Operations Hard Line and Gateway. In focusing on only one aspect of its cargo and passenger operations (i.e., countering drug smuggling), Customs is not identifying the need for inspectional personnel for its overall cargo processing operations. According to Customs and Treasury officials, the impetus for focusing the needs assessments on the anti-smuggling initiatives, beginning with Hard Line, was a June 1995 visit by the Deputy Secretary of the Treasury to ports within the Southern California CMC to observe how Hard Line was being implemented. According to these officials, the Deputy Secretary expressed concern about Hard Line's implementation, especially about the extensive use of overtime and the apparent lack of results in terms of drug seizures. According to the officials, the Deputy Secretary concluded that the Southwest border ports did not have a sufficient number of inspectors and other personnel to adequately implement Hard Line. As a result, the Deputy Secretary asked Customs officials to review the staffing situation at the Southwest border ports and prepare a proposal for additional staffing and other measures to enhance Hard Line's implementation.

In response to the Deputy Treasury Secretary's concerns about Operation Hard Line, Customs conducted a needs assessment in 1995 for its fiscal year 1997 budget submission. Specifically, in June 1995, ASD asked the four Customs districts (now called CMCs) along the Southwest border to develop estimates of their inspectional personnel needs. The four districts were San Diego, California (now the Southern California CMC); El Paso, Texas (now the West Texas CMC); Laredo, Texas (now the South Texas CMC); and Nogales, Arizona (now the Arizona CMC). The factors used in this assessment and its results are discussed later in this report.
Because they focused on Customs' anti-smuggling initiatives, the inspectional personnel needs assessments that began in 1995 were accordingly limited to ports along the Southwest border and the southern tier of the United States, and to sea and air ports determined to be at risk from drug smuggling. Also, these assessments focused only on the need for additional personnel at these ports. Specifically, Customs did not conduct a review of its 301 ports to determine (1) the appropriate staffing levels at each one of these ports and (2) whether it was feasible to permanently reallocate inspectors to the Southwest border ports and other high-risk ports from other ports around the country that potentially had, at that time, higher levels of inspectors than justified by workload and other factors, before assessing the need for additional personnel.

In addition, Customs' strategic plan and the fiscal year 1999 Annual Performance Plan did not provide the detail necessary to determine the level of personnel needed and how Customs planned to align, or allocate, these personnel to meet its plans' goals and objectives. The strategic plan, however, recognized the need to assess the allocation of resources, including personnel, and their effectiveness and to address any necessary redeployments, while Customs' fiscal year 1998 Annual Plan identified the linkage of its goals with available and anticipated resources as an area that needed attention.

Customs officials said that they did not conduct broad-based assessments because the results of these assessments would likely indicate the need to move inspectional personnel. These officials stated that moving personnel would be difficult for four primary reasons. First, about 1,200 current inspectional positions are funded through revenues from user fees established by the Consolidated Omnibus Budget Reconciliation Act of 1985, as amended, codified at 19 U.S.C. 58c. These positions are funded for specific purposes at specific locations, such as processing arriving passengers at air and sea ports, in proportion to the revenues contributed by each user fee category. For example, according to Customs officials, since air passenger fees contributed about 85 percent of all user fee revenues, air ports would receive 85 percent of all inspectors funded by the fee revenues. Consequently, Customs cannot redeploy such positions to other locations for other purposes, such as inspecting cargo at commercial cargo ports.

Second, under the terms of its union contract, to permanently move inspectors from one CMC to another, Customs needed to ask for volunteers before directing the reassignment of any inspectors. However, when Customs asked for 200 volunteers—a number far short of what was ultimately estimated as being needed—to be detailed to the Southwest border to help implement Operation Hard Line, very few volunteers emerged. Consequently, Customs abandoned its call for volunteers and decided to implement Hard Line with existing personnel by relying on the use of overtime. According to Customs officials, in other instances requiring inspectors to move, if volunteers did not emerge, Customs would need to select the most junior inspectors for reassignment. However, for operational reasons related primarily to inspector experience, Customs did not prefer this option.

Third, funding historically was not requested in the President's budgets or appropriated by Congress for permanent changes of station (i.e., permanent moves) because of the high cost involved.
Customs officials estimated that it cost between $50,000 and $70,000 to move an inspector, thus making any substantial number of moves prohibitively expensive. However, more recently, the President's budgets have requested funding for redeploying Customs agents, and Congress has appropriated such funding. For example, in the fiscal year 1998 budget, $4 million was requested for agent redeployments, and Congress appropriated the requested amount.

Fourth, by 1995, Customs had already reallocated positions to the Southwest border from other ports as the positions became vacant through attrition. However, according to ASD officials, Customs could no longer reallocate positions in this manner because some non-Southwest border ports were experiencing staffing shortages due to attrition and growing workloads and needed to fill their vacancies.

Customs' three needs assessments used different factors from year to year to determine the need for additional inspectional personnel. However, Customs' decision not to consider every year factors critical to accomplishing its overall mission—such as port configuration, which was used in the fiscal year 1997 assessment—could have prevented Customs from estimating the appropriate level of personnel at each port.

For the fiscal year 1997 assessment, ASD provided the Southwest border districts with a number of factors to use in determining the need for additional inspectors and canine enforcement officers for their cargo and passenger operations. The factors were based primarily on the configuration of ports, which reflects Customs' mission-critical functions in addition to its drug enforcement functions. The factors were (1) the need to fully staff all primary passenger lanes, taking into account agreements with the Immigration and Naturalization Service (INS), including inspectors to conduct preprimary roving; (2) the need to fully staff cargo facilities (primary booths and examination docks), while taking into account the balance between Customs' enforcement mission and the need to facilitate the movement of legitimate conveyances and their cargo; and (3) the need for canine enforcement officers to support all cargo and passenger processing operations. The districts were also asked to (1) assume that they were going to at least maintain the examination rates being achieved at the time of the assessment, based on the national standard to examine a minimum of 20 percent of a selected conveyance's cargo, and (2) consider the overall drug smuggling threat at ports.

Unlike in the fiscal year 1997 needs assessment process, which was based on the configuration of ports, ASD officials said they used the threat of drug smuggling at commercial cargo land ports along the Southwest border and at air and sea ports on the southern tier of the United States and other locations to estimate the number of additional inspectors needed in fiscal years 1998 and 1999. However, in not considering land border port configurations, Customs did not take into account changes in the configurations that could have implications for the number of inspectional personnel needed.

For fiscal year 1998, ASD officials said they focused on three aspects of the drug smuggling threat: (1) the number and location of drug seizures, since they were evidence of the threat; (2) the use of rail cars by drug smugglers to smuggle drugs; and (3) the existence of internal conspiracies by individuals, such as dock workers at ports, to smuggle drugs.
According to the ASD officials, the latter two factors represented the evolving nature of the drug smuggling threat and needed to be addressed. For fiscal year 1999, in addition to the latter two factors used in fiscal year 1998, ASD said it also considered the need to address the continued evolution of the drug smuggling threat, namely (1) an increase in drug smuggling using waterways bordering the United States and (2) an expansion in the number of drug smuggling organizations operating in U.S. cities. According to an ASD official, the factors used in fiscal years 1998 and 1999 were meant to balance Customs' continued emphasis on the drug smuggling threat along the Southwest border and the need to address new threats in other areas, such as Miami and Los Angeles.

The processes to allocate the inspectional personnel funded by Congress in fiscal years 1997 and 1998 generally used different needs assessment factors. For fiscal year 1997, ASD used commercial truck volume to allocate the new cargo inspectors to the Southwest border ports. According to ASD officials, they used the workload data because they believed that the drug smuggling threat ultimately manifested itself in terms of conveyance and passenger traffic—commercial trucks, passenger vehicles, and pedestrians at land ports; aircraft and passengers at air ports; and vessels, cargo, and passengers at sea ports—and the likelihood that any one of these conveyances or passengers could carry drugs through any port at any time. ASD used an estimated ratio of 10 inspectors for every 100,000 laden (full) trucks and 5 inspectors for every 100,000 empty trucks to allocate the additional personnel to the Southwest border ports. An ASD official said that the ratio was based on ASD's experience with the number of inspectors and the length of time needed to inspect laden and empty trucks. In addition, according to this official, ASD used the ratio because it was relatively easy to understand and implement and was generally supported by the CMCs and ports receiving the resources, such as the Otay Mesa port.

For fiscal year 1998, ASD officials stated that they used the same aspects of the drug smuggling threat used for that year's needs assessment to allocate the inspectional personnel that were funded. Accordingly, to address the use of commercial rail cars to smuggle drugs, for example, ASD estimated that a team of four to eight inspectors was needed to inspect a commercial cargo train, depending on the number of rail cars. Using this estimate, ASD allocated inspectors to ports with rail car inspection operations that were facing a drug smuggling threat, such as Laredo and Brownsville, Texas.

However, because Customs did not consider its entire workload, it did not take into account the anticipated growth in trade volume and the potential resulting need for additional inspectional personnel to handle this growth. Further, given that Customs identified workload as an indicator of the drug smuggling threat, it could not respond to the escalation of the threat represented by growth in its entire workload. Customs officials commented that, since a limited number of additional inspectors were available for allocation to rail operations, they allocated a minimum number of inspectors to each port with such operations, without considering the workload.
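The truck-volume ratio described above translates workload counts into inspector allocations through simple arithmetic. The following sketch illustrates how such a ratio could be applied; the port names and truck volumes are hypothetical, and the computation is our illustration rather than ASD's actual allocation model, which may have involved rounding and other adjustments.

```python
# Illustrative application of ASD's fiscal year 1997 allocation ratio:
# 10 inspectors per 100,000 laden trucks and 5 per 100,000 empty trucks.
# The port names and annual truck volumes below are hypothetical examples,
# not figures from this report.

LADEN_RATE = 10 / 100_000   # inspectors per laden (full) truck
EMPTY_RATE = 5 / 100_000    # inspectors per empty truck

ports = {
    "Port A": {"laden": 700_000, "empty": 150_000},
    "Port B": {"laden": 250_000, "empty": 100_000},
}

for name, volume in ports.items():
    # Each port's allocation is the sum of the two workload components.
    allocation = volume["laden"] * LADEN_RATE + volume["empty"] * EMPTY_RATE
    print(f"{name}: {allocation:.1f} inspectors")

# Port A: 700,000 laden trucks yield 70 inspectors and 150,000 empty
# trucks yield 7.5, for an allocation of 77.5 (in practice, rounded).
```

A ratio of this kind has the virtue the ASD official cited: it is transparent, easy to apply, and easy for the receiving CMCs and ports to verify.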
In fiscal year 1997, ASD officials stated that, working with Customs’ Canine Branch, they used workload and the extent of the drug smuggling threat to allocate the additional canine enforcement officers to ports. No canine enforcement officers were requested or appropriated for fiscal year 1998. Customs’ needs assessments and allocations were conducted with varying degrees of involvement by headquarters and field units. ASD had the lead role in the assessments and allocations, while other units—CMCs and ports—had more limited roles. Specifically, while Southwest border CMCs and ports conducted the fiscal year 1997 assessment, they were not involved in the subsequent allocation of the personnel that were funded. In its role, ASD compiled the results of the needs assessment. Customs’ Office of Investigations estimated the number of investigative agents and other staff needed to support the inspectors and canine enforcement officers. The Canine Branch estimated the number of canine enforcement officers needed to provide operational support to inspectors. As described earlier, ASD also allocated the inspectors and, working with the Canine Branch, the canine enforcement officers that were funded by Congress. The Office of Investigations allocated the agents and support staff. ASD conducted the fiscal year 1998 needs assessment and allocation processes and the fiscal year 1999 needs assessment. CMCs and ports affected by ASD’s proposed fiscal year 1998 allocations were asked to comment on them. Five CMCs and two ports submitted written comments. Three CMCs indicated that they were satisfied with the number of additional inspectors to be allocated to them. Two CMCs and two ports indicated that additional inspectors were needed. ASD officials said that they took these comments into consideration when finalizing the allocation. According to ASD officials, they assumed a leading role because the needs assessments and subsequent allocations were being conducted exclusively in support of Customs’ anti-drug smuggling initiatives, such as Operation Hard Line. These initiatives are conducted under ASD’s oversight. In addition, an ASD official explained that ASD was fully cognizant of the threat, workload, and other factors relevant to the needs assessment and allocation processes at the CMCs and ports; thus, it was able to conduct them without the need to consult extensively with the CMCs and ports. However, because it did not fully involve the two key field components responsible for day-to-day operations (i.e., the CMCs and ports) throughout the needs assessment and allocation processes, Customs received no input from those who, by virtue of their operational roles, are in the best position to know the levels of inspectional personnel they need. The Results Act specifies that all agencies’ strategic plans should have six critical components. Among these is the establishment of approaches or strategies to achieve general goals and objectives. In addition, the Results Act requires that, beginning in fiscal year 1999, agencies must develop annual performance plans to establish a link between their budget requests and performance planning efforts. The Act also envisioned that the strategic and annual performance planning cycles would be iterative, mutually reinforcing processes. 
We have previously reported that under strategic planning envisioned by the Results Act, as part of establishing strategies to achieve goals, strategic plans and annual performance plans need to describe, among other things, (1) the human and other resources needed and (2) how agencies propose to align these resources with their activities to support mission-related outcomes. We have accordingly pointed out that in order to effectively implement the Results Act, and as part of the annual performance planning process, agencies will need to consider how they can best deploy their resources to create a synergy that effectively and efficiently achieves performance goals.

Consequently, to effectively implement the Results Act, Customs will need to consider the relationship, or link, between the personnel it will have available and the results it expects these personnel to produce. However, its most recent estimates of the need for inspectional personnel and allocations of such personnel to ports were too narrowly focused on certain aspects of its operations and too limited to certain ports to clearly achieve such a link for all of its operations. As discussed earlier, in its strategic plan, Customs has already recognized the need to review the deployment of its resources, including personnel; evaluate their effectiveness; and address any necessary redeployments. In addition, in its fiscal year 1998 Annual Plan, Customs has identified the linkage of available and anticipated resources with achieving performance goals as an area that needs attention.

The President's budgets did not request all of the additional personnel Customs' assessments indicated it needed. According to Customs and Treasury officials, budget constraints, drug enforcement policy considerations, and legislative requirements affected the number of personnel Customs could request and how it could allocate those it received.

For its fiscal year 1997 personnel needs assessment, Customs' four districts (now CMCs) along the Southwest border estimated that they needed 931 additional inspectors and canine enforcement officers to adequately implement Operation Hard Line. While reviewing this assessment, the Office of Investigations determined that an additional 75 agents and 30 support staff for the agents were needed to complement the districts' request. This raised the estimate to a total of 1,036 additional positions. According to ASD, CMC, and port officials, this estimate represented the minimum number of additional positions needed to adequately implement Hard Line. The President's fiscal year 1997 budget ultimately requested 657 additional inspectors, canine enforcement officers, agents, and support staff, or about 63 percent of Customs' original estimate. Congress appropriated funds for the 657 positions. In terms of inspectional personnel specifically for commercial cargo, Customs received funding for about 80 percent (260 of 325) of the additional inspectors, 63 percent (157 of 249) of the additional canine enforcement officers, and about 96 percent (101 of 105) of the additional agent and support positions originally estimated as being needed. Figure 3 provides a position-by-position comparison of what Customs estimated it needed for fiscal year 1997 and what was actually requested and appropriated. Tables 1 and 2 show how the funded inspector and canine enforcement officer positions in fiscal year 1997 were allocated to CMCs and how these allocations compared with the original Customs estimates.
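The funded shares cited above follow directly from the position counts. As an arithmetic check, the short sketch below recomputes each percentage from the figures reported in this section.

```python
# Recomputing the funded shares of Customs' fiscal year 1997 estimates
# from the position counts cited in this report (funded, estimated).

categories = {
    "total positions": (657, 1036),
    "cargo inspectors": (260, 325),
    "canine enforcement officers": (157, 249),
    "agent and support positions": (101, 105),
}

for name, (funded, estimated) in categories.items():
    print(f"{name}: {funded:,} of {estimated:,} ({funded / estimated:.0%})")

# total positions: 657 of 1,036 (63%)
# cargo inspectors: 260 of 325 (80%)
# canine enforcement officers: 157 of 249 (63%)
# agent and support positions: 101 of 105 (96%)
```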
Fewer inspectional positions were requested for Customs than it originally determined were needed in fiscal year 1998. For that year, Customs initially estimated that it needed 200 additional cargo inspectional positions. However, the President's fiscal year 1998 budget requested 119 additional cargo inspectional positions, or about 60 percent of Customs' original estimate. Congress appropriated funding for the 119 positions. Table 3 shows how funded inspector positions were allocated and how these allocations compared with the original Customs estimates.

For fiscal year 1999, Customs estimated that it needed an additional 479 inspectors, 85 canine enforcement officers, 211 agents, 33 intelligence analysts, and 68 marine enforcement officers, for a total of 876 additional positions. However, the President's fiscal year 1999 budget requested 27 agents as part of a separate initiative called the "Narcotics and Drug Smuggling Initiative" to counter drug smuggling. This request represented 3 percent of Customs' total estimate and about 13 percent of the estimate for agents.

Customs and Treasury officials cited internal and external budget constraints, drug enforcement policy considerations, and legislative requirements as the primary factors affecting the number of additional personnel that Customs requested and the manner in which it allocated appropriated personnel or reallocated existing personnel.

Budget constraints affected the number of additional inspectional personnel that Customs requested for fiscal years 1997, 1998, and 1999. Specifically, according to Customs officials, internal budget constraints resulted in Customs reducing its original 1,036-position request for fiscal year 1997 to 912 positions. Customs subsequently submitted its request for 912 additional positions to Treasury. Treasury officials, also citing budget constraints, including their decision to maintain budget requests within OMB's overall targets for Treasury, further reduced Customs' request to 657 additional positions. Customs' fiscal year 1997 request was transmitted by Treasury as part of its departmental request to OMB for an initial review. According to Customs and Treasury officials, upon review, OMB denied the request. Instead, OMB recommended that Customs move 240 existing inspectional positions to the Southwest border to help implement Operation Hard Line. OMB's decision was appealed by Treasury and ONDCP, which had already certified Customs' request for 657 additional positions as adequate to meet the National Drug Control Strategy's goals. In its appeal, Treasury cited the detrimental effect OMB's denial would have on Customs' drug enforcement operations, including its inability to increase the number of cargo examinations. In its own appeal, ONDCP identified the level of personnel for Customs as a critical issue and argued that the requested inspectional personnel were needed to strengthen the Southwest border against drug smuggling. According to Customs officials, through subsequent negotiations following Treasury's appeal, which involved, among others, the Treasury Secretary and the OMB Director, OMB approved the 657-position request. The 657 positions—and $65 million to fund them—were ultimately funded when Treasury received an additional $500 million from Congress as part of its budget appropriation, according to Treasury officials.
As discussed earlier, for fiscal year 1998, Customs originally estimated that 200 additional cargo inspectors were needed for air and sea ports determined to be at risk from drug smuggling. None of the ports were located along the Southwest border. Treasury initially denied Customs' request for 200 positions and later approved for submission to OMB a request for 119 additional positions. Customs and Treasury officials again cited budget constraints as the reason for the reduction. As part of its role (see footnote 12), ONDCP certified the submission as adequate. According to Customs budget documents, the 119 positions were funded for 1 year with appropriations from the Violent Crime Reduction Trust Fund. For fiscal year 1999, the President's budget is proposing that these positions be funded from Customs' Salaries and Expenses account.

For fiscal year 1999, Customs estimated that it needed an additional 876 inspectional and related positions for its anti-drug smuggling initiatives at the Southwest border and at air and sea ports believed to be at risk from drug smuggling. According to an ASD official, Treasury supported and ONDCP certified this estimate, which was then transmitted to OMB for review. OMB denied the request. Instead, the President's fiscal year 1999 budget included a separate 27-agent anti-narcotics initiative.

The resource allocation process was affected by policy considerations related to the drug smuggling threat. Specifically, for fiscal year 1997, ASD changed its initial allocation of inspectional personnel to include a port not located at the Southwest border. ASD had originally planned to allocate all of the additional inspectors and canine enforcement officers exclusively to Southwest border ports. According to an ASD official, ASD modified the allocation because the South Florida CMC appealed to the Commissioner of Customs for additional inspectional personnel, citing a significant drug smuggling threat as indicated by the number of cocaine seizures—totaling about 10,000 pounds—at Port Everglades. The Commissioner agreed with the appeal. Consequently, ASD adjusted its allocation to provide nine inspectors and canine enforcement officers to this port.

For fiscal year 1998, during the course of its review of Customs' request for the additional resources and its plans to allocate them, ONDCP directed Customs to change its allocation to include cargo ports along the Southwest border. According to Customs and ONDCP officials, this was done to maintain the National Drug Control Strategy's emphasis on the Southwest border. Subsequently, Customs reallocated 47 of the 119 positions to Southwest border ports.

According to Customs officials, the potential reallocation of existing inspectional personnel has also been affected by legislative requirements. As discussed earlier, positions funded with the user fees established in the Consolidated Omnibus Budget Reconciliation Act of 1985, as amended, cannot be redeployed because these positions are funded for specific purposes at specific ports. In addition, according to the officials, the positions funded in the fiscal year 1997 appropriation for Operation Hard Line were to be used exclusively at Southwest border ports.

It is too early to definitively determine (1) any implications of Customs not receiving all of the inspectional personnel it originally estimated to be needed and (2) the effect on Customs' drug enforcement operations of the additional personnel that were appropriated.
According to Customs officials, the new inspectors need to gain experience before they are fully effective. Further, while many of the fiscal year 1998 inspectors have been hired, few, if any, have finished basic training. Customs plans to assess the effectiveness of drug enforcement operations by establishing performance measures and conducting internal evaluations.

One reason that it is too early to determine the impact of the additional inspectional personnel on Customs' drug enforcement operations is that new inspectors need to gain experience. For example, according to a Southern California CMC official, the CMC's policy is to provide extensive on-the-job training lasting up to 1 year to new inspectors at its passenger processing port before deploying them to cargo processing. New inspectors are effective in interdicting drugs in the passenger processing environment but must receive commercial operations training to be proficient at drug interdiction in the truck and rail environments on the Southwest border. As a result, according to this official, it may take up to 2 years to fully train new inspectors in the skills needed in all areas of this CMC's operations. Also, the South Texas CMC Director said that, once the new inspectors were hired and trained, they were sent to this CMC for an additional 10 weeks of specialized training, of which 2 weeks were for cargo inspections. The Director estimated that it then takes about 6 months before new inspectors are fully productive on their own.

A second reason it is too soon to determine the full impact of the additional resources is that, while many of the 119 inspector positions funded for fiscal year 1998 have been filled, few, if any, of these inspectors have completed basic training. An ASD official said that, as of early April 1998, about 60 percent of the inspectors had been hired and were in basic training, and thus had not been deployed in the field.

Customs plans to evaluate the effectiveness of its anti-drug smuggling initiatives. To this end, in its fiscal year 1997 to 2002 Strategic Plan, Customs established seven measures or improvement targets, including the number and amount of drug seizures and the ratio of seizures to the number of cargo examinations conducted. Three other measures or targets—including the number of internal conspiracies disrupted—were being reviewed by Customs management at the time of the Strategic Plan's introduction for possible permanent inclusion in the Plan. Customs also proposed to conduct internal evaluations of its strategies, including the narcotics strategy. For example, it plans to evaluate the interdiction component of that strategy in fiscal year 1999. We have previously reported that, while Customs' goals and objectives appear to be results-oriented and measurable, it still faces challenges in evaluating its drug interdiction mission. For example, according to several Customs officials, it is unclear whether an increase in drug seizures indicates that Customs has become more effective or that the extent of drug smuggling has increased significantly.

Customs does not have an agencywide process for annually determining its need for inspectional personnel—such as inspectors and canine enforcement officers—for all of its cargo operations and for allocating these personnel to commercial ports of entry. Customs has moved in this direction since 1995 by conducting three assessments to determine its need for additional inspectional personnel.
However, these assessments (1) focused exclusively on the need for additional resources to implement Operation Hard Line and other anti-smuggling initiatives, (2) were limited to land ports along the Southwest border and certain sea and air ports at risk from drug smuggling, (3) were conducted each year using different assessment factors, and (4) were conducted with varying degrees of involvement from Customs units. Focusing on only a single aspect of its operations (i.e., countering drug smuggling), not consistently including the key field components (i.e., CMCs and ports) in the decisionmaking process, and using different assessment and allocation factors from year to year could prevent Customs from accurately estimating the need for inspectional personnel and then allocating them to ports. In conducting its strategic planning under the Results Act, Customs will need an annual approach that considers all of its commercial ports, its mission-related functions, and the impact of technology and related equipment so that it can determine the inspectional personnel it would need to achieve the desired mission outcomes it details in its strategic and annual performance plans. Customs, in its strategic planning documents, has already recognized the need to review its personnel deployments, evaluate their effectiveness, and make any necessary changes, and to address the link between performance goals and existing and anticipated resources, including personnel. We recognize that Customs’ requests for inspectional personnel will continue to be influenced by budget, policy, and legislative constraints. However, we believe that by developing a process that, in addition to considering drug enforcement activities, also considers mission-critical functions related to processing cargo at commercial ports, Customs would be able to provide Treasury, OMB, ONDCP, and Congress with more systematically developed personnel needs estimates and rationales for these estimates. We recommend that, as a sound strategic planning practice, and taking into account budget and other constraints, the Commissioner of Customs establish a systematic process to ensure, to the extent possible, that Customs’ inspectional personnel are properly aligned with its goals, objectives, and strategies, including those for drug enforcement. Such a process should include conducting annual assessments to determine the appropriate staffing levels for its operational activities related to processing cargo at commercial ports. We requested comments on a draft of this report from the Director of OMB, the Director of ONDCP, and the Secretary of the Treasury, or their designees. On April 2, April 6, and April 8, 1998, respectively, the Chief of OMB’s Treasury Branch; the Director of ONDCP’s Office of Programs, Budget, Research, and Evaluation; and the Assistant Commissioner of Customs’ Office of Field Operations provided us with their agencies’ oral comments on the draft. These officials generally agreed with our conclusions and recommendation. The officials also provided technical comments and clarifications, which we have incorporated in this report where appropriate. The Assistant Commissioner indicated that Customs had already undertaken steps to begin implementing the recommendation. We are sending copies of this report to the Secretary of the Treasury, the Commissioner of Customs, and to the Chairmen and Ranking Minority Members of the congressional committees that have responsibilities related to these issues. 
Copies also will be made available to others upon request. The major contributors to this report are listed in appendix III. If you or your staff have any questions about the information in this report, please contact me at (202) 512-8777 or Darryl Dutton, Assistant Director, at (213) 830-1000. Our objectives in this review were to determine (1) how Customs assesses its needs for inspectional personnel and allocates these personnel to commercial ports of entry, (2) whether Customs received all the inspectional personnel its assessments indicated it needed, and (3) whether there are any known implications of Customs’ not receiving all of the personnel estimated to be needed and the impact of the additional personnel that were appropriated on Customs’ drug enforcement operations. To determine how Customs assesses its needs for inspectional personnel and allocates these personnel to commercial ports of entry, we obtained and reviewed relevant documentation. The documentation included (1) a headquarters directive to the then-districts—now Customs Management Centers (CMC)—and ports initiating an assessment of the needs for inspectional personnel, (2) CMCs’ detailed responses to this directive, (3) budget proposals and requests, and (4) matrices developed by Customs headquarters that are used to allocate the inspectional personnel appropriated by Congress to ports of entry. The documentation also included summaries of current and historical workloads and staffing levels and assessments of the drug smuggling threat. We discussed these documents and related issues with cognizant officials from Customs’ Anti-Smuggling Division within the Office of Field Operations, the Budget Division within the Office of Finance, and the CMCs and ports we visited or contacted. We also held discussions with officials from the Department of the Treasury’s Office of Finance and Administration and Office of Budget, the Office of Management and Budget’s (OMB) Treasury Branch, and the Office of National Drug Control Policy (ONDCP). To determine whether Customs received all the inspectional personnel it estimated were needed, we obtained and reviewed relevant budget documents, such as internal Customs and Treasury memorandums, reports, and budget request reviews, and congressional appropriations legislation. We compared the appropriated levels with those that were estimated as needed and discussed discrepancies with cognizant Customs, Treasury, and OMB officials. To determine the known implications, if any, of Customs’ not receiving all of the personnel it estimated were needed, we obtained and reviewed relevant documents, such as summaries of Operation Hard Line and Customs’ Strategic Plan. We also interviewed cognizant Customs officials at headquarters and at CMCs and ports of entry. During these interviews, we focused on the effect, if any, that not receiving the level of personnel originally estimated to be needed had on Customs’ enforcement activities at ports of entry. We also used this information to determine if the potential contributions of the additional personnel that were provided to Customs could be identified. We visited the Southern California and South Texas CMCs and contacted the Arizona CMC by telephone because they represented three of the four CMCs along the Southwest border of the United States. 
We visited the Otay Mesa, California, and Laredo, Texas, ports of entry and contacted the Nogales, Arizona, port of entry by telephone because they each were among the busiest ports within their respective CMCs in terms of the number of vehicles and commodities entering the United States each day. The ports also processed a diverse mix of imports, including produce, television sets, and liquor. Laredo consists of two separate cargo facilities: the downtown Laredo facility and the Colombia Bridge facility; combined, they form the busiest commercial cargo port along the Southwest border. For the purposes of this review, we focused only on the operations of the Laredo facility, the busier of the two facilities. During fiscal year 1996, the Laredo facility handled about 732,000 vehicles, which was an average of 2,007 vehicles per day. The Laredo facility had 13 dock spaces to examine trucks and cargo and, as of July 1997, had a staff of 49 inspectors, canine enforcement officers, and supervisors. The Laredo facility is located 154 miles south of San Antonio, Texas. Otay Mesa was the third busiest commercial cargo port on the Southwest border. In fiscal year 1996, Otay Mesa handled over 516,000 vehicles, which was an average of 1,422 vehicles per day. The port had over 100 dock spaces available for inspections and, as of July 1997, had 110 inspectors, canine enforcement officers, and supervisors. Otay Mesa is located about 15 miles south of San Diego. Nogales, Arizona, was the fifth busiest commercial cargo port on the Southwest border, handling about 208,000 vehicles during fiscal year 1996, which was an average of 572 vehicles per day. Nogales had 92 dock spaces dedicated to Customs inspections and, as of April 1997, had a staff of 27 inspectors, canine enforcement officers, and supervisors. The port is located 67 miles south of Tucson, Arizona. Since it was not material for the purposes of this review, we did not independently verify the accuracy and validity of Customs’ workload and personnel data. However, to obtain some indication about the overall quality of the data and Customs’ own confidence in their accuracy and validity, we held discussions with a cognizant Customs official. According to this official, the personnel data resided in Customs’ Office of Human Resources database. The workload data resided in its Port Tracking System database. The Customs official expressed general confidence in the accuracy and validity of the data. He said his confidence was based on the fact that the data were compiled using standardized definitions and entry formats. The number of Customs inspectional personnel—inspectors and canine enforcement officers—increased overall between fiscal year 1992, the earliest year for which complete data were available, and fiscal year 1997. During the same period, the number and percentage of inspectional personnel deployed at the Southwest border, while increasing overall, fluctuated from year to year. According to Customs officials, year-to-year fluctuations in personnel levels could be attributed in part to the effects of attrition. For example, while additional positions may have been funded for a particular year or purpose (for example, in fiscal year 1997, for Operation Hard Line), others may have become vacant through retirement. In addition, according to Customs and Treasury officials, other positions could be lost because of the effects of reductions in Customs’ baseline funding. 
For example, in fiscal year 1997, Customs had to absorb a reduction of $38 million in its baseline funding to address unfunded mandates. As a result, 733 positions were removed through a comparability adjustment by OMB because they could not be funded. According to OMB officials, a comparability adjustment brings an agency’s authorized staffing levels more into line with actual funded levels. The loss of the 733 positions more than offset the 657 additional positions appropriated for Operation Hard Line, according to Customs officials. The Customs officials also cautioned that end-of-year data represented only a point-in-time snapshot of personnel levels. Accordingly, funded personnel levels throughout a particular year could have been lower or higher than the end-of-year number. Table II.1 shows that the number of Customs inspectors overall grew by about 17 percent between fiscal years 1992 and 1997. During the same period, while fluctuating from year to year, the number of inspectors deployed at the Southwest border grew by about 36 percent. The number of inspectors deployed at the Southwest border as a percentage of all Customs inspectors also fluctuated from year to year, but grew from about 24 percent of the total in fiscal year 1992 to about 28 percent in fiscal year 1997.

[Table II.1: Customs inspectors overall and at the Southwest border, fiscal years 1992-1997, with percentage change over the period; the underlying data are not reproduced here. Note 1: Fiscal year 1992 was the earliest year that complete data were available. Note 2: Inspector numbers could not be separated by passenger and cargo processing functions. According to Customs, ports shift inspectors between functions, depending on workload.]

Table II.2 shows that the number of Customs canine enforcement officers overall increased between fiscal years 1992 and 1997 by about 37 percent. The number of canine enforcement officers deployed at the Southwest border fluctuated during the same period, while growing by about 67 percent. The number of canine enforcement officers deployed at the Southwest border as a percentage of the total, while also fluctuating from year to year, increased from about 50 percent in fiscal year 1992 to about 62 percent in fiscal year 1997.

[Table II.2: Customs canine enforcement officers overall and at the Southwest border, fiscal years 1992-1997, with percentage change over the period; the underlying data are not reproduced here. Note 1: Fiscal year 1992 was the earliest year that complete data were available. Note 2: Canine enforcement officer numbers could not be separated by passenger and cargo processing functions. According to Customs, ports shift canine enforcement officers between functions, depending on workload.]

Major contributor to this report: Kathleen H. Ebert, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed selected aspects of the Customs Service's drug enforcement operations, focusing on: (1) how Customs assesses its needs for inspectional personnel and allocates such resources to commercial cargo ports of entry; (2) whether Customs received all the additional inspectional personnel its assessments indicated it needed and, if not, why it did not receive them; and (3) whether there were any known implications of Customs' not receiving all of the personnel estimated to be needed and the impact of the additional personnel that were appropriated on Customs' drug enforcement operations. GAO noted that: (1) Customs does not have an agencywide process for annually determining its need for inspectional personnel--such as inspectors and canine enforcement officers--for all of its cargo operations and for allocating these personnel to commercial ports of entry nationwide; (2) while Customs has moved in this direction by conducting three assessments of its inspectional personnel needs, these assessments: (a) focused exclusively on the need for additional personnel to implement Operation Hard Line and similar initiatives; (b) were limited to land ports along the southwest border and certain sea and air ports considered to be at risk from drug smuggling; (c) were conducted each year using generally different assessment factors; and (d) were conducted with varying degrees of involvement by Customs headquarters and field units; (3) Customs conducted the three assessments in preparation for its fiscal year (FY) 1997, 1998, and 1999 budget request submissions; (4) for FY 1998 and FY 1999, Customs officials stated that they used factors such as the number and location of drug seizures and the perceived threat of drug smuggling, including the use of rail cars to smuggle drugs; (5) focusing on only a single aspect of its operations, not consistently including the key field components in the personnel decisionmaking process, and using different assessment and allocation factors from year to year could prevent Customs from accurately estimating the need for inspectional personnel and then allocating them to ports; (6) the President's budgets did not request all of the additional inspectional personnel Customs' assessments indicated were needed; (7) the President's FY 1997 budget ultimately requested 657 additional inspection and other personnel for Customs; (8) Customs and Department of the Treasury officials cited internal and external budget constraints, drug enforcement policy considerations, and legislative requirements as the primary factors affecting the number of additional personnel that Customs could ultimately request and the manner in which it could allocate or reallocate certain personnel; (9) further, for FY 1998, the Office of National Drug Control Policy directed Customs to reallocate to southwest border ports some of the additional 119 inspectors for which it had requested and been appropriated funds, in accordance with the priorities in the National Drug Control Strategy; and (10) finally, Customs could not move certain existing positions to the southwest border because Congress had directed Customs to use them for specific purposes at specific ports.
Federal agencies are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. The security of these systems and data is vital to public confidence and the nation’s safety, prosperity, and well-being. Virtually all federal operations are supported by computer systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, ineffective security controls to protect these systems and data could have a significant impact on a broad array of government operations and assets. Computer networks and systems used by federal agencies are often riddled with security vulnerabilities—both known and unknown. These systems are often interconnected with other internal and external systems and networks, including the Internet, thereby increasing the number of avenues of attack and expanding their attack surface. In addition, cyber threats to systems supporting the federal government are evolving and becoming more sophisticated. These threats come from a variety of sources and vary in terms of the types and capabilities of the actors, their willingness to act, and their motives. For example, foreign nations—where adversaries possess sophisticated levels of expertise and significant resources to pursue their objectives—pose increasing risks. Safeguarding federal computer systems has been a long-standing concern. This year marks the 20th anniversary of GAO’s designation, in 1997, of information security as a government-wide high-risk area. We expanded this high-risk area to include safeguarding the systems supporting our nation’s critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. Over the last several years, GAO has made about 2,500 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations identified actions for agencies to take to strengthen their information security programs and technical controls over their computer networks and systems. Many agencies continue to be challenged in safeguarding their information systems and information, in part because many of these recommendations have not been implemented. As of February 2017, about 1,000 of our information security-related recommendations had not been implemented. Our audits of the effectiveness of information security programs and controls at federal agencies have consistently shown that agencies are challenged in securing their information systems and information. In particular, agencies have been challenged in the following activities:

Enhancing capabilities to effectively identify cyber threats to agency systems and information: A key activity for assessing cybersecurity risk and selecting appropriate mitigating controls is the identification of cyber threats to computer networks, systems, and information. In 2016, we reported on several factors that agencies identified as impairing their ability to identify these threats to a great or moderate extent. The impairments included an inability to recruit and retain personnel with the appropriate skills, rapidly changing threats, continuous changes in technology, and a lack of government-wide information sharing mechanisms. 
We believe that addressing these impairments will enhance the ability of agencies to identify the threats to their systems and information and be in a better position to select and implement appropriate countermeasures.

Implementing sustainable processes for securely configuring operating systems, applications, workstations, servers, and network devices: In our reports, we routinely determine that agencies do not enable key information security capabilities of their operating systems, applications, workstations, servers, and network devices. Agencies were not always aware of the insecure settings that introduced risk to the computing environment. We believe that establishing strong configuration standards and implementing sustainable processes for monitoring and enabling configuration settings will strengthen the security posture of federal agencies.

Patching vulnerable systems and replacing unsupported software: Federal agencies we have reviewed consistently fail to apply critical security patches on their systems in a timely manner, sometimes applying them years after the patch becomes available. We have consistently identified instances where agencies use software that is no longer supported by the vendor. These shortcomings place agency systems and information at significant risk of compromise, since many successful cyberattacks exploit known vulnerabilities associated with software products. We believe that using vendor-supported and patched software will help to reduce this risk.

Developing comprehensive security test and evaluation procedures and conducting examinations on a regular and recurring basis: Federal agencies we have reviewed often do not test or evaluate their information security controls in a comprehensive manner. The agency evaluations we reviewed were sometimes based on interviews and document reviews (rather than in-depth security evaluations), were limited in scope, and did not identify many of the security vulnerabilities that our examinations identified. We believe that conducting in-depth security evaluations that examine the effectiveness of security processes and technical controls is essential for effectively identifying system vulnerabilities that place agency systems and information at risk.

The Federal Information Security Modernization Act of 2014 (FISMA) provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets and for ensuring the effective oversight of information security risks, including those throughout civilian, national security, and law enforcement agencies. The law requires each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency. FISMA also establishes key government-wide roles for DHS. Specifically, with certain exceptions, DHS is to administer the implementation of agency information security policies and practices for information systems, including monitoring agency implementation of information security policies and providing operational and technical guidance to agencies; operating a central federal information security incident center; and deploying technology, upon request, to assist agencies in continuously diagnosing and mitigating cyber threats and vulnerabilities. 
In addition, the Cybersecurity Act of 2015 requires DHS to deploy, operate, and maintain, for use by any federal agency, a capability to (1) detect cybersecurity risks in network traffic transiting to or from agency information systems and (2) prevent network traffic with such risks from traveling to or from an agency information system or modify the traffic to remove the cybersecurity risk. In implementing federal law for securing agencies’ information and systems, DHS is spearheading several initiatives to assist federal agencies in protecting their computer networks and electronic information. These include NCPS, CDM, and other services. However, our work has highlighted the need for advances within these initiatives.

Operated by DHS’s United States Computer Emergency Readiness Team (US-CERT), NCPS is intended to detect and prevent cyber intrusions into agency networks, analyze network data for trends and anomalous data, and share information with agencies on cyber threats and incidents. Deployed in stages, NCPS, operationally known as EINSTEIN, has provided increasing capabilities to detect and prevent potential cyber-attacks involving the network traffic entering or exiting the networks of participating federal agencies. Table 1 provides an overview of the EINSTEIN deployment stages to date. The overarching objectives of NCPS are to provide functionality that supports intrusion detection, intrusion prevention, analytics, and information sharing. However, in January 2016, we reported that NCPS had partially, but not fully, met these objectives:

Intrusion detection: NCPS provided DHS with a limited ability to detect potentially malicious activity entering and exiting computer networks at federal agencies. Specifically, NCPS compared network traffic to known patterns of malicious data, or “signatures,” but did not detect deviations from predefined baselines of normal network behavior. In addition, NCPS did not monitor several types of network traffic and therefore would not have detected malicious traffic embedded in such traffic. NCPS also did not examine traffic for certain common vulnerabilities and exposures that cyber threat adversaries could have attempted to exploit during intrusion attempts.

Intrusion prevention: The capability of NCPS to prevent intrusions was limited to the types of network traffic it monitored. For example, the intrusion prevention function monitored and blocked e-mail determined to be malicious. However, it did not monitor malicious content within web traffic, although DHS planned to deliver this capability in 2016.

Analytics: NCPS supported a variety of data analytical tools, including a centralized platform for aggregating data and a capability for analyzing the characteristics of malicious code. However, DHS had not developed planned capabilities to facilitate near real-time analysis of various data streams, perform advanced malware behavioral analysis, and conduct forensic analysis in a more collaborative way. DHS planned to develop and implement these enhancements through 2018.

Information sharing: DHS had yet to develop most of the planned functionality for NCPS’s information-sharing capability, and requirements had only recently been approved at the time of our review. Agencies and DHS also did not always agree about whether notifications of potentially malicious activity had been sent or received, and agencies had mixed views about the usefulness of these notifications. 
Further, DHS did not always solicit—and agencies did not always provide—feedback on these notifications. In addition, while DHS had developed metrics for measuring the performance of NCPS, the metrics did not gauge the quality, accuracy, or effectiveness of the system’s intrusion detection and prevention capabilities. As a result, DHS was unable to describe the value provided by NCPS. To enhance the functionality of NCPS, we made six recommendations to DHS, which, if implemented, could help the agency to expand the capability of NCPS to detect cyber intrusions, notify customers of potential incidents, and track the quality, efficiency, and accuracy of supporting actions related to detecting and preventing intrusions, providing analytic services, and sharing cyber-related information. DHS concurred with the recommendations. In February 2017, when we followed up on the status of the recommendations, DHS officials stated that they had implemented 2 of the recommendations and initiated actions to address the other 4. We are in the process of evaluating DHS’s actions for the two implemented recommendations. In January 2016, we also reported that federal agencies had adopted NCPS to varying degrees. Specifically, the 23 civilian agencies covered by the Chief Financial Officers (CFO) Act that were required to implement the intrusion detection capabilities had routed some traffic to NCPS intrusion detection sensors. However, as of January 2016, only 5 of the 23 agencies were receiving intrusion prevention services, due to certain policy and implementation challenges. For example, officials stated that the ability to meet DHS security requirements to use the intrusion prevention capabilities varied from agency to agency. Further, agencies had not taken all the technical steps needed to implement the system, such as ensuring that all network traffic was being routed through NCPS sensors. This occurred in part because DHS had not provided network routing guidance to agencies. As a result, DHS had limited assurance regarding the effectiveness of the system. We recommended that DHS work with federal agencies and the Internet service providers to document secure routing requirements in order to better ensure the complete, safe, and effective routing of information to NCPS sensors. DHS concurred with the recommendation. When we followed up with DHS on the status of the recommendation, DHS officials said that, as of March 2017, nearly all of the agencies covered by the CFO Act were receiving at least one of the intrusion prevention services. Further, the officials stated that DHS has collaborated with the Office of Management and Budget (OMB) to develop new guidance for agencies on perimeter security capabilities as well as alternative routing strategies. We will evaluate the network routing guidance when DHS finalizes and implements it. The CDM program provides federal agencies with tools and services that are intended to provide them with the capability to automate network monitoring, correlate and analyze security-related information, and enhance risk-based decision making at agency and government-wide levels. These tools include sensors that perform automated scans or searches for known cyber vulnerabilities, the results of which can feed into a dashboard that alerts network managers and enables the agency to allocate resources based on the risk. 
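The statement stops at this functional description and does not specify any particular CDM tooling, data format, or scoring method. The short Python sketch below is therefore only illustrative: the Finding record layout (host, vulnerability identifier, severity score, and an agency-assigned asset weight), the risk_score rule, and the alert threshold are all invented for the example, as are the hosts and scores in the sample data. It shows the kind of scan-to-dashboard flow described above, in which sensor output is scored and ranked so that attention goes to the highest-risk findings first.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        """One record from an automated vulnerability scan (hypothetical schema)."""
        host: str          # asset on which the weakness was found
        cve_id: str        # identifier of the known vulnerability
        severity: float    # technical severity, e.g., a CVSS-style score, 0.0-10.0
        asset_weight: int  # agency-assigned criticality of the asset, 1 (low) to 3 (high)

    def risk_score(finding: Finding) -> float:
        """Combine technical severity with asset criticality; this simple
        multiplicative weighting is an arbitrary illustration."""
        return finding.severity * finding.asset_weight

    def dashboard_view(findings, alert_threshold=15.0):
        """Rank findings by descending risk and flag those above the threshold,
        mimicking a dashboard that tells managers where to act first."""
        ranked = sorted(findings, key=risk_score, reverse=True)
        return [(f, risk_score(f), risk_score(f) >= alert_threshold) for f in ranked]

    if __name__ == "__main__":
        # Example scan output; identifiers and scores are illustrative only.
        scan_results = [
            Finding("web-01", "CVE-2017-0144", 8.1, 3),
            Finding("dev-22", "CVE-2016-2107", 5.9, 1),
            Finding("db-07", "CVE-2017-5638", 9.8, 3),
        ]
        for finding, score, alert in dashboard_view(scan_results):
            label = "ALERT" if alert else "queue"
            print(f"{label:5} {finding.host:7} {finding.cve_id:15} risk={score:.1f}")

In an actual CDM deployment, the statement indicates, such results would feed agency-level and government-wide dashboards rather than a standalone script, but the underlying idea is the same: findings about known weaknesses are scored and ranked so that scarce remediation resources can be allocated by risk.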
DHS, in partnership with and through the General Services Administration, established a government-wide acquisition vehicle for acquiring continuous diagnostics and mitigation capabilities and tools. The CDM blanket purchase agreement is available to federal, state, local, and tribal government entities for acquiring these capabilities. There are three phases of CDM implementation:

Phase 1: This phase involves deploying products to automate hardware and software asset management, configuration settings, and common vulnerability management capabilities. According to the Cybersecurity Strategy and Implementation Plan, DHS purchased Phase 1 tools and integration services for all participating agencies in fiscal year 2015.

Phase 2: This phase is intended to address privilege management and infrastructure integrity by allowing agencies to monitor users on their networks and to detect whether users are engaging in unauthorized activity. According to the Cybersecurity Strategy and Implementation Plan, DHS was to provide agencies with additional Phase 2 capabilities throughout fiscal year 2016, with the full suite of CDM Phase 2 capabilities delivered by the end of that fiscal year.

Phase 3: According to DHS, this phase is intended to address boundary protection and event management for managing the security life cycle. It focuses on detecting unusual activity inside agency networks and alerting security personnel. The agency planned to provide 97 percent of federal agencies with the services they need for CDM Phase 3 in fiscal year 2017.

As we reported in May 2016, most of the 18 agencies covered by the CFO Act that had high-impact systems were in the early stages of CDM implementation. All 17 of the civilian agencies that we surveyed indicated they had developed their own strategy for information security continuous monitoring. Additionally, according to survey responses, 14 of the 17 had deployed products to automate hardware and software asset management, configuration settings, and common vulnerability management. Further, more than half of the agencies noted that they had leveraged products/tools provided through the General Services Administration’s acquisition vehicle. However, only 2 of the 17 agencies reported that they had completed installation of agency and bureau/component-level dashboards and monitored attributes of authorized users operating in their agency’s computing environment. Agencies also noted that expediting the implementation of CDM phases could be of benefit to them in further protecting their high-impact systems. The effective implementation of the CDM tools and capabilities can assist agencies in overcoming the challenges, identified above, that they face in securing their information systems and information. As noted earlier, our audits often identify insecure configurations, unpatched or unsupported software, and other vulnerabilities in agency systems. We believe that the tools and capabilities available under the CDM program, when effectively used by agencies, can help them to diagnose and mitigate vulnerabilities in their systems. By continuing to make these tools and capabilities available to federal agencies, DHS can also have additional assurance that agencies are better positioned to protect their information systems and information. DHS provides other services that could help agencies protect their information systems. 
Such services include, but are not limited to, the following:

US-CERT monthly operational bulletins: These bulletins are intended to provide senior federal government information security officials and staff with actionable information to improve their organization’s cybersecurity posture based on incidents observed, reported, or acted on by DHS and US-CERT.

CyberStat reviews: These reviews are in-depth sessions with National Security Staff, OMB, DHS, and an agency to discuss that agency’s cybersecurity posture and opportunities for collaboration. According to OMB, these interviews are face-to-face, evidence-based meetings intended to ensure agencies are accountable for their cybersecurity posture. The sessions are to assist the agencies in developing focused strategies for improving their information security posture in areas where there are challenges.

DHS Red and Blue Team exercises: These exercises are intended to provide services to agencies for testing their systems with regard to potential attacks. A Red Team emulates a potential adversary’s attack or exploitation capabilities against an agency’s cybersecurity posture. The Blue Team defends an agency’s information systems when the Red Team attacks, typically as part of an operational exercise conducted according to rules established and monitored by a neutral group.

In May 2016, we reported that although participation varied among the 18 agencies we surveyed, most of those that chose to participate generally found these services to be useful in aiding the cybersecurity protection of their high-impact systems. Specifically, 15 of 18 agencies participated in US-CERT monthly operational bulletins, and most found the service very or somewhat useful. All 18 agencies participated in the CyberStat reviews, and most found the service very or somewhat useful. Nine of 18 agencies participated in DHS’ Red/Blue Team exercises, and most found the exercises to be very or somewhat useful. Half of the agencies in our survey reported that they wanted an expansion of federal initiatives and services to help protect their high-impact systems. For example, agencies noted that expediting the implementation of CDM phases, sharing threat intelligence information, and sharing attack vectors could be of benefit to them in further protecting their high-impact systems. We believe that by continuing to make these services available to agencies, DHS will be better able to assist agencies in strengthening the security of their information systems. In conclusion, DHS is leading several programs that can benefit federal efforts to secure agency information systems and information. Two such programs, NCPS and CDM, offer the prospect of important advances in the security over federal systems. Enhancing NCPS’s capabilities and greater adoption by agencies will help DHS achieve the full benefit of the system. Effective implementation of CDM functionality by federal agencies could better position them to protect their information technology resources from evolving and pernicious threats. Chairman Ratcliffe, Ranking Member Richmond, and Members of the Subcommittee, this concludes my statement. I would be happy to respond to your questions. If you or your staff have any questions about this testimony, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Christopher Businsky, Michael W. Gilmore, Nancy Glover, Jeff Knott, Kush K. 
Malhotra, Scott Pettis, David Plocher, and Angela D. Watson.
Cyber-based intrusions and attacks on federal systems are evolving and becoming more sophisticated. GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include the protection of cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. DHS plays a key role in strengthening the cybersecurity posture of the federal government. Among other things, DHS has initiatives for (1) detecting and preventing malicious cyber intrusions into agencies' networks and (2) deploying technology to assist agencies to continuously diagnose and mitigate cyber threats and vulnerabilities. This statement provides an overview of GAO's work related to DHS's efforts to improve the cybersecurity posture of the federal government. In preparing this statement, GAO relied on previously published work, as well as information provided by DHS on its actions in response to GAO's previous recommendations. The Department of Homeland Security (DHS) is spearheading multiple efforts to improve the cybersecurity posture of the federal government. Among these, the National Cybersecurity Protection System (NCPS) provides a capability to detect and prevent potentially malicious network traffic from entering agencies' networks. In addition, DHS's continuous diagnostics and mitigation (CDM) program provides tools to agencies to identify and resolve cyber vulnerabilities on an ongoing basis. In January 2016, GAO reported that NCPS was limited in its capabilities to detect or prevent cyber intrusions, analyze network data for trends, and share information with agencies on cyber threats and incidents. For example, it did not monitor or evaluate certain types of network traffic and therefore would not have detected malicious traffic embedded in such traffic. NCPS also did not examine traffic for certain common vulnerabilities and exposures that cyber threat adversaries could have attempted to exploit during intrusion attempts. In addition, at the time of the review, federal agencies had adopted NCPS to varying degrees. GAO noted that expanding NCPS's capabilities, such as those for detecting and preventing malicious traffic and developing network routing guidance, could increase assurance of the system's effectiveness in detecting and preventing computer intrusions and support wider adoption by agencies. By taking these steps, DHS would be better positioned to achieve the full benefits of NCPS. The tools and services delivered through DHS's CDM program are intended to provide agencies with the capability to automate network monitoring, correlate and analyze security-related information, and enhance risk-based decision making at agency and government-wide levels. In May 2016, GAO reported that most of the 17 civilian agencies covered by the Chief Financial Officers Act that also reported having high-impact systems were in the early stages of CDM implementation. For example, 14 of the 17 agencies reported that they had deployed products to automate hardware and software asset inventories, configuration settings, and common vulnerability management, but only 2 had completed installation of agency and bureau/component-level dashboards. Some of the agencies noted that expediting CDM implementation could be of benefit to them in further protecting their high-impact systems. 
GAO concluded that the effective implementation of the CDM program can assist agencies in resolving cybersecurity vulnerabilities that expose their information systems and information to evolving and pernicious threats. By continuing to make available CDM tools and capabilities to agencies, DHS can have additional assurance that agencies are better positioned to protect their information systems and information. In addition, DHS offered other services such as monthly operational bulletins, CyberStat reviews, and cyber exercises to help protect federal systems. In May 2016, GAO reported that although participation varied among the agencies surveyed, most agencies had found that the services were very or somewhat useful. By continuing to make these services available to agencies, DHS is better able to assist agencies in strengthening the security of their information systems. In a January 2016 report, GAO made nine recommendations related to expanding NCPS's capability to detect cyber intrusions; notifying customers of potential incidents; providing analytic services; and sharing cyber-related information, among other things. DHS concurred with the recommendations and is taking actions to implement them.
Medi-Cal was implemented in 1965, the year the Medicaid statute was enacted. Administered by the California DHS, in fiscal year 1996, Medi-Cal provided a wide range of services to approximately 5.2 million low-income individuals at an estimated cost of about $17.7 billion—about 11 percent of national Medicaid expenditures. Medi-Cal managed care, which is composed of several programs, including the 12-county expansion program, is expected to serve over 3 million Medi-Cal beneficiaries once fully implemented. Since 1968, the state has contracted with prepaid health plans (PHP)—California’s equivalent of the federal definition of “health maintenance organizations”—to provide, on a capitated basis, preventive and acute-care Medicaid services, as well as case management. In the 1980s, the state established three additional managed care programs: Primary Care Case Management (PCCM), County Organized Health System (COHS), and Geographic Managed Care (GMC). In early 1993, the state completed conceptual development of its most ambitious program to date: the “two-plan model,” which requires more than 2.2 million Medi-Cal beneficiaries to enroll with one of two health plans participating in each of 12 counties. California’s managed care expansion program—often referred to as the two-plan model—was designed to ensure that each of the two managed care plans operating in each county could achieve an enrollment level sufficient to spread risk and that beneficiaries could obtain care from health plans that also served privately insured individuals. In addition, the model was developed to make the most of limited state resources by restricting the number of plans the state would need to monitor. Selection of the 12 counties to use the two-plan model was made on the basis of two criteria. First, the counties must have had a minimum of 45,000 Medicaid beneficiaries eligible to participate in managed care, and, second, the counties must have had an interest in the program or a significant managed care presence already established in the county. (See table 1 for the number of eligibles and current enrollees by county and plan.) In each county, beneficiaries are required to enroll in either the “local initiative”—a publicly sponsored health plan cooperatively developed by local government, clinics, hospitals, and other providers—or the commercial plan, under contract in a beneficiary’s county of residence. The local initiative concept was developed to support health care safety nets—those providers, such as community health centers and federally qualified health centers, that provide health care services to the indigent. Minimum enrollment levels were set for both the commercial and local initiative plans to ensure their financial viability. A maximum enrollment level was also set for each commercial plan to further protect local initiatives and their subcontracted safety-net providers. The state contracted with the local initiatives on a sole-source basis, while the commercial plan contracts were awarded on a competitive basis. The situation in Los Angeles County, however, is unique. While California contracted with a local initiative and a commercial plan in Los Angeles County, the county has, in essence, 10 plans because the local initiative plan subcontracted with 7 plans, and the commercial plan subcontracted with 2 plans. Beneficiaries can choose a primary care physician from any one of the 10 plans. 
Medi-Cal beneficiaries required to enroll in the two-plan expansion program are informed about managed care and their choices of health care plans through DHS’ Health Care Options (HCO) program. HCO also enrolls and disenrolls beneficiaries in managed care plans. The state contracts with an enrollment broker to conduct HCO program activities. Beneficiaries are informed about the mandatory expansion program and their available choices primarily through an enrollment packet that they receive through the mail. The enrollment packet includes information on managed care, how to join a health plan, available plans and participating providers, phone numbers to call for assistance, and an enrollment form. The packet also includes the first of three standard notices that inform beneficiaries of the 30-day time frame in which they have to choose a plan and the plan to which they will be automatically assigned if they do not return an enrollment form. Beneficiaries also can learn about the two-plan model and their plan options at HCO presentations, which are often held daily at county social service offices. At these face-to-face presentations, HCO counselors provide information on managed care, plans available in the county, how to fill out the enrollment form, beneficiary rights and responsibilities, how to resolve problems with plans, and who to contact for more information. Enrollment materials are available at the presentations. Beneficiaries also can contact HCO’s toll-free call center to obtain enrollment packets and to have enrollment-related questions or concerns addressed. Since 1984, DHS has contracted with an enrollment broker to provide certain education and enrollment services. Initially, enrollment broker responsibilities consisted primarily of conducting HCO presentations in selected counties and helping beneficiaries complete enrollment forms. With the expansion of Medi-Cal’s mandatory program, broker responsibilities increased. In addition to distributing enrollment packets and providing HCO presentations, the broker was tasked with processing beneficiary enrollments and disenrollments in 18 counties with managed care and operating a call center to assist beneficiaries. Full implementation of Medi-Cal’s mandatory expansion program is more than 2 years behind its initial implementation schedule. Originally, local initiatives and commercial plans in each of the 12 affected counties were to become simultaneously operational in March 1995. However, repeated delays in the awarding of contracts and the development of plans made it clear that some counties would be ready for implementation before others. Implementation therefore took place county by county. As of July 1997, plans in 7 of the 12 affected counties had been fully implemented, and full implementation in all counties was scheduled for the end of 1997 at the earliest. Figure 1 shows the 12 counties and their stages of implementation. As of July 1997, over 1.1 million beneficiaries were enrolled in the 12-county expansion program. Overly optimistic time frames and unanticipated difficulties resulted in a number of delays throughout the state’s planning and awarding of managed care contracts. Developing a Request for Applications for commercial plans and a Detailed Design Application for local initiatives took several months longer than expected. Once applications were submitted, the state did not at first meet its 90-day turnaround goal for approving submissions. 
Some plans protested the contract awards, further delaying the contracting process 6 to 8 months. In addition, the state unexpectedly had to obtain—at the request of the developers of the local initiatives—additional state legislative authority, such as exemptions from regulations on public meetings that would enable the local initiatives to hold closed-door sessions to negotiate rates with providers. There also were delays in establishing local initiatives and commercial plans. Some local initiatives took 3 years to develop, instead of the expected 2 years. Unlike commercial plans, local initiatives had to develop health care plans from scratch and, as public entities, they had to interact with community stakeholders. In Fresno County, consensus on whether or not to develop a local initiative could not be reached. As a result, no local initiative was developed, and the state awarded a second commercial contract. The local government in Stanislaus County also had difficulty establishing a local initiative. Consequently, the local initiative contract was awarded to a commercial plan, which will operate in informal partnership with the county. It also took longer than expected for some commercial plans to begin operating under the two-plan model. In addition to obtaining approval of material modifications to their operating licenses, commercial plans had to develop provider networks in counties where the plans were not already operating. Even after implementation of the expansion program began—with Alameda County in January 1996—the state and HCFA took actions that further delayed implementation. For example, DHS delayed full implementation of the program in Fresno, Contra Costa, San Joaquin, and Santa Clara counties to allow the new enrollment broker to fully test its automated systems and capacity to handle all of the enrollment and disenrollment functions. Because of concerns about the education and enrollment process in Santa Clara, San Joaquin, and Los Angeles counties, HCFA temporarily prohibited the automatic assignment of beneficiaries who did not choose a plan and required DHS instead to maintain them in the fee-for-service system. As a result, the pace of enrollment was slowed in these counties, even though plans were allowed to receive voluntary enrollments. As of July 1997, the expansion program had been fully implemented in seven counties—Alameda, Kern, Fresno, San Francisco, Santa Clara, San Joaquin, and Contra Costa—with beneficiaries required to enroll in either the local initiative or the commercial plan. In four of the remaining counties—San Bernardino, Riverside, Stanislaus, and Los Angeles—the program was partially implemented, with only one plan operating in San Bernardino, Riverside, and Stanislaus counties. Although Los Angeles County had both plans operating, the program was in effect only partially implemented because HCFA had delayed automatic assignment and the state had prohibited additional enrollment in the commercial plan until some remaining contract issues were resolved. In Tulare County, neither plan was operating. The December 1997 target date for full implementation may not be met since some of the plans in counties where the program has yet to be fully implemented have had difficulty developing and complying with regulations. 
For example, although both plans in Tulare County were tentatively scheduled to become operational by the end of the year, the plans were having difficulty organizing provider networks; implementation target dates have already been moved from spring 1997 to the end of the year. In San Bernardino and Riverside counties, the local initiative began operating in September 1996, but the commercial plan’s operation was delayed because it had not complied with the federal Medicaid requirement that effectively prohibited Medicaid beneficiaries from making up more than 75 percent of a plan’s enrollment. This requirement was repealed in August 1997; however, because of concerns the state has with other aspects of the plan’s operations, it is still not clear when this plan will begin operating under the two-plan model. Despite California’s efforts to encourage beneficiaries to choose a health plan, many beneficiaries have been assigned to a plan by the state. Long-standing problems with California’s HCO program, which provides beneficiaries with information about their managed care options and enrolls them in a plan, may have contributed to this and to widespread confusion among beneficiaries. While many agree that the HCO program is running more smoothly now than in the past, deficiencies persist—some serious enough to have prompted HCFA to delay full implementation in several counties earlier this year. To encourage Medi-Cal beneficiaries to choose their own managed care plan, California’s HCO program provides them information on managed care and their available health plan options. Plans, advocates, and researchers agree that beneficiaries who are well informed about managed care—and how it differs from fee-for-service—are more likely to choose a health plan, and those who choose a health plan are more likely to stay with that plan. Experts also believe that well-informed beneficiaries are more likely to use health services appropriately, such as relying more on a primary care physician and less on inappropriate use of emergency room services. Despite its efforts, the state estimated in January 1997 that the majority of enrollments had been the result of automatic assignments by the state. The automatic assignment rate for Alameda County at the beginning of implementation was estimated as high as 80 percent. Although automatic assignment rates have declined—the automatic assignment rate for two-plan counties averaged 45 percent from March to June 1997—the rates ranged widely from county to county. For example, the automatic assignment rate in Contra Costa County in April 1997 was 72 percent, while in Santa Clara County it was 32 percent. Unlike other states, California has not established a numeric goal for automatic assignments. Nevertheless, California’s automatic assignment rates have varied enough across counties to indicate potential problems with HCO’s program. HCFA, advocates, and managed care plans have expressed concerns about the adequacy of the state’s efforts to inform beneficiaries about their Medi-Cal managed care options. According to these groups, information in the enrollment packet was complex, lengthy, and written at too high a grade level. In some cases, the information was incorrect. For example, enrollment packets sent to some beneficiaries in San Bernardino and Riverside counties stated that automatic assignments would be made to Molina Medical Centers—a plan not contracted to serve beneficiaries in the expanded program in these counties at that time. 
Information in the enrollment packets could also be confusing. In anticipation of the Los Angeles County local initiative’s beginning operations in April 1997, thousands of beneficiaries in Los Angeles County received packets with cover letters dated January 8, 1997, that instructed them to respond by January 18, 1997—which did not allow beneficiaries the required 30 days to respond. DHS remailed the letters and provided additional time for beneficiaries to respond. Moreover, only recently—more than a year after full implementation of the mandatory program in the first county—have many of the enrollment materials begun to be translated into all of the state’s “threshold” languages. Although DHS has established a work group to address problems associated with the enrollment packet, the planned changes are not expected to be completed until November 1997, at which time many beneficiaries will have already been enrolled. Initially, there also were a number of problems with the toll-free call center, which was set up to provide beneficiaries access to additional information about how health plans operate and how to use them. The call center, however, often was a source of frustration and confusion because callers could not get through, messages went unanswered, voicemail boxes were full, or counselors provided incorrect information. However, a review of HCO’s recently instituted “problem log” revealed that the problems have largely disappeared under the current enrollment broker, Maximus, which expanded the call-center operation. There also have been problems with the HCO presentations. Through county-by-county preimplementation reviews, HCFA often found that the presentations were confusing, were not conducted in the appropriate language, were inaccurate or not performed as scripted or scheduled, or were not sufficiently informative. In addition, beneficiary attendance has been low. State officials recognize that the limited number of presentation sites may make it difficult for beneficiaries to attend. For example, in June 1997, Los Angeles County—which comprises 88 cities and 136 unincorporated areas and covers over 4,000 square miles—had 35 presentation sites. Officials from one managed care plan we contacted believed that poor attendance at the HCO presentations was due in part to limitations in the state’s outreach to beneficiaries. The officials believed that by working closely with community-based organizations that beneficiaries know and trust, such as churches and legal aid services, more beneficiaries could be reached; in addition, these organizations could provide outreach services and thereby supplement HCO presentations. HCFA, advocates, and managed care plans have long called for increased outreach efforts—not only to beneficiaries, who can be difficult to reach, but to providers and others in the community as well. Some plans and advocates have, at their own expense, conducted outreach activities to fill the perceived gap in the state’s efforts. Yet even with high automatic assignment rates and poor attendance at the HCO presentations, it was not until October 1996 that DHS began development of an outreach campaign that was implemented in selected counties in March 1997. The campaign consisted of bus billboards and posters sent to HCO presentation sites, managed care plans, and community-based organizations. Brochures, a video, and radio announcements were also recently added. 
DHS has recently begun to explore additional ways to improve outreach and involve community-based organizations in HCO activities, such as participating in DHS-sponsored work groups. DHS asked community-based organizations to identify additional HCO presentation sites in Los Angeles County and plans to require Maximus to contract with a number of community-based organizations to provide HCO presentations to their clients. Recognizing that provider education could also be improved, DHS has begun to better disseminate information to participating providers on managed care programs, such as through DHS provider bulletins that give HCO program updates. In addition, DHS created the HCO Education and Outreach Unit in June 1997 to develop and implement strategies to ensure that beneficiaries, providers, legislators, advocates, and other interested parties are well informed and educated about the expansion program.

Some of the problems with enrolling beneficiaries persisted throughout the state's first year of implementing its new mandatory program and were exacerbated by the timing of the changeover between enrollment brokers. While many agree that enrollment processing is functioning much more smoothly now, there was enough lingering concern to prompt HCFA to slow the pace of enrollment in several counties earlier this year.

During the first year of implementation, the volume of enrollments may have overwhelmed Benova, the former enrollment broker. Enrollment materials were not always sent on time, and, in one county, it could not be determined whether they were sent at all. Enrollment data were not accurately or completely entered into the enrollment information system, and some beneficiaries were enrolled in a plan other than the one they chose or were assigned to a plan that was not an option for them. State assignments of beneficiaries who did not choose a plan were not always timely, which meant that plans lost capitation revenue. The situation worsened when Benova lost its bid for the enrollment broker contract and began losing significant numbers of staff.

HCFA and managed care plans agree that Medi-Cal's enrollment process has begun to function more smoothly. Maximus has more resources to process and track enrollments, and the state has begun to implement long-needed fixes, such as improved monitoring of the enrollment broker. However, problems have continued to occur. For example, in April 1997, thousands of beneficiaries in Riverside County were sent letters with dates that implied the beneficiaries had already been assigned to a plan. The state remailed the letters with corrected dates.

Because of continuing concerns, HCFA slowed enrollment in several counties earlier this year. According to HCFA, it would not approve the February 1997 full implementation in Santa Clara and San Joaquin counties because it had found, during its preimplementation reviews, deficiencies in the education process that "grossly violated" the HCO process and the conditions of California's waiver. For example, enrollment packets sent to beneficiaries were incomplete, and the state could not verify whether a subsequent mailing was sent. At the end of March 1997, HCFA decided to slow enrollment in Los Angeles County, prior to full implementation. HCFA took this action, in part, because the enrollment broker had not yet demonstrated an ability to send timely or accurate mailings to beneficiaries or to properly train HCO counselors to make accurate and informative presentations to beneficiaries.
Adequately educating beneficiaries in Los Angeles about their plan options is especially difficult, since there are multiple plans from which beneficiaries can choose. Furthermore, with over 1 million beneficiaries subject to mandatory enrollment and another 400,000 eligible to enroll voluntarily, the consequences of enrollment errors in Los Angeles County could be significant.

Based on anecdotal evidence from HCFA, advocates, and managed care plans, the problems with the education and enrollment processes throughout the implementation of the two-plan model have affected beneficiaries and plans alike. Officials from one plan said that beneficiaries were not only confused but concerned because they did not understand what was happening to their health care coverage—some beneficiaries thought they were losing Medi-Cal benefits altogether. According to some plans, enrollment problems have resulted in significant financial losses due to lost capitation revenue and unanticipated operating and administrative costs. For example, when enrollment was delayed, some plans not only lost revenue but may have unnecessarily expended funds for staffing, facilities, and advertising. Officials at one local initiative claimed gross revenue losses of almost $2 million due to a 25-day delay in the mailing of enrollment materials. The lost capitation revenue required the plan to draw upon an existing line of credit—with interest—from the county.

Because of long-standing problems and concerns over the implementation of the two-plan model, some groups wanted implementation either stopped or further delayed. Yet some plans urged the state and HCFA not to delay implementation and enrollment further because of the financial repercussions. HCFA officials agreed that long delays in implementation could present financial hardship for some plans.

Over the past several years, California has been criticized for a number of weaknesses in the management of its Medi-Cal managed care program. In a 1993 report, HCFA questioned whether DHS, with its existing staffing and processes, could effectively monitor the state's contracts with Medi-Cal managed care plans. Two years later, we raised similar concerns. In 1994, HCFA also cited a number of weaknesses in the implementation of Sacramento's GMC program, including the need for early and ongoing local input into the planning process and deficiencies in the education and enrollment process. More recently, Mathematica Policy Research, Inc., in its 1996 report on Medi-Cal managed care, cited limited time and resources as the cause of initial enrollment problems experienced by beneficiaries in Sacramento's GMC program.

These and other management weaknesses—such as insufficient contract performance requirements for enrollment brokers, inadequate monitoring of the HCO program, and poor communication with and involvement of outside groups—contributed to the problems the state encountered in implementing its two-plan model. Benova and Maximus also cited circumstances that made it difficult for them to perform as efficiently as possible. The state has taken a number of long-needed actions aimed at improving various aspects of the HCO program; however, the effect of some of these actions remains to be seen. Federal guidance on designing and implementing a mandatory managed care program, especially when education and enrollment functions are contracted to an enrollment broker, might have helped the state improve its program implementation in its earlier stages.
Although HCFA is currently developing such guidance, its oversight of California's program has consisted primarily of approving the waiver application and conducting preimplementation reviews of each county prior to full implementation.

DHS' contract with Benova, the former enrollment broker, contained no specific performance standards. Performance standards should make clear the level of service expected of the broker and enable a state to gauge the sufficiency of the broker's operations. When tied to payment, performance standards can provide incentives for the enrollment broker to provide the services required and penalties for nonperformance. DHS' contract with Maximus, the current enrollment broker, contained several performance standards; however, few were tied to payment. For example, although call-center staff were required to answer phones within three rings and process enrollment forms within 2 days, there was no penalty for noncompliance. More important, none of the standards tied to payment related to potential quality indicators, such as the rate of automatic assignment, beneficiary satisfaction with the education and enrollment process, or the rate of beneficiary disenrollment. California is planning to amend Maximus' contract to include additional performance standards and to increase the number of standards tied to payment, which should help strengthen the contract and make it more enforceable.

According to HCFA, many of the problems with the state's process for educating and enrolling beneficiaries were the result of inadequate monitoring of the HCO program. Until recently, DHS did not conduct on-site monitoring of enrollment broker activities, nor did it have staff with the expertise to monitor the broker's automated systems. In addition, HCO's management information and reports were not adequate to effectively monitor the program. According to DHS, regular, on-site monitoring of Benova was difficult because Benova's operations were about 80 miles from DHS headquarters in Sacramento. Without on-site monitoring, however, DHS could not guarantee that critical broker responsibilities, such as the mailing of enrollment packets, were carried out. For example, it was not until enrollment broker operations were transitioning to Maximus that DHS found that thousands of beneficiary enrollment packets had not been sent from a Benova mail facility. To help ensure that this does not recur, Maximus is required, as a condition of its contract, to locate its operations in or near Sacramento. DHS also has dedicated five full-time Payment Systems Division staff, four of whom have automated systems expertise, to conduct on-site monitoring at Maximus' various locations. To help ensure that Maximus complies with the terms of its contract, DHS staff observe the broker's operations and test the automated systems. Staff also observe mail facility operations to ensure the timeliness, completeness, and accuracy of the enrollment materials mailed to beneficiaries.

Until recently, HCO program staff did not have the expertise to evaluate automated systems operations and ensure that their outputs were valid. Without such expertise, the state could not determine whether beneficiaries had been assigned to plans as intended. Moving day-to-day HCO program operations from the Medi-Cal Managed Care Division to the Payment Systems Division provided the program with the expertise required to make such determinations.
In addition, in March 1997, DHS contracted with a systems consultant, Logicon, to test Maximus' automated systems and validate its output by July 1997. According to a DHS official, the testing and validation process will allow DHS to better understand the enrollment broker's system and thus have greater confidence in its output. Validating system output will likely enhance the reliability of the information that the system generates, such as enrollment and disenrollment data. As of the end of August 1997, however, Logicon had yet to complete its contract. As a result, according to HCFA, there remains no external verification that the enrollment broker can effectively handle the increased volumes of enrollment that will result when the program is fully implemented in the remaining counties, such as Los Angeles.

Management information and reporting also were not sufficient to effectively monitor the HCO program. According to one DHS official, HCO reports were not managerially useful. For example, while data were provided on the number of beneficiaries who chose a plan, the number who were automatically assigned to a plan, and the number who disenrolled from a plan, the reports did not include trend analyses. And while an automatic assignment rate was calculated, a disenrollment rate—which can serve as an important indicator of beneficiary satisfaction with plans—was not. In addition, certain key terms, such as "disenrollment," have yet to be defined, and the data have yet to be verified, which provides little confidence in their meaning or accuracy. As part of its contract, Logicon is required to ensure that numbers across reports are consistent and reconcilable and to identify the reports that the state needs to effectively monitor enrollment broker activities. Finally, DHS initially had no system to determine whether problems reported to it were recorded or addressed. Although DHS began keeping an HCO "problem log" in January 1997 to capture and track the status of problems and complaints reported to DHS, the enrollment broker, or the Medi-Cal managed care ombudsman, DHS had not summarized or systematically analyzed the information collected at the time of our review.

HCFA, managed care plans, and advocates have long expressed concern over a lack of effective state internal communication and timely communication with and involvement of outside groups in planning and decision-making. We found, for example, that until recently, HCO policy decisions often were not officially documented or disseminated to the appropriate state staff. DHS has taken some steps to improve its internal communications, such as requiring HCO's policy unit to provide written documentation of all HCO policy decisions to the chief of the Headquarters Management Branch, Payment Systems Division, for review and systematic dissemination. DHS has also increased its communication efforts with outside groups. To provide a forum to discuss and address issues and concerns, the state has convened or participates in several work groups. For example, the Policy Workgroup was formed in January 1997 to improve the education and enrollment process, such as by redesigning and translating the enrollment materials. The group includes representatives from DHS, HCFA, health plans, advocacy groups, and Maximus. In June 1997, the state also convened a Stakeholder Advisory Group to provide policy advice on and oversight of program implementation in Los Angeles County.
The group is composed of advocates, provider representatives, DHS, Maximus, and the Los Angeles commercial plan and local initiative. It plans to meet monthly.

Benova and Maximus, the two enrollment brokers DHS has contracted with, also cited a number of factors that they believed adversely affected their performance. According to these brokers, DHS made frequent policy and program changes and often provided little lead time to appropriately implement these changes. According to Maximus, during the first 7 weeks of its contract period—which began in January 1997—DHS made about 300 policy changes, sometimes giving Maximus little time to implement them. To comply with DHS' time frames, Maximus believed it necessary to sometimes bypass quality assurance measures that it had established to ensure that such system changes did not have unintended consequences. In one instance, changes made to the mailing dates in one county caused Maximus to inadvertently halt mailings to another county. Benova believed that its performance as Medi-Cal's enrollment broker suffered because of DHS' often-changing directions and its lack of responsiveness. For example, DHS denied Benova's request to transfer calls during peak times to call centers in other states—an arrangement Benova believed would have improved service. According to Benova, DHS also denied its request for cost reimbursement for additional equipment needed to handle increasing volumes of enrollment. Benova and Maximus officials also stated that, relative to their experience with other states, California limited their contact with plans, advocacy groups, and community-based organizations. DHS limited such contact because it was concerned about remaining informed about program operations and about not burdening limited contractor staff with additional responsibilities. DHS recently has relaxed its policy and begun to allow the enrollment broker to participate in community meetings.

HCFA's oversight of California's education and enrollment functions has consisted primarily of reviewing and approving the state's waiver application to implement its mandatory managed care program and conducting preimplementation reviews in each county. As of August 1997, few federal guidelines existed for states to use in educating Medicaid beneficiaries and enrolling them in mandatory managed care programs—two relatively new functions for states. In addition, no guidelines existed for contracting out these functions. With such guidance, some of the problems that California experienced in expanding its Medi-Cal managed care program might have been avoided. HCFA is in the process of developing guidelines to assist states with designing and implementing an effective education and enrollment program, including contracting with enrollment brokers—an increasing trend. The earliest issuance of these guidelines was projected for October 1997.

An expressed objective of the two-plan model was to protect existing health care safety nets in the new competitive environment of managed care. Safety-net providers—such as federally qualified health centers and community and rural health centers—provide health care services to the medically indigent. However, while the two-plan model provides some assurances that plans will assign beneficiaries to safety-net providers, it does not guarantee that these providers will receive a specified level of enrollment, nor can it guarantee that they will maintain their enrollments.
Some providers have reported that they are having difficulty operating under the two-plan model, especially in maintaining their former patient base.

The two-plan model has several provisions and incentives aimed at protecting safety-net providers. The model's local initiative arrangement enables counties to develop a plan that reflects local needs and priorities and includes county-operated health facilities. Once developed, the local initiative must contract with any safety-net provider that complies with the local initiative's specific requirements and standards and accepts the rates offered. Although commercial plans are not required to contract with safety-net providers, they were awarded extra points during the evaluation process for the extent to which their networks included safety-net providers. The model also requires that automatic assignments be made to the local initiative until preestablished minimum enrollment levels are reached. In addition, the local initiatives and commercial plans are required to ensure—to the maximum extent possible—that existing patient-physician relationships are maintained. Furthermore, the local initiative must develop a process that "equitably assigns" to safety-net providers those beneficiaries who do not choose a primary care provider; similarly, the commercial plan must develop a process that "proportionately" assigns such beneficiaries. According to DHS, it did not require plans to assign a specific number of beneficiaries to safety-net providers because federal law requires states to ensure that beneficiaries have a choice of providers.

Despite these protections, an initial assessment of the two-plan model's impact on safety-net providers suggests that some are experiencing difficulties, especially in maintaining their levels of enrollment. According to the state and HCFA, two factors have affected safety-net providers' enrollment bases. First, beneficiaries in managed care are required to designate only one provider as their primary care physician, although they may have visited more than one provider under fee-for-service care. Consequently, some safety-net providers say that they have seen fewer beneficiaries under the two-plan model. Second, many beneficiaries who choose a provider are not choosing safety-net providers, and many who are assigned to these providers disenroll. HCFA has reported that in Los Angeles County, 12,600 beneficiaries—70 percent of those who had been assigned to a safety-net provider—chose to disenroll within 5 days.

The two-plan model does not prescribe, other than in general terms, how plans are to assign beneficiaries to individual providers. However, a number of plans favor safety-net providers in their assignment methodologies. One plan had designed a four-tier assignment methodology that gives priority to contracted safety-net providers and other providers that have at least a 50-percent Medi-Cal enrollment base. Another plan seeks to maintain a 60/40 assignment ratio, with approximately 60 percent of beneficiaries assigned to private providers and the remaining 40 percent assigned to county and community clinics.

The state has begun to assess measures that could be taken to assist safety-net providers and has taken action in one county. To reduce the number of beneficiaries assigned by plans to providers other than their established safety-net providers, the state planned to begin providing plans with information on beneficiaries' last provider of record in August 1997.
With this information, plans could assign beneficiaries to their last provider of record if that provider was part of the plan's network. Safety-net providers in Fresno County were particularly concerned about their viability because the county's two-plan model did not include a local initiative. An agreement was reached among the state, providers, and the two commercial plans that addressed some of the short- and long-term concerns of these safety-net providers. For example, the two plans agreed to assign to a safety-net provider all state-assigned beneficiaries who had not designated a primary care physician. Over the longer term, a special team composed of state, plan, and provider representatives will be established to oversee the implementation of managed care in Fresno County.

California's expansion of its Medi-Cal managed care program is currently the largest effort of its kind in the nation in terms of the number of beneficiaries involved. Although California invested nearly 5 years in both conceptual and implementation planning of its two-plan mandatory program, implementation has not been smooth. Many of the circumstances that contributed to implementation problems were within the state's control, while others were not. For example, the timing of the transition from one enrollment broker to another undoubtedly contributed to the implementation delays and difficulties. Had the transition not occurred in the midst of the two-plan implementation in several counties, some problems might have been less severe.

Many of the problems that occurred in implementing the new mandatory program were foreshadowed by the state's earlier efforts to implement managed care. These earlier problems—documented in prior evaluations by other organizations—should have convinced the state that many of its policies and procedures needed retooling. The state is now taking certain actions to improve the program, but many come too late to benefit those beneficiaries already enrolled in the seven counties where implementation has been completed.

HCFA's preimplementation reviews enabled it to identify problem areas in California's implementation of its two-plan model; the reviews did not, however, always result in immediate improvements. At the same time that DHS was attempting to address these problems, managed care plans were exerting pressure to push ahead with program implementation because their large investments—and financial viability—depended on receiving enrollments and associated revenues according to set time frames. As a result, while HCFA identified the need for significant improvements, it did not halt program implementation to effect such changes. HCFA also did not have sufficient written guidance in place to assist the state in developing and implementing its program.

Despite these delays and difficulties, California's experience can be instructive for other states as they develop, expand, or adapt their mandatory Medicaid managed care programs. Specifically, California's experience points to several potential lessons learned:

- Incremental implementation allows for adjustments and improvement. Simultaneous or quick-succession implementation in multiple areas does not allow sufficient time for program modifications when unforeseen problems arise.

- Sufficient staff—including individuals who have expertise in managed care program design and implementation—are needed to conduct program activities. Of particular importance are systems analysts and contract specialists.
- Stakeholder and community input and involvement, sought early and often, can contribute significantly to effective education and enrollment processes and problem resolution.

- Effective monitoring systems, including adequate management information and reporting, can ensure accountability for program operations—especially if there is heavy reliance on a contractor for integral parts of the program.

- Including performance standards for key areas of operation in enrollment broker contracts and tying these standards directly to broker payment might help to ensure maximum contractor performance.

To help states design and implement Medicaid managed care programs that ensure that beneficiaries who enroll—especially those who are mandated to do so—are able to make an informed choice in selecting a plan, we recommend that the Secretary of Health and Human Services direct HCFA to promptly finalize guidelines for developing and operating an education and enrollment program. To help ensure accountability, these guidelines should include considerations regarding appropriate performance standards and measures and monitoring mechanisms, especially when a state contracts out these functions to an enrollment broker.

We provided a draft of this report to the Administrator, HCFA; the Director, California DHS; and officials of Benova and Maximus, the former and current enrollment brokers. Each entity provided technical or clarifying comments, which we incorporated as appropriate. HCFA concurred with our recommendation and stated that it is working to finalize its education and enrollment guidelines. For example, it sponsored a joint industry and Medicaid managed care meeting in September to discuss the draft guidelines. HCFA did not, however, indicate a target date for finalizing the guidelines. HCFA's Administrator stated that, because the guidelines are not requirements, it is important to take the time necessary to reach consensus on them and thereby obtain buy-in and endorsement from those affected, giving the guidelines credibility and acceptability.

DHS agreed with our conclusions and recommendation, saying that the state has already adopted or is working toward implementing the lessons learned that were outlined in the conclusions. It acknowledged that there have been problems associated with California's transition to managed care for its Medi-Cal population and emphasized its efforts to address these problems in partnership with HCFA, plan partners, medical providers, and advocacy groups; however, the state was concerned that the report did not sufficiently acknowledge its efforts in this regard. DHS provided us additional information on its efforts to be responsive to identified problems, which we incorporated where appropriate. In terms of the evidence and findings presented in the report, DHS questioned the objectivity of information obtained from some sources, such as some contracted health plans and the former enrollment broker, with whom the state is involved in formal contract disputes or litigation. Aware of these ongoing disputes and litigation during the course of our work, we were sensitive to the use of information obtained from all affected parties. In this regard, we either corroborated the testimonial evidence we obtained with independent sources or clearly attributed the information to its source in the report. Both Benova and Maximus generally concurred with our findings.
Benova provided additional information on several findings in order to more fully explain its relationship with the state and the resulting impact on Benova's performance. For example, Benova contends that its contract was not adequately funded to fulfill the enrollment contract functions. We chose, however, not to include these additional details because of ongoing litigation between the two parties. Maximus generally agreed with our assessment of the program and implementation issues. Despite the difficulties cited in the report, Maximus believed that it had gained sound administrative control of the basic enrollment processes, such as the call center operations, the enrollment process, and the computer system operations. While Maximus endorsed holding all program participants accountable, it emphasized that establishing standards for functions that are not entirely within its control can be problematic—especially when these standards are tied to payment. Maximus added that the California experience has served as an important learning opportunity in its role as enrollment broker in other states.

As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Secretary of Health and Human Services; the Administrator, HCFA; the Director, California DHS; and interested congressional committees. Copies of this report will also be made available to others upon request. If you or your staff have any questions about the information in this report, please call me or Kathryn G. Allen, Acting Associate Director, at (202) 512-7114. Other contributors were Aleta Hancock, Carla Brown, and Karen Sloan.

To determine the status of California's expansion of its Medi-Cal managed care program and identify potential reasons for delays in implementing the two-plan model, we interviewed officials from the California Department of Health Services (DHS) and reviewed their implementation schedules—the initial schedule and subsequent updates—for the two-plan model. We also interviewed Medicaid officials in HCFA's region IX office in San Francisco and examined their preimplementation reviews, which are conducted in each affected county to determine the state's readiness to implement the two-plan model in that county.

To identify the state's efforts to educate Medi-Cal beneficiaries about managed care and enroll them into one of the state-contracted plans, and to evaluate its management of the education and enrollment process, we interviewed DHS and HCFA region IX officials and obtained and reviewed relevant state law, regulations, policies, and procedures; the state's strategic plan for expanding its Medi-Cal managed care program; the state's two-plan model waiver application submitted to HCFA; Health Care Options (HCO) program documents, including enrollment materials; minutes from DHS' Policy and Transition Workgroup meetings; HCO's problem log; enrollment broker contracts and the 1995 Request for Proposal; HCO management reports, including monthly enrollment summaries; and HCFA's preimplementation reviews. We also interviewed officials from two commercial and four local-initiative health plans that served 11 of the 12 two-plan counties; Benova, Medi-Cal's previous enrollment broker, and Maximus, its current enrollment broker; and advocacy and consumer groups.
We reviewed documents obtained from these officials, including minutes from the California Alliance of Local Health Plan Enrollment Workgroup meetings and written testimony of some stakeholders on the implementation of the two-plan model provided in February 1997 before the California state legislature. We also reviewed reports by Mathematica Policy Research, Inc., and the Medi-Cal Community Assistance Project that discussed issues and concerns about DHS' expanded program. To evaluate the state and federal oversight of California's enrollment broker, we obtained and analyzed California's past and current enrollment brokers' contracts and amendments. To obtain detailed information on specific DHS activities to monitor enrollment broker performance, we interviewed DHS and HCFA region IX officials. We also visited Maximus' administrative office, which houses its systems operations and call center, and one of the subcontracted mail facilities to observe broker operations. At these facilities, we met with DHS and Maximus officials to discuss oversight activities and broker operations. We also reviewed program information generated by Maximus. To identify federal monitoring of contracted enrollment broker functions and guidance for states to use in monitoring contracted enrollment broker activities, we met with officials in HCFA's Baltimore Office of Managed Care and region IX Medicaid officials. In addition to reviewing HCFA's guidelines for state compliance with federal regulations on Medicaid managed care marketing, we obtained and reviewed HCFA's "Managed Care Pre-Implementation Review Guide" and its draft guidelines to states for enrolling beneficiaries in managed care programs. To make an initial assessment of the two-plan model's impact on safety-net providers, we interviewed officials from DHS, HCFA, and two commercial and two local initiative plans. We also reviewed the state's strategic plan, which discusses how safety-net providers would be included under the two-plan model; state requirements for assigning beneficiaries to plans; and selected plan assignment methodologies. In addition, we reviewed reports by the Medi-Cal Community Assistance Project and Mathematica, which examined the experiences of some safety-net providers. We performed our work between January and August 1997 in accordance with generally accepted government auditing standards.
Pursuant to a congressional request, GAO reviewed California's Medicaid Program, Medi-Cal, focusing on: (1) the implementation status of California's managed care expansion, including identifying the primary causes of delays; (2) the degree to which state efforts to educate beneficiaries about their managed care options and enroll them in managed care have encouraged beneficiaries to choose a plan; (3) the management of the state's education and enrollment process for the new program, including state and federal oversight of enrollment brokers that the state contracted with to carry out these functions; and (4) the impact of the managed care expansion on current safety-net providers, such as community health centers, that serve low-income beneficiaries. GAO noted that: (1) despite California's extensive planning and managed care experience, implementation of its 12-county expansion program is more than 2 years behind its initial schedule and is still incomplete; (2) California originally had planned to implement the program simultaneously in all affected counties by March 1995; (3) however, as a number of unforeseen difficulties arose, the state began to stagger implementation as it became clear that some counties would be ready before others; (4) still, as of July 1997, the program had been fully implemented in only seven counties; (5) the most recent schedule estimated complete implementation in all 12 counties by December 1997, at the earliest; (6) the state's efforts to encourage beneficiaries to choose a health plan have been undermined by problems in the process for educating and enrolling beneficiaries; (7) according to the Health Care Financing Administration (HCFA), beneficiary and provider advocates, and managed care plans, a number of problems contributed to confusion for many beneficiaries, including incorrect or unclear information about the mandatory Medi-Cal program and participating plans as well as erroneous assignments of beneficiaries to plans; (8) available data show that, on average, almost half of affected beneficiaries have not actively chosen their own plan but instead have been automatically assigned to one by the state; (9) other problems were evident in California's Department of Health Services' (DHS) management of the program, including insufficient performance standards for enrollment brokers and poor internal communication and weak ties with advocacy and community-based organizations; (10) California has taken a number of actions to improve the implementation and administration of its mandatory expansion program; (11) DHS also has taken steps to work more closely with community-based organizations to improve outreach efforts; (12) however, these actions were taken too late to benefit the many beneficiaries who have already enrolled in the seven counties where full program implementation has been completed; (13) HCFA is in the process of developing federal guidelines on designing and implementing an education and enrollment program; and (14) despite the fact that the state's 12-county expansion program was designed to help ensure that federally qualified health centers, community and rural health centers, and other safety-net providers participate in the provider networks, some safety-net providers have reported difficulty maintaining their patient base.
Land mines in the U.S. inventory are of two distinct types. The first consists of conventional land mines that are hand-emplaced and are termed nonself-destruct, or sometimes "dumb," because they remain active for years unless disarmed or detonated. They can therefore cause unintended post-conflict and civilian casualties. The second type consists of "scatterable" land mines—generally, but not always, surface-laid—that are dropped by aircraft, fired by artillery, or dispersed by another dispenser system. They are conversely called "smart" because they remain active for preset periods of time, after which they are designed to self-destruct or deactivate, rendering themselves nonhazardous. According to DOD, smart land mines have a 99.99-percent self-destruct reliability rate. Most self-destruct land mine systems are set at one of three self-destruct periods: 4 hours, 48 hours, or 15 days. In addition, should the self-destruct mechanism fail, self-destruct land mines are designed to self-deactivate, meaning that they are to be rendered inoperable by means of the "irreversible exhaustion of their batteries" within 120 days after employment. This feature, according to DOD, operates with a reliability rate of 99.999(+) percent.

At the time of the Gulf War, U.S. forces were armed with both nonself-destruct and self-destruct land mines, and U.S. policy allowed them to use both types. Today, however, U.S. presidential policy limits U.S. forces' use of nonself-destruct M-14 and M-16 antipersonnel land mines (see fig. 6 in app. II) to Korea.

Antitank mines, as the name implies, are designed to immobilize or destroy tracked and wheeled vehicles and the vehicles' crews and passengers. The fuzes that activate antitank mines are of various types. For example, they can be activated by pressure, which requires contact with the wheels or tracks of a vehicle, or by acoustics, magnetic influence, radio frequencies, infrared sensors, command, disturbance, or vibration, which do not require contact. Antitank mines have three types of warheads. Blast mines derive their effectiveness from the force generated by high-explosive detonation. Shaped-charge mines use a directed-energy warhead. Explosively formed penetrating mines have an explosive charge with a metal plate in front, which forms into an inverted disk, a slug, or a long rod.

Antipersonnel land mines are designed to kill or wound soldiers. Their fuzes can be activated, for example, by pressure, trip wires, disturbance, antihandling mechanisms, or command detonation. Antipersonnel land mine warhead types include blast, directed fragmentation, and bounding fragmentation. The blast mine is designed to injure the lower extremities of the individual who steps on it. The directed fragmentation mine propels fragments in the general direction it is pointed, and the bounding fragmentation mine throws a canister into the air, which bursts and scatters shrapnel throughout the immediate area to kill or wound the enemy.

Antitank and antipersonnel land mines are often employed together, as "mixed" systems. In a mixed system, the antipersonnel land mines are intermingled with antitank land mines to discourage enemy personnel from attempting to disarm them. Antitank land mines may also be equipped with explosive antidisturbance devices designed to protect them from being moved by enemy personnel, thus increasing the difficulty and challenge of breaching a minefield.
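Taken together, the two reliability rates cited above imply a very small, though not zero, chance that an individual self-destruct mine remains hazardous. The following back-of-the-envelope calculation is illustrative only—it is not a figure DOD reports—and assumes the two failure modes are independent, an assumption the cited rates do not themselves establish:

$$
P(\text{mine remains hazardous}) = (1 - 0.9999) \times (1 - 0.99999) = 10^{-4} \times 10^{-5} = 10^{-9}
$$

That is roughly one mine in a billion under the independence assumption; if the two failure modes are correlated (for example, a battery defect that disables both mechanisms), the actual rate would be higher.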
According to DOD, all the types of land mines in its arsenal were available and included in U.S. war plans for use if needed in the Gulf War. DOD reported that during the war, U.S. forces used no nonself-destruct land mines. The services reported using a total of about 118,000 artillery-delivered or aircraft-delivered surface-laid scatterable self-destruct land mines. DOD provided few records showing why land mines were used and no evidence of specific military effects on the enemy—such as enemy killed or equipment destroyed—from the U.S. use of land mines during the Gulf War. We therefore could not determine the effect of U.S. land-mine use during the Gulf War. See appendix II for pictures, types, and numbers of land mines available for use and numbers used in the Gulf War.

U.S. forces deployed to the Gulf War with over 2.2 million of the DOD-estimated 19 million land mines available in U.S. worldwide stockpiles in 1990. These consisted of both conventional nonself-destruct land mines and scatterable surface-laid, self-destruct land mines. Nonself-destruct, hand-emplaced land mines available but not used included the M-14 ("Toe Popper") and the M-16 ("Bouncing Betty") antipersonnel land mines and the M-15, M-19, and M-21 antitank land mines. Self-destruct, scatterable land mines included the air-delivered cluster bomb unit (CBU) 78/89 Gator, which dispensed mixed scatterable antipersonnel and antitank land mines, and the artillery-fired M-692/731 Area Denial Artillery Munition (ADAM) antipersonnel land mines and M-718/741 Remote Anti-Armor Mine (RAAM) antitank land mines. (See app. II, figs. 5, 6, and 7 and table 10.) The services reported that all standard types of U.S. land mines in their inventories were available from unit and theater supplies or U.S. stockpiles.

During the Gulf War, U.S. forces were permitted by doctrine, war plans, and command authority to employ both nonself-destruct and self-destruct land mines whenever an appropriate commander determined that U.S. use of land mines would provide a tactical advantage. U.S. land mines of all types were available and planned for use by U.S. forces. U.S. land mine warfare doctrine for the services during the Gulf War indicated that land mines could be used both offensively, for example, to deny the enemy use of key terrain, and defensively, for instance, to protect U.S. forces from attack. U.S. doctrine states that the primary uses of land mines are to provide force protection, shape the battlefield, and reduce the number of forces needed. At the time of the Gulf War, U.S. land mine doctrine included the following four types of minefields:

1. protective minefields, whose purpose is to add temporary strength to weapons, positions, or other obstacles;

2. tactical minefields, which are emplaced as part of an overall obstacle plan to stop, delay, and disrupt enemy attacks; reduce enemy mobility; channelize enemy formations; block enemy penetrations; and protect friendly flanks;

3. point minefields, which are emplaced in friendly or uncontested areas and are intended to disorganize enemy forces or block an enemy counterattack; and

4. interdiction minefields, which are emplaced in enemy-held areas to disrupt lines of communication and separate enemy forces.

U.S. plans for the execution of the Gulf War included the use of hand-emplaced antipersonnel and antitank land mines (e.g., M-14/16/21), artillery-delivered land mines (ADAM/RAAM), air-delivered land mines (Gator), and others for these purposes when U.S.
commanders determined their use was needed. Military units' on-hand ammunition supplies, as well as ammunition resupply stockpiles located within the combat theater, included millions of U.S. land mines. Ammunition resupply plans included planned rates for the daily resupply of land mines consumed in combat.

The services reported that during the Gulf War, they used about 118,000 land mines from the approximately 2.2 million U.S. land mines that were taken to the Gulf War theater of operations and the millions of land mines available for use from U.S. worldwide stockpiles, which in total contained about 19 million land mines. All of the land mines used were the self-destructing, scatterable, surface-laid types. However, the services also indicated that, because Gulf War records related to land mines might be incomplete, the information made available to us may be inexact. For example, the Army indicated that, while its record searches show that the Army used no land mines, it is unsure whether archived Gulf War records include evidence of Army land mine use that it has not uncovered.

The services reported no confirmed use of any nonself-destruct land mines during the Gulf War. In other words, U.S. forces reported no use of antipersonnel land mines such as the M-14 "Toe Popper" or M-16 "Bouncing Betty," of which over 6 million were available (over 200,000 in theater), and no use of M-15, M-19, or M-21 antitank land mines, which numbered over 2 million in U.S. stockpiles (over 40,000 in theater). (See fig. 6 and table 10 in app. II.) The Army reported no confirmed use of any land mines, with the qualification that it is unsure whether it had emplaced two minefields of an unknown type. The other military services reported that they used a total of 117,634 U.S. self-destruct land mines, whose destruction time-delay periods were set at 4 hours, 48 hours, or 15 days. The land mine used in the largest quantity was the aircraft-delivered, surface-laid Gator, which was dispersed from cluster bomb units containing both antitank and antipersonnel mines. Air Force, Navy, and Marine aircraft employed a total of 116,770 Gator land mines. Table 1 and appendix II provide additional details on the numbers and types of land mines available for use and used by the U.S. military services during the Gulf War.

DOD records on the Gulf War provided to us include little detail on why land mines were used. Available records indicate that U.S. forces employed land mines both offensively and defensively when fighting in Iraqi-controlled Kuwait. For example, U.S. aircraft offensively employed concentrations of surface-laid Gator land mines to deny Iraqi use of Al Jaber airbase in Kuwait and to hamper the movement of Iraqi forces. In addition, Gator land mines were used extensively with the intent to inhibit free movement in and around possible staging and launch areas for enemy Scud missiles. Possible Scud missile transporter "hide sites" included culverts, overpasses, and bridges in Iraq. In a defensive mode, Gator land mines were employed along the flanks of U.S. forces. In addition, U.S. Marines defensively employed concentrations of artillery-fired ADAM and RAAM land mines to supplement defenses against potential attacks by enemy forces north of Al Jaber airbase in southern Kuwait.

Procedures for commanders to approve land mine use were established, disseminated, and included in all major unit war plans. A senior U.S. force commander who participated in the Gulf War told us that U.S.
forces had no restrictive theaterwide or forcewide prohibitions on the employment of land mines, that U.S. commanders understood their authority to use mines whenever their use would provide a tactical advantage, and that U.S. commanders decided to use land mine or nonland-mine munitions based on their determinations as to which were best suited to accomplish assigned missions.

The services reported no evidence of enemy casualties, either killed or injured; enemy equipment losses, either destroyed or damaged; or enemy maneuver limitations resulting, directly or indirectly, from the U.S. employment of surface-laid scatterable Gator, ADAM, and RAAM land mines during the Gulf War. (See app. II, fig. 5.) U.S. forces intended to adversely affect the enemy by using 116,770 Gator land mines, but no service has provided specific evidence that these land mines or the 864 ADAM and RAAM land mines reported as employed actually caused or contributed to enemy losses. Because neither DOD nor the services provided us evidence or estimates of actual effects and losses inflicted on the enemy by these U.S. land mines, we were unable to determine the actual effect of U.S. land mine use during the Gulf War. DOD and service documents detailing when land mines were used did not provide evidence of the effects of that use. For example, in one case, the Marine Corps reported that it had fired artillery-delivered ADAM and RAAM land mines to supplement a defensive position. However, the enemy was not reported to have been aware of, or to have actually encountered, these land mines. Similarly, aerial Gator drops on possible Scud missile sites were not reported to have destroyed any Scud missiles or transporters. The services provided no evidence indicating whether the enemy had ever encountered the Gator land mines dropped on possible enemy maneuver routes or whether Gator employments had resulted in enemy destruction.

Service reports indicate that 81 of the 1,364 U.S. casualties attributed to the Gulf War were caused by land mines. None of these were attributed specifically to U.S. land mines, but rather to an Iraqi or an "unknown" type of land mine. Because of service data limitations, the possibility cannot be ruled out that some of the casualties now attributed to explosions of unknown or ambiguously reported unexploded ordnance were actually caused by land mines. Service casualty reporting indicates that at least 142 additional casualties resulted from such unexplained explosions. However, there is no way to determine whether some portion of these might have been caused by U.S. or other land mines or by unexploded ordnance. Of all casualties reported to have been caused by explosions, a relatively small percentage were reported to have been caused by the unauthorized handling of unexploded ordnance.

The services reported that there were 1,364 U.S. casualties associated with the Gulf War. Of these, 385 were killed and 979 were injured. Army personnel suffered 1,032 casualties, or 76 percent of all U.S. deaths and injuries. Table 2 shows the numbers of U.S. casualties by military service. To determine how many of these casualties could have been caused by U.S. or other land mines, we obtained information from the services on the causes of all Gulf War deaths and injuries. Service officials attributed casualties to causes and categories based on battlefield casualty, accident, after-action, and other reports.
As shown in figure 1, enemy ground and Scud missile fire caused the largest number of identifiable casualties among Gulf War service members. The services assigned 287, or 21 percent, of all casualties during the Gulf War to the "enemy ground/Scud fire" category. In particular, the Army attributed 128 of the 287 casualties in this category to an Iraqi Scud missile attack. In addition, enemy fire caused some "aircraft incident" casualties. The second and third largest categories of identifiable causes of casualties were vehicle accidents and aircraft incidents. Available data indicate that explosions from some type of ordnance caused 177 casualties: land mines caused 81; cluster munition unexploded ordnance (UXO) caused 80; and other UXO caused 16. The casualty categories depicted in figure 1 are defined in table 3.

As would be expected, the various services experienced different types and numbers of casualties. For the Marine Corps, "enemy ground fire" caused the largest number of casualties—84; for the Air Force, "aircraft incidents" was the largest cause—39; and for the Navy, "other accidents" caused the largest number—33. For the Army, "other causes" was the largest category—267. Our comparison of casualty-related documentation, however, indicates that at least some of these casualties should have been categorized elsewhere. For example, documentation shows that one casualty placed in "other causes" might have been a land mine casualty. In a second case, documentation indicates that one of these casualties suffered a heart attack and should have been placed in the "natural causes" category. In other documentation, we found indications that five casualties placed in the "other causes" category actually resulted from "other accidents." For these reasons, it is unclear whether all 267 of these Army-reported casualties should have been placed in the "other causes" category. However, Army officials indicated that available data limited the Army's ability to identify more specifically the causes of these casualties. See appendix III for the reported numbers of casualties by service and cause.

Service data show that 34 persons were killed and 143 were injured during the Gulf War by the explosion of some type of ordnance other than enemy fire. These 177 casualties—caused by land mines, cluster munition UXO, or other UXO—represent 13 percent of all casualties suffered by service members. (See table 4.) The services reported that none of these 177 explosion casualties were caused by U.S. land mines. However, as shown in table 5, U.S. cluster munition UXO (CBU or dual-purpose improved conventional munitions) or other, unidentified UXO caused more U.S. casualties—96—than Iraqi and unidentified land mines—81. Of all persons killed or injured by explosions from land mines (either Iraqi or unidentified), cluster munition UXO (either CBUs or dual-purpose improved conventional munitions), and other unidentified UXO, Army personnel represented 164, or 93 percent. In addition, 12 Marine Corps personnel were killed or injured, and 1 Air Force service member was injured by these explosions.

Of the 177 explosion casualties attributed by the services to some type of ordnance explosion, service records specify that 35 were caused by Iraqi land mines (see fig. 2). Casualty records for some of the 142 other explosion casualties—that is, the 177 explosion casualties less the 35 attributed specifically to Iraqi land mines—are inexact or ambiguous.
Thus, the other explosion categories—cluster munition UXO from CBUs and dual-purpose improved conventional munitions, unidentified land mines, and other UXO—could include some U.S. casualties caused by U.S. or other land mines because casualty records did not always permit DOD to identify definitively the type of UXO causing the casualty. While the UXO causing a casualty might have been reported as a cluster munition CBU, it could have been misidentified and actually have been a U.S. land mine cluster munition from Gator, ADAM, RAAM, or some other munition. Casualty records show numerous cases in which all these terms are used interchangeably. For example, in one reported case, a casualty is first attributed to a mine and then to a dual-purpose improved conventional munition. In a second case, the service member was said to have driven over a cluster munition, which was later called a "mine." In a third case, the soldier is reported in one document to have "hit a trip wire causing mine to explode" but in another document to have "stepped on an Iraqi cluster bomb." In other words, the terminology used in these casualty reports is inconsistent and imprecise, thus preventing a definitive analysis by the services of the causes of some casualties. DOD indicated that it is also possible that some of the casualties attributed to land mines were actually caused by unexploded ordnance.

DOD data did not always allow it to identify how service members had triggered the UXO that caused each casualty. Because of the many ways that ordnance and UXO can be triggered and because some ordnance can be triggered from a distance, DOD was not always able to determine the circumstances causing an explosion and the type of ordnance that exploded. DOD-reported data, however, indicate that relatively few persons who became casualties of unexploded ordnance were handling it without authorization. In attempting to determine what percentage of service members were injured or killed while handling ordnance in an unauthorized manner, we consulted all available descriptions of these incidents. We grouped these casualties into three categories based on service-reported information concerning how the explosion was triggered: (1) in performance of duty, (2) unauthorized handling of UXO, and (3) unknown circumstances. As shown in figure 3, DOD data indicate that more than half of the explosion casualties resulted from unknown circumstances.

Of the 177 explosion casualties, DOD records indicated that 64 casualties (36 percent) resulted from explosions that were triggered in the performance of assigned duties. For example, one Army ground unit reported that when it began its ground attack, its first casualty resulted from a soldier encountering an artillery submunition dud that exploded. In another incident, seven Army engineers were killed while clearing unexploded BLU-97 (nonland-mine) duds at an Iraqi airfield. DOD attributed these casualties to "incorrect or incomplete training in mine neutralization techniques and the handling of UXOs." An expert in explosive ordnance disposal who was advising the engineers on how to safely clear Gator land mine duds and other submunitions reported, "I feel worse because the guys who died probably died of ignorance.
This is a EOD related problem which was ill handled by others who thought they could handle the job.” This situation illustrates that UXO can be so dangerous that even engineers with some training in handling UXO were thought by an explosive ordnance disposal expert to be inadequately prepared to deal with UXO on the battlefield. The 16 casualties (9 percent) that DOD attributed to unauthorized handling of UXO generally involved soldiers who were performing their military duties but for some unknown reason touched or otherwise triggered UXO. These soldiers were typically on duty in or traversing U.S. dudfields on the battlefield while performing such actions as pursuing the enemy. DOD reported that some soldiers became casualties as a result of disturbing battlefield objects that they thought were not hazardous, while others might have known they were handling a piece of some sort of ordnance. For example, a DOD document cited a case in which soldiers handled UXO that they thought was harmless. This report stated that two persons were killed and seven injured when soldiers “collected what they thought were parachute flares.” Furthermore, soldiers might not have recognized that a battlefield object was hazardous because UXO comes in many shapes, sizes, and designs, much of which inexperienced soldiers had never seen before. Some common U.S. submunitions appear to be harmless while actually being armed and dangerous. Moreover, many soldiers are not aware that some UXO can cause injuries at distances of 100 meters. A small number of DOD casualty reports describing unauthorized handling of UXO attribute soldier casualties to souvenir hunting. For example, one incident resulted when a soldier who was examining an object was told by fellow soldiers to get rid of it. When the soldier threw the object away from him, it exploded. In other cases, soldiers might have known that handling UXO was unauthorized and handled it anyway. Gulf War documents indicate that DOD and the services called for soldiers on a battlefield to be warned not to handle UXO unless directed to do so. The remaining 97 (55 percent) of the 177 explosion casualties fell into the unknown circumstances category. Because battlefield casualty reports did not identify the circumstance or activity of these soldiers, it is unknown whether or not these soldiers became casualties while performing assigned duties. The Army’s Safety Center provided us with data on 21 Gulf War U.S. explosion casualties that occurred in Kuwait, Iraq, and Saudi Arabia (5 deaths and 16 injured). The Center attributed 7 of these casualties to land mines of unknown type and 14 to U.S. dual-purpose improved conventional munitions and CBU submunitions. These casualties were associated with unintentional entry into minefields or dudfields or disturbance of UXO. These casualties are included in the Gulf War casualty totals presented in this report. Service and DOD Gulf War lessons-learned, after-action, and other reports raised numerous issues concerning the safety and utility of U.S. conventional and submunition land mines. Fratricide and battlefield mobility were often cited as important overall concerns associated with both available and employed U.S. land mines and nonland-mine submunitions. These concerns led to the reluctance of some U.S. commanders to use land mines in areas that U.S. and allied forces might have to traverse. Commanders’ fears arose for two basic reasons: The first reason involved both the obsolescence of conventional U.S.
mines and safety issues with both conventional and scatterable land mines. A higher-than-anticipated dud rate for land mines and other submunitions during the Gulf War was one safety issue. Reflecting these safety issues, DOD reports recognized that de facto minefields created by all unexploded submunitions—land mine and nonland-mine alike—threatened fratricide and affected maneuvers by U.S. forces. The second reason involved concern that reporting, recording, and, when appropriate, marking the hazard areas created by the placement of self-destruct land mines or dudfields were not always accomplished when needed. On the basis of its Gulf War experience, DOD recognized the importance of commanders’ taking into consideration the possible effects of unexploded munitions when making and executing their plans and identified a variety of corrective actions. (App. IV cites DOD-reported actions related to land-mine and UXO concerns. Because it was beyond the scope of this report, we did not evaluate DOD’s progress in these areas.) In Gulf War lessons-learned and other documents, DOD and the services reported that U.S. conventional nonself-destructing land mines were obsolete and dangerous to use and that the newer self-destructing land mines also posed safety concerns to users. For example, one Army after-action report recommended that U.S. conventional antitank and antipersonnel land mines be replaced because of safety concerns. Army officials stated that U.S. conventional mines needed better fuzing and the capability of being remotely turned on or off or destroyed. In a joint service lessons-learned report, officials stated, “Commanders were afraid to use conventional and scatterable mines because of their potential for fratricide.” The report said that this fear could also be attributed to the lack of training that service members had received in how to employ land mines. In particular, prior to the Gulf War, the Army restricted live-mine training with conventional antipersonnel land mines (M-14s and M-16s) because they were considered dangerous. The joint lessons-learned report argued, “If the system is unreliable or unsafe during training, it will be unreliable and unsafe to use during war.” Since before the Gulf War, the Army has known about safety issues with its conventional nonself-destruct M-14 and M-16 antipersonnel land mines. For example, because of malfunctions that can occur with the M605 fuze of the “Bouncing Betty” M-16 antipersonnel land mine, the Army has restricted the use of the pre-1957 fuzes that are thought to be dangerous. However, the concern extends beyond the fuze issue to include the land mines themselves. A DOD reliability testing document states that the M-16 mines “are subject to duds; the mine ejects but fails to detonate. [The] mine is then unexploded ordnance and still presents a danger.” A 2001 DOD report on dud rates for land mines and other munitions states that the dud rate identified by stockpile reliability testing for M-16 land mines is over 6 percent. In a specific case, a currently serving senior Army officer told us that he had trained his unit with these antipersonnel land mines in Germany in 1990 to prepare for the Gulf War. According to the officer, during the training, his unit suffered 10 casualties from the M-16 land mine. This officer said that U.S. “Bouncing Betty” M-16 and “Toe Popper” M-14 antipersonnel land mines should be eliminated from Army stockpiles because they are too dangerous to use.
Due to safety concerns, the Army placed prohibitions on live-fire training with these land mines before and after the Gulf War, with restrictions being lifted during the Gulf War. But DOD reporting does not indicate that any U.S. unit chose to conduct live-mine training in the theater with any type of mines. According to an Army engineer after-action report, “Some troops even reported that they were prohibited from training on live mines after their arrival in Saudi Arabia.” Moreover, DOD reporting states that U.S. forces employed no M-14 or M-16 mines in combat. Because of renewed restrictions following the Gulf War, service members are still prohibited from live-fire training on M-14 antipersonnel land mines, and training on live M-16 mines is restricted to soldiers in units assigned or attached to the Eighth U.S. Army in Korea. Another safety concern expressed in lessons-learned reports was that higher-than-expected dud, or malfunction, rates occurred for the approximately 118,000 U.S. self-destruct land mines and the millions of other U.S. scatterable submunitions employed in the Gulf War. The evidence included duds found by a U.S. contractor while clearing a portion of the Kuwaiti battlefield. These duds created concerns about potentially hazardous areas for U.S. troops. According to briefing documents provided by DOD’s Office of the Project Manager for Mines, Countermine and Demolitions, testing over the past 14 years of almost 67,000 self-destructing antitank and antipersonnel land mines at a proving ground has resulted in no live mines being left after the tests. The office also reports that all U.S. self-destruct mines self-deactivate, that is, their batteries die within 90 to 120 days. The office stated that the reliability rate for the self-destruct feature is 99.99 percent and that the reliability rate for the self-deactivation feature is 99.999(+) percent. According to the program office, these features mean that self-destruct land mines leave no hazardous mines on the battlefield. Army doctrine describes the arming and self-destruct sequence for scatterable mines as follows: “For safety reasons, SCATMINEs must receive two arming signals at launch. One signal is usually physical (spin, acceleration, or unstacking), and the other is electronic. This same electronic signal activates the mine’s SD time. “Mines start their safe-separation countdown (arming time) when they receive arming signals. This allows the mines to come to rest after dispensing and allows the mine dispenser to exit the area safely . . . . “Mines are armed after the arming time expires. The first step in arming is a self-test to ensure proper circuitry. Approximately 0.5 percent of mines fail the self-test and self-destruct immediately. “After the self-test, mines remain active until their SD time expires or until they are encountered. Mines actually self-destruct at 80 to 100 percent of their SD time. . . . No mines should remain after the SD time has been reached. Two to five percent of US SCATMINES fail to self-destruct as intended. Any mines found after the SD time must be treated as unexploded ordnance. For example, mines with a 4-hour SD time will actually start self-destructing at 3 hours and 12 minutes. When the 4-hour SD time is reached, no unexploded mines should exist.” Conventional Munitions Systems (CMS), Inc., a U.S. contractor that specialized in explosive ordnance disposal, was paid by the government of Kuwait to clear unexploded ordnance from one of seven sectors of the battlefield in Kuwait, a sector that included Al Jaber Airbase (see fig. 4). CMS reported finding substantially more U.S.
land mine duds than would be expected if dud rates were as low as DOD documents and briefings stated. DOD indicated that it cannot confirm the accuracy of the CMS-reported data. After the Gulf War, CMS employed more than 500 certified, experienced, and trained personnel to eliminate the unexploded ordnance in its sector of Kuwait. About 150 CMS employees were retired U.S. military explosive ordnance disposal experts. In a report for the U.S. Army, CMS recorded the types and numbers of U.S. submunition duds it found in its explosive ordnance disposal sector of the Kuwaiti battlefield. The report illustrates how the dangers of the battlefield during the Gulf War were compounded by the large numbers of unexploded U.S. submunitions, including land mines. According to the CMS report, it found 1,977 U.S. scatterable land mine duds and about 118,000 U.S. nonland-mine submunition duds in its disposal sector. CMS’s report stated that “many tons of modern bombs called Cluster Bomb Unit were dropped,” each of which “would deploy as many as 250 small submunitions.” The report states, “A significant number of the bombs and more importantly the submunitions, did not detonate upon striking the ground resulting in hundreds of thousands of ‘dud’ explosive devices laying on the ground in Kuwait.” While the vast majority of these duds were from nonland-mine submunitions, they included the more modern self-destructing RAAM, ADAM, and Gator land mines. Table 6 lists the types and amounts of U.S. dud submunitions CMS reported finding in its disposal sector of the Kuwaiti battlefield. DOD reports that it employed in the Gulf War a total of about 118,000 self-destruct land mines (see table 1) and that their self-destruct failure, or dud, rate is 0.01 percent (1 in 10,000). However, if, as DOD reported, about 118,000 of these self-destruct land mines were employed and they produced duds at the DOD-claimed rate of 0.01 percent, there should have been about 12 duds produced, not the 1,977 that CMS reported finding in one of seven Kuwaiti battlefield sectors; the CMS figure alone implies a dud rate of roughly 1.7 percent. Thus, a substantial inconsistency exists between the DOD-reported reliability rate and the dud rate implied by the number of mines that CMS reported finding from actual battlefield use. At the time CMS was completing this UXO disposal work in Kuwait, the DOD program manager for Mines, Countermine and Demolitions visited the CMS cleanup operation. His report of that trip indicates that he thought CMS’s techniques, training of personnel, and recording of ordnance recovered were thorough and accurate. The project manager said in his report that he had personally seen unexploded U.S. ordnance on the battlefield. The project manager also believed that the mine database developed by CMS to record the location of land mines was “extremely useful” to the U.S. soldiers working in that area. We interviewed several former employees of CMS to obtain their views on these issues. All of those we interviewed were retired senior U.S. officers and noncommissioned officers whose ranks ranged from major general to sergeant first class. All but one were experienced in military ordnance and explosive ordnance disposal. They included the then-CMS president, the Kuwaiti on-site manager, and leaders of ground UXO disposal teams. They made two major points: (1) U.S.
submunition UXO found in their sector consisted of tactically employed ordnance duds that had failed to explode as designed and could have been hazardous, meaning that the ordnance might have exploded if disturbed, and (2) U.S. Gator, ADAM, and RAAM land-mine duds had not self-destructed as designed and were treated as hazardous. CMS explosive ordnance disposal personnel stated that they had personally witnessed what they thought were Gator duds exploding on the battlefield in Kuwait, with no apparent triggering event, over a year after the Gulf War ended. CMS experts speculated that these detonations might have been caused by the extreme heat in a desert environment. DOD has been unable to explain why the nearly 2,000 U.S. self-destruct land mine duds found in the CMS disposal sector of the Kuwaiti battlefield failed to self-destruct. Several DOD land mine and explosive ordnance disposal experts speculated that these dud land mines could have resulted from (1) mines that had malfunctioned or had been misemployed; (2) greater-than-expected and reported dud rates; or (3) the use by U.S. forces of many thousands more scatterable land mines than DOD has reported having used. Some Army officials with land mine responsibilities discounted the accuracy of some data included in the CMS report; however, they did not provide us with any factual evidence supporting these views. Other DOD experts in explosive ordnance disposal confirmed in interviews that scatterable mine duds can exist after their self-destruct times have elapsed and that these duds may be hazardous. A DOD explosive ordnance disposal expert said that procedures for eliminating Gator duds specify that explosive ordnance disposal should be postponed for 22 days, and then the duds should normally be destroyed remotely by blowing them up in place. The 22-day period is calculated by adding a 50-percent safety factor to the maximum possible self-destruct period of 15 days. Explosive ordnance disposal personnel thus attempt to reduce the possibility of a munition detonating or self-destructing while they are near it. DOD did not provide us with records to show the results of reliability testing for ADAM, RAAM, or Gator land mines done prior to the Gulf War or any safety-of-use messages that might have been in effect for these or other U.S. land mines that were in U.S. stockpiles at that time. However, DOD did provide some post-Gulf War test records that document reliability problems with eight of its self-destruct land mine systems. Specifically, testing showed that some land mines did not self-destruct at the selected times. For example, a July 2000 Army study of dud rates for ammunition reports that the submunition dud rate for RAAM land mines with short-duration fuzes is over 7 percent, and the dud rate for RAAM land mines with long-duration fuzes is over 10 percent. In an Ammunition Stockpile Reliability Program test for the ADAM, the Army suspended one lot because it failed. In a test of the Volcano system, 66 of 564 land mines failed. Among the failures were 1 hazardous dud (meaning that it could explode), 24 nonhazardous duds (meaning that they had not armed), 6 mines that detonated early, and 1 mine that detonated late. In another case, DOD testing of the Selectable Lightweight Attack Munition (SLAM) land mine showed that it also did not self-destruct at the selected time.
While this problem was investigated, SLAM use was suspended and a safety-of-use message was put into effect advising personnel “never to approach an M2 SLAM that has been armed” and, in training, “to assure that it can be detonated if it fails to go off as intended.” According to DOD, the same self-destruct and self-deactivation design has been used in all U.S. mines since 1970. Because of this design similarity, it is possible that U.S. self-destruct land mines could be subject to similar failures. Failures of self-destruct land mines that are induced by extremes in temperature and other variations in environmental conditions are well-documented in service field manuals and after-action reports. Field manuals state that the reliability of self-destruct land mines degrades when they are employed on sand, vegetation, hillsides, snow, or hard surfaces. Also, self-destruct land mines have reportedly “reduced effectiveness” on hard surfaces such as concrete and asphalt. They break apart and can easily be seen. In addition, the high detectability of scatterable mines on bare and lightly covered surfaces permits the enemy to seek out unmined passageways or pick a way through lightly seeded areas. An Army document states that “FASCAM must be covered by either observation or fire, since FASCAM minefields are surface laid and an undisturbed enemy could breach those obstacles quickly. . . . FASCAM is not suitable for use in road interdiction due to its tendency to malfunction on hard surfaces.” In snow, self-destruct land mines may settle into the snow at unintended angles, causing their antihandling devices to prematurely detonate them. In deep snow, self-destruct land mines are considered “ineffective,” and at least 40 percent of their blast is smothered. Soft sand, mud, or surface water can have similar effects. During the Gulf War in particular, Marines found that in the constantly blowing and shifting sand, surface mines became buried, and buried mines came to the surface. Slope or unevenness of the terrain may also have an adverse impact on self-destruct land mines. Specifically, between 5 and 15 percent of scatterable mines come to rest on their edges when deployed. RAAM and ADAM land mines must come to rest and stabilize within 30 seconds of impact, or the submunitions will not arm. Very uneven terrain such as ground covered by vegetation or rocks also may prevent the ADAM or Gator trip wires from deploying properly. Gator testing indicates that various reliability problems can increase dud rates. For example, in 58 tests, seven submunition land mine dispenser failures were observed, reducing the reliability rate of the dispensers to 88 percent. Of the submunition mines delivered, 99 percent survived ground impact. Of those, 97 percent of the antitank mines armed, and 95 percent of the antipersonnel mines armed. Various other problems can affect whether a mine explodes. For example, one antitank mine did not explode when triggered, but it did activate when it was picked up and shaken. During the Gulf War, accumulations of thousands of U.S. nonland-mine submunition duds on the battlefield created unintended de facto minefields. This problem was exacerbated by dud rates for these submunitions that appear to have been higher than the 2- to 4-percent submunition dud rate that DOD had previously reported. In a study of UXO issues, the Army identified an estimated 8-percent overall dud rate for submunitions.
Another Army document said that an explosive ordnance disposal (EOD) commander estimated that an area occupied by the 24th Infantry Division during the war experienced at least a 15- to 20-percent dud rate for some Army submunitions. The document stated that “An unknown amount was covered by sand suggesting an even higher rate.” EOD personnel estimated that the dud rate for Air Force submunitions was 40 percent for one area. They commented that these submunitions “did not function well in soft sand.” In addition, DOD reported that at the time of the Gulf War, over half of the 133 Multiple Launch Rocket System (MLRS) submunition lots in inventory exceeded the Army’s 5-percent dud-rate goal. Each MLRS rocket contains 644 M77 submunitions. One DOD document stated that the dud rate for the M77 for the Gulf War ranged from 10 to 20 percent. U.S. ammunition stockpile sample testing also indicated that DOD has experienced past problems with submunition reliability rates. For example, in 1990, testing of artillery-delivered nonland-mine submunitions identified two lots that had dud rates in excess of 40 percent. According to a testing document, one way to compensate for this high dud rate is to increase the quantity fired. Instructions contained in the testing document were to “Notify the user of the increase in submissile defect rate so that he can make adjustments in the tactical employment plans.” The July 2000 Army study of dud rates for ammunition reports that the dud rate for artillery-fired M42/46 submunitions is over 14 percent. Like land mines, nonland-mine submunitions experience higher failure rates in various environmental conditions. According to an Army field manual, about 50 percent of the submunitions that fail to detonate are armed and hazardous. Firing them into mountainous areas or uneven terrain further increases the dud rate. The effectiveness of these rounds also decreases in snow, water, marshy areas, mud, vegetation, and soft sand. According to one DOD document, the improved conventional munitions used, including dual-purpose improved conventional munitions and CBUs, experienced a high dud rate and created obstacles for maneuvering forces. Units perceived the dud rates as “considerably greater than the 2-4 percent anticipated,” creating a dud minefield. The document continued that because the dud rates were “too high,” some maneuver commanders hesitated to use submunition weapons, especially if they believed that their units would move through the area later. Hazardous dudfields caused delays in movement on the battlefield, and high winds and shifting sands often covered many duds. According to this report, “This became especially dangerous for high hazard missions such as refueling operations.” One account illustrates the operational impact: “In one case, the 1st Cavalry Division moved into Kuwait along the Wadi al Batin. Twenty miles of this route was saturated with both USAF submunitions (BLU97 and Rockeye) and Army M77 submunitions. . . . Maneuvering through this area was no problem for the tracked vehicles of the division. However, the 1st Cav selected the same route for its main supply route (MSR). Because the division’s CSS consisted of mainly wheeled vehicles, EOD support was required. It took the 64th EOD and a British unit about five days to clear a two lane path through the area. In this case, the unit’s progress was clearly slowed by the duds.” Because Gulf War records are not always specific, it is not clear how frequently U.S.
forces experienced problems in maneuvering through areas previously attacked by U.S. ordnance. However, available records indicate that such problems did occur to some degree and were an operational concern. In fact, DOD reported that in some instances “ground movement came to a halt” because units were afraid of encountering unexploded ordnance. Moreover, Army officials reported that, in the case of the M77 submunitions, the Army believed that the weapon would most likely be used against the Soviet threat in Europe, where U.S. troops would probably be in a defensive position. Therefore, U.S. soldiers were not expected to occupy submunition-contaminated areas. During the Gulf War, the placement of self-destruct land mines was not always reported, recorded, or marked when appropriate. This situation was exacerbated by the possibility that self-destruct land mines did not always self-destruct as designed after their preset periods of time. Consequently, safety issues involving Gulf War self-destruct land mines, as well as other submunitions, focused on the potential for fratricide resulting from U.S. forces’ unknowingly maneuvering into areas where scatterable land mines had been employed but had not yet self-destructed. Shortly after the Gulf War, one DOD fact sheet reported that DOD’s joint procedures for coordinating the use of air-delivered mines had not been widely disseminated. Further, according to the fact sheet, the procedures were outdated with respect to the rapid mobility of the modern Army. Thus, the warning information—such as the locations and self-destruct timing durations—“was next to impossible to obtain and pass to ground component commanders.” According to the document, this situation dramatically increased the probability of friendly fire casualties. The Army’s Field Manual on Mine/Countermine Operations states the importance of such coordination: “Because SCATMINEs [scatterable mines] are a very dynamic weapon system, great care must be taken to ensure that proper coordination is made with higher, adjacent, and subordinate units. To prevent friendly casualties, all affected units must be notified of the location and duration of scatterable minefields.” Gulf War records include numerous reports indicating that scatterable minefields were employed in locations that were not reported to maneuver commanders. For example, one DOD report stated that neither the Air Force nor the Navy could accurately track the location or duration of Gator minefields. An Army after-action report stated that the Air Force “flew over 35 GATOR missions (the exact number is not known) without reporting or recording the missions.” According to this report, the result was that “[d]uring the ground offensive, units found themselves maneuvering in GATOR minefields without any knowledge of their existence.” Another Army after-action report stated, “Some friendly Gator-scatterable Air Force-delivered scatterable minefields were encountered in Iraq.” The report highlighted the lack of a scatterable minefield self-extraction capability for units to avoid fratricide. A DOD fratricide lessons-learned document noted that casualties from friendly minefields were a “major problem” due to the lack of coordination, failure to disseminate obstacle plans, and failure to report the location of mines throughout the chain of command. Another Army after-action report attributed fatalities to the failure to mark hazardous areas.
According to this report, “In many cases GATOR minefields and large areas which contained DPICM [dual-purpose improved conventional munitions] and CBU duds were left unmarked due to the lack of a fast and simple method for marking hazardous areas.” After-action reports also cited planners’ ignorance of “the capabilities, limitations and reporting, recording, and marking requirements of our scatterable mine systems,” as well as a lack of training regarding unexploded ordnance, as the causes of fatalities. Tracking nonland-mine dudfields presented similar concerns. A case in which one U.S. unit had moved through an area where another U.S. unit had earlier dropped cluster munitions is presented in a historical account of the Gulf War written by a retired Army lieutenant general. According to this account, a U.S. Army 101st Airborne Division aviation battalion traversed an area that had previously been seized by the U.S. Army VII Corps, which had fired cluster munitions. The battalion’s commander cited a case in which one of his soldiers was injured when he stepped on a cluster munition. “Keeping track of DPICM-dudded areas,” said the commander, “was complicated by the fact that one Corps moved into another Corps area.” Senior U.S. Gulf War commanders were aware of the incidence of fratricide from unexploded CBU, dual-purpose improved conventional munitions, and other ordnance. For example, one U.S. Army artillery general sent a safety message that read, “In recent days I have received numerous reports of soldiers being injured and killed by duds. . . . I am firmly convinced that each case could have been averted. Every soldier must be warned. . . .” According to one DOD official, the main reason hazardous dudfields were not always reported or marked was that doctrine did not require commanders to always report or mark nonland-mine hazard areas, as is required for minefields. However, DOD has noted, “Although UXO is not a mine, UXO hazards pose problems similar to mines concerning both personnel safety and the movement and maneuver of forces on the battlefield.” According to after-action, lessons-learned, and other reports, DOD and the services recognized the nature and extent of the reported concerns about land mines and other submunition UXO, the implications of those concerns for fratricide and battlefield maneuver, and the need to act on them. According to an Army after-action report, “The large amount of UXO found in Iraq and Kuwait caught Allied forces by surprise. Lessons learned from past conflicts were not learned, leading to unacceptable casualties among our soldiers, allies, and civilians.” These reports suggested that changes to address these concerns would increase submunition battlefield utility and effectiveness while simultaneously reducing casualties and increasing freedom of maneuver. After-action reports identified a number of actions to improve the safety of troops and their mobility through areas containing land mines and other employed submunitions. These included, among others, that DOD replace the current conventional land mines with modern, safer ones; add a feature to scatterable land mines that would allow them to be turned on and off, giving the land mines a long-term static capability and providing U.S.
commanders with the ability to create cleared lanes for friendly passage when and where needed; develop submunitions with lower dud rates and develop self-destruct mechanisms for nonland-mine submunitions; consider the magnitude and location of UXO likely to be on the battlefield when deciding the number and mix of submunitions, precision-guided munitions, or other munitions to use and, when planning maneuver operations, avoid dudfield hazard areas or breach them with troops inside armored vehicles; develop training aids—such as manuals and working models of U.S. scatterable mines—to provide service members with the ability to recognize U.S. scatterable mines and other unexploded ordnance and the knowledge of the proper actions to take to safely avoid and/or deactivate/detonate explosive submunitions and to safely extract themselves from minefields or dudfields; and establish and standardize procedures for the reporting, recording, and, when appropriate, marking of concentrations of submunition bomblets as hazard areas. DOD has reported a number of actions that relate to these land mine and UXO concerns. These actions are summarized in appendix IV. Because it was beyond the scope of this report, we did not evaluate DOD’s progress in these areas. In its comments on a draft of this report, DOD stated that it believes the report is flawed because it “makes assertions and speculations that are not based on fact” and because we used “unreliable or unrelated data.” In particular, DOD made the following main points: (1) our report implies that U.S. casualties caused by land mines were higher than DOD records show; (2) our report relied heavily on the report by CMS, Inc., even though there are weaknesses and mistakes in the CMS report; (3) our report confuses issues dealing with unexploded ordnance and land mines; and (4) by focusing on the Gulf War experience as one “case study,” our report is not a credible analysis of land-mine utility and employment. We have made some changes to the report to clarify and elaborate on the issues DOD has raised, but we do not agree that the report is flawed or makes unsubstantiated assertions. In regard to each of DOD’s comments, we offer the following response: Our report states that DOD records show no U.S. casualties attributed to U.S. land mines and that 81 casualties were attributed to Iraqi or other land mines. In addition, we point out that it is possible that some portion of the casualties in the “other” or “unknown” categories reported by DOD could have been caused by land mines—there is simply no way of knowing. This is a statement of fact, not an assertion that casualties were greater than reported. As we gathered data on Gulf War casualties, our service points of contact worked with us to ensure that we had the most complete information on this issue that was available. Some records were ambiguous and/or incomplete. However, DOD officials who provided us with these data agreed that our interpretation of the records was accurate. Much of DOD’s concern about “unreliable data” stems from our use of the report by CMS, Inc., on UXO cleanup of the battlefield. Most of our discussion of the CMS report is in the section addressing DOD’s lessons learned from the Gulf War. Our use of CMS data in that section in most cases corroborates the lessons learned contained in DOD after-action reports. While DOD claims that the CMS report contained inaccuracies, DOD did not provide any data to challenge the main message of the CMS report, which was that a very large number of U.S.
land mine and cluster munition duds were found on the Kuwaiti battlefield. In fact, a DOD study that discusses the magnitude of the unexploded ordnance problem, and that compares the cost of cleaning up the battlefield with the cost of retrofitting or reprocuring U.S. submunitions with self-destruct fuzes to lower dud rates, uses the same CMS data we cite in our report. In its 2000 report to Congress, DOD uses the results of these calculations to discuss the cost and feasibility of retrofitting the Army’s ammunition stockpile. UXO is discussed in our report from two standpoints. First, casualty data on the causes of casualties cannot always distinguish between a land mine and other types of UXO, so we believed it was important to discuss both to provide a proper context. Second, DOD’s own after-action reports on lessons learned discuss the problems of unexploded ordnance in terms of both land mines and cluster munitions, so our discussion of land mines needs to be in this overall UXO context. We have tried throughout the report to make clear distinctions between land mines and other ordnance, and we have made further clarifications as a result of DOD’s comments. Lastly, we recognize that this report focuses exclusively on the Gulf War; this was the agreed-upon scope of our work as discussed with our congressional requester, and this is stated in the objectives and scope and methodology sections of our report. As such, we agree that it is not a comprehensive analysis of the utility of land mines in modern warfare; it was never intended to be. As our report makes clear, we do not draw any conclusions or make any recommendations in this report. Nevertheless, we believe the report provides important historical context—the Gulf War was the largest U.S. conflict since Vietnam, and both sides in the conflict made use of land mines. Unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Chairmen of the House and Senate Committees on Armed Services; the Chairmen of the House and Senate Committees on Appropriations, Subcommittees on Defense; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Commandant of the Marine Corps. We will also make copies available to other congressional committees and interested parties on request. In addition, the report will be available at no cost on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (757) 552-8100 or e-mail me at [email protected]. Key staff who contributed to this report were Mike Avenick, William Cawood, Herbert Dunn, M. Jane Hunt, Jim McGaughey, and Bev Schladt. According to DOD and service data, the current DOD land-mine stockpile contains about 18 million land mines—over 2.9 million nonself-destruct land mines and over 15 million self-destruct land mines. The Army owns the vast majority of the nonself-destruct land mines, including over 1.1 million M-14 and M-16 mines (see fig. 6 in app. II). The Marine Corps has a relatively small number of these mines and has no M-14 land mines. The Air Force and the Navy stock no nonself-destruct land mines. Of the over 15 million self-destruct land mines in the U.S. stockpile, over 8.8 million are antipersonnel, and about 6.2 million are antitank land mines.
Artillery-fired ADAM antipersonnel land mines (over 8 million) and RAAM antitank land mines (over 4 million) are stocked mainly by the Army but also by the Marine Corps. (See table 7 and fig. 5 in app. II.) The DOD land mine stockpile includes over 150,000 mixed land-mine dispensers, which contain a mixture of both antipersonnel and antitank land mines. Altogether, these mixed land-mine dispensers contain over 2 million land mines, of which over 400,000 are antipersonnel land mines and over 1.6 million are antitank land mines. (See table 8.) The services report that land mine types are mixed in three dispenser systems: the Gator, the Volcano, and the Modular Pack Mine System. For example, the Air Force and the Navy stockpile the Gator air-delivered CBU, which is one type of mixed land mine dispenser. The two services together have almost 14,000 CBU dispensers, which contain nearly 1.2 million land mines. The Army stocks over 134,000 Volcano mixed dispensers, which contain over 800,000 antipersonnel and antitank land mines. Table 9 contains the total current U.S. inventory of land mines by mine type and common name; self-destruct capability; dispenser type, if any; service that maintains them; and quantity. Figures 5 and 6 illustrate types of land mines that were in the U.S. inventory and available for use during the Gulf War. Figure 7 shows the M-18 Claymore antipersonnel land mine. DOD has stated that the M-18 is employed only in the command-detonation mode and therefore is not defined as a land mine. Army Field Manual 20-32 alternately calls the M-18 Claymore a “land mine” and a “munition.” See appendix IV for DOD’s statements. Table 10 cites the U.S. land mines—by mine type and common name and by service—that were available and used during the Gulf War. DOD has reported a number of actions that are related to the land-mine and unexploded ordnance concerns raised in Gulf War after-action and lessons-learned reports. These actions fall into three areas: (1) developing antipersonnel land-mine alternatives and more capable and safer self-destruct land mines, (2) revising doctrine and procedures to better address hazardous submunition dudfields, and (3) increasing ammunition reliability and reducing dud rates. DOD-reported actions in these areas are described below. However, because it was beyond the scope of this report, we did not independently assess DOD’s progress in these areas. Presidential directives establish and direct the implementation of U.S. policy on antipersonnel land mines. Presidential Decision Directive 48 states that the United States will unilaterally undertake not to use, and to place in inactive stockpile status with intent to demilitarize by the end of 1999, all nonself-destructing antipersonnel land mines not needed for (a) training personnel engaged in demining and countermining operations and (b) defending the United States and its allies from armed aggression across the Korean demilitarized zone. The Directive also directs the Secretary of Defense to, among other things, undertake a program of research, procurement, and other measures needed to eliminate the requirement for nonself-destructing antipersonnel land mines for training personnel engaged in demining and countermining operations and for defending the United States and its allies from armed aggression across the Korean demilitarized zone. It further directs that this program have as an objective permitting both the United States and its allies to end reliance on antipersonnel land mines as soon as possible.
Presidential Decision Directive 64 directs the Department of Defense to, among other things, (1) develop antipersonnel land mine alternatives to end the use of all antipersonnel land mines outside Korea, including those that self-destruct, by the year 2003; (2) pursue aggressively the objective of having alternatives to antipersonnel land mines ready for Korea by 2006, including those that self-destruct; (3) search aggressively for alternatives to our mixed antitank land mine systems; (4) aggressively seek to develop and field alternatives to replace nonself-destructing antipersonnel land mines in Korea with the objective of doing so by 2006; and (5) actively investigate the use of alternatives to existing antipersonnel land mines, as they are developed, in place of the self-destructing/self-deactivating antipersonnel submunitions currently used in mixed antitank mine systems. In April 2001, DOD reported to the Congress on its progress in meeting the objectives of Presidential Decision Directives 48 and 64. Although DOD has pursued programs to develop and field systems to replace land mines and has plans to spend over $900 million to do so, it reported to us in May 2002 that it will not be able to meet the dates established in Presidential Decision Directives 48 and 64. Begun in 1997 and led by the Army, DOD’s “Antipersonnel Landmines Alternative” program is aimed toward producing what DOD calls a Non Self-Destruct Alternative (NSD-A). According to the program office, however, DOD does not now anticipate that it will be able to field this alternative system by the presidential goal of 2006. The alternative system, which DOD expects to cost over $507 million, is now on hold pending a decision on whether to include a mechanism that would allow a command-controlled “man-in-the-loop” feature to be turned off so that unattended mines could remain armed and detonate on contact. In response to the June 1998 Presidential Decision Directive 64, DOD has also been pursuing alternatives to pure antipersonnel land mine systems to end the use of all antipersonnel land mines outside of Korea by 2003 and in Korea by 2006. These efforts are being led by the Army, the Defense Advanced Research Projects Agency, and the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics). The program office indicated that the Army-led project to end the use of all pure antipersonnel systems outside Korea by 2003 by fielding artillery-fired mixed land mine ammunition, budgeted at about $145 million, might now be discontinued. A second effort, budgeted at $24 million and led by the Defense Advanced Research Projects Agency, is to seek long-term alternatives for mixed land mine systems. One concept under development is the self-healing minefield, which does not require antipersonnel land mines to protect antitank land mines because the antitank mines in the system are able to independently hop around the battlefield to intelligently redistribute themselves in response to breaching attempts. This system is not expected to be fielded before 2015. A third effort, budgeted at about $230 million and led by the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), is aimed at replacing all U.S. mixed land mine systems by removing the antipersonnel land mines in them. These mixed systems include the Modular Pack Mine System, the Volcano, and the Gator. At present, DOD does not expect any of these alternatives to be fielded by 2006.
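The individual program budgets cited above appear to account for the more than $900 million in planned spending noted earlier; the tally below is our own illustrative arithmetic, not an itemization reported by DOD:

\[
\$507\ \text{million} + \$145\ \text{million} + \$24\ \text{million} + \$230\ \text{million} = \$906\ \text{million}
\]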
Although DOD has numerous land-mine-related program activities underway, it has not reported to us that it has identified the specific concepts, systems, or programs that it plans to develop, procure, and field as its next generation of land mines or land mine alternatives that would comply with presidential directives and meet DOD’s military requirements. Because it was beyond the scope of this report, we did not assess DOD’s progress in these areas. Since the Gulf War, DOD and the services have updated their manuals and procedures dealing with unexploded ordnance to increase the attention paid to reporting and tracking possibly hazardous areas. These revisions are intended to improve the integration of UXO-related planning into military operations and provide improved procedures for the services to use when operating in a UXO environment. However, DOD has provided us with no manuals that require combat commanders always to report and track all potentially hazardous submunition dudfields. Instead, commanders are allowed to determine when reporting, tracking, and marking of potentially hazardous submunition dudfields are required. DOD’s post-Gulf War UXO manuals increase attention to procedures for operations in a UXO environment. DOD’s guidance is based on Gulf War lessons learned: “Experience from Operation Desert Storm revealed that a battlefield strewn with unexploded ordnance (UXO) poses a twofold challenge for commanders at all levels: one, to reduce the potential for fratricide from UXO hazards and two, to minimize the impact that UXO may have on the conduct of combat operations. Commanders must consider risks to joint force personnel from all sources of UXO and integrate UXO into operational planning and execution.” DOD’s manuals conclude that “Although UXO is not a mine, UXO hazards pose problems similar to mines concerning both personnel safety and the movement and maneuver of forces on the battlefield.” DOD’s manuals describe the UXO problem as having increased in recent years: “Saturation of unexploded submunitions has become a characteristic of the modern battlefield. The potential for fratricide from UXO is increasing.” According to DOD, “The probability of encounter is roughly equal for a minefield and a UXO hazard area of equal density[, but] the lethality of the UXO hazard area is lower.” DOD lists three Army and Marine Corps systems as causes of UXO: the Multiple Launch Rocket System (MLRS), the Army Tactical Missile System (ATACMS), and the cannon artillery-fired dual-purpose improved conventional munition (DPICM). The manuals warn that, based on the types of ammunition available for these weapons in 1996, “every MLRS and ATACMS fire mission and over half of the fire missions executed by cannon artillery produce UXO hazard areas.” With a 95-percent submunition reliability rate, a typical fire mission of 36 MLRS rockets could produce an average of 1,368 unexploded submunitions. Air Force and Navy cluster bomb units (CBUs) contain submunitions that produce UXO hazard areas similar to MLRS, ATACMS, and cannon artillery-fired DPICM submunitions. In its post-Gulf War manuals, DOD’s guidance includes “recommended methodologies for use by the services for planning, reporting, and tracking to enhance operations in an UXO contaminated environment.” Of primary concern to DOD are the prevention of fratricide and the retention of freedom of maneuver.
DOD’s manuals state that U.S. or allied casualties produced by friendly unexploded submunitions may be classified as fratricide. In planning wartime operations, the guidance suggests that commanders be aware of hazardous areas and assess the risk to their operations if their troops must transit these areas. Such planning is necessary for any type of mission, regardless of the unit. Without careful planning, according to the manuals, commanders could find it difficult to maintain the required operational tempo. Planners should allocate additional time for the operation if a deliberate breach or a bypass of a UXO hazard area is required. Commanders should immediately report locations where unexploded submunitions have been or may be encountered. According to the manuals, “Immediate reporting is essential. UXO hazard areas are lethal and unable to distinguish between friend and foe.” After reporting hazardous areas, commanders should carefully coordinate with other units to keep the UXO from restricting or impeding maneuver space while at the same time reducing the risk of fratricide. Such areas should be accurately tracked and marked. When describing the need for improved procedures, DOD’s UXO manuals state, “Currently no system exists to accurately track unexploded submunitions to facilitate surface movement and maneuver.” DOD now highlights staff responsibilities for joint force planning, reporting, tracking, and disseminating UXO hazard area information and tactics, techniques, and procedures for units transiting or operating within a UXO hazard area. For example, the joint force engineer is responsible for maintaining the consolidated minefield records and historical files of UXOs, minefields, and other obstacles. The manuals conclude that “Properly integrated, these procedures will save lives and reduce the impact of UXO on operations.” Some of the suggested procedures are as follows: Coordination between component commanders and the joint force commander may be required before the use of submunitions by any delivery means. Units should bypass UXO hazard areas if possible. When bypassing is not feasible, units must try to neutralize the submunitions and scatterable mines. Combat units that have the assets to conduct an in-stride breach can do so. Extraction procedures resemble in-stride breach or clearing procedures. Dismounted forces face the greatest danger of death or injury from UXO. Unexploded ordnance is a significant obstacle to dismounted forces. Dismounted forces require detailed knowledge of the types and locations of submunitions employed. The chance of significant damage to armored vehicles, light armored vehicles, and other wheeled armored vehicles is relatively low. Personnel being transported by unarmored wheeled vehicles face nearly the same risk from UXO as dismounted forces. The protection afforded by unarmored wheeled vehicles is negligible. Air assault and aviation forces are also at risk from UXO. Aircraft in defilade, flying nap-of-the-earth, or hovering in ground effect are vulnerable to submunitions. Certain submunitions are sensitive enough to function as a result of rotor wash. DOD has issued manuals that alert U.S. forces to the threat of UXO and identify procedures to mitigate risks. For example, Field Manual 20-32 states that “Mine awareness should actually be entitled mine/UXO awareness.
If only mines are emphasized, ordnance (bomblets, submunitions) may be overlooked, and it has equal if not greater killing potential.” Despite this recognition, DOD officials have not indicated to us that they plan to require commanders to report and track all potentially hazardous nonland-mine submunition dudfields and to mark them when appropriate, as is now required for scatterable submunition minefields. Because it was beyond the scope of this report, we did not assess DOD’s post-Gulf War implementation of doctrinal and procedural measures to minimize UXO-caused fratricide, maneuver limitations, and other effects. In 1994, the Army formed an Unexploded Ordnance Committee after the commanding general of the Army’s Training and Doctrine Command expressed concern about the large number of submunition duds remaining on the battlefield after the Gulf War. The commanding general sent a message to the Army’s leadership that stated, “This is a force protection issue. Based on number of submunitions employed during ODS [Operation Desert Storm], dud rate of only two percent would leave about 170K-plus unexploded Army submunitions restricting ground forces maneuver. Add in other services’ submunitions and scope of problem mushrooms. . . . Need to reduce hazards for soldiers on future battlefields from own ordnance.” As one of the Army’s efforts to reduce the dud rates of these submunitions, the commander directed that all future requirements documents for submunitions specify a hazardous dud rate of less than 1 percent. The committee’s work also resulted in calculations of the cost of retrofitting or replacing the Army’s submunition stockpile to lower hazardous dud rates and the relative costs of cleaning UXO from a battlefield. The Army estimated in 1994 that the cost would be about $29 billion to increase submunition reliability by retrofitting or replacing submunitions to add self-destruct fuzing for the nearly 1 billion submunitions in the Army stockpile. In a different estimate, in 1996, the Army put the cost to retrofit the stockpile at $11 billion to $12 billion. The Army also estimated lesser costs to retrofit or procure submunitions with self-destruct fuzing for only those munitions most likely to be used, including those in unit basic ammunition loads and on pre-positioned ships. These Army cost estimates for equipping Army submunitions with self-destruct fuzing do not indicate whether they include the costs to similarly equip Air Force, Marine Corps, and Navy submunitions. Using actual CMS, Inc., costs to clean up UXO from the CMS sector of the Kuwaiti Gulf War battlefield, the Army also estimated that the cost to reduce the dud rate by adding self-destruct fuzes for the submunitions actually used on a battlefield was comparable to the cost to clean up duds left by unimproved submunitions. The Army further recognized that, while the costs of reducing and cleaning up duds may be similar, the detrimental battlefield fratricide and countermobility effects of duds also need to be considered, as well as humanitarian concerns. In 1995, DOD reported that its long-term solution to reduce UXO “is the ongoing efforts to incorporate self-destruct mechanisms in the DoD’s high density munitions which would limit further proliferation of unexploded ordnance on the battlefield.” DOD called the UXO detection and clearance problem “of enormous magnitude.” DOD has reported that it is taking actions to increase land mine and submunition reliability rates and reduce dud rates.
In a 2000 report to Congress, DOD summarized its overall approach to addressing UXO concerns. DOD stated in that report, “An analysis of the UXO problem concluded that UXO concerns are viable and, using existing weapons, the potential exists for millions of UXO.” The report further stated that the majority of battlefield UXO will result from submunitions that “are not equipped with self-destruct features” and thus “pose the greatest potential for UXO hazards.” Importantly, DOD’s approach to ammunition reliability improvement is to emphasize adding reliability to future procurements rather than fixing the existing stockpile. According to DOD’s 2000 report to Congress, “The Department does not plan to retrofit or accelerate the demilitarization of its current inventory of weapons containing submunitions that pose UXO hazards. Notwithstanding, the Department will monitor the Service submunition development programs to make sure that every effort is taken to develop a mechanism within the submunition that will increase its overall reliability, thus reducing the potential for UXO.” The report went on to state that DOD will also monitor future procurement programs to ensure that reprocured weapons containing submunitions are improved to increase their overall reliability. In addition to DOD actions aimed at controlling the UXO problem, the services have a number of procurement-related efforts in place to reduce or eliminate potential UXO from new purchases of ammunition. For example, in its 2000 report to Congress, DOD states, “The Army is in the process of producing new weapons that contain self-destruct mechanisms. In addition, the Army is considering developing requirements for new weapons systems aimed at controlling unexploded submunitions.” The report also states that Air Force and Navy munitions procurements likewise address reliability concerns. DOD has concluded in this report that “[w]hile it has been deemed infeasible to attempt to retrofit legacy weapons systems with self-destruct features, new and future submunition-based weapon systems for the Services have or will incorporate self-destruct features to contain the UXO problem.” In January 2001, the Secretary of Defense issued a memorandum directing the services to adhere to DOD policy on submunition reliability. This memorandum states, “Submunition weapons employment in Southwest Asia and Kosovo, and major theater war modeling, have revealed a significant unexploded ordnance (UXO) concern . . . . It is the policy of the DoD to reduce overall UXO through a process of improvement in submunition system reliability—the desire is to field future submunitions with a 99% or higher functioning rate.” The memorandum did accept lower functioning rates under operational conditions due to environmental factors such as terrain and weather. The memorandum allows the continued use of current lower reliability munitions until superseded by replacement systems. Because it was beyond the scope of this report, we did not assess DOD’s actions to increase ammunition reliability and reduce dud rates. At least in part because the Gulf War took place over a decade ago, DOD reported that many records on the U.S. use of land mines and U.S. casualties had been destroyed, were lost, were incomplete, conflicted with each other, or were archived and not easily accessed. Resulting inconsistencies and gaps in data provided to us by the services and DOD on U.S.
Gulf War land mine use, casualties, and lessons learned required that we perform extensive cross-checking and comparisons to check facts and identify associated themes. To create a picture of what happened during the Gulf War, DOD assisted us in obtaining available records and documents from various DOD sources in many different locations. We relied heavily on original service casualty reports as well as service and DOD after-action and lessons-learned reports written soon after the Gulf War. At our request, the Army conducted a reevaluation of original Gulf War casualty data and arrived at more exact data on the causes and circumstances of Army-reported casualties. Our resulting compilation of service data used in calculating U.S. usage of land mines, U.S. casualties, and lessons learned during the Gulf War is the most complete assembled to date for the topics in this report. DOD officials believe that the service-provided information on land mine usage and casualties shown in this report is as accurate as service records permit. DOD, the Joint Chiefs of Staff, and the services confirmed the accuracy of the information they provided us on casualties and land-mine use and the information included in DOD lessons-learned and after-action reports. To obtain information on land mine issues, we reviewed numerous reports and analyses of land mines by such organizations as the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics); the Center for Army Analysis; the National Academy of Sciences; Lawrence Livermore National Laboratory; the Army Training and Doctrine Command; and the Congressional Research Service. No one DOD or service office maintained complete records on the Gulf War, and existing DOD and service records were stored in various locations around the country. For example, the Headquarters of the U.S. Central Command, which had directed the war, retained no records of the war, and the services had no central repositories for the Gulf War documentation we sought. We therefore visited the following locations to obtain all available detailed descriptions of land mine systems, the doctrine governing their use, documents and records on Gulf War land mine usage and effectiveness, and historical records on the Gulf War: Office of the Project Manager for Mines, Countermine and Demolitions, and Close Combat Systems, U.S. Army Program Executive Office for Ammunition, Picatinny Arsenal, New Jersey; U.S. Army Communications-Electronics Command, Night Vision and Electronic Sensors Directorate, Fort Belvoir, Virginia; Headquarters, U.S. Central Command, MacDill Air Force Base, Florida; U.S. Army Engineer Center, Fort Leonard Wood, Missouri; U.S. Army Field Artillery Center, Fort Sill, Oklahoma; Naval Explosive Ordnance Disposal Technology Division, Indian Head, Maryland; Marine Corps History and Museums, Headquarters, U.S. Marine Corps; Marine Corps Combat Development Center, Capability Assessment Branch, Quantico, Virginia; Army Center of Military History, Fort McNair, Washington, D.C.; and Air Force Headquarters, Washington, D.C. To determine the extent to which land mines and unexploded ordnance caused U.S. casualties, we gathered data from the services and consulted original casualty reports. Because DOD data was not sufficiently detailed to allow identification of land mine or related casualties, we used the services’ more detailed data.
In collaboration with service officials, we reconciled inconsistencies in order to identify the most authoritative data available for casualties. We visited or received information on Gulf War casualties from the following locations: Army Records Management Declassification Agency, Springfield, Virginia; Army Safety Center, Ft. Rucker, Alabama; U.S. Marine Corps Casualty Section, Quantico, Virginia; Army Casualty Office, Washington, D.C.; U.S. Air Force Personnel Center, Casualty Branch, Randolph Air Force Base, San Antonio, Texas; U.S. Navy Casualty Division, Millington, Tennessee; and Office of the Secretary of Defense’s Directorate for Information Operations and Reports, Arlington, Virginia. Lessons-learned and after-action reports and documents on the Gulf War were similarly not available in a central location but rather were located in various service organizations and libraries. Therefore, to identify concerns expressed in these reports about the use of land mines and related unexploded ordnance issues, we visited and examined documents at the following locations: Center for Army Lessons Learned, Ft. Leavenworth, Kansas; Army Training and Doctrine Command’s Analysis Center, Ft. Leavenworth, Kansas; U.S. Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, Maryland; U.S. Naval Historical Center, Washington Navy Yard, Washington, D.C.; U.S. Air Force Historical Research Agency, Maxwell Air Force Base, Alabama; Combined Arms Research Library, Ft. Leavenworth, Kansas; U.S. Air Force Headquarters, Washington, D.C.; and Marine Corps Combat Development Center, Quantico, Virginia. To identify U.S. policy on the U.S. use of land mines during the Gulf War, we interviewed or obtained documentation from DOD and service officials in Washington, D.C. These included officials from the Office of the Joint Chiefs of Staff; the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Office of the Deputy Assistant Secretary for Peacekeeping and Humanitarian Assistance, Assistant Secretary of Defense (Special Operations and Low-Intensity Conflict); the Army Office of the Deputy Chief of Staff for Operations and Plans, Strategy, Plans and Policy Directorate; the Office of the Deputy Chief of Staff for Logistics, Army Headquarters; and service headquarters officials of the Air Force, Marine Corps, and Navy. To obtain detailed information on the U.S. policy concerning the use of land mines during the Gulf War, we interviewed the U.S. commander-in-chief of all forces participating in the Gulf War. To obtain details on what ordnance was found on the battlefield after the Gulf War, we interviewed in person or by telephone seven former employees or officials of Conventional Munitions Systems (CMS), Inc. These persons were all retired U.S. military service members, ranking from major general to sergeant first class, and all but one had extensive experience in ordnance and explosive ordnance disposal. We confirmed with each CMS interviewee that they believed the CMS data reported to the Army were accurate. We did not examine the evidence CMS used to prepare the report it produced under contract to the Army. To discuss U.S. policy and legal issues related to land mines, we interviewed officials from the Department of State’s Office of the Legal Adviser, Office of International Security Negotiations, and Office of Humanitarian Demining Programs. In addition, we discussed the major topics and themes in this report with an official from the State Department’s Bureau of Political-Military Affairs.
We conducted our review between June 2001 and September 2002 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated September 12, 2002. 1. We have deleted from the report the example of Gator land mine use against an aircraft on an airfield. 2. We have changed the report to clarify the fact that Scud transporters were targeted rather than the Scud missiles they carried. 3. In conducting our review, we consulted these and other reports, as we state in our objectives and scope and methodology sections. We cite the National Research Council’s report in appendix IV. However, because it was beyond the scope of our report to evaluate land mine policy and program alternatives, which is the general subject of these reports, we do not discuss them in detail.
The utility of land mines on the modern battlefield has come into question in recent years, largely because of their potential for causing unintended casualties and affecting U.S. forces' maneuverability. These concerns were raised during the Persian Gulf War. U.S. land mines of all types--nonself-destructing and self-destructing, antipersonnel and antitank--were available for use if needed in the Gulf War from U.S. land mine stockpiles, which contained 19 million land mines. U.S. forces sent to the Gulf War theater of operations took with them for potential use over 2.2 million land mines. U.S. war plans included plans for the use of land mines if required by the tactical situation. According to Department of Defense (DOD) documents, no nonself-destructing, or "dumb," land mines were used, and the reported number of self-destructing, or "smart," land mines used by the services totaled approximately 118,000. DOD did not provide information on the effect of U.S. land mine use against the enemy. According to U.S. service records, of the 1,364 total U.S. casualties in the Gulf War, 81, or 6 percent, were killed or injured by land mines. Concerns about land mines raised in DOD lessons-learned and other reports included the fear of fratricide and loss of battlefield mobility. These concerns led to the reluctance of some U.S. commanders to use land mines in areas that U.S. and allied forces might have to traverse.
In October 1991, DOD implemented the Fund, which consolidated the nine existing industrial and stock funds operated by the military services and DOD, as well as the Defense Finance and Accounting Service (DFAS), the Defense Industrial Plant Equipment Service, the Defense Commissary Agency, the Defense Reutilization and Marketing Service, and the Defense Technical Information Service. The Fund’s primary goal is to focus the attention of all levels of DOD management on the total costs of carrying out certain critical DOD business operations and to manage those costs effectively. This goal is in accordance with the objectives of the National Performance Review (NPR), which is aimed at achieving cost efficiencies in the federal government. The Fund is modeled after businesslike operations in that it maintains a contractual (buyer-seller) type of relationship with its customers, primarily the military services. In fiscal year 1995, the Fund will have estimated revenue of $77 billion, which would make it equivalent to one of the largest corporations in the world. The Fund provides such essential goods and services as the (1) overhaul of ships, tanks, and aircraft and (2) sale of over 5 million types of vital inventory items, such as landing gear for aircraft. Many of these services are essential to maintaining the military readiness of our country’s weapon systems. Unlike a private sector enterprise, which has a profit motive, the Fund is to operate on a break-even basis by recovering the current costs incurred in conducting its operations.
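As a minimal sketch of the break-even concept, with purely illustrative numbers rather than actual Fund data: prices are set so that projected revenue equals projected cost, and any mismatch between projections and actual results shows up as the year's gain or loss.

```python
# Break-even rate setting with illustrative numbers (not actual Fund data).
projected_costs = 500_000_000    # projected cost of a business area for the year
projected_workload = 10_000_000  # projected units of output (e.g., direct labor hours)

# A break-even price recovers current costs exactly: no profit, no loss.
break_even_rate = projected_costs / projected_workload
print(f"break-even rate: ${break_even_rate:.2f} per unit")

# Actual results rarely match projections; the shortfall is the year's operating loss.
actual_costs = 520_000_000
actual_revenue = break_even_rate * projected_workload
print(f"operating result: ${actual_revenue - actual_costs:,.0f}")  # a $20 million loss here
```

It is shortfalls of this kind, accumulated over consecutive years, that DOD's pricing policy later passed on to customers as higher prices, a practice discussed below.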
We performed our work at the Office of the Secretary of Defense (Comptroller); the Departments of the Army, Navy, and Air Force; the Defense Logistics Agency (DLA); DFAS headquarters; and the Cleveland, Columbus, Denver, and Indianapolis DFAS Centers. Our review was performed from July 1994 through February 1995, in accordance with generally accepted government auditing standards. We discussed the facts, conclusions, and recommendations in our report with cognizant DOD officials and have incorporated their comments where appropriate. DOD has made some progress in addressing the problems that have plagued the Fund’s operations. During the past 16 months, DOD has made decisions such as developing and approving Fund policies, revising the format of the Fund’s financial reports, and selecting 12 interim migratory financial systems to account for the Fund’s resources. However, we disagree with DOD’s assessment in its February 1, 1995, progress report on the Fund that it had made “tremendous progress in rectifying or reducing the problems in the Plan.” Little improvement has yet been made in the actual day-to-day operations of the Fund. Since the Fund began operations in October 1991, it has not been able to meet its financial goal of operating on a break-even basis. The reported loss for fiscal year 1994 will mark the third consecutive year of operating losses. A key element in reducing the cost of operations is the ability to accurately identify total costs. However, DOD lacks the management tools to accomplish this task. Specifically, DOD has not (1) developed a process to ensure the Fund’s policies are consistently implemented, (2) improved the accuracy and reliability of the Fund’s systems, (3) improved the Fund’s monthly financial reports, (4) adequately managed the Fund’s cash, and (5) developed performance measures and goals. These problems are discussed below. DOD has acknowledged that its failure to develop policies and procedures has been one of the most significant weaknesses of the Fund’s implementation. In its September 1993 plan, DOD identified several actions to address this problem. In December 1994, DOD completed and issued the Financial Management Regulation (FMR) on the Fund, which contains the Fund’s financial policies. This regulation is a major step toward standardizing the Fund’s operations. It consolidates all policies of the old industrial and stock funds, changes that have been made to those policies, and new policies that have been issued since the Fund was established. However, confusion exists within DOD regarding the need for implementing procedures to accompany the various policies that have been issued. As a result, the five policies that the Deputy Comptroller for Financial Systems stated were to be effective October 1, 1994, had not been implemented throughout DOD as of January 1995. According to DFAS officials, they are responsible for developing implementing procedures for each Fund policy approved by the DOD Comptroller. DFAS’ position is supported by a memorandum from the Deputy Comptroller for Financial Systems that states that DFAS is developing accounting procedures as necessary to ensure the policies can be implemented. DOD Comptroller officials informed us that it was not necessary for DFAS headquarters to develop implementing procedures because the FMR already contains them. 
The officials further stated that DOD is trying to avoid the proliferation of procedures by DFAS and its field activities so that the Fund’s policies will not be implemented inconsistently as a result of different interpretations, as has happened in the past. In fact, the improvement plan notes that the Fund’s financial reports reflect the results of applying policies inconsistently across business areas. Given the October 1994 memorandum, other DOD documents on Fund policy implementation, and the opinion of DOD Comptroller officials, it is evident that DOD has not resolved this issue. Until it does so, the FMR may not be implemented in a timely manner. Given the immense size, complexity, and scope of the Fund’s $77 billion operation, the need to complete the development of the policies and consistently implement them is particularly acute. Until consistent implementation is achieved, the benefits of the new policies will not be realized. Without standard policies and procedures, Fund managers are forced to make their own interpretations regarding how to report on the respective operations of their business areas. Therefore, it is imperative that DOD put in place a process that provides a mechanism for ensuring that Fund policies are implemented in a timely, consistent manner. In addition to the implementation issues discussed above, we have concerns about the provisions in several Fund policies that DOD has developed and approved thus far. These concerns are highlighted below and discussed in detail in appendix I. DOD’s policy requires that prior year losses be recovered by increasing the prices charged customers. DOD increased fiscal year 1995 Fund prices by $1.7 billion to recover prior year losses. We have previously stated that the Fund, not the customers, should be required to request additional funds through the congressional appropriation process to recover losses. As part of the justification, DOD should explain differences such as the variances between the budgeted and actual results of operations for each business area. The explanation should also include the causes for the reported gain or loss and the actions being taken to avoid similar gains or losses in the future. Our approach would give the Congress an opportunity to review the Fund’s operations, determine if additional funds are actually needed, and evaluate the effectiveness of DOD’s management of the Fund. It would also provide a strong incentive to properly set prices and would help focus attention on the current costs of operations. DOD’s policy allows for two methods of recognizing revenue for the depot maintenance area: the completed contract method and the percentage of completion method. We disagree with the use of the completed contract method when work done on an order crosses fiscal years. Using the completed contract method for such orders defers the recognition of revenue and related expenses to the period in which the order is completed. Therefore, the financial reports distort the financial results of operations for each fiscal year. The military personnel policy provides that the cost of military personnel will be set at the civilian equivalent rate, not at the actual cost of military personnel. This practice will understate total military personnel costs since the civilian equivalency rate is 23 percent less than the military personnel cost.
This practice also will not further the objectives of the NPR, which stipulates that the full costs be included in the prices that providers, such as the Fund, charge customers so that the total cost of what the government produces can be determined. However, the National Defense Authorization Act for Fiscal Year 1994 directs DOD to base its recovery of military personnel costs on something other than actual costs. We believe the act should be amended to provide for recovering the full cost of military personnel. The policy on management headquarters costs does not specify which costs should be allocated in the prices the Fund charges customers. Because DOD has a complex organization structure—including activities such as headquarters, major commands, and depots—the military services and DOD components could interpret the guidance differently. One of the primary challenges still confronting DOD is the improvement and standardization of the Fund’s financial systems. Currently, about 80 disparate and unlinked systems are producing accounting data. DOD has stated that it needs to apply adequate funds, personnel, and time to solve existing problems with its financial systems, which continue to produce inaccurate, inconsistent, and untimely reports on the Fund’s operations. Systems that produce credible cost data are essential for the successful operation of the Fund. The ability to charge Fund customers the total cost of operations is predicated upon the assumption that the total costs are known. Accurate cost data are also critical in order to develop systematic means to reduce the cost of operations. Over the past 16 months, DOD has made some progress toward accomplishing the actions outlined in the plan. Specifically, DFAS developed functional and technical requirements for the Fund’s financial systems, completed the evaluation of 28 systems nominated by the military services and DOD components, and recommended interim migratory systems. However, DOD had neither developed conversion plans and procedures nor begun implementing the interim migratory systems for the Fund’s operations by December 31, 1994, as called for in its plan. Based on its analyses, DFAS recommended 17 systems as the Fund’s interim migratory systems. The highest score a system could receive was 100, and a score of 75 was needed to meet minimum Fund functional and technical requirements. None of the 17 systems received the minimum functional score of 75; their functional scores ranged from 22 to 60. Only 3 systems received the minimum technical score of 75. According to DOD’s preliminary estimate, it will cost $94.5 million to enhance the 17 systems to meet the minimum Fund functional requirements. However, this estimate does not include the following significant costs: (1) improvements needed to meet the minimum technical requirements, (2) data conversion from the existing systems to the interim migratory systems, (3) development of interfaces with nonfinancial systems, such as logistics and personnel, that generate most of the financial data, (4) training of personnel who will operate and enter data into the interim migratory systems, and (5) replacement of 63 existing systems with the interim migratory systems. DOD acknowledges that these costs will probably be higher than the estimated costs to enhance the systems’ functionality. However, as of February 1995, DOD had not completed a functional economic analysis for the interim migratory systems, which was to have been completed by March 31, 1994.
Although DOD stated that the analysis should be completed before a system is selected, DOD had already selected 12 of the 17 systems. To obtain this critical information on the interim migratory systems, on December 19, 1994, the Under Secretary of Defense (Comptroller) required that a functional economic analysis be prepared for the depot maintenance and transportation business areas and a cost analysis be prepared for the other business areas before the expenditure of funds to enhance the systems is authorized. The analyses were to be completed by the end of March 1995. However, when we discussed our report with DOD officials, they stated that such an enormous task could not be completed by that date and that they had not determined when the analyses would be completed. Given the relatively low scores the systems received and the magnitude of the total cost to upgrade them to meet the minimum functional and technical requirements and implement the selected systems, we believe a functional economic analysis should be performed for each of the systems selected as a Fund interim migratory system. Meaningful and reliable financial reports are essential to enable the Congress to exercise its oversight responsibilities. Reliable financial reports are also imperative for DOD management to make informed decisions on the Fund’s operations and set realistic prices to charge customers. DOD has acknowledged several times that the Fund’s financial reports are inaccurate. Accordingly, DOD’s plan for improving Fund operations identifies a number of actions aimed at improving the accuracy and usefulness of the Fund’s financial reports. One of these actions was to revise the Fund’s Monthly Report of Operations—the 1307 report. Although DOD’s February 1, 1995, report stated that this action had been completed, we do not agree. DOD issued a revised 1307 report format in September 1994 that provides for a monthly income statement, balance sheet, and cash flow statement, as we have previously suggested. The new reporting format was to be used for the first time in reporting the December 1994 results of operations. However, the December data for the Air Force, Navy, Marine Corps, Transportation Command, and the Defense Information Systems Agency was reported in the old format. When we discussed this report with DOD officials, they stated that these activities had been directed to submit the December 1994 financial data in the new report format. Improving the accuracy of this report will require that DFAS headquarters, the DFAS Centers, and the DOD components work together and agree on the actual source of information to be used to produce the financial reports. Officials responsible for completing the new 1307 report at several DFAS Centers told us that the report could not be properly prepared because the current financial systems did not contain or accumulate all the necessary data. As a result, the Centers had to use manual workarounds, in some cases, to obtain the necessary data. Some Center officials were doubtful that all the data could be obtained. The officials stated that because of the variety of different data sources that will be used as a result of the workarounds, the 1307 data will not be consistent and, therefore, not comparable between similar business areas. In addition, manual workarounds increased the chance that errors could occur through the transposition of numbers. In part, this problem resulted from the Fund’s systems’ use of 15 different general ledgers.
DOD issued crosswalks that translate the 15 different general ledgers in use for the Fund to the DOD Standard General Ledger. The crosswalks were issued with the requirement that they be used in the formulation of the Fund’s financial reports, including the 1307 report. However, the crosswalks contained general ledger accounts for which there were no corresponding accounts in the Centers’ current systems’ general ledger structures. The converse is also true: the Centers’ current systems’ general ledger structures contained accounts for which no corresponding account existed in the crosswalks. The Centers had not received guidance regarding what to do in these situations. Because of this, officials at one DFAS Center told us they “have no plans to use the crosswalks anytime soon.” We do not consider this action complete until DOD can demonstrate that the revised 1307 report is giving DOD management and the Congress accurate, reliable, and consistent financial information on the results of the Fund’s operations. Since March 1993, we have reported that the Fund’s financial reports are error prone and cannot be relied upon for decision-making purposes. Further, because of significant deficiencies in internal controls, the DOD Inspector General was unable to express an opinion on the Fund’s fiscal year 1992 and 1993 financial statements in performing the audits required by the CFO Act. After approximately 3 years of operating the Fund, DOD is still experiencing difficulty in preparing accurate reports on the Fund’s operations. For example, the Fund’s Army supply management business area reported a fiscal year-end 1994 operating loss of $8.5 billion on a program that had revenue of $7 billion. After we discussed the loss with officials at the DFAS-Indianapolis Center, it was determined that a clerical error of about $6 billion had been made on the fiscal year-end 1994 financial report for Army supply management, and the report was revised to show a fiscal year 1994 loss of over $2.6 billion. Our analysis of the Fund’s fiscal year 1994 monthly financial reports disclosed numerous other instances in which the reports were inaccurate. Because of these reporting problems, DOD cannot be certain (1) of the actual operating results for the Fund or (2) whether the prices the Fund will charge its customers are reasonable. Until DOD and the Fund can achieve an integrated financial management system, financial reports will continue to be error prone. These problems are discussed in appendix II. Since the Fund was established, its cash balance has been centrally managed by the Office of the Secretary of Defense (Comptroller). On February 1, 1995, DOD returned the management of the Fund’s cash and related Antideficiency Act limitations to the military service and DOD component level. When we discussed our report with DOD officials, they stated that the policy had been changed to better align accountability and responsibility for cash management. However, there is no assurance that this policy change will enhance DOD’s cash accountability. This policy change is a major departure from the benefits of a single cash balance that DOD cited in establishing the Fund. In our prior testimony, we pointed out that by consolidating the cash balances of the old industrial and stock funds, DOD reduced the Fund’s cash requirement needs by several billion dollars.
DOD’s action may increase the Fund’s cash requirements and, therefore, increase the need for appropriated funds to implement the change or for the continued advance billing of customers for goods and services they are to receive. In justifying the single cash balance, DOD had stated that “the single DBOF fund balance provides flexibility during execution necessary to absorb varying financial conditions. Combining previously separate appropriations into one account, united by support and business function aspects, allows lower total fund balances through a form of self-insurance not previously available. Operating fund balances for the former stock and industrial funds are significantly reduced in this budget.” In February 1992, DOD again reiterated the benefits of maintaining a single cash balance for the Fund. Further, the September 1993 plan specifically provided that the Antideficiency Act controls would remain at the Office of the Secretary of Defense (Comptroller) level. DOD has been experiencing a cash shortage problem since about June 1993. Because of this shortage, the DOD Principal Deputy Comptroller directed in June 1993 that all depot maintenance activities and selected research and development activities advance bill customers for goods and services to be provided. In July 1994, the Comptroller of Defense stopped the advance billing at all activities except the Naval shipyards and research and development activities. Although these remaining activities had been tentatively scheduled to stop advance billing in January 1995, they had not done so as of February 1995. As we have stated in the past, we believe that advance billing is a stopgap measure and not a sound business practice. The policy change placing the management of cash at the military service and DOD component level could increase the Fund’s cash needs, resulting in the possibility that other Fund activities will advance bill their customers to remain solvent. Further, officials at two DFAS Centers raised concerns that if the amount of cash returned to the Army and Air Force Fund business areas was not commensurate with their normal operating needs, they could have a negative cash balance in the near future, resulting in a violation of the Antideficiency Act. In addition, in a November 1994 memorandum to the Principal Deputy Under Secretary of Defense (Comptroller), the Assistant Secretary of the Navy (Financial Management) pointed out that the Fund was “not in a healthy financial condition” and that the Navy would have to determine the cash requirements of operating its portion of the Fund and the amount of advance billing necessary to achieve this cash requirement. Performance measurements are a valuable tool for managers because they provide information on an organization’s operations. Managers can use the data that performance measures provide to help them account for past activities, manage current operations, assess progress toward planned objectives, or better justify budget requests to the Congress and show the impact that budgetary decisions have on an entity’s operations. However, performance measures and goals are useful as a management tool only if management makes a commitment that supports their use. One of the actions in its plan that DOD reported as complete calls for the DOD CFO to “re-energize” performance measure development.
In our March 9, 1994, report, we pointed out that DOD had (1) included performance measures in the Fund’s fiscal year 1994 annual operating budgets and (2) begun developing the corresponding goals for some business areas, such as DLA supply management and distribution depots. However, almost a year later, DOD had developed only 14 goals for the Fund’s 69 performance measures. For example, for the Navy Fund business areas, DOD had identified 25 performance measures but had developed only 4 corresponding goals as of February 1995. When we discussed the report with DOD officials, they stated that with the passage of the Government Performance and Results Act of 1993, they were devoting their efforts to developing performance measures to accomplish the objectives of the act. DOD also stated that the performance measures developed for the Fund were “one-liners” and were not sufficient to meet the criteria set forth in the act. DOD further noted that one of its pilot programs under the act is the Defense Logistics Agency, which is a Fund activity. DOD faces formidable challenges in resolving the Fund’s problems. However, many of these problems, such as inadequate systems, are the result of years of neglect and date from the old industrial and stock funds. As we have pointed out and DOD has recognized, the Fund’s financial systems cannot produce accurate and reliable information on the results of the Fund’s operations. Until these antiquated systems are eliminated, (1) the infrastructure costs of maintaining multiple systems for the same purpose will continue and (2) DOD decisionmakers and the Congress will continue to receive inaccurate and unreliable information on the Fund’s results of operations. Also, the recent decision to devolve cash management abandons one of the goals of the Fund. Further, DOD can reduce the costs of operations only if it is more conscious of operating costs and makes fundamental improvements in the way it conducts business. Although the Fund is to operate on a break-even basis, it has not been able to meet this financial goal. Fiscal year 1994 marked the third consecutive year of reported losses. If top management does not place a priority on reversing this trend, the status quo will be perpetuated and potential savings from the Fund will not be realized. We recommend that the Congress enact legislation to require that the Fund’s prices recover the full costs of using military personnel in providing goods and services and prohibit DOD from including amounts in the Fund’s prices for recovering prior year losses. We further recommend that the Under Secretary of Defense (Comptroller) ensure that a functional economic analysis is prepared for each of the recommended Fund interim migratory systems prior to authorizing the expenditure of funds to enhance and implement the systems; reverse the decision to transfer the management of the Fund’s cash to the military services and DOD components; develop a systematic process to ensure the uniform implementation of the Fund’s policies; revise the revenue recognition policy to require that the percentage of completion method be used for work done on orders that cross fiscal years; and clarify the management headquarters policy to specifically identify the costs to be included in the prices.
We are sending copies of this report to the Secretary of Defense; the Director of the Office of Management and Budget; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight; and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-6240 if you or your staff have any questions concerning this report. Other major contributors to this report are listed in appendix III. DOD has issued the accounting and financial management policies to govern the operations of the Fund in the December 1994 DOD Financial Management Regulation, Volume 11B, Reimbursable Operations, Policy and Procedures—Defense Business Operations Fund. This is the first time that the Fund’s policies have been published in a single document. We disagree with the following five Fund policies: (1) increasing prices charged customers to recover prior year losses, (2) revenue recognition, (3) military personnel pricing, (4) mobilization, and (5) economic analysis for Fund capital investments. We also believe that the policy on management headquarters costs needs to be clarified. According to DOD’s pricing policy, prices will be increased to recover losses. For example, DOD increased fiscal year 1995 Fund prices by $1.7 billion to recover prior year losses. This policy is inconsistent with the basic tenet of the Fund—that prices should reflect the actual cost incurred in providing goods and services. It also diminishes the incentive for the Fund to operate efficiently and makes it difficult to evaluate and monitor the Fund’s status. Charging prices that reflect only the cost expected to be incurred for that period will enable DOD and the Congress to determine the cost of each year’s operations and measure the performance of the Fund’s activities for that period. DOD should be required to justify recovering prior year losses as part of the appropriation process. The justification should identify why a business area, such as depot maintenance, incurred a loss. For example, losses could occur because anticipated savings from productivity increases were not achieved. DOD’s policy standardizes the recognition of revenue throughout DOD by allowing two methods of recognizing revenue for the depot maintenance business area: The completed order method is used for all orders that have an estimated value of less than $1 million or a planned production cycle of less than 1 year. The percentage of completion method is used for all orders that have an estimated value of $1 million or more and a planned production cycle of 1 year or more. Although generally accepted accounting principles recognize both of these methods, they also specify the different circumstances under which it is appropriate to use one method or the other. Selecting the appropriate method is important because operating results reported in an entity’s financial reports can vary considerably depending upon which method is used. For example, in 1993 we reported that because Navy industrial fund activities used the completed contract method for recognizing revenue, the accumulated operating results at the end of fiscal year 1991 were understated by about $71 million. This understatement occurred because the activities deferred recognizing revenue and related expenses until the work was completed. 
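A minimal numeric sketch, using our own illustrative figures rather than Navy data, shows how the two methods diverge when an order crosses fiscal years:

```python
# Hypothetical $10 million depot maintenance order: work begins in year 1 and
# finishes in year 2, with 60 percent of the work performed in year 1.
# All figures are illustrative assumptions, not actual Fund data.
order_value = 10_000_000
work_fraction_by_year = {1: 0.60, 2: 0.40}

# Completed contract method: no revenue is recognized until the order is finished.
completed_contract = {1: 0, 2: order_value}

# Percentage of completion method: revenue follows the work actually performed.
percentage_of_completion = {
    year: order_value * fraction for year, fraction in work_fraction_by_year.items()
}

for year in (1, 2):
    print(f"year {year}: completed contract ${completed_contract[year]:>12,.0f} | "
          f"percentage of completion ${percentage_of_completion[year]:>12,.0f}")

# Under the completed contract method, year 1 reports no revenue for the
# $6 million of work performed, understating that year's operating results.
```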
In the past, we have supported the use of the percentage of completion method since income on long-term projects is recognized when the work is actually performed. In 1991, we reported that the percentage of completion method is required for long-term projects in order to present operating results more accurately for the period. Therefore, we believe that unless the work is completed within the fiscal year in which the order was accepted, the Fund should account for its revenue under the percentage of completion method. Under this policy, the cost of military personnel will be the civilian equivalent rate, not the military personnel rate. This policy will understate total military personnel costs since the civilian equivalency rate is less than the military personnel cost. DOD estimates that about 27,000 military personnel will be working in the Fund’s various business areas during fiscal year 1995. One objective of the National Performance Review (NPR) is to include the full costs in the prices that providers, such as the Fund, charge customers so that the total cost of what the government produces can be determined. Charging Fund customers the military rate for military personnel, rather than the civilian equivalent rate, would be more consistent with the full costing concept of the NPR and with the Fund’s basic intent. Our concerns about the pricing of military personnel also apply to the Fund’s mobilization policy. The intent of the Fund is to operate on a break-even basis for peacetime operations. However, some Fund activities incur costs to maintain a mobilization capability for combat situations, such as the costs to (1) maintain a surge capability and (2) procure and maintain approved war reserve levels. According to the Fund’s policy, mobilization efforts are to be funded separately outside the Fund prices charged customers. While we agree with most of the provisions contained in the mobilization policy, as discussed above, the military personnel costs should not be recorded at the civilian equivalency rate. DOD policy requires components to prepare an economic analysis for all Fund capital investment projects over $100,000. The analysis is to describe the need for the project, total project costs, and the savings expected over the life of the project. The analysis package for the project selection process should include the net present value, which is the difference between the discounted present value of benefits and the discounted present value of total costs; the “payback” period, which is the time necessary for an alternative to repay its investment cost; and the benefit-to-investment ratio, which is the total present value of benefits divided by the total present value of costs. The use of the payback period and the benefit-to-investment ratio is contrary to Office of Management and Budget criteria, recent GAO reports, and current economic literature, which advocate net present value as the appropriate criterion in choosing between competing investment projects. Net present value is favored over the other indicators because it more consistently results in the selection of projects with the greatest benefits, net of cost. Furthermore, DOD would not need any additional information, beyond that necessary to calculate payback and benefit-to-investment ratios, to calculate net present value.
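All three indicators can be computed from the same discounted cash-flow data, which is why net present value requires no additional information. A minimal sketch with assumed figures (the discount rate, outlay, and benefit stream are ours, purely for illustration):

```python
# Illustrative capital investment: a $1 million outlay at year 0 followed by
# five years of benefits. The discount rate and cash flows are assumptions.
RATE = 0.05
investment = 1_000_000
annual_benefits = [280_000] * 5  # benefits received in years 1 through 5

def present_value(amount: float, year: int, rate: float = RATE) -> float:
    """Discount a future amount back to year 0."""
    return amount / (1 + rate) ** year

pv_benefits = sum(present_value(b, yr) for yr, b in enumerate(annual_benefits, start=1))
pv_costs = investment  # the outlay occurs at year 0, so it needs no discounting

npv = pv_benefits - pv_costs                      # PV of benefits minus PV of costs
benefit_to_investment = pv_benefits / pv_costs    # PV of benefits over PV of costs
payback_years = investment / annual_benefits[0]   # years of benefits to repay the outlay

print(f"NPV: ${npv:,.0f}")
print(f"benefit-to-investment ratio: {benefit_to_investment:.2f}")
print(f"payback period: {payback_years:.1f} years")
```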
The Fund’s plan required clarification of the existing policy on including management headquarters costs in the prices the Fund charged its customers. This policy states that (1) each Fund activity, or group of activities, is under the management control of a designated DOD component, (2) the costs for discrete Fund management headquarters organizations, and parts of organizations that perform Fund management headquarters functions, should be financed by the Fund, and (3) significant costs for common support, such as general counsel and personnel, used by Fund activities should be allocated to the Fund, if feasible. Significant management headquarters costs are defined as those exceeding 1 percent of the total business area costs, or $1 million, whichever is greater. We agree that management headquarters costs should be included in the prices the Fund charges customers for the goods and services it provides. However, we do not believe that the two-paragraph policy clarifies which costs should be included in the Fund for DOD’s complex organizational structure, which includes depots, inventory control points, major commands, and headquarters components. As a result, the services and DOD components could interpret the policy differently. Procedures need to be developed so that this policy can be implemented consistently throughout DOD. Our analysis of the fiscal year 1994 financial reports disclosed numerous instances in which the reports were inaccurate and, therefore, of questionable value. Because the Fund’s financial management systems can neither provide complete and reliable financial data nor report accurately on the resources entrusted to its managers, Fund financial reporting and management at all levels have been impaired. Financial information requires constant analysis to ensure its validity. However, in many instances, DOD has allowed obviously erroneous data to remain in the accounting records, and these data are ultimately included in the Fund’s financial reports. Some examples follow. Significant differences exist between Fund disbursements reported in DOD’s Report of Budget Execution (the 1176 report) and those reported by Treasury. These differences represent disbursements that DOD cannot allocate to specific business areas or military services. As of September 30, 1994, the difference between the two sets of records was approximately $528 million. Previously, we reported a similar problem—the difference between the Fund disbursements reported by DOD and those reported by Treasury had been $558 million at September 30, 1992. The amount of revenue shown on DOD’s monthly 1307 report for the Navy’s supply management, distribution depots, and logistic support activities business areas is inaccurate. The inventory prices that the supply management business area charged customers include the revenue for the three business areas, and the revenue amount is not allocated to the specific business area that earned the money. As a result, the amount of revenue applicable to the distribution depots and logistic support activities is not shown on the monthly report of operations, resulting in these two activities showing a loss. On its 1307 report, the Navy shipyard business area showed negative revenues ranging from $178 million to $902 million for 4 consecutive months at the beginning of fiscal year 1994. According to DFAS-Cleveland officials, this occurred because they were required to close the work-in-process account at the end of fiscal year 1993 and reverse the entry at the beginning of fiscal year 1994.
As a result, the revenue for fiscal year 1993 was overstated by about $2.3 billion and fiscal year 1994 revenue was understated by the same amount. According to the Air Force’s 1994 fiscal year-end report on supply management operations, the value of inventories in transit was a negative $1.7 billion. The Air Force also reported negative balances for inventories in transit for 4 other months in fiscal year 1994. Since inventories in transit are items that, for various reasons, are being shipped from one location to another, a negative inventory-in-transit balance is an abnormal or misstated account balance indicating that (1) an error was made in recording inventory data or (2) a problem exists with the procedures used to account for inventory. DFAS has recognized that it cannot accurately account for Air Force inventories in transit. To remedy this situation, DFAS-Denver and the Air Force Audit Agency are working together to identify and correct the problem. On DOD’s monthly management report (the 1302 report), the amounts of Navy supply management accounts receivable and accounts payable at September 30, 1994, were a negative $336 million and a negative $625 million, respectively. These accounts normally have positive balances. According to officials at the DFAS-Cleveland Center, this occurred, in part, because the balances were reduced for undistributed collections and disbursements. Defense Budget: Capital Asset Projects Undergo Significant Change Between Approval and Execution (GAO/NSIAD-95-20, December 28, 1994). Letter to the Principal Deputy Comptroller (GAO/AIMD-94-159R, July 26, 1994). Defense Business Operations Fund: Improved Pricing Practices and Financial Reports Are Needed to Set Accurate Prices (GAO/AIMD-94-132, June 22, 1994). Financial Management: DOD’s Efforts to Improve Operations of the Defense Business Operations Fund (GAO/T-AIMD/NSIAD-94-170, April 28, 1994). Defense Management Initiatives: Limited Progress in Implementing Management Improvement Initiatives (GAO/T-AIMD-94-105, April 14, 1994). Financial Management: DOD’s Efforts to Improve Operations of the Defense Business Operations Fund (GAO/T-AIMD/NSIAD-94-146, March 25, 1994). Financial Management: Status of the Defense Business Operations Fund (GAO/AIMD-94-80, March 9, 1994). Letter to the Deputy Secretary of Defense (GAO/AIMD-94-7R, October 12, 1993). Financial Management: Opportunities to Strengthen Management of the Defense Business Operations Fund (GAO/T-AFMD-93-6, June 16, 1993). Financial Management: Opportunities to Strengthen Management of the Defense Business Operations Fund (GAO/T-AFMD-93-4, May 13, 1993). Letter to Congressional Committees (GAO/AFMD-93-52R, March 1, 1993). Financial Management: Status of the Defense Business Operations Fund (GAO/AFMD-92-79, June 15, 1992). Financial Management: Defense Business Operations Fund Implementation Status (GAO/T-AFMD-92-8, April 30, 1992). Defense’s Planned Implementation of the $77 Billion Defense Business Operations Fund (GAO/T-AFMD-91-5, April 30, 1991).
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) progress in implementing the Defense Business Operations Fund Improvement Plan, focusing on: (1) the policies essential to the Fund's operations; and (2) DOD's ongoing efforts to correct problems that hinder Fund operations. GAO found that: (1) DOD has no systematic process in place to ensure consistent implementation of the Fund's policies; (2) Fund managers lack guidance to execute daily Fund operations; (3) DOD lacks the financial systems necessary to provide for successful Fund operations; (4) DOD has been unable to improve the accuracy and reliability of its financial systems; and (5) DOD has reversed its cash management policy to return cash control to DOD components; however, there is no assurance that this policy change will enhance DOD cash accountability.
CT uses ionizing radiation and computers to produce cross-sectional images of internal organs and body structures. MRI uses powerful magnets, radio waves, and computers to create cross-sectional images of internal body tissues. NM uses radioactive materials in conjunction with an imaging modality to produce images that show both structure and function within the body. During an NM service, such as a PET scan, a patient is administered a small amount of radioactive substance, called a radiopharmaceutical or radiotracer, which is subsequently tracked by a radiation detector outside the body to render time-lapse images of the radioactive material as it moves through the body. Imaging equipment that uses ionizing radiation—such as CT and NM—poses greater potential short- and long-term health risks to patients than other imaging modalities, such as ultrasound. This is because ionizing radiation has enough energy to potentially damage DNA and thus increase a person’s lifetime risk of developing cancer. In addition, exposure to very high doses of this radiation can cause short-term injuries, such as burns or hair loss. To become accredited, ADI suppliers must select a CMS-designated accrediting organization, pay the organization an accreditation fee, and demonstrate that they meet the organization’s standards. As we noted in our May 2013 report, the accrediting organization fees vary. For example, as of January 2013, ACR’s accreditation fees ranged from $1,800 to $2,400 per unit of imaging equipment, while IAC’s fees ranged from $2,600 to $3,800 per application. While the specific standards used by accrediting organizations vary, MIPPA requires all accrediting organizations to have standards in five areas: (1) qualifications of medical personnel who are not physicians and who furnish the technical component of ADI services, (2) qualifications and responsibilities of medical directors and supervising physicians, (3) procedures to ensure that equipment used in furnishing the technical component of ADI services meets performance specifications, (4) procedures to ensure the safety of beneficiaries and staff, and (5) establishment and maintenance of a quality-assurance and quality control program. To demonstrate that they meet their chosen accrediting organization’s standards, ADI suppliers must submit an online application as well as required documents, which could include information on qualifications of personnel or a sample of patient images. MIPPA requires CMS to oversee the accrediting organizations and authorizes CMS to modify the list of selected accrediting organizations, if necessary. Federal regulations specify that CMS may conduct “validation audits” of accredited ADI suppliers and provide for the withdrawal of CMS approval of an accrediting organization at any time if CMS determines that the organization no longer adequately ensures that ADI suppliers meet or exceed Medicare requirements. CMS also has requirements for accrediting organizations. For example, accrediting organizations are responsible for using mid-cycle audit procedures, such as unannounced site visits, to ensure that accredited suppliers maintain compliance with MIPPA’s requirements for the duration of the 3-year accreditation cycle. 
According to CMS officials, five full-time staff are budgeted to oversee and develop standards for the ADI accreditation requirement. Since our report was issued in May 2013, CMS has begun to gather input from stakeholders on the development of national standards for the accreditation of ADI suppliers, which it intends to develop by the end of 2014. Medicare payment for the technical component of ADI services is intended to cover the cost of the equipment, supplies, and nonphysician staff and is generally significantly higher than the payment for the professional component. The payment for the professional component is intended to cover the physician’s time in interpreting the image and writing a report on the findings. Medicare reimburses providers through different payment systems depending on where an ADI service is performed. When an ADI service is performed in an office setting such as a physician’s office or IDTF, both the professional and technical components are billed under the Medicare physician fee schedule. Alternatively, when the ADI service is performed in an institutional setting, the physician can only bill the Medicare physician fee schedule for the professional component, while the payment for the technical component is covered under a different Medicare payment system, according to the setting in which the service is provided. For example, the technical component of an ADI service provided in a hospital outpatient department is paid under the hospital outpatient prospective payment system (OPPS). The use of imaging services grew rapidly during the decade starting in 2000—MedPAC reported that cumulative growth between 2000 and 2009 totaled 85 percent—although the rate of growth has declined in recent years. Growth in imaging utilization and expenditures—including those for ADI services—prompted action from Congress, CMS, and private payers. Congress has enacted legislation to help ensure appropriate Medicare payment for ADI services; in some cases, this legislation has had the effect of reducing Medicare payment for the technical component of certain imaging services, such as the following: The Deficit Reduction Act of 2005 required that, beginning January 1, 2007, Medicare payment for certain imaging services under the physician fee schedule not exceed the amount Medicare pays under the OPPS. The Patient Protection and Affordable Care Act (PPACA), as amended by the Health Care and Education Reconciliation Act of 2010 (HCERA) and the American Taxpayer Relief Act of 2012 (ATRA), reduced payment for the technical component of ADI services by adjusting assumptions, known as utilization rates, related to the rate at which certain imaging equipment is used. These changes had the effect of reducing payments for the technical component of ADI services beginning in January 2011, with additional reductions scheduled to take effect in 2014. CMS implemented additional changes to Medicare payment policy to help ensure appropriate payment for ADI services, which had the effect of reducing Medicare payment for certain imaging services.
CMS implemented additional changes to Medicare payment policy to help ensure appropriate payment for ADI services, which had the effect of reducing Medicare payment for certain imaging services. In January 2006 CMS began applying a multiple procedure payment reduction (MPPR) policy to the technical component of certain CT and MRI services, which reduces payments for these services when they are furnished together by the same physician, to the same patient, on the same day. Beginning in January 2012, CMS expanded the MPPR by reducing payments for the lower-priced professional component of certain CT and MRI services by 25 percent when two or more services are furnished by the same physician to the same patient, in the same session, on the same day. Private payers have also implemented policies designed to help control imaging utilization and expenditures. One such policy is the use of prior authorization, which can involve requirements that physician orders of imaging services meet certain guidelines in order to qualify for payment. Further, best practice guidelines, such as ACR's Appropriateness Criteria, as well as efforts to educate physicians and patients about radiation exposure associated with imaging, have been used to promote the appropriate use of imaging services. We found that the number of ADI services provided to Medicare beneficiaries in the office setting—an indicator of access to those services—began declining before and continued declining after the accreditation requirement went into effect on January 1, 2012 (see fig. 1). In particular, the rate of decline from 2009 to 2010 was similar to the rate from 2011 to 2012 for the CT, MRI, and NM (including PET) services in our analysis. These results suggest that the overall decline was driven, at least in part, by factors other than accreditation. For example, the number of CT services per 1,000 FFS beneficiaries declined by 9 percent between 2009 and 2010, 4 percent between 2010 and 2011, and 9 percent between 2011 and 2012. The percentage decline in the number of ADI services provided in the office setting was generally similar in both urban and rural areas during the period we studied, although we found that substantially more services were provided in urban areas than in rural areas (see fig. 2). The number of ADI services per 1,000 FFS beneficiaries provided in urban areas declined by 7 percent between 2011 and 2012, while the number of services provided in rural areas declined by 8 percent. In addition, 148 services were provided per 1,000 FFS beneficiaries in urban areas in 2012, as compared to 81 services per 1,000 beneficiaries in rural areas. One reason the use of ADI services in the office setting was relatively low in rural areas was that a smaller percentage of ADI services in these areas were provided in the office setting. Specifically, in 2012, about 14 percent of ADI services in rural areas were provided in the office setting, compared to 23 percent of ADI services in urban areas. See appendix I for trends in the number of urban and rural ADI services by modality. The effect of accreditation on access—as illustrated by our analysis of the trends in ADI services in the office setting—is unclear in the context of recent policy and payment changes as well as other factors affecting the use of imaging services. In particular, the decline in ADI services occurred amid the implementation in recent years of public and private policies to slow rapid increases in imaging utilization and spending. Factors, including public and private policies, that may have played a role in the decline in ADI service utilization include the following: Medicare payment reductions.
Reductions in Medicare payment may have contributed to the decline in ADI services between 2009 and 2012, as reduced fees may affect physicians' willingness to provide imaging services for Medicare beneficiaries. For example, PPACA and ATRA reduced payment for the technical component of ADI services by adjusting assumptions related to the rate at which certain imaging equipment is used. In addition, CMS implemented a 25 percent payment reduction for the professional component of certain CT and MRI services under the MPPR, effective January 1, 2012—the same date the accreditation requirement went into effect. Prior authorization. Studies have suggested that increased use of prior authorization policies among private payers in recent years has contributed to a decrease in ADI services provided to privately insured individuals. These policies may have had a spillover effect on Medicare, thus contributing to the decline in ADI services provided to Medicare beneficiaries from 2009 to 2012. Radiation awareness. Studies have suggested that increased physician and patient awareness of the risks associated with radiation exposure may have led to a decline in CT and NM services provided to Medicare beneficiaries. Most of the 19 accredited ADI suppliers we interviewed indicated that accreditation has not affected the number of ADI services they provide. Of the remaining two suppliers, one indicated that it was unsure whether accreditation has affected the number of services it provides, while the other indicated that accreditation may have led to a slight increase in the number of services it provides. Representatives of the two accrediting organizations noted, however, that the requirement can affect new suppliers, which cannot be paid for applicable ADI services while they complete the accreditation process. According to the representatives, IAC and ACR requested that CMS provide a provisional accreditation period for new suppliers that would allow them to obtain reimbursement for applicable ADI services while they undergo the accreditation process. According to CMS, it does not have the authority under MIPPA to provide provisional accreditation, as the statute only allows accredited suppliers to be paid for the technical component of ADI services beginning on January 1, 2012. We provided a draft of this report for review to the Department of Health and Human Services, and the agency stated that it had no comments. We are sending copies of this report to the Secretary of Health and Human Services and appropriate congressional committees. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Phyllis Thorburn, Assistant Director; William Black, Assistant Director; Priyanka Sethi Bansal; William A. Crafton; Richard Lipinski; Beth Morrison; Jennifer Whitworth; and Rachael Wojnowicz made key contributions to this report.
The Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) required that beginning January 1, 2012, suppliers that produce the images for Medicare-covered ADI services in office settings, such as physician offices, be accredited by an organization approved by CMS. MIPPA mandated that GAO issue two reports on the effect of the accreditation requirement. The first report, issued in 2013, assessed CMS's standards for the accreditation of ADI suppliers and its oversight of the accreditation requirement. In this report, GAO examined the effect the accreditation requirement may have had on beneficiary access to ADI services provided in the office setting. To do this, GAO examined trends in the use of the three ADI modalities—CT; MRI; and NM, including PET—provided to Medicare beneficiaries from 2009 through 2012 that were subject to the ADI accreditation requirement. GAO also interviewed CMS officials; representatives of the Intersocietal Accreditation Commission and the American College of Radiology, the two CMS-approved accrediting organizations that accounted for about 99 percent of all accredited suppliers as of January 2013; and 19 accredited ADI suppliers that reflected a range of geographic areas, imaging services provided, and accrediting organizations used. In addition, GAO reviewed relevant literature to understand the context of any observed changes in ADI services throughout the period studied. GAO found that the number of advanced diagnostic imaging (ADI) services provided to Medicare beneficiaries in the office setting—an indicator of access to those services—began declining before and continued declining after the accreditation requirement went into effect on January 1, 2012. In particular, the rate of decline from 2009 to 2010 was similar to the rate from 2011 to 2012 for magnetic resonance imaging (MRI); computed tomography (CT); and nuclear medicine (NM), including positron emission tomography (PET) services. These results suggest that the overall decline was driven, at least in part, by factors other than accreditation. The percentage decline in the number of ADI services provided in the office setting was generally similar in both urban and rural areas during the period GAO studied. The effect of accreditation on access is unclear in the context of recent policy and payment changes implemented by Medicare and private payers. For example, the Centers for Medicare & Medicaid Services (CMS) reduced Medicare payment for certain CT and MRI services, which could have contributed to the decline in the number of these services. Officials from CMS, representatives from the accrediting organizations, and accredited ADI suppliers GAO interviewed suggested that any effect of the accreditation requirement on access was likely limited. The Department of Health and Human Services stated that it had no comments on a draft of this report.
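The access measure used in this report is a simple rate: counts of ADI services divided by Medicare FFS enrollment, scaled to 1,000 beneficiaries, and compared year over year. A minimal sketch of that computation follows; the service and enrollment counts are invented for illustration and are not GAO's data.

services = {2009: 5_200_000, 2010: 4_730_000, 2011: 4_540_000, 2012: 4_130_000}
beneficiaries = {2009: 35_000_000, 2010: 35_400_000,
                 2011: 35_900_000, 2012: 36_300_000}

# Services per 1,000 FFS beneficiaries, then year-over-year percent change.
rates = {year: services[year] / beneficiaries[year] * 1_000 for year in services}
years = sorted(rates)
for prev, curr in zip(years, years[1:]):
    change = (rates[curr] - rates[prev]) / rates[prev] * 100
    print(f"{prev}-{curr}: {rates[prev]:.1f} -> {rates[curr]:.1f} "
          f"per 1,000 ({change:+.1f}%)")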
Before September 2001, we and others had demonstrated significant, long-standing vulnerabilities in aviation security, some of which are depicted in figure 1. These included weaknesses in screening passengers and baggage, controlling access to secure areas at airports, and protecting air traffic control computer systems and facilities. To address these and other weaknesses, ATSA created the Transportation Security Administration and established security requirements for the new agency with mandated deadlines. Before September 2001, screeners, who were then hired by the airlines, often failed to detect threat objects located on passengers or in their carry-on luggage. Principal causes of screeners' performance problems were rapid turnover and insufficient training. As we previously reported, turnover rates exceeded 100 percent a year at most large airports, leaving few skilled and experienced screeners, primarily because of low wages, limited benefits, and repetitive, monotonous work. In addition, before September 2001, controls for limiting access to secure areas of airports, including aircraft, did not always work as intended. As we reported in May 2000, our special agents used fictitious law enforcement badges and credentials to gain access to secure areas, bypass security checkpoints at two airports, and walk unescorted to aircraft departure gates. The agents, who had been issued tickets and boarding passes, could have carried weapons, explosives, or other dangerous objects onto aircraft. DOT's Inspector General also documented numerous problems with airport access controls, and in one series of tests, nearly 7 out of every 10 attempts by the Inspector General's staff to gain access to secure areas were successful. Upon entering the secure areas, the Inspector General's staff boarded aircraft 117 times. The Inspector General further reported that the majority of the aircraft boardings would not have occurred if employees had taken the prescribed steps, such as making sure doors closed behind them. Our reviews also found that the security of the air traffic control computer systems and of the facilities that house them had not been ensured. The vulnerabilities we identified, such as not ensuring that contractors who had access to the air traffic control computer systems had undergone background checks, made the air traffic control system susceptible to intrusion and malicious attacks. The air traffic control computer systems provide information to air traffic controllers and aircraft flight crews to help ensure the safe and expeditious movement of aircraft. Failure to protect these systems and their facilities could cause a nationwide disruption of air traffic or even collisions and loss of life. Over the years, we made numerous recommendations to the Federal Aviation Administration (FAA), which, until ATSA's enactment, was responsible for aviation security. These recommendations were designed to improve screeners' performance, strengthen airport access controls, and better protect air traffic control computer systems and facilities. As of September 2001, FAA had implemented some of these recommendations and was addressing others, but its progress was often slow. In addition, many initiatives were not linked to specific deadlines, making it difficult to monitor and oversee their implementation. ATSA defined TSA's primary responsibility as ensuring security in all modes of transportation.
The act also shifted security-screening responsibilities from the airlines to TSA and established a series of requirements to strengthen aviation security, many of them with mandated implementation deadlines. For example, the act required the deployment of federal screeners at 429 commercial airports across the nation by November 19, 2002, and the use of explosives detection technology at these airports to screen every piece of checked baggage for explosives not later than December 31, 2002. However, the Homeland Security Act subsequently allowed TSA to grant waivers of up to 1 year to airports that would not be able to meet the December deadline. Some aviation security responsibilities remained with FAA. For example, FAA is responsible for the security of its air traffic control and other computer systems and of its air traffic control facilities. FAA also administers the Airport Improvement Program (AIP) trust fund, which is used to fund capital improvements to airports, including some security enhancements, such as terminal modifications to accommodate explosives detection equipment. Over the past 2 years, TSA and FAA have taken major steps to increase aviation security. TSA has implemented congressional mandates and explored options for increasing the use of technology and information to control access to secure areas of airports and to improve passenger screening. FAA has focused its efforts on enhancing the security of the nation's air traffic control systems and facilities. In ongoing work, we are examining some of these efforts in more detail (see app. IV). In its first year, TSA worked to establish its organization and focused primarily on meeting the aviation security deadlines set forth in ATSA, accomplishing a large number of tasks under a very ambitious schedule. In January 2002, TSA had 13 employees—1 year later, the agency had about 65,000 employees. TSA reported that it met over 30 deadlines during 2002 to improve aviation security. (See app. I for the status of mandates in ATSA.) For example, according to TSA, it

- met the November 2002 deadline to deploy federal passenger screeners at airports across the nation by hiring, training, and deploying over 40,000 individuals to screen passengers at 429 commercial airports (see fig. 2);
- hired and deployed more than 20,000 individuals to screen all checked baggage;
- has been using explosives detection systems or explosives trace detection equipment to screen about 90 percent of all checked baggage as of December 31, 2002;
- has been using alternative means such as canine teams, hand searches, and passenger-bag matching to screen the remaining checked baggage;
- confiscated more than 4.8 million prohibited items (including firearms, knives, and incendiary or flammable objects) from passengers; and
- has made substantial progress in expanding the Federal Air Marshal Service.

In addition, according to FAA, U.S. and foreign airlines met the April 2003 deadline to harden cockpit doors on aircraft flying in the United States. Not unexpectedly, TSA experienced some difficulties in meeting these deadlines and achieving these goals. For example, operational and management control problems, cited later in this testimony, emerged with the rapid expansion of the Federal Air Marshal Service, and TSA's deployment of some explosives detection systems was delayed. As a result, TSA had to grant waivers of up to a year (until Dec. 31, 2003) to a few airports, authorizing them to use alternative means to screen all checked baggage.
Recently, airport representatives with whom we spoke expressed concern that not all of these airports would meet the new December 2003 deadline established in their waivers because, according to the airport representatives, there has not been enough time to produce, install, and integrate all of the systems required to meet the deadline. To strengthen control over access to secure areas of airports and other transportation facilities, TSA is pursuing initiatives that make greater use of technology and information. For example, the agency is investigating the establishment of a Transportation Worker Identification Credential (TWIC) program. TWIC is intended to establish a uniform, nationwide standard for the secure identification of 12 million workers who require unescorted physical or cyber access to secure areas at airports and other transportation facilities. Specifically, TWIC will combine standard background checks and biometrics so that a worker can be positively matched to his or her credential. Once the program is fully operational, the TWIC card will be the standard credential for airport workers and will be accepted by all modes of transportation. According to TSA, developing a uniform, nationwide standard for identification will minimize redundant credentialing and background checks. Currently, each airport is required, as part of its security program, to issue credentials to workers who need access to secure, nonpublic areas, such as baggage loading areas. Airport representatives have told us that they think a number of operational issues need to be resolved for the TWIC card to be feasible. For example, the TWIC card would have to be compatible with the many types of card readers used at airports around the country, or new card readers would have to be installed. At large airports, this could entail replacing hundreds of card readers, and airport representatives have expressed concerns about how this effort would be funded. In April 2003, TSA awarded a contract to test and evaluate various technologies at three pilot sites. In addition, TSA has continued to develop the next-generation Computer Assisted Passenger Prescreening System (CAPPS II)—an automated passenger screening system that takes personal information, such as a passenger's name, date of birth, home address, and home telephone number, to confirm the passenger's identity and assess a risk level. The identifying information will be run against national security information and commercial databases, and a "risk" score will be assigned to the passenger. The risk score will determine any further screening that the passenger will undergo before boarding. TSA expects to implement CAPPS II throughout the United States by the fall of 2004. However, TSA's plans have raised concerns about travelers' privacy rights. It has been suggested, for example, that TSA is violating privacy laws by not explaining how the risk assessment data will be scored and used and how a TSA decision can be appealed. These concerns about the system will need to be addressed as it moves toward implementation. In ongoing work, we are examining CAPPS II, including how it will function, what safeguards will be put in place to protect the traveling public's privacy, and how the system will affect the traveling public in terms of costs, delays, and risks. Additionally, TSA has begun to develop initiatives that could enable it to use its passenger screening resources more efficiently.
For example, TSA has requested funding for fiscal year 2004 to begin developing a registered traveler program that would prescreen low-risk travelers. Under a registered traveler program, those who voluntarily apply to participate in the program and successfully pass background checks would receive a unique identifier or card that would enable them to be screened more quickly and would promote greater focus on those passengers who require more extensive screening at airport security checkpoints. In prior work, we identified key policy and implementation issues that would need to be resolved before a registered traveler program could be implemented. Such issues include (1) the criteria that should be established to determine eligibility to apply for the program, (2) the kinds of background checks that should be used to certify applicants' eligibility to enroll in the program and the entity that should perform these checks, (3) the security-screening procedures that registered travelers should undergo and the differences between these procedures and those for unregistered travelers, and (4) concerns that the traveling public or others may have about equity, privacy, and liability. Since September 2001, FAA has continued to strengthen the security of the nation's air traffic control computer systems and facilities in response to 39 recommendations we made between May 1998 and December 2000. For example, FAA has established an information systems security management structure under its Chief Information Officer, whose office has developed an information systems security strategy, security architecture (that is, an overall blueprint), security policies and directives, and a security awareness training campaign. This office has also managed FAA's incident response center and implemented a certification and accreditation process to ensure that vulnerabilities in current and future air traffic control systems are identified and weaknesses addressed. Nevertheless, the office faces continued challenges in increasing its intrusion detection capabilities, obtaining accreditation for systems that are already operational, and managing information systems security throughout the agency. In addition, according to senior security officials, FAA has completed assessments of the physical security of its staffed facilities, but it has not yet accredited all of these air traffic control facilities as secure in compliance with its own policy. Finally, FAA has worked aggressively over the past 2 years to complete background investigations of numerous contractor employees. However, ensuring that all new contractors are assessed to determine which employees require background checks, and that those checks are completed in a timely manner, will be a continuing challenge for the agency. Although TSA has focused much effort and funding on ensuring that bombs and other threat items are not carried onto commercial aircraft by passengers or in their luggage, vulnerabilities remain, according to aviation experts, TSA officials, and others. In particular, these vulnerabilities affect air cargo, general aviation, and airport perimeter security. For information on legislative proposals that would address these potential vulnerabilities and other aviation security issues, see appendix II. As we and DOT's Inspector General have reported, vulnerabilities exist in securing the cargo carried aboard commercial passenger and all-cargo aircraft.
TSA has reported that an estimated 12.5 million tons of cargo are transported each year—9.7 million tons on all-cargo planes and 2.8 million tons on passenger planes. Some potential security risks associated with air cargo include the introduction of undetected explosive and incendiary devices in cargo placed aboard aircraft; the shipment of undeclared or undetected hazardous materials aboard aircraft; and aircraft hijackings and sabotage by individuals with access to cargo aircraft. To address some of the risks associated with air cargo, ATSA requires that all cargo carried aboard commercial passenger aircraft be screened and that TSA have a system in place as soon as practicable to screen, inspect, or otherwise ensure the security of cargo on all-cargo aircraft. In August 2003, the Congressional Research Service reported that less than 5 percent of cargo placed on passenger airplanes is physically screened. TSA's primary approach to ensuring air cargo security and safety and to complying with the cargo-screening requirement in the act is the "known shipper" program—which allows shippers that have established business histories with air carriers or freight forwarders to ship cargo on planes. However, we and DOT's Inspector General have identified weaknesses in the known shipper program and in TSA's procedures for approving freight forwarders. Since September 2001, TSA has taken a number of actions to enhance cargo security, such as implementing a database of known shippers in October 2002. The database is the first phase in developing a cargo-profiling system similar to the Computer-Assisted Passenger Prescreening System. However, in December 2002, we reported that additional operational and technological measures, such as checking the identity of individuals making cargo deliveries, have the potential to improve air cargo security in the near term. We further reported that TSA lacks a comprehensive plan with long-term goals and performance targets for cargo security, time frames for completing security improvements, and risk-based criteria for prioritizing actions to achieve those goals. Accordingly, we recommended that TSA develop a comprehensive plan for air cargo security that incorporates a risk management approach, includes a list of security priorities, and sets deadlines for completing actions. TSA agreed with this recommendation and expects to develop such a plan by the fall of 2003. It will be important that this plan include a timetable for implementation and that TSA expeditiously reduce the vulnerabilities in this area. Since September 2001, TSA has taken limited action to improve general aviation security, leaving it far more open and potentially vulnerable than commercial aviation. General aviation is vulnerable because general aviation pilots are not screened before takeoff and the contents of general aviation planes are not screened at any point. General aviation includes more than 200,000 privately owned airplanes, which are located in every state at more than 19,000 airports. Over 550 of these airports also provide commercial service. In the last 5 years, about 70 aircraft have been stolen from general aviation airports, indicating a potential weakness that could be exploited by terrorists. Moreover, it was reported that the September 11 hijackers researched the use of crop dusters to spread biological or chemical agents.
General aviation’s vulnerability was revealed in January 2002, when a Florida teenage flight student crashed a single-engine Cessna airplane into a Tampa skyscraper. FAA has since issued a notice with voluntary guidance for flight schools and businesses that provide services for aircraft and pilots at general aviation airports. The suggestions include using different keys to gain access to an aircraft and start the ignition, not giving students access to aircraft keys, ensuring positive identification of flight students, and training employees and pilots to report suspicious activities. However, because the guidance is voluntary, it is unknown how many general aviation airports have implemented these measures. We reported in June 2003 that TSA was working with industry stakeholders as part of TSA’s Aviation Security Advisory Council to close potential security gaps in general aviation. According to our recent discussions with industry representatives, however, the stakeholders have not been able to reach a consensus on the actions needed to improve security in general aviation. General aviation industry representatives, such as the Aircraft Owners and Pilots Association and General Aviation Manufacturers Association, have opposed any restrictions on operating general aviation aircraft and believe that small planes do not pose a significant risk to the country. Nonetheless, some industry representatives indicated that the application of a risk management approach would be helpful in determining the next steps in improving general aviation security. (We discuss risk management in more detail later in this testimony.) To identify these next steps, TSA chartered a working group on general aviation within the existing Aviation Security Advisory Committee, and this working group is scheduled to report to the full committee in the fall of 2003. We have ongoing work that is examining general aviation security in further detail. Airport perimeters present a potential vulnerability by providing a route for individuals to gain unauthorized access to aircraft and secure areas of airports (see fig. 4). For example, in August 2003, the national media reported that three boaters wandered the tarmac at Kennedy International Airport after their boat became beached near a runway. In addition, terrorists could launch an attack using a shoulder-fired missile from the perimeter of an airport, as well as from locations just outside the perimeter. For example, in separate incidents in the late 1970s, guerrillas with shoulder-fired missiles shot down two Air Rhodesia planes. More recently, the national media have reported that since September 2001, al Qaeda has twice tried to down planes outside the United States with shoulder-fired missiles. We reported in June 2003 that airport operators have increased their patrols of airport perimeters since September 2001, but industry officials stated that they do not have enough resources to completely protect against missile attacks. A number of technologies could be used to secure and monitor airport perimeters, including barriers, motion sensors, and closed-circuit television. Airport representatives have cautioned that as security enhancements are made to airport perimeters, it will be important for TSA to coordinate with FAA and the airport operators to ensure that any enhancements do not pose safety risks for aircraft. 
We have separate ongoing work examining the status of efforts to improve airport perimeter security and assessing the nature and extent of the threat from shoulder-fired missiles. TSA's efforts to strengthen and sustain aviation security face several longer-term challenges in the areas of risk management, funding, coordination, strategic human capital management, and building a results-oriented organization. As aviation security is viewed in the larger context of transportation and homeland security, it will be important to set strategic priorities so that national resources can be directed to the greatest needs. Although TSA initially focused on increasing aviation security, it has more recently begun to address security in the other transportation modes. However, the size and diversity of the national transportation system make it difficult to adequately secure, and TSA and the Congress are faced with demands for additional federal funding for transportation security that far exceed the additional amounts made available. We have advocated the use of a risk management approach to guide federal programs and responses to better prepare for and withstand terrorist threats, and we have recommended that TSA use this approach to strengthen security in aviation as well as in other transportation modes. A risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions linking resources with prioritized efforts for results. Comprehensive risk-based assessments support effective planning and resource allocation. Figure 5 describes this approach. TSA agreed with our recommendation and has adopted a risk management approach in attempting to enhance security across all transportation modes. TSA's Office of Threat Assessment and Risk Management is developing two assessment tools that will help assess criticality, threats, and vulnerabilities. The first tool, which assesses criticality, will arrive at a criticality score for a facility or transportation asset by incorporating factors such as the number of fatalities that could occur during an attack and the economic and sociopolitical importance of the facility or asset. This score will enable TSA, in conjunction with transportation stakeholders, to rank facilities and assets within each mode and thus focus resources on those that are deemed most important. TSA is working with another Department of Homeland Security office—the Information Analysis and Infrastructure Protection Directorate—to ensure that the criticality tool will be consistent with the Department's overall approach for managing critical infrastructure. The second tool—the Transportation Risk Assessment and Vulnerability Evaluation tool (TRAVEL)—will assess threats and analyze vulnerabilities for all transportation modes. The tool produces a relative risk score for potential attacks against a transportation asset or facility. In addition, TRAVEL will include a cost-benefit component that compares the cost of implementing a given countermeasure with the reduction in relative risk due to that countermeasure. We reported in June 2003 that TSA plans to use this tool to gather comparable threat and vulnerability information across all transportation modes. It is important for TSA to complete the development of the two tools and use them to prepare action plans for specific modes, such as aviation, and for transportation security generally.
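As a rough illustration of what such tools compute, the sketch below combines threat, vulnerability, and criticality into a relative risk score and ranks countermeasures by risk reduction per dollar. The multiplicative scoring rule and all of the names and numbers are assumptions chosen for illustration; they are not TSA's criticality or TRAVEL methodologies.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    threat: float         # estimated likelihood of attack (0-1)
    vulnerability: float  # estimated likelihood an attack succeeds (0-1)
    criticality: float    # consequence score (0-100)

    def relative_risk(self) -> float:
        # One common convention: risk = threat x vulnerability x consequence.
        return self.threat * self.vulnerability * self.criticality

@dataclass
class Countermeasure:
    name: str
    cost_millions: float
    vulnerability_reduction: float  # fraction by which vulnerability falls

def risk_reduction_per_million(asset: Asset, cm: Countermeasure) -> float:
    reduced_vulnerability = asset.vulnerability * (1 - cm.vulnerability_reduction)
    reduced_risk = asset.threat * reduced_vulnerability * asset.criticality
    return (asset.relative_risk() - reduced_risk) / cm.cost_millions

terminal = Asset("airport terminal", threat=0.3, vulnerability=0.6, criticality=90)
options = [
    Countermeasure("perimeter sensors", 25.0, 0.4),
    Countermeasure("in-line baggage screening", 150.0, 0.7),
]

# Rank countermeasures by how much relative risk each $1 million buys down.
for cm in sorted(options, key=lambda c: risk_reduction_per_million(terminal, c),
                 reverse=True):
    print(f"{cm.name}: {risk_reduction_per_million(terminal, cm):.3f} "
          f"risk points per $1 million")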
Two key funding and accountability challenges will be (1) paying for increased aviation security and (2) ensuring that these costs are controlled. The costs associated with the equipment and personnel needed to screen passengers and their baggage alone are huge. The administration requested $4.2 billion for aviation security for fiscal year 2004, which included about $1.8 billion for passenger screening and $944 million for baggage screening. ATSA created a passenger security fee to pay for the costs of aviation security, but the fee has not generated enough money to do so. DOT's Inspector General reported that the security fees are estimated to generate only about $1.7 billion in fiscal year 2004. A major funding issue is paying for the purchase and installation of the remaining explosives detection systems for the airports that received waivers, as well as for the reinstallation of the systems that were placed in airport lobbies last year and now need to be integrated into airport baggage-handling systems. Integrating the equipment with the baggage-handling systems is expected to be costly because it will require major facility modifications. For example, modifications needed to integrate the equipment at Boston's Logan International Airport are estimated to cost $146 million. Estimates for Dallas/Fort Worth International Airport are $193 million. DOT's Inspector General has reported that the cost of integrating the equipment nationwide could be as high as $3 billion. A key question is how to pay for these installation costs. Funds from FAA's AIP grants and passenger facility charges are eligible sources for funding this work. In fiscal year 2002, AIP grant funds totaling $561 million were used for terminal modifications to enhance security. However, using these funds for security reduced the funding available for other airport development projects, such as projects to bring airports up to federal design standards and reconstruction projects. In February 2003, we identified letters of intent as a funding option that has been successfully used to leverage private sources of funding. TSA has since signed letters of intent with three airports—Boston Logan, Dallas-Fort Worth, and Seattle-Tacoma International Airports. Under the agreements, TSA will pay 75 percent of the cost of integrating the explosives detection equipment into the baggage-handling systems. The payments will stretch out over 3 to 4 years. Airport representatives said that about 30 more airports have requested similar agreements. The slow pace of TSA's approval process has raised concerns about delays in reinstalling and integrating explosives detection equipment with baggage-handling systems—delays that will require more labor-intensive and less efficient baggage screening by other approved means. To provide financial assistance to airports for security-related capital investments, such as the installation of explosives detection equipment, proposed aviation reauthorization legislation would establish an aviation security capital fund that would authorize $2 billion over the next 4 years. The funding would be made available to airports in letters of intent, and large- and medium-hub airports would be expected to provide a match of 10 percent of a project's costs. A 5 percent match would be required for all other airports. This legislation would provide a dedicated source of funding for security-related capital investments and could minimize the need to use AIP funds for security.
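The cost-sharing arithmetic in these arrangements is straightforward; the short sketch below works through it, using Boston Logan's $146 million estimate from above. The 75 percent letter-of-intent share and the 10 and 5 percent matches come from the testimony; treating them as flat percentages of total project cost is a simplifying assumption.

def tsa_share_under_letter_of_intent(project_cost: float) -> float:
    # Under the signed letters of intent, TSA pays 75 percent of the cost
    # of integrating explosives detection equipment.
    return 0.75 * project_cost

def airport_match_under_capital_fund(project_cost: float,
                                     large_or_medium_hub: bool) -> float:
    # Under the proposed aviation security capital fund, large- and
    # medium-hub airports would match 10 percent; all others, 5 percent.
    return project_cost * (0.10 if large_or_medium_hub else 0.05)

cost = 146_000_000  # Boston Logan's estimated integration cost
print(f"TSA share under a letter of intent: "
      f"${tsa_share_under_letter_of_intent(cost):,.0f}")
print(f"Large-hub match under the proposed fund: "
      f"${airport_match_under_capital_fund(cost, True):,.0f}")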
An additional funding issue is how to ensure continued investment in transportation research and development. For fiscal year 2003, TSA was appropriated about $110 million for research and development, of which $75 million was designated for the next-generation explosives detection systems. However, TSA has proposed to reprogram $61.2 million of these funds to be used for other purposes, leaving about $12.7 million to be spent on research and development this year. This proposed reprogramming could limit TSA’s ability to sustain and strengthen aviation security by continuing to invest in research and development for more effective equipment to screen passengers, their carry-on and checked baggage, and cargo. In ongoing work, we are examining the nature and scope of research and development work by TSA and the Department of Homeland Security, including their strategy for accelerating the development of transportation security technologies. By reprogramming funds and making acknowledged use of certain funds for purposes other than those intended, TSA has raised congressional concerns about accountability. According to TSA, it has proposed to reprogram a total of $849.3 million during fiscal year 2003, including the $61.2 million that would be cut from research and development and $104 million that would be taken from the federal air marshal program and used for unintended purposes. Because of these congressional concerns, we were asked to investigate TSA’s process for reprogramming funds for the air marshal program and to assess the implications of the proposed funding reductions in areas such as the numbers of hours flown and flights taken. We have ongoing work to address these issues. To ensure appropriate oversight and accountability, it is important that TSA maintain clear and transparent communication with the Congress and industry stakeholders about the use of its funds. In July 2002, we reported that long-term attention to cost and accountability controls for acquisition and related business processes will be critical for TSA, both to ensure its success and to maintain its integrity and accountability. According to DOT’s Inspector General, although TSA has made progress in addressing certain cost-related issues, it has not established an infrastructure that provides effective controls to monitor contractors’ costs and performance. For example, in February 2003, the Inspector General reported that TSA’s $1 billion hiring effort cost more than most people expected and that TSA’s contract with NCS Pearson to recruit, assess, and hire the screener workforce contained no safeguards to prevent cost increases. The Inspector General found that TSA provided limited oversight for the management of the contract expenses and, in one case, between $6 million and $9 million of the $18 million paid to a subcontractor appeared to be a result of wasteful and abusive spending practices. As the Inspector General recommended, TSA has since hired the Defense Contract Audit Agency to audit its major contracts. To ensure control over TSA contracts, the Inspector General has further recommended that the Congress set aside a specific amount of TSA’s contracting budget for overseeing contractors’ performance with respect to cost, schedule, and quality. Sustaining the aviation security advancements of the past 2 years also depends on TSA’s ability to form effective partnerships with federal, state, and local agencies and with the aviation community. 
Effective, well-coordinated partnerships at the local level require identifying roles and responsibilities; developing effective, collaborative relationships with local and regional airports and emergency management and law enforcement agencies; agreeing on performance-based standards that describe desired outcomes; and sharing intelligence information. The lynchpin in TSA's efforts to coordinate with airports and local law enforcement and emergency response agencies is, according to the agency, the 158 federal security directors and staff that TSA has deployed nationwide. The security directors' responsibilities include ensuring that standardized security procedures are implemented at the nation's airports; working with state and local law enforcement personnel, when appropriate, to ensure airport and passenger security; and communicating threat information to airport operators and others. Airport representatives, however, have indicated that the relationships between federal security directors and airport operators are still evolving and that better communication is needed at some airports. Key to improving the coordination between TSA and local partners is establishing clearly defined roles. In some cases, concerns have arisen about conflicts between the roles of TSA, as the manager of security functions at airports, and of airport officials, as the managers of other airport operations. Industry representatives viewed such conflicts as leading to confusion in areas such as communicating with local entities. According to airport representatives, for example, TSA has developed guidance or rules for airports without involving them, and time-consuming changes have then had to be made to accommodate operational factors. The representatives maintain that it would be more efficient and effective to consider such operational factors earlier in the process. Ultimately, inadequate coordination and unclear roles result in inefficient uses of limited resources. TSA also has to ensure that the terrorist and threat information gathered and maintained by law enforcement and other agencies—including the Federal Bureau of Investigation, the Immigration and Naturalization Service, the Central Intelligence Agency, and the Department of State—is quickly and efficiently communicated among federal agencies and to state and local authorities, as needed. Disseminating such information is important to allow those who are involved in protecting the nation's aviation system to address potential threats rather than simply react to known threats. In aviation security, timely information sharing among agencies has been hampered by the agencies' reluctance to share sensitive information and by outdated, incompatible computer systems. As we found in reviewing 12 watch lists maintained by nine federal agencies, information was being shared among some of them but not among others. Moreover, even when sharing was occurring, costly and overly complex measures had to be taken to facilitate it. To promote better integration and sharing of terrorist and criminal watch lists, we have recommended that the Department of Homeland Security, in collaboration with the other departments and agencies that have and use watch lists, lead an effort to consolidate and standardize the federal government's watch list structures and policies.
In addition, as we found earlier this year, representatives of numerous state and local governments and transportation industry associations indicated that the general threat warnings received by government agencies are not helpful. Rather, they said, transportation operators, including airport operators, want more specific intelligence information so that they can understand the true nature of a potential threat and implement appropriate security measures. As it organizes itself to protect the nation's transportation system, TSA faces the challenge of strategically managing its workforce of more than 60,000 people, most of whom are deployed at airports or on aircraft to detect weapons and explosives and to prevent them from being taken aboard and used on aircraft. Additionally, over the next several years, TSA faces the challenge of "right-sizing" this workforce as efficiency is improved with new security-enhancing technologies, processes, and procedures. For example, as explosives detection systems are integrated with baggage-handling systems, the use of more labor-intensive screening methods, such as trace detection techniques and manual searches of baggage, can be reduced. Other planned security enhancements, such as CAPPS II and the registered traveler program, also have the potential to make screening more efficient. To assist agencies in managing their human capital more strategically, we have developed a model that identifies cornerstones and related critical success factors that agencies should apply and steps they can take. Our model is designed to help agency leaders effectively lead and manage their people and integrate human capital considerations into daily decision-making and the program results they seek to achieve. In January 2003, we reported that TSA was addressing some critical human capital success factors by hiring personnel, using a wide range of tools available for hiring, and beginning to link individual performance to organizational goals. However, concerns remain about the size and training of that workforce, the adequacy of the initial background checks for screeners, and TSA's progress in setting up a performance management system. As noted earlier in this testimony, TSA now plans to reduce its screener workforce by 6,000 by September 30, 2003, and it has proposed cutting the workforce by an additional 3,000 in fiscal year 2004. This planned reduction has raised concerns about passenger delays at airports and has led TSA to begin hiring part-time screeners to make more flexible and efficient use of its workforce. In addition, TSA used an abbreviated background check process to hire and deploy enough screeners to meet ATSA's screening deadlines in 2002. After obtaining additional background information, TSA terminated the employment of some of these screeners. TSA reported 1,208 terminations as of May 31, 2003, that it ascribed to a variety of reasons, including criminal offenses and failures to pass alcohol and drug tests. Furthermore, the national media have reported allegations of operational and management control problems that emerged with the expansion of the Federal Air Marshal Service, including inadequate background checks and training, uneven scheduling, and inadequate policies and procedures. In ongoing work, we are examining the effectiveness of TSA's efforts to train, equip, and supervise passenger screeners, and we are assessing the effects of expansion on the Federal Air Marshal Service.
In addition, we reported in January 2003 that TSA had taken the initial steps in establishing a performance management system linked to organizational goals. Such a system will be critical for TSA to motivate and manage staff, ensure the quality of screeners' performance, and, ultimately, restore public confidence in air travel. For TSA to sustain enhanced aviation security over the long term, it will be important for the agency to continue to build a results-oriented culture within the new Department of Homeland Security. To help federal agencies successfully transform their cultures, as well as the new Department of Homeland Security merge its various components into a unified department, we identified key practices that have consistently been found at the center of successful mergers, acquisitions, and transformations. These key practices, together with implementation strategies such as establishing a coherent mission and integrated strategic goals to guide the transformation, can help agencies become more results oriented, customer focused, and collaborative. (See app. III.) These practices are particularly important for the Department of Homeland Security, whose implementation and transformation we have designated as high risk. The Congress required TSA to adopt a results-oriented strategic planning and reporting framework and, specifically, to provide an action plan with goals and milestones to outline how acceptable levels of performance for aviation security would be achieved. In prior work, we reported that TSA has taken the first steps in performance planning and reporting by defining its mission, vision, and values and that this practice would continue to be important when TSA moved into the Department of Homeland Security. Therefore, we recommended that TSA take the next steps to implement results-oriented practices. These steps included establishing performance goals and measures for all modes of transportation as part of a strategic planning process that involves stakeholders; defining more clearly the roles and responsibilities of its various offices in collaborating and communicating with stakeholders; and formalizing the roles and responsibilities of governmental entities for transportation security. Table 1 shows selected ATSA requirements, TSA's actions and plans, and the next steps we recommended. TSA agreed with our recommendations. After spending billions of dollars over the past 2 years on people, policies, and procedures to improve aviation security, we have much more security now than we had before September 2001, but it has not been determined how much more secure we are. The vast number of guns, knives, and other potential threat items that screeners have confiscated suggests that security is working, but it also suggests that improved public awareness of prohibited items could help focus resources where they are most needed and reduce delays and inconvenience to the public. Faced with vast and competing demands for security resources, TSA should continue its efforts to identify technologies, such as CAPPS II, that will leverage its resources and potentially improve its capabilities. Improving the efficiency and effectiveness of aviation security will also require risk assessments and plans that help maintain a balance between security and customer service. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have. For further information on this testimony, please contact Gerald L.
Dillingham at (202) 512-2834. Individuals making key contributions to this testimony include Elizabeth Eisenstadt, David Hooper, Jennifer Kim, Heather Krause, Maren McAvoy, John W. Shumann, and Teresa Spisak.

- Require new background checks for those who have access to secure areas of the airport.
- Institute a 45-day waiting period for aliens seeking flight training for planes of 12,500 pounds or more.
- Establish qualifications for federal screeners.
- Report to the Congress on improving general aviation security.
- Screen all checked baggage in U.S. airports using explosives detection systems, passenger-bag matching, manual searches, canine units, or other approved means.
- The Federal Aviation Administration (FAA) is to develop guidance for air carriers to use in developing programs to train flight and cabin crews to resist threats (within 60 days after FAA issues the guidance, each airline is to develop a training program and submit it to FAA; within 30 days of receiving a program, FAA is to approve it or require revisions; within 180 days of receiving FAA's approval, the airline is to complete the training of all flight and cabin crews).
- Develop a plan to train federal screeners.
- Foreign and domestic carriers are to provide electronic passenger and crew manifests to Customs for flights from foreign countries to the United States.
- Begin collecting the passenger security fee.
- The Under Secretary is to assume civil aviation security functions from FAA.
- Implement an aviation security program for charter carriers.
- Begin awarding grants for security-related research and development.
- The National Institute of Justice is to report to the Secretary on less-than-lethal weapons for flight crew members.
- Report to the Congress on the deployment of baggage screening equipment.
- Report to the Congress on progress in evaluating and taking the following optional measures: require 911 capability for onboard passenger telephones; establish uniform IDs for law enforcement personnel carrying weapons on planes or in secure areas; establish requirements for trusted traveler programs; develop alternative security procedures to avoid damage to medical products; provide for the use of secure communications technologies to inform airport security forces about passengers who are identified on security databases; require pilot licenses to include a photograph and biometric identifiers; and use voice stress analysis, biometric, or other technologies to prevent high-risk passengers from boarding.
- Deploy federal screeners, security managers, and law enforcement officers to screen passengers and property.
- Report to the Congress on screening for small aircraft with 60 or fewer seats.
- Establish pilot program to contract with private screening companies (program to last until Nov. 19, 2004).
- Screen all checked baggage by explosives detection systems.
- Carriers are to transfer screening property to TSA.
- FAA is to issue an order prohibiting access to the flight deck, requiring strengthened cabin doors, requiring that cabin doors remain locked, and prohibiting possession of a key for all but the flight deck crew.
- Improve perimeter screening of all individuals, goods, property, and vehicles.
- Screen all cargo on passenger flights and cargo-only flights.
- Establish procedures for notifying FAA, state and local law enforcement officers, and airport security of known threats.
- Establish procedures for airlines to identify passengers who pose a potential security threat.
- FAA is to develop and implement methods for using cabin video monitors, continuously operating transponders, and notifying flight deck crew of a hijacking.
- Require flight training schools to conduct security awareness programs for employees. (Completed.)
- Work with airport operators to strengthen access control points and consider deploying technology to improve security access.
- Provide operational testing for screeners.
- Assess dual-use items that seem harmless but could be dangerous and inform screening personnel.
- Establish a system for measuring staff performance.
- Establish management accountability for meeting performance goals.
- Periodically review threats to civil aviation, including chemical and biological weapons. (Ongoing.)

Except where otherwise indicated, the Transportation Security Administration (TSA) is responsible for implementing the provisions.

H.R. 2144 - Aviation Security Technical Corrections and Improvements Act - Many of the important provisions of this bill have been incorporated into the Conference Report version of the FAA Reauthorization Act, H.R. 2115.

S. 1409 - Rebuild America Act of 2003 - Establishes a new grant program in the Department of Homeland Security (DHS) for airport security improvements, including projects to replace baggage conveyer systems and projects to reconfigure terminal baggage areas as needed to install explosives detection systems. The Under Secretary for Border and Transportation Security is authorized to issue letters of intent to airports for these types of projects. One billion dollars is authorized for this program.

H.R. 2555 - House and Senate versions of the Department of Homeland Security Appropriations Act for 2004

House version - Makes fiscal year 2004 appropriations of $3.679 billion for the Transportation Security Administration (TSA) to provide civil aviation security services (aviation security, federal air marshals, maritime and land security, intelligence, research and development, and administration): $1.673 billion for passenger screening activities, $1.285 billion for baggage screening activities, $721 million for airport support and enforcement presence, $235 million for physical modifications of airports to provide for the installation of checked baggage explosives detection systems, and $100 million for the procurement of the explosives detection systems. Continues to cap the number of screeners at 45,000 full-time equivalent positions. Prohibits the use of funds authorized in this act to pursue or adopt regulations requiring airport sponsors to provide, without cost to TSA, building construction, maintenance, utilities and expenses, or space for services relating to aviation security (excluding space for necessary checkpoints).

Senate version of H.R. 2555 - Makes fiscal year 2004 appropriations of $4.524 billion for TSA to provide civil aviation security services: $3.185 billion for screening activities, $1.339 billion for airport support and enforcement presence, $309 million for physical modifications of airports to provide for the installation of checked baggage explosives detection systems, and $151 million for the procurement of the explosives detection systems. Prohibits the use of funds authorized in this act to pursue or adopt regulations requiring airport sponsors to provide, without cost to TSA, building construction, maintenance, utilities and expenses, or space for services relating to aviation security (excluding space for necessary checkpoints).
Prohibits the use of funds authorized in this act for the Computer Assisted Passenger Prescreening System (CAPPS II) until GAO has reported to the Committees on Appropriations that certain requirements have been met, including (1) the existence of a system of due process by which passengers considered to pose a threat may appeal their delay or prohibition from boarding a flight; (2) that the underlying error rate of databases will not produce a large number of false positives that would result in a significant number of passengers being treated mistakenly or security resources being diverted; (3) that TSA has stress-tested and demonstrated the efficacy and predictive accuracy of all search tools in CAPPS II; and (4) that the Secretary has established an internal oversight board to monitor the manner in which CAPPS II is being developed and prepared. Also requires a report from the Secretary of Homeland Security on actions taken to develop countermeasures for commercial aircraft against shoulder-fired missile systems and on vulnerability assessments of this threat for larger airports.

H.R. 2115 - Vision 100 - Century of Aviation Reauthorization Act - Conference Report version:
- Gives FAA the authority to take a certificate action if it is notified by DHS that the holder of the certificate presents a security threat.
- Gives the Secretary of Transportation the authority to make grants to general aviation entities (including airports, operators, and manufacturers) to reimburse them for security costs incurred and revenues lost because of restrictions imposed by the federal government in response to the events of September 11. The bill authorizes $100 million for these grants.
- Authorizes DHS to reimburse air carriers and airports for all security screening activities they are still performing, such as providing catering services and checking documents at security checkpoints, and for providing the space and facilities used to perform screening functions, to the extent funds are available.
- Requires air carriers to carry out a training program for flight and cabin crews to prepare for possible threat conditions. TSA is required to establish minimum standards for this training within 1 year of the act's passage.
- Requires DHS to report in 6 months on the effectiveness of aviation security, specifically including the air marshal program; the hardening of cockpit doors; and the security screening of passengers, checked baggage, and cargo.
- Establishes within DHS a grant program for airport sponsors for (1) projects to replace baggage conveyor systems related to aviation security; (2) projects to reconfigure terminal baggage areas as needed to install explosives detection systems; and (3) projects to enable the Under Secretary for Border and Transportation Security to deploy explosives detection systems behind the ticket counter, in the baggage sorting area, or in line with the baggage handling system. Requires $250 million annually from the existing aviation security fee paid by airline passengers to be deposited in an Aviation Security Capital Fund and made available to finance this grant program.
- Requires TSA to certify that civil liberty and privacy issues have been addressed before implementing CAPPS II and requires GAO to assess TSA's compliance 3 months after TSA makes the required certification.
- Allows cargo pilots to carry guns under the same program as pilots of passenger airlines, and permits an off-duty pilot to transport the gun in a lockbox in the passenger cabin rather than in the baggage hold.
- Also provides that both passenger and cargo pilots are to be treated equitably in their access to training.
- Requires security audits of all foreign repair stations within 18 months after TSA issues rules governing the audits; the rules must be issued within 240 days of enactment.
- Requires background checks on aliens seeking flight training in aircraft, regardless of the size of the aircraft. For all training on small aircraft, includes a notification requirement but no waiting period. For training on larger aircraft, adopts an expedited procedure if the applicant already has training, a license, or a background check, and adopts a 30-day waiting period for first-time training on large aircraft. Makes TSA responsible for the background check and requires TSA to issue an interim final rule in 60 days to implement this section; the section takes effect when that rule becomes effective.

S. 236 - Background Checks for Foreign Flight School Applicants - Amends federal aviation law to require a background check of alien flight school applicants without regard to the maximum certificated weight of the aircraft for which they seek training. (Currently, a background check is required only for training on aircraft with a maximum certificated takeoff weight of 12,500 pounds or more.)

S. 165 - Air Cargo Security Act - House companion bill (H.R. 1103) - Amends federal aviation law to require the screening of cargo that is to be transported in passenger aircraft operated by domestic and foreign air carriers in interstate air transportation. Directs TSA to develop a strategic plan to carry out such screening. Requires the establishment of systems that (1) provide for the regular inspection of shipping facilities for cargo shipments; (2) provide an industrywide pilot program database of known shippers of cargo; (3) train persons who handle air cargo to ensure that such cargo is properly handled and safeguarded from security breaches; and (4) require air carriers operating all-cargo aircraft to have an approved plan for the security of their air operations area, the cargo placed aboard the aircraft, and persons having access to their aircraft on the ground or in flight.

H.R. 1366 - Aviation Industry Stabilization Act - Requires the Under Secretary for Border and Transportation Security, after all cockpit doors are strengthened, to consider and report to the Congress on whether it is necessary to require federal air marshals to be seated in the first-class cabin of an aircraft with strengthened cockpit doors. Requires the Under Secretary to (1) undertake the actions necessary to improve the screening of mail so that it can be carried on passenger flights and (2) reimburse air carriers for certain screening and related activities, as well as for the cost of fortifying cockpit doors and for any financial losses attributed to the loss of air traffic resulting from the use of force against Iraq in calendar year 2003. Establishes an air cargo security working group, composed of various groups, to develop recommendations on enhancing the current known shipper program.

H.R. 115 - Aviation Biometric Badge Act - Amends federal aviation law to direct TSA to require by regulation that each security screener (or employee who has unescorted access, or may permit other individuals to have unescorted access, to an aircraft or a secured area of the airport) be issued a biometric security badge that identifies the person by fingerprint or retinal recognition.
H.R. 1049 - Arming Cargo Pilots Against Terrorism Act - Senate companion bill (S. 516) - Expresses the sense of the Congress that a flight deck crew member of a cargo aircraft should be armed with a firearm to defend the aircraft against attacks by terrorists who could use the aircraft as a weapon of mass destruction or for other terrorist purposes. Amends federal transportation law to authorize the training and arming of flight deck crew members (pilots) of all-cargo air transportation flights to prevent acts of criminal violence or air piracy.

H.R. 765 - (No title) - Legislation to arm cargo pilots - Amends federal aviation law to allow cargo pilots (not just passenger pilots) to participate in the federal flight deck officer program.

H.R. 580 - Commercial Airline Missile Defense Act - Senate companion bill (S. 311) - Directs the Secretary of Transportation to issue regulations requiring all turbojet aircraft of air carriers to be equipped with a missile defense system. Requires the Secretary to purchase such defense systems and make them available to all air carriers. Sets forth certain interim security measures to be taken before the deployment of such defense systems.

- Define and articulate a succinct and compelling reason for change.
- Balance continued delivery of services with merger and transformation activities.
- Establish a coherent mission and integrated strategic goals to guide the transformation.
- Adopt leading practices for results-oriented strategic planning and reporting.
- Focus on a key set of principles and priorities at the outset of the transformation.
- Embed core values in every aspect of the organization to reinforce the new culture.
- Set implementation goals and a time line to build momentum and show progress from day one.
- Make implementation goals and the time line public.
- Seek and monitor employee attitudes and take appropriate follow-up actions.
- Identify cultural features of merging organizations to increase understanding of former work environments.
- Attract and retain key talent.
- Establish an organizationwide knowledge and skills inventory to exchange knowledge among merging organizations.
- Dedicate an implementation team to manage the transformation process.
- Establish networks to support the implementation team.
- Select high-performing team members.
- Use the performance management system to define responsibility and ensure accountability for change.
- Adopt leading practices to implement effective performance management systems with adequate safeguards.
- Establish a communication strategy to create shared expectations and report related progress.
- Communicate early and often to build trust.
- Ensure consistency of message.
- Encourage two-way communication.
- Provide information to meet specific needs of employees.
- Involve employees to obtain their ideas and gain their ownership for the transformation.
- Use employee teams.
- Involve employees in planning and sharing performance information.
- Incorporate employee feedback into new policies and procedures.
- Delegate authority to appropriate organizational levels.
- Adopt leading practices to build a world-class organization.

Transportation Security Research and Development Programs at DHS and TSA
Key Questions: (1) What were the strategy and organizational structure for transportation security research and development (R&D) prior to 9/11, and what are the current strategy and structure? (2) How do DHS and TSA select their transportation security R&D projects, and what projects are in their portfolios?
(3) What are DHS's and TSA's goals and strategies for accelerating the development of transportation security technologies? (4) What are the nature and scope of coordination of R&D efforts between DHS and TSA, as well as with other public and private sector research organizations?

Key Questions: (1) How has the federal air marshal program evolved in terms of recruiting, training, retention, and operations since its management was transferred to TSA? (2) To what extent has TSA implemented the internal controls needed to meet the program's operational and management control challenges? (3) To what extent has TSA developed plans and initiatives to sustain the program and accommodate its future growth and maturation?

Key Questions: (1) What are the status and associated costs of TSA's efforts to acquire, install, and operate explosives detection equipment (electronic trace detection technology and explosives detection systems) to screen all checked baggage by December 31, 2003? (2) What are the benefits and trade-offs (including costs, operations, and performance) of using alternative explosives detection technologies currently available for baggage screening?

Reprogramming of Air Marshal Program Funds
Key Questions: (1) Describe the internal preparation, review, and approval process for DHS's reprogrammings and, specifically, the process for the May 15 and July 25 reprogramming requests for the air marshal program. (2) Determine whether an impoundment or deferral notice should have been sent to the Congress, and identify any other associated legal issues. (3) Identify the implications, for both the air marshal program and other programs, of the pending reprogramming request.

Key Questions: (1) How have security concerns and measures changed at general aviation airports since September 11, 2001? (2) What steps has TSA taken to improve general aviation security?

Background Checks for Banner-Towing Aircraft
Key Questions: (1) What are the procedures for conducting background and security checks for pilots of small banner-towing aircraft requesting waivers to perform stadium overflights? (2) To what extent have these procedures been followed in conducting required background and security checks since September 11, 2001? (3) How effective have these procedures been in reducing risks to public safety?

TSA's Computer Assisted Passenger Prescreening System II (CAPPS II)
Key Questions: (1) How will the CAPPS II system function, and what data will be needed to make the system operationally effective? (2) What safeguards will be put in place to protect the traveling public's privacy? (3) What systems and measures are in place to determine whether CAPPS II will result in improved national security? (4) What impact will CAPPS II have on the traveling public and on the airline industry in terms of costs, delays, risks, inconvenience, and other factors?

Key Questions: (1) What efforts have been taken or planned to ensure that passenger screeners comply with federal standards and other criteria, including efforts to train, equip, and supervise passenger screeners? (2) What methods does TSA use to test screeners' performance, and what have been the results of these tests? (3) How have the results of tests of TSA passenger screeners compared with the results achieved by screeners before September 11, 2001, and at five pilot program airports? (4) What actions is TSA taking to remedy performance concerns?
TSA's Efforts to Implement Sections 106, 136, and 138 of the Aviation and Transportation Security Act
Key Questions: What is the status of TSA's efforts to implement (1) section 106 of the act, requiring improved airport perimeter access security; (2) section 136, requiring the assessment and deployment of commercially available security practices and technologies; and (3) section 138, requiring background investigations for TSA and other airport employees?

Assessment of the Portable Air Defense Missile Threat
Key Questions: (1) What are the nature and extent of the threat from man-portable air defense systems (MANPADs)? (2) How effective are U.S. controls on the use of exported MANPADs? (3) How do multilateral efforts attempt to stem MANPAD proliferation? (4) What types of countermeasures are available to minimize this threat, and at what cost?

Airline Assistance: Determination of Whether the $5 Billion Provided by P.L. 107-42 Was Used to Compensate the Nation's Major Air Carriers for Their Losses Stemming from the Events of Sept. 11, 2001
Key Questions: (1) Was the $5 billion used only to compensate major air carriers for their uninsured losses incurred as a result of the terrorist attacks? (2) Were carriers reimbursed, per the act, only for increases in insurance premiums resulting from the attacks?

TSA's Use of Sole-Source Contracts
Key Questions: (1) To what extent does TSA follow applicable acquisition laws and policies, including those for ensuring adequate competition? (2) How well does TSA's organizational structure facilitate effective, efficient procurement? (3) How does TSA ensure that its acquisition workforce is equipped to award and oversee contracts? (4) How well do TSA's policies and processes ensure that TSA receives the supplies and services it needs on time and at reasonable cost?
In the 2 years since the terrorist attacks of September 11, 2001, the security of our nation's civil aviation system has assumed renewed urgency, and efforts to strengthen aviation security have received a great deal of congressional attention. On November 19, 2001, the Congress enacted the Aviation and Transportation Security Act (ATSA), which created the Transportation Security Administration (TSA) within the Department of Transportation (DOT) and defined its primary responsibility as ensuring security in aviation as well as in other modes of transportation. The Homeland Security Act, passed on November 25, 2002, transferred TSA to the new Department of Homeland Security, which assumed overall responsibility for aviation security. GAO was asked to describe the progress that has been made since September 11 to strengthen aviation security, the potential vulnerabilities that remain, and the longer-term management and organizational challenges to sustaining enhanced aviation security.

Since September 11, 2001, TSA has made considerable progress in meeting congressional mandates designed to increase aviation security. By the end of 2002, the agency had hired and deployed about 65,000 passenger and baggage screeners, federal air marshals, and others, and it was using explosives detection equipment to screen about 90 percent of all checked baggage. TSA is also initiating or developing efforts that focus on the use of technology and information to advance security. One effort under development, the next-generation Computer-Assisted Passenger Prescreening System (CAPPS II), would use national security and commercial databases to identify passengers who could pose risks so that they receive additional screening. Concerns about privacy rights will need to be addressed as this system moves toward implementation.

Although TSA has focused on ensuring that bombs and other threat items are not carried onto planes by passengers or in their luggage, vulnerabilities remain in air cargo, general aviation, and airport perimeter security. Each year, an estimated 12.5 million tons of cargo are transported on all-cargo and passenger planes, yet very little air cargo is screened for explosives. We have previously recommended, and the industry has suggested, that TSA use a risk management approach to set priorities as it works with the industry to determine the next steps in strengthening aviation security.

TSA faces longer-term management and organizational challenges to sustaining enhanced aviation security, including (1) developing and implementing a comprehensive risk management approach, (2) paying for increased aviation security needs and controlling costs, (3) establishing effective coordination among the many entities involved in aviation security, (4) strategically managing its workforce, and (5) building a results-oriented culture within the new Department of Homeland Security. TSA has begun to respond to recommendations we have made addressing many of these challenges, and we have other studies in progress.
Medicare Part D coverage is provided through private plans sponsored by dozens of health care organizations, which may charge premiums, deductibles, and copayments for the drug benefit. All Part D plans must meet federal requirements with respect to the categories of drugs they must cover and the extent of their pharmacy networks, and they must offer the standard Medicare Part D benefit or an actuarially equivalent benefit. Beyond these requirements, however, the specific formulary and pharmacy network of each PDP can vary.

Under the MMA, drug coverage for all dual-eligible beneficiaries transitioned from Medicaid to Medicare Part D on January 1, 2006. The MMA requires CMS to assign dual-eligible beneficiaries to a PDP if they have not enrolled in a Part D plan on their own. CMS may assign dual-eligible beneficiaries only to PDPs serving their area with premiums at or below the low-income benchmark amount, and it must randomly assign individuals if there is more than one eligible PDP. During October and December 2005, CMS randomly assigned to PDPs dual-eligible beneficiaries who had not already enrolled in a Part D plan. The agency mailed notices to these beneficiaries informing them of their assignment and advising them that they could select a different PDP if they wished. If they did not switch from their assigned PDP by December 31, 2005, their assignment took effect, with coverage beginning January 1, 2006. CMS enrolled 5,498,604 dual-eligible beneficiaries during this first round of assignments and continues to assign new dual-eligible beneficiaries to PDPs on a monthly basis when these beneficiaries do not independently enroll in a Part D plan.

For some dual-eligible beneficiaries, some drugs that were previously covered under Medicaid might not be covered by their Medicare PDP's formulary. Subject to certain parameters, PDPs have the flexibility to set their own formularies and, as a result, PDPs vary in their inclusion of the drugs most commonly used by dual-eligible beneficiaries. According to a 2006 report by the Department of Health and Human Services, Office of Inspector General (OIG), one-fifth of dual-eligible beneficiaries were assigned to PDPs that cover all of the most commonly used drugs, and one-third were assigned to PDPs that cover less than 85 percent of these drugs. However, dual-eligible beneficiaries are allowed to switch to a different PDP at any time, with coverage under the new PDP effective the following month. In addition, to help ensure a smooth transition to Part D, CMS requires PDP sponsors to provide a transition process for new enrollees whose current medications may not be included in their PDP's formulary. For 2006, CMS recommended that PDP sponsors fill a one-time transition supply of nonformulary drugs in order to accommodate the immediate needs of beneficiaries. In particular, CMS suggested that PDPs provide at least a 30-day transition supply to all beneficiaries and a 90- to 180-day transition supply for residents of long-term care facilities.

Dual-Eligible Beneficiaries

Dual-eligible beneficiaries are a particularly vulnerable population. Totaling roughly 6.2 million in January 2006, they account for about 15 percent of all Medicaid beneficiaries and 15 percent of all Medicare beneficiaries. In general, these individuals are poorer, tend to have far more extensive health care needs, have higher rates of cognitive impairments, and are more likely to be disabled than other Medicare beneficiaries.
A majority of dual-eligible beneficiaries live in the community and typically obtain drugs through retail pharmacies. Nearly one in four dual-eligible beneficiaries resides in a long-term care facility and obtains drugs through pharmacies that specifically serve long-term care facilities. While most Medicare beneficiaries enrolled in a PDP pay monthly premiums, deductibles, and other cost-sharing as part of their benefit package, the Medicare Part D program pays a substantial proportion of dual-eligible beneficiaries' cost-sharing obligations through its low-income subsidy program. For dual-eligible beneficiaries, Medicare pays the full amount of the monthly premium that nonsubsidy-eligible beneficiaries normally pay, up to the level of the low-income benchmark premium. Medicare Part D also covers most or all of the prescription copayments: dual-eligible beneficiaries pay copayments of $1 to $5.35 per prescription filled in 2007, with the exception of those in long-term care facilities, who pay no copayments. In addition, dual-eligible beneficiaries are not subject to a deductible or the so-called "donut hole."

In addition to dual-eligible beneficiaries, the Part D low-income subsidy is available to other low-income Medicare beneficiaries. Some of these other Medicare beneficiaries must apply for the subsidy through the Social Security Administration (SSA) or a state Medicaid agency. The subsidy is available on a sliding scale, according to income and resources. Dual-eligible beneficiaries are automatically entitled to the full subsidy amount and do not need to apply independently for the subsidy.

An individual can become a dual-eligible beneficiary in two main ways. First, Medicare beneficiaries can subsequently qualify for Medicaid. This occurs when their income and resources decline below certain thresholds and they enroll in the Supplemental Security Income (SSI) program, or when they incur medical costs that reduce their income below certain thresholds. CMS data indicate that roughly two-thirds of the 633,614 dual-eligible beneficiaries the agency enrolled in 2006 were Medicare beneficiaries who subsequently qualified for Medicaid and had not already signed up for a PDP on their own. According to CMS officials, it is not possible to predict the timing of dual eligibility for these individuals because determining Medicaid eligibility is a state function. Second, Medicaid beneficiaries can subsequently become eligible for Medicare, either by turning 65 years old or by completing their 24-month disability waiting period. This group represents approximately one-third of the new dual-eligible beneficiaries enrolled by CMS in PDPs. State Medicaid agencies can generally predict when this group of individuals will become dually eligible.

Multiple parties and multiple information systems are involved in the process of identifying and enrolling dual-eligible beneficiaries in PDPs. In addition to CMS, the SSA, state Medicaid agencies, and PDP sponsors play key roles in providing information needed to ensure that beneficiaries are identified accurately and enrolled. SSA maintains information on Medicare eligibility that is used by CMS and some states. State Medicaid agencies are responsible for forwarding to CMS lists of beneficiaries who the state believes to be eligible for both Medicare and Medicaid. PDP sponsors maintain information systems that exchange enrollment and billing information with CMS.
For the most part, CMS adapted existing information systems used in the administration of other parts of the Medicare program to perform specific functions required under Part D. In addition, CMS worked with the pharmacy industry to develop a tool specifically to aid pharmacies in obtaining billing information needed to process claims for dual-eligible beneficiaries without enrollment information. The principal systems supporting the Part D program are as follows:

The Medicare eligibility database. This system serves as a repository for Medicare beneficiary entitlement, eligibility, and demographic data. In the enrollment process for dual-eligible beneficiaries, the database is used by CMS to provide up-to-date information to verify the status of dual-eligible beneficiaries, as well as to determine subsidy status and make assignments to PDPs. It also provides data to other CMS systems, SSA, state Medicaid agencies, PDPs, and pharmacies.

The enrollment transaction system. This system is used to enroll beneficiaries in PDPs. In addition, it informs PDPs about a beneficiary's subsidy status and copayment information, calculates Medicare payments to PDPs for each covered enrollee, and processes changes in PDP enrollment, including those elected by the beneficiary.

The eligibility query. This tool is used by pharmacies to obtain Part D billing information from the Medicare eligibility database. When filling a prescription for a beneficiary who does not have proof of Part D enrollment or eligibility, a pharmacy submits a request for billing information using the eligibility query. In response, the pharmacy receives information on the beneficiary's PDP enrollment, including the data necessary to bill the beneficiary's PDP for the drugs dispensed.

The process of enrolling dual-eligible beneficiaries requires several steps; it begins when the state Medicaid agency identifies new dual-eligible beneficiaries and ends when PDPs make billing information available to pharmacies. (For more detailed information on the steps involved in identifying and enrolling dual-eligible beneficiaries, see app. I.) The key information systems (see fig. 1) and steps in identifying and enrolling dual-eligible beneficiaries are the following:

1. State Medicaid agencies obtain Medicare eligibility information from SSA or request data from CMS's Medicare eligibility database and match that information against their own Medicaid eligibility files. The state Medicaid agencies compile comprehensive files identifying all dual-eligible beneficiaries, known as the dual-eligible files. CMS receives Medicare eligibility information from SSA daily.

2. State Medicaid agencies send CMS the dual-eligible files, and CMS matches the files against data in its Medicare eligibility database to verify each individual's dual eligibility. The agency sends a response file back to each state that includes the results of the matching process for each submitted individual.

3. Dual-eligible beneficiaries who were matched are considered eligible for the full low-income subsidy, and the Medicare eligibility database sets the copayment information accordingly. This process is referred to as deeming. The Medicare eligibility database also assigns beneficiaries not already enrolled in a Part D plan to PDPs that operate in regions that match the beneficiary's official SSA address of record. Both the deeming and assignment information are sent to the enrollment transaction system to be processed.
4. The enrollment transaction system processes the deeming and assignment information in order to complete the enrollment and notifies the PDPs of those dual-eligible beneficiaries who have been enrolled in their PDP, along with their copayment amounts.

5. PDPs process the resulting enrollment, assign the standard billing information, and send this information to the Medicare eligibility database. In addition, the PDPs mail ID cards and PDP information to the enrolled beneficiary.

6. The Medicare eligibility database transmits the PDP's billing information to the eligibility query system.

7. Using the eligibility query, pharmacies can access the billing information needed to fill prescriptions and bill them to the assigned PDP if beneficiaries lack their enrollment information.
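To make the batch-oriented nature of steps 1 through 7 concrete, the following minimal Python sketch traces the monthly match-deem-assign cycle. All record layouts, field names, and function names here are illustrative assumptions, not CMS's actual implementation; only the overall flow (match against the eligibility database, deem the full subsidy, randomly assign unenrolled beneficiaries to a qualifying PDP, and return a response file to the state) is drawn from the process described above.

```python
import random
from dataclasses import dataclass

@dataclass
class StateRecord:
    ssn: str
    birth_date: str  # e.g., "1941-05-11"

def process_dual_eligible_file(state_file, medicare_db, benchmark_pdps_by_region):
    """Illustrative monthly batch: verify, deem, and assign dual eligibles.

    state_file: records the state believes are dually eligible (step 2).
    medicare_db: dict keyed on (ssn, birth_date) -> Medicare record (dict).
    benchmark_pdps_by_region: PDPs at or below the low-income benchmark.
    """
    response_file = []
    for person in state_file:
        record = medicare_db.get((person.ssn, person.birth_date))
        if record is None:
            # Mismatched birth date or SSN: the state must correct the
            # record and resubmit it on next month's dual-eligible file.
            response_file.append((person, "UNMATCHED"))
            continue
        # Deeming (step 3): full low-income subsidy and reduced copayments.
        record["subsidy"] = "FULL"
        record["copay_level"] = "DUAL"
        if record.get("pdp") is None:
            # Assignment (step 3): random choice among qualifying PDPs in
            # the region of the beneficiary's SSA address of record.
            region = record["ssa_region"]
            record["pdp"] = random.choice(benchmark_pdps_by_region[region])
        response_file.append((person, "MATCHED"))
    return response_file  # returned to the state Medicaid agency (step 2)
```

Because the entire cycle runs against monthly files rather than individual transactions, a correction to a single mismatched record cannot take effect until the next month's run, a point that matters for the timing problems discussed below.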
Under tight time frames, CMS and its partners integrated information systems to support the Part D program, piecing together existing information systems that had related Medicare functions. In addition, information systems belonging to state Medicaid agencies and PDPs had to integrate with CMS information systems, and CMS did not establish formal agreements with these partners until the time of implementation. Final regulations for the program were not issued until January 28, 2005, and business requirements for the program were not finalized until March 2005. Thus, there was little time for testing, given how late requirements and agreements were solidified.

A number of information systems problems surfaced in the early months of the program. These problems included logic errors in the enrollment process, which generated cancellations to PDPs instead of enrollments; the eligibility query being overwhelmed by the number of pharmacy inquiries; and CMS difficulties matching data submitted by the state Medicaid agencies to information in the Medicare eligibility database. These problems can be attributed, in part, to poor systems testing: because of the tight time frames associated with implementing Part D, robust system-level and end-to-end testing did not occur. In January 2006, CMS contracted with EDS, an information technology consulting company, to identify opportunities for improvement in the information systems and services for Medicare Part D. EDS's findings and observations addressed many overarching challenges in the information systems infrastructure supporting the program, including the observation that the aggressive time frame for implementation did not allow sufficient time for end-to-end testing. CMS is redesigning key information systems involved in the enrollment process in order to improve their efficiency.

CMS's enrollment processes and implementation of its Part D coverage policy generate challenges for some dual-eligible beneficiaries, pharmacies, and the Medicare program. Because the interval between notification of Medicaid eligibility and completion of the Part D enrollment process can extend to at least 5 weeks, some dual-eligible beneficiaries (those previously on Medicare who subsequently become eligible for Medicaid) may be unable to access their Part D benefits smoothly during this interval. At the same time, pharmacies that are unable to obtain up-to-date information about a dual-eligible beneficiary's enrollment are likely to experience difficulties billing PDPs. In addition, CMS has tied dual-eligible beneficiaries' effective date of Part D eligibility to the date of Medicaid eligibility, providing for several months of retroactive Medicare benefits. Although the Medicare program pays PDP sponsors for the period of retroactive coverage, beneficiaries were not informed of their right to reimbursement for drug costs incurred during this period. GAO found that Medicare paid PDPs an estimated $100 million in 2006 for coverage during periods for which dual-eligible beneficiaries may not have sought reimbursement for their drug costs.

The timing of the steps to enroll dual-eligible beneficiaries in Part D and to make billing information available to pharmacies generates a gap between the date beneficiaries are notified of their dual-eligibility status and the date they receive their enrollment information. As a result, some new dual-eligible beneficiaries may have difficulty obtaining their drugs at the pharmacy counter or may pay higher-than-required out-of-pocket costs. Among Medicare beneficiaries who subsequently become eligible for Medicaid, those not previously enrolled in a PDP are likely to experience more difficulties than those who had enrolled in a PDP before becoming eligible for Medicaid. Because the information systems used are not real-time processing systems, the enrollment process takes place over a period of about 2 months. Given the time involved in processing beneficiary data under current procedures, pharmacies may not have up-to-date PDP enrollment information on new dual-eligible individuals, and beneficiaries may consequently have difficulty obtaining medications at the pharmacy.

To illustrate why this occurs, we present the hypothetical example of Mr. Smith, who, as a Medicare beneficiary, did not sign up for the Part D drug benefit and, therefore, upon becoming Medicaid-eligible, must be enrolled in a PDP. (Fig. 2 shows the steps in Mr. Smith's enrollment process.) From the time Mr. Smith applies for his state's Medicaid program on August 11, it takes about 1 month for him to receive notification from the state that he is eligible for Medicaid. It takes until October 15 before the PDP notifies Mr. Smith of his enrollment and until October 16 before all the necessary information is available to his pharmacy. If Mr. Smith had sought to obtain prescription drugs before October 16, the pharmacy would have had difficulty getting the PDP billing information needed to process claims on his behalf.

This gap occurs because some of the enrollment and PDP assignment processing steps are done at scheduled intervals, such as once a month or once a week. According to CMS, because of the challenges some state Medicaid agencies have in compiling the dual-eligible file, CMS requires that the file be submitted just once a month. CMS waits until it receives the monthly dual-eligible files from all state Medicaid agencies before determining each individual beneficiary's subsidy level and making the PDP assignments for these beneficiaries. State Medicaid agencies that submit their dual-eligible file to CMS early in the monthly cycle do not have their beneficiaries' subsidy levels determined or the assignments to a PDP made any sooner than the last state to submit its file. Deeming and PDP assignment can take up to 10 days. Similarly, CMS's system of notifying the PDP of a beneficiary assignment is on a weekly cycle, beginning on Saturday.
Thus, regardless of what day in the week CMS's enrollment transaction system receives a beneficiary's PDP assignment and processes that enrollment, the information is not communicated to the PDP until the following Saturday. It takes up to another week before the beneficiary receives a membership card or other membership documentation from the PDP or before the pharmacy has computerized access to the Part D information needed to properly process a claim when an eligibility query is used to obtain billing information. In all, the time elapsed from the date the state notified Mr. Smith of his eligibility for Medicaid to the date Mr. Smith was notified by his assigned PDP of his Part D enrollment was at least 35 days.
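The cumulative effect of these batch cycles can be approximated with simple date arithmetic. The sketch below uses hypothetical calendar dates patterned on the Mr. Smith example; the monthly file cycle, the 10-day deeming window, and the weekly Saturday notification cycle come from the text, while the specific dates are assumptions for illustration.

```python
from datetime import date, timedelta

def next_saturday(d: date) -> date:
    """First Saturday on or after d (PDPs are notified on a weekly
    cycle beginning on Saturday)."""
    return d + timedelta(days=(5 - d.weekday()) % 7)

medicaid_notice  = date(2006, 9, 11)                  # state notifies Mr. Smith
monthly_file     = date(2006, 10, 1)                  # next monthly dual-eligible file
deeming_done     = monthly_file + timedelta(days=10)  # deeming/assignment: up to 10 days
pdp_notified     = next_saturday(deeming_done)        # weekly Saturday notification
materials_arrive = pdp_notified + timedelta(days=7)   # up to a week for card/billing data

print((materials_arrive - medicaid_notice).days)  # 40 days under these assumed dates,
                                                  # consistent with "at least 35 days"
```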
Other new dual-eligible beneficiaries may incur out-of-pocket costs at the pharmacy that are too high for their dual-eligible status because of the time it takes information on the beneficiary's new status to reach their PDP. To illustrate this case, we present the hypothetical example of Mrs. Jones, a Medicare beneficiary who becomes eligible for Medicaid but had already enrolled in a PDP. (See fig. 3.) When Mrs. Jones, who also applied for Medicaid on August 11, goes to the pharmacy on September 12, the pharmacy charges her the same copayments that she was charged as a Medicare-only Part D beneficiary instead of the reduced amounts for dual-eligible beneficiaries. This occurs because the PDP, and consequently the pharmacy, does not have up-to-date information on Mrs. Jones's status as a dual-eligible beneficiary; this information must go through processing steps similar to those for Mr. Smith. That is, the state Medicaid agency must first submit Mrs. Jones's name to CMS on its dual-eligible file, which is done monthly. Subsequently, CMS must determine Mrs. Jones's level of subsidy according to the agency's schedule for the deeming process. Mrs. Jones's PDP will change her copayment information only after it receives CMS's weekly notification of enrollment transactions on October 7.

Any dual-eligible beneficiary who has a change in subsidy status, such as a dual-eligible beneficiary who enters a nursing home, may temporarily face higher-than-required out-of-pocket costs for drugs because of processing delays. Residents of nursing homes who are dual-eligible beneficiaries are not required to pay any copayments, but they could be charged until the PDP updates its own data based on information provided by CMS. Recognizing the time lags that pharmacies encounter in receiving complete Part D information on dual-eligible beneficiaries, CMS issued a memorandum in May 2006 requiring PDP sponsors to use the best available data to adjust a beneficiary's copayment, meaning that PDPs need not wait for CMS to notify them of a status change but can make adjustments based on notification received from a nursing facility or state agency. However, according to some we spoke with, PDPs vary in their willingness to act on information provided by a party other than CMS.

The time intervals associated with the Part D enrollment process for new dual-eligible beneficiaries can lengthen when data entry errors occur or when a dual-eligible beneficiary is identified by the state after the state has submitted its monthly dual-eligible file. For example, if CMS cannot match information from its Medicare eligibility database with a beneficiary's information listed in the state's dual-eligible file, the state must find the source of the problem and resubmit the beneficiary's information in the following month's dual-eligible file. State Medicaid agency officials told us that mismatches in 2006 generally occurred because of errors in a birth date or Social Security number. CMS reported that for the month of June 2006, about 17,000 to 18,000 names in state Medicaid agencies' dual-eligible files could not be matched against information in the Medicare eligibility database, down from 26,000 mismatches earlier in the program.

CMS has provided pharmacies with certain tools to help process a claim when a beneficiary does not present adequate billing information or has not been enrolled in a PDP. The eligibility query was designed to provide billing information to pharmacies when dual-eligible beneficiaries do not have their PDP information, but pharmacies report problems using the tool. The enrollment contingency option was designed to ensure that dual-eligible beneficiaries who were not yet enrolled in a PDP could get their medications, while also providing assurance that the pharmacy would be reimbursed for those medications. Problems with reimbursements have led some pharmacies to stop using the enrollment contingency option.

The eligibility query was developed by CMS to help pharmacies determine which plan to bill when a dual-eligible beneficiary lacks proof of enrollment, but about half of the time the query system returns a response indicating that a match was not found (see fig. 4). To obtain billing information on individuals without a PDP membership card or other proof of Part D enrollment, pharmacies have modified their existing computer systems to allow them to query CMS's Medicare eligibility database. Using the Part D eligibility query, pharmacies can enter certain data elements, such as an individual's Social Security number, Medicare ID number, name, and date of birth, to verify whether the individual is a dual-eligible beneficiary and whether the individual has been assigned to a PDP. Ideally, when a match occurs, the pharmacy receives an automated response within seconds showing codes that contain the standard billing information necessary to file a claim, such as the identity of the PDP sponsor and the member ID number. According to CMS, of all the eligibility queries pharmacies initiated in September 2006, about 55 percent enabled them to match data identification elements with an individual in the Medicare eligibility database. In comments on a draft of this report, the agency explained that pharmacies had used the eligibility query for nonenrolled individuals whose data would not otherwise be in the system. In cases where the dual-eligible beneficiary has been assigned to a PDP but the PDP has yet to submit the standard billing information, the eligibility query response contains only a 1-800 phone number for the assigned PDP, and pharmacies must spend additional time calling that number to obtain the needed billing information. In April 2006, about 13 percent of the eligibility query responses that matched a beneficiary did not contain the standard billing information.
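The three possible outcomes of an eligibility query described above (no match, a match without standard billing data, and a complete response) can be sketched as follows. The record layout, matching keys, and response fields are assumptions for illustration; only the outcomes, and the approximate frequencies noted in the comments, come from the text.

```python
from types import SimpleNamespace

def eligibility_query(medicare_db, medicare_id, name, birth_date):
    """Illustrative pharmacy-side query against the Medicare eligibility
    database, returning one of the three outcomes described above."""
    rec = medicare_db.get((medicare_id, birth_date))
    if rec is None or rec.name != name:
        return {"status": "NO_MATCH"}  # ~45% of queries in September 2006
    if rec.billing is None:
        # Assigned to a PDP, but the PDP has not yet submitted standard
        # billing data: the response carries only the plan's 1-800 number
        # (~13% of matched responses in April 2006).
        return {"status": "CALL_PLAN", "phone": rec.pdp_phone}
    return {"status": "OK", **rec.billing}  # e.g., PDP sponsor and member ID

db = {("1A2B3C", "1941-05-11"): SimpleNamespace(
    name="SMITH", pdp_phone="1-800-555-0100",
    billing={"pdp_sponsor": "PLAN-X", "member_id": "M123"})}
print(eligibility_query(db, "1A2B3C", "SMITH", "1941-05-11"))
# {'status': 'OK', 'pdp_sponsor': 'PLAN-X', 'member_id': 'M123'}
```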
Pharmacy association representatives and individual pharmacists we met with told us that improvements to the eligibility query were needed. They said the eligibility query would be more useful if the responses pharmacies receive contained such information as the name of the PDP in which the beneficiary is enrolled, the effective date of the beneficiary's enrollment in the PDP, and the beneficiary's low-income subsidy status, rather than just a 1-800 number or the standard billing information that is now provided. They also noted that the frequency with which the eligibility query responds without the standard billing information is problematic: without adequate billing information, the pharmacy has to make a telephone call to obtain it.

In cases where the eligibility query does not produce a match but the pharmacy has other evidence that the individual is dually eligible for Medicare and Medicaid, such as ID cards or a letter from the state, CMS has provided pharmacies with an enrollment contingency option. That is, the pharmacies can submit their claims to a nationwide PDP sponsor, WellPoint, with which CMS has contracted to provide pharmacies with a source of payment for prescriptions filled for dual-eligible beneficiaries who have yet to be enrolled in a PDP. The WellPoint enrollment contingency option was intended for use in cases where the pharmacy can confirm that an individual is dually eligible for Medicare and Medicaid but cannot determine the beneficiary's assigned PDP through the eligibility query. In such cases, claims are screened for eligibility, and if the beneficiary is indeed dually eligible but has not yet been enrolled in a PDP, the beneficiary is enrolled in a PDP offered by WellPoint.

The WellPoint enrollment contingency option has often not functioned as intended. For example, WellPoint was billed for a number of claims where the beneficiary was enrolled in another PDP. As of November 26, 2006, 46.0 percent of the 351,538 Medicare ID numbers with claims that were billed to WellPoint had already been assigned to a PDP. CMS and WellPoint officials told us WellPoint reconciles payment for these claims directly with the beneficiary's assigned PDP. However, pharmacy association representatives told us that, in some cases, WellPoint required the pharmacies to refund payments for these claims to WellPoint and then submit the claims to the appropriate PDP. In other cases, pharmacies bill WellPoint without supplying the necessary beneficiary data elements. For instance, rather than entering the individual's actual Medicare ID number, the pharmacy may enter dummy information in the Medicare ID field. As of November 26, 2006, CMS reported that roughly 35 percent of the Medicare ID numbers submitted to WellPoint were invalid, requiring pharmacies to refund their outlays on claims using these numbers. In addition, about 4 percent of the Medicare ID numbers were valid, but the individual was either not eligible for Medicaid or not eligible for Part D enrollment (for instance, due to incarceration); WellPoint required pharmacies to refund money for these claims as well. According to one state pharmacy association representative, some pharmacies in the state have discontinued using the WellPoint contingency option because of the reimbursement difficulties. Only about 15 percent of the Medicare ID numbers with claims filed through the WellPoint option were associated with individuals eligible for enrollment in the WellPoint PDP.
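Taken together, the reported percentages account for essentially all of the Medicare ID numbers billed through the contingency option. A short calculation illustrates the breakdown as of November 26, 2006; the counts are derived from the rounded percentages above and are therefore approximate.

```python
total = 351_538  # Medicare ID numbers with claims billed to WellPoint
shares = {
    "already enrolled in another PDP (reconciled plan to plan)": 0.46,
    "invalid Medicare ID (pharmacy must refund WellPoint)":      0.35,
    "valid ID but not Medicaid- or Part D-eligible (refund)":    0.04,
    "eligible for enrollment in the WellPoint PDP":              0.15,
}
for outcome, share in shares.items():
    print(f"{outcome}: ~{round(total * share):,}")
print("share total:", round(sum(shares.values()), 2))  # 1.0 -- the categories are exhaustive
```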
Pharmacy association representatives noted that some pharmacies dispense medications to individuals without proof of Part D enrollment, hoping to get the billing information needed to properly submit a claim at a later date. One state pharmacy association representative noted that pharmacies serving only long-term care facilities dispense medication without assurance of reimbursement because they are required to do so under their contractual arrangements with the long-term care facilities. Pharmacy association representatives told us that after-the-fact reimbursement of drug claims is problematic: it can be burdensome for staff to determine where to appropriately resubmit the claim, and PDPs will sometimes reject retroactive claims that are submitted after a certain period of time has elapsed.

With the current combination of policies and requirements under which CMS operates, Medicare pays PDPs to provide retroactive coverage to Medicare beneficiaries newly eligible for Medicaid. However, until March 2007, CMS did not inform these beneficiaries of their right to seek reimbursement for costs incurred during the retroactive period, which can last several months. Given the vulnerability of the dual-eligible beneficiary population, it seems unlikely that the majority of these beneficiaries would have contacted their PDP for reimbursement if they were not notified of their right to do so. GAO found that Medicare paid PDPs millions of dollars in 2006 for coverage during periods for which dual-eligible beneficiaries may not have sought reimbursement for their drug costs.

Retroactive coverage for dual-eligible beneficiaries stems both from CMS's Part D policy and from Medicaid requirements. Under the MMA, once an individual who is not enrolled in a plan qualifies as a dual-eligible beneficiary, CMS is required to enroll the individual in a PDP. However, the MMA does not precisely define when Part D coverage for these beneficiaries must become effective. As initially written, CMS's policy for enrolling a Medicare beneficiary without Part D coverage who became eligible for Medicaid set the effective coverage date prospectively, as the first day of the second month after CMS identified the individual as both Medicare and Medicaid eligible. In March 2006, CMS changed this policy, making coverage retroactive to the first day of the month of Medicaid eligibility. In making this change, CMS cited concerns about enrollees experiencing a gap in coverage under its prior enrollment policy. Federal Medicaid law requires that a Medicaid beneficiary's eligibility be set retroactively up to 3 months prior to the date of the individual's application if the individual met the program requirements during that time. Therefore, for this group of dual-eligible beneficiaries, Part D coverage may extend retroactively for several months prior to the actual date of PDP enrollment by CMS.

The mechanics and time frames of Part D retroactive coverage can be illustrated by the hypothetical case of Mr. Smith, a Medicare beneficiary who was not enrolled in a PDP when he applied for Medicaid. On September 11, Mr. Smith's state Medicaid agency made him eligible for Medicaid benefits as of May 11, 3 months prior to his August 11 program application, as he met Medicaid eligibility requirements during that retroactive period.
In October, CMS notified Mr. Smith of his enrollment in a PDP and indicated that his Part D coverage was effective retroactively as of May 1, the first day of the month in which he became eligible for Medicaid.

Medicare's payment to Mr. Smith's PDP, beginning with his retroactive coverage period, consists of three major components, two of which are fixed and a third that varies with Mr. Smith's cost-sharing obligations. The first component is a monthly direct subsidy payment CMS makes to Mr. Smith's PDP toward the cost of providing the drug benefit. The second component is the monthly payment CMS makes to Mr. Smith's PDP to cover his low-income benchmark premium. The third component covers nearly all of Mr. Smith's cost-sharing responsibilities, such as any deductibles or copayments that he would pay if he were not a dual-eligible beneficiary. CMS makes these cost-sharing payments to his PDP based on the PDP's estimate of the typical monthly cost-sharing paid by beneficiaries. CMS later reconciles Mr. Smith's cost-sharing payments with the PDP based on his actual drug utilization as reported by the PDP to CMS.

Under CMS's retroactive coverage policy, Mr. Smith's PDP receives all three components of payments for the months of May, June, July, August, and September, although Mr. Smith was not enrolled in the PDP until October. Medicare pays Mr. Smith's PDP sponsor about $60 a month for the direct subsidy and another monthly payment for the low-income premium up to the low-income benchmark, which ranges from $23 to $36 depending on Mr. Smith's location. (A worked sketch of this arithmetic follows this discussion.) We estimate that for all dual-eligible beneficiaries enrolled by CMS with retroactive coverage, Medicare paid PDPs about $100 million in 2006 for these two monthly payment components for the retroactive period. Unlike the cost-sharing component of Medicare's payments, the two monthly payment components are not subject to a reconciliation process tied to utilization of the benefit. This means that if Mr. Smith's PDP did not reimburse Mr. Smith for any prescription drugs purchased during the retroactive coverage period, the PDP would have to refund Medicare the cost-sharing payment but would keep the direct subsidy payments and the low-income premium payments.

Medicare makes the direct subsidy and low-income premium payments for the retroactive coverage period because CMS requires PDP sponsors to reimburse beneficiaries for covered drug costs incurred during this period. However, we found that CMS did not inform dual-eligible beneficiaries about their right to seek reimbursement or instruct PDP sponsors on what procedures to use for reimbursing beneficiaries or others that paid on the beneficiary's behalf for drugs purchased during retroactive periods. The model letters that CMS and PDPs used until March 2007 to notify dual-eligible beneficiaries of their PDP enrollment did not include any language concerning reimbursement of out-of-pocket costs incurred during retroactive coverage periods. After reviewing a draft of this report and our recommendations, CMS modified the model letters that the agency and PDPs use to notify dual-eligible beneficiaries about their PDP enrollment. The revised letters let beneficiaries know that they may be eligible for reimbursement of some prescription costs incurred during retroactive coverage periods.
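To make the magnitude of the fixed payment components concrete, the following minimal sketch works through the Mr. Smith example, assuming the approximate figures cited above (a $60 monthly direct subsidy and a low-income benchmark premium of $23 to $36); actual amounts vary by PDP and location.

```python
# Retroactive payment arithmetic for the hypothetical Mr. Smith, whose
# coverage was made effective May 1 although he was not enrolled until
# October: five months of retroactive coverage (May through September).

DIRECT_SUBSIDY = 60.0                    # approximate monthly direct subsidy
PREMIUM_LOW, PREMIUM_HIGH = 23.0, 36.0   # low-income benchmark premium range

retro_months = 5                         # May, June, July, August, September

low_total = retro_months * (DIRECT_SUBSIDY + PREMIUM_LOW)
high_total = retro_months * (DIRECT_SUBSIDY + PREMIUM_HIGH)
print(f"Fixed payments to the PDP for the retroactive period: "
      f"${low_total:.0f} to ${high_total:.0f}")
# -> $415 to $480 for one beneficiary. Unlike the cost-sharing component,
#    these two components are not reconciled against actual drug utilization.
```

Summed across all beneficiaries CMS enrolled with retroactive coverage, such per-beneficiary amounts are consistent with our estimate of about $100 million in 2006 for the two fixed components.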
Given the vulnerability of the dual-eligible beneficiary population, it seems unlikely that the majority of these beneficiaries would have contacted their PDP for reimbursement if they were not notified of their right to do so, nor would they likely have retained proof of their drug expenditures. In the case of Mr. Smith, for example, he would need receipts for any drug purchases made during the retroactive period—about 5 months preceding the date he was notified of his PDP enrollment—at a time when he could not foresee the need for doing so. Finally, Mr. Smith or someone helping him would have to find out how and where to claim reimbursement from his PDP. Under CMS's 2006 policy, even if Mr. Smith had submitted proof of his drug purchases, he would not be eligible for reimbursement if CMS had enrolled him in a PDP that did not cover his prescriptions or did not have Mr. Smith's pharmacy in its network. Nevertheless, Mr. Smith's PDP would have received monthly direct subsidy and low-income premium payments for Mr. Smith for the retroactive coverage period.

For 2006, CMS did not calculate aggregate payments made to PDP sponsors for retroactive coverage. Further, the agency did not monitor reimbursements to dual-eligible beneficiaries for drug purchases made during the retroactive period. Agency officials told us that they have the data needed to determine the PDP payments and beneficiary reimbursements, but they had not analyzed them. Because it has not tracked this information, CMS does not know how much of the roughly $100 million in direct subsidy and low-income premium payments for retroactive coverage in 2006 was used by PDPs to pay for drug expenses claimed by dual-eligible beneficiaries for drugs purchased during retroactive coverage periods.

Given the experience of early 2006, CMS has taken several actions to improve the transition of dual-eligible beneficiaries to Part D. First, the agency has taken steps to facilitate the change in drug coverage for Medicaid beneficiaries whose date of Medicare eligibility can be predicted—about one-third of new dual-eligible beneficiaries enrolled by CMS. In August 2006, CMS implemented a new prospective enrollment process that state Medicaid agencies may use to eliminate breaks in prescription drug coverage for these beneficiaries. Second, CMS is taking steps to improve tools pharmacies use when dual-eligible beneficiaries seek to fill a prescription but do not have their PDP enrollment information. Third, CMS has plans to integrate the agency's information systems to increase the efficiency of the systems involved in the enrollment process.

CMS implemented a new prospective enrollment process in August 2006 to help Medicaid beneficiaries who become Medicare eligible transition to Part D without a break in coverage. Under the prospective enrollment process, state Medicaid agencies voluntarily can include on the monthly state dual-eligible file those Medicaid beneficiaries predicted to become Medicare eligible, for instance, Medicaid beneficiaries who are nearing their 65th birthday. Two months prior to the date the beneficiary will become Medicare eligible, CMS assigns the beneficiary to a PDP. By completing the assignment process before these beneficiaries become Medicare eligible, CMS officials told us, these beneficiaries should have all their PDP enrollment information when their Medicare Part D coverage begins.
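The timing rule is simple enough to sketch: assignment occurs two months before the predicted Medicare eligibility date, leaving time for notification before coverage begins. The function below is a minimal illustration under that assumption, not CMS's actual system logic.

```python
from datetime import date

def prospective_assignment_date(medicare_eligibility_date: date) -> date:
    """First day of the month two months before predicted Medicare eligibility."""
    month = medicare_eligibility_date.month - 2
    year = medicare_eligibility_date.year
    if month < 1:          # handle January and February eligibility dates
        month += 12
        year -= 1
    return date(year, month, 1)

# A Medicaid beneficiary predicted to become Medicare eligible on June 1, 2007,
# would be assigned to a PDP in April, so enrollment information can reach the
# beneficiary and pharmacies before Part D coverage begins.
print(prospective_assignment_date(date(2007, 6, 1)))  # 2007-04-01
```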
Prior to the prospective enrollment process, Medicaid beneficiaries who became Medicare eligible experienced a gap of up to 2 months during which they were no longer eligible for Medicaid prescription drug coverage but had yet to receive information on their Medicare Part D drug coverage. This is because state Medicaid agencies were allowed to include in the monthly state dual-eligible file only those dual-eligible beneficiaries who were known to be eligible for Medicaid and Medicare at the time the file was sent. State Medicaid agencies were required to end Medicaid coverage for prescription drugs when the beneficiary became Part D eligible.

Because prospective enrollment was in its very early stages during our audit work, we could not evaluate how effectively the new process is working to mitigate the gaps in coverage some new dual-eligible beneficiaries faced. In the first month of implementation, 38 state Medicaid agencies submitted records identifying at least some prospective dual-eligible beneficiaries. CMS officials attributed some state Medicaid agencies' failure to submit names of prospective dual-eligible beneficiaries in August 2006 to the short time frame the agencies were given to change how they compiled the dual-eligible file. As of November 2006, the state Medicaid agencies for all 50 states and the District of Columbia had included prospective dual-eligible beneficiaries in their monthly files. While it is too early to gauge the impact of the process on beneficiaries, we believe that prospective enrollment has the potential to provide continuous coverage for those beneficiaries who can be predicted to become dually eligible. State Medicaid officials also told us that prospective enrollment is a beneficial change to the process of identifying and enrolling new dual-eligible beneficiaries.

CMS is taking steps to improve the eligibility query and the billing contingency option. CMS worked with the pharmacy industry to change the format of the eligibility query to include more complete information. Also, CMS officials said they planned to make changes to the enrollment contingency contract to institute a preliminary screen of Medicare eligibility and Part D plan enrollment before a claim goes through the system.

In response to requests from pharmacies that more information be provided through the eligibility query, CMS officials told us that agency staff worked with the National Council for Prescription Drug Programs, Inc.—a nonprofit organization that develops standard formats for data transfers to and from pharmacies—to change the format of the eligibility query and increase the amount of information pharmacies could get from the responses. As part of the planned improvements, eligibility query responses for beneficiaries identified in the database will include—in addition to the data elements previously included—the beneficiary's name and birth date, the PDP's identification number, and the beneficiary's low-income subsidy status. The new specifications for the eligibility query were released December 1, 2006. Pharmacies have to work with their own software vendors to implement the changes in their own systems.

CMS is also taking steps to improve the availability of the information pharmacies access through the eligibility query. CMS officials told us that, after being notified of a confirmed enrollment by CMS via a weekly enrollment update, PDPs should submit standard billing information to CMS within 72 hours.
However, sometimes PDPs hold the information for longer than 72 hours. According to CMS, the time it takes PDPs to submit billing information to the agency has improved since the beginning of the Part D program. While CMS does not monitor the amount of time it takes for PDPs to submit billing information, the agency has begun monitoring Medicare's eligibility database to identify PDPs that have a large number of enrollees for whom billing information is missing. As part of this effort, CMS sends a file monthly to each PDP that lists enrollees without billing information. CMS guidance to PDPs states that each PDP should successfully submit standard billing information for 95 percent of the PDP's enrollees each month. According to CMS data, as of October 1, 2006, about 27 percent of PDPs with CMS-assigned, dual-eligible beneficiaries had billing information for less than 95 percent of their CMS-assigned, dual-eligible beneficiaries. Of those that did not meet the 95 percent threshold, most had fewer than 20 CMS-assigned, dual-eligible beneficiaries.

CMS has implemented certain changes for 2007 to address the large number of problematic claims going through the WellPoint enrollment contingency option. It has directed WellPoint to check an individual's Medicare eligibility and Part D enrollment before the claim is approved, using a new daily update report from Medicare's eligibility database. This is expected to allow WellPoint to deny claims at the point of sale that should not be paid through this option, thereby reducing the number of claims that must be reconciled at a later date.

CMS is now making changes to improve the efficiency of key information systems involved in the enrollment process. It is redesigning and integrating these information systems to reduce redundancies and to synchronize data currently stored in different systems, which should lead to a more efficient enrollment process. While CMS is performing unit, system, and integration testing on these changes, it has no definitive plans to perform end-to-end testing on the changes to the overall information systems infrastructure. CMS is pursuing contractual help to determine the extent of testing that it can perform in the future.

CMS is currently integrating information from the Medicare eligibility database with information from the enrollment transaction system because duplicative demographic and other data are stored in both systems. According to CMS information technology (IT) officials, because these data are not stored in one place and a huge amount of enrollment traffic is moving back and forth between these two systems, it has been a very large burden for the agency to synchronize and maintain a single set of data. CMS IT officials told us that they spent the first 6 months of Part D implementation stabilizing the supporting information systems and have only now begun to look at efficiencies that can be achieved through integration and mergers that can reduce maintenance and processing times. In the long term, the agency hopes to integrate all beneficiary, entitlement, and enrollment information into one database. CMS IT officials contend that true end-to-end testing of these current changes may not be feasible given the agency's limited time and resources and the number of scenarios that would have to be tested in the more than 600 different PDPs.
In addition, true end-to-end testing would involve thorough interface testing with SSA, state Medicaid agency, and PDP systems, which are not standardized and vary widely. While we agree that end-to-end testing will be difficult given the multiple partners involved and the complexity of the program's systems infrastructure, it is crucial to mitigate the risks inherent in CMS's planned changes. End-to-end testing is a highly recognized systems development best practice and is considered essential to ensure that a defined set of interrelated systems, which collectively support an organizational core business area or function, interoperate as intended in an operational environment. These interrelated systems include not only those owned and managed by the organization, but also the external systems with which they interface. Because end-to-end testing can involve multiple systems and numerous partner interfaces, it is typically approached in a prioritized fashion, taking into consideration resources, test environments, and the willingness of external parties to participate. CMS IT officials acknowledge that there are risks associated with implementing these changes but still do not plan to conduct end-to-end testing even on a limited basis.

As required under the MMA and implementing regulations, for dual-eligible beneficiaries who have not enrolled in a Part D plan, CMS makes random assignments to PDPs based only on the premium amount and the geographic location of the PDP. This method ensures that PDP sponsors enroll an approximately equal number of beneficiaries. However, state Medicaid officials and others assert that dual-eligible beneficiaries assigned to PDPs by CMS are often enrolled in PDPs that do not meet their drug needs. For the initial PDP assignments for January 2006, some SPAPs used additional criteria—including drugs used by beneficiaries—to enroll or reassign beneficiaries to PDPs that were more appropriate to their individual circumstances. SPAP officials reported that these alternative methods produced beneficial results. However, CMS and PDP sponsors pointed out that random assignment works to enroll beneficiaries into PDPs and that there is no need to use additional criteria.

CMS assists in the enrollment of dual-eligible beneficiaries who have not enrolled in a Part D plan on their own by randomly assigning them in approximately equal numbers among eligible PDP sponsors in each region. Under the MMA, the agency may only consider the premiums of the PDPs in the region when making these assignments. CMS first distributes beneficiaries randomly among those PDP sponsors that offer one or more PDPs at or below the low-income benchmark—the average premium in a region—if there is more than one eligible PDP serving the beneficiary's geographic location. It then assigns the beneficiaries randomly among all eligible PDPs offered by each PDP sponsor. Following the first round of enrollments, CMS has assigned new dual-eligible beneficiaries to PDPs monthly. Dual-eligible beneficiaries may change PDPs at any time during the enrollment year. When dual-eligible beneficiaries change PDPs, coverage under the new PDP becomes effective the following month. As of November 2006, 29.8 percent—1,703,018—of dual-eligible beneficiaries initially enrolled by CMS subsequently made a PDP election of their own choosing.

During the original assignments for 2006, CMS assigned some dual-eligible beneficiaries to PDPs that did not serve the area where they lived.
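As the next paragraph explains, these mistaken assignments stemmed largely from SSA address records that were out-of-date or that belonged to a beneficiary's representative payee. The following hypothetical sketch illustrates the failure mode, and the submitting-state approach CMS adopted in April 2007 (discussed later in this report); the record layout and function names are our own illustrations, not CMS's actual systems.

```python
# Hypothetical beneficiary record: the SSA mailing address belongs to a
# representative payee in Virginia, but the beneficiary lives in Arizona
# and appears on Arizona's dual-eligible file.
beneficiary = {
    "ssa_mailing_state": "VA",   # representative payee's address from SSA
    "submitting_state": "AZ",    # state that listed the person on its file
}

def assignment_region_2006(record):
    # 2006 behavior: key the PDP region to the SSA mailing address,
    # which may be stale or belong to a representative payee.
    return record["ssa_mailing_state"]

def assignment_region_2007(record):
    # April 2007 behavior: use the state that submitted the beneficiary
    # on its dual-eligible file, which reflects where the person lives.
    return record["submitting_state"]

print(assignment_region_2006(beneficiary))  # VA: PDP may not serve the beneficiary
print(assignment_region_2007(beneficiary))  # AZ: PDP serves the beneficiary's area
```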
Such out-of-area assignments occurred for about 107,000 dual-eligible beneficiaries, 1.9 percent of the population randomly assigned to PDPs at that time. In these cases, CMS made inappropriate assignments because it used address information from SSA that was out-of-date or that corresponded to the individual's representative payee—the individual or organization that manages the beneficiary's money on the beneficiary's behalf—rather than to the beneficiary. For example, if a beneficiary resided in Arizona and the representative payee resided in Virginia, CMS would have assigned that beneficiary to a PDP serving Virginia. CMS officials pointed out that this problem was relatively minor because most of these dual-eligible beneficiaries (about 98.1 percent of those affected) were enrolled in a PDP offered by a PDP sponsor that either offered coverage in the beneficiary's actual region or had a national pharmacy network. CMS officials told us that PDP sponsors serving the remainder of these beneficiaries were instructed to provide benefits to this group in accordance with their out-of-network benefits. CMS officials also told us that the fact that dual-eligible beneficiaries can switch PDPs at any time addresses the issue. PDP sponsors were still required to notify all affected beneficiaries of the out-of-area assignment. CMS instructed PDPs to notify those dual-eligible beneficiaries living in an area not served by the PDP sponsor that they would be disenrolled at some future point and must contact Medicare to enroll in an appropriate PDP.

Under the MMA, SPAPs may enroll Part D beneficiaries into PDPs as their authorized representatives. Although CMS encouraged SPAPs to follow the same enrollment process CMS uses for dual-eligible beneficiaries, CMS has allowed certain SPAPs to use additional assignment criteria. Qualified SPAPs may use alternative assignment methods—often referred to as intelligent random assignment (IRA)—to identify PDP choices for their members that meet their individual drug needs. IRA methods consider beneficiary-specific information, such as drug utilization, customary pharmacy, and other objective criteria, to narrow the number of PDP options to which a member could be assigned. With CMS approval, SPAPs may enroll members randomly among PDPs that meet these given criteria. However, SPAPs may not discriminate among PDPs by enrolling members into a specific or preferred PDP—a practice referred to as steering.

The SPAP in Maine is one example of an organization that took steps, with CMS approval, to reassign noninstitutionalized, dual-eligible beneficiaries by aligning their drug needs with PDP formularies, ultimately reassigning nearly half of its dual-eligible population to PDPs other than those assigned by CMS. In June 2005, state legislation was enacted that authorized the inclusion of all dual-eligible beneficiaries in Maine's existing SPAP membership. Maine officials sought to pass this legislation in response to concerns that this population could experience coverage disruptions during the transition to Medicare Part D as implemented by CMS. They reported that, although these individuals may switch PDPs at any time, it could take months for beneficiaries to transfer to a more appropriate PDP. Thus, after CMS had randomly assigned dual-eligible beneficiaries to PDPs, Maine reassigned certain noninstitutionalized, dual-eligible beneficiaries to different PDPs prior to January 1, 2006.
The state found support for its decision to reassign dual-eligible beneficiaries in a state analysis, which indicated that CMS assignments resulted in a poor fit for many dual-eligible beneficiaries in Maine. (See table 1.) According to the analysis, CMS had assigned roughly one-third of dual-eligible beneficiaries to PDPs that covered all of their recently used drugs. However, nearly half of dual-eligible beneficiaries in the state had a match rate—the percentage of a beneficiary's medications that appeared on the CMS-assigned PDP formulary—lower than 80 percent. The analysis also showed that about one in five dual-eligible beneficiaries had match rates below 20 percent.

As an alternative to random assignment based on PDP premiums and location, Maine officials developed an IRA method that considered a beneficiary's drug utilization and customary pharmacy to make new PDP assignments. Officials developed a computer program that generated scores used to rank PDPs in order of best fit for each beneficiary. The program included the 10 PDPs in the state with premiums at or below the low-income benchmark that provided their formularies to the state. It compared the drugs on these PDPs' formularies to the beneficiary's drug utilization history compiled from Medicaid claims for the 3 months prior to the date of assignment (September, October, and November 2005) and assigned an aggregate score to each PDP. The scoring system differentiated between instances where a drug was on the formulary with and without prior authorization requirements. For PDPs with identical scores, the program assessed pharmacy location. If more than one PDP had the beneficiary's customary pharmacy in its network, the program randomly assigned the beneficiary among those PDPs with the highest scores. Although Maine officials conducted this analysis for all of the state's 2005 dual-eligible beneficiaries, after conferring with CMS officials they reassigned only those dual-eligible beneficiaries who had lower than an 80 percent formulary match, accounting for 14,558 individuals, about 46 percent of the state's dual-eligible population.

Maine officials reported that IRA resulted in a marked improvement in match rates for beneficiaries compared to CMS's PDP assignments. For each PDP, officials calculated the match rate before and after IRA for reassigned beneficiaries. (See table 2.) This analysis showed that before the use of IRA, the weighted average match rate for all participating PDPs was 34.14 percent and ranged from 20.59 percent to 38.64 percent across PDPs. Following the application of IRA, the weighted average match rate rose to 99.86 percent, with little variation across PDPs.

Maine officials noted that their continued use of IRA for dual-eligible beneficiaries is contingent on their access to key data. To make the initial assignments effective January 1, 2006, the state had drug utilization information from its own Medicaid claims system. However, if the state chooses to reassign individuals again, it must obtain up-to-date utilization information. To help ensure that it would have the data needed to perform another round of IRA in the future, Maine's SPAP included in its contracts with PDP sponsors a requirement to exchange with the SPAP information on pharmacy networks, formularies, and drug utilization on an ongoing basis. For 2007, Maine reassigned 10,200 dual-eligible beneficiaries, about 22 percent, to a new PDP.
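Maine's description implies a straightforward scoring procedure: rate each benchmark PDP's formulary against the beneficiary's recent claims, distinguish unrestricted coverage from coverage requiring prior authorization, break ties by customary pharmacy, and choose randomly among the best fits. The sketch below follows that outline; the point values and data layout are assumptions on our part, since the report does not document the exact weights Maine used.

```python
import random

def match_rate(pdp, drugs_used):
    """Percentage of the beneficiary's recent drugs on the PDP's formulary;
    Maine reassigned beneficiaries whose CMS-assigned PDP scored below 80."""
    covered = sum(1 for drug in drugs_used if drug in pdp["formulary"])
    return 100.0 * covered / len(drugs_used)

def formulary_score(pdp, drugs_used):
    """Aggregate fit score; unrestricted coverage outranks prior authorization."""
    score = 0
    for drug in drugs_used:
        status = pdp["formulary"].get(drug)  # None, "open", or "prior_auth"
        if status == "open":
            score += 2
        elif status == "prior_auth":
            score += 1
    return score

def intelligent_random_assignment(beneficiary, benchmark_pdps):
    scores = {pdp["name"]: formulary_score(pdp, beneficiary["drugs"])
              for pdp in benchmark_pdps}
    best = max(scores.values())
    top = [pdp for pdp in benchmark_pdps if scores[pdp["name"]] == best]
    # Tie-break on customary pharmacy, then assign randomly among the best fits
    with_pharmacy = [pdp for pdp in top
                     if beneficiary["pharmacy"] in pdp["network"]]
    return random.choice(with_pharmacy or top)["name"]

# Example: a PDP covering two of a beneficiary's three drugs has a
# 66.7 percent match rate, below Maine's 80 percent reassignment threshold.
pdp = {"name": "Plan A", "network": {"Main St Pharmacy"},
       "formulary": {"drug1": "open", "drug2": "prior_auth"}}
person = {"drugs": ["drug1", "drug2", "drug3"], "pharmacy": "Main St Pharmacy"}
print(match_rate(pdp, person["drugs"]))              # 66.66...
print(intelligent_random_assignment(person, [pdp]))  # Plan A
```

Applied to Maine's data, a procedure of this kind raised the weighted average match rate for reassigned beneficiaries from about 34 percent to nearly 100 percent, as described above.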
The state of New Jersey’s SPAP—known as the Pharmaceutical Assistance to the Aged and Disabled (PAAD) Program—developed and implemented an IRA method, with CMS approval, that allowed it to enroll its members in PDPs that best served their drug needs. PAAD officials designed their IRA to simulate the decision process that would occur if beneficiaries had received assistance from a State Health Insurance Assistance Program counselor or had used CMS’s Web-based formulary finder on their own. PAAD officials engaged a contractor to develop a computer program that would identify PDPs that cover each individual’s prescription drug needs. The program matched information on members’ maintenance drugs with formulary and pharmacy network information for all PDPs offered in New Jersey at or below the low-income benchmark. The program treated married couples as one member in the assignment process to ensure that they would be enrolled in the same PDP. In all, PAAD matched 210,000 beneficiaries among six PDPs. Following the application of IRA and prior to enrolling individuals, PAAD sent one of two letters to beneficiaries that explained the results of the IRA method. PAAD sent a letter to some beneficiaries indicating that one PDP best met their needs in terms of its formulary match and inclusion of their customary pharmacy. Other beneficiaries were sent letters informing them that their needs would be equally met by multiple PDPs and identified those PDPs. To satisfy CMS’s requirement that the state not steer beneficiaries to a particular PDP, New Jersey included a full list of all eligible PDPs in the state on the back of the letter. PAAD staff sent these letters in October 2005 and offered to enroll these beneficiaries if they did not receive a response by November 2005. Individuals were asked to notify PAAD of the PDP that they wanted to join and PAAD moved to enroll them in that PDP. For beneficiaries who did not respond to their letters, PAAD enrolled them into the PDP identified as the best fit by the IRA, or randomly among PDPs that equally met their needs. Of the roughly 210,000 letters sent to SPAP members, PAAD received about 130,000 letters requesting enrollment in the suggested PDP within the first month or two after PAAD sent the letters. In total, PAAD enrolled 165,207 beneficiaries, about 78.7 percent of those sent letters, into PDPs identified as the best fit by the IRA. While CMS has allowed certain SPAPs to use IRA methods to assign or reassign their members, CMS does not support the use of IRA methods to assist dual-eligible beneficiaries with Part D enrollment. CMS officials told us that any proposal to add drug utilization as a criterion for PDP assignments assumes that a beneficiary should remain on the same drugs. They contend that beneficiaries can change prescriptions to a similar drug that is on their CMS-assigned PDP’s formulary and receive equivalent therapeutic value. Moreover, the officials pointed out the ability of dual- eligible beneficiaries to switch PDPs. Overall, CMS officials maintained the position that its PDP assignment method for dual-eligible beneficiaries used in fall 2005 worked well. In contrast, state Medicaid officials we met with generally support the use of IRA methods to assist beneficiaries in choosing a PDP that meets their individual circumstances. 
State Medicaid officials we met with maintained that, overall, dual-eligible beneficiaries would have been in a better position during the initial transition to Medicare Part D if drug utilization information had been considered in the PDP assignment process. A representative of the National Association of State Medicaid Directors (NASMD) asserted that while CMS's assignment process was fair to PDP sponsors, it did not ensure that beneficiaries were enrolled in appropriate PDPs. The representative reported that CMS referred individuals who wanted to take their drug usage into account in selecting a PDP to the Medicare.gov Web site, which most dual-eligible beneficiaries are not able to use.

Some state Medicaid agencies indicated their support for IRA in the months prior to Part D implementation. At that time, 15 state Medicaid agencies made commitments to a software vendor to use a free software package designed to match beneficiaries' drug utilization history with PDP formularies as an educational tool to help beneficiaries choose the PDP best aligned to their individual drug needs. However, litigation over use of the IRA software led to delays, and by the time the delays ended, CMS had already assigned dual-eligible beneficiaries to PDPs. State Medicaid agencies reported that they then did not have the time to match beneficiaries, send out scorecards, and allow beneficiaries to switch PDPs before the January 1, 2006, implementation date.

Executives of PDP sponsors we spoke with stated that CMS's assignment method generally worked well; however, some executives raised concerns about IRA methodology. Two PDP sponsors raised concerns that IRA methods misinterpret formulary information. Executives from one PDP sponsor contended that there is no need to look at drug utilization information because of the requirements for broad formularies. These executives also told us that using this method could increase the program's costs by making PDPs cover more drugs.

CMS actions to address problems associated with PDP implementation of pharmacy transition processes led to a more uniform application of transition processes. Pharmacy transition processes allow new PDP enrollees to obtain drugs not normally covered by their new PDP while they contact their physician about switching to a covered drug. In response to Part D sponsors' inconsistent implementation of transition drug coverage processes in early 2006, CMS issued a series of memoranda that clarified its expectations. PDP sponsors, pharmacy groups, and beneficiary advocates told us that since then, beneficiaries' ability to obtain transition drug coverage has substantially improved. However, they also report that dual-eligible beneficiaries remain unaware of or confused about the significance of receiving a transition drug supply at the pharmacy and are not using the transition period to address formulary issues. CMS made the transition process requirements in its 2007 contracts with PDP sponsors more specific.

After receiving complaints that Part D enrollees experienced difficulties obtaining their medications, CMS took steps to address issues related to the availability of transition drug supplies. Federal regulations require PDP sponsors to provide for a transitional process for new enrollees who have been prescribed Part D-covered drugs not on the PDP's formulary. CMS instructed PDP sponsors to submit a transition process, which would be subject to the agency's review, as part of the application to participate in Part D.
Although CMS specified its expectations for a transition process in March 2005 guidelines for Part D sponsors, the sponsors had discretion in devising their processes. The March 2005 guidelines specified that Part D sponsors should consider filling a one-time transition supply of nonformulary drugs to accommodate the immediate need of the beneficiary. The agency suggested that a temporary 30-day supply would be reasonable to enable the relevant parties to work out an appropriate therapeutic substitution or obtain a formulary exception, but it allowed Part D sponsors to decide the appropriate length of this one-time transitional supply. For residents in long-term care facilities, CMS guidance indicated that a transition period of 90 to 180 days would be appropriate for individuals who require some changes to their medication in order to accommodate PDP formularies.

During the early weeks of the program, CMS received reports that the way in which some PDP sponsors implemented their transition processes adversely affected beneficiaries' ability to obtain transition supplies. Sponsors differed in the time period set for providing transition coverage; some PDPs provided the suggested 30-day supply, while other PDPs provided beneficiaries with as little as a 15-day initial supply. Some PDP sponsors did not apply their transition coverage processes to instances where a formulary drug was subject to utilization restrictions. For example, CMS received complaints that individuals were not given a transition supply when their medications had prior authorization, step therapy, or quantity limit restrictions. Additionally, PDP sponsors' customer service representatives and pharmacies were generally unaware of the transition processes and how to implement them. Pharmacy association representatives also told us of problems overriding the usual pharmacy billing system in order to process a claim when dispensing a transition supply.

CMS responded to the reported problems concerning the uneven application of transition processes by issuing a series of memoranda to PDP sponsors to clarify its expectations. On January 6, 2006, CMS issued a memorandum to PDP sponsors highlighting the need for beneficiaries to receive transition supplies at the pharmacy. The memorandum emphasized that PDP sponsors should (1) train customer service representatives to respond to questions about the PDP's transition process, (2) provide pharmacies with appropriate instructions for billing a transition supply, and (3) ensure that enrollees have access to a temporary supply of drugs with prior authorization and step therapy requirements until such requirements can be met. On January 13, 2006, CMS issued guidance stating that PDP sponsors should establish an expedited process for pharmacists to obtain authorization or override instructions, and authorize PDP customer service representatives to make or obtain quick decisions on the application of transition processes. In a January 18, 2006, memorandum, CMS reiterated its policy that PDP sponsors should provide at least an initial 30-day supply of drugs and that PDPs should extend that coverage even further in situations where a longer transition period may be required for medical reasons.
In addition, CMS asked PDP sponsors to consider contacting beneficiaries receiving transition supplies of drugs to inform them that (1) the supply is temporary, (2) they should contact the PDP or physician to identify a drug substitution, and (3) they have a right to request an exception to the formulary, along with the procedures for requesting such an exception. When many beneficiaries continued to return to the pharmacy for refills without having successfully resolved their formulary issues, CMS issued a memorandum on February 2, 2006, calling for an extension of the Part D transition period to March 31, 2006. The agency asserted that the extension was needed to give beneficiaries sufficient time to work with their provider to either change prescriptions or request an exception. In another memorandum to PDP sponsors on March 17, 2006, CMS reemphasized the objectives of the transition process and highlighted the need to inform beneficiaries of what actions to take to resolve formulary issues following the receipt of a transition supply.

Since CMS clarified its transition process guidance to PDP sponsors, many of the issues surrounding transition processes have been resolved. Some of the pharmacy and long-term care associations and Medicaid officials we spoke with told us that problems with providing transition drug coverage have largely been addressed. They noted that the issues surrounding the implementation of the transition processes have significantly improved. To oversee PDP compliance with transition coverage processes, CMS tracks complaints and monitors the time it takes Part D sponsors to resolve complaints. CMS officials said that they rely on beneficiary and pharmacy complaints for information about problems with transition coverage. The agency also assigns case workers to ensure that PDPs resolve these issues. Although CMS can issue monetary penalties, limit marketing, and limit enrollment for PDPs, officials reported that no such punitive actions have been taken against any PDP regarding transition process compliance.

Despite PDP sponsors' efforts to communicate with beneficiaries receiving transition supplies, beneficiaries do not always take needed action during the transition period. Consequently, some dual-eligible beneficiaries return to the pharmacy without having worked with their physician to apply to get their drugs covered or find a substitute drug. While three PDP sponsors told us how they conveyed information about the transition period, two of these PDP sponsors acknowledged that dual-eligible beneficiaries often do not use the transition period as intended. For example, one PDP executive told us that beneficiaries often do not realize that a transition supply has been provided and that they have to apply to the PDP to continue receiving coverage for that particular drug. Representatives from some pharmacy associations and long-term care groups that we spoke to also agreed that, even when notified, dual-eligible beneficiaries are unaware of the implications of the policy. Some pharmacy representatives we spoke with noted that when dual-eligible beneficiaries receive a transition supply, they are often unaware that this supply is temporary and therefore return to the pharmacy the following month in an effort to refill the same prescription without having tried to switch to a formulary medication or obtain permission to continue to have the drug covered.
Two other pharmacy association representatives noted that beneficiary understanding of transition supplies is a particular problem for dual-eligible beneficiaries in the long-term care setting, who often do not open or read the notification letter sent from the PDP. Staff in long-term care facilities often find unopened mail sent to the beneficiary from the PDP.

Unlike the discretion allowed PDP sponsors under the guidance for 2006, CMS's 2007 contract incorporates specific requirements. For example, the guidance for 2006 stated that, "we expect that PDP sponsors would consider processes such as the filling of a temporary one-time transition supply in order to accommodate the immediate need of the beneficiary." As part of the 2007 contract, PDP sponsors must attest that the PDP will follow certain required components of a transition process. These components require that, among other things, PDPs (1) provide an emergency supply of nonformulary Part D drugs for long-term care facility residents, (2) apply transition policies to drugs subject to prior authorization or step therapy requirements, (3) add a computer code to their data systems to inform a pharmacy that the prescription being filled is a transition supply, (4) ensure that network pharmacies have the computer codes necessary to bill for a transition supply, and (5) notify each beneficiary by mail within 72 hours of a transition supply of medications being filled.

To educate beneficiaries about the purpose of transition supplies, CMS also added a requirement for PDP sponsors in its 2007 contracts to instruct beneficiaries about the implications of a transition supply and to alert pharmacies that they are dispensing a transition supply. Beginning in 2007, PDP sponsors are required to notify each beneficiary who receives a transition supply of a drug of the steps to take during the transition period. In addition, PDP sponsors are required to add a computer code to their systems so that after a pharmacist fills a transition supply, a return message will alert the pharmacist that the prescription was filled on a temporary basis only. The pharmacist will then be in a better position to inform the beneficiary of the need to take appropriate steps before the transition period ends.

Some challenges regarding the enrollment of new dual-eligible beneficiaries have been resolved, while others remain. In particular, CMS's decision to implement prospective enrollment for new dual-eligible beneficiaries who are Medicaid eligible and subsequently become Medicare eligible should alleviate coverage gaps this group of beneficiaries previously faced. However, because of inherent processing lags, most dual-eligible beneficiaries—Medicare beneficiaries new to Medicaid—may continue to face difficulties at the pharmacy counter. In addition, because of CMS's limited oversight of its retroactive coverage policy, the agency has not been able to ensure efficient use of program funds. Until March 2007, the letters used to notify dual-eligible beneficiaries of their PDP enrollment and their retroactive coverage did not inform them of the right to be reimbursed and how to obtain such reimbursement. CMS monitoring of retroactive payments to PDPs and subsequent PDP reimbursements to beneficiaries is also lacking. We found that Medicare paid PDPs millions of dollars—we estimate about $100 million in 2006—for coverage during periods for which dual-eligible beneficiaries may not have sought reimbursement for their drug costs.
After spending many months stabilizing the information systems supporting the Part D program, CMS is now making changes to improve the efficiency of its key information systems involved in the enrollment process. While CMS officials are aware of the risks involved in these changes, they are not planning to perform end-to-end testing because of the complexity of the systems infrastructure, the multiple partners involved, and time and resource constraints. While we agree that end-to-end testing will be difficult, it is important to perform this testing to mitigate risks and avoid problems like those that occurred during initial program implementation.

CMS's assignment of dual-eligible beneficiaries to PDPs serving their geographic area with premiums at or below the low-income benchmark generally succeeded in enrolling dual-eligible beneficiaries into PDPs. The experience of SPAPs in Maine and New Jersey, while limited, demonstrates the feasibility of using IRA methods to better align beneficiaries' PDP assignments with their drug utilization needs. However, continued use of these methods is contingent on access to beneficiary drug utilization and formulary information from PDPs. In addition, some dual-eligible beneficiaries—those with representative payees—were assigned to PDPs that did not serve the area where they lived. Since CMS receives a file from SSA that includes an indicator showing that an individual has a representative payee, the agency could use this information to assign these beneficiaries to PDPs that serve the area where they live.

To resolve problems associated with the uneven application of transition policies, CMS clarified its previous guidance to plans and added requirements to its 2007 contracts with PDP sponsors. The 2006 experience with plans' uneven implementation of CMS's transition policy guidance demonstrated how inconsistent interpretations can lead to problems for beneficiaries and pharmacies. CMS officials recognized that the agency needed to be more directive by including specific procedures in its 2007 PDP contracts. Even with consistent implementation of transition policies and notification requirements, however, without assistance, dual-eligible beneficiaries—a highly vulnerable population—are likely to have difficulty resolving problems that they encounter with the transition.

We make the following six recommendations.

To help ensure that dual-eligible beneficiaries are receiving Part D benefits, the Administrator of CMS should require PDP sponsors to notify new dual-eligible beneficiaries of their right to reimbursement for costs incurred during retroactive coverage periods.

To determine the magnitude of Medicare payments made to PDPs under its retroactive coverage policy, the Administrator of CMS should track how many of the new dual-eligible beneficiaries it enrolls each month receive retroactive drug benefits and how many months of retroactive coverage the agency is providing them.

To determine the impact of its retroactive coverage policy, the Administrator of CMS should monitor PDP reimbursements to dual-eligible beneficiaries, and those that paid on their behalf, for costs incurred during retroactive periods through an examination of the prescription utilization data reported by PDP sponsors.
To mitigate the risks associated with implementing Part D information systems changes, especially in light of initial systems issues caused by the lack of adequate testing, the Administrator of CMS should work with key partners to plan, prioritize, and execute end-to-end testing.

To help ensure new dual-eligible beneficiaries are enrolled in PDPs that serve the geographic area where they live, the Administrator of CMS should assign dual-eligible beneficiaries with representative payees to a PDP serving the state that submits the individual's information on its dual-eligible file.

To support states with the relevant authority that want to use alternative enrollment methods to reassign dual-eligible beneficiaries to PDPs, the Administrator of CMS should facilitate the sharing of data between PDPs and states.

CMS reviewed a draft of this report and provided written comments, which appear in appendix II. In addition to comments on each of our recommendations, CMS provided us with technical comments that we incorporated where appropriate. CMS remarked that we did an excellent job of outlining the complex systems and steps involved in identifying, assigning, and enrolling new dual-eligible beneficiaries into PDPs. However, the agency objected to what it perceived as an overwhelmingly negative tone in our findings and stated that our discussion of retroactive coverage was overly simplified. CMS did note that the agency was in the process of implementing three of our six recommendations to improve existing procedures.

CMS's main concern regarding the draft report centered on our characterization of the interval between the effective date of Part D eligibility and the completed enrollment process as a "disconnect." Also, CMS officials noted that "it is not new or unusual for individuals to pay out of pocket for their prescription drug or other healthcare services, and then subsequently be reimbursed." The agency explained that its policy of tying the effective Medicare Part D enrollment date to the first day of Medicaid eligibility is intended to ensure that dual-eligible individuals receive Part D benefits for the period that they were determined by their state to be eligible for this coverage. CMS asserted that it is the retroactive eligibility requirement under Medicaid, not CMS policy, that causes the "space and time conundrum" over which it has no control.

Regarding this broad concern from CMS, we note that our discussion of the time to complete the enrollment process and the period of retroactive coverage experienced by a majority of newly enrolled dual-eligible beneficiaries was intended to describe CMS's implementation of the enrollment process for new dual-eligible beneficiaries; we did not evaluate CMS's policy. Recognizing the desirability of providing drug coverage as soon as beneficiaries attain dual-eligible status, we do not object to CMS's policy of linking the Part D effective coverage date to Medicaid's retroactive eligibility date. However, our review found that CMS had not fully implemented this policy and, as a consequence, neither beneficiaries nor the Medicare program are well served. Therefore, we have recommended actions that CMS should take to better protect beneficiaries and ensure efficient use of Medicare program funds. To clarify our message and to reflect information obtained through agency comments, we modified portions of this discussion and provided the revised sections to CMS for supplemental comments.
In its supplemental comments, CMS again objected to what it believed is our implication that retroactive coverage for dual-eligible beneficiaries is inappropriate or that CMS has put the Medicare program at unwarranted risk. As stated above, we do not disagree with the policy of retroactive coverage for dual-eligible beneficiaries; rather, we are concerned with how CMS implemented this policy in 2006. Only by monitoring the amounts paid to PDP sponsors for retroactive coverage periods and the amounts PDP sponsors reimbursed dual-eligible beneficiaries will CMS be in a position to evaluate the effectiveness of its retroactive coverage policy. Also, CMS asserted that we incorrectly imply that CMS had the information needed to monitor reimbursements to dual-eligible beneficiaries when such information is not expected to be available until after May 31, 2007. During the course of our audit work in 2006, CMS indicated no current or planned efforts to monitor or enforce PDP sponsor reimbursements to dual-eligible beneficiaries. Only after receiving our draft report did CMS state its intention to analyze the data necessary to monitor plan compliance and evaluate agency policy. In fact, we were told that CMS decided to conduct this analysis as a direct result of our draft report's findings and recommendations.

CMS agreed with our recommendation to require PDP sponsors to notify new dual-eligible beneficiaries of their eligibility for reimbursement for costs incurred during retroactive coverage periods. To be consistent with its retroactive coverage policy, CMS is in the process of adding language to this effect in the notices that the agency and PDP sponsors send to dual-eligible beneficiaries enrolled in a PDP. The revised letters advise beneficiaries to tell their PDP if they have filled prescriptions since the effective coverage date because they "may be eligible for reimbursement for some of these costs." However, contrary to comments CMS made on our draft report—that dual-eligible beneficiaries will be told they should submit receipts for previous purchases of Part D drugs—the revised letters do not explicitly tell beneficiaries of the steps they would need to take to access their retroactive coverage. The agency also reported that it plans to inform its partners about the changes to the enrollment notification letters.

In response to our recommendation that CMS determine the number of beneficiaries and the magnitude of payments made to PDP sponsors for dual-eligible beneficiaries subject to retroactive coverage, CMS indicated that it intends to continue to track the number of new dual-eligible beneficiaries provided retroactive coverage. Although this monitoring is important to managing the enrollment process for new dual-eligible beneficiaries, it would be even more useful if CMS tracked the number of months of retroactive coverage provided to beneficiaries it enrolls in PDPs.

CMS disagreed with our recommendation that it monitor PDP reimbursement of beneficiary expenses incurred during retroactive coverage periods. We maintain that the agency should actively monitor its retroactive coverage policy by examining data that plan sponsors routinely submit to the agency. In their drug utilization records, sponsors must indicate the amounts paid by the plan and by the beneficiary for each claim.
If it became evident that dual-eligible beneficiaries were not filing claims for retroactive reimbursements while PDPs received Medicare payments for their coverage, CMS would be in a position to evaluate its effective coverage date policy.

Regarding our recommendation that the agency work with key partners to plan, prioritize, and execute end-to-end testing, CMS disagreed and questioned whether the benefits of doing so justify the associated costs. We find this position on end-to-end testing to be inconsistent with systems development best practices. Establishing end-to-end test environments and conducting such tests is widely recognized as essential to ensure that systems perform as intended in an operational environment. CMS was alerted to this issue in a March 2006 CMS contractor report that identified the lack of comprehensive end-to-end testing as a weakness of the Part D program. We acknowledge that, given the complexity of the program's infrastructure and the multiple partners involved, end-to-end testing will be difficult. However, other forms of testing, including integration and stress testing, should be conducted in addition to, not as a replacement for, end-to-end testing.

CMS concurred with our recommendation that it ensure all new dual-eligible beneficiaries are enrolled in PDPs that serve the geographic area where they live. CMS reported that it has completed the underlying changes necessary to implement this recommendation. Beginning in April 2007, the CMS auto-assignment process enrolls dual-eligible beneficiaries into PDPs that operate in the state that submits that individual's information in its dual-eligible file.

CMS disagreed with our recommendation that the agency facilitate information sharing between PDPs and states that wish to use additional information to reassign beneficiaries yearly. The agency asserted that, for a number of reasons, efforts to match beneficiaries' customary drugs to PDP formularies are not necessary or desirable. Furthermore, CMS noted that it lacks the statutory authority and the drug utilization data needed to assign beneficiaries to PDPs on anything other than a random basis. We did not propose that CMS change its assignment method, and we did not take a position on the desirability of states' use of intelligent random assignment methods. However, we maintain that states wishing to reassign beneficiaries should have access to PDP data once beneficiaries have been enrolled.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov.

If you or your staffs have any questions about this report, please contact Kathleen King at (202) 512-7119 or [email protected]. Questions concerning information systems issues and testing should be directed to David Powner at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.
The process of enrolling dual-eligible beneficiaries requires several steps: it begins when the state Medicaid agency identifies new dual-eligible beneficiaries and ends when PDPs make billing information available to pharmacies.

1. States are responsible for identifying their Medicaid enrollees who become dual-eligible beneficiaries. They combine data obtained from SSA or requested from CMS on individuals eligible to receive Medicare benefits with their own information on Medicaid enrollees to compile the dual-eligible files. CMS receives Medicare entitlement information daily from SSA.

2. After the 15th of the month and before midnight of the last night of the month, states transmit their dual-eligible files to CMS. These files contain information on all individuals identified by the states as dual-eligible beneficiaries, including those newly identified and those previously identified. Generally within 48 hours of receipt, CMS processes state submissions. Within the Medicare eligibility database, edits of the state files are performed. Based on the results of the edits, the Medicare eligibility database transmits an e-mail to each state telling the state its file was received and reporting the results of the edits. Files that fail the edits must be resubmitted. Once a file passes the edits, the Medicare eligibility database matches the file against its records to determine whether each individual is a valid (matched) beneficiary, is eligible for Medicare, and passes the business rules for inclusion as a dual eligible. The results of this processing for each transaction on the states' files are added to the response files, which are sent back to the states.

3. After CMS has performed the matching process, the Medicare eligibility database processes these files through two additional steps:

(a) Deeming. Deeming takes the input from the matching process and a monthly input file from SSA on beneficiaries receiving Social Security Supplemental Income (SSI) to determine the copayment level for the dual-eligible beneficiaries. Deeming is performed against these data according to the business rules.

(b) Auto-assignment. Auto-assignment takes the results of deeming and assigns each beneficiary to a PDP within the region that includes the beneficiary's official address. Auto-assignment takes the total dual-eligible population and eliminates records using 18 exclusion rules, resulting in the final set of beneficiaries to be auto-assigned. Exclusions include beneficiaries who are already enrolled in a Part D plan, are currently incarcerated, or are not U.S. residents (residing outside the states and territories). Auto-assignment uniformly assigns qualified dual-eligible beneficiaries to designated PDPs across each region. The resulting deeming and assignment information is sent to CMS's enrollment transaction system for processing. In addition, a mail tape is prepared by CMS containing beneficiary names and addresses so that mail can be generated that informs beneficiaries of the pending enrollment and identifies the PDP to which they were assigned. A file also is sent to each of the plans identifying the beneficiaries assigned to their PDP.

4. Upon the receipt of the deeming and assignment information from the Medicare eligibility database, CMS's enrollment transaction system facilitates the changes in the copayments and the enrollment of the beneficiaries into their assigned PDP.
The enrollment transaction system informs the PDP of the enrollment and copayment transactions via a weekly Transaction Reply Report (TRR) that summarizes all transactions the enrollment transaction system has performed for the respective PDP during the prior week, beginning on Saturday.

5. PDPs then process the resulting assignment and copayment changes, assign standard billing information, and send the information to CMS's Medicare eligibility database. The Medicare eligibility database performs edits, such as matching each submitted beneficiary's information with Part D enrollment information. For each match, the standard billing information is added to the Medicare eligibility database and a response is generated for the PDP, confirming that the information was accepted. The PDPs mail ID cards and plan information to the enrolled beneficiaries.

6. Nightly, the eligibility query system receives billing information from the Medicare eligibility database, making the updated standard billing information available for queries.

7. If a beneficiary does not have his or her enrollment information, pharmacies can use their computer systems to access the billing information needed to bill the assigned PDP for the beneficiary's prescriptions.

In addition to the contacts named above, Rosamond Katz, Assistant Director; Lori Achman; Diana Blumenfeld; Marisol Cruz; Hannah Fein; Samantha Poppe; Karl Seifert; Jessica Smith; Hemi Tewarson; and Marcia Washington made major contributions to this report.
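To illustrate the auto-assignment logic described in step 3(b), the following minimal Python sketch models the exclusion screening and uniform random assignment. The class, field names, and the three sample exclusion rules shown are illustrative assumptions drawn from the description above, not CMS's actual implementation; the real process applies all 18 exclusion rules.

```python
"""Minimal sketch of the auto-assignment step (3(b)) described above.

All names and rules are illustrative assumptions, not CMS's actual
implementation; only 3 of the 18 exclusion rules are modeled.
"""
import random
from dataclasses import dataclass

@dataclass
class Beneficiary:
    beneficiary_id: str
    region: str               # Part D region containing the official address
    enrolled_in_part_d: bool  # already enrolled in a Part D plan
    incarcerated: bool
    us_resident: bool         # resides in the states or territories

def is_excluded(b: Beneficiary) -> bool:
    # Three of the exclusions named in step 3(b); the real process
    # applies 18 such rules.
    return b.enrolled_in_part_d or b.incarcerated or not b.us_resident

def auto_assign(beneficiaries, pdps_by_region):
    """Uniformly (randomly) assign each qualified beneficiary to one of
    the designated PDPs operating in the beneficiary's region."""
    assignments = {}
    for b in beneficiaries:
        if is_excluded(b):
            continue  # eliminated by an exclusion rule
        assignments[b.beneficiary_id] = random.choice(pdps_by_region[b.region])
    return assignments

# Example: one qualified and one excluded beneficiary.
people = [
    Beneficiary("A1", "Region 5", False, False, True),
    Beneficiary("B2", "Region 5", True, False, True),  # already in Part D
]
print(auto_assign(people, {"Region 5": ["PDP-1", "PDP-2", "PDP-3"]}))
# e.g., {'A1': 'PDP-2'} -- B2 is excluded
```

In the actual process, the output of this step would feed CMS's enrollment transaction system and the beneficiary mail tape described in step 3(b).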
Since January 1, 2006, all dual-eligible beneficiaries--individuals with both Medicare and Medicaid coverage--must receive their drug benefit through Medicare's new Part D prescription drug plans (PDP) rather than from state Medicaid programs. GAO analyzed (1) current challenges in identifying and enrolling new dual-eligible beneficiaries in PDPs, (2) the Centers for Medicare & Medicaid Services' (CMS) efforts to address challenges, and (3) federal and state approaches to assigning dual-eligible beneficiaries to PDPs. GAO reviewed federal law and CMS regulations and guidance, and interviewed CMS and PDP officials, among others. GAO also made site visits to six states to learn about the enrollment of dual-eligible beneficiaries from the state perspective.

CMS's enrollment procedures and implementation of its Part D coverage policy generate challenges for some dual-eligible beneficiaries, pharmacies, and the Medicare program. A majority of new dual-eligible beneficiaries--generally those on Medicare who have not yet signed up for a PDP and who become eligible for Medicaid--may be unable to access their drug benefit smoothly for at least 5 weeks, given the time it takes to enroll them in PDPs and communicate information to beneficiaries and pharmacies. Pharmacies also may be adversely affected when key information about a beneficiary's dual eligibility is not yet processed and available. When dispensing drugs during this interval, pharmacies may have difficulty submitting claims to PDPs and accurately charging copayments. In addition, Medicare pays PDPs to provide these beneficiaries with several months of retroactive coverage but, until March 2007, CMS did not inform beneficiaries of their right to be reimbursed for drug costs incurred during these periods. CMS does not monitor its payments to PDPs for retroactive coverage or the amounts PDPs have reimbursed dual-eligible beneficiaries. Medicare paid PDPs millions of dollars in 2006 for coverage during periods for which dual-eligible beneficiaries may not have sought reimbursement for their drug costs.

CMS has taken steps to address challenges associated with enrolling dual-eligible beneficiaries in PDPs. CMS has implemented a policy to prevent a gap in prescription drug coverage for those new dual-eligible beneficiaries whose Part D eligibility is predictable--Medicaid beneficiaries who subsequently qualify for Medicare. GAO estimates that this group represents about one-third of new dual-eligible beneficiaries. In August 2006, CMS began operating a prospective enrollment process that should allow the agency and its Part D partners time to complete the enrollment processes and notify these beneficiaries before their effective enrollment date. Also, CMS is making changes to improve the efficiency of key information systems involved in the enrollment process. While the agency is performing some information systems testing, it is not planning to test the interactions of key information systems collectively, a step that is crucial to mitigating the inherent risks of system changes.

Under federal law, CMS is required to assign dual-eligible beneficiaries to PDPs based on PDP premiums and geographic area. State Medicaid agency officials and others assert that this assignment method often places dual-eligible beneficiaries in PDPs that do not meet their drug needs. In late 2005, with CMS approval, Maine officials used beneficiary-specific data to reassign nearly half of their dual-eligible beneficiaries to PDPs that better met their drug needs.
After the reassignment, the number of these dual-eligible beneficiaries whose PDP covered nearly all of their prescription drugs increased significantly. States choosing to make such reassignments in the future would need ready access to key information from PDPs. CMS contends that reassignments are not needed because beneficiaries may switch to drugs of equivalent therapeutic value or change plans at any time.
Complaint and appeal procedures are regulated by a patchwork of federal and state law. No federal standards, however, prescribe how complaint and appeal systems are to be structured and administered. For example, the Employee Retirement Income Security Act of 1974 (ERISA), a federal law governing most employer-sponsored health plans, simply requires that covered health plans provide a mechanism to permit participants and beneficiaries to appeal a plan's denial of a claim. Another federal law, the Health Maintenance Organization Act of 1973, requires only that plans provide "meaningful" and "timely" procedures for hearing and resolving complaints in order to become federally qualified HMOs. Numerous bills mandating specific features of health plan complaint and appeal procedures have been introduced in the current Congress. One bill, for example, would set standards for the timeliness of plan responses to appeals and the professional qualifications of appeal reviewers and would require external review of plan decisions in certain circumstances. In addition, the Presidential Advisory Commission on Consumer Protection and Quality in the Health Care Industry recently issued a "Consumer Bill of Rights and Responsibilities" that included recommendations for handling consumer complaints and appeals. Many states have laws regulating or affecting HMOs. According to the American Association of Health Plans (AAHP), which represents managed care organizations, nearly all HMO coverage offered to employees is governed by state grievance and appeal requirements. HMOs have frequently argued, however, that in certain circumstances ERISA prevents state law from applying to them. These arguments arise because ERISA prohibits states from regulating employee health plans, although it expressly permits states to regulate insurance purchased by employers. Most states require HMOs to describe their grievance procedures when applying for a license or certificate of authority. Many states require that plans inform members about grievance procedures at least upon enrollment and sometimes annually. Some states mandate that HMOs inform patients of grievance rights and procedures upon each denial of service, when this information is most pertinent. Some states require plans to submit an annual report on the number of complaints filed, their underlying causes, and their disposition. Some states have prescribed detailed requirements in the area of complaints and appeals. For example, some states require that HMOs resolve member appeals of decisions within certain time periods (for example, 20 days); some have required that HMOs allow members the option of having complaints and appeals reviewed by an external, independent panel. However, provisions in state laws vary considerably. (See app. II for specific information provided by the National Conference of State Legislatures (NCSL) on state laws governing complaint and appeal processes.) A number of elements have been identified by regulatory, consumer, and industry groups as being important to a complaint and appeal system. These elements fall into three general categories: timeliness, integrity of the decisionmaking process, and effective communication with members. Several nationally recognized groups have developed guidelines for complaint and appeal systems. We reviewed standards promulgated by two private accrediting bodies, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the National Committee for Quality Assurance (NCQA).
We also reviewed the guidelines established by groups representing industry, consumer, and regulatory interests: AAHP, Families USA (FUSA), and the National Association of Insurance Commissioners (NAIC), respectively. (See app. III for more information on these five organizations.) In all, we identified 11 features considered important to a complaint and appeal system by at least two of the groups. As table 1 shows, several elements were recommended by most of the groups, while other elements were highlighted by only two groups. However, a particular group's omission of certain elements does not necessarily mean that the group considered those elements and rejected them as unimportant. To help ensure that member complaints and appeals are resolved in an appropriately timely fashion, several groups identified two elements as being important: explicit time periods and expedited review. Time periods refer to specified amounts of time, set out in plan policies, within which HMOs must resolve complaints or appeals. JCAHO, for example, emphasized the importance of "defined time frames in which the member can anticipate response to an appeal." The groups differed in specifying the number of days allowed for resolution; while NAIC's criterion stated that plans have up to 30 days to resolve first-level appeals, for example, JCAHO simply called for plans to have established time periods without specifying what they should be. Expedited review refers to a plan policy of processing appeals more quickly in situations in which, were the plan to follow its usual time period for processing the appeal, the patient's health might be jeopardized. Again, the groups differed in the extent to which they specified the time within which expedited appeals were to be processed. NAIC and FUSA said that expedited review must be completed within 72 hours of the appeal, while the other groups said simply that plans must provide a resolution appropriate to the clinical urgency of the situation. In the interest of perceived fairness and member empowerment, four factors were identified as being essential to maintaining the integrity of the decisionmaking process: (1) a two-level appeal process, (2) the member's right to attend one appeal hearing, (3) appeal decisions made by medical professionals with appropriate expertise, and (4) appeal decisions made by individuals not involved in previous denials. A two-level appeal process is one in which, after a member appeals an initial denial of payment or service, the member may appeal to the plan a second time. The two groups (NAIC and NCQA) that identified a two-level process as important also stated that plans should allow members to appear before plan officials during at least one of the appeal proceedings. This gives members the opportunity to provide to plan officials information or evidence that the member believes is important and ensures that the member's perspective is presented to the plan. Three groups—FUSA, NAIC, and NCQA—stated that appeal decisions should be made by medical professionals with appropriate expertise. Both NAIC and NCQA stated that such professionals should be involved in decisions regarding denials of clinical services; FUSA did not specify instances under which review by medical professionals should take place. According to AAHP, medical necessity determinations should involve a physician's review, while determinations about whether a benefit is covered under the terms of the contract might not involve a physician.
FUSA, NAIC, and NCQA stated that plan officials determining the outcome of an appeal should not be the same officials who were involved in either the initial denial or the first-level appeal. NCQA's standards state that no one performing a first- or second-level review should have been previously involved in the case. NAIC echoed this statement for first-level reviews; regarding second-level reviews, the organization stated that the majority of the second-level panel deciding the appeal should consist of persons who had not previously participated in the case. Elements of effective communication identified as important included the provision of written information about the appeal process in an understandable manner; acceptance of oral complaints and appeals; the inclusion of appeal rights when notifying enrollees of a denial of care or payment of service; and written notice of appeal denials, including appeal rights. NCQA, for example, emphasizes the importance of clear and complete information about member rights and responsibilities. NCQA requires that plans provide information that is easily accessible—for example, in a member handbook or provider directory or on a membership card—rather than relying exclusively on technical or legal documents. The President's Quality Commission notes that consumers have the concomitant responsibility to become knowledgeable about their health plan coverage, including covered benefits, plan processes, and appeal rights. FUSA and NCQA also noted the importance of plans' acceptance of oral complaints and appeals. In its accreditation standards, NCQA notes that "following standards for high-quality interactions with members means that any problems expressed by a member receive prompt and appropriate attention, whether those problems involve clinical care or service, and whether they be oral or written, major or minor." Four groups emphasized the importance of informing members of their right to appeal at the time a service is denied or terminated. Regarding the plan's response to an appeal of a denial, two of these groups also highlighted the importance of written notice of appeal denials, including appeal rights. Including appeal rights in the written denial notice ensures that plan members are aware of the steps they need to take in the event they are dissatisfied with the plan's decision. An official from the Center for Health Care Rights, a California-based consumer group, also noted that denial notices should contain information about the nature of what was denied, the basis for the decision (for example, the medical information or plan contract terms the HMO relied upon in making the determination), and information about what factors the HMO would consider in an appeal. The HMOs in our review had most of the 11 elements identified by the groups in our study as being important to complaint and appeal systems, although they varied considerably in the mechanisms adopted to meet them. However, two recommended elements—appeal decisions made by individuals not involved in previous decisions and acceptance of oral appeals—were not commonly present in the complaint and appeal systems of the HMOs in our study. The extent to which HMOs in our study implement the policies they reported to us, however, is unknown. Much similarity existed in the complaint and appeal systems of the HMOs we reviewed. As table 2 shows, 9 of the 11 elements identified as important to a complaint and appeal system were generally incorporated by HMOs in our study.
Not all 38 HMOs in our study, however, provided data on each of the elements; several HMOs provided information on some elements but not others. Much of this uniformity may be attributed to the influential role played by NCQA, which includes all these elements in its accreditation standards. NCQA accreditation is important to public and private purchasers, who view it as an indicator of HMO quality, and a growing number of plans have obtained or are seeking accreditation. Among the 23 HMOs in our review that have been surveyed by NCQA, 20 have been accredited: 14 HMOs were accredited unconditionally, while 6 HMOs were accredited with limitations. One HMO's accreditation had expired, and two HMOs were denied accreditation. Even some HMOs that are not currently accredited may follow NCQA standards, intending to eventually apply for accreditation. Thirty-six of 37 HMOs providing data had established time periods within which complaints and appeals were to be resolved. Although many HMOs' time periods called for resolution of complaints or appeals within 30 days at each level, other HMOs' time periods varied considerably. One HMO's policy called for complaints to be resolved immediately; another's, within 24 hours; a third allowed up to 60 days to resolve complaints. Time periods for first-level appeals varied from 10 to 75 days; for second-level appeals, from 10 days to 2 months. One HMO did not have explicit time periods; its policy simply called for complaints to be resolved "on a timely basis." First-level appeals at this HMO were to be resolved within 30 days, but for second-level appeals, members were to be notified within 30 days of the committee meeting, and no time period was specified for scheduling the meeting itself. Thirty-four HMOs in our study (of 36 reporting) had expedited appeal processes in place for use in circumstances in which delay in care might jeopardize the patient's health. Again, however, HMOs varied considerably in the length of time they allowed for resolution of an expedited appeal. While the most common time period among the HMOs in our study was 72 hours, two HMOs' policies called for resolution within 24 hours, and two others allowed up to 7 days for resolution. All 38 HMOs in our study had at least a two-level appeal process. Nineteen HMOs used decisionmaking committees at both levels of appeal, while 17 used an individual to make the decision at the first level of appeal and a committee at the second level. Nine HMOs had a third level of appeal within the HMO, and all nine used committees to resolve the appeal at the third level. Thirty-six HMOs (of 37 reporting) permitted the member to attend at least one appeal hearing in order to present his or her case, including necessary documentation or other evidence, to the committee. Sixteen of the 36 HMOs permitted members to be accompanied by a representative, such as a friend or a lawyer. In instances in which the member could not attend the meeting in person, 11 of the 36 HMOs made provisions for members to attend the meeting by telephone or videoconference. Thirty-one HMOs (of 35 providing data) reported that they included doctors or nurses on their appeal committees. We did not, however, analyze individual appeal cases and so were unable to determine whether doctors and nurses with appropriate expertise made appeal decisions in cases of clinical service denial, as called for by several groups. Fifteen HMOs (of 37 reporting) required that persons reviewing appeals not be the same individuals involved in the case earlier.
Persons reviewing appeals varied from HMO to HMO. Among HMOs using an individual to resolve first-level appeals, some HMOs assigned an appeal coordinator or grievance coordinator to resolve these appeals, while others assigned first-level appeals to the HMO medical director or other physician, the HMO president, or the HMO executive director. The composition of review committees varied as well. Most HMOs included doctors or nurses on their appeal committees; many HMOs included representatives of various HMO departments—such as medical management, marketing, quality management, customer service, or claims—on such committees. Many HMOs also included individuals not affiliated with HMO operations on decisionmaking committees. A few HMOs used physicians not employed by the HMO to review appeals; several HMOs also included HMO enrollees on decisionmaking committees. One HMO, for example, had a 10-person panel to decide second-level appeals, with 5 HMO enrollees on the panel, including the panel chair, and 5 HMO physicians. A few HMOs used the board of directors, or a subset thereof, as the decisionmaking committee for second- or third-level appeals. Boards of directors may comprise various individuals from the community; one HMO in our study included a judge, a professor, numerous corporate officials, and others on its board of directors. HMOs' member handbooks illustrated how these processes were described to members. One HMO's handbook, for example, instructed: "Members who are not satisfied with the initial response to their concerns should write to [the HMO] as soon as possible. Address the letter to [HMO address]. The letter should include your name, address, [HMO] ID card number, a detailed description of the grievance (including relevant dates and provider names) and all supporting documentation. We will acknowledge the receipt of all written grievances. Our Grievance Committee will review all grievances and we will send you a written determination within 30 business days after we have received your grievance. [The HMO] may notify you in writing that we need to extend the 30-day grievance determination period if we need to obtain more information." The handbook continued with a description of the HMO's second- and third-level appeal processes. Another HMO's handbook, after describing the HMO's complaint and appeal procedure, stated, "In a situation where a delay could worsen your health, you will get an answer to your concern within 48 hours." Yet another HMO's handbook, stressing the difference between its standard appeal process and its expedited appeal process, stated that if a member had a concern about an urgent situation, "The above complaint procedures do not apply." The handbook went on to explain the expedited appeal procedure in detail. Still another HMO's written procedures described the escalation from an informal complaint to a formal grievance: "If the Member is unsatisfied with the informal process, or if the process exceeds the stated time limits, the concern enters a formal Level I Grievance. The Operations Intake Grievance Coordinator will coordinate a group to resolve the concern of the Member. The Intake Grievance Coordinator will respond to the Member in writing within 30 days regarding the determination." Most HMOs—36 of 38 reporting—accepted oral complaints. Two HMOs required members to put complaints in writing. Only 12 HMOs (of 37 reporting) accepted appeals orally; the remaining 25 HMOs required members to put appeals in writing, although 3 of these plans told us they provide writing assistance to members who request it. Some HMO officials told us that they prefer the member to submit the appeal in writing in order to ensure that the member's concerns are accurately characterized.
Thirty-one HMOs, out of 34 reporting, included member appeal rights in notices of denial of care or payment of service. One HMO that did not include this element informed its members of their appeal rights in denials stemming from benefit coverage or medical necessity decisions but not in denials related to claims for payment. Another HMO provided a telephone number for the member to call if the member had any questions but did not enumerate the member's appeal rights. Of 37 HMOs reporting, 36 provided a written notice of appeal denials, including appeal rights; the one remaining HMO provided written notice of denials but did not include appeal rights. Although the majority of HMOs' complaint and appeal systems included most of the important elements, consumer advocates expressed concern that such systems are not fully meeting the needs of enrollees. Advocates specifically cited the lack of an independent, external review of plan decisions on appeals and members' difficulty in understanding how to use complaint and appeal systems. This latter issue, however, may reflect a lack of understanding about health insurance in general and managed care in particular. Independent external review of plan decisions was of particular importance to consumer advocates, although none of the regulatory or industry groups we studied included external review as an element critical to complaint and appeal systems. Consumer advocates told us that, regardless of the particular mechanisms plans use to resolve appeals, having plan employees review the decisions made by other plan employees suggests that plan self-interest may supersede objectivity. Accordingly, consumer advocates believe that review by an independent third party is essential to ensuring integrity in decisionmaking. FUSA states that external review should (1) be conducted by reviewers with appropriate medical expertise; (2) be paid for by the plan, not the member; and (3) allow members to retain their rights to seek legal remedies. The President's Quality Commission also states that members should have access to an independent system of external review. Among its criteria for external review, the Commission states that such review should (1) be available only after consumers have exhausted all internal processes (except in cases of urgently needed care); (2) be conducted by health care professionals with appropriate expertise who were not involved in the initial decision; and (3) resolve appeals in a timely manner, including provisions for expedited review. Additional analysis must be done, according to the Commission, to identify the most effective and efficient methods of establishing the independent external appeal function. Issues to be considered include mechanisms for financing the external review system, sponsorship of the external review function, consumer cost-sharing responsibilities (for example, filing fees), and methods of overseeing external appeal entities and holding them accountable. Managed care organizations have raised concerns about requirements for external review, noting that under various proposals the external reviewers may not be qualified or may not use proper standards and that external review may add expense and delay the process. Addressing the expense issue, however, a recent report by The Lewin Group estimated that external review would cost no more than 7 cents per enrollee per month. According to the report, the estimated cost is small because, in practice, only a small number of appeals reach the external review process.
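The arithmetic behind such an estimate can be made concrete. The sketch below divides an assumed annual external-review cost across an assumed plan membership; all figures are illustrative assumptions chosen for this example, not The Lewin Group's actual inputs.

```python
# Back-of-the-envelope check of the external-review cost argument.
# All inputs are illustrative assumptions, not The Lewin Group's figures.
enrollees = 1_000_000       # plan membership
reviews_per_year = 500      # only a small number of appeals reach external review
cost_per_review = 1_500.00  # dollars paid per case to the review entity

annual_cost = reviews_per_year * cost_per_review         # $750,000
per_enrollee_per_month = annual_cost / (enrollees * 12)

print(f"${per_enrollee_per_month:.4f} per enrollee per month")
# Output: $0.0625 -- about 6 cents, within the report's 7-cent ceiling
```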
Once the cost is divided among the total number of enrollees, the cost per enrollee is very low. There is limited experience with external review systems for HMO members. HCFA requires that appeals by Medicare HMO enrollees be reviewed by an independent party if the initial appeal is denied by the HMO. In such cases, the HMO is required to send the denial, along with medical information concerning the disputed services, to a HCFA contractor that adjudicates such denials. Since 1989, the HCFA contractor has been the Center for Health Dispute Resolution (CHDR) (formerly known as the Network Design Group) of Pittsford, New York. CHDR hires physicians, nurses, and other clinical staff to evaluate beneficiaries' medical need for contested services and make reconsideration decisions. According to the CHDR president, as of July 1997, nearly one-third of the denials that Medicare HMOs upheld in their grievance proceedings were overturned by CHDR; for some categories of care, that rate was 50 percent. According to NCSL, legislation or regulation mandating external review has been enacted by 16 states. In Florida, for example, the program consists of a statewide panel made up of three Florida Department of Insurance representatives and three representatives from Florida's Agency for Health Care Administration. The process is available to any enrollee who has exhausted the HMO's internal appeal procedure and is dissatisfied with the result. HMOs are required to inform members about the program, including the telephone number and address of the panel. According to a Florida official, from 1991 to 1995 an average of 350 appeals per year were heard under the program; issues included quality of, and access to, care; emergency services; unauthorized services; and services deemed not medically necessary. About 60 percent of the appeals were resolved in favor of the member, about 40 percent in favor of the HMO. Eight of the 38 HMOs in our study, including all Florida HMOs, provided external review to their members. Thirteen HMOs, including two of the eight HMOs offering external review, granted their members the option of arbitration, a process in which the parties present their case to a disinterested third party of their choosing for a legally binding ruling, once the HMO's internal appeal process has been exhausted. Although arbitration has been promoted as a quick, informal, and flexible alternative to litigation, some HMOs have been criticized for requiring members to enter into binding arbitration agreements as a condition of enrollment. Such agreements, according to consumer advocates, require enrollees to relinquish their rights to legal remedies in the event they are not satisfied with their plan's response. Further, according to these advocates, not all enrollees understand that they have agreed to binding arbitration or, if they do understand, do not know how it works or the costs associated with it. Although most HMOs provided information to members, communication difficulties were noted by both HMO officials and HMO members. For example, although many of the HMOs we reviewed had included descriptions of their complaint and appeal systems in member handbooks, several HMO officials told us that most members do not read their handbooks carefully.
Some HMO officials told us that their members were not familiar with the requirements of managed care (such as obtaining authorization before seeing a specialist or using physicians in the HMO's network) and that many complaints and appeals stemmed from this lack of understanding. Underscoring the need for effective communication, consumer advocates we spoke with consistently noted that HMOs' complaint and appeal systems were not well understood by members. For a variety of reasons, according to the advocates, many HMO members are reluctant to use the complaint and appeal system. In some cases, advocates said, members who are incapacitated may have neither the time nor the energy to navigate the HMO's complaint and appeal system. Advocates in Florida and Oregon told us that some members are intimidated by the formality and size of the HMO. Insufficient use of complaint and appeal systems was also identified as a problem by the president of NAIC. HMO officials' statements about enrollees—that many do not read their handbooks and that many do not understand the requirements of managed care—are supported by the results of a 1995 national survey. According to this survey, half of insured respondents merely skim—or do not read at all—the materials about their health plan. Further, many consumers do not understand even the basic elements of health plans, including the ways in which managed care plans differ from traditional indemnity insurance. For example, barely half (52 percent) of managed care enrollees knew that managed care plans place emphasis on preventive care and other health improvement programs, generally including Pap smears and children's immunizations. Only about three-quarters knew that their choice of physicians was limited to those in the plan, that patients must see a primary care physician first for any health problem, or that, with the exception of emergencies, patients must be referred by their primary care physician before they can see a specialist. Member confusion is not limited to managed care enrollees, however. According to a nationwide survey conducted in 1994 and 1995, similar percentages (24 to 33 percent) of managed care enrollees and fee-for-service enrollees reported difficulty understanding which services were covered by their insurance. Further, about 30 percent of enrollees in each group reported that they had problems dealing with insurance plan rules that were confusing and complex. Communication difficulties were also noted by the California task force and NCQA. The task force cited a recent study of the "readability" of health insurance literature and contracts that found that the average document was written at a reading level of third- or fourth-year college to first- or second-year graduate school. In contrast, according to the report, the results of the 1992 Adult Literacy Survey conducted by the U.S. Department of Education indicated that writing directed at the general public should be at the seventh- or eighth-grade level. From focus groups with commercial members in 1994 and 1995, NCQA concluded that, though it is important that HMO members know how to use managed care systems, many do not fully understand how they function. To resolve the communication problem, whatever its genesis, several HMOs we contacted have developed alternative communication methods to supplement member handbooks. For example, one HMO reported distributing to its members a videotape that explained the complaint and appeal system.
Other HMOs periodically published reminders and articles about the system in their newsletters, some of which encouraged members to contact a customer service representative with any questions about the complaint and appeal system. Another method aimed at improving communication between plans and members is the ombudsman program, in which an independent party educates members about, and assists them with, the intricacies of the health plan, including the complaint and appeal system. Ombudsman programs—sometimes referred to as independent assistance programs—may fall along a spectrum of types, from neutral, mediation-type programs to active consumer advocacy. Ombudsman programs have been established in several locations, including California, Florida, Michigan, and Wisconsin. In Florida, for example, ombudsman committees have been established by the state to act as volunteer consumer protection and advocacy organizations on behalf of managed care members in the state, and these committees may assist in the investigation and resolution of complaints. Members of the committees include physicians, other health care professionals, attorneys, and consumers, none of whom may be employed by or affiliated with a managed care program. An ombudsman program is also available to consumers in northern California. The program is funded by three California-based foundations—the California Wellness Foundation, the Henry J. Kaiser Family Foundation, and the Sierra Health Foundation—and is administered by the Center for Health Care Rights, a Los Angeles-based consumer advocacy organization. The program, confined to the Sacramento area, was designed to assist individuals with general questions about managed care, as well as to help resolve specific problems with managed care plans—for example, by providing assistance in filing and pursuing formal grievances. The program also emphasizes educating managed care enrollees about their rights and responsibilities in different circumstances and using the data collected from individual patients for system improvement purposes. Publicly available data on the number and types of complaints and appeals, if defined and collected in a consistent fashion, could enhance oversight, accountability, and market competition. Such information would offer regulators, purchasers, and individual consumers a better opportunity to evaluate the relative performance of health plans. However, the data collection and documentation systems used by HMOs in our study lack uniformity, making comparisons across HMOs difficult. Therefore, although limited data from HMOs in our study show wide variation from one HMO to another in the number and types of complaints and appeals, comparisons are not particularly meaningful. Public records of member grievances can provide useful information on problems in HMOs. If systematically developed, complaint and appeal data could be used to improve monitoring of HMOs by states or purchasers. In 1996, NAIC adopted its Health Carrier Grievance Procedure Model Act, intended to provide standards for procedures by health plans to ensure that plan members receive appropriate resolution of their grievances. The model act calls for a grievance register to be accessible to the state insurance commissioner. Each health plan would maintain written records to document all grievances received during a year.
The register would contain a general description of the reason for the grievance, the date received, the date of each review, the resolution at each level, the date of resolution at each level, and the name of the covered person (the sketch at the end of this discussion illustrates such a record). The plan would submit to the commissioner an annual report that includes the number of grievances, the number of grievances referred to second-level review, the number resolved at each level and their resolution, and actions taken to correct problems identified. Some government agencies and consumer groups contend that public accountability for complaint and appeal practices could also provide prospective enrollees with important information needed to compare plans. If these data were standardized and publicized, HMOs could compete on the basis of complaint and appeal rates. Publishing complaint rates would likely boost enrollment in plans with low complaint rates and encourage plans with high rates to improve their performance. For example, in the interest of providing Medicare beneficiaries with information that will help them make choices among health plan options, HCFA intends to require contracting health plans to submit standardized, plan-level appeal data. After assessing the database, the agency, in consultation with consumer groups and managed care plans, will determine what types of measures are valid, reasonable, and helpful to the public. However, HCFA and consumer groups, as well as accrediting bodies such as NCQA, recognize that reporting simple complaint and appeal rates on individual plans may be a misleading indicator of members' relative satisfaction with HMOs. There may be a relationship between these rates and enrollees' knowledge and education about their rights to complain and appeal plan decisions. Also, some plans place greater emphasis than others on soliciting and documenting member complaints. Public access to such information could then lead to misunderstandings that could harm plan reputations. However, such information might prove beneficial when used in conjunction with other performance indicators. We asked HMOs to provide us with the number of complaints and appeals received from commercial members in 1996 and the nature of the complaints and appeals. HMOs differed in the ways they defined complaints and appeals and in the ways they counted the complaints and appeals they received. While many HMOs defined complaints as expressions of dissatisfaction, and appeals as requests for the HMO to reconsider a decision, several HMOs differed. Some HMOs, for example, differentiated between informal and formal complaints. Other HMOs used the term appeal to refer to an expression of dissatisfaction with the outcome of a complaint, whether or not it involved a request for reconsideration. Among the HMOs in our review, "grievance" was often used in addition to, or in place of, the terms complaint and appeal. HMOs generally used the term when referring to (1) any expression of dissatisfaction; (2) complaints about a particular issue, such as quality of care; or (3) requests for reconsideration of HMO decisions. HMOs also differed in the way they counted complaints and appeals. One HMO, for example, told us that it does not count oral complaints that are immediately resolved by plan representatives. Another HMO reported that it may count one member contact, such as a letter or telephone call, as several complaints if the contact involves several different issues.
These differences, together with limitations in the data some HMOs provided us, hindered our attempt to report in a consistent manner the numbers of complaints and appeals received by the HMOs in 1996. Although 33 of the 38 HMOs provided us with data on complaints or appeals, in only 27 cases did the data allow us to calculate the number of complaints or appeals per 1,000 enrollees. For the remainder of the HMOs that submitted data, limitations in the data prevented such a calculation: one HMO provided data for only three-quarters of the year; another provided data for only 1997; another did not break out HMO enrollees separately from enrollees in other managed care arrangements, such as preferred provider organizations or Medicare; another provided data on the number of complaints in the "top five" complaint categories but did not provide the total number of complaints; and another provided data for only one category of complaint. Not unexpectedly, given the wide variation in HMO definitions and data collection and documentation methods, the number of complaints and appeals reported to us by the HMOs we studied varied widely. In 1996, complaints ranged from 0.5 per 1,000 enrollees to 98.2 per 1,000 enrollees. A similarly wide range was apparent in the number of appeals received; appeals ranged from 0.07 per 1,000 enrollees to 69.4 per 1,000 enrollees. Complaints and appeals reported by HMOs covered a variety of issues. The most common complaints reported to us were characterized by HMOs as complaints about (1) medical or administrative services, (2) quality of care, and (3) claims issues (such as complaints about the processing of claims for services received). The most common appeals reported to us were characterized by HMOs as appeals of (1) benefit issues (such as services or benefits that are not covered under the member's policy), (2) denial of payment for emergency room visits, and (3) referral issues (such as instances in which a member visited a physician without first obtaining a referral, as required in the member's contract). Concerned about the limitations of our data, we contacted insurance regulators in the five states in our study to determine whether they required HMOs to report to them the number of complaints and appeals the HMOs received. However, according to the regulators, none of these states collected such information from HMOs (though the states do record and maintain information about complaints they receive directly from the public). A Florida insurance division official told us that the state stopped collecting these data several years ago because it lacked the resources to continue. The Oregon insurance division will begin collecting such information in 1998. Neither JCAHO nor NCQA collects complaint or appeal numbers from plans during accreditation reviews. All HMOs in our study told us that they analyze complaint and appeal data to identify systemic problems that the plan needs to address. HMOs generally reported using complaint and appeal data, together with data from other sources, in several ways: to make changes to the plan itself (such as changes to benefits or plan processes) or to promote change in members' and providers' behavior. In addition to using complaint and appeal data, HMOs reported using other indicators of member satisfaction, such as the results of member satisfaction surveys and member focus groups, and feedback from purchasers, to identify common problems.
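As the discussion above suggests, complaint and appeal rates are comparable across plans only if plans define, count, and normalize events the same way. The sketch below pairs a hypothetical grievance register entry, loosely patterned on the NAIC model act fields described earlier, with the per-1,000-enrollee rate calculation used in this report; the field names and example figures are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GrievanceRecord:
    """Hypothetical register entry, loosely patterned on the fields
    called for in NAIC's Health Carrier Grievance Procedure Model Act."""
    covered_person: str
    reason: str                  # general description of the grievance
    date_received: date
    review_dates: list[date] = field(default_factory=list)  # date of each review
    resolutions: list[str] = field(default_factory=list)    # resolution at each level

def rate_per_1000(events: int, enrollees: int) -> float:
    """Complaints or appeals per 1,000 enrollees -- the measure used in
    this report; comparable only if plans count events the same way."""
    return 1000 * events / enrollees

# Illustrative figures: a plan with 250,000 enrollees logging 2,400 complaints.
print(round(rate_per_1000(2_400, 250_000), 1))  # 9.6 per 1,000 enrollees
```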
Documenting and analyzing complaints and appeals can help plans deal with chronic problems by informing management about various elements of plan performance, both clinical and administrative. Resolution of problems brought to the plan's attention, if widespread or recurring, can lead to improvements in access to care, physician issues, and quality of care, as well as changes in plan policies and procedures. Several HMOs reported expanding member benefits, at least in part as a result of complaints and appeals they received. Three HMOs each added a drug to their formularies; another added Weight Watchers coverage. Other HMOs changed their processes or structure. Several HMOs reported changes to their systems for processing and paying emergency room claims. Two HMOs, for example, increased the number of emergency room diagnoses that they would automatically pay without reviewing the claim; claims that would previously have been denied were thus paid. Another HMO, in response to members' complaints about not being allowed to see well-regarded specialists in a nearby city, changed its policy so that, after a patient had been referred to a specialist within the HMO and had seen that specialist, the patient was then free to see any of several nonplan specialists in the city, and the HMO would pay for the specialists' services. HMOs changed other processes as well. Two HMOs increased staffing in their member service departments in order to reduce the time members telephoning the HMO spent on hold; another HMO added telephone lines for the same purpose. Several HMOs adopted centralized appointment systems or took other measures to increase the efficiency and timeliness of the appointment-setting process. Two HMOs reported changing their pharmacy benefits vendor as a result of member complaints. Some HMOs reported paying for an unauthorized service (for example, an unwarranted visit to the emergency room or an unauthorized visit to a specialist outside the network) but then sending the member a letter explaining why the member was not entitled to the service received and warning that a repeat occurrence would not be paid for. Through such policies (called by one HMO a "pay and educate" policy, by another a "first-time offender" policy), HMOs avoid an immediate appeal of a denied claim and hope to reduce unnecessary or unauthorized visits in the future. HMOs have also initiated efforts to educate their members. One HMO with a high number of appeals regarding denied payment for emergency room services increased publicity for its nurse hotline, a service provided to members who wanted medical advice, particularly members unsure whether a visit to the emergency room was necessary. Many HMOs reported using complaint and appeal data about specific providers as part of their processes for recredentialing providers; one HMO reported terminating a provider as a direct result of a member complaint. Some HMOs reported using complaint and appeal data to evaluate provider performance. For example, a few HMOs reported establishing peer review panels, in which providers within the HMO would review information, including complaints and appeals, to evaluate the performance of other providers. Three HMOs told us that many of the quality-of-care complaints they received from members actually resulted not from poor quality but from poor communication between providers and members. Two HMOs began training or educating providers in order to improve their communication.
Another HMO implemented a physician feedback survey to provide information to physicians about their communication and interpersonal skills. The policies HMOs have in place generally include most elements considered important to complaint and appeal systems. Yet the systems may not be working as well as they could to serve enrollees' interests. Better communication and information disclosure could improve the complaint and appeal process for the benefit of HMO members and plans. Many consumers may not fully understand the rules for gaining access to health care or the complex benefit structures in HMOs. As a result, members may seek care that the plan will not authorize or pay for, and member dissatisfaction increases. At the same time, even though HMO enrollment materials generally describe complaint and appeal systems in accurate detail, many members may not know of their right to complain or appeal or may not understand how to exercise that right. Innovative approaches might improve consumer understanding. For example, ombudsman programs might be an alternative way to facilitate consumer knowledge about, and use of, these systems. Ironically, members' inability to navigate the complaint process results in little formal tracking of patterns of problems encountered. Improved consumer knowledge might lead to more appropriate use of complaint and appeal systems and thus might provide more information to HMOs wishing to identify and address plan problems. Finally, consumers lack the information they need to compare plans in a meaningful way. If defined and collected uniformly, complaint and appeal data, as performance indicators, could be important tools for consumers when selecting a health plan. Publicly available, comparative information about the number and types of complaints and appeals, the outcomes of the dispute resolution process, and actions taken to correct problems would provide information about not only member satisfaction but also plan responsiveness to problems raised by members. Demand for, and use of, such information by consumers could have a positive influence on plan operations and quality through market competition. We obtained comments from AAHP, FUSA, NAIC, NCQA, and the Center for Health Care Rights. The reviewers provided specific technical corrections that they thought would provide clarification or reflect additional perspectives on the issues addressed. We incorporated comments and technical changes as appropriate. JCAHO did not respond to our request to provide comments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We will then send copies to those who are interested and make copies available to others on request. Please call me at (202) 512-7119 if you or your staff have any questions. Major contributors to this report include Rosamond Katz, Sigrid McGinty, Steve Gaty, and Craig Winslow. At our request, the National Conference of State Legislatures (NCSL) summarized state requirements regarding HMO complaint and appeal procedures as of April 1, 1998. Following are the requirements promulgated by each of the 50 states, as reported by NCSL. NCSL notes, however, that states may have additional requirements beyond those reported.
This overview focuses on the decisionmaking process (explicit time periods and graduated levels of review), the timeliness of the process, and forms of communication.

The state requires HMOs to establish and maintain a complaint system that has been approved by the commissioner, in consultation with the state health officer, to provide reasonable procedures for the resolution of written complaints initiated by enrollees. Evidence of coverage must include a clear and understandable description of the HMO's method for resolving enrollee complaints. The state requires graduated levels of review and provides for explicit time periods. The state does not (1) require graduated levels for the internal appeals process, (2) require HMOs to establish an independent or external review process, or (3) address required qualifications of the reviewer.

The state requires an HMO to establish and maintain a complaint system to provide reasonable procedures for the resolution of complaints initiated by enrollees. It also requires duplicate copies of complaints relating to patient care and facility operations to be forwarded to the commissioner of Health and Social Services. Evidence of coverage must contain a clear and concise statement of the HMO's method for resolving enrollee complaints. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

The state requires each health care organization to include in its disclosure forms a description of how to grieve a claim or treatment denial and express dissatisfaction with care and access-to-care issues. The state also requires graduated levels of review, an expedited review process, and explicit time periods. It requires that a request for an independent review be in writing. Evidence of coverage must include a detailed description of each level of review and of an enrollee's right to proceed to the next level of review if the appeal is not successful. The state requires written notification of determinations. All adverse determinations must include notification of the right to appeal to the next level of review. The state also requires the establishment of an independent review process whose determination is binding and addresses the required qualifications of the reviewer.

The state requires a health care insurer issuing a managed care plan to establish a grievance procedure that provides enrollees with a prompt and meaningful review on the issue of denial, in whole or in part, of a health care treatment or service. It authorizes the insurance commissioner to regulate and enforce these procedures; requires the procedures of HMOs to be approved by the commissioner, after consultation with the director of the Department of Health; and requires that a determination be in writing. In the event of an adverse outcome, the notice shall include specific findings related to the grievance. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, or (3) require HMOs to establish an independent external review process.

The state requires each plan to establish and maintain a grievance system approved by the department.
It requires a plan to inform enrollees upon enrollment and annually thereafter of the procedures for processing and resolving grievances, requires written notification of a determination, requires establishment of an expedited review process, and provides for explicit time periods. A subscriber may request voluntary mediation; expenses for mediation shall be borne by both sides. The department may request an informational meeting of the involved parties. The state also requires each plan to provide an external independent review process to examine the plan's coverage decision regarding experimental or investigational therapies for individuals who meet defined criteria and addresses the qualifications of the independent reviewers. The state does not require graduated levels for the internal appeals process.

The state requires a health carrier to establish written procedures for the review of an adverse determination involving a situation in which the time period of the review would not jeopardize either the life or health of a covered person or the covered person's ability to regain maximum function. The state also requires a first- and second-level review process, as well as an expedited review process, and provides for explicit time periods. A determination may be conveyed in writing, electronically, or orally; oral notifications must be followed by written notification. An adverse decision at the first level of review must include a description of the process for submitting a grievance. A covered person has the right to attend the second-level review, present his or her case to the review panel in person or in writing, or be assisted or represented by a person of his or her choice. Notifications of an adverse determination must include instructions for requesting a written statement of the clinical rationale and the additional appeal, review, arbitration, or other options available to the covered person. Notifications must also explain the covered person's right to contact the commissioner's office. Expedited determinations must be made within 72 hours, and written confirmation must follow within 2 working days of the notification. The state does not (1) require the establishment of an independent external appeals process or (2) address the required qualifications of the reviewer.

Connecticut requires each managed care organization to establish and maintain an internal grievance procedure to assure enrollees that they may seek a review of any grievance arising from a managed care organization's action or inaction and obtain a timely resolution of such grievance. The state requires that enrollees be informed of the procedures at initial enrollment and at not less than annual intervals. Notification must describe the procedures for filing a grievance; give the time periods in which a managed care organization must resolve the grievance; and indicate that the enrollee, someone acting for him or her, or his or her provider may ask for a review of the grievance. The state requires the establishment of both an expedited internal review process and an independent appeals process through the commissioner of insurance. The commissioner must accept the reviewing entity's decision. The enrollee must pay a filing fee of $25 for an independent appeal; the commissioner can waive the fee for an indigent person. The state does not require graduated levels of review for the internal review process.
The state requires organizations to have an approved written grievance program that will be available to its members as well as to any medical group or groups and other health delivery entities providing services through the organization. Copies of the procedures must be posted in a conspicuous place in all offices and sent to each member or member family when they are enrolled and each time the procedures are changed. The state provides for explicit time periods. Organizations must provide reasonable procedures for handling grievances initiated by members and record related information in a form that can be readily reviewed by the board of the organization. The organization must notify members whose grievances cannot be resolved that they may take their grievances to the board of directors. The state does not (1) require graduated levels for the internal appeals process, (2) require organizations to establish an independent external review process, or (3) address the required qualifications of the reviewer.

Florida requires every organization to have a grievance procedure available to its subscribers for the purpose of addressing complaints and grievances. A grievance must be filed within 1 year of the occurrence. At the time of receipt of the initial complaint, the organization must inform subscribers that they have a right to file a written grievance at any time and that assistance in preparing the written grievance will be provided by the organization. An expedited review process must be established. Plans must notify subscribers that they may voluntarily pursue binding arbitration in accordance with the terms of the contract, if offered by the organization, after completing the organization's grievance procedure and as an alternative to the Statewide Provider and Subscriber Assistance Program. For adverse determinations, an organization must make available to a subscriber a review of the grievance by an internal review panel. Explicit time periods are outlined. The review panel has the authority to bind the organization to the panel's decision. If the panel does not resolve the grievance, the individual may submit a grievance to the Statewide Provider and Subscriber Assistance Program. The Agency for Health Care Administration must review all unresolved claims. The final decision letter must inform subscribers that their request for review by the Statewide Provider and Subscriber Assistance Program must be made within 365 days after receipt of the final decision letter, must explain how to initiate such a review, and must include the addresses and toll-free telephone numbers of the Agency for Health Care Administration and the Statewide Provider and Subscriber Assistance Program. The state does not require graduated levels of review for the internal appeals process.

The state requires every HMO to maintain a complaint system that has been approved by the commissioner of insurance after consultation with the commissioner of human resources to provide reasonable procedures for the resolution of written complaints initiated by enrollees or providers concerning health care services. Evidence of coverage shall include enrollees' rights and responsibilities, including an explanation of the grievance procedures. The quality assurance program must establish a grievance procedure that provides enrollees with a prompt and meaningful hearing on the issue of denial of a health care treatment or service or claim. The hearing must be conducted by a panel of no fewer than three people.
Notification of the determination must be conveyed in writing. Notice of an adverse determination must include specific findings; the policies and procedures for making the determination; and a description of the procedures, if any, for reconsideration of the adverse decision. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

An application for a certificate of authority to operate in the state must be accompanied by a description of the internal grievance procedures used for the investigation and resolution of enrollee complaints and grievances. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Each HMO must establish a complaint system that has been approved by the director to resolve complaints initiated by enrollees concerning health care services. Annual reporting is required. Every HMO must show evidence that the grievance procedures have been reviewed and approved by enrollee representatives through their participation on the governing body or through other specified mechanisms. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

Every HMO must submit for the director's approval, and thereafter maintain, a system for the resolution of grievances concerning the provision of health care services or other matters concerning operation of the HMO. The grievance procedures must be fully and clearly communicated to all enrollees, and information concerning such procedures shall be readily available to enrollees. The state provides for specific time periods and requires written notification of the determination. Notice of the determination made at the final appeal step of the HMO's grievance process shall include a "Notice of Availability of the Department." The enrollee has the right to attend and participate in the formal grievance proceedings. The grievance committee must meet at the main office of the HMO or at another office designated by the HMO if the main office is not within 50 miles of the grievant's home address. The committee must consider the enrollee's request pertaining to the time and date of the meeting. The state does not (1) require graduated levels for the internal appeals process, (2) require HMOs to establish an independent external review process, or (3) address the required qualifications of the reviewer.

Health maintenance or limited service health maintenance organizations must establish and maintain a grievance procedure, approved by the commissioner, for the resolution of grievances initiated by enrollees and subscribers. The organization is required to provide each enrollee and subscriber with information on how to file a grievance. HMOs must provide a toll-free telephone number through which the enrollee can contact the HMO at no cost to the enrollee to obtain information and to file grievances. Grievances can be filed orally or in writing. HMOs are required to provide timely, adequate, and appropriate notice to each enrollee or subscriber of the grievance procedure.
A written description of the enrollee's or subscriber's right to file a grievance must be posted by the provider in a conspicuous public location in each facility that offers services on behalf of the HMO. Notification of determinations must be in writing. Explicit time periods and qualifications of the reviewer are also addressed. The state requires an expedited review process. HMOs must provide enrollees and subscribers the opportunity to appear in person at the review panel hearing or to communicate with the panel through other appropriate means if the enrollee or subscriber is unable to appear in person. The state does not (1) require graduated levels for the internal appeals process or (2) require HMOs to establish an independent external review process.

HMOs must establish and maintain a complaint system that has been approved by the commissioner and that provides for the resolution of written complaints initiated by enrollees concerning health care services. Evidence of coverage must include the HMO's methods for resolving enrollee complaints. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Every contract must include a clear, understandable description of the HMO's method for resolving a grievance. Evidence of coverage must include the HMO's methods for resolving enrollee complaints. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Every insurer must disclose a covered person's right to appeal, the procedure for initiating an appeal of a utilization management decision or the denial of payment, and the procedure for beginning an appeal through the Cabinet for Health Services. Insurers that deny coverage for treatment procedures, drugs, or devices must provide an enrollee with a denial letter that includes instructions for starting an appeals process within 2 working days for preauthorization requests, 24 hours for hospitalization, and 20 days for retrospective review and all other cases. The state does not (1) require graduated levels for the internal appeals process, (2) require HMOs to establish an independent external review process, or (3) address required qualifications of the reviewer.

Every HMO must establish and maintain a grievance procedure approved by the commissioner under which enrollees may submit their grievances to the HMO. HMOs must inform enrollees annually of the procedures, including the location and telephone number where grievances may be submitted. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

Health carriers or the carriers' designated utilization review entity must establish procedures for a standard appeal of an adverse determination. Adverse determinations must include an explanation of how to submit a written request for a second-level grievance and the procedures and time periods governing the second-level grievance review. An expedited review process must be established.
In the case of an expedited review, initial notification must be made by telephone, followed by written confirmation within 2 working days of the notification. Time periods are to be established by the carrier and are required to be expeditious. A covered person has the right to attend the second-level review and to present his or her case to the review panel. The state does not (1) require the establishment of an independent external appeals process or (2) address the required qualifications of the reviewer.

Maryland requires HMOs to provide an internal grievance system to resolve adequately any grievances initiated by any of its members on matters concerning quality of care. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Any organization seeking licensing as an HMO must submit an application that contains a statement of the grievance system, including procedures for the registration of grievances and procedures for the resolution of grievances, with a descriptive summary of written grievances made in the areas of medical care and administrative services. Evidence of coverage must include a description of the HMO's method for resolving HMO complaints. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

HMOs must establish an internal formal grievance procedure approved by the state insurance bureau. The state also requires written notification of adverse determinations, requires an expedited review process, provides that a request for an expedited review can be made in writing or orally, and provides for explicit time periods. If an enrollee has exhausted the internal grievance system, he or she may file a grievance with Task Force Three of the advisory commission. The commission shall render a determination as to the validity of the grievance and direct measures it considers appropriate under the circumstances. The state does not (1) require graduated levels for the internal appeals process or (2) address the required qualifications of the reviewer.

Current law requires each health plan company to establish and make available to enrollees, by July 1, 1998, an informal complaint resolution process. A plan must make reasonable efforts to resolve enrollee complaints and to inform enrollees of the decision in writing within 30 days of receiving the complaint. The state requires plans to establish an expedited review process. The state requires plans to make available to enrollees an impartial appeals process and to inform enrollees of their right to appeal through the process or to the commissioner. The state requires plans to have an alternative dispute resolution process. Plans are required to keep records of complaints and their resolution. The state requires plans to inform enrollees of their complaint resolution procedures as part of their evidence of coverage contract. Also by July 1, 1998, the commissioner must establish an expedited fact-finding and dispute resolution process to assist enrollees of health plan companies with contested treatment, coverage, and service issues.
The state does not (1) require graduated levels for the internal appeals process or (2) address required qualifications of the reviewer.

Every HMO must establish and maintain a grievance procedure approved by the state insurance commissioner in consultation with the state health officer. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Health carriers must establish and file with the director of the Department of Health all forms used to process a grievance. Evidence of coverage must include a description of grievance procedures. The state also requires the establishment of first- and second-level review processes, requires an expedited review process, allows oral requests for an expedited review, provides for explicit time periods, and addresses the required qualifications of the reviewer. Any decision must include notice of the enrollee's right to file an appeal of the grievance advisory panel's decision with the director's office. The director is required to resolve any grievance regarding an adverse determination as to covered services through any means not specifically prohibited by law. If the grievance is not resolved by the director, then it shall be resolved by referral to an independent review organization. Reports are to be filed with the director. The state currently has a law in place, but it does not become effective until 1999. The state is drafting rules.

Each HMO must establish and maintain grievance procedures to provide for the resolution of grievances initiated by enrollees. The procedures must be approved by the director of insurance after consultation with the director of regulation and licensure. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

Each managed care organization must establish a system for resolving enrollee complaints. The system must be approved by the commissioner in consultation with the state board of health. If an enrollee makes an oral complaint, the managed care organization is required to inform the enrollee that, if he or she is not satisfied with the resolution of the complaint, he or she must file a complaint in writing to receive further review of the complaint. Managed care organizations must allow an enrollee who appeals a decision to appear before the review board to present testimony at a hearing. Each managed care organization must provide to enrollees, in clear and comprehensible language, notice of their right to file a written complaint and to obtain an expedited review at the time they receive evidence of coverage, any time the organization denies coverage, and any other time deemed necessary by the commissioner. Denials of coverage must be in writing, provide the reason for the denial and the criteria used in making the determination, and explain the right of the enrollee to file a written complaint. The state provides for an expedited review process, provides for explicit time periods, and addresses required qualifications of the reviewer. The state does not require the establishment of an independent appeals process.

Health carriers must establish written procedures for receiving and resolving grievances from enrollees concerning adverse determinations and other matters. Enrollees must be provided with a written description of the procedures and informed of their right to contact the office of the commissioner for assistance at any time. This statement must include a toll-free telephone number and the address of the commissioner. A written denial must include a statement of the enrollee's right to gain access to the internal grievance procedures, including first- and second-level appeals. An adverse determination at the first level must include a description of the process for obtaining a second-level grievance review of a decision and the written procedures governing a second-level review, including the required time period for review. Enrollees may request the opportunity to appear in person at the second-level review. An adverse determination at the second-level appeal must include a statement of the enrollee's right to file an external appeal. The state requires an expedited appeals process, requires the establishment of an external review process, and addresses required qualifications of the reviewer.

Each HMO must establish and maintain a formal internal appeal process, with graduated levels of review, explicit time periods, an expedited review process, and written notification of determinations. Notification of an adverse determination must include information on further appeal rights. HMOs must also establish an external independent appeals process. When requesting an external appeal, enrollees must pay a $25 filing fee, although exceptions can be made in cases of financial hardship. HMOs are also required to provide enrollees with a description of their right to appeal; the procedure for initiating an appeal of a utilization management decision made by or on behalf of the carrier with respect to the denial, reduction, or termination of a health care benefit or the denial of payment for a health care service; and the procedure to initiate an appeal through the Independent Health Care Appeals Program. State requirements also address the qualifications of the reviewers.

Every managed health care plan is required to maintain procedures to provide for the presentation, management, and resolution of complaints and grievances brought by enrollees or by providers acting on behalf of an enrollee and with the enrollee's consent, regarding any aspect of health care services. Plans must provide written notification to enrollees that the procedures are available. Plans must disclose the toll-free telephone number and address of the managed health care plan's department responsible for resolving grievances. In instances in which an enrollee initially makes an oral complaint and expresses interest in pursuing a written grievance, the plan shall assist the enrollee in making a written complaint or initiating a grievance. State requirements include explicit time periods, first- and second-level reviews as well as an expedited review process, binding first-level review decisions unless the grievant submits a written appeal to the second-level review committee within 30 days of receipt of the determination, and written notification of the determination. During the second-level review, plans must offer enrollees the opportunity to communicate with the review committee—at the plan's expense—by conference call, video conferencing, or other appropriate technology. A request for an external review must be in writing.
The grievant and his or her representative may appear before the independent review board. The state does address the required qualifications of the reviewer.

HMOs must establish and maintain a grievance procedure that includes written notification of the procedures, grievances filed in writing or by telephone, and explicit time periods. Notice of an adverse determination must be in writing and explain the process for filing a grievance. Expedited determinations must be made by telephone followed by written notice within 3 business days. The required qualifications of the reviewer are also addressed. The state does not (1) require graduated levels for the internal appeals process or (2) require HMOs to establish an independent external review process.

Each application for a certificate of authority must be accompanied by a description of the internal grievance procedures to be used for the investigation and resolution of enrollee complaints and grievances. Evidence of coverage must include a clear and understandable description of the HMO's method of resolving enrollee complaints, including graduated levels of review, explicit time periods, and the availability of an independent appeals process. The state does not address required qualifications of the reviewer.

Every HMO must establish and maintain a grievance procedure, which has been approved by the commissioner, to provide procedures for the resolution of grievances initiated by enrollees. Evidence of coverage must contain a clear statement of the enrollee grievance procedures. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Ohio requires HMOs to establish and maintain a grievance procedure that has been approved by the commissioner to provide adequate and reasonable procedures for the expeditious resolution of written complaints initiated by enrollees concerning any matter relating to services provided by the HMO. The HMO must provide a timely written response to each complaint and establish procedures to accept complaints by telephone. Responses to written complaints must inform enrollees of their right to submit their complaint to a professional peer review organization or HMO peer review committee. Evidence of coverage must contain the methods used by the HMO for resolving complaints. Patients with a terminal condition and life expectancy of no more than 2 years for whom standard therapies have not been effective and who have been denied coverage for a therapy recommended by their physician have the right to an independent review of the coverage decision. The state does not (1) provide for explicit time periods or (2) require graduated levels for the internal appeals process.

HMOs must establish and maintain a grievance system to provide reasonable procedures for prompt payment and effective resolution of written grievances within explicit time periods. If the grievance can be resolved through a specific arbitration agreement, the enrollee shall be advised in writing of his or her rights and duties under the agreement at the time the grievance is registered. Any such agreement must be accompanied by a statement setting forth in writing the terms and conditions of binding arbitration.
Any HMO that makes such binding arbitration a condition of enrollment must fully disclose this requirement to its enrollees in the contract and evidence of coverage. HMOs, upon notifying enrollees of a final determination, must inform the enrollees that they may request assistance from the department. Evidence of coverage must include a description of the enrollee grievance procedures. The state does not (1) require graduated levels for the internal appeals process, (2) require HMOs to establish an independent external review process, or (3) address the required qualifications of the reviewer.

Insurers must have a timely and organized system for resolving grievances and appeals, with written procedures explaining the process, including a procedure to assist enrollees in filing written grievances, written explanations of determinations, at least two levels of review, and the opportunity for enrollees or a representative to appear before a review panel at either level of review. The state provides for explicit time periods. The state does not (1) require HMOs to establish an independent external review process or (2) address the required qualifications of the reviewer.

The Department of Health requires a two-step internal grievance and appeals process. The first step is a paper review and reconsideration. The second is a full hearing before a grievance review committee. An expedited review process must be established, as well as an external appeal to the department. The state does not address required qualifications of the reviewer.

Every HMO is required to establish and maintain a complaint system that has been approved by the director after consultation with the state director of health to provide reasonable procedures for the resolution of written complaints initiated by enrollees concerning health care services. The system must have two levels of appeal and an external appeals process. The required qualifications of the reviewers are also addressed. The state provides for explicit time periods and requires expedited review.

Each HMO must establish and maintain a complaint system that is approved by the director or his or her designee to provide reasonable procedures for the resolution of written complaints initiated by enrollees. The state does not (1) define explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

Each managed care plan or utilization review organization must establish and maintain a grievance system, approved by the director after consultation with the secretary of the Department of Health. The system may include an impartial mediation provision, to provide reasonable procedures for the resolution of written grievances initiated by enrollees concerning the provision of health care services. Mediation shall be available to enrollees unless an enrollee elects to litigate a grievance prior to submission to mediation. The state also addresses the required qualifications of the reviewer. The state requires explicit time periods and an expedited review process. The state does not (1) require graduated levels for the internal appeals process or (2) require HMOs to establish an independent external review process.

HMOs must use written procedures for receiving and resolving grievances from covered persons.
Each HMO must submit to the commissioner an annual report, in a form prescribed by the commissioner, which includes a description of the procedures of the complaint system. Evidence of coverage must include a description of the grievance procedures. Notification of the determinations must be in writing. HMOs are required to provide each covered person the name, address, and telephone number of the person designated to coordinate the grievance on behalf of the HMO, upon receipt of the grievance. Covered persons have the right to seek review by the commissioner or a designee of the commissioner. The commissioner or the commissioner's designee may consult with medical personnel in the Department of Health for grievances that involve primarily questions of medical necessity or medical appropriateness. The state provides for explicit time periods. The state does not (1) require graduated levels for the internal appeals process or (2) address the required qualifications of the reviewer.

Texas requires every HMO to establish and maintain an internal system for the resolution of complaints, including a process for the notice and appeal of complaints, written and oral filing of complaints, explicit time periods, an expedited review process, and written notification of determinations. In the event of an adverse determination, the HMO must provide an appeals process that includes the right of the complainant either to appear in person before a complaint appeal panel or to address a written appeal to the panel. Enrollees have the right to an external review to appeal adverse utilization review determinations when internal processes have been exhausted. The insurance commissioner may charge payers as necessary to fund the operation of the independent review organization. The state addresses required qualifications of the reviewer. The state does not require graduated levels for the internal appeals process.

Organizations must have a written grievance procedure and send it to each enrollee at the time of enrollment and each time the procedures are substantially changed. The organization's medical director or physician designee must review all grievances of a medical nature. Explicit time periods are provided. If a grievance cannot be resolved to the enrollee's satisfaction, the organization must notify the enrollee of his or her options—that is, litigation, arbitration, and so forth. The state does not (1) require graduated levels for the internal appeals process or (2) require HMOs to establish an independent external review process.

Each managed care plan must establish a review process that has been approved by the commissioner for members who are dissatisfied with the availability, delivery, or quality of their health care services. The state provides for graduated levels of review and an expedited process and also requires written notification of the determination. The determination must include a description of other processes available for further review of the grievance by the managed care plan or other reviewing body. Plans must provide members with all information in their possession or control relevant to the grievance process and the subject of the grievance, including applicable policies and procedures and copies of all necessary and relevant medical information. Plans must establish a mechanism whereby a person unable to file a written grievance may notify the plan of a grievance orally or through alternative means.
Enrollees have the right to appeal adverse mental health decisions to an external independent panel of mental health care providers. The state provides for explicit time periods and addresses required qualifications of the reviewer.

Each HMO must establish and maintain a complaint system to provide reasonable procedures for the resolution of written complaints. The system shall be established after consultation with the state health commissioner and approval by the commissioner. Evidence of coverage must include a description of the HMO's method for resolving enrollee complaints. The commissioner is charged with examining the quality of health care services of the HMOs and the providers with whom the HMOs have contracts. The commissioner is directed to consult with HMOs in the establishment of their complaint systems, review and analyze the complaint reports, and assist the State Corporation Commission. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

Washington requires HMOs to establish and maintain grievance procedures, approved by the commissioner, to provide reasonable and effective resolution of complaints initiated by enrolled participants. The state requires each health carrier to file with the commissioner its procedures for review and adjudication of complaints by enrollees or health care providers. Every health carrier must provide reasonable means whereby enrollees who are dissatisfied with the actions of a carrier may be heard in person or by their authorized representative on their written request for review. If the carrier fails to grant or reject such request within 30 days after it is made, the complaining person may proceed as if the complaint had been rejected. A complaint that has been rejected by a carrier may be submitted to nonbinding mediation. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address required qualifications of the reviewer.

HMOs must establish and maintain a grievance procedure that has been approved by the commissioner to provide adequate and reasonable procedures for the expeditious resolution of written grievances initiated by enrollees concerning any matter relating to any provisions of the organization's health maintenance contracts. A detailed description of an HMO's subscriber grievance procedures is to be included in all group and individual contracts, as well as any certificate or group member handbooks provided to subscribers. This procedure is to be administered at no cost to the subscriber. Telephone numbers are to be specified by the HMO for the subscriber to call to present an informal grievance or to contact the grievance coordinator. The subscriber grievance procedure is to state that the subscriber has the right to appeal to the commissioner. Written notification of the determination is required. The HMO must meet with the subscriber during the formal review process. An adverse determination must be accompanied by a statement about which levels of the grievance procedure have been processed and how many more levels remain. The state provides for an expedited review process. The state does not require the establishment of an independent review process.

The state requires each HMO, limited service health organization, and preferred provider plan to establish and use an internal grievance procedure. The procedure must be approved by the commissioner and provide enrolled participants with complete and understandable information describing the process. Written grievances may be submitted in any form. A grievance panel must be established. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

Each HMO is to establish and maintain a complaint system that has been approved by the commissioner, after consultation with the administrator, to provide reasonable procedures for the resolution of written complaints initiated by enrollees. Reports must be made to the commissioner and the administrator. The state does not (1) provide for explicit time periods, (2) require graduated levels for the internal appeals process, (3) require HMOs to establish an independent external review process, or (4) address the required qualifications of the reviewer.

The American Association of Health Plans (AAHP) is a trade organization representing more than 1,000 managed care plans, with an enrolled population of more than 100 million Americans. Criteria in our study were taken from Putting Patients First, an AAHP initiative designed to improve communication with patients and physicians and streamline administrative procedures in health plans.

Families USA (FUSA) is a national nonprofit consumer organization that advocates high-quality, affordable health and long-term care for all Americans. FUSA works at the national, state, and grassroots levels with organizations and individuals to help them participate in shaping health care policies in the public and private sectors. Criteria in our report were taken from a December 1997 FUSA document entitled "Evaluation Tool," containing FUSA criteria for evaluating 12 consumer protection issues.

The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the National Committee for Quality Assurance (NCQA) are accrediting bodies. Both organizations will, at the request of managed care organizations, send surveyors to review plan operations, including complaint and appeal systems. If plan procedures meet accreditation standards, the plan is granted accreditation. As of December 1997, JCAHO had granted accreditation to 25 organizations of the 43 that had applied. As of November 1997, NCQA had reviewed 285 plans: 157 had been granted full NCQA accreditation, 101 had been granted accreditation with some conditions, and 12 had been denied accreditation. (The remaining 15 plans were awaiting NCQA's decision.) However, not all plans accredited by either body are HMOs. Criteria in our report were taken from a 1997 draft of JCAHO's 1998-2000 Comprehensive Accreditation Manual for Health Care Networks, and NCQA's 1997 Surveyor Guidelines.

The National Association of Insurance Commissioners (NAIC) is a voluntary organization of the chief insurance regulatory officials of the 50 states, the District of Columbia, American Samoa, Guam, Puerto Rico, and the Virgin Islands. NAIC's stated mission is to assist state insurance regulators in protecting consumers and helping to maintain the financial stability of the insurance industry.
NAIC promulgates model laws, regulations, and guidelines intended to provide a uniform basis from which all states can deal with regulatory issues. Elements described in our report were taken from NAIC's 1996 Health Carrier Grievance Procedure Model Act, containing standards for the establishment of procedures used by health carriers to resolve member grievances, and NAIC's 1996 Utilization Review Model Act.

Pursuant to a congressional request, GAO examined: (1) what elements are considered important to a system for processing health maintenance organization (HMO) member complaints and appeals; (2) the extent to which HMOs' complaint and appeal systems contain these elements; (3) what concerns consumers have regarding HMO complaint and appeal systems; (4) what information is available on the number and types of complaints and appeals HMOs receive from their members; and (5) how, if at all, HMOs use their complaint and appeal data.

GAO noted that: (1) a majority of HMOs in GAO's study incorporated most criteria considered important for complaint and appeal systems; however, consumer advocates remain concerned that complaint and appeal systems do not fully meet member needs; (2) additionally, HMOs in GAO's study do not uniformly collect and report data on the complaints and appeals they receive to health care regulators, purchasers, or consumers; (3) nationally recognized regulatory, consumer, and industry groups have identified elements that are important to an enrollee complaint and appeal system; (4) 11 elements were identified by at least 2 of these groups and fall into 3 general categories: timeliness, integrity of the decisionmaking process, and effective communication with members; (5) the policies and procedures at the 38 HMOs in GAO's review contained most of the 11 important elements, although they varied considerably in the mechanisms adopted to meet them; (6) the lack of an independent, external review of plan decisions and the difficulty in understanding how to use plan complaint and appeal systems were of particular concern to consumer advocacy groups, who contend that plans' systems, therefore, do not adequately serve the needs of plan enrollees; (7) however, consumer concerns about the impartiality of HMO decisionmakers could be addressed by using independent, external review systems for HMO members; (8) consumer concerns about the difficulty in understanding how to use complaint and appeal systems might be addressed by revising written plan materials, which are often difficult to understand; (9) additionally, although experience to date is limited, such concerns are being addressed by ombudsman programs in some parts of the country; (10) publicly available data on the number and types of complaints and appeals, if consistently defined and uniformly collected, can enhance oversight, accountability, and market competition; (11) comparative data would provide regulators, purchasers, and individual consumers with a view of members' relative satisfaction with health plans, thereby supplementing other performance indicators; (12) all HMOs in GAO's study stated that they review complaint and appeal data to identify problems that the plan needs to address; and (13) several HMOs reported using complaint and appeal data, together with data from other sources, to make changes in benefits and plan processes, and to attempt changes in member and provider behavior as well.

According to 2007 NHIS data, fewer than 40 percent of adults in the United States reported ever having been tested for HIV. In a recent survey by the Henry J. Kaiser Family Foundation, the primary reason people gave for not being tested is that they do not think they are at risk. The second most common reason was that their doctor never recommended HIV testing. While 38 percent of adults said that they had talked to their doctor about HIV, only 17 percent said that their doctor had suggested an HIV test. According to this survey, African Americans and Latinos were more likely than adults overall to have had such a conversation with their doctor and for the doctor to have suggested testing. Sixty-seven percent of African Americans and 45 percent of Latinos said that they had talked to their doctor about HIV and 29 percent of African Americans and 28 percent of Latinos said that their doctor had suggested an HIV test.

Technological advances have increased the benefits associated with HIV testing as well as with regular care and treatment for HIV. First, advances in testing methods, such as rapid HIV tests, have made testing more feasible in a variety of different settings and increased the likelihood that individuals will receive their results. Rapid tests differ from conventional HIV tests in that results are ready sometime from immediately after the test is performed to 20 minutes after the test is performed, which means that individuals can get tested and receive their results in the same visit. Second, the advent of highly active antiretroviral therapy (HAART) has transformed HIV from a fatal disease to a treatable condition. For example, a 25-year-old individual who is in care for HIV can expect to live only 12 years less than a 25-year-old individual who does not have HIV. In addition, studies have found that people generally reduce risky behaviors once they learn of their HIV-positive status. According to one study, people who are unaware that they are HIV positive are 3.5 times more likely to transmit the disease to their partners than people who know their status.

At the same time, research has shown that individuals are often unaware of their status until late in the course of the disease despite visits to health care settings. For example, one study looked at HIV case reporting in a state over a 4-year period. The study found that of people who were diagnosed with HIV late in the course of the disease, 73 percent made at least one visit to a health care setting prior to their first reported positive HIV test, and the median number of prior visits was four.

Funding for HIV testing can come from insurance reimbursement by private insurers as well as Medicaid and Medicare, although these payers do not cover HIV testing under all circumstances. Funding for HIV testing can also come from other government sources, such as CDC, CARE Act programs, or state and local funding. A study by CDC and the Henry J. Kaiser Family Foundation that looked at the insurance coverage of individuals at the time of their HIV diagnosis from 1994-2000 found that 22 percent were covered by Medicaid, 19 percent were covered by other public-sector programs, and 27 percent were uninsured.

The cost of an HIV test varies based on a number of factors, including the type of test performed, the test result, and the amount of counseling that is associated with the test. For example, from a payer's perspective, the costs of a rapid HIV test are higher for someone who is HIV positive than for someone who is not, primarily because rapid testing requires an initial rapid test and a confirmatory test when the result is positive with counseling conducted after both tests. Additionally, eliminating pretest counseling can lower the cost of HIV testing by about $10, regardless of the type of test. According to the most recent data available from CDC, in 2006, the cost of an HIV test could range from $10.16 to $86.84 depending on these and other factors.

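To make the cost arithmetic concrete, the following is a minimal sketch of the payer's-perspective calculation described above. The component costs are hypothetical assumptions; the report gives only the overall range and the roughly $10 counseling figure.

```python
# Minimal sketch of per-person HIV testing cost from a payer's perspective.
# All component costs are hypothetical; the report gives only an overall
# 2006 range ($10.16 to $86.84) and notes that pretest counseling adds
# about $10 regardless of the type of test.

RAPID_TEST = 15.00          # hypothetical cost of the initial rapid test
CONFIRMATORY_TEST = 30.00   # hypothetical cost of the confirmatory test
COUNSELING = 10.00          # per-session counseling cost (about $10)

def testing_cost(hiv_positive: bool, pretest_counseling: bool) -> float:
    """Cost of testing one person under a rapid-testing protocol."""
    cost = RAPID_TEST + COUNSELING  # counseling after the rapid test
    if hiv_positive:
        # A positive rapid test triggers a confirmatory test, with
        # counseling conducted after that test as well.
        cost += CONFIRMATORY_TEST + COUNSELING
    if pretest_counseling:
        cost += COUNSELING
    return cost

print(testing_cost(hiv_positive=False, pretest_counseling=True))   # 35.0
print(testing_cost(hiv_positive=True, pretest_counseling=True))    # 75.0
print(testing_cost(hiv_positive=False, pretest_counseling=False))  # 25.0
```

Whatever the actual component costs, this structure shows why a positive result costs the payer more and why dropping pretest counseling lowers every figure by about $10.
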
CDC issued its first recommendations for HIV testing in health care settings in 1987. These recommendations focused on individuals engaged in high-risk behaviors and specifically recommended that people who were seeking treatment for STDs be tested for HIV on a routine basis. Throughout the 1990s and 2000s, CDC updated these recommendations periodically to reflect new information about HIV. For example, in 2001, CDC modified its recommendations for pregnant women to emphasize that HIV testing should be a routine part of prenatal care and that the testing process should be simplified to eliminate barriers to testing, such as requiring pretest counseling. The 2001 recommendations also called for HIV testing to be conducted routinely in all health care settings with a high prevalence of HIV; in low-prevalence settings, it was recommended that HIV testing be conducted based on an assessment of risk.

In 2003, CDC introduced a new initiative called "Advancing HIV Prevention: New Strategies for a Changing Epidemic." The initiative had a number of strategies, including two that specifically applied to health care settings: (1) making HIV testing a routine part of medical care and (2) further reducing perinatal transmission of HIV by universally testing all pregnant women and by using HIV rapid tests during labor and delivery or postpartum if the mother had not been tested previously.

Elements of the Advancing HIV Prevention initiative were incorporated into CDC's revised HIV testing recommendations for health care settings in 2006. The 2006 recommendations represent a major shift from prior recommendations for health care settings in that they no longer base HIV testing guidelines on risk factors. Rather, they recommend that routine HIV testing be conducted for all patients ages 13 through 64 in all health care settings on an opt-out basis. CDC also recommends that persons at high risk of HIV be tested annually; that general consent for medical care encompass consent for HIV testing (i.e., separate written consent is not necessary); and that pretest information, but not pretest counseling, be required.

According to CDC, tracking the prevalence of HIV is necessary to help prevent the spread of the disease. CDC's surveillance system consists of case counts submitted by states on the number of HIV and AIDS diagnoses, the number of deaths among persons with HIV, the number of persons living with HIV or AIDS, and the estimated number of new HIV infections. HIV laboratory tests, specifically CD4 or viral load tests, can be used to determine the stage of the disease, measure unmet health care needs among HIV-infected persons, and evaluate HIV testing and screening activities. Current CDC estimates related to HIV are not based on data from all states because not all states have been reporting such data by name long enough to be included in CDC's estimates.

While all states collect AIDS case counts through name-based systems, prior to 2008 states collected HIV data in one of two different formats, either by name or by code. CDC does not accept code-based case counts for counting HIV cases because CDC does not consider them to be accurate and reliable, primarily because they include duplicate case counts. In order for CDC to use HIV case counts from a state for CDC's estimated diagnoses of HIV infection, the name-based system must be mature, meaning that the state has been reporting HIV name-based data to CDC for 4 full calendar years. CDC requires this time period to allow for the stabilization of data collection and for adjustment of the data in order to monitor trends. In its most recent surveillance report, CDC used the name-based HIV case counts from 34 states and 5 territories and associated jurisdictions in its national estimates. Name-based HIV reporting had been in place in these jurisdictions since the end of 2003 or earlier.

Under the CARE Act, approximately $2.2 billion in grants were made to states, localities, and others in fiscal year 2009. Part A of the CARE Act provides for grants to selected metropolitan areas that have been disproportionately affected by the HIV epidemic to provide care for HIV-positive individuals. Part B provides for grants to states and territories and associated jurisdictions to improve quality, availability, and organization of HIV services. Part A and Part B base grants are determined by formula based on the number of individuals living with HIV and AIDS in the grantee's jurisdiction. For the living HIV/AIDS case counts HRSA used to determine fiscal year 2009 Part A and Part B base grants, see appendices II and III. Part C provides for grants to public and private nonprofit entities to provide early intervention services, such as HIV testing and ambulatory care. Part F provides for grants for demonstration and evaluation of innovative models of HIV care delivery for hard-to-reach populations, training of health care providers, and for Minority AIDS Initiative grants.

Since the 2006 reauthorization of CARE Act programs, HRSA has placed an emphasis on states' unmet need, which is the number of individuals in a state's jurisdiction who know they are HIV positive but who are not receiving care for HIV. According to the framework used by HRSA, addressing unmet need is a three-step process. First, states are required to produce an unmet need estimate, which is submitted to HRSA on the state's annual Part B grant application. To calculate the unmet need, the state must determine the total number of individuals who are aware of their HIV positive status in their jurisdiction and then subtract the number of individuals who are receiving care for HIV, as sketched below. Second, the state must assess the service needs and barriers to care for individuals who are not receiving care for HIV, including finding out who they are and where they live. Third, the state must address unmet need by connecting these individuals to care.

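As a minimal illustration of the unmet need arithmetic, using hypothetical counts rather than any state's actual figures:

```python
# Hypothetical unmet need estimate for a single state's Part B application.
# Unmet need = (people who know they are HIV positive)
#            - (people receiving care for HIV)

aware_of_hiv_positive_status = 12_000  # hypothetical statewide count
receiving_hiv_care = 9_500             # hypothetical count in care

unmet_need = aware_of_hiv_positive_status - receiving_hiv_care
print(unmet_need)  # 2500 people who know their status but are not in care
```
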
CDC and HRSA have coordinated on activities to assist health care professionals who provide HIV-related services. HRSA has encouraged routine HIV testing by providing for training for health care providers as part of CDC-funded initiatives. CDC has taken other steps to encourage routine HIV testing by funding special initiatives that focus on certain populations.

Since 2006, CDC and HRSA have coordinated activities to assist health care professionals who provide HIV-related services. In 2007, CDC and HRSA initiated a clinic-based research study, "Increasing Retention in Care among Patients Being Treated for HIV Infection," to develop, implement, and test the efficiency and effectiveness of an intervention designed to increase client appointment attendance among patients at risk of missing scheduled appointments in HIV clinics. An interagency agreement outlined the responsibilities of CDC and HRSA with respect to the study. For example, under the agreement, CDC is responsible for maintaining data gathered from the study and HRSA is responsible for presenting the findings at national and international conferences. Each agency provided $1.3 million for the study in fiscal year 2009 and will continue to provide funds for the study until its final year of operation in 2011.

In coordination with a federal interagency work group, CDC and HRSA have also participated in the development and publication of a document for case managers who work with individuals with HIV. The document, "Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs," outlines best practices for, and six recommended components of, HIV case management for federally funded HIV case management agencies. The document also describes how case management is practiced in different settings and methods for strengthening linkages among case management programs. CDC and HRSA were the lead authors of the document and shared staff time and production expenses. The agencies published the document in February 2009.

CDC also provided HRSA with funding to expand HIV consultation services offered to health care professionals at the National HIV/AIDS Clinicians' Consultation Center. The National HIV/AIDS Clinicians' Consultation Center is a component of the HRSA-administered AIDS Education and Training Centers (AETC) program. The Consultation Center operates hotline systems to provide consultation to health care professionals, including the PEPline and the Perinatal Hotline. Health care professionals access the PEPline to receive information on post-exposure management for health care professionals exposed to blood-borne pathogens and the Perinatal Hotline for information on treatment and care for HIV-diagnosed pregnant women and their infants. CDC provided HRSA with $169,000 to support the PEPline and Perinatal Hotline in fiscal year 2007 and $90,000 to support the PEPline in fiscal year 2008. In addition, CDC provided HRSA with $180,000 during fiscal years 2007 and 2008 for the enhancement of existing consultation services at the Consultation Center for health care professionals who expand HIV testing and need assistance in managing a resulting increase in patients who are HIV positive.

In addition, CDC and HRSA have coordinated to prevent duplication of HIV training provided to health care professionals. The CDC-funded National Network of STD/HIV Prevention Training Centers, HRSA-funded AETCs, and other federal training centers participate in the Federal Training Centers Collaboration to ensure that HIV training opportunities are not duplicated among the centers. The agencies hold biennial national meetings to increase training coordination for STD/HIV prevention and treatment, family planning/reproductive health, and substance abuse prevention to maximize the use of training resources. In addition to coordinating on HIV activities that assist health care professionals, CDC and HRSA have participated in the CDC/HRSA Advisory Committee on HIV and STD Prevention and Treatment.

The Advisory Committee was established by the Secretary of HHS in November 2002 to assess HRSA and CDC objectives, strategies, policies, and priorities for HIV and STD prevention and care and serves as a forum to discuss coordination of HIV activities. The committee meets twice a year and is composed of 18 individuals who are nominated by the HHS Secretary to serve 2- to 4-year terms and are knowledgeable in such public health fields as epidemiology, infectious diseases, drug abuse, behavioral science, health care delivery and financing, state health programs, clinical care, preventive health, and clinical research. The members assess the activities administered by HRSA and CDC, including HIV testing initiatives and training programs, and make recommendations for improving coordination between the two agencies to senior department officials, including the HHS Secretary. Officials from CDC and HRSA regularly attend the meetings to present current HIV initiatives administered by their agencies.

Officials from 6 of the 14 state and local health departments we interviewed said that CDC and HRSA coordination on HIV activities could be improved. For example, officials from 3 of these health departments attributed the lack of coordination to differing guidelines CDC and HRSA use for their grantees. Officials from 1 health department stated that although they have the same desired outcome, CDC and HRSA do not always coordinate on activities that they fund. They noted that the two agencies have inconsistent policies for HIV-related activities, such as confidentiality guidelines and policies for data sharing. Officials from another health department stated that the two agencies could improve coordination on HIV testing and guidelines for funding HIV testing initiatives.

Since the release of CDC's 2006 routine HIV testing recommendations, HRSA has encouraged routine HIV testing by providing for training for health care providers as part of CDC-funded initiatives. CDC and HRSA developed interagency agreements through which CDC provided $1.75 million in 2007 and $1.72 million in 2008 to HRSA-funded AETCs to develop curricula, training, and technical assistance for health care providers interested in implementing CDC's 2006 routine HIV testing recommendations. As of June 2008, AETCs had conducted over 2,500 training sessions for more than 40,000 health care providers on the recommendations.

HRSA also provided for training during CDC-funded strategic planning workshops on routine HIV testing for hospital staff. CDC officials said that in 2007, the agency allocated over $900,000 for workshops in eight regions across the country on implementing routine HIV testing in emergency departments. CDC reported that 748 attendees from 165 hospitals participated in these workshops. HRSA-funded AETCs from each of the eight regions provided information on services they offer hospitals as they prepare to implement routine HIV testing and also served as facilitators during the development of hospital-specific strategic plans.

In addition, HRSA provided for training as part of a CDC-funded pilot project to integrate routine HIV testing into primary care at community health centers. HRSA officials said that their primary role in this project, called "Routine HIV Screening within Primary Care in Six Southeastern Community Health Centers," was to provide for training on routine HIV testing and to ensure that HIV-positive individuals were connected to care, and that CDC provided all funding for the project.

CDC officials told us that the first phase of the project funded routine HIV testing in two sites in Mississippi, two sites in South Carolina, and two sites in North Carolina. The CDC officials said that in 2008 four sites in Ohio were added and that these sites are receiving funding through CDC's Expanded HIV Testing initiative. CDC officials said that they plan to start a second phase of the project with additional testing sites. CDC has taken other steps to encourage routine HIV testing by funding special initiatives that focus on certain populations. In 2007, CDC initiated a 3-year project for state and local health departments called the "Expanded and Integrated Human Immunodeficiency Virus (HIV) Testing for Populations Disproportionately Affected by HIV, Primarily African Americans" initiative, or the Expanded HIV Testing initiative. In the first year of the initiative, CDC awarded just under $35 million to 23 state and local health departments that had an estimated 140 or more AIDS cases diagnosed among African Americans in 2005. Individual awards were based proportionately on the number of cases, with amounts to each jurisdiction ranging from about $700,000 to over $5 million. Funding after the first year of the initiative was to be awarded to these same health departments on a noncompetitive basis, assuming availability of funds and satisfactory performance. Funding for the second year of the initiative was just over $36 million and included funding for 2 additional health departments, bringing the total number of funded departments to 25. CDC asked health departments participating in the Expanded HIV Testing initiative to develop innovative pilot programs to expand testing opportunities for populations disproportionately affected by HIV, primarily African Americans, who are unaware of their status. CDC required health departments to spend all funding on HIV testing and related activities, including the purchase of HIV rapid tests and connecting HIV-positive individuals to care. CDC strongly encouraged applicants to focus at least 80 percent of their pilot program activities on health care settings, including settings to which CDC had not previously awarded funding for HIV testing, such as emergency rooms, inpatient medical units, and urgent care clinics. Additionally, CDC required that programs in health care settings follow the agency's 2006 routine HIV testing recommendations to the extent permitted by law. Programs in non-health care settings were to have a demonstrated history of at least a 2 percent rate of HIV-positive test results. The 2006 reauthorization of CARE Act programs included a provision for the Early Diagnosis Grant program, under which CDC would make HIV prevention funding for each of fiscal years 2007 through 2009 available to states that had implemented policies related to routine HIV testing for certain populations. These policies were (1) voluntary opt-out testing of all pregnant women and universal testing of newborns or (2) voluntary opt-out testing of patients at STD clinics and substance abuse treatment centers. CDC's fiscal year 2007 appropriation prohibited it from using funding for Early Diagnosis grants. In fiscal year 2008, CDC's appropriation provided up to $30 million for the grants. CDC officials told us that in 2008, the agency awarded $4.5 million to the six states that had implemented at least one of the two specified policies as of December 31, 2007.
In fiscal year 2009, CDC’s appropriation provided up to $15 million for grants to states newly eligible for the program. CDC officials said that in 2009, one state received funding for implementing voluntary opt-out testing at STD clinics and substance abuse treatment centers. CDC officials also told us that they provided HRSA with information on how the Early Diagnosis Grant program would be implemented, but have not coordinated with the agency on administration of the program. Officials from just over half of the state and local health departments we interviewed said that their departments had implemented routine HIV testing in their jurisdictions, but that they generally did so in a limited number of sites. Officials from most of the health departments we interviewed and other sources knowledgeable about HIV have identified barriers to routine HIV testing, including lack of funding. Officials from 9 of the 14 state and local health departments we interviewed said that their departments had implemented routine HIV testing, but 7 said that they did so in a limited number of sites. Specifically, officials from 5 of the state health departments we interviewed said that their departments had implemented routine HIV testing in anywhere from one to nine sites and officials from 2 of the local health departments said that their departments had implemented it in two and four sites, respectively. Officials from all but 1 of these 7 departments said that their departments used funding from CDC’s Expanded HIV Testing initiative to implement routine HIV testing. CDC’s goal for its Expanded HIV Testing initiative is to test 1.5 million individuals for HIV in areas disproportionately affected by the disease and identify 20,000 HIV-infected persons who are unaware of their status per year. During the first year of the initiative, health departments that received funding under the CDC initiative reported conducting just under 450,000 HIV tests and identifying approximately 4,000 new HIV-positive results. The two other health departments that had implemented routine HIV testing—one state health department and one local health department located in a large city—had been able to implement routine HIV testing more broadly. These departments had implemented routine HIV testing prior to receiving funding through the Expanded HIV testing initiative, and used the additional funding to expand the number of sites where it was implemented. For example, the local health department had started an initiative to achieve universal knowledge of HIV status among residents in an area of the city highly affected by HIV. The department used funding from the Expanded HIV Testing initiative and other funding sources to implement routine HIV testing in this area and other sites throughout the city, including 20 emergency rooms. An official from the state health department said that while the department had already funded routine HIV testing in some settings, for example STD clinics and community health centers, funding from the Expanded HIV Testing initiative allowed them to fund routine HIV testing in other types of settings, for example emergency rooms. Officials from five health departments we interviewed said that their departments had not implemented routine HIV testing in their jurisdictions, including three state health departments and two local health departments. 
None of these health departments received funding through CDC's Expanded HIV Testing initiative, and officials from two of the state health departments specifically cited this as a reason why they had not implemented routine HIV testing. Officials from all of the departments that had not implemented routine HIV testing said that their departments do routinely test certain populations for HIV, including pregnant women, injection drug users, and partners of individuals diagnosed with HIV. Officials from 11 of the 14 state and local health departments we interviewed and other sources knowledgeable about HIV have identified barriers that exist to implementing routine HIV testing. Officials from 5 of the 11 health departments cited lack of funding as a barrier to routine HIV testing. For example, an official from 1 state health department told us that health care providers have said that they would do routine HIV testing if they could identify who would pay for the cost of the tests. The need for funding was corroborated by officials from an organization that contracts with state and local health departments to coordinate HIV-related care and services. These officials told us that they had often seen routine HIV testing end when funding streams dried up and noted that there has been little implementation of CDC's 2006 routine HIV testing recommendations in their area outside of STD clinics and programs funded through the Expanded HIV Testing initiative. Officials from state and local health departments we interviewed and other sources also cited lack of insurance reimbursement as a barrier to routine HIV testing. When identifying lack of funding as a barrier to routine HIV testing, officials from two state health departments we interviewed explained that there is a general lack of insurance reimbursement for this purpose. Other organizations we interviewed and CDC also raised the lack of insurance reimbursement for routine HIV testing as a barrier. For example, one provider group that we spoke with said that many providers are hesitant to offer HIV tests without knowing whether they will be reimbursed for them. In a recent presentation, CDC reported that, as of May 2009, all of the 11 insurance companies it reviewed covered targeted HIV testing, but only 6 reimbursed for routine HIV testing. CDC also reported that as of this same date only one state required that insurers reimburse for HIV tests regardless of whether testing is related to the primary diagnosis. CDC noted that legislation similar to this state's has been introduced, but not passed, in two other states as well as at the federal level. Medicare does not currently reimburse for routine HIV testing, though the Centers for Medicare & Medicaid Services has initiated a national coverage analysis as the first step in determining whether Medicare should reimburse for this service. While federal law allows routine HIV testing as a covered service under Medicaid, individual states decide whether or not they will reimburse for routine HIV testing. According to one study, reimbursement for routine HIV testing has not been widely adopted by state Medicaid programs. Many insurers, including Medicare and Medicaid, base their reimbursement policies on the recommendations of the U.S. Preventive Services Task Force, the leading independent panel of private-sector experts in prevention and primary care.
While the Task Force has recommended that clinicians conduct routine HIV testing when individuals are at increased risk of HIV infection and for all pregnant women, it has not made a recommendation for routine HIV testing when individuals are not at increased risk, saying that the benefit in this case is too small relative to the potential harms. In addition, officials from three state health departments we interviewed discussed legal barriers to implementing routine testing. For example, officials from one department said that implementation of routine HIV testing would require a change in state law to eliminate the requirement for pretest counseling and written informed consent. Similarly, officials from another department said that while their department had been able to conduct routine testing through the Expanded HIV Testing initiative, expanding it further might require changing state law to no longer require written informed consent for HIV testing. The officials explained that while the initiative did have a written informed consent form, the department had been able to greatly reduce the information included on the form in this instance. The department is currently looking for ways to further expand HIV testing without having to obtain changes to state law. According to a study published in the Annals of Internal Medicine, as of September 2008, 35 states' laws did not present a barrier to implementing routine HIV testing, though the 3 states discussed above were identified as having legal barriers. Officials from 3 of the state and local health departments we interviewed discussed operational barriers to integrating routine HIV testing with the policies and practices already in place in health care settings. For example, an official from a state health department said that the department tries to work past operational barriers to routine HIV testing, but if after 6 months the barriers prove too great in one site, the department moves implementation of routine HIV testing to another site. An official from another state health department noted that in hospital settings it can take a long time to obtain approval for new protocols associated with routine HIV testing. NASTAD conducted a survey of the 25 state and local health departments that received funding through the Expanded HIV Testing initiative and found that health departments reported some barriers in implementing routine HIV testing, including obtaining buy-in from staff in health care settings and providing adequate training, education, and technical assistance to these staff. Other barriers mentioned by officials from health departments we interviewed included health care providers not being comfortable testing everyone for HIV and concerns about providers' capacity to care for the increased number of people who might be diagnosed through expanded HIV testing. CDC officials estimated that approximately 30 percent of the agency's annual HIV prevention funding is spent on HIV testing. For example, according to CDC officials, in fiscal year 2008 this would make the total amount spent on HIV testing about $200 million out of the $652.8 million CDC allocated for domestic HIV prevention to its Division of HIV/AIDS Prevention. Of the $200 million CDC officials estimated was spent on testing, CDC reported that, in fiscal year 2008, $51.1 million was spent on special HIV testing initiatives, such as the Expanded HIV Testing initiative and the Early Diagnosis Grant program.
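The arithmetic behind these estimates can be reconstructed from the figures cited above. The following is a minimal illustrative sketch; the 30 percent share is CDC officials' estimate, and the residual category is our own illustration rather than a CDC-reported figure.

```python
# Illustrative reconstruction of CDC's fiscal year 2008 HIV testing
# spending estimate, using only the figures cited in this report.

division_budget = 652.8e6   # FY 2008 domestic HIV prevention allocation
testing_share = 0.30        # CDC officials' estimated share spent on testing

estimated_testing = division_budget * testing_share
print(f"Estimated testing spending: ${estimated_testing / 1e6:.1f} million")
# about $195.8 million, consistent with the roughly $200 million cited

special_initiatives = 51.1e6  # reported spending on special testing initiatives
unitemized = estimated_testing - special_initiatives
print(f"Not separately itemized: ${unitemized / 1e6:.1f} million")
# the remaining roughly $145 million flows through grants and contracts
# that CDC does not aggregate by activity
```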
CDC officials said that, outside of special testing initiatives, they could not provide the exact amount CDC spent on HIV testing. CDC's Division of HIV/AIDS Prevention spends the majority of its domestic HIV prevention budget through cooperative agreements, grants, and contracts with state and local health departments and other funded entities. CDC officials explained that grantees submit reports to CDC on the activities they fund at the middle and end of the year. The officials said that while project officers check to see that these reports are consistent with how grantees planned to spend their funding, CDC does not routinely aggregate how much all grantees spent on a given activity, including HIV testing. In addition, outside of the Expanded HIV Testing initiative, CDC does not maintain data on how funds for HIV testing are distributed to different settings within jurisdictions. For example, CDC does not have data on how much money a state health department spends on testing in emergency rooms versus how much it spends on testing in community-based organizations. According to data from NHIS, nearly 70 percent of all HIV tests in the United States were conducted in a private doctor's office, HMO, or hospital setting in 2007. Specifically, 50 percent of all HIV tests were conducted in a private doctor's office or HMO, and nearly 20 percent were conducted in a hospital setting, including emergency departments. The remaining tests were conducted in a variety of settings, including public clinics and HIV counseling and testing sites. Less than 1 percent of all HIV tests were conducted in a correctional facility, STD clinic, or drug treatment facility. These data are similar to earlier data from NHIS. In 2002, NHIS found that 44 percent of all HIV tests were conducted in a private doctor's office or HMO and 22 percent were conducted in a hospital setting. Analysis of CDC surveillance data on the settings in which HIV-positive individuals are diagnosed suggests that approximately 40 percent of all HIV-positive results in the United States occurred in a private doctor's office, HMO, or hospital setting in 2007, the most recent year for which data were available. These data also suggest that hospital inpatient settings account for a disproportionate number of HIV-positive results discovered late in the course of the disease. In 2007, hospital inpatient settings accounted for 16 percent of all HIV-positive results. Among HIV cases diagnosed in 2006, these same settings accounted for 31 percent of HIV-positive results that occurred within 1 year of an AIDS diagnosis. While CDC surveillance data can provide some indication of the types of settings where the greatest percentage of HIV-positive results occur, data limitations did not permit a more detailed analysis of HIV-positive results by setting type. Specifically, information on facility of diagnosis was missing or unknown for nearly one out of every four HIV cases reported through the surveillance system in 2007. CDC officials told us that in the past the agency used data from the Supplement to HIV/AIDS Surveillance project to examine the types of settings where individuals test positive for HIV, but this project ended in 2004. CDC reported that in place of the Supplement to HIV/AIDS Surveillance project, the agency has implemented the Medical Monitoring Project. However, data from the Medical Monitoring Project were not available at the time of our analysis.
CDC has calculated a national estimate of more than 200,000 undiagnosed HIV-positive individuals, that is, individuals who were unaware that they were HIV positive and were therefore not receiving care for HIV. CDC estimated that 232,700 individuals, or 21 percent of the 1.1 million people living with HIV at the end of 2006, were unaware that they were HIV positive. CDC does not have a national estimate of the total number of diagnosed individuals not receiving care, but CDC has calculated a national estimate of more than 12,000 diagnosed HIV-positive individuals who did not receive care within a year after they were diagnosed with HIV in 2003. CDC reported that the estimated proportion of individuals with HIV who did not receive care within a year of diagnosis, which CDC measures by the number of HIV-positive individuals who did not have a reported CD4 or viral load test within this time, was 32.4 percent, or 12,285 of the 37,880 individuals who were diagnosed with HIV in 2003. Because this estimate is based on the number of HIV-positive individuals who did not receive care within a year of diagnosis, it does not capture all individuals diagnosed with HIV who are not receiving care. For example, an individual may receive care within a year of diagnosis but subsequently drop out of care 2 years later, or an individual may not receive care until 2 years after diagnosis. In either case, the change in the individual's care status is not reflected in CDC's estimate of the proportion of diagnosed individuals not receiving care. Although CDC has published these estimates, the agency has noted limitations to the data used to calculate the number of diagnosed HIV-positive individuals not receiving care for HIV. First, not all states require laboratories to report all CD4 and viral load test results; without this information being reported, CDC's estimates may overstate the number of individuals who did not enter into care within 1 year of HIV diagnosis. Additionally, in the past, CDC only required jurisdictions to report an individual's first CD4 or viral load test, which did not allow CDC to estimate the number of all HIV-positive individuals who are not receiving care for HIV after the first year. CDC is currently disseminating updated data collection software, which will permit states to collect and report all CD4 and viral load test results. However, CDC officials told us that this software is still going through quality control checks. While CDC calculates national estimates of the number of undiagnosed HIV-positive individuals not receiving care for HIV and the number of diagnosed HIV-positive individuals who did not receive care within a year of diagnosis, the agency does not calculate these estimates at the state level. CDC officials said that these estimates are not available at the state level because not all states have mature name-based HIV reporting systems. CDC officials said that the agency is determining what it will need to estimate the number of undiagnosed individuals at the state level once all states have mature HIV reporting systems. CDC officials also said that once the new data collection software to collect CD4 and viral load test results from states is ready, data on all diagnosed HIV-positive individuals not receiving care may be available at the state level for those states with mature name-based HIV reporting systems and laboratory reporting requirements.
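Because CDC's measure is a proxy (a reported CD4 or viral load test within 1 year of diagnosis), individuals who enter care late or drop out early are classified in ways the headline figure does not reveal. The sketch below illustrates the measurement rule as described above; the function name and the example dates are our own, invented for illustration.

```python
# Hypothetical illustration of CDC's proxy measure for entry into care:
# a person counts as "in care within a year" only if a CD4 or viral load
# test result is reported within 365 days of HIV diagnosis.
from datetime import date
from typing import Optional

def entered_care_within_year(diagnosis: date,
                             first_lab_report: Optional[date]) -> bool:
    """True if a CD4/viral load result was reported within 1 year of diagnosis."""
    if first_lab_report is None:      # no lab result ever reported
        return False
    return (first_lab_report - diagnosis).days <= 365

# A person whose first test comes 2 years after diagnosis counts as not in care,
print(entered_care_within_year(date(2003, 3, 1), date(2005, 3, 1)))  # False
# while a person tested at 6 months who later drops out still counts as in care.
print(entered_care_within_year(date(2003, 3, 1), date(2003, 9, 1)))  # True

# CDC's published proportion for individuals diagnosed in 2003:
print(f"{12_285 / 37_880:.1%}")  # 32.4%
```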
HRSA also collects states’ estimates of the number of diagnosed HIV- positive individuals not receiving care for HIV, but data are not consistently collected or reported by states, and therefore estimates are not available for comparison across all states. States report their estimates of the number of diagnosed HIV-positive individuals who are not receiving care as unmet need estimates to HRSA as a part of the states’ CARE Act Part B grant applications. However, these estimates have limitations and are not comparable across states. One limitation is that not all states require laboratory reporting of CD4 and viral load results for all individuals who receive the tests. States use reported CD4 and viral load test results to calculate their unmet need, and, according to HRSA, without data for all individuals who receive CD4 or viral load tests, a state may overestimate its unmet need. Another limitation is that the estimates submitted in the states’ fiscal year 2009 grant applications were calculated using differing time periods. For example, New Hampshire calculated its unmet need estimate using HIV cases collected as of December 31, 2004, while Colorado calculated its estimate using data collected as of June 30, 2008. Additionally, not all states have access to information on the number of individuals receiving care through private insurance; therefore, these individuals are counted as part of the state’s unmet need. According to officials we interviewed, several barriers exist that could prevent HIV-positive individuals from receiving care. HRSA officials told us that structural barriers within the health care system, such as no or limited availability of services, inconvenient service locations and clinic hours, and long wait times for appointments can influence whether an individual is receiving care for HIV. Other barriers identified by HRSA officials are the quality of communication between the patient and provider, lack of or inadequate insurance, financial barriers, mental illness, and substance abuse. HRSA officials also noted that personal beliefs, attitudes, and cultural barriers such as racism, sexism, homophobia, and stigma can also have an impact on an individual’s decision to seek care. Officials from two states and one local health department we spoke with stated that transportation was a barrier, while officials from two state health departments stated that lack of housing was a barrier for access to care. Unstable housing can prevent individuals with HIV from accessing health care and adhering to complex HIV treatments because they must attend to the more immediate need of obtaining shelter. Agencies have implemented initiatives to connect diagnosed individuals to care for HIV. For example, part of CDC’s Expanded HIV Testing initiative focused on connecting individuals diagnosed with HIV to care. In the first year of the initiative, 84 percent of newly diagnosed patients received their HIV test results and 80 percent of those newly diagnosed were connected to care. CDC has also funded two studies that evaluated a case management intervention to connect HIV-positive individuals to care for HIV. In these studies, case management was conducted in state and local health departments and community-based organizations and included up to five visits with a case manager over a 3-month period. In one of these studies, 78 percent of individuals who participated in case management were still in care 6 months later. 
HRSA has developed two initiatives as Special Projects of National Significance. The first initiative, "Enhancing Access to and Retention in Quality HIV Care for Women of Color," was developed to implement and evaluate the effectiveness of focused interventions designed to improve timely entry and retention into quality HIV care for women of color. The second initiative, the "Targeted HIV Outreach and Intervention Model Development" initiative, was a 5-year, 10-site project implemented to bring underserved HIV-positive individuals into care for HIV. According to HRSA, results of the initiative indicated that individuals are less likely to have a gap in care of 4 months or more when they have had nine or more contacts with an outreach program within their first 3 months in these programs. In collaboration with AIDS Action, an advocacy organization formed to develop policies for individuals with HIV, HRSA has also funded the "Connecting to Care" initiative. AIDS Action and HRSA developed the initiative to highlight successful methodologies to help connect or reconnect individuals living with HIV to appropriate and ongoing medical care. The methodologies were identified from cities across the country and are being used in different settings. The initiative includes two publications with 42 interventions that have been reported to be successful in connecting HIV-positive individuals to care. The publications provide a description, logistics, strengths and difficulties, and outcomes of each intervention and focus specifically on homeless individuals, Native Americans, immigrant women, low-income individuals in urban and rural areas, and currently or formerly incarcerated individuals. AIDS Action has held training workshops that provided technical assistance to explain the interventions, including how to apply the best practices from successful programs. HRSA provides grants under Part C of the CARE Act to public and private nonprofit entities to provide outpatient early intervention services that can help connect HIV-positive individuals to care. Part C grantees are required to provide HIV medical care services that can include outpatient care; HIV counseling, testing, and referral; medical evaluation and clinical care; and referrals to other health services. These programs also provide services to improve the likelihood that undiagnosed individuals will be identified and connected to care, such as outreach services to individuals who are at risk of contracting HIV, patient education materials, translation services, patient transportation to medical services, and outreach to educate individuals on the benefits of early intervention. HRSA and CDC are currently collaborating on a clinic-based research study, "Increasing Retention in Care among Patients Being Treated for HIV Infection." The study is designed to develop, implement, and test the efficacy of an intervention intended to increase appointment attendance among individuals at risk of missing scheduled appointments in HIV clinics. In addition to CDC and HRSA initiatives, officials we interviewed told us that state and local health departments have implemented their own initiatives to connect HIV-positive individuals to care. Officials from six states and five local health departments we spoke with stated that their departments use case management to assist HIV-positive individuals through the process of making appointments and to help address other needs of the individuals.
For example, officials from one of these health departments explained that some case managers sign up qualified individuals for an AIDS Drug Assistance Program and others assist with locating housing or with substance abuse issues, which can also be barriers to staying in care. Case managers make sure individuals are staying in care by finding patients who have missed appointments or whom providers have been unable to contact. In addition, officials from one state and four local health departments we spoke with told us that their departments use mental health professionals, and officials from one state and three local health departments told us that their departments use substance abuse professionals, to connect individuals to care, since individuals who need these services are at high risk of dropping out of care. Officials from two health departments said that their departments use counseling, and officials from one health department said that partner counseling is conducted when an individual is diagnosed with HIV. HHS provided technical comments on a draft of the report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. The report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff who made major contributions to this report are listed in appendix IV. U.S. federal prisons have become a principal screening and treatment venue for thousands of individuals who are at high risk for human immunodeficiency virus (HIV) or who have HIV. According to a 2008 report by the Bureau of Justice Statistics, the overall rate of estimated confirmed acquired immune deficiency syndrome (AIDS) cases among the prison population (0.46 percent) was more than 2.5 times the rate of the general U.S. population at the end of calendar year 2006. The Bureau of Justice Statistics also reported that 1.6 percent of male inmates and 2.4 percent of female inmates in state and federal prisons were known to be HIV positive. To ensure that infected individuals are aware of their HIV-positive status and receive care while in prison, 21 states tested all inmates for HIV at admission or at some point during their incarceration. Forty-seven states and all federal prisons tested inmates if they had HIV-related symptoms or if they requested an HIV test. The Ryan White Comprehensive AIDS Resources Emergency Act of 1990 (CARE Act) was enacted to address the needs of jurisdictions, health care providers, and people with HIV and their family members. CARE Act programs have been reauthorized three times (1996, 2000, and 2006) and are scheduled to be reauthorized again in 2009. The CARE Act Amendments of 2000 required the Health Resources and Services Administration (HRSA) to consult with the Department of Justice and others to develop a plan for the medical case management and provision of support services to individuals with HIV when they are released from the custody of federal and state prisons. The plan was to be submitted to Congress no later than 2 years after the date of enactment of the CARE Act Amendments of 2000.
You asked us to review the implementation status of the plan and to determine the extent of any continued coordination between HRSA and the Department of Justice to transition prisoners with HIV to CARE Act programs. However, HRSA officials told us that they did not create this plan or coordinate with the Department of Justice to do so. Additionally, the requirement for this plan was eliminated by the 2006 Ryan White Treatment Modernization Act. We are therefore providing information related to other steps that HRSA has taken to address the provision of HIV prevention and care for incarcerated persons with HIV transitioning back to the community and into CARE Act-funded programs. Additionally, we provide information on steps taken by the Centers for Disease Control and Prevention (CDC) and states to address this issue. To provide information related to the steps that CDC and HRSA have taken to address the provision of HIV prevention and care for incarcerated persons, we interviewed CDC and HRSA officials. We also interviewed officials from nine state health departments about their programs for incarcerated persons with HIV transitioning back to the community and into CARE Act-funded programs, and the limitations of these programs. Of these nine state health departments, officials from eight provided responses about their programs; the remaining state did not have a transition program in place. Our sample is not generalizable to all state and local health departments. The U.S. prison system has been the focus of many studies on HIV testing for prisoners and care for those with HIV while in prison and upon their release. Studies have been conducted to determine the number of individuals who are accessing HIV testing and treatment for the first time upon their incarceration. Studies have also been conducted to evaluate how infected prisoners fare in their HIV treatment upon release from prison, as inmates often encounter social and economic changes, including the need to secure employment and housing, establish connections with family, and manage mental health and substance abuse disorders. For example, one recent study of the Texas state prison system published in the Journal of the American Medical Association evaluated the proportion of infected individuals who filled a highly active antiretroviral therapy (HAART) prescription within 10, 30, and 60 days after their release from prison. The study found that 90 percent of recently released inmates did not fill a HAART prescription soon enough to avoid a treatment interruption (10 days), and more than 80 percent did not fill a prescription within 30 days of release. Only 30 percent of those released filled a prescription within 60 days. Individuals on parole and those who received assistance in completing a Texas AIDS Drug Assistance Program application were more likely to fill a prescription within 30 and 60 days. Because those who discontinue HAART are at increased risk of developing a higher viral burden (resulting in greater infectiousness and higher levels of drug resistance), it is important for public health that HIV-positive prisoners continue their HAART treatment upon release from prison. CDC, HRSA, and several states we interviewed have implemented programs to aid in the transition of HIV-positive persons from prison to the community, with emphasis on their continued care and treatment.
CDC and HRSA have funded demonstration projects to address HIV prevention and care for prisoners with HIV upon their release from incarceration. Selected state health departments and their respective state departments of corrections have coordinated to help HIV-positive prisoners in their transition back to the community. CDC and HRSA have funded various projects to address the provision of HIV prevention and care for prisoners with HIV upon their release from incarceration. CDC and HRSA have also provided guidance to states regarding HIV-related programs. The list below describes the projects and guidance. CDC and HRSA jointly funded a national corrections demonstration project in seven states (California, Florida, Georgia, Illinois, Massachusetts, New Jersey, and New York). This demonstration project was funded from 1999 to 2004. The goal of the demonstration project was to increase access to health care and improve the health status of incarcerated and at-risk populations disproportionately affected by the HIV epidemic. The "HIV/AIDS Intervention, Prevention, and Community of Care Demonstration Project for Incarcerated Individuals within Correctional Settings and the Community" involved jail, prison, and juvenile detention settings. The project targeted inmates with HIV, but also those with hepatitis B and hepatitis C, tuberculosis, substance abuse, and sexually transmitted diseases (STD). According to an HRSA report, the project was able to enhance existing programs in facilities and develop new programs both within facilities and outside of them. Many states integrated lessons learned through the project at varying levels throughout their state. In 2001, CDC funded Project START to develop an HIV, STD, and hepatitis prevention program for young men aged 18-29 who were leaving prison. The goal of this project was to test the effectiveness of the Project START interventions in reducing sexually risky behaviors for prisoners transitioning back to the community. State prisons in California, Mississippi, Rhode Island, and Wisconsin were selected. A study describing the Project START interventions indicated that a multisession community reentry intervention can lead to a reduction in sexually risky behavior in recently released prisoners. CDC funded a demonstration project at multiple sites in four states (Florida, Louisiana, New York, and Wisconsin) where prisoners in short-term jail facilities were offered routine rapid initial testing and appropriate referral to care, treatment, and prevention services within the facility or outside of it. From December 2003 through June 2004, more than 5,000 persons were tested for HIV, and according to a CDC report, 108 (2.1 percent) received confirmed positive results. CDC officials told us that CDC is currently completing three pilot studies, which began in September 2006. These studies were conducted to develop interventions for HIV-positive persons being released from several prisons or halfway houses in three states: California (prisons), Connecticut (prisons), and Pennsylvania (halfway houses). CDC officials explained that CDC has established a Corrections Workgroup within the National Center for HIV/AIDS, Viral Hepatitis, STD, and Tuberculosis Prevention.
In March 2009, the workgroup hosted a Corrections and Public Health Consultation, "Expanding the Reach of Prevention." This forum provided an opportunity for subject matter experts in the fields of corrections and academia, as well as representatives from health departments and community-based organizations, to develop effective prevention strategies for their correctional systems. According to a Special Projects of National Significance program update, HRSA's "Enhancing Linkages to HIV Primary Care and Services in Jail Settings" initiative seeks to develop innovative methods for providing care and treatment to HIV-positive inmates who are reentering the community. This 4-year project, which began in September 2007, is different from the "HIV/AIDS Intervention, Prevention, and Community of Care Demonstration Project for Incarcerated Individuals within Correctional Settings and in the Community" in that it focuses entirely on jails. HRSA defines jails as locally operated facilities whose inmates are typically sentenced for 1 year or less or are awaiting trial or sentencing following trial. Under the initiative, HRSA has awarded grants to 10 demonstration projects in the following areas: Atlanta, Georgia; Chester, Pennsylvania; Chicago, Illinois; Cleveland, Ohio; Columbia, South Carolina; New Haven, Connecticut; New York, New York; Philadelphia, Pennsylvania; Providence, Rhode Island; and Springfield, Massachusetts. Besides funding demonstration projects and creating workgroups, HRSA and CDC have issued guidance to states. HRSA issued guidance in September 2007 explaining allowable expenditures under CARE Act programs for incarcerated persons. The guidance states that expenditures under the CARE Act are allowable only to help prisoners achieve immediate connections to community-based care and treatment services upon release from custody, where no other services exist for these prisoners, or where these services are not the responsibility of the correctional system. The guidance provides for the use of funds for transitional social services, including medical case management and social support services. CARE Act grantees can provide these transitional services by delivering them directly or through contracts. Grantees must also develop a mechanism to report to HRSA on the use of funds to provide transitional social services in correctional settings. In 2009, CDC issued HIV Testing Implementation Guidance for Correctional Settings. This guidance recommended routine opt-out HIV testing for correctional settings and made suggestions for how HIV services should be provided and how prisoners should be linked to services. The guidance also addressed challenges that may arise for prison administrators and health care providers who wish to implement the guidelines in their correctional facilities. Of the eight state health departments in our review that had HIV transition programs in place, several have implemented programs that coordinate with the state's department of corrections to provide prisoners with support services to help them in their transition back to the community. We provide examples of three of these programs below. Officials from one state health department said that their department uses CARE Act and state funding to provide a prerelease program that uses the state department of corrections' prerelease planners to make sure that prisoners with HIV are linked to care.
Prisoners meet with their prerelease planner 60-90 days prior to release, and the planner links them to care services, has them sign up for the AIDS Drug Assistance Program and Medicaid, and follows up with them after their release to ensure that they remain in care. Additionally, the department of corrections provides 30 days of medications to prisoners upon release. The state department of health has worked with the department of corrections for the past 10 years to help it transition HIV-positive prisoners. According to officials from another state health department, their department uses state funds to provide transitional case management for prisoners with HIV who are transitioning back into the community. Specialized medical case managers meet and counsel prisoners with HIV who are within 6 months of being released. Within 90 days of release, the prisoner and the medical case manager may meet several times to arrange housing, complete a Medicaid application, obtain referrals to HIV specialists and to the AIDS Drug Assistance Program, and arrange assistance for the prisoner in obtaining a state identification card. Case managers will also work with the prisoner for 3 months after release so that the prisoner is stable in the community. After 90 days, the person can be transferred into another case management program or can drop out; clients are kept on the AIDS Drug Assistance Program if they are not disabled. According to officials from a third state health department, their department uses "Project Bridge," a nationally recognized program that provides transition services, to transition prisoners back into the community and into CARE Act programs. Ninety-seven percent of Project Bridge participants receive medical care within the first month after their release from prison. The state attributes the success of this program to the productive relationship between the state health department and the department of corrections. Project Bridge participants are involved in discharge planning with case managers starting 6 months before their discharge. Participants then receive intensive case management for approximately 18-24 months after their release. During this period they are connected with medical and social services. According to state officials, the program has also been effective in decreasing recidivism rates. Officials we interviewed from state health departments described several limitations to their departments' programs. One state health department official explained that their department does not have the staff to coordinate services for all of the state's 110 jails. Officials from two other state health departments explained that state budget cuts are threatening the continuation of their departments' prisoner transition programs. One state health department official explained that finding housing in the community for transitioning HIV-positive prisoners is often very difficult. According to this official, the lack of available housing has affected these individuals' HIV care: they are so focused on finding housing that they are unable to focus on taking their medications or going to medical appointments. One state health department official explained that their department's prisoners with HIV are sometimes not interested in being connected to care in the community, and another state health department official explained that the lack of funding for prisoner transition programs is a limitation of their program.
Appendix II: Part A Grantees’ Living HIV/AIDS Cases Used by HRSA to Determine Fiscal Year 2009 CARE Act Base Grants Atlanta, Ga. Austin, Tex. Baltimore, Md. Baton Rouge, La. Bergen-Passaic, N.J. Boston, Mass. Caguas, P.R. Charlotte-Gastonia, N.C.-S.C. Chicago, Ill. Dallas, Tex. Denver, Colo. Detroit, Mich. Dutchess County, N.Y. Fort Lauderdale, Fla. Fort Worth, Tex. Hartford, Conn. Houston, Tex. Indianapolis, Ind. Jacksonville, Fla. Jersey City, N.J. Kansas City, Mo. Las Vegas, Nev. Los Angeles, Calif. Memphis, Tenn. Miami, Fla. Middlesex-Somerset-Hunterdon, N.J. Minneapolis-St. Paul, Minn. Nashville, Tenn. Nassau-Suffolk, N.Y. New Haven, Conn. New Orleans, La. New York, N.Y. Newark, N.J. Norfolk, Va. Oakland, Calif. Orange County, Calif. Orlando, Fla. Philadelphia, Pa. Phoenix, Ariz. Ponce, P.R. Portland, Ore. Riverside-San Bernardino, Calif. Sacramento, Calif. San Antonio, Tex. San Diego, Calif. San Francisco, Calif. San Jose, Calif. San Juan, P.R. Santa Rosa, Calif. Seattle, Wash. St. Louis, Mo. Tampa-St. Petersburg, Fla. Vineland-Millville-Bridgeton, N.J. Washington, D.C. West Palm Beach, Fla. In addition to the contact above, Thomas Conahan, Assistant Director; Robert Copeland, Assistant Director; Leonard Brown; Romonda McKinney Bumpus; Cathleen Hamann; Sarah Resavy; Rachel Svoboda; and Jennifer Whitworth made key contributions to this report.
Of the estimated 1.1 million Americans living with HIV, not all are aware of their HIV-positive status. Timely testing of HIV-positive individuals is important to improve health outcomes and to slow the disease's transmission. It is also important that individuals have access to HIV care after being diagnosed, but not all diagnosed individuals are receiving such care. The Centers for Disease Control and Prevention (CDC) provides grants to state and local health departments for HIV prevention and collects data on HIV. In 2006, CDC recommended routine HIV testing for all individuals ages 13-64. The Health Resources and Services Administration (HRSA) provides grants to states and localities for HIV care and services. GAO was asked to examine issues related to identifying individuals with HIV and connecting them to care. This report examines: (1) CDC and HRSA's coordination on HIV activities and steps they have taken to encourage routine HIV testing; (2) implementation of routine HIV testing by select state and local health departments; (3) available information on CDC funding for HIV testing; and (4) available data on the number of HIV-positive individuals not receiving care for HIV. GAO reviewed reports and agency documents and analyzed CDC, HRSA, and national survey data. GAO interviewed federal officials, officials from nine state and five local health departments chosen by geographic location and number of HIV cases, and others knowledgeable about HIV. The Secretary of Health and Human Services (HHS) is required to ensure that HHS agencies, including CDC and HRSA, coordinate HIV programs to enhance the continuity of prevention and care services. CDC and HRSA have coordinated to assist health care professionals who provide HIV-related services. For example, in 2007 and 2008, CDC provided funding to HRSA to expand consultation services at the National HIV/AIDS Clinicians' Consultation Center. Both CDC and HRSA have taken steps to encourage routine HIV testing, that is, testing all individuals in a health care setting without regard to risk. For example, CDC has funded initiatives on routine HIV testing, and HRSA has provided for training as part of these initiatives. Officials from over half of the 14 selected state and local health departments in GAO's review reported implementing routine HIV testing in their jurisdictions. However, according to these officials, those that implemented it generally did so at a limited number of sites. Officials from most of the selected health departments and other sources knowledgeable about HIV have identified barriers that exist to implementing routine HIV testing, including lack of funding and legal barriers. CDC officials estimated that approximately 30 percent of the agency's annual HIV prevention funding is spent on HIV testing. For example, according to CDC officials, in fiscal year 2008, this would make the total amount spent on HIV testing about $200 million out of the $652.8 million CDC allocated for domestic HIV prevention to its Division of HIV/AIDS Prevention. However, CDC officials said that they could not provide the exact amount the Division spends on HIV testing, because they do not routinely aggregate how much all grantees spend on a given activity, including HIV testing. CDC estimated that 232,700 individuals with HIV were undiagnosed, that is, unaware that they were HIV positive, in 2006 and were therefore not receiving care for HIV.
CDC has not estimated the total number of diagnosed HIV-positive individuals not receiving care, but has estimated that 32.4 percent, or approximately 12,000, of HIV-positive individuals diagnosed in 2003 did not receive care for HIV within a year of diagnosis. State-level estimates of the number of undiagnosed and diagnosed HIV-positive individuals not receiving care for HIV are not available from CDC. HRSA collects states' estimates of the number of diagnosed individuals not receiving care, but data are not consistently collected or reported by states, and therefore estimates are not available for comparison across all states. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
The KMCC currently faces significant cost, schedule, and performance problems, and it is unclear when the project will be completed and at what cost. Despite being originally scheduled to open in early 2006, neither LBB-Kaiserslautern nor the Air Force can estimate a completion date for the project because of the widespread construction management problems. In addition, estimated costs associated with the KMCC have already exceeded original estimates and will continue to grow. LBB-Kaiserslautern mismanagement has caused numerous problems with the KMCC, including poor designs, substandard workmanship on key building components, and a significant reduction in the number of workers on-site. Furthermore, ongoing criminal and civil investigations by AFOSI and German police indicate possible fraud within the project. The latest official design schedule completed by LBB-Kaiserslautern and provided to the Air Force in September 2006 indicated that the KMCC would be completed by April 2007. However, during our visit to the KMCC in May 2007, LBB-Kaiserslautern and Air Force officials stated that key milestone dates from the most recent design schedule had clearly slipped. In fact, neither LBB-Kaiserslautern nor Air Force officials could provide a new estimated project completion date during our audit of the project. Also, both LBB-Kaiserslautern and the Air Force provided us current cost estimates of about $200 million, which already exceed the original estimate of about $150 million. We found that these cost estimates did not include substantial costs related to the expected roof repair and replacement discussed later, or to hindrance claims associated with the project. Furthermore, the Air Force contract with LBB-Kaiserslautern is denominated in euros, and the U.S. cost equivalent therefore varies with the exchange rate. For example, the original cost estimate of about $150 million was developed in 2003, when a dollar purchased significantly more in euros than it does currently. Figure 1 below shows the trend in the strengthening of the euro against the U.S. dollar over the past several years. The schedule delays associated with the KMCC have compounded cost problems because of the appreciation of the euro versus the U.S. dollar. Given the substantial costs associated with roof repairs, schedule delays, and potential hindrance claims by contractors, and assuming the euro remains stronger than it was when the original project budget was developed, the euro's appreciation against the U.S. dollar will compound the effect of cost overruns on this project. Since the start of construction in 2003, the KMCC has experienced numerous problems, including poor design, substandard workmanship, poor coordination of the different contractors, and a reduction of workers on the site. Some of the more notable problems associated with this project include the following: Roof: The roof is experiencing water leaks causing considerable damage to the walls and the floors of the complex. According to Air Force officials, since the contractor responsible for roof construction went bankrupt, KMCC funding sources from the United States (AAFES, Air Force Services Agency, and Military Construction funds) will likely be used to pay the estimated millions of dollars in costs required to repair or replace the entire roof along with any internal damage. Figure 2 shows some damage in the KMCC resulting from the leak in its roof.
Exhaust ducts: The kitchen exhaust ducts installed in the KMCC do not comply with fire code standards established by the National Fire Protection Association. According to Air Force officials, it will take several months to make the exhaust ducts compliant with the fire codes, at a cost of hundreds of thousands of dollars. Bathroom faucets: Design plans called for some of the bathroom faucets in the KMCC to be automatic, with water turning on when a motion sensor detected the presence of a person. However, the faucets and walls were installed before the electrical contractor installed the wires needed to power the automated faucets. Vandalism: In April 2006, vandalism occurred in over 200 rooms inside the KMCC. The cost to repair damage caused by the vandalism is estimated to be over $1 million. To make matters worse, as shown in figure 3, due to poor project coordination, a German contractor installed light fixtures on top of the vandalized walls. These lights will need to be removed to enable wall repairs to be made and then reinstalled. Reduction of construction workers: In the past several months, the KMCC has faced a drastic reduction in the number of workers on-site. LBB-Kaiserslautern officials attributed this decrease to slow payment for services and reduced payment amounts resulting from the Air Force's increased scrutiny of invoices. The Air Force has delayed payments to certain contractors because the total amount of charges billed to the Air Force has already risen to the contract cost ceiling for the specific contractor, and the Air Force has been unable to pay those contractors for work performed without a contract change order to increase the contract ceiling. As a result, many of the contractors either reduced the number of workers or have quit working altogether on the project. Prior to September 2006, the number of workers on the site was normally several hundred; currently, it is routinely less than 50. In addition to the construction problems faced by the KMCC, a number of personnel have been removed from or have resigned from the project. In the past year, project management officials from LBB-Kaiserslautern have been replaced. Also, JSK, the firm hired by LBB-Kaiserslautern to manage the KMCC, was fired. Finally, a senior Air Force civilian in charge of the project resigned from the position and left the Air Force in 2006. On top of those personnel changes, both AFOSI and the German police have ongoing investigations into the project. The investigations span a variety of issues, both criminal and civil, including investigations of Air Force project management officials as well as German government officials. In the past year, both Air Force and LBB-Kaiserslautern offices have been searched and documentation seized by AFOSI and German police in relation to these investigations. Current problems facing the KMCC have been caused by the additional risks associated with overseas construction, project management deficiencies by LBB-Kaiserslautern, and the Air Force's lack of effective controls to mitigate project risks. Guidelines set forth in ABG-75 add risk to the contract management process for U.S. forces construction in Germany. In addition, during the design and construction of the KMCC, the German government construction agent, LBB-Kaiserslautern, did not effectively carry out its project design and construction management duties.
Finally, the Air Force failed to recognize risks associated with the KMCC and to develop control procedures to minimize them. Because the most significant control that the United States can exercise over construction projects in Germany is financial, the Air Force should have increased project oversight controls to identify invalid, unsupported, or inaccurate costs before money was spent. Instead, the Air Force lacked basic oversight and in some cases circumvented controls in order to expedite payments.

The KMCC presented increased risk from the beginning because U.S. forces are not in direct control of construction projects in Germany. Under the terms of ABG-75, most U.S. military construction projects are required to be executed by German government construction agencies, in this case LBB-Kaiserslautern, in accordance with German laws. This includes all contractual authority for design, bid tender and award, project execution, construction supervision, and inspection for all military projects within Germany. As such, the German government construction agency contracts directly with the design and construction companies responsible for a given project. As a result, the United States is required to work through this indirect contracting method and has no direct legal relationship with the contractors for construction projects built on its behalf. According to Air Force officials, because ABG-75 gives the German government such broad powers in the construction of military projects, the United States has limited influence on how construction projects are built. For example, Air Force officials stated that they were initially resistant to using a trade lots acquisition strategy for the construction of the KMCC because of the complexity involved in coordinating and managing the contractors under this strategy. Air Force officials stated that they relented to German government demands for trade lots after it was pointed out that this method of contracting was clearly within the German government's prerogative under ABG-75.

ABG-75 stipulates that the U.S. government pay German government construction agencies (e.g., LBB-Kaiserslautern) between 5 and 7 percent of the project cost for administering the contract, regardless of total project costs and with no incentives for early completion. Because this fee rises with project cost, the construction agency has no incentive to minimize costs or encourage early completion.

Despite the additional risks associated with ABG-75, U.S. forces do have some leverage in managing construction projects in Germany. Specifically, under ABG-75, the United States is granted the authority to approve designs and provide prior consent to any modifications to the construction contract (also known as "change orders") that affect the scope, quality, or cost of the project. Any excess costs must be approved in advance by U.S. forces, and the forces are not liable for costs proved to be the fault of German officials or contractors. Thus, U.S. forces do have the "power of the purse," which can be used to pay only for costs within the scope of the contract. According to Air Force officials, the Air Force has the ability to cut off funding for its projects. However, since the projects are needed for base operations, such a step would be used only as a last resort. Finally, general risks associated with overseas construction projects add to an already risky situation.
Increased complexities of overseas projects include differences in language, culture, construction laws, and safety regulations, as well as exposure to changes in currency exchange rates. Changes in exchange rates can pose a significant risk when project costs must be paid in the host country's currency, especially when projects take substantially longer to complete than originally planned. Despite the risks associated with overseas construction, the Air Force did not institute sufficient controls to manage the project.

During the design and construction of the KMCC, LBB-Kaiserslautern did not effectively carry out its project design and construction management duties. LBB-Kaiserslautern's deficiencies in these areas have contributed to additional costs, schedule delays, and increased financial risk to the U.S. government for the KMCC project.

The design of the KMCC was inadequate and resulted in numerous instances of rework costing millions of dollars to fix. LBB-Kaiserslautern hired an architect-engineer firm, JSK, to draft plans for the KMCC and subsequently contracted with JSK to be the construction manager. According to Air Force and AAFES officials, numerous design flaws were identified by the Air Force in the initial design review of the KMCC and were communicated to both LBB-Kaiserslautern and JSK. However, according to these U.S. officials, neither LBB-Kaiserslautern nor JSK incorporated many of these comments into the final design, which later resulted in additional work and costs. Air Force officials stated that, as of June 2007, they had identified millions of dollars of additional work required because of identifiable design flaws, which the Air Force plans to pay for in order to keep construction work moving forward. The following are some examples of design and construction flaws for the KMCC project:

Exhaust ducts: During review of the initial KMCC design, the Air Force identified, and communicated to LBB-Kaiserslautern and JSK, that the exhaust ducts used in the restaurant kitchens did not meet U.S. fire safety standards. However, LBB-Kaiserslautern and JSK failed to ensure that the change was addressed by the contractors responsible for duct construction. As a result, the exhaust ducts installed at the KMCC were not compliant with U.S. fire safety standards. In addition, when we toured the KMCC, an Air Force official showed us the material used to seal the exhaust ducts. According to the official, this material was flammable and, as such, posed a safety risk when hot gases are vented through the exhaust ducts. Because of the poor design of the exhaust ducts, the Air Force recently approved a change order for hundreds of thousands of dollars to fix the problem. Figure 4 below is a picture of the flammable sealant used in the kitchen exhaust ducts.

Retail space ceiling: The design of the ceiling in the AAFES retail area was not adequate to support light fixtures. The design detailed an open-grid suspended ceiling (not fitted with tiles) with light fixtures fitted into some of the openings. However, during installation, workers discovered that the ceiling grid was not strong enough to support the light fixtures. Ceiling tiles stabilize the grid to keep it from shifting, so omitting the tiles weakened the grid to the point where the light fixtures could not be supported. As a result of this design error, a contract change was necessary to provide additional steel supports for the ceiling grid.
Escalator/escalator pit: Poor design and construction coordination caused problems with installation of the building's escalator. The escalator pit was initially built as part of the contract to construct the building's concrete floor. A subsequent contract was issued for installation of the escalator itself. However, the contract specifications for the escalator installation did not sufficiently detail the size and location of the escalator pit, and the escalator provided by the contractor did not fit in the previously built pit. As a result, rework was necessary to build a new pit in the proper location.

LBB-Kaiserslautern did not effectively manage the KMCC project. Instead of using a general contractor who would be contractually responsible for building the project, LBB-Kaiserslautern attempted to execute the project by itself, managing more than 30 separate trade lot contracts. Each trade lot contractor was responsible only for its section of work, and no one party, other than LBB-Kaiserslautern, was responsible for the overall completion of the project. In addition, LBB-Kaiserslautern's decision to use trade lot contracts meant that it would be required to properly coordinate the efforts of all the contractors, adequately staff the project, and appropriately monitor construction schedules and costs so that work could progress. As described below, LBB-Kaiserslautern did not carry out its responsibilities in the following areas:

Poor project coordination: LBB-Kaiserslautern did not effectively coordinate the work of the more than 30 construction contractors on-site. This resulted in inefficiencies in construction as well as damage to finished work. For example, one contractor responsible for installing a tile floor was forced to delay work while the contractor responsible for installing the ceiling finished work over the area where the floor was to be installed. In another case, the contractor responsible for laying the paving stones outside the building was allowed to finish its work before major exterior construction was completed. This resulted in damage to the paving stones when heavy cranes were subsequently used on top of the stones to install exterior bracing on the building.

Inadequate staffing: In our interviews, LBB-Kaiserslautern officials told us that their office was understaffed. They stated that this lack of staffing hindered LBB-Kaiserslautern's ability to ensure that the project design was adequate and to improve the contractor coordination discussed previously. In part as a result of these design and coordination problems, numerous contract change orders were necessary, and the lack of staffing also hindered LBB-Kaiserslautern's ability to process those change orders as required by ABG-75. According to Air Force officials, there are hundreds of change orders that LBB-Kaiserslautern has approved yet has not submitted to the United States for approval. Many of these change orders also had corresponding invoices that were submitted and certified by LBB-Kaiserslautern and that the Air Force subsequently paid. LBB-Kaiserslautern was able to provide us only a listing of the change orders involved, far less than the detailed specifications the Air Force requires to review before approving a change and making payment.
Air Force officials also stated that this failure to process change orders was a major problem because the processing serves as the basis for increasing the obligation authority for the contract. In addition, LBB-Kaiserslautern officials stated that they had approved the work for most of these change orders; thus, the contractors performed the work and were expecting payment. According to Air Force officials, in some cases when the Air Force refused to make payment on the unapproved changes, contractors halted work and sent notices to LBB-Kaiserslautern stating that it would be liable for any costs associated with delays in payment. In many cases, the Air Force chose to reduce controls and make payments on these items despite not having appropriate change order documents, in an attempt to keep work on the project progressing.

The lack of staff also hindered LBB-Kaiserslautern's ability to sufficiently monitor the quality of the contractors' work. For example, as stated previously, the KMCC roof is leaking substantially because LBB-Kaiserslautern did not properly monitor the contractor's work. Because of this, the Air Force faces potentially millions of dollars in additional costs to replace the poorly built roof.

Unreliable construction schedule and cost estimates: LBB-Kaiserslautern is responsible for providing the Air Force with up-to-date, detailed construction schedules and cost estimates. According to Air Force officials, the latest official construction schedule provided by LBB-Kaiserslautern was issued in September 2006 and showed a completion date of March 2007 for the visiting quarters and April 2007 for the mall portion of the KMCC. During our visit in May 2007, LBB-Kaiserslautern officials stated that they did not have a current construction schedule or an established completion date for the project. Despite the lack of an estimated completion date, LBB-Kaiserslautern officials had developed an estimate of the total KMCC cost at completion, which projects that costs will be higher than the original estimate of approximately $150 million. According to LBB-Kaiserslautern officials, this cost estimate does not include certain expected costs, which we consider significant. For example, as stated earlier, the roof on the facility is continually leaking and will likely need to be replaced; Air Force and AAFES officials estimate that the cost to replace the roof will be in the millions of dollars. The estimate also excludes costs associated with hindrance claims. In fact, in May 2007, LBB-Kaiserslautern officials stated that they had received a single claim for several million dollars, which has not been substantiated, from just one of the more than 30 contractors. Finally, LBB-Kaiserslautern's cost estimates do not include adjustments for future cost increases on existing contracts. Although past experience on this project has shown that many of the contract amounts have increased because of change orders or quantity increases, LBB-Kaiserslautern did not include any estimates for these expected future increases.

The Air Force did not incorporate sufficient controls to minimize the significant project risks involved with the KMCC. Control deficiencies included inadequate staffing, poor policies, and a lack of effective control processes. By not utilizing the controls available to it through the ABG-75 agreement, the Air Force gave up the leverage it had to keep project costs within budget.
These control weaknesses contributed to schedule and performance problems without a sufficient reaction from the Air Force. In addition, after problems were identified, the Air Force did not take appropriate corrective actions.

Air Force officials did not have adequate staff with the expertise needed to oversee the KMCC. In 2002, the Air Force elected not to use the USACE as the servicing agent for the KMCC project. According to Air Force officials, they were not required to use the USACE on this project because only a small percentage of the KMCC funds came from appropriated military construction funding. However, in forgoing USACE oversight, the Air Force did not establish the staffing or the contracting and construction management expertise needed for a project as complex as the KMCC. According to Air Force officials, at the inception of the project, approximately eight full-time Air Force personnel were assigned to the KMCC. In addition, this limited staff did not have adequate expertise in contracting or construction management. As of May 2007, no contracting officers or certifying officials had been assigned to the KMCC. These experts are trained and certified to obligate and spend funds on behalf of the U.S. government and would typically be found on any military construction-funded project. As a result of the lack of staff with adequate contracting and construction management expertise, invoices came into the Air Force office faster than the staff could adequately review them prior to payment. According to Air Force officials, no invoices were disputed prior to September 2006. However, after September 2006, when significant problems with the KMCC were recognized, some staffing improvements were made. For example, the Air Force increased its staffing to approximately 17 full-time personnel currently on site, because it became apparent that it did not have sufficient personnel to conduct adequate reviews of invoices. Since the increase in staff, the Air Force has been able to review invoices more thoroughly, and according to Air Force officials, the percentage of recent invoices disputed has increased to 75 percent.

The Air Force did not have adequate policies and control procedures in place for the management of the KMCC. At the beginning of the project, project management officers lacked a standard operating procedure to follow. According to Air Force officials, the only written process in place was a simple one-page flow chart delineating how the entire process was supposed to work. Since recognizing the numerous problems associated with the KMCC, the Air Force has instituted additional control procedures, such as increased invoice reviews, but has not formalized those procedures into a written operating procedure. We were unable to determine whether any specific procedures were in place prior to September 2006. However, the project schedule slippage and the absence of disputed invoices indicate that the controls in place were not fully effective. When we asked Air Force officials about control procedures in place prior to September 2006, several officials who were working on the project during the time in question stated that they were unable to answer questions, based on advice from their legal counsel. The same officials who declined to answer questions stated that the project was under investigation by AFOSI.
In addition, a senior Air Force civilian who worked on the project prior to September 2006 had resigned and was therefore unavailable to answer questions. Without written procedures or explanations from Air Force staff, we could not determine what controls, if any, existed prior to September 2006.

In September 2006, Air Force officials recognized that the project faced significant problems. One problem specifically recognized was that numerous payments had been made on invoices for work billed against the 400 contract changes that lacked documentation and had not been approved by the Air Force. Upon this recognition, the Air Force attempted to institute controls going forward. For example, the Air Force instituted a closer review of invoices to identify items that were billed but had not been approved by the United States through change orders. However, under pressure to keep the project moving toward completion, the Air Force subsequently relinquished much of this control by expediting the payment of invoices upon receipt from LBB-Kaiserslautern, including charges for unapproved work. Examples of the relaxing of these controls include paying invoices submitted after September 2006 for work related to the 400 contract changes that LBB-Kaiserslautern had not submitted for approval, and approving invoices even though the line item quantities greatly exceeded contracted amounts. The Air Force stated that the decision to relax the controls was made so that construction would proceed as expeditiously as possible on the KMCC. Despite the removal of these controls, the number of workers on-site has still decreased significantly. In addition, Air Force officials stated that they viewed these payments for unapproved work as "partial payments" of expenses and that any disputed payments could be recouped upon project completion. However, we have reported in the past that such "pay and chase" strategies are not effective and substantially increase the risk that unapproved amounts will never be recovered. The Air Force was unable to provide any examples where the United States had successfully recouped overpayments in German courts.

The substantial schedule and cost overruns of the KMCC may affect military personnel and have major implications for future projects in Germany. The effects of these cost increases are likely to be shouldered by our men and women in the military. AAFES, the largest financial contributor to the KMCC, has stated that cost overruns have reduced the return on investment (i.e., the amount of profit it plans to receive from the project). As a result, AAFES and Air Force Services Agency funding of morale, welfare, and recreational activities for U.S. military members may be reduced. In addition, the escalation in costs may also affect the ability of AAFES and the Air Force Services Agency to finance future capital projects from nonappropriated funds. Further, because of the delay in completing the visiting quarters portion of the KMCC, service members on travel to other locations, including Iraq and Afghanistan, may have to stay off-base. In addition to the inconvenience that this places on service members, the Department of Defense, and thus taxpayers, must fund the additional cost of any required temporary lodging off-base, which the Air Force estimates to be approximately $10,000 per day, or $300,000 per month.
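A minimal sketch of how this lodging cost accumulates with continued delay, using the Air Force's cited daily estimate; the delay durations shown are illustrative assumptions, not projections from the audit:

```python
# Sketch of the cumulative cost of off-base lodging driven by schedule
# delay, using the Air Force's cited estimate of roughly $10,000 per day.
# The delay durations below are illustrative assumptions, not projections.

DAILY_LODGING_COST = 10_000  # dollars per day, per the Air Force estimate
DAYS_PER_MONTH = 30          # approximation behind the $300,000 monthly figure

for months in (1, 6, 12):
    cost = DAILY_LODGING_COST * DAYS_PER_MONTH * months
    print(f"{months:>2} month(s) of delay: ${cost:,}")

# Output:
#  1 month(s) of delay: $300,000   (the cited monthly figure)
#  6 month(s) of delay: $1,800,000
# 12 month(s) of delay: $3,600,000
```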
In addition to the effect on military members and their families, the current Air Force project management weaknesses may have implications for future Air Force construction in Germany. The Air Force's planned construction within the Federal Republic of Germany for the next 5 fiscal years totals more than $400 million. These construction projects include small operations and maintenance projects (such as school renovations and road repairs) and major military construction projects (such as a $50 million clinic and a $50 million base exchange and commissary). Absent better Air Force controls, these projects may experience the same types of heightened risks associated with the KMCC.

Although one of the major problems with the KMCC related to ineffective project management by LBB-Kaiserslautern, the Air Force did not effectively institute oversight to mitigate the high-risk nature of the project. By the time the Air Force started making an attempt at oversight, the project was already several months past the original construction deadline of early 2006. With mounting problems, including contractors walking off the job, the Air Force faces a dilemma: institute controls far too late in the process and further extend the completion of the project, or pay whatever it costs to get the job done as quickly as possible. The likely substantial cost overruns and potential years of schedule slippage will negatively affect morale, welfare, and recreation programs for DOD service members, civilians, and their families for years. The Air Force needs to seriously consider substantial changes in its oversight capabilities for the hundreds of millions of dollars of construction projects planned in Germany over the next several years.

Mr. Chairman and Members of the committee, this concludes our statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory Kutz at (202) 512-7455 or [email protected] or Terrell Dorn at (202) 512-6293 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony.

To assess the current problems facing the Kaiserslautern Military Community Center (KMCC), we interviewed agency officials from the Air Force at Ramstein Air Base in Germany. We physically inspected the KMCC facility with an Air Force project manager and documented construction problems. We also reviewed financial records and statements in the form of contracts, change orders, and invoices to the extent that they were available. To examine the effect the Auftragsbaugrundsaetze 1975 (ABG-75) had on the management of the KMCC project, we reviewed the ABG-75 agreement, which outlines construction requirements for U.S. forces stationed in Germany. In addition, we conducted interviews with officials from the Air Force; Landesbetrieb Liegenschafts- und Baubetreuung (LBB), the German government construction agency; and the U.S. Army Corps of Engineers. To determine the management weaknesses of LBB and the Air Force, we interviewed officials from both organizations and conducted interviews with other organizations affected by the KMCC project, including the Air Force Office of Special Investigations (AFOSI), the Air Force Audit Agency, the Air Force Services Agency, and the Army and Air Force Exchange Service.
We also reviewed applicable Department of Defense Financial Management Regulations as well as National Fire Protection Association standards. To assess the effect that control weaknesses found in the KMCC project could have on future Air Force projects in Germany, we obtained information from the Air Force on future construction plans in Germany. We also interviewed Air Force officials to determine what changes in processes had been made that would affect future construction projects. We performed our audit work from May 2007 through June 2007. Audit work was conducted in accordance with generally accepted government auditing standards.
According to the Air Force, the Kaiserslautern Military Community Center (KMCC), an over 800,000-square-foot facility, is currently the Department of Defense's largest single-facility project under construction. It is intended to provide lodging, dining, shopping, and entertainment for thousands of U.S. military and civilian personnel and their families in the Kaiserslautern, Germany, area. Initial costs for the KMCC were estimated at about $150 million, with funding coming from a variety of appropriated and nonappropriated fund sources. Construction on the project, which began in late 2003, was originally scheduled to be completed in early 2006. This testimony discusses GAO findings to date related to the KMCC. The testimony describes (1) current problems facing the KMCC, (2) causes for identified problems, and (3) the effect of the problems identified and their implications for future projects in Germany. To address our objectives, we interviewed officials from the U.S. Air Force, the Army and Air Force Exchange Service, the U.S. Army Corps of Engineers, and the German government. We also conducted a site visit and reviewed relevant KMCC documents. We plan to continue our work and make recommendations to the Air Force as appropriate. The KMCC project has encountered cost, schedule, and performance problems. Currently, neither Landesbetrieb Liegenschafts- und Baubetreuung's office in Kaiserslautern (LBB-Kaiserslautern), the German government construction agency in charge of the project, nor the Air Force has a reliable estimated completion date or final cost for the project. Problems facing the KMCC include construction flaws, vandalism of property, repeated work stoppages and slowdowns by contractors, and ongoing criminal investigations. Because of financial problems facing the project, the number of workers on-site has dwindled from several hundred to fewer than 50, which will likely further delay completion of the project. In addition, the KMCC's multimillion-dollar "green" roof is experiencing water leaks and will likely require the Air Force to spend millions of dollars for its replacement. The KMCC faced a high level of risk from its inception, which the Air Force did not effectively mitigate. Increased risks included an overseas project controlled by LBB-Kaiserslautern with financial risks borne by the Air Force and its funding partners. Unfortunately, LBB-Kaiserslautern did not effectively manage the design and construction of the project. Rather than increase controls to mitigate project risks, the Air Force provided minimal oversight and in some cases circumvented controls to expedite the invoice payment process in an attempt to complete the project. Because this project is funded primarily with nonappropriated funds, the likely substantial cost increases will be borne by military servicemembers, civilians, and their families. Further, absent better Air Force controls, future projects may experience the same types of heightened risks associated with the KMCC.
Under the Defense Environmental Restoration Program, DOD is authorized to identify, investigate, and clean up environmental contamination and other hazards at FUDS. The environmental restoration program was established by section 211 of the Superfund Amendments and Reauthorization Act of 1986 (SARA), which amended the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). Under the environmental restoration program, DOD's activities addressing hazardous substances, pollutants, or contaminants are required to be carried out consistent with section 120 of CERCLA. DOD delegated its authority for administering the FUDS program to the U.S. Army; in turn, the U.S. Army delegated execution of the program to the Corps. To be eligible for cleanup under the FUDS program, a property must have been owned by, leased to, possessed by, or otherwise controlled by DOD during the activities that caused the hazards. These hazards consist of the following:

unsafe buildings, structures, or debris, such as leaning or weakened load-bearing walls or supports; open-sided platforms or floors more than 6 feet above the next lower level; and any pit, depression, or tank that can collect or contain standing water, such as underground missile silos, septic tanks, and sewers;

hazardous, toxic, and radioactive waste, which includes contaminants such as arsenic, certain paints, some solvents, petroleum and some related products, and toxic pollutants from landfills;

containerized hazardous, toxic, and radioactive waste, such as transformers and underground and aboveground storage tanks that contain petroleum, solvents, or other chemicals; and

ordnance and explosive waste, such as military munitions and chemical warfare agents.

Figure 1 shows examples of the types of hazards that might be found at FUDS properties. According to DOD's fiscal year 2001 report on the status of its various environmental cleanup programs, 9,181 properties had been identified by the Corps, the states, or other parties as potentially eligible for cleanup under the FUDS program. To determine if an identified property is eligible for the FUDS program, the Corps conducts a preliminary assessment of eligibility to establish whether the property was ever owned or controlled by DOD and whether hazards from DOD's use are potentially present. Corps officials point out that the preliminary assessment of eligibility is not intended to be a comprehensive evaluation of the FUDS property; instead, it is a screening effort intended to determine if potential hazards caused by DOD exist and, if so, whether additional study or cleanup actions are required to address them. Corps guidance generally calls for staff to use the following procedures when conducting a preliminary assessment of eligibility:

obtain available information on the present and prior uses of the site from real estate and archival records; present and former owners; and other federal, state, and local agencies;

identify any relevant conditions in real estate deeds, such as a release of liability or a requirement to restore the property, that would affect the federal government's liability;

contact the current owner to obtain permission for an initial survey of the property to determine if DOD-caused hazards are potentially present; and

visit the property to examine it for obvious signs of hazards and identify any areas that may require further study or testing.
At the end of the preliminary assessment of eligibility, the Corps determines whether any further study or cleanup action is needed. If the Corps determines that no further action is needed, the property is designated as NDAI. According to Corps guidance, the districts must notify current owners of the result of the preliminary assessment of eligibility within 30 to 60 days after the final NDAI determination. Because FUDS properties may have changed significantly since DOD owned or controlled them, the facilities once present and any potential hazards that still exist may not be obvious. For example, former DOD facilities at a FUDS property may have been renovated, destroyed, or removed, and areas no longer used may be overgrown with vegetation, making potential hazards more difficult to detect. As a result, key components of the Corps' preliminary assessment of eligibility are (1) obtaining historical documents, such as maps and photos, that can aid Corps staff in identifying and locating the facilities at the property and that indicate how the property was used (prior uses and the activities that took place), and (2) conducting an inspection of the property (site visit) to check for existing hazards caused by DOD.

Although DOD guidance states that CERCLA is the statutory framework for the environmental restoration program, in recent years EPA has questioned whether the Corps' process is consistent with CERCLA, and both EPA and some state regulatory officials have questioned its adequacy. While the Corps is required to carry out the program in consultation with EPA, the Corps is not required to consult with state regulatory agencies until hazards are discovered. Corps guidance now instructs staff to keep EPA and state regulatory agencies informed of the status and disposition of each NDAI determination, but the Corps does not consult with EPA or the states when making its determination because it considers the preliminary assessment of eligibility an internal management process. Figure 2 shows the location of the 4,030 FUDS properties that the Corps has designated as NDAI.

Based on our review of NDAI files, we estimate that the Corps does not have a sound basis for about 38 percent, or 1,468, of the estimated 3,840 NDAI determinations in our study population because the property files did not contain evidence showing that the Corps consistently reviewed or obtained information that would have allowed it to identify all of the potential hazards at the properties or that it took sufficient steps to assess their presence. In many cases, when attempting to identify potential hazards resulting from DOD activities, the Corps apparently did not obtain relevant information about former DOD activities and facilities at the properties, such as buildings and underground storage tanks constructed and used by DOD. For example, based on our review of Corps files, we estimate that for about 74 percent, or 2,828, of all NDAI determinations, the Corps did not review or obtain site maps, aerial photos, or ground photos that could provide information about potential hazards (e.g., a site map showing an ammunition storage facility could suggest the presence of unexploded ordnance).
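These percentages and counts are statistical projections from a random sample of NDAI files to the full study population of roughly 3,840 determinations. The sketch below illustrates the basic arithmetic of such a projection; the sample counts used are hypothetical placeholders, and the calculation omits the confidence intervals that accompany any sample-based estimate.

```python
# Sketch of how a sample proportion is projected onto a study population.
# The population size (3,840) comes from the report; the sample counts
# below are hypothetical placeholders, and the sketch ignores the sampling
# error and confidence intervals that accompany any such estimate.

POPULATION = 3_840  # estimated NDAI determinations in the study population

def project(files_with_problem: int, files_reviewed: int) -> tuple:
    """Project a sample proportion onto the full study population."""
    proportion = files_with_problem / files_reviewed
    return proportion, round(proportion * POPULATION)

# Hypothetical sample: 96 of 250 reviewed files lacked a sound basis.
share, count = project(96, 250)
print(f"About {share:.0%}, or roughly {count:,}, of {POPULATION:,} determinations")
# -> About 38%, or roughly 1,475, of 3,840 determinations

# The report's counts (e.g., 1,468) reflect the unrounded sample proportion:
# 1,468 / 3,840 is about 38.2 percent, which is then rounded to the
# "about 38 percent" cited in the text.
```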
Furthermore, in a number of cases, it appeared that the Corps overlooked or dismissed information in its possession when it looked for evidence of potential hazards. In addition, we estimate that the Corps did not conduct a site visit at 686, or about 18 percent, of all NDAI properties, despite Corps guidance that requires site visits to determine if potential hazards are present. The problems we noted occurred, in part, because Corps guidance is not specific about what documents the Corps should obtain, the level of detail required when seeking information on the prior uses of FUDS properties and the facilities located at them, or how to assess the presence of potential hazards.

The files for the NDAI properties that we reviewed did not always indicate that the Corps reviewed or obtained information that would have aided in identifying potential hazards at the properties. Information on what DOD activities occurred, what DOD facilities existed, and where those activities and facilities were located at FUDS can provide leads about potential hazards and where they might be. Such information could be obtained from site maps showing buildings or facilities at the property; aerial and ground photos; current landowners; or federal, state, and local agencies. However, although Corps guidance instructs staff to obtain available information on the present and prior uses of the site, the FUDS manual offers no specific guidelines on what documents to obtain or at what level of detail. While our review indicates that at some sites Corps staff obtained site maps, aerial or ground photos, or information from owners or other agencies, the Corps did not appear to do so consistently; as a result, potential hazards may have been overlooked.

The random sample of NDAI files that we reviewed contained little evidence that Corps staff reviewed or obtained site maps to identify potential hazards. Maps can provide detailed information on the facilities that were present when DOD owned or used the site and could aid the Corps in identifying potential hazards resulting from military activities there. For example, although there were no DOD structures remaining at an anti-aircraft artillery site whose file we reviewed, a detailed map showed the exact location of a gun emplacement, an ammunition magazine, a motor pool area, a mess hall, seven barracks, two administrative buildings, a communications building, a drainage area, a septic tank, a grease rack, a 5,000-gallon storage tank, an oil storage facility, a pump house, a grease trap, two generators, a paint shed, a latrine, and a refueling area. Obtaining such maps could give Corps staff knowledge of site facilities and lead them to the potential locations of hazardous substances, ordnance, or unsafe buildings. Without the site map, many of these facilities might not have been identified as features likely to be located at an anti-aircraft artillery site. Further, without the map, it might have been difficult to establish the former locations of these facilities because of the size of the property and the length of time that has elapsed between DOD's use and the Corps' assessment. However, despite the usefulness of site maps, based on our review, we estimate that for about 77 percent, or 2,972, of all NDAI properties, the files do not contain site maps or references to a map review.

There was also little evidence that the Corps obtained aerial or ground photos of the FUDS to identify potential hazards. Photos, like maps, can provide information that may be useful in identifying potential hazards.
In addition to providing information on what facilities existed and where they were located when the military owned or used the site, photos can also provide information on the condition of the facilities when the military was present. This information is particularly important because if a facility was in good condition when the military disposed of the property but has since been allowed to deteriorate, the Corps is not responsible for cleanup. Photos can also help the Corps identify areas that were used as landfills or other disposal sites. However, based on the information contained in the Corps' files, we estimate that for about 92 percent, or 3,522, of all NDAI determinations, the files do not contain aerial or ground photos or indicate that photos were reviewed as part of the Corps' process.

In addition, there was little evidence that the Corps used the current owner (or owners) as a source of information for the majority of the sites that we reviewed. The current owner has the potential to provide information about a FUDS property. If the current owner is the person who first obtained the property from DOD, the owner might be able to describe the facilities that were present at acquisition and explain what has become of them. Even if the current owner is not familiar with the DOD activities conducted at the site, the owner might be able to describe the current condition of the property and note any hazards present. Based on our review of NDAI files, we estimate that the Corps did not contact all the current owners for about 60 percent, or 2,319, of all NDAI properties in our study population.

Information on FUDS may also be available from various local, state, and federal agencies. For example, during a preliminary assessment for one property in the New York district, the Corps obtained information on the potential presence of ammunition or explosive wastes and on the facilities that may have been at the site during military use from a city environmental office, the port authority transportation department, the city police department, and the national archives; site maps from the city library; and permits issued for underground storage tanks from the city building department. However, it appears that the Corps seldom asked these kinds of agencies for information. For example, a New Jersey state official told us that his department has 15,000 files on sites within the state, but the Corps has never gone through the department's files. We estimate that the Corps contacted a local, state, or federal agency to obtain information that could indicate potential hazards for only about 10 percent, or 375, of the 3,840 NDAI properties in our study population.

Camp O'Reilly, a FUDS in Puerto Rico, exemplifies how obtaining historical information on how a site was used, or current information on the condition of the property, could have helped the Corps identify potential hazards. Camp O'Reilly was a 907-acre military post that included 591 buildings and other facilities and housed about 8,000 troops from August 1942 to June 1945. In September 1992, the Corps determined that there were no hazards at Camp O'Reilly eligible for FUDS cleanup and designated the site as NDAI. Yet there is no evidence in the Corps' files that the Corps obtained or reviewed maps, archival photos, or studies, or that it contacted the current owners to identify potential hazards during its preliminary assessment of eligibility for Camp O'Reilly.
Had the Corps obtained and made use of historical information, it could have identified a number of potential hazards. In July 1997, the University of Puerto Rico, the current owner, contacted the Corps and indicated that several locations on the property contained hazards caused by DOD use. In a second assessment, aided by the owner's information, site maps, and records, the Corps identified three 15,000-gallon underground storage tanks; an area adjacent to a drinking water source that is "highly" contaminated with oil by-products; a 12,000-square-foot landfill; and a concrete structure (15 feet wide, 70 feet long, and 60 feet deep) filled with water that presents a drowning hazard. These hazards have all been determined to result from DOD use of the property and are eligible for cleanup under the FUDS program.

Information on potential hazards found at certain types of FUDS properties may also be useful in identifying potential hazards at other, similar properties. For example, in August 1994, the Corps issued "Procedures for Conducting Preliminary Assessments at Potential Ordnance and Explosive Waste Sites." This document notes that certain types of former sites are highly likely to contain unexploded ordnance and that such sites "must not be determined as [NDAI] unless strong evidence or extenuating circumstances can be presented as to why no contamination is expected." The sites specified in the document included Army airfields, auxiliary airfields, practice bombing ranges, rifle ranges, and prisoner of war camps. Although these procedures are not referenced in the FUDS manual, and we cannot show a cause-and-effect relationship between the issuance of the procedures and the more frequent identification of unexploded ordnance as a potential hazard, we found that Corps staff identified unexploded ordnance as a possible hazard at these types of sites more often after the procedures were issued. For example, in our sample, which included 48 auxiliary airfields, unexploded ordnance was identified as a possible hazard at only 8 of the 36 sites that the Corps reviewed before the procedures were issued. In contrast, for the 12 auxiliary airfields in our sample that the Corps reviewed after the procedures were issued, the Corps identified unexploded ordnance as a potential hazard at 10. Our sample also included 15 prisoner of war camps. Before the procedures were issued, unexploded ordnance was not identified as a possible hazard at any of the 7 camps the Corps reviewed; after the procedures were issued, 5 of the 8 camps in our sample were identified as having potential unexploded ordnance hazards.

The Corps also developed a formal guide for assessing Nike missile sites. In addition, we found that a FUDS project manager in the Corps' Fort Worth district developed an informal guide for assessing 14 different types of FUDS properties that lists the hazards most likely to be found at each. For example, if a property contained a laundry facility, the guide indicates that staff should look for dry cleaning solvents and tanks. Similarly, for a property containing an unmanned radar station, staff should look for underground storage tanks. The Fort Worth project manager told us that he developed the guide because he did not know what to look for when he began working in the FUDS program.
However, while the use of such procedures or guides appears to be useful in identifying potential hazards at certain types of sites, we were able to identify or obtain only the three guides discussed previously.

We found that at times Corps officials overlooked or dismissed information in their possession indicating that potential hazards might be present. Often, these problems appear to have involved a failure to act on information obtained during identification efforts or a failure to consider information from owners or from federal, state, or local environmental agencies. In other cases, the information in the file suggested potential hazards at the site and did not indicate the basis for the Corps' NDAI determination. We also found instances where it appears that the Corps' assessment focused on only one of the four potential hazards included in the Corps' program: unsafe buildings, structures, or debris; hazardous, toxic, and radioactive wastes; containerized hazardous wastes; and ordnance and explosive wastes. According to several headquarters and district officials, the FUDS program focused primarily on cleaning up unsafe buildings and debris in its early years. Of the NDAI determinations that we believe lack a sound basis, we estimate that the Corps either overlooked or failed to adequately assess the potential for hazardous wastes at about 88 percent of the properties, for containerized hazards at about 78 percent of the properties, and for ordnance and explosive wastes at about 40 percent of the properties. The following cases illustrate situations where the Corps overlooked or dismissed information in its possession suggesting that hazards might be present:

The Corps identified a variety of facilities at Fort Casey, an almost 1,050-acre FUDS property in the state of Washington, and developed information on prior uses; yet the Corps apparently failed to use this information in its assessment of the site. Facilities identified by the Corps included a coal shed, oil and pump houses, a paint shop, a gasoline station, a grease rack location, and a target shelter, indicating, among other things, possible containerized hazardous and ordnance-related wastes. Yet the file contained no evidence that these facilities and the related potential hazards were considered. The potential hazards stemming from the use of these facilities were not addressed in documents or site visit descriptions, and the site was designated as NDAI. Subsequent to our review, we learned that after the Corps completed its assessment, the state environmental agency performed independent reviews in 1999 and 2001, in part to document any threats or potential threats to human health or the environment posed by this site. The state reported finding hazardous wastes exceeding state cleanup levels that were believed to date from DOD ownership of the site. The state also found what appeared to be fill pipes normally associated with underground storage tanks, something the Corps overlooked during its site visit and overall assessment of the site.

Fort Pickens is an approximately 1,600-acre FUDS property on the Florida coast that was used to defend against invasion during World Wars I and II. The Corps identified numerous facilities, including a power plant building, oil houses, ordnance warehouses, an ordnance magazine, searchlight towers, transformers, electric poles, water and "miscellaneous" facilities, and underground storage tanks.
A site visit revealed open manholes; confirmed the presence of underground storage tanks, vent pipes, and old ammunition lifts with magazines; and identified a septic tank. The Corps also noted vegetation stress, which it attributed to the local drought. The file contained no evidence that the Corps assessed the property for possible chemical contamination. Despite noting the potential hazards associated with these types of facilities and uses during its assessment and site visit, the Corps designated the site as NDAI.

The former Othello Air Force Station (Z-40) was a 77-acre aircraft warning station in the state of Washington. At this site the Corps initially identified approximately 106 facilities that existed during DOD ownership, including a diesel plant; an auto maintenance shop; a possible 3,000-gallon underground storage tank and two 25,000-gallon underground storage tanks; a vehicle fueling station and many other oil, grease, ammunition, and paint storage sites; a transformer; and "other structures required for operation of a radar station." The presence of these facilities suggests the potential for both containerized and freestanding hazardous wastes and for ordnance hazards at the property. However, there was no evidence that the Corps considered the former facilities and their characteristics as potential hazards in reaching its NDAI determination, and, according to the file, the site visit was "only a cursory drive-thru inspection." An independent study of this site by the state found hazards (i.e., petroleum compounds such as gasoline, diesel, and lube or hydraulic oil; polychlorinated biphenyls; and pesticides), with some chemicals exceeding state cleanup levels at two locations, which are believed to be linked to military ownership.

At the Millrock Repair and Storage Depot in New York, the Corps identified potential aboveground storage tanks, gas pumps, a dynamite storage building, and a generator shed. The Corps' file contained conflicting trip reports, one indicating potential oil and gas spills and another indicating that no hazards were found. The initial Corps proposal for designating the site as NDAI was rejected by the appropriate Corps district office, and a cleanup project was proposed to sample for gasoline-related chemicals at the site of the former storage tanks and a gas pump. Subsequently, the proposed project was rejected on the grounds that there was no evidence that the hazards were related to DOD's use of the site. Despite the presence of potential hazards, the file contains no evidence that the Corps took additional steps to determine the source of the hazards or that it reported their presence to the appropriate regulatory agencies.

At the Mount Vernon Municipal Airport in the state of Washington, a nearly 1,900-acre site previously used by both the Army and the Navy, the Corps overlooked information in its possession indicating possible ordnance hazards. In the preliminary assessment of eligibility for this site, the Corps obtained a map showing conditions at the site on June 30, 1944, which indicated bomb and fuse storage units. Although the Corps assessed the site for unsafe buildings and debris and for containerized hazards, the file contained no evidence that the Corps searched for possible unexploded ordnance, despite guidance issued by the Corps in 1994 stating that Army airfields are likely to contain unexploded ordnance, and despite the presence of the bomb and fuse storage units, which would also indicate the potential presence of unexploded ordnance.
We also found that in some cases the files did not contain evidence that Corps staff conducted a site visit, as required by Corps guidance. A site visit is one of the primary methods the Corps uses to determine whether potential hazards are in fact present at a site. For example, if the Corps identifies underground storage tanks as potential hazards because a site was once used as a motor pool facility, a site visit can be used to determine if the tanks are still in place. A typical site visit would include at least a visual check for signs of filler or vent pipes, which normally protrude aboveground if tanks are still present. Without a site visit, the Corps cannot check for the continued presence of potential hazards. Based on our review of NDAI files, we estimate that about 18 percent, or 686, of the estimated 3,840 NDAIs in our study population did not receive site visits that met Corps requirements: about 428 properties received no site visits, and about 258 properties received site visits conducted from the air or from a vehicle, which are not appropriate, according to Corps program officials. The following case illustrates a situation where the Corps conducted the site visit from the air:

At the former Kasiana Island Base Station in Alaska, the site visit consisted of an overflight. Although a bunker was noted during the flyover, the contractor conducting the assessment for the Corps said in its report that the area was heavily overgrown. In addition, the file contained no evidence that the Corps tried to identify power sources (and any associated fuel storage tanks) that were likely present to operate the searchlight positions and seacoast radar stations located at the site. Although it was not possible to determine what, if any, hazards might still exist at the site without being on the ground, the file contained no evidence that the Corps took any further action before designating the site as NDAI. After designating the site as NDAI, the Corps revisited it and found two underground storage tanks, several 55-gallon drums, and a storage battery. Tests conducted in the area of the underground storage tanks showed that diesel products in the groundwater exceeded acceptable limits.

For some properties designated as NDAI, it appeared that Corps staff remained in their vehicles and took site visit photos from the site's periphery. Figures 3, 4, and 5 are examples of such photos from visits to a former Nike missile site and a former gap filler annex. According to a 1986 guide developed by the Corps for assessing Nike missile sites, hazards typically found at Nike sites include petroleum compounds, paints and solvents, leaking underground storage tanks, and lead from batteries. The guide also notes that dumping of wastes was common at Nike sites; on-site dumps were usually located in secluded areas that "would evade the attention of inspecting military officers," according to the guide. Gap filler annexes are typically unmanned radar sites in remote locations. According to the guide developed by Corps staff in Fort Worth, containerized hazards, such as underground or aboveground storage tanks containing petroleum, are usually found at such sites. Transformers containing toxic wastes (polychlorinated biphenyls) have also been found at similar sites.
Because 30 or more years may pass between the closure of a former defense site and a Corps site visit, potential hazards are likely to go unnoticed from a vehicle: the area may be too large to see in its entirety, or it may be overgrown with vegetation that hides any evidence of potential hazards. In fact, one of the many concerns expressed by state and EPA officials was that Corps "windshield" or "drive-by" site visits did not involve a thorough assessment of an entire site. While Corps guidance requires a site visit, it provides no specifics, only a general framework for assessing potential hazards. However, Corps officials told us that site visits conducted from the air or a vehicle are considered inadequate and would not fulfill the requirement to conduct a site visit.

A number of other factors contributed to inadequate preliminary assessments of eligibility. Corps officials explained that, during the early stages of the FUDS cleanup program, they were hampered by limited knowledge of the hazards that might be present. They also explained that the priorities of the program have changed over time. For example, several Corps officials told us that during its early stages, the program focused on identifying unsafe building hazards. Later, the focus changed to identifying and removing containerized hazards, primarily underground storage tanks. As a result of these changing priorities, not all of the potential hazards were identified and assessed. Moreover, several Corps officials told us that although hazardous, toxic, and radioactive wastes are one category of hazards covered by the program, they cannot propose a project to clean up such hazards without evidence of their existence. However, since 1990, sampling soil and water to determine the presence and type of any contamination that might have been caused by DOD activities has not been allowed during the preliminary assessment of eligibility. According to these officials, without sampling to indicate the presence of hazardous wastes, it is difficult to develop the evidence needed to justify a cleanup project. As a result, NDAI determinations have been made even when the presence of hazardous waste was suspected.

Some Corps officials agreed that some of the older NDAI determinations might not be justified and stated that those determinations may need to be reexamined. Several district officials indicated that although they would like to reexamine some of the NDAI determinations, the FUDS program is now focused on cleaning up hazards already identified, and limited funds are available for reviewing past determinations. Although Army guidance on the FUDS program issued in March 2001 authorized the districts to reexamine two to five NDAI determinations annually per state in each of the 22 relevant Corps districts, if regulatory agencies request the reexaminations and if funds are available, funding shortfalls already hamper the program, according to program officials. For example, the Corps estimates that at current funding levels (approximately $220 million in fiscal year 2002), cleaning up the hazards already identified will take more than 70 years. In its 2001 Funding Strategies report, the Corps proposed that the Army and DOD increase annual FUDS program funding by $155 million, to approximately $375 million per year. If the increased funding were approved and sustained, the Corps could complete cleanup by 2050.
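Taken together, these figures imply a total remaining cleanup workload that can be roughly cross-checked. A minimal sketch, assuming flat nominal funding (the report does not describe the Corps' actual cost model, so this is an illustration, not a reconstruction):

```python
# Rough cross-check of the Corps' cleanup timeline figures, assuming flat
# nominal funding. The Corps' actual cost model is not described in the
# report, so this is an illustration only.

CURRENT_FUNDING = 220_000_000   # approx. FUDS funding in fiscal year 2002
PROPOSED_FUNDING = 375_000_000  # current level plus the proposed $155 million

YEARS_AT_CURRENT = 70  # "more than 70 years" at the current funding level
implied_workload = CURRENT_FUNDING * YEARS_AT_CURRENT
print(f"Implied remaining workload: ${implied_workload / 1e9:.1f} billion or more")
# -> Implied remaining workload: $15.4 billion or more

years_at_proposed = implied_workload / PROPOSED_FUNDING
print(f"Years at proposed funding: about {years_at_proposed:.0f}")
# -> about 41 years, broadly consistent with the Corps' estimate that the
#    increased funding would allow cleanup to be completed by 2050
```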
In the files we reviewed, we found no evidence that the Corps consistently notified owners of its NDAI determinations, as required by Corps guidance. In some cases, the Corps did not notify the owners for several years after it made the NDAI determinations. In addition, while Corps policy calls for reconsidering an NDAI determination if evidence is later discovered, it appeared that the Corps rarely instructed the owners to contact the Corps with such evidence or told them of the Corps’ policy. Furthermore, the Corps did not notify federal and state regulatory agencies of its NDAI determinations because Corps guidance at that time did not require it to do so, even though these agencies might have regulatory responsibilities or could have information that might cause the Corps to reconsider its NDAI determination. The Corps also generally did not notify federal or state regulatory agencies of potential hazards that it identified but determined were not caused by DOD’s use. By not routinely notifying the regulatory agencies of hazards caused by non-DOD users, the Corps lost an opportunity to assist these agencies’ efforts to protect human health and the environment. According to Corps guidance, the districts must notify current owners of the result of the preliminary assessment of eligibility. However, based on our review of the NDAI files, we estimate that the Corps did not provide this information to all the current owners at about 72 percent, or 2,779, of the NDAI properties included in our study population. At one district, a Corps official stated that owners were sent notification of NDAI determinations only if they requested it. Further, in spite of the requirement that owners be notified within 30 to 60 days after a final NDAI determination, in some cases the Corps did not notify owners for several years. In one district, notification letters were not sent to owners until 1994, although NDAI determinations had been made as many as 8 years before. The late-arriving letters caused many owners to call the district office with questions about their NDAI determinations. As a result, the district decided to stop sending notification letters to owners. In addition, while Corps policy calls for reconsidering an NDAI determination if evidence of potential hazards is discovered later, we found that the Corps rarely instructed the owners to contact the Corps with such evidence or told them of the Corps’ policy. Based on our review of Corps files, we estimate that even when the Corps notified the owners of the NDAI determinations, at about 91 percent of these properties it did not instruct the owners to contact the Corps with evidence of potential hazards, if found later. However, because the preliminary assessment of eligibility is not a comprehensive evaluation of these properties, and the Corps does not routinely review its past NDAI determinations, owners are an essential outside source of new information about potential hazards at a given site. By not notifying owners of the NDAI determinations or advising them to contact the Corps if evidence of potential hazards is discovered, the Corps may be reducing its ability to gather new information about potential hazards and reconsider previous NDAI determinations. 
Even though EPA and state regulatory agencies might have relevant statutory responsibilities or could have information that might cause the Corps to reconsider its NDAI determination, based on our review of Corps files, we estimate that the Corps did not notify the regulatory agencies of the NDAI determinations at the time they were made for about 99 percent of the properties. Although Corps officials told us that they have now provided copies of all the NDAI determinations to the relevant federal and state agencies, some EPA and state officials indicated that they have not yet received copies of the NDAI determinations. Even when notification was provided, it was often done in a way that did not encourage agencies’ involvement. For example, one state regulatory agency received a bulk delivery of Corps FUDS summary documents for past NDAI determinations with no explanation. According to state officials, sending agencies NDAI determinations made several years earlier limits the agencies’ ability to provide timely input about potential hazards at a given site. Sending bulk deliveries of documents with no explanation does not encourage the involvement of state regulators who might be unfamiliar with Corps documentation or procedures. Notifying EPA and state regulatory agencies of NDAI determinations in a timely and appropriate manner could facilitate regulators’ involvement and address some of the concerns that these agencies have about the adequacy of the Corps’ preliminary assessment of eligibility. State regulators told us that their concurrence with an NDAI determination could increase the credibility of the Corps’ determination and improve its quality. State regulators indicated that, in some cases, they could provide the Corps with information about FUDS sites and properties adjacent to FUDS sites, including sampling data that could assist the Corps in determining if further study or cleanup actions by DOD were needed. State regulators also told us that they could provide the Corps with best practice guidance on conducting site visits and engaging the public in data-gathering. A state official also pointed out that his state could assist the Corps in gaining entry to a property if the owner refused to allow the Corps to conduct a site visit. Typically, if the owner refuses entry, the Corps designates the property as NDAI. EPA and some state regulatory agencies believe that the involvement of their agencies is crucial to the successful implementation and review of the Corps’ preliminary assessment of eligibility process. One example where state involvement has led to the reconsideration of an NDAI determination is the former Wilkins Air Force Base in Ohio, where a school is now located. Following increased public interest in school sites that were once owned by DOD, the state regulatory agency became concerned about the number of FUDS in the state where schools or school activities are now located and conducted a file review of all FUDS sites with school-related activities. Based on new information from the state agency, and after conducting a joint site visit, the Corps proposed a new project at the former Wilkins Air Force Base. The Army is taking steps to improve communication among the Army, regulators, and other stakeholders. In 2000, the Army created the FUDS Improvement Working Group to (1) address the concerns of regulators and other stakeholders about the FUDS program and (2) identify new or modified policies and procedures that will improve communication. 
We also noted during our review of NDAI files that the Corps routinely did not notify regulatory agencies when it identified potential hazards that were not the result of DOD use. Although, according to a Corps official, it is “common sense” that the Corps would notify EPA or state regulatory agencies of non-DOD hazards that it identified during its preliminary assessment of eligibility, we estimate that at about 246 NDAI properties the Corps did not notify EPA or state regulatory agencies of non-DOD hazards. For example, when conducting a site visit in Louisiana in 1986, Corps staff identified an underground diesel oil storage tank of unknown size that held approximately 12 inches of diesel oil. The Corps concluded that this hazard was not the result of DOD activities, but was left by the Coast Guard. However, the file contains no evidence that the Corps notified EPA or state regulators of the suspected hazard. An EPA official told us that the Corps never notified EPA of the hazard at this site, and that EPA became aware of the hazard only in 2000, as the result of an initiative it undertook to review Corps FUDS files. While not notifying regulatory agencies of potential hazards that were not the result of DOD use does not affect the Corps’ NDAI determination, it presents a lost opportunity to assist regulators in their efforts to protect human health and the environment. The Corps does not have a sound basis for about a third of its NDAI determinations for FUDS properties. In making its determinations, the Corps was handicapped by a lack of information about how these properties were used and which facilities were present when DOD controlled the property. In addition, the Corps, at times, apparently overlooked or dismissed information in its possession that suggested that hazards might be present. In still other cases, the Corps did not conduct an adequate site visit to assess the presence of hazards. Because of inadequacies in the Corps’ process for assessing the presence of DOD-caused hazards at these properties, potential hazards may have gone unnoticed. The Corps also did not consistently notify owners and regulatory agencies of its findings and determinations. By not communicating with these parties, the Corps lost opportunities to obtain information on potential hazards that were not discovered during its preliminary assessment of eligibility, which is not comprehensive. These shortcomings resulted, in part, because Corps guidance does not specify what documents or level of detail the Corps should obtain when identifying potential hazards. Also, the guidance does not include information about typical hazards that might be present at certain types of properties or specify how to assess the presence of potential hazards. As a result, the Corps’ assessment that almost 4,000 FUDS require no further study or cleanup action may not be accurate. In essence, the Corps does not know the number of additional properties that may require further study or cleanup actions, the hazards that may be present at these properties, or the risk level associated with these hazards. Given that one of the factors used in establishing the Corps’ cleanup priorities is the risk that each property poses to the public or to the environment, unless the Corps improves its guidance and reviews past NDAI determinations to determine which sites should be reassessed, the Corps cannot be reasonably certain that it has identified all hazards that may require further study or cleanup action.
Without knowing the full extent of the hazards at these properties, the Corps cannot be assured that the properties it is currently cleaning up or that it plans to clean up in the future are the sites that pose the greatest risk. The Corps also cannot estimate how much additional money and time may be needed to clean up properties that were not properly assessed. To help ensure that all potential hazards are adequately identified and assessed, we recommend that the Secretary of Defense direct the Corps to develop and consistently implement more specific guidelines and procedures for assessing FUDS properties. These guidelines and procedures should ● specify the historical documents, such as site maps, aerial and ground photos, and comprehensive site histories, that the Corps should try to obtain for each property to identify all of the potential hazards that might have been caused by DOD’s use; ● include a listing of typical hazards that might be present at certain types of properties, such as communication facilities or motor pools, and incorporate the guides already developed for ordnance hazards and Nike missile sites into Corps procedures; ● require that the Corps contact other interested parties—including federal, state, and local agencies—as well as owners during the preliminary assessment of eligibility to discuss potential hazards at the properties; and ● provide instructions for conducting site visits to ensure that each site receives an adequate site visit and that all potential hazards are properly assessed. To further ensure that all hazards caused by DOD at FUDS properties are identified, we recommend that the Secretary of Defense, as an initial step, direct the Corps to use the newly developed guidance and procedures to review the files of FUDS properties that it has determined do not need further study or cleanup action to determine if the files contain adequate evidence to support the NDAI determinations. If there is an insufficient basis for a determination, the property should be reassessed. To ensure that all parties are notified of the Corps’ NDAI determinations, we recommend that the Secretary of Defense direct the Corps to develop and consistently implement procedures to ensure that owners and appropriate federal, state, and local environmental agencies are notified of the results of the Corps’ preliminary assessments of eligibility in a timely manner. The Corps should also ensure that owners are aware that the Corps will reconsider an NDAI determination if new evidence of DOD hazards is found. In addition, when preliminary assessments of eligibility identify potential hazards that did not result from DOD activities, the procedures should direct the Corps to notify the appropriate regulatory agencies in a timely manner. We provided a copy of this report to DOD for review and comment. In written comments on a draft of this report, DOD disagreed with our conclusions but partially agreed with each of the three recommendations included in this report. DOD disagreed with our conclusion that the Corps did not consistently obtain information necessary to identify potential hazards at FUDS properties. While DOD acknowledged that the Corps did not have consistent procedures for evaluating FUDS properties during the early years of the program, the agency stated that it does not believe that such inconsistencies led to inadequate assessments.
Our conclusion that the Corps did not consistently obtain information necessary to identify potential hazards at FUDS properties is based on our review of over 600 randomly selected NDAI files at nine Corps district offices. We found numerous instances where the files did not contain evidence that potential hazards associated with the property’s prior uses were identified or that Corps staff looked for hazards other than unsafe buildings or debris. Furthermore, during our review, several district officials told us that they would like to reexamine some of the NDAI determinations, but that limited funding is available for this purpose. DOD also stated that the use of tools developed in the later years of the program, such as checklists for specific types of sites, has contributed to a more consistent approach. We agree that tools such as checklists and guides, which provide information on potential hazards that might be found at certain types of FUDS properties, would be useful. However, as we point out in our report, we identified only three such checklists or guides during our review, and they were not referenced in the Corps’ FUDS manual that provides information and guidance to staff. For this reason, we recommended that the Corps develop guidelines and procedures that include a listing of typical hazards that might be present at certain types of facilities and incorporate the guides already developed. DOD also disagreed that the Corps did not take sufficient steps to assess the presence of potential hazards at FUDS properties. In its comments, DOD stated that the FUDS eligibility determination was never intended as a means to characterize all the hazards at a site and cannot be compared to the CERCLA preliminary assessment/site inspection. We recognize that the preliminary assessment of eligibility is not, nor is it intended to be, a comprehensive evaluation of a FUDS property, and our report does not compare the Corps’ preliminary assessment of eligibility to the CERCLA preliminary assessment/site inspection. DOD also stated that if the Corps determines that a property is eligible for the program, an investigation process is undertaken to determine the extent of DOD-caused hazards at the site. In practice, however, not all eligible FUDS properties automatically proceed to the investigative phase. NDAIs, which account for over 4,000 of the approximately 6,700 properties the Corps has determined are eligible for the FUDS cleanup program, do not undergo further investigation. Only properties that are eligible for the FUDS program and where the Corps believes potential DOD-caused hazards may exist undergo further investigation. However, as we point out in our report, we found instances where Corps officials appeared to overlook or dismiss information in their possession that suggested potential hazards might be present, and we included specific examples where this occurred. DOD partially agreed with our recommendation to develop and consistently implement more specific guidelines and procedures for assessing FUDS properties. DOD pointed out that the Army, through the FUDS Improvement Initiative, is currently evaluating the need for any additional guidance or requirements. Our report describes some of the shortcomings that we found in the Corps’ guidance, and our recommendation identifies key areas where we believe that the Corps’ guidelines and procedures should be made more specific.
DOD also partially agreed with our recommendation to use newly developed guidance and procedures to determine if NDAI files contained adequate evidence to support the Corps’ determinations. DOD noted that the Corps would reevaluate an NDAI determination if additional information were discovered and pointed out that the Army has already agreed to reevaluate two to five NDAIs per year at each state’s request. Our report acknowledges both the Corps’ policy of reconsidering an NDAI determination if evidence of DOD-caused hazards is later found and its plans to reevaluate two to five NDAIs per year at each state’s request. We do not believe that the Corps should wait to be asked to reconsider its past NDAI determinations. Under the Defense Environmental Restoration Program, DOD and the Corps, as the executive agent for the FUDS program, bear the responsibility of identifying, investigating, and cleaning up, if necessary, DOD-caused hazards at FUDS properties. Therefore, we continue to believe that the Corps should undertake a review of NDAI property files and reassess those properties where the Corps’ determinations are not adequately supported. In response to our recommendation aimed at improving its notification procedures, DOD commented that eligibility determination reports are now routinely provided to the states and, where appropriate, to EPA regional offices, and that recent efforts have increased coordination and communication between regulatory agencies and property owners. DOD also pointed out that the Army plans to include, as part of the FUDS manual revision, guidance that specifically requires notification of landowners and regulatory agencies of all NDAI determinations. While DOD did not specifically comment on our recommendation to develop procedures to direct the Corps to notify the appropriate regulatory agencies when its preliminary assessment of eligibility identifies potential hazards that did not result from DOD activities, DOD indicated in its technical comments that the Corps will notify the proper authorities of such hazards. In addition to its written comments, DOD provided a number of technical comments and clarifications, which we incorporated as appropriate. DOD’s comments appear in appendix III. To determine the extent to which the Corps (1) has a sound basis for its determinations that more than 4,000 formerly used defense sites need no further study or cleanup actions and (2) communicated its NDAI determinations to owners and regulatory agencies that may have responsibilities and notified the owners that it will reconsider an NDAI determination if evidence of DOD-caused hazards is found later, we reviewed a statistical sample of 635 NDAI files at nine Corps districts that execute the FUDS program. The districts selected were (1) Alaska, (2) Fort Worth, (3) Jacksonville, (4) Louisville, (5) New York, (6) Omaha, (7) Sacramento, (8) Savannah, and (9) Seattle. The Alaska district was selected with certainty because it had the highest number of NDAIs when we began our review. The remaining eight districts were selected at random from 21 of the 22 Corps districts that execute the FUDS program, with the probability of selection proportional to the number of NDAIs in their districts. The Huntington district was excluded from our study population because it had only seven NDAIs and was not considered to be a practical choice to examine if selected. The 21 districts from which we selected our random sample accounted for 99.8 percent of the NDAI files.
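The two-stage design described above, in which districts are drawn with probability proportional to their NDAI counts, is a standard survey-sampling technique. The short Python sketch below illustrates one common way such a selection can be carried out (the cumulative-total, or systematic, method); the district counts shown are hypothetical placeholders rather than the actual figures, and the report does not state which selection algorithm was applied, so this is an illustration only.

import random

# Hypothetical per-district NDAI counts; the actual counts are not given in this report.
district_ndai_counts = {
    "Fort Worth": 420, "Jacksonville": 310, "Louisville": 180, "New York": 150,
    "Omaha": 260, "Sacramento": 390, "Savannah": 210, "Seattle": 280,
    "Baltimore": 120, "St. Louis": 90,
}

def pps_systematic_sample(sizes, n):
    """Select n units with probability proportional to size, using the
    cumulative-total (systematic) method. Simplified sketch: it does not
    handle a unit so large that it could be drawn more than once."""
    names = list(sizes)
    cumulative, running = [], 0
    for name in names:
        running += sizes[name]
        cumulative.append(running)
    interval = running / n                # sampling interval
    start = random.uniform(0, interval)   # random starting point
    selected, i = [], 0
    for k in range(n):
        point = start + k * interval
        while cumulative[i] < point:
            i += 1
        selected.append(names[i])
    return selected

# Alaska was taken with certainty; the remaining eight districts are drawn PPS.
print(pps_systematic_sample(district_ndai_counts, n=8))

Under such a design, each sampled file is typically assigned a weight equal to the inverse of its overall selection probability, which is what allows estimates from the reviewed files to be projected to all eligible NDAIs in the 21 districts.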
Thirty-two of the properties whose files we selected for review were excluded from our analysis because the files contained evidence that either the property was not eligible for the FUDS program or that a cleanup project was proposed. Each NDAI selected was subsequently weighted in the analysis to account statistically for all eligible NDAIs in the 21 districts, including those that were not selected. We obtained and reviewed the Corps’ policies and procedures and program documents to obtain information about the preliminary assessment of eligibility. We also interviewed past and present FUDS program officials from headquarters and district offices to obtain information about the practices followed by Corps staff in completing this phase. From the information provided by these officials and a review of a sample of NDAI files at the Baltimore district, we developed a data collection instrument (DCI). The DCI was used to document, in a consistent manner, the evidence that we abstracted from each file reviewed and our assessment of the soundness of the Corps’ NDAI determination. We also contacted environmental officials from 17 states that interact with Corps districts on the FUDS program. We judgmentally selected these states to provide a range of opinion and perception of the Corps’ preliminary assessment of eligibility. In addition, we contacted officials from EPA regional offices that interact with the Corps’ districts included in our review. These offices included Atlanta (Region 4); Chicago (Region 5); Dallas (Region 6); Denver (Region 8); Kansas City (Region 7); New York City (Region 2); San Francisco (Region 9); and Seattle (Region 10). Appendix I contains additional details on our scope and methodology, and appendix II presents the results of our review of 603 randomly selected NDAI files. We conducted our review from May 2001 through June 2002 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Secretary of Defense; the Director, Office of Management and Budget; the appropriate congressional committees; and other interested parties. We will also provide copies to others upon request. In addition, the report will be available, at no charge, on the GAO Web site at http://www.gao.gov/. The objectives of our review were to determine the extent to which the U.S. Army Corps of Engineers (Corps) (1) has a sound basis for determining that more than 4,000 formerly used defense sites (FUDS) need no further study or cleanup and for designating those properties as “No Department of Defense (DOD) Action Indicated, Category I” (NDAI) and (2) communicated its NDAI determinations to owners and to the regulatory agencies that may have responsibility and notified the owners that it will reconsider an NDAI determination if evidence of DOD-caused hazards is found later. To address these objectives, we analyzed a statistical sample of 603 NDAI files at nine Corps districts that execute the FUDS program. The districts selected were (1) Alaska, (2) Fort Worth, (3) Jacksonville, (4) Louisville, (5) New York, (6) Omaha, (7) Sacramento, (8) Savannah, and (9) Seattle. The Alaska district was selected with certainty because it had the highest number of NDAIs when we began our review. 
The remaining districts were randomly selected, with the probability of selection proportional to the number of NDAIs in the district. Table 1 provides additional information on the districts selected for our review, including the states within their boundaries, the number of FUDS properties designated as NDAI, the NDAI files we selected for review, and the number of determinations that we questioned. We reviewed each selected file to determine if it contained evidence that the Corps (1) reviewed or obtained information on the buildings, structures, and other facilities (such as underground storage tanks) associated with DOD’s use of the site that would allow the Corps to identify the types of hazards potentially resulting from DOD’s use and (2) took sufficient steps to assess the presence of potential hazards. If we did not find evidence in the file that indicated the Corps reviewed or obtained information on prior DOD uses of the site, we concluded that the Corps did not identify all of the hazards that might be present at the site. However, the absence of a single piece of information, such as a site map or record of contact with an owner, did not automatically cause us to question the adequacy of the Corps’ efforts to identify the prior uses and the associated potential hazards. Rather, we based our assessment of the Corps’ efforts on the totality of the evidence in the file. For example, if the file did not contain a site map, but the file contained evidence that the Corps staff made use of a site map during its assessment, we concluded that the Corps reviewed a site map. If the file contained evidence that the Corps determined that potential hazards might be present, but did not take certain actions, such as conducting a site visit, we concluded that the Corps did not take sufficient steps to assess the presence of potential hazards at the site. However, if the file contained evidence that a site visit was conducted, such as the date of a site visit, we concluded that the Corps conducted a site visit even if the file did not contain photos or a trip report. If a file contained evidence that the Corps overlooked or dismissed information in its possession that potential hazards might be present, we concluded that the Corps did not take sufficient steps to assess the presence of potential hazards. If we found any of these scenarios when reviewing a file, we determined that the NDAI determination was questionable. Our questioning of an NDAI determination does not mean that the property is contaminated; rather, it indicates that the Corps’ file did not contain evidence that the Corps took steps to identify and assess potential hazards at the property that would support the NDAI determination. We also reviewed the NDAI files to determine how often the Corps notified owners and regulatory agencies of its NDAI determinations and of its policy of reconsidering the determinations if additional evidence of DOD-caused hazards was found later. We used a data collection instrument (DCI) to document, in a consistent manner, the evidence that we abstracted from each file and our assessment of the soundness of the Corps’ NDAI determinations. Each DCI was independently reviewed and compared to the original file to ensure that the information documented on the DCI was accurate and that our assessment of the Corps’ determination was reasonable, i.e., that another person looking at the information in the file would come to the same conclusion about the Corps’ determination.
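The review logic described above amounts to a simple decision rule, paraphrased in the Python sketch below. This is our illustrative restatement, not a reproduction of the actual DCI, which contained many more questions and sub-questions; each boolean field stands in for a totality-of-evidence judgment rather than the presence or absence of any single document.

from dataclasses import dataclass

@dataclass
class FileEvidence:
    # Illustrative fields; each summarizes a judgment based on the whole file.
    reviewed_prior_dod_uses: bool   # evidence that prior DOD uses and facilities were reviewed
    hazards_suspected: bool         # file indicates potential DOD-caused hazards may be present
    adequate_site_visit: bool       # ground visit; air or drive-by visits do not qualify
    overlooked_evidence: bool       # information suggesting hazards was overlooked or dismissed

def determination_is_questionable(f: FileEvidence) -> bool:
    """Return True if the file does not support the NDAI determination.
    Questioning a determination does not mean the property is contaminated."""
    if not f.reviewed_prior_dod_uses:
        return True   # not all potential hazards could have been identified
    if f.hazards_suspected and not f.adequate_site_visit:
        return True   # insufficient steps taken to assess suspected hazards
    if f.overlooked_evidence:
        return True   # information in the Corps' possession was set aside
    return False

Any one of the three conditions is enough to question a determination, mirroring the any-of-these-scenarios rule stated above.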
We copied the contents of the files to ensure that any further questions or issues could be researched later and that we had sufficient evidence to support the information recorded on the DCI. From the DCIs, we created an electronic database. The members of our team reviewing the files and the person conducting the supervisory review changed for each district. While we rotated staff to reduce bias, we also used this rotation to help increase consistency of judgments. In addition, we conducted an independent quality check of our database entries created from the DCIs. For each of the districts visited, we randomly selected 10 percent of the electronically entered DCIs. An independent verifier checked 100 percent of the data for every question, sub-question, and comment box on the DCI, comparing the “hard copy” of the DCI to the entries found in the database to ensure that there were no data entry errors. Our error rate was 0.379 percent—less than ½ of 1 percent. All errors found were corrected. In addition, we verified 100 percent of the responses to questions and sub-questions on the DCI that were key to supporting our findings. The information presented in this report consists, in part, of statistical estimates based on our review of randomly selected files. The results of our analysis are projectable to NDAI determinations nationwide, excluding the Huntington district. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we could have drawn. Each sample could have provided different estimates. We therefore express our confidence in the precision of our particular sample’s results as 95 percent confidence intervals. Each of these intervals contains the actual (unknown) population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in the report will include the true value in the study population. All percentage estimates from the file review have 95 percent confidence intervals whose width does not exceed plus or minus 10 percentage points, unless otherwise noted. All numerical estimates other than percentages (such as averages or totals) have 95 percent confidence intervals whose width does not exceed 10 percent of the value of those estimates, unless otherwise noted. The widths of the confidence intervals are shown as footnotes to the text, where appropriate. While the results of our analysis are generally projectable nationwide, we also used our selected samples to develop case examples of the preliminary assessments of eligibility conducted by the Corps. These case examples are for illustration only. To determine the extent to which the U.S. Army Corps of Engineers (Corps) has a sound basis for its determinations that more than 4,000 formerly used defense sites (FUDS) need no further Department of Defense (DOD) study or cleanup and for designating those properties as “No DOD Action Indicated” (NDAI), we reviewed and analyzed a statistical sample of 603 NDAI files at nine Corps districts. Table 2 shows the property name, the FUDS number, and whether we found, based on our review of the evidence in the file, that the Corps had a sound basis for its NDAI determination. In those cases where we do not believe that the Corps has a sound basis, the table includes an explanation for our finding. 
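As a concrete illustration of the precision statements above, the Python sketch below computes a simple normal-approximation 95 percent confidence interval for one of the percentage estimates in this report. It deliberately ignores the weighting and stratification of the actual sample design, which the reported intervals account for, so the resulting interval is approximate and for illustration only.

import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion.
    Ignores design effects (weighting, clustering), which widen real survey intervals."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Example using figures from this report: an estimated 18 percent of the
# study population lacked an adequate site visit, based on 603 reviewed files.
low, high = proportion_ci(0.18, 603)
print(f"95% CI: {low:.1%} to {high:.1%}")   # about 14.9% to 21.1%

The roughly plus or minus 3 percentage point half-width of this illustrative interval falls comfortably within the plus or minus 10 percentage point bound stated above.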
Our questioning of an NDAI determination does not mean that the property is contaminated; rather, it indicates that the Corps’ file did not contain evidence that the Corps took steps to identify and assess potential hazards at the property that would support the NDAI determination. In the table, we use abbreviations for the four types of hazards: building demolition and debris removal (BD/DR); hazardous, toxic, and radioactive waste (HTRW); containerized hazardous, toxic, and radioactive waste (CON/HTRW); and ordnance and explosive waste (OEW). In addition to those named above, Ian Ferguson, Ken Lightner, Sherry McDonald, and Aaron Shiffrin made key contributions to this report. Also contributing to this report were Doreen S. Feldman, Susan W. Irwin, Cynthia Norris, and Sidney Schwartz.
The Department of Defense (DOD) estimates that cleaning up contamination and hazards at thousands of properties that it formerly owned or controlled will take more than 70 years and cost as much as $20 billion. These formerly used defense sites (FUDS), which can range in size from less than an acre to many thousands of acres, are now used for parks, farms, schools, and homes. Hazards at these properties include unsafe buildings, toxic and radioactive wastes, containerized hazardous wastes, and ordnance and explosive wastes. The U.S. Army Corps of Engineers is responsible for identifying, investigating, and cleaning up hazards resulting from military use. GAO found that the Corps lacks a sound basis for its conclusion that 38 percent of 3,840 FUDS need no further study or cleanup action. The Corps' determinations are questionable because there is no evidence that it reviewed or obtained information that would allow it to identify all the potential hazards at the properties, or that it took sufficient steps to assess the presence of potential hazards. GAO also found that the Corps often did not notify owners of its determinations that the properties did not need further action, as called for in its guidance, or tell the owners to contact the Corps if evidence of DOD-caused hazards was found later.
Federal operations and facilities have been disrupted by a range of events, including the terrorist attacks on September 11, 2001; the Oklahoma City bombing; localized shutdowns due to severe weather conditions, such as hurricanes Katrina, Rita, and Wilma in 2005; and building-level events, such as asbestos contamination at the Department of the Interior’s headquarters. In addition, federal operations could be significantly disrupted by people-only events, such as an outbreak of severe acute respiratory syndrome (SARS). Such disruptions, particularly if prolonged, can lead to interruptions in essential government services. Prudent management, therefore, requires that federal agencies develop plans for dealing with emergency situations, including maintaining services, ensuring proper authority for government actions, and protecting vital assets. Until relatively recently, continuity planning was generally the responsibility of individual agencies. In October 1998, Presidential Decision Directive (PDD) 67 identified FEMA—which is responsible for leading the effort to prepare the nation for all hazards and managing federal response and recovery efforts following any national incident—as the lead agent for federal COOP planning across the federal executive branch. FEMA’s responsibilities include ● formulating guidance for agencies to use in developing viable plans; ● coordinating interagency exercises and facilitating interagency coordination, as appropriate; and ● overseeing and assessing the status of COOP capabilities across the executive branch. In July 1999, FEMA issued the first version of Federal Preparedness Circular (FPC) 65, its guidance to the federal executive branch on developing viable and executable contingency plans that facilitate the performance of essential functions during any emergency. FPC 65 applies to all federal executive branch departments and agencies at all levels, including locations outside Washington, D.C. FEMA released an updated version of FPC 65 in June 2004, providing additional guidance to agencies on each of the topics covered in the original guidance. In partial response to a recommendation we made in April 2004, the 2004 version of FPC 65 also included new guidance on human capital considerations for COOP events. For example, the guidance instructed agencies to consider telework—also referred to as telecommuting or flexiplace—as an option in their continuity planning. Telework has gained widespread attention over the past decade in both the public and private sectors as a human capital flexibility that offers a variety of potential benefits to employers, employees, and society. In a 2003 report to Congress on the status of telework in the federal government, the Director of OPM described telework as “an invaluable management tool which not only allows employees greater flexibility to balance their personal and professional duties, but also allows both management and employees to cope with the uncertainties of potential disruptions in the workplace, including terrorist threats.” A 2005 OPM report on telework notes the importance of telework in responding flexibly to emergency situations, as demonstrated in the wake of the devastation caused by Hurricane Katrina, when telework served as a tool to help alleviate the issues caused by steeply rising fuel prices nationwide. In 2004, we surveyed major federal agencies at your request to determine how they planned to use telework during COOP events.
We reported that, although agencies were not required to use telework in their COOP plans, 1 of the 21 agency continuity plans in place on May 1, 2004, documented plans to address some essential functions through telework. In addition, 10 agencies reported that they intended to use telework following a COOP event, even though those intentions were not documented in their continuity plans. The focus on using telework in continuity planning has been heightened in response to the threat of pandemic influenza. In November 2005, the White House issued a national strategy to address this threat, which states that social distancing measures, such as telework, may be appropriate public health interventions for infection control and containment during a pandemic outbreak. The strategy requires federal departments and agencies to develop and exercise preparedness and response plans that take into account the potential impact of a pandemic on the federal workforce. It also tasks DHS—the parent department of FEMA—with developing plans to implement the strategy in regard to domestic incident management and federal coordination. In May 2006, the White House issued an implementation plan in support of the pandemic strategy. This plan outlines the responsibilities of various agencies and establishes time lines for future actions. Although more agencies reported plans for essential team members to telework during a COOP event than in our 2004 survey, few documented that they had made the necessary preparations to effectively use telework during an emergency. While FPC 65 does not require agencies to use telework during a COOP event, it does state that they should consider the use of telework in their continuity plans and procedures. All of the 23 agencies that we surveyed indicated that they considered telework as an option during COOP planning, and 15 addressed telework in their COOP plans (see table 1). For agencies that did not plan to use telework during a COOP event, reasons cited by agency officials for this decision included (1) the need to access classified information— which is not permitted outside of secured areas—in order to perform agency essential functions and (2) a lack of funding for the necessary equipment acquisition and network modifications. The agencies that did plan to use telework in emergencies did not consistently demonstrate that they were prepared to do so. We previously identified steps agencies should take to effectively use telework during an emergency. These include preparations to ensure that staff has adequate technological capacity, assistance, and training. Table 1 provides examples of gaps in agencies’ preparations, such as the following: ● Nine of the 23 agencies reported that some of their COOP essential team members are expected to telework during a COOP event. However, only one agency documented that it had notified its team members that they were expected to telework during such an event. ● None of the 23 agencies demonstrated that it could ensure adequate technological capacity to allow designated personnel to telework during a COOP event. No guidance addresses the steps that agencies should take to ensure that they are fully prepared to use telework during a COOP event. 
When we reported the results of our 2004 survey, we recommended that the Secretary of Homeland Security direct the Under Secretary for Emergency Preparedness and Response to develop, in consultation with OPM, guidance on the steps that agencies should take to adequately prepare for the use of telework during a COOP event. However, to date, no such guidance has been created. In March 2006, FEMA disseminated guidance to agencies regarding the incorporation of pandemic influenza considerations into COOP planning. The guidance states that the dynamic nature of a pandemic influenza requires that the federal government take a nontraditional approach to continuity planning and readiness. It suggests the use of telework during such an event. According to the guidance, agencies should consider which essential functions and services can be conducted from a remote location (e.g., home) using telework. However, the guidance does not address the steps agencies should take when preparing to use telework during an emergency. For example, although the guidance states that agencies should consider testing, training, and exercising of social distancing techniques, including telework, it does not address other necessary preparations, such as informing designated staff of the expectation to telework or providing them with adequate technical resources and support. Earlier this month, after we briefed your staff, the White House released an Implementation Plan in support of the National Strategy for Pandemic Influenza. This plan calls on OPM to work with DHS and other agencies to revise existing telework guidance and issue new guidance on human capital planning and COOP. The plan establishes an expectation that these actions will be completed within 3 months. If the forthcoming guidance from DHS and other responsible agencies does not require agencies to make the necessary preparations for telework, agencies are unlikely to take all the steps necessary to ensure that employees will be able to effectively use telework to perform essential functions during any COOP event. In addition, inadequate preparations could limit the ability of nonessential employees to contribute to agency missions during extended emergencies, including a pandemic influenza scenario. In summary, Mr. Chairman, although more agencies reported plans for essential team members to telework during a COOP event than in our previous survey, few documented that they had made the necessary preparations to effectively use telework during an emergency. In addition, agencies lack guidance on what these necessary preparations are. Although FEMA’s recent telework guidance does not address the steps agencies should take to prepare to use telework during an emergency event, new guidance on telework and COOP is expected to be released later this year. If the new guidance does not specify the steps agencies need to take to adequately prepare their telework capabilities for use during an emergency situation, it will be difficult for agencies to make adequate preparations to ensure that their teleworking staff will be able to perform essential functions during a COOP event. In our report, we made recommendations aimed at helping to ensure that agencies are adequately prepared to perform essential functions following an emergency. 
Among other things, we recommended that the Secretary of Homeland Security direct the FEMA Director to establish a time line for developing, in consultation with OPM, guidance on the steps that agencies should take to adequately prepare for the use of telework during a COOP event. In commenting on a draft of the report, the Director of DHS’s Liaison Office partially agreed with this recommendation and stated that FEMA will coordinate with OPM in the development of a time line for further telework guidance. In addition, he stated that both FEMA and OPM have provided guidance on the use of telework. However, as stated in our report, present guidance does not address the preparations agencies should make for using telework during emergencies. With the release of the White House’s Implementation Plan regarding pandemic influenza, a time line has now been established for the issuance of revised guidance on telework; however, unless the forthcoming guidance addresses the necessary preparations, agencies may not be able to use telework effectively to ensure the continuity of their essential functions. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time. For information about this testimony, please contact Linda D. Koontz at (202) 512-6240 or at [email protected]. Key contributions to this testimony were made by James R. Sweetman, Jr., Assistant Director; Barbara Collier; Sairah Ijaz; Nick Marinos; and Kim Zelonis.
To ensure that essential government services are available in emergencies, federal agencies are required to develop continuity of operations (COOP) plans. The Federal Emergency Management Agency (FEMA), within the Department of Homeland Security (DHS), is responsible for providing guidance to agencies on developing such plans. Its guidance states that in their continuity planning, agencies should consider the use of telework—that is, work performed at an employee's home or at a work location other than a traditional office. The Office of Personnel Management (OPM) recently reported that 43 agencies have identified staff eligible to telework, and that more than 140,000 federal employees used telework in 2004. OPM also reported that many government operations can be carried out in emergencies using telework. For example, telework appears to be an effective strategy for responding to a pandemic—a global outbreak of disease that spreads easily from person to person and causes serious illness and death worldwide. In previous work, GAO identified steps that agencies should take to effectively use telework during an emergency. GAO was asked to testify on how agencies are addressing the use of telework in their continuity planning, which is among the topics discussed in a report being released today (GAO-06-713). Although agencies are not required to use telework in continuity planning, 9 of the 23 agencies surveyed reported plans for essential team members to telework during a COOP event, compared to 3 in GAO's previous survey. However, few documented that they made the necessary preparations to effectively use telework during such an event. For example, only 1 agency documented that it had communicated this expectation to its emergency team members. One reason for the low levels of preparations reported is that FEMA has not provided specific guidance on preparations needed to use telework during emergencies. Recently, FEMA disseminated guidance to agencies on incorporating pandemic influenza considerations into COOP planning. Although this guidance suggests the use of telework during such an event, it does not address the steps agencies should take when preparing to use telework during an emergency. Without specific guidance, agencies are unlikely to adequately prepare their telework capabilities for use during a COOP event. In addition, inadequate preparations could limit the ability of nonessential employees to contribute to agency missions during extended emergencies, including pandemic influenza. In its report released today, GAO recommends, among other things, that FEMA establish a time line for developing, in consultation with OPM, guidance on preparations needed for using telework during a COOP event. In commenting on a draft of the report, DHS partially agreed with GAO's recommendation and stated that FEMA will coordinate with OPM in developing a time line for further telework guidance. DHS also stated that both FEMA and OPM have provided telework guidance. However, as GAO's report stated, present guidance does not address the preparations federal agencies should make for using telework during emergencies. On May 3, the White House announced the release of an Implementation Plan in support of the National Strategy for Pandemic Influenza. This plan calls on OPM to work with DHS and other agencies to revise existing telework guidance and issue new guidance on human capital planning and COOP. The plan establishes an expectation that these actions will be completed within 3 months.
If the forthcoming guidance does not require agencies to make necessary preparations for telework, agencies are unlikely to take all the steps necessary to ensure that employees will be able to effectively use telework to perform essential functions in extended emergencies, such as a pandemic influenza.
In 1987, the Congress directed FAA to choose three states to participate in a state block grant pilot program. While many states already had existing state airport capital improvement programs and staff in place to fund development and safety projects at small airports, the block grant program transferred the responsibility for administering AIP grants from FAA to the participating states. To select the states, the Congress directed FAA to determine whether a state was capable of administering the program, used satisfactory airport system planning and programming processes, and would agree to comply with federal procedures. Furthermore, FAA’s regulations stipulated that states accepted into the block grant program could not use AIP funds to finance the costs associated with administering the program unless granted a waiver. Thirty-five states initially expressed interest in participating in the program, 10 states applied, and an FAA review panel recommended that 3 states be selected—Illinois, Missouri, and North Carolina. FAA chose these states, in part, because they were diverse in their organization, staff size, budget, airport systems, and location. After the Congress expanded the program in 1992 to include four additional states, FAA selected Michigan, New Jersey, Texas, and Wisconsin on the basis of the same criteria. States participating in the pilot program receive a block grant consisting of AIP apportionment funds and, if available, AIP discretionary and set-aside funds for distribution at small airports (see fig. 1). When discretionary and set-aside funds are available for small airports, they are distributed to the participating states for the projects that FAA has approved using its national priority system. According to FAA officials, once the participating states receive their block grant, they can use their AIP funds for eligible projects at any small airport. Airports in nonparticipating states receive their grant funds directly from FAA but often must apply to both their state and FAA for grant approval. The state’s approval is necessary if the state provides airports with grant funds to help “match” their AIP grant. All airports, in both participating and nonparticipating states, must provide a certain percentage of funds to match their AIP grants. Small airports receive, on average, 30 percent of all AIP funds annually, or about $450 million, for safety, preservation, and development projects at airports. The seven states have seen a steady decline in AIP funds in recent years; the average allocation fell from a high of $21.5 million in fiscal year 1992 to a low of $7.4 million in fiscal year 1995 (see fig. 2). The reduction in block grant funding since fiscal year 1992 can be attributed to an overall reduction in appropriated AIP funds and increased competition for discretionary funds, including a reduction in the amount of funding set aside for nonprimary commercial and reliever airports. State officials told us that under the block grant program, they have successfully assumed most of FAA’s responsibilities for small airports. Most states took on responsibilities in four key areas: ● Planning: States participate in a number of planning tasks with airport officials. Such tasks include assisting with long-range airport planning, approving changes to airport layout plans to reflect future construction plans, and conducting environmental assessments.
● Grant administration: States help airports select projects qualifying for AIP funding, award AIP grants, issue grant reimbursements, and provide grant oversight. ● Safety and security inspections: The seven block grant states conduct safety inspections at small airports and investigate compliance issues and zoning concerns. ● Project construction: States provide technical assistance during the life of a project, including guiding airport sponsors in soliciting bids for construction, approving AIP construction change orders, and monitoring the progress of the project at preconstruction, interim, and final construction inspections. In 1992, FAA issued a performance review of the first three block grant states, in which it concluded that the pilot program was generally working well. Since 1992, FAA has reviewed the implementation of the pilot program in all block grant states and maintains that the program is a success. FAA regional officials told us that some airport officials were initially confused about the delineation of state and federal responsibilities, but this uncertainty has largely disappeared as the states, FAA, and the airport officials have gained experience working with the pilot program. Officials from small airports with whom we spoke in each block grant state saw no major difference between the services delivered by the states and those previously delivered by FAA. The airport officials told us that they typically see state inspectors more frequently than FAA inspectors and believe that the state inspectors have more direct and current knowledge of individual airports’ needs. The states told us that one factor easing their transition to the block grant program was their prior experience with their own airport improvement programs. Each state had previously administered a state-funded grant program that provided grants, planning, and construction assistance to small airports. Furthermore, these states had provided some matching funds to help airports finance their share of AIP grants; therefore, states had been directly involved with many federally administered AIP projects in conjunction with their own efforts to oversee the state’s investment. In addition, four of the seven states required their state aviation agencies to participate in the process for approving, distributing, and overseeing federal funds for airport projects; thus, these states had already assumed an oversight role on behalf of the federal government. According to officials in six of the block grant states, another factor that facilitated their transition to the pilot program was having inspection programs in place when they assumed their new responsibilities. Even before the transition, state inspectors typically had visited the smaller airports more frequently than FAA because the states were already responsible for airport safety inspections and also routinely inspected ongoing airport construction projects. Having enough staff with the requisite expertise was also important to the block grant states’ success. Five of the seven states already had a staff of engineers, planners, grant administrators, and inspectors in place to service and oversee state-funded and AIP projects at small airports. The other two states, Missouri and New Jersey, had smaller state programs with fewer staff and could not initially accommodate the increased workload.
Although Missouri state officials sought approval from the state legislature to hire additional staff, their efforts were unsuccessful because the program was a pilot and the legislators viewed its future as uncertain. When New Jersey joined the pilot program, it had a relatively small state grant program and was not providing the same range of services to small airports as FAA had been providing. To overcome their staffing shortfalls, both states petitioned FAA for a waiver allowing them to use some of their block grant funds to help defray the costs of administering the block grant program. In their petitions, both states indicated that they required more staff and training in order to efficiently manage the program. FAA approved the requests, limiting the amount of the block grant funding used for this purpose to $75,000 annually. Participating states and airports in these states have derived important benefits from the state block grant pilot program. First, the program has expedited project approvals because the block grant states may now approve project scopes and financing, which formerly required FAA approval. State officials told us that they can provide approval to airports more efficiently than FAA. The quicker turnaround time has enabled airports to use their contractors more efficiently—saving time and money on projects. The states have also acquired the authority to review and approve airport layout plans for future projects. In the past, both the states and FAA reviewed such plans and FAA approved them. Second, the state officials told us they were able to reduce the paperwork required to apply for federal projects, using their own forms and applications instead of both their own and FAA’s. This reduction, which simplified both the application and the review processes, created efficiencies for both the airport and state officials. Third, the duplication of airport oversight activities has been reduced or, in many cases, eliminated. In the past, for example, both the states and FAA typically conducted inspections during the life of an airport project, because both had provided funds for it. Now, the state is solely responsible for those inspections. FAA has benefited from the state block grant pilot program because it has been able to shift regional staff resources to deal with other pressing priorities. FAA has thus partially compensated for the effects of attrition and a hiring freeze, which have reduced its airport staff by 12 percent in the affected regions over the past 3 years. The states can now provide oversight for small airports where attrition had, according to some regional officials, already reduced FAA’s coverage. FAA can now assign a greater portion of its remaining staff to emerging priorities at larger airports, such as reviewing passenger facility charges and environmental compliance issues. FAA regional officials told us that they are still available to advise the state officials on airport issues and to review many of the documents prepared by state officials. During the pilot program, FAA’s and the states’ views on the purpose of the state block grant pilot program have differed. FAA viewed the program’s purpose as identifying administrative functions that might be shifted to or shared with the states. FAA also saw the program as a means of (1) giving the states more discretion in selecting and managing projects and (2) testing their ability to improve the delivery of federal funds.
In contrast, the states viewed the block grant program as a vehicle for putting funding decisions into the hands of those with firsthand knowledge of the projects competing for funds. FAA’s and the states’ views on the priorities for using AIP funds have also differed. FAA maintains that federal funds should be used to meet the needs of the nation’s airport system. FAA implemented a new system for prioritizing allocations of AIP discretionary funds in 1993. According to the states, however, FAA’s national system does not adequately weigh the needs of small airports or reflect the goals of the individual states. The states in the pilot program expressed a desire for autonomy in allocating AIP funds according to their own priorities rather than those established by FAA. FAA applied its national priority system to all AIP projects competing for discretionary funds, including those submitted by the block grant states. Furthermore, although the system applied only to requests for discretionary funds, state officials from five block grant states told us that FAA had directed them to allocate their apportionment funds in accordance with the national priority system or risk losing the opportunity to compete for discretionary funds. Before 1993, FAA had allowed the block grant states greater flexibility in setting their own project priorities when distributing apportionment and discretionary funds, and many had used their own priority systems. State officials told us that their priority systems emphasized high-priority safety and capacity-enhancement projects, as FAA’s system requires; however, the state systems target the funds to the airports that the states deem most important. These airports are not necessarily the same as those that FAA deems most important. FAA’s and the states’ differences in priorities have led to differences of opinion about how AIP funds should be spent. Under the state block grant law, states selected to participate in the program must have a process in place to ensure that the needs of the national airport system will be addressed when the states decide which projects will receive AIP funds. State officials said that they fulfill this requirement when they use their own priority systems to direct AIP funds to eligible projects at airports included in FAA’s National Plan of Integrated Airport Systems (NPIAS). In FAA’s view, however, according to the Director of FAA’s Office of Airport Planning and Programming, the state block grant program is a tool to develop a national system of airports and the priority system is one method to ensure the development of that system. An attorney from FAA’s Chief Counsel’s Office, Airport Laws Branch, said that FAA had adequate authority to require the block grant states to adhere to the national priority system when distributing grant funds. He added that unless FAA receives other direction from the Congress, it should continue to require the block grant states to abide by its national priority system. State officials expressed concern that using FAA’s national system does not allow them to take advantage of their expertise to direct federal funds to the airport projects that will go the farthest toward achieving the aviation goals established by their states. In their view, a primary purpose of the block grant program is to put decision-making power in the hands of those with firsthand knowledge.
State officials said they had sufficient information to make sound funding decisions on their own, because they routinely visit and inspect airports, establish local and state aviation goals, and develop state plans and priority systems. Three of the block grant states said that they had applied or would like to apply block grant funds to projects that, under FAA’s national priority system, probably would not rank high enough to receive funding even though the projects would increase safety or capacity. North Carolina. State officials said that, until very recently, North Carolina has had little need for reliever airports to help reduce congestion at busy commercial service airports. Now, however, this need is acute. As a result, the state has placed high priority on using its block grant funds to build new reliever airports or help general aviation airports evolve into reliever airports. To achieve its goal, North Carolina has requested AIP reliever set-aside funds and also used most of its AIP apportionment funds (typically used for projects at general aviation airports) for projects at reliever airports. State officials said that had they used FAA’s priority system in allocating these funds, the types of airports and the projects funded would have been different. New Jersey. The state has chosen to save its block grant funds over the past few years to amass enough money to buy a private general aviation airport. Officials said that most of the state’s remaining general aviation airports are privately owned, and many of the owners are either considering closing or have already closed their airports because of increased costs for property taxes and liability insurance. The state would eventually like to purchase several small airports, preserving them for general aviation access. Under FAA’s criteria, purchasing general aviation airports is a relatively low priority that probably would not be funded. Missouri. State officials said that in the first years of their program, they provided grants to airports that needed safety-related upgrades but had not previously received AIP funding, either because the airports were too small or the types of projects had not met FAA’s funding criteria. State officials told us that the initial block grant program was scheduled to last for 2 years and they felt compelled to issue as many grants to small airports as possible during that time. Thus, the state awarded more grants to more airports than FAA would have typically funded in a similar period with the same amount of money. We conducted a nationwide survey to determine whether states would be interested in participating in a block grant program. Of the 43 nonparticipating states, 34, or 79 percent, indicated that they would be interested in participating in such a program and appeared capable of doing so. (See app. I for a list of the 34 states interested in participating in the block grant program.) Many states wanted both the flexibility to manage airport funds and financial assistance to administer the program. Nearly all of the states expressing interest in the block grant program already manage state-funded capital improvement programs of their own. Many of the state programs include funding for airport maintenance projects and emphasize aviation safety and education for pilots and the community at large.
The majority of the states that expressed an interest in the block grant program appear to have the staff with the types of expertise that would be needed to successfully administer AIP grants for general aviation airports. In response to our survey, over 59 percent of the interested states said they had at least one full-time engineer, grant administrator, planner, and airport inspector. In addition, in 1995, over 71 percent of the interested states reported that they used either contract employees, personnel from other state agencies, or both to augment their own staff’s expertise. Besides having staff with the requisite skills, the states interested in joining the block grant program have already assumed many of the responsibilities taken on by block grant states. Over 90 percent of these states currently perform half or more of the tasks normally performed by FAA. These tasks include assisting airports in land acquisition and sales, assisting airports in identifying improvement projects and eligible projects, and reviewing plans and specifications for specific projects. Many of the interested states would be more inclined to participate in the block grant program if they could use their own methodology for selecting projects. Over three-quarters of the states interested in the block grant program currently have their own systems for prioritizing airport projects. We reviewed several of these systems and found that they include many of the same elements that appear in FAA’s priority system, including high priorities for safety projects. However, in some instances, states prioritize projects that would be ineligible for funding using FAA’s priority system, such as constructing general aviation terminals and hangars. In addition, we found that when assessing an eligible project’s priority for funds, some states consider factors that FAA’s priority system does not, such as whether (1) an airport has the potential to enhance economic development in a community, (2) an airport has an ongoing airfield maintenance program, or (3) a project has local financial, political, or zoning support. Sixty-two percent of the interested states also said they would request additional funding to administer the block grant program. Over half of the interested states said that they hoped to obtain this additional funding from a combination of state and federal funds. Twenty-nine percent of the interested states indicated that FAA would have to provide additional funding. Fifteen percent of the states either planned to obtain additional funding solely from their own state or would not seek additional funds. The pilot program has demonstrated that, with good preparation, states can manage AIP grants to small airports. If the Congress elects to extend or expand the block grant program before it expires in 1996, many states appear interested in participating, and most seem to have the programs and staff in place to do the job. In our view, the key question now is not whether the states can administer the program, but whose set of priorities should prevail—FAA’s or the states’. Each set of priorities stems from a reasonable position. On the one hand, FAA maintains that federal funds should first be used to meet the needs of a national airport system. On the other hand, the states may prefer to allocate federal funds to local needs, such as encouraging economic development in particular areas or allocating funds to airports that have never ranked high enough to receive competitively awarded grants.
FAA has taken the position that unless it receives alternative direction from the Congress, it will continue to require the states to use its national priorities and the states will risk losing discretionary grant funds if they choose otherwise. We make no recommendation as to whether the states should be required to follow FAA’s national priorities or be left free to make their own decisions. However, any policy change may require the Congress to change the current method for allocating AIP funds for small airports, since airports of all sizes compete for AIP discretionary funds. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions you or the Members of the Subcommittee may have. Alabama Alaska Arizona Arkansas California Colorado Connecticut Delaware Florida Georgia Hawaii Idaho Iowa Kentucky Louisiana Maine Massachusetts Minnesota Mississippi Montana Nebraska Nevada New Hampshire New Mexico North Dakota Ohio Oklahoma Pennsylvania South Carolina South Dakota Tennessee Virginia Washington Wyoming
GAO discussed the Federal Aviation Administration's (FAA) state block grant pilot program, which is part of its Airport Improvement Program (AIP). GAO noted that: (1) the 7 states in the pilot program are providing a broad range of services to small airports and performing many functions that FAA formerly performed, such as long-range planning assistance, grant administration, safety and security inspections, and technical assistance and oversight; (2) airport officials believe that the only differences between FAA and state services are that state inspectors visit more frequently and are more knowledgeable; (3) the states' success under the pilot program is due to their already established state-financed airport development and inspection programs, experience with planning and oversight functions, and experienced staff; (4) participants and FAA believe that the program has streamlined AIP project approval processes, reduced paperwork requirements, eliminated duplication, and enhanced FAA's ability to shift resources to other high-priority tasks; (5) the states would rather use their own project criteria than FAA national criteria because they include more state-level factors, but FAA believes its criteria are more equitable and ensure the development of a national airport system; and (6) 80 percent of nonparticipating states would like to receive block grants and most could successfully administer the grants, but they are concerned about autonomy and the availability of administrative funds under the program.
In February 2011, Boeing won the competition to develop the Air Force’s next generation aerial refueling tanker aircraft, the KC-46. Boeing was awarded a fixed price incentive (firm target) contract for development because KC-46 development was considered to be a relatively low-risk effort to integrate military technologies onto a 767 aircraft designed for commercial use. The contract is designed to hold Boeing accountable for costs associated with the design, manufacture, and delivery of four test aircraft and includes options to manufacture the remaining 175 aircraft. It features two key delivery dates, requiring Boeing to first deliver four development aircraft between April and May 2016, and second, if the Air Force exercises the first two production options, deliver a total of 18 operational aircraft by August 2017. It also specifies that Boeing must correct any deficiencies and bring development and production aircraft to the final configuration at no additional cost to the government. In addition, all required aircrew and maintainer training must be complete and the required support equipment and sustainment support must be in place by August 2017. The contract includes firm-fixed-price contract options for the first and second production lots in 2016, and options with “not-to-exceed” ceiling prices for lots 3 through 13. Barring any changes, the development contract specifies a target price of $4.4 billion and a ceiling price of $4.9 billion, at which point Boeing assumes responsibility for all additional costs. By December 2015, Boeing and the program office estimated that Boeing would incur about $769 million and $1.4 billion, respectively, in additional costs to complete development of the aircraft. To develop a tanker, Boeing modified a 767 aircraft in two phases: In the first phase, Boeing modified the design of the 767 with a cargo door and an advanced flight deck display, borrowed from its new 787, and is calling this modified design the 767-2C. The 767-2C is being built on Boeing’s existing production line. In the second phase, a 767-2C is militarized and brought to a KC-46 configuration. The KC-46 will allow for two types of refueling to be employed in the same mission—a refueling boom that is integrated with a computer-assisted control system and a permanent hose and drogue refueling system. The boom is a rigid, telescoping tube that an operator on the tanker aircraft extends and inserts into a receptacle on the aircraft being refueled. The “hose and drogue” system involves a long, flexible refueling hose stabilized by a drogue (a small windsock) at the end of the hose. See figure 1 for a depiction of the conversion of the 767 aircraft into the KC-46 tanker with the boom deployed. The FAA has previously certified Boeing’s 767 commercial passenger airplane (referred to as a type certificate) and will certify the design for both the 767-2C and the KC-46 with amended and supplemental type certificates, respectively. The Air Force is responsible for certifying that the KC-46 works as intended. The Air Force will also verify that the KC-46 systems meet contractual requirements and certify the KC-46 with various specified receiver aircraft for refueling operations. For the third consecutive year, the program office has reduced its acquisition cost estimate and it continues to estimate that Boeing will meet performance goals.
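The cost-sharing mechanics of a fixed price incentive (firm target) contract can be sketched in a few lines of code. The target and ceiling prices below come from the contract figures above; the 60/40 government/contractor share ratio is a hypothetical assumption added purely for illustration, since the report does not state the negotiated share line, and a real FPI arrangement is negotiated around a target cost and target profit rather than the target price alone.

```python
# Illustrative sketch of fixed price incentive (firm target) cost sharing.
# Target and ceiling prices are from the report; the share ratio is a
# HYPOTHETICAL illustration value, not a figure from the contract.
TARGET_PRICE = 4.4e9   # from the report
CEILING_PRICE = 4.9e9  # from the report
GOV_SHARE = 0.60       # hypothetical government share of overruns

def government_payment(actual_cost: float) -> float:
    """Price the government pays given the contractor's actual cost
    (simplified: no underrun incentive modeled)."""
    if actual_cost <= TARGET_PRICE:
        return actual_cost
    overrun = actual_cost - TARGET_PRICE
    price = TARGET_PRICE + GOV_SHARE * overrun
    # Above the ceiling, the contractor absorbs every additional dollar.
    return min(price, CEILING_PRICE)

# Boeing's December 2015 estimate of roughly $769 million in added cost:
cost = TARGET_PRICE + 769e6
paid = government_payment(cost)
print(f"Government pays: ${paid / 1e9:.2f} billion")
print(f"Boeing absorbs:  ${(cost - paid) / 1e9:.2f} billion")
```

Under these assumed terms, the ceiling works as described in the report: once actual costs push the price to $4.9 billion, every further dollar of growth falls to Boeing.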
The total KC-46 program acquisition cost estimate (development, procurement, and military construction costs) has decreased $3.5 billion, or about 7 percent—from $51.7 billion to $48.2 billion—since the program started in February 2011. The decrease is due primarily to stable requirements, fewer than expected engineering changes, and changes in military construction plans. In addition, the government competitively awarded a contract for an aircrew training system at a lower price than originally projected. Average program acquisition unit costs have decreased by about the same percentage because quantities have remained the same. Table 1 provides a comparison of the initial and current quantity and cost estimates for the program. The current development cost estimate of $6.3 billion includes: $4.9 billion for Boeing’s aircraft development contract; nearly $1 billion for other costs, including training systems development, program office support, and test and evaluation support; and roughly $400 million for risks associated with developing the aircraft and training systems. The program office estimates that Boeing will meet key performance capabilities, such as those related to air refueling and airlift, but has not yet fully verified the estimates through ground and flight testing. Boeing has developed a set of seven technical performance measures to gauge its progress toward meeting these key capabilities, and the program currently predicts that Boeing is on track to meet these measures. For example, the program projects that the aircraft will be able to perform one of its assigned missions at least 92 percent of the time, and that maintainers will be able to fix aircraft problems within 12 hours at least 71 percent of the time. Appendix I lists the status of KC-46 technical performance capabilities. The KC-46 program originally planned to hold its low-rate initial production decision in August 2015, but had to delay the decision 9 months, to May 2016, because Boeing experienced problems developing the aircraft. Although these problems have largely been addressed, Boeing and the government had to revise test and delivery schedules. The changes deferred development aircraft deliveries and, if the Air Force exercises its first two production lot options, will compress production aircraft deliveries. As Boeing implements the new schedule, challenges to flight test completion could affect its ability to deliver aircraft on time. Since the critical design review in July 2013, Boeing has made progress developing the aircraft’s systems, including the extensive electrical and fueling systems that will allow the KC-46 to perform its primary mission. Boeing has also developed and integrated the software needed to support KC-46 operations. Boeing, however, experienced three major development challenges that ultimately contributed to a 9-month delay to the low-rate initial production decision. The following is a summary of these development challenges and steps Boeing has taken to address them. Wiring design issues: Wiring on the first development aircraft was nearly complete in the spring of 2014 when Boeing discovered wire separation issues caused by an incorrect wiring design. Boeing officials told us that a subsequent wiring audit found thousands of wire segments that needed to be changed. Boeing officials estimate that these changes affected about 45 percent of the 1,700 wire bundles on the aircraft.
Boeing suspended wiring installation on the remaining three development aircraft for several months while it worked through the wiring issues on the first development aircraft. The required wiring rework led to a delay in the first flight of the first development aircraft and to manufacturing delays on the other development aircraft. Although Boeing has largely addressed this issue, it continues to execute some wiring rework across each of the development aircraft. Aerial refueling system redesign: Boeing identified several aerial refueling parts that needed to be redesigned. For example, according to program officials, the single-point refueling manifold, which is a mechanism that distributes and regulates the flow of fuel to various components, contained a coupler that was not built to withstand the pressure experienced during refueling operations. Following failures during testing, Boeing redesigned the manifold and began using a coupler manufactured by a different supplier. According to officials, Boeing also determined that the process used to manufacture the fuel system’s fuel tube welds did not meet requirements. Boeing has since changed the weld process and implemented an x-ray inspection process for these parts. The process changes caused a delay to the first flight of the second development aircraft, which is being used for aerial refueling testing. Fuel contamination: A mislabeled fuel substitute used for ground testing in July 2015 led to the contamination of the fuel system on the second development aircraft and resulted in another delay to its first flight. According to Boeing, a distributor provided a product improperly labeled as a fluid approved for use as a fuel substitute. The material provided by the distributor and used during a ground test was an industrial cleaner and is highly damaging to aluminum. The incident became apparent when seals in the fuel system began to leak about 30 days after the substance was introduced. By then, the aircraft’s centerline drogue system, fuel manifold, and piping were corroded. Due to the extent of the corrosion, Boeing had to take parts from the third development aircraft to repair the second development aircraft’s damaged fuel system. Since that time, Boeing has had difficulty obtaining replacement parts for the third development aircraft from a new supplier and had to delay some testing on that aircraft. Boeing considers the contamination of the fuel system a one-time event and no longer uses the supplier responsible for mislabeling the packaging of the fuel substitute. As a result of the development problems, Boeing has used all of its schedule reserve and had to revise its testing schedule. To preserve the August 2017 operational aircraft delivery date, Boeing and the Air Force also had to revise the aircraft delivery schedule. Originally, Boeing contracted to complete developmental flight testing and deliver the four development aircraft between April and May 2016. Boeing planned to conduct operational testing starting in April 2016 and to complete that testing in October 2016. Boeing also planned to bring the four development aircraft to operational configuration and deliver those aircraft, along with 14 additional production aircraft, to the Air Force over 14 months, prior to August 2017. The current schedule is much more compressed because Boeing has not completed the developmental flight test program. As of January 2016, Boeing has two development aircraft flying developmental flight tests.
The other two aircraft are expected to be ready for developmental flight testing in early March and April 2016, respectively. Boeing now plans to deliver four production aircraft to the Air Force to begin operational testing in May 2017, a year later than originally contracted. It plans to bring two development aircraft to operational configuration and deliver those aircraft, along with 16 additional production aircraft (2 more than it originally planned to deliver in this timeframe), to the Air Force over the 6 months leading up to August 2017. Operational testing will be completed about 2 months after the aircraft are delivered. While this risks late discoveries of aircraft deficiencies, Boeing must correct them at its own cost. Figure 2 illustrates the original and current schedules for test and delivery. In anticipation of the Air Force exercising its options for production lots 1 and 2, Boeing recently began building low-rate initial production aircraft using its own resources. Boeing also plans to enhance its production capabilities by opening a second finishing center to militarize 767-2C aircraft and bring them to a KC-46 configuration. Assuming the Air Force exercises its options for production lots 1 and 2, program officials stated that Boeing may need an additional 4 months beyond August 2017 to deliver all 18 aircraft due to challenges it faces with its developmental test program. Boeing is about 2 years into its developmental test program to determine if the aircraft works as intended. The developmental test program contains about 700 ground and flight test activities to be completed over a 38-month period. At the end of January 2016, Boeing had completed 114 of the test activities. This included a demonstration of the aircraft’s aerial refueling capability with an F-16 aircraft using the boom, which is needed to support the low-rate initial production decision. Boeing faces two primary challenges to completing its planned developmental test program. These challenges and actions Boeing is taking to address them are described below. Complete optimistic test program: Since Boeing lost time addressing problems while developing the aircraft, it must now attain a high degree of test efficiency to adhere to the new schedule. Test efficiency refers to Boeing’s ability to complete scheduled test activities on time. DOD developmental test officials believe that the schedule for completing the remaining test activities is risky because Boeing has not completed test activities at the rate it planned and upcoming tests will be more complex. In January 2016, for example, Boeing completed 7 of 55—13 percent—of the test activities that had been scheduled for that month because aircraft were in maintenance longer than expected, there were delays in completing earlier ground testing, and Boeing may have overestimated how much it could complete with two aircraft. Overall, as shown in figure 3, Boeing had planned to complete 29 percent of its total test activities through January 2016, but has completed 16 percent. Boeing may also face difficulties achieving the test efficiency it needs to complete the remaining 84 percent of the test activities. For example, Boeing may have overestimated how many flight hours it can complete over the next several months because the last two development aircraft will begin flight testing later than expected due to production delays. 
Further, upcoming test activities will focus to a large extent on demonstrating KC-46 aerial refueling capabilities, which test officials consider to be more complex than the testing already completed. Finally, the company must still complete tests that were not performed in earlier months, which had not been factored into the latest test plans provided for our review. To mitigate these risks, Boeing test officials told us that they are working to improve test efficiency. For example, testers are continually reviewing test plans to identify areas to reduce duplication or eliminate unproductive activities. Obtain FAA approval of key components: FAA and program officials report that while most of the KC-46 components have been deemed ready for certification by the FAA, two key aerial refueling systems have not. In order to obtain airworthiness certification from the FAA, the KC-46 and its components must be designed, built, and then tested through the FAA’s regulatory process. The supplier for the centerline drogue system and wing aerial refueling pods, however, built the systems without following FAA processes. Consequently, the supplier was told by the FAA in late 2014 that the FAA would need to inspect the individual parts to ensure design conformance. During this process, the supplier discovered a design flaw with the aerial refueling pods, which caused further delays. Originally, Boeing estimated that these components would be ready for the FAA to certify by February 2014, and it now projects that they will be ready by July 2017, over 3 years later. To help mitigate schedule risk, Boeing obtained FAA approval in January 2016 to begin testing the KC-46 developmental aircraft without the two aerial refueling components being fully qualified. This will allow the program to proceed with most of the KC-46 certification testing. Once the remaining components have completed qualification testing, Boeing will need to conduct some additional testing to reach full airworthiness certification for the aircraft. The Air Force would then be able to conduct its review to determine that the aircraft and all its systems meet contract requirements and conform to the final design. However, because of these and earlier development delays, Boeing will not be able to complete development activities until June 2018, 5 months later than required. The Air Force and Boeing have agreed in principle to contract changes that reflect the delay. In exchange for extending the development aircraft delivery schedule, Boeing will provide, among other things, 4 production aircraft for operational testing and additional test infrastructure at Boeing Field to support a receiver aircraft needed for system specification verification and aerial refueling certification testing. After Boeing completes the developmental flight test program, the Air Force will begin 5½ months of operational testing to determine if the KC-46 aircraft performs effectively and suitably in its operating environment. Boeing has solved many of its early manufacturing problems and has taken steps to mitigate potential schedule risks. However, the company has a challenging road ahead in testing and delivering aircraft in a compressed amount of time, including possibly producing two more operational aircraft than it originally planned.
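The test-efficiency figures cited above reduce to simple ratios. The short sketch below recomputes them using only numbers from this report (the roughly 700 planned test activities, the 55 activities scheduled and 7 completed in January 2016, and the 114 activities completed overall against a 29 percent cumulative plan); it is an illustration of the arithmetic, not Boeing's or DOD's tracking tool.

```python
# Recompute the developmental test progress figures cited in the report.
TOTAL_ACTIVITIES = 700           # approximate total test activities

january_scheduled, january_done = 55, 7
cumulative_planned_pct = 29      # percent planned through January 2016
cumulative_done = 114            # activities completed through January 2016

print(f"January completion rate: {january_done / january_scheduled:.0%}")
print(f"Cumulative planned: {cumulative_planned_pct}% "
      f"vs. actual: {cumulative_done / TOTAL_ACTIVITIES:.0%}")
# January completion rate: 13%
# Cumulative planned: 29% vs. actual: 16%
```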
If the Air Force exercises its options for production lots 1 and 2, any future delays may affect Boeing’s ability to deliver all 18 operational aircraft by August 2017, but that risk is being measured in months rather than years. We are not making any recommendations in this report. We provided a draft of this report to the KC-46 program office for review and comment. The program office provided technical comments, which we incorporated into this report as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; and the Director of the Office of Management and Budget. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I describes the seven technical performance measures: the maximum weight of the aircraft without usable fuel; the gallons of fuel per hour used by the aircraft during a mission; the percentage of time the aircraft performed at least one assigned mission; the percentage of time mechanical problems were fixed within 12 hours (after 50,000 fleet hours); the percentage of breaks per sortie (after 50,000 fleet hours); the probability of completing the aerial refueling mission and landing safely; and the probability an aircraft will be ready for operational use when required. In addition to the contact named above, Cheryl Andrew, Assistant Director; Andrea Bivens; Kurt Gurka; Stephanie Gustafson; Kristine Hassinger; Katheryn Hubbell; Roxanna Sun; and Nate Vaught made key contributions to this report.
Aerial refueling—when aircraft refuel while airborne—allows the U.S. military to fly farther, stay airborne longer, and transport more weapons, equipment, and supplies. The Air Force initiated the KC-46 program to replace its aging KC-135 aerial refueling fleet. Boeing was awarded a fixed price incentive contract with a ceiling price of $4.9 billion to develop the first four aircraft, which will be used for testing. Boeing is contractually required to deliver the four development aircraft between April and May 2016. Boeing is also required to deliver a total of 18 aircraft by August 2017, which could include some of the development aircraft if they are brought to operational configuration. The program plans to eventually field 179 aircraft in total. The National Defense Authorization Act for Fiscal Year 2012 included a provision for GAO to review the KC-46 program annually through 2017. This report addresses progress made in 2015 toward (1) meeting cost and performance goals and (2) delivering the aircraft on schedule. GAO analyzed key cost, schedule, development, test, and manufacturing documents and discussed results with officials from the KC-46 program office, other defense offices, the FAA, and Boeing, the prime contractor. KC-46 tanker aircraft acquisition cost estimates have decreased for a third consecutive year and the prime contractor, Boeing, is expected to achieve all the performance goals, such as those for air refueling and airlift capability. The total acquisition cost estimate has decreased from $51.7 billion in February 2011 to $48.2 billion in December 2015, about 7 percent, due primarily to stable requirements that led to fewer than expected engineering changes. [Table omitted: comparison of initial and current KC-46 cost and quantity estimates; source: GAO presentation of Air Force data, GAO-16-346.] The fixed price development contract also protects the government from paying for any development costs above the contract ceiling price. Regarding the schedule, the program office delayed the low-rate initial production decision 9 months because Boeing had problems developing the first four aircraft. Boeing has largely addressed the problems, but proposed a new schedule to reflect the delays. Boeing still plans to deliver 18 operational aircraft to the Air Force by August 2017—assuming the Air Force approves production. Operational testing will be completed later, in October 2017. While aircraft deficiencies could be discovered late, the plan presents little cost risk to the government because Boeing must correct deficiencies using its own resources. Boeing plans to deliver the aircraft over 6 months, instead of 14. [Figure omitted: original and current test and delivery schedules. The original schedule included delivery of four development aircraft in operational configuration by May 2016; under the current schedule, the last of these aircraft will be delivered by November 2017.] Boeing has a challenging road ahead to complete testing and deliver aircraft. Test officials believe Boeing's test schedule is optimistic and it may not have all aircraft available when needed to complete planned testing. Boeing also has not yet had several key aerial refueling parts qualified by the Federal Aviation Administration (FAA) and cannot get final FAA certification of KC-46 aircraft until this occurs. Program officials estimate there are 4 months of schedule risk to delivering 18 aircraft by August 2017 due to testing and parts qualification issues. Boeing is working on ways to mitigate the schedule risks. GAO is not making recommendations at this time.
DOD's technical comments on a draft are incorporated as appropriate in the final report.
Title IV-E of the Social Security Act provides for states to obtain federal reimbursement for the costs of their Foster Care programs. While states may provide foster care services to a range of children outlined by state laws and regulations, they may only claim Title IV-E Foster Care funds for children meeting eligibility criteria outlined in the Social Security Act (see table 1). Title IV-E authorizes states to receive federal reimbursement for “maintenance payments” to support expenses for a foster care child, such as food, clothing, shelter, and school supplies. The federal government matches the amounts states pay for maintenance costs at the Medicaid rate. The Medicaid rate varies by state and by year and, for fiscal year 2010, ranged from 50 to 83 percent. In addition to maintenance costs, Title IV-E authorizes states to receive reimbursement for other costs incurred to manage the program. Those other costs, and the allowable reimbursement rates, fall under the following three main categories: Child placement services and other administrative activities (administrative costs), which generally cover expenses states incur in identifying eligible children, referring them to services, and planning for permanent placement. These can also include administrative costs used to serve foster care “candidate” children, who are at-risk for foster care but still reside in the home. These costs are matched at 50 percent. State and local training costs (training costs), which are matched at 75 percent. Statewide Automated Child Welfare Information System (SACWIS) development, installation, and operation costs (SACWIS costs). SACWIS helps states manage their child welfare cases and report related information to the federal government. These SACWIS costs are matched at 50 percent. Since 2002, HHS has also approved states to receive federal reimbursement for demonstration project costs involving the waiver of certain provisions of Title IV-E. The waivers grant states flexibility in the use of Title IV-E foster care funds for “demonstration projects” of alternative services that promote safety, permanency, and well-being for children in the foster care system, so long as the projects do not cost the federal government more than the states would have received under Title IV-E. As of June 2010, nine states have active Title IV-E waiver agreements. Data from HHS show that the average number of children served by Title IV-E Foster Care funds has declined, from over 197,000 in fiscal year 2008 to 181,000 in fiscal year 2010. HHS and child welfare experts have cited a number of reasons for this decline. For example, they noted that a child is required to qualify for the Aid to Families with Dependent Children program (a means-tested program based on a federally defined poverty line) as it was in effect on July 16, 1996, in order to be eligible for Title IV-E. Because income limits for the program have remained static while inflation has raised nominal incomes for some families, fewer children are eligible. For example, to be considered below the federal poverty line, a family composed of 4 persons, including 2 children, had to have an annual income below $15,911 in 1996, as compared to $22,113 in 2010. However, the $15,911 threshold continues to be used each year to determine eligibility for the Title IV-E Foster Care program.
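To make the category-specific match rates described above concrete, the following is a minimal sketch of how a state's claims would translate into federal reimbursement. The 65 percent Medicaid rate and the claim amounts are hypothetical illustration values (actual Medicaid rates ranged from 50 to 83 percent in fiscal year 2010); only the match rates for administrative, training, and SACWIS costs come from the report.

```python
# Sketch of Title IV-E federal reimbursement by cost category.
# The Medicaid rate and the state claim amounts are hypothetical.
MEDICAID_RATE = 0.65  # hypothetical; actual FY2010 rates: 50-83 percent

MATCH_RATES = {
    "maintenance": MEDICAID_RATE,  # matched at the state's Medicaid rate
    "administrative": 0.50,
    "training": 0.75,
    "sacwis": 0.50,
}

state_claims = {  # hypothetical state expenditures, in dollars
    "maintenance": 10_000_000,
    "administrative": 12_000_000,
    "training": 1_000_000,
    "sacwis": 2_000_000,
}

federal_share = {cat: amt * MATCH_RATES[cat] for cat, amt in state_claims.items()}
for cat, amt in federal_share.items():
    print(f"{cat:>14}: ${amt:,.0f}")
print(f"{'total':>14}: ${sum(federal_share.values()):,.0f}")
```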
In addition, states sometimes use other federal programs for children who could otherwise have been claimed under Title IV-E, because the other programs, such as Title XX’s Social Services Block Grants, Medicaid, and Temporary Assistance to Needy Families, provide federal reimbursement for a broader range of services. Of the $4.5 billion in total Title IV-E Foster Care funds paid to states in fiscal year 2010, ACF reported that maintenance costs made up 34 percent of the total while administrative costs accounted for the largest share of the costs at 44 percent. Figure 1 shows federal outlays, as reported by HHS, by type of expenditure for fiscal year 2010. Title IV-E expenditures by type and by state for fiscal year 2010 are presented in appendix II. ACF is responsible for the administration and oversight of Title IV-E funding to states. ACF staff are located in ACF’s headquarters (Central Office) and its 10 regional offices. Collectively, these ACF offices oversee states’ financial internal control processes for the Title IV-E program and monitor their performance and compliance with federal child welfare laws. One key oversight activity related to state Foster Care programs is ACF’s Title IV-E eligibility reviews, as required under the Social Security Act and HHS regulations. ACF has conducted these reviews since 2000. They are intended to help evaluate whether state claims for federal reimbursement for Foster Care maintenance costs are valid and accurate. Title IV-E eligibility reviews are to be conducted by teams composed of both federal and state staff, and are to include (1) desk reviews to ensure that the correct amount of maintenance costs was claimed on behalf of foster care children during the review period and (2) site visits to states to ensure that maintenance costs were claimed only for children who were eligible for the Title IV-E program. As required by the Social Security Act and HHS regulations, there are two stages of Title IV-E eligibility reviews, a primary and secondary review. During a primary review, HHS regulations specify that the review team is to examine a sample of 80 cases per state, selected from the Adoption and Foster Care Analysis and Reporting System (AFCARS). Each case represents a child for whom a Title IV-E Foster Care maintenance payment was made. If a primary review finds fewer than 5 cases with errors in either the amounts paid on behalf of a child or in a child’s eligibility for Title IV-E funds (5 percent of the cases reviewed or fewer), ACF determines that the state is in substantial compliance with the regulations. At that point, the state is scheduled to have another primary review in 3 years. On the other hand, if a primary review finds 5 or more cases in error (exceeding 5 percent of the number of cases reviewed), ACF determines that the state is not in substantial compliance with the regulations. In those instances, ACF requires states to develop a Program Improvement Plan (PIP) designed to correct the areas of noncompliance identified, such as payments to unlicensed providers or incomplete criminal record checks. Any improper payments the review teams identify during these reviews are classified as disallowed costs that, in general, are to be returned to ACF or withheld from future reimbursement claims. States required to develop a PIP generally have 1 year to implement the corrective actions specified in the PIP, after which a secondary review is to be conducted.
During the secondary review, the review team is to examine a sample of 150 cases, as outlined in HHS regulations. If 10 percent of cases or fewer are found to be in error and if the total dollar amount found to be in error is less than 10 percent of the total dollar amount reviewed, then ACF determines that the state is in substantial compliance. Further, if the state exceeded only one of these secondary review error thresholds, then ACF would also determine that the state is in substantial compliance. Only in instances where the state exceeds both the case percentage and dollar percentage error thresholds of 10 percent would ACF determine that the state is not in substantial compliance. HHS regulations require such a state to repay a disallowance percentage applied to its Title IV-E claims during the review period. After conducting a secondary review, ACF would then schedule another primary review in 3 years (see figure 2). Another key ACF oversight activity related to state Foster Care programs is the monitoring of findings from state-level audits conducted under the Single Audit Act and OMB Circular No. A-133, known as Single Audits. The Single Audit Act requires an annual audit of states, local governments, and non-profit organizations that expend $500,000 or more of federal funds in a given year. ACF regional offices are to work with states to resolve Single Audit findings related to the Foster Care program to help ensure that states are using funds in accordance with program requirements and addressing financial management weaknesses. ACF started using the Audit Resolution Tracking and Monitoring System (ARTMS) in 2010 to provide online processing and real-time tracking of ACF’s audit follow-up process. The National External Audit Review Center is a specialized function of the HHS OIG that serves as a clearinghouse to determine which state Single Audit report findings HHS is responsible for resolving. When ACF receives Single Audit finding data from the OIG’s National External Audit Review Center, HHS headquarters staff upload the data into ARTMS and assign the audit finding data to the appropriate ACF regional office staff for resolution. Consistent with OMB Circular No. A-50, ACF considers an audit finding resolved when the auditor and the state agree on action to be taken. ARTMS provides users with notifications of tasks to be performed, such as when an audit is assigned to a financial management specialist for follow-up, and allows them to submit and view all audit resolution information online. The Improper Payments Information Act (IPIA) was enacted in November 2002 to enhance the accuracy and integrity of federal payments. IPIA requires agencies to: review all programs and activities and identify those that are susceptible to significant improper payments; obtain a statistically valid estimate of the annual amount of improper payments, including the gross total of over- and underpayments, in those susceptible programs and activities; report to the Congress estimates of the annual amount of improper payments in their susceptible programs and activities; and, for estimates exceeding $10 million, implement a plan to reduce improper payments.
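Stepping back to the Title IV-E eligibility reviews described earlier, the primary and secondary compliance determinations amount to simple threshold rules. The sketch below encodes them exactly as described: fewer than 5 error cases in the 80-case primary sample means substantial compliance, and a secondary review triggers a disallowance only when both the case error rate and the dollar error rate exceed 10 percent.

```python
# Sketch of the two-stage Title IV-E eligibility review decision rules.
def primary_review_compliant(error_cases: int, sample_size: int = 80) -> bool:
    """Substantial compliance if errors are 5 percent of cases or fewer
    (i.e., fewer than 5 error cases in the standard 80-case sample)."""
    return error_cases / sample_size <= 0.05

def secondary_review_compliant(case_error_rate: float,
                               dollar_error_rate: float) -> bool:
    """Noncompliant only if BOTH error rates exceed 10 percent."""
    return not (case_error_rate > 0.10 and dollar_error_rate > 0.10)

print(primary_review_compliant(4))             # True  (4 of 80 in error)
print(primary_review_compliant(5))             # False (triggers a PIP)
print(secondary_review_compliant(0.12, 0.08))  # True  (only one threshold exceeded)
print(secondary_review_compliant(0.12, 0.11))  # False (disallowance applies)
```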
OMB’s implementing guidance for IPIA, in effect for fiscal year 2010 reporting, required that for any programs and activities identified as susceptible to significant improper payments, agencies must develop a statistically valid methodology, or other methodology approved in advance by OMB, to estimate the annual amount of improper payments, including a gross total of both underpayments and overpayments. The Foster Care program was deemed a risk-susceptible program and therefore required to address the IPIA reporting requirements. OMB’s guidance also requires that, as part of their plan to reduce improper payments for all programs and activities with improper payments exceeding $10 million, agencies identify the reasons their programs and activities are at risk of improper payments (also known as root causes), set reduction targets for future improper payment levels and a timeline within which the targets will be reached, and ensure that agency managers and accountable officers are held accountable for reducing improper payments. ACF annually reports to HHS—for inclusion in its agency financial report used to report to the Congress—an improper payment estimate for Foster Care program maintenance payments based on results of Title IV-E eligibility reviews required under the Social Security Act. For programs administered at the state level such as Foster Care, OMB guidance provides that statistically valid estimates of improper payments may be provided at the state level either for all states or for all sampled states annually. These state-level improper payment estimates should then be used to generate a national dollar estimate and improper payment rate. With prior OMB approval, ACF has taken its existing Title IV-E eligibility review process, already in place under the Social Security Act, and leveraged it for IPIA estimation. OMB granted this approval in December 2004 with the expectation that continuing efforts would be taken to improve the accuracy of ACF’s estimates of improper payments in the ensuing years. ACF provides a national estimated error rate based on a rolling average of error rates identified in states examined on a 3-year cycle. As a result, ACF’s IPIA reporting for each year is based on new data for about one-third of the states and previous years’ data for the remaining two-thirds of states. While each state sample represents a distinct 6-month period under review, the national “composite sample” reflects a composite period under review that encompasses a 3-year period (the Title IV-E eligibility review 3-year improper payment cyclical process). To calculate a national estimate of improper payments, ACF uses error rates that span a 3-year period of Title IV-E eligibility reviews in the 50 states, the District of Columbia, and Puerto Rico. ACF applies the percentage dollar error rate from the sample to the total payments for the period under review for each state. Improper payment error rates by state for fiscal year 2010, as calculated by ACF, can be found in appendix III. On July 22, 2010, the Improper Payments Elimination and Recovery Act of 2010 (IPERA) was enacted. IPERA amended IPIA, and established additional requirements related to federal agency management accountability, compliance and noncompliance determinations based on an Inspector General’s assessment of an agency’s adherence to IPERA requirements and reporting that determination, and an opinion on internal controls over improper payments.
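Before turning to IPERA's requirements in detail, the aggregation step ACF uses, as described above, can be sketched in a few lines: each state's dollar error rate from its most recent eligibility review is applied to that state's total payments under review, and the results are summed into a national estimate and rate. The three state entries below are made-up illustration values; the actual calculation spans all 50 states, the District of Columbia, and Puerto Rico over the 3-year review cycle.

```python
# Sketch of ACF's national estimate: apply each state's dollar error
# rate to its payments under review, then aggregate. Figures are made up.
states = {
    # state: (dollar_error_rate_from_review, total_payments_under_review)
    "State A": (0.03, 200_000_000),
    "State B": (0.07, 50_000_000),
    "State C": (0.02, 120_000_000),
}

est_improper = sum(rate * pay for rate, pay in states.values())
total_pay = sum(pay for _, pay in states.values())
print(f"Estimated improper payments: ${est_improper / 1e6:.1f} million")
print(f"National error rate: {est_improper / total_pay:.1%}")
```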
Specifically, one new IPERA provision calls for federal agencies’ Inspectors General to annually determine whether their respective agencies are in compliance with key IPERA requirements such as meeting annual reduction targets for each program assessed to be at risk of and measured for improper payments, and to report on their determinations to the agency head, the Congress, and the Comptroller General. ACF’s methodology, which resulted in a reported $73 million (or 4.9 percent) estimate of improper payments in the Foster Care program for fiscal year 2010, had deficiencies in all three phases of its estimation methodology—planning, selection, and evaluation—when compared to OMB’s statistical guidance, GAO guidance, and federal internal control standards, as summarized in table 2. Specifically, ACF’s estimation methodology (1) did not consider nearly two-thirds of reported federal Foster Care program payments for fiscal year 2010, (2) was not based on a probability sample of payments, (3) lacked specific procedures for identifying underpayments and duplicate payment errors, and (4) used a flawed process for aggregating state-level data into an overall national error rate. As a result, ACF’s methodology is not statistically valid or complete, and these deficiencies impair the accuracy and completeness of its reported Foster Care program improper payment estimate. ACF’s annual IPIA reporting for the Foster Care program is incomplete, as it is limited to identifying improper payments for only one type of program payment activity—maintenance payments. For fiscal year 2010, as shown in figure 1 of this report, maintenance payments represented 34 percent of the total federal share of expenditures for the Foster Care program. Administrative and other payments were not considered in ACF’s IPIA estimation process and thus, not included in the Foster Care program improper payment estimate of about $73 million for fiscal year 2010. Administrative costs accounted for 44 percent of the total federal share of expenditures for the Foster Care program, while other costs accounted for the remaining 22 percent. These other costs include operational and development costs associated with SACWIS; training costs; and state demonstration projects to provide alternative services and support for children in the Foster Care system. Figure 3 shows the portion of Foster Care program outlays considered in ACF’s methodology for estimating associated improper payments for IPIA reporting. Because ACF’s methodology does not include an estimate for improper payments related to its administrative payment activity, the related payment errors that meet the definition of improper payments were not accounted for or included in the reported estimate for the Foster Care program. OMB’s December 2004 approval of ACF’s proposed methodology included an expectation that ACF would develop a plan and timetable to test administrative expenses by April 2005. Consistent with this expectation, in order to begin exploring the issues of accounting for and including administrative costs, ACF established a working group in 2006. Then, in 2007, ACF initiated an Administrative Cost Review (ACR) pilot to examine how certain state agencies accumulate costs that are included in their expenditure claims for federal financial participation and to identify improper administrative payments within those pilot states. Seven states volunteered for pilots held from fiscal years 2007 through 2011, and two more states are scheduled for fiscal year 2012.
Pilot reports for two states have provided estimates for a gross improper payment total of $11.3 million for the period October 1, 2008, through March 31, 2009. These amounts were not included in ACF’s estimated amounts for improper payments. According to ACF, it will use the results of the ACR pilots to determine the feasibility of developing a methodology to estimate an administrative error rate as part of the calculation of the national Foster Care improper payment error rate. However, as of December 15, 2011, ACF had not yet made a decision with respect to when these reviews would be implemented and, ultimately, whether to establish a methodology to estimate improper payments related to administrative costs. Although ACF did not consider Foster Care administrative expenditures in its fiscal year 2010 IPIA estimation process, its Title IV-E eligibility reviews identified disallowed administrative costs (or improper payment amounts), which were added to the amount of any claims disallowances. For fiscal year 2010, disallowed administrative costs that ACF documented from the Title IV-E eligibility reviews totaled $2.4 million; however, this amount was not included in its Foster Care improper payment estimate. According to ACF, administrative payments are not currently included as part of the reported improper payment estimate because this disallowed amount is based on a calculation and not directly determined from a case file review. ACF calculates the administrative cost disallowance by allocating an average administrative cost for any ineligible time period identified during a Title IV-E eligibility review. The methodology ACF used to estimate improper maintenance payments was not based on a probability sample of payments, which is needed for a direct estimate of the payment error rate and total amount of dollars that were improperly paid. In 2004, ACF proposed three options to OMB as approaches for estimating improper payments in the Foster Care program. OMB approved ACF’s plan to derive the estimate using error dollars per case from state review samples for its base error rate calculation, with the expectation that continuing attention to the statistical processes used would be needed to obtain the best estimate of erroneous payment rates. OMB’s approval reflected the idea that the methods initially used would incorporate annual improvements to the accuracy of improper payment estimates. However, other than a change in 2008 to derive the estimate using sample state dollar error rates, ACF generally continues to use the same methodology outlined in 2004. We found that ACF selected a sample from a universe of all cases receiving Title IV-E Foster Care payments during the period under review. This population of Foster Care cases is drawn from the Adoption and Foster Care Analysis and Reporting System (AFCARS). However, AFCARS does not contain any Title IV-E financial data that links a Title IV-E payment amount to each case file. Lacking such payment data, ACF relies on states to provide payment histories for all cases selected for review. According to ACF, it was already using AFCARS to select samples for the Title IV-E eligibility reviews prior to IPIA implementation. Consistent with OMB’s approval of its methodology, ACF opted to use AFCARS data instead of creating or identifying a new data source for meeting IPIA requirements. ACF officials stated that utilizing this existing source of data reduced the burden on states by not requiring them to draw their own samples and employed the AFCARS database in a practical manner.
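To illustrate the kind of direct estimate referred to above—a payment error rate with a calculable sampling error, derived from a probability sample of payments—the following is a minimal sketch using simulated data. The population size, sample size, payment amounts, and error rate below are invented and do not reflect actual ACF figures or systems.

```python
# Minimal sketch: a direct estimate of a dollar error rate from a
# simple random sample of payments. All data here are simulated;
# nothing below reflects actual ACF figures or systems.
import random
import statistics

random.seed(1)

# Hypothetical payment population: 50,000 payments, roughly 5% improper.
population = [
    {"amount": random.uniform(200.0, 900.0),
     "improper": random.random() < 0.05}
    for _ in range(50_000)
]

# Simple random sample: every payment has the same known, nonzero
# probability of selection (n / N), the defining property of a
# probability sample.
n = 500
sample = random.sample(population, n)

# Dollar error rate: improper dollars as a share of sampled dollars.
error_dollars = sum(p["amount"] for p in sample if p["improper"])
total_dollars = sum(p["amount"] for p in sample)
rate = error_dollars / total_dollars

# Because the sample is probabilistic, the sampling error of this
# ratio estimate can be approximated (Taylor linearization of the
# ratio estimator, ignoring the finite population correction).
mean_amount = total_dollars / n
residuals = [(p["amount"] if p["improper"] else 0.0) - rate * p["amount"]
             for p in sample]
se = statistics.stdev(residuals) / (mean_amount * n ** 0.5)

print(f"Estimated dollar error rate: {rate:.2%} (standard error {se:.2%})")
```

A probability sample of this kind supports an inference about the full payment population, with a confidence interval derived from the standard error; a sample of cases drawn from a file that lacks payment data does not, by itself, offer that property.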
Notwithstanding ACF’s objective of leveraging existing data to estimate foster care improper payments for addressing IPIA requirements, a more direct statistical method—selecting a probability sample of Title IV-E payments—would be needed if ACF is to make an estimate that accurately represents the target population of Title IV-E payments. OMB’s Standards and Guidelines for Statistical Surveys documents the professional principles and practices that federal agencies are required to adhere to and the level of quality and effort expected in all statistical activities. According to these standards and guidelines, probabilistic methods for survey sampling are one of a variety of methods for sampling that give a known, nonzero probability of selection to each member of the target population. The advantage of probabilistic sampling methods is that sampling error can be calculated. For the purpose of making a valid estimate with a measurable sampling error that represents a population, the sample must be selected using probabilistic methods. The sample results can then be used to make an inference about the target population, in this instance, foster care cases that received a maintenance payment. While it is possible for ACF to estimate a payment error rate and the total amount of dollars improperly paid for maintenance payments using a combination of AFCARS and supplemental payment data from the states, this would require a more complex estimation methodology than ACF currently uses. Based on our review of the sampling documentation provided, ACF did not consider key factors such as variation in the volume of payments and dollars of payments across cases and states. In addition, the population from which ACF selected its sample is not reliable because ACF’s sampling methodology did not provide for up-front data quality control procedures to (1) ensure that the population of cases was complete prior to its sample selection and (2) identify inaccuracies in the data field used for sample selection. During our review, we found that the population of Foster Care cases from AFCARS contained inaccurate information on whether a case had actually received a Title IV-E Foster Care maintenance payment during the period under review, reflecting continuing concerns regarding the accuracy and completeness of AFCARS data. Specifically, ACF had to replace a high percentage of cases sampled from the database of Foster Care cases for the fiscal year 2010 reporting period due to inaccurate information in AFCARS. To ensure that a sufficient number of relevant sample items are available for review, ACF routinely selects an “over-sample” of cases—cases selected in addition to the required 80 or 150 cases initially selected for the primary and secondary reviews. Of the original 4,570 sample cases ACF selected for testing in its primary and secondary reviews for fiscal year 2010, 298 cases (almost 7 percent) had to be replaced with substitutes taken from the “over-sampled” cases because the selected cases had not received Title IV-E Foster Care maintenance payments during the period under review. Of the 298 over-sampled cases used to replace the cases initially selected, 63 cases (more than 21 percent) then had to be replaced again because those cases had also not received Title IV-E Foster Care maintenance payments during the period under review.
Although we were able to determine how many sampled (or over-sampled) cases had to be replaced because available records showed no Title IV-E payment was received during the reporting period, neither we nor ACF were able to determine the extent to which the opposite occurred—the extent to which cases that had received a payment (and therefore should have been included in the sample population) had not been coded as receiving Title IV-E payments. As part of its sampling methodology, ACF has not established procedures to identify any such occurrences. Therefore, ACF could not determine whether its sampling universe was complete, i.e., whether all of the cases receiving a Foster Care payment were included in the universe of cases from which it selected sample cases for review. According to GAO’s Assessing the Reliability of Computer-Processed Data, reliable data are defined as data that are reasonably complete and accurate, meet intended purposes, and are not subject to inappropriate alteration. “Completeness” refers to the extent to which relevant records are present and the fields in each record are populated appropriately. “Accuracy” refers to the extent to which recorded data reflect the actual underlying information. GAO’s Internal Control Management and Evaluation Tool provides that reconciliations should be performed to verify data completeness. Also, data validation and editing should be performed to identify erroneous data. Erroneous data should be captured, reported, investigated, and promptly corrected. ACF officials told us they are aware that AFCARS does not contain Title IV-E payment data and acknowledged that they do not perform procedures to identify incorrect or missing information in the population prior to sample selection. However, ACF officials said they continue to use the data to meet IPIA reporting requirements because AFCARS is the only database that contains case-level information on all children in foster care for whom the state child welfare agencies have responsibility for placement, care, or supervision. Nevertheless, without developing a statistically valid sampling methodology that incorporates up-front data quality controls to ensure complete and accurate information on the population, including payment data, ACF cannot provide assurance that its reported improper payment estimate accurately and completely represents the extent of improper maintenance payments in the Foster Care program. In its fiscal year 2010 agency financial report, ACF reported that underpayments and duplicate or excessive payments represented 25 percent of the errors that caused improper payments. While ACF’s methodology for performing Title IV-E eligibility reviews included written guidance and a data collection instrument that focused on eligibility errors, it did not include procedures on how to search for and identify payment errors related to underpayments and duplicate or excessive payments during case reviews. Rather, ACF’s procedures only provided that any observed underpayments and duplicate or excessive payments are to be disclosed as findings in the state’s final eligibility review report. Without detailed procedures to guide review teams in the identification of underpayments and duplicate or excessive payments, ACF’s methodology cannot effectively ensure that its review teams identify the full extent to which any such underpayments or duplicate or excessive payments exist in the Foster Care program.
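One way to make such procedures concrete is to scan each case’s payment history systematically rather than rely on incidental observation. The sketch below flags duplicate or excessive payments (more than one payment, or payment above the authorized rate, for the same case and service month) and underpayments (payments below the authorized rate). The record layout, field names, rates, and amounts are hypothetical and are not drawn from ACF’s review instrument.

```python
# Minimal sketch of systematic checks for duplicate or excessive
# payments and underpayments in a case's payment history. The record
# layout, rates, and amounts below are hypothetical.
from collections import defaultdict

authorized_monthly_rate = {"C1": 450.00, "C2": 520.00}

payment_history = [  # (case id, service month, amount paid)
    ("C1", "2010-01", 450.00),
    ("C1", "2010-02", 450.00),
    ("C1", "2010-02", 450.00),   # second payment for the same month
    ("C2", "2010-01", 470.00),   # below the authorized rate
    ("C2", "2010-02", 520.00),
]

# Group payments by case and service month.
by_case_month = defaultdict(list)
for case, month, amount in payment_history:
    by_case_month[(case, month)].append(amount)

duplicates, underpayments = [], []
for (case, month), amounts in by_case_month.items():
    paid = sum(amounts)
    rate = authorized_monthly_rate[case]
    if len(amounts) > 1 or paid > rate:
        duplicates.append((case, month, len(amounts), paid))
    elif paid < rate:
        underpayments.append((case, month, rate - paid))

print("Duplicate/excessive:", duplicates)   # [('C1', '2010-02', 2, 900.0)]
print("Underpayments:", underpayments)      # [('C2', '2010-01', 50.0)]
```

A checklist of this kind, applied uniformly, would give review teams a common definition of what to look for, rather than leaving the identification of these errors to each team’s judgment.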
As defined in IPIA, improper payments include both overpayments and underpayments, and an agency’s estimate should reflect both types of errors. IPIA also includes examples of improper payments, one of which is duplicate payments. According to GAO’s Standards for Internal Control in the Federal Government, operational information is needed to determine whether the agency is achieving its compliance requirements under various laws and regulations. Information is required on a day-to-day basis to make operating decisions, monitor performance, and allocate resources. Pertinent information should be identified, captured, and distributed in a form and time frame that permits people to perform their duties efficiently. ACF compiles the results of all state eligibility reviews to determine the most common types of payment errors. ACF officials told us that all review team members receive the same training and that the results of the state reviews are analyzed to ensure consistency and reliability. For fiscal year 2010, underpayments were the largest percentage of payment errors (19 percent of all Title IV-E maintenance payment errors identified in sampled cases). Duplicate or excessive payments comprised 6 percent of the payment errors. However, the extent of underpayments and duplicate or excessive payment errors identified varied widely by state, and in some instances such errors were not identified at all. The lack of detailed procedures for identifying any such payment errors may have contributed to this variation and to whether the teams found any such errors at all. For example, our analysis of the Title IV-E eligibility reviews that comprised the fiscal year 2010 foster care improper payment estimate identified underpayments in 21 of 51 state reviews. Of the 21 states where reviewers had identified underpayments, such payments ranged from 1.3 percent to 12.0 percent of cases reviewed. Similarly, duplicate or excessive payments were identified in only 16 of 51 states. Of the 16 states that had this type of error, these payments ranged from 1.3 percent to 5.0 percent of cases reviewed. During our site visits, ACF regional officials told us that states have differing claiming practices for certain expenses. Specifically, officials in one regional office said that if a child became eligible during a specific month, a state could have claimed through the first day of the month for that child, but chose not to, so as not to risk a potential error on a future Title IV-E eligibility review. According to these regional officials, the regional offices operate under the presumption that “if a state made the decision not to claim certain expenses, then the failure to claim is not considered an underpayment.” These types of decisions would be discussed during the reviews, but ACF guidance does not call for decisions and their rationale to be formally documented. According to ACF’s Title IV-E eligibility review guide, potential underpayments are to be identified during a review of the case record and payment history. ACF’s Title IV-E eligibility review guide provides that payment histories should be submitted, but it does not specify what criteria reviewers are to look for in order to determine instances of underpayments or duplicate or excessive payments. In August 2011, ACF issued a new attachment to the Title IV-E eligibility review guide to provide a tool for calculating and reporting underpayments identified during eligibility reviews.
This new attachment provides a template for recording underpayments for the period under review, but it does not provide guidance on how to identify underpayments. Instances of duplicate or excessive payments are to be reported on other existing attachments in the Title IV-E eligibility review guide. However, none of these attachments offer additional guidance on how to identify underpayments or duplicate or excessive payments. ACF’s fiscal year 2010 Foster Care program improper payment estimate did not appropriately aggregate state improper payment data to derive a national improper payment estimate (dollars and error rate). ACF calculated the national estimate of improper payments each year using data collected in the most recent eligibility review for each of the 50 states, the District of Columbia, and Puerto Rico. According to the information ACF presented to OMB in December 2004, ACF’s methodology would calculate the standard error of each state estimate, and of the national estimate, to examine the extent to which the precision requirements as specified in OMB’s implementing guidance for IPIA are met. However, the methodology ACF actually used to aggregate this state-level improper payment data does not take into account each state’s margin of error, which is needed to calculate an overall program improper payment estimate with the 90 percent confidence level generally required by OMB guidance. Figure 4 depicts, at a high level, ACF’s calculation to derive the national improper payment estimate for the Foster Care program. ACF has reported significantly reduced estimated improper maintenance payments, from a baseline error rate of 10.33 percent for 2004 to a 4.9 percent error rate for 2010, but the validity of ACF’s reporting of reduced improper payment error rates is questionable. Examples of corrective actions ACF has identified include reviews, the requirement for state improvement plans, on-site training and technical assistance to states, and outreach to judicial organizations to educate them as to their role in addressing Foster Care eligibility issues. However, the significant weaknesses discussed previously concerning ACF’s estimation methodology impaired the accuracy and completeness of ACF’s reported improper payment estimate for the Foster Care program. Further, we found that ACF’s ability to reliably assess the extent to which its corrective actions reduced Foster Care program improper payments was impaired by deficiencies in (1) its method for determining when states are required to implement corrective actions and (2) information technology limitations related to monitoring states’ Foster Care program-related Single Audit findings. We identified three deficiencies in ACF’s process for implementing plans to reduce improper payments. First, ACF did not use reported improper payment error rates—which are based on the dollar amount of improper payments identified in a sample of state Foster Care cases—to determine whether a state is required to implement corrective actions. Second, ACF’s measure for assessing corrective action effectiveness is performance on its secondary review of Title IV-E cases, which has a more lenient passing standard than the primary review. Third, not all types of payment errors are required to be addressed in a state’s Program Improvement Plan (PIP). OMB’s implementing guidance for IPIA requires that agencies put in place a corrective action plan to reduce improper payments.
In addition, ACF’s internal guidance requires states to implement corrective actions through a PIP if, during the Title IV-E primary eligibility review, a state is found to have 5 or more cases in error (exceeding 5 percent of the number of cases reviewed). While ACF identifies state PIPs as a corrective action strategy, it does not use the dollar-based estimated improper payments to determine when a state is required to develop a PIP. Instead, ACF uses the number of sample cases found in error to determine which states should develop a PIP. Therefore, some states with improper payment dollar error rates exceeding 5 percent were not required to implement corrective actions to reduce these rates. For fiscal year 2010 reporting, ACF used the results of 44 primary eligibility reviews and 7 secondary reviews. Of the 44 state primary reviews, 13 had dollar-based estimated improper payments greater than 5 percent; however, because ACF uses case error rates as the determining factor for states’ compliance with their primary reviews, not all of these states were required to complete a PIP. Of the 13 states, ACF determined 7 were noncompliant in their primary eligibility reviews because the case error rate exceeded ACF’s threshold of 5 percent (more than 4 of the 80 cases were found in error) and thus were required to complete a PIP. The remaining 6 states were found substantially compliant in their primary reviews because their case error rates were below the established 5 percent threshold (4 or fewer cases were found in error). The dollar-based improper payment rates for those 6 states ranged from 5.1 to 19.8 percent—based on the percentage of improper payment dollars found in the sample. Because improper payment rates are not used in applying the PIP corrective action strategy, ACF’s method cannot effectively measure states’ progress over time in reducing improper payments. It also cannot effectively help determine whether further action is needed to minimize future improper payments. This limits the extent to which states are held accountable for the reduction of improper payments in the Foster Care program. Upon a state’s implementation of its PIP, ACF conducts a secondary review to determine whether errors found during the primary review have been addressed. The secondary review is ACF’s principal tool to measure a state’s success in implementing actions to reduce Foster Care program improper payments. These reviews carry the potential financial penalty of an extrapolated disallowance of the state’s federal share of Title IV-E expenditures if the state is found to be noncompliant. However, because ACF’s error threshold for being found noncompliant with a secondary review is twice as high as that of the primary review (10 percent versus 5 percent), ACF’s ability to provide an effective incentive for states to focus continuing attention on the causes of improper payments is limited. Based on our analysis of ACF’s Title IV-E eligibility reviews, 27 states have had at least one secondary review between 2002 and 2010. Of the 27 states that received a secondary review, 26 states passed this review (meaning that the error rates were below the 10 percent threshold) and only 1 state failed. Of the 26 states that passed, 13 states (50 percent) would have failed if the primary review error threshold of 5 percent had been in effect.
Of those 13 states, we found at least 3 states that passed the secondary review with a case error rate over 10 percent because the reported improper payment dollar-based error rate was below 10 percent. The one state that failed its secondary review in 2003 received an extrapolated disallowance in accordance with HHS regulations. Since the eligibility reviews began in 2000, this is the only state found to be noncompliant with its secondary review. Although the extrapolated disallowance is a financial penalty intended to encourage states to address the causes of improper payments, this state was again found to be noncompliant in ACF’s subsequent review in 2006. As such, the state was again required to develop and implement a PIP to address the causes of errors identified in this review. After implementing this PIP, the state was subject to another secondary review and was found to be compliant, with a case error rate of 6.67 percent and a dollar error rate of 2.84 percent. However, this state would have failed if the primary review error threshold of 5 percent had been in effect. According to ACF officials, ACF established the 10 percent threshold for compliance with secondary reviews in 2000 based on states’ error rates at that time, which were between 15 percent and 17 percent (in terms of both cases and dollars). ACF officials told us the 10 percent threshold appeared to be a target that states could meet to demonstrate reductions in improper payments over time. Also, the baseline estimated improper payment error rate reported for the Foster Care program in 2004 was 10.33 percent. Since establishing the 10 percent threshold in 2000, ACF has not conducted a review to validate the continuing propriety of the performance metric. GAO’s Internal Control Management and Evaluation Tool provides that an agency should periodically review and validate the propriety and integrity of both organizational and individual performance measures and indicators. According to this tool, performance measurement factors are to be evaluated to ensure they are linked to mission, goals, and objectives, and that they are balanced and set appropriate incentives for achieving goals while complying with law, regulations, and ethical standards. For fiscal year 2010, ACF reported that error rates for most of the states (33 of 51) were less than 5 percent. In addition, ACF’s process for overseeing states’ implementation of improper payment reduction actions has other weaknesses. ACF’s guidance only requires that the PIP—required if a state has more than four cases found in error in its primary review—address areas that the eligibility review identified as needing improvement. Consequently, states’ corrective action plans may not address all types of previously identified payment errors. While nothing in the guidance prevents states from addressing other areas in the PIP, based on our review of the guidance for developing PIPs and discussions with Central Office and regional office staff, underpayments and other non-eligibility errors, as well as eligibility errors outside of the period under review, might not be addressed in the PIP if these types of errors were not a factor in a state’s compliance. Not including all types of errors in states’ corrective action plans reduces their effectiveness for addressing the causes of payment errors as required under IPIA.
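To make concrete how a case-count trigger can diverge from a dollar-based one, the following is a minimal sketch applying both rules to a hypothetical review. The review results are invented; the thresholds mirror those described above (5 percent for primary reviews, 10 percent for secondary reviews, with more than 4 of 80 cases in error constituting primary noncompliance).

```python
# Minimal sketch contrasting a case-count trigger with a dollar-based
# trigger for requiring corrective action. Review results below are
# hypothetical; thresholds follow the rules described in this report.

def requires_corrective_action(cases_reviewed, cases_in_error,
                               error_dollars, dollars_reviewed,
                               secondary=False):
    """Return (case-rule result, dollar-rule result)."""
    threshold = 0.10 if secondary else 0.05
    case_rule = cases_in_error / cases_reviewed > threshold
    dollar_rule = error_dollars / dollars_reviewed > threshold
    return case_rule, dollar_rule

# Hypothetical primary review: only 3 of 80 cases in error (passes the
# case rule), but the errors fall on high-dollar cases.
case_rule, dollar_rule = requires_corrective_action(
    cases_reviewed=80, cases_in_error=3,
    error_dollars=9_800.0, dollars_reviewed=120_000.0)

print("Required under case-count rule:", case_rule)    # False (3.75%)
print("Required under dollar-rate rule:", dollar_rule) # True (~8.2%)
```

Because the two rules answer different questions—how many cases were wrong versus how many dollars were wrong—a state can satisfy the case-count rule while its dollar error rate remains well above the program target, which is the pattern described above.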
In 2010, ACF began using a departmentwide system, the Audit Resolution Tracking and Monitoring System (ARTMS), to track and monitor the resolution of audit findings for programs it administers, including audit findings concerning Foster Care program payment errors. ACF utilizes ARTMS as its primary tool for monitoring states’ resolution of reported Single Audit findings. Single Audit findings for states’ Foster Care programs have included, among other issues, deficiencies in state oversight over subrecipients of federal funds, lack of training of state personnel on program eligibility requirements, and potential for unauthorized access to information systems to create and approve cases. Single Audit reports generally include a summary of prior audit findings describing any recurring issues and any related corrective actions undertaken by the state agency. Single Audits also generally provide information about any deficiencies in state agencies’ systems and processes that can be useful for ACF in monitoring federal expenditures and identifying and reducing improper payments in the Foster Care program. According to ACF officials, ARTMS is designed to track and monitor the resolution of Single Audit findings by audit report number, but the system does not enable users to search for specific audit findings by type of finding, grantee, state, region, or across years. As a result, regional offices could not use ARTMS to examine trends in the types of findings in their states in order to ensure that any systemic issues are addressed. Limitations with ARTMS decrease ACF’s ability to leverage existing agency data to identify recurring issues and other vulnerabilities, such as inadequate state monitoring of federal funding, that might not be identified during the 3-year eligibility review process and could lead to improper payments. This lack of information could impair ACF’s and regional offices’ ability to effectively monitor states’ efforts to reduce improper payments and the effectiveness of corrective action strategies implemented. According to GAO’s Standards for Internal Control in the Federal Government, information should be recorded and communicated to management and others within the entity who need it, and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. ACF regional office officials acknowledged limitations with ARTMS related to functionality in tracking findings. Specifically, a regional office official told us that ARTMS was not designed to generate reports of all audit findings for an entire ACF region. Lacking such capability, a user interested in aggregating the findings would have to obtain the audit findings from each state and manually combine them outside of ARTMS. This regional office used a separate internal spreadsheet to track information related to the audit findings for all states in its purview. Otherwise, staff would need to view each state’s Single Audit findings individually within ARTMS, which could be time-consuming. Another regional office we visited used a similar spreadsheet, which also included information on the time it takes to close findings. A third regional office we visited also utilized an off-line spreadsheet as a means to track the clearance process for closing audit findings.
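The cross-state, cross-year trend view that these offices build by hand in spreadsheets amounts to a simple aggregation over findings data. The sketch below groups a hypothetical list of findings by type and by type-and-year—the kind of query ACF officials said ARTMS cannot produce. The data, field names, and categories are invented for illustration and do not represent ARTMS records.

```python
# Minimal sketch of aggregating audit findings by type and year across
# states -- the trend view ARTMS reportedly cannot produce. The
# findings list and its fields are hypothetical.
from collections import Counter

findings = [
    {"state": "PA", "year": 2009, "type": "subrecipient oversight"},
    {"state": "IL", "year": 2009, "type": "eligibility training"},
    {"state": "IL", "year": 2010, "type": "subrecipient oversight"},
    {"state": "CA", "year": 2010, "type": "subrecipient oversight"},
]

# Tally findings by type across all states and years.
by_type = Counter(f["type"] for f in findings)

# Tally by (type, year) to see whether an issue is recurring over time.
by_type_year = Counter((f["type"], f["year"]) for f in findings)

print(by_type.most_common())         # recurring issues across states
print(sorted(by_type_year.items()))  # the same issues tracked by year
```

With reporting of this kind built into the system itself, program managers could identify systemic issues directly, rather than reassembling each state’s findings in off-line spreadsheets.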
A statistically valid approach for estimating improper payments would help ensure that Foster Care program improper payment estimates are reasonably accurate and complete—reflecting all types of program payments, including administrative costs; based on complete and accurate payment data; and aggregated using state-level margins of error. Developing and implementing a sound methodology is a critical program management tool for understanding and addressing financial vulnerabilities in the Foster Care program through approaches such as identifying underpayments and duplicate or excessive payment errors consistently across states. While ACF has reported an improper payment estimate and related reductions for the Foster Care program, the statistical validity of both is questionable. Further, ACF’s method for evaluating the effectiveness of states’ implementation of their corrective action plans has several significant weaknesses, including reliance on ineffective and dated metrics that do not consider states’ improper payment dollar error rates, in conjunction with targets that have not been reassessed since 2000. Similarly, deficiencies in its system for monitoring Single Audit findings limit ACF’s ability to efficiently track and compare trends across states. These weaknesses limit ACF’s ability to measure states’ progress in reducing their improper payment errors, as well as its ability to reliably and completely identify and correct vulnerabilities at the state level that could lead to improper payments. Although OMB’s approval reflected a stated plan for ACF to implement a process to annually improve the accuracy of its improper payment estimate, this has not resulted in substantial changes to the process ACF outlined in 2004. Given the financial accountability challenges reported for state-administered programs, the ongoing imbalance between revenues and outlays across the federal government, and increasing demands for accountability over taxpayer funds, improving ACF’s ability to identify, reduce, and recover improper payments is critical. It will be important for ACF to work closely with OMB in examining and updating its statistical procedures to help ensure the validity of ACF’s estimates. In order to more accurately and completely estimate improper payments for the Foster Care program and ensure that its methodology is statistically valid, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary for the Administration for Children and Families to take the following four actions: augment procedures for estimating and reporting Foster Care program improper payments, to include administrative costs; develop and implement procedures to provide a statistically valid methodology for estimating and reporting Foster Care program improper payments based on complete and accurate payment data; augment guidance to teams gathering state-level Foster Care program improper payment estimate data to include specific procedures to follow in identifying and reporting any underpayments and duplicate or excessive payment errors; and revise existing procedures for calculating a national improper payment estimate for the Foster Care program to include a statistically valid method for aggregating state-level margins of error to derive an overall, inflation-adjusted program estimate.
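As one illustration of the last of these recommendations, the sketch below aggregates hypothetical state-level improper payment estimates and their standard errors into a national dollar estimate with a 90 percent confidence interval. It assumes, for simplicity, that the state estimates are statistically independent and omits the inflation adjustment; all figures are invented.

```python
# Minimal sketch of aggregating independent state-level improper
# payment estimates, with their standard errors, into a national
# estimate and a 90 percent confidence interval. Figures are invented.

Z_90 = 1.645  # normal critical value for a 90 percent interval

states = [  # (improper-dollar estimate, standard error, total payments)
    (3_700_000.0, 600_000.0, 120_000_000.0),
    (3_300_000.0, 900_000.0,  45_000_000.0),
    (2_500_000.0, 400_000.0, 210_000_000.0),
]

national_estimate = sum(est for est, _, _ in states)
total_payments = sum(pay for _, _, pay in states)

# For independent estimates, variances add; the national standard
# error is the square root of the summed state variances.
national_se = sum(se ** 2 for _, se, _ in states) ** 0.5
half_width = Z_90 * national_se

rate = national_estimate / total_payments
print(f"National estimate: ${national_estimate:,.0f} "
      f"+/- ${half_width:,.0f} (90% confidence interval)")
print(f"National error rate: {rate:.2%}")
```

The point of carrying each state’s standard error through the calculation is that the national figure can then be reported with a defensible precision statement, rather than as a point estimate of unknown reliability.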
To help ensure corrective action strategies effectively reduce Foster Care program improper payments, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary for the Administration for Children and Families to take the following three actions: develop and implement procedures requiring states to implement and report on corrective actions whenever a state’s estimated improper payment dollar error rate exceeds a specified target level for the program; establish and implement procedures requiring periodic assessments of the state-level improper payment target levels, including targets associated with Title IV-E secondary reviews, above which states are to implement and report on corrective actions; and enhance ARTMS reporting capabilities to provide data on the status of actions taken to address Single Audit findings concerning states’ Foster Care program payments, such as providing reporting capabilities to allow ARTMS users to search for specific audit findings by type of finding, grantee, state, region, or across years. We provided a draft of this report to the Secretary of Health and Human Services for comment. In its written comments, reprinted in appendix IV, HHS agreed that its improper payment estimation efforts can and should be improved. HHS provided a summary of refinements that it had made to its improper payment estimation methodology over the years and also provided information on additional steps it planned to take. HHS stated that our analysis would be a helpful resource as it continued to improve its process. With regard to our seven recommendations to help improve ACF’s methodology to estimate improper payments and its corrective action process, HHS generally concurred with four of the recommendations and agreed to continue to study the remaining three. HHS also provided technical comments that we incorporated as appropriate. HHS generally concurred with three recommendations we made related to improving the improper payment estimation methodology for the Foster Care program. Specifically, HHS generally agreed to (1) estimate and report improper payments related to administrative costs, (2) provide specific procedures to identify and report any underpayments and duplicate or excessive payment errors, and (3) revise its procedures to aggregate state-level margins of error in deriving an overall, inflation-adjusted program estimate. HHS described several actions currently under way to address these recommendations. Regarding the first recommendation, HHS noted that it was continuing to pilot test the Administrative Cost Reviews, described in this report, in fiscal year 2012; however, HHS’s response did not indicate when it expects these reviews will be fully implemented. In its response to the second recommendation, HHS stated that additional guidance for identifying and reporting underpayments and duplicate or excessive payment errors will be included in the updated Eligibility Review Guide and review instrument during the fiscal year 2012 review cycle. For the third recommendation, HHS stated that it can and will adjust its calculation to incorporate individual state margins of error in aggregating the state-level estimates into the national program estimate. However, HHS also stated that it will seek to determine whether making this revision would add sufficient value, given that the estimate spans 3 years and that inflation is relatively low.
We maintain that both aggregating state-level margins of error and factoring for inflation are needed to implement a statistically valid method for estimating improper payments. For the other recommendation we made related to the improper payment estimation methodology for the Foster Care program, HHS stated it would continue to study our recommendation to develop and implement a statistically valid Foster Care improper payment methodology based on complete and accurate payment data. HHS agreed that it should use the best data available. In its comments on our draft report, HHS acknowledged that it would be optimal to conduct a separate data collection to obtain a universe of Title IV-E payments, but stated that it needs to balance the goal of appropriate measurement with the cost and burden placed on states. HHS described the quality controls in place over the AFCARS data to help ensure the information is complete and accurate prior to selecting case samples for its Title IV-E eligibility reviews, which form the basis for its Foster Care improper payment estimate. Examples of such controls include automated system edit checks within AFCARS, AFCARS Assessment Reviews, and other outreach efforts to improve state AFCARS reporting. Our report describes some of the steps ACF has taken to address AFCARS data quality, but we also point out limitations in these efforts. For example, the AFCARS Assessment Reviews are not conducted annually for all states and do not address verifying the accuracy or completeness of the specific data element that ACF uses to develop its population of foster care cases for estimating improper payments. HHS also stated that its use of oversample cases demonstrated that its sampling and oversampling process is working properly to exclude cases that do not meet the selection criteria. While this process would identify some cases that did not meet ACF’s selection criteria, our point in this report is that ACF’s extensive reliance on the use of oversampling in its methodology indicates that the population of cases could contain additional inaccuracies that may not be identified through its existing process. Further, as we stated in our report, neither we nor ACF were able to determine the completeness of the universe of cases used to estimate Foster Care improper payments, that is, whether all cases that had actually received a Title IV-E payment were properly coded as such. Thus, given the issues we identified with ACF’s sampling methodology, there is limited assurance that the reported improper payment estimate accurately and completely represents the extent of improper maintenance payments in the Foster Care program. With respect to our three recommendations to help ensure corrective strategies effectively reduce Foster Care improper payments, HHS concurred with one recommendation related to establishing and implementing procedures for periodic assessments of state-level improper payment target levels. HHS also agreed to consider another recommendation—to develop and implement corrective action procedures whenever a state’s estimated improper payment dollar error rate exceeds a specified target level for the program—in conjunction with the recommendation it concurred with to implement periodic assessments.
HHS highlighted several actions it plans to take to enhance its efforts to reduce improper payments, such as taking steps to reexamine and explore the feasibility of lowering the error rate threshold and considering ways to enhance existing Eligibility Review Guide instructions to address any eligibility review findings that involve improper payments not specifically requiring development of a corrective action plan. HHS stated that it plans to further study our recommendation to enhance ARTMS reporting capabilities to provide data on the status of actions taken to address Single Audit findings concerning states’ Foster Care program payments. HHS stated that the agency would study the value of potential enhancements to ARTMS in light of the significant relevant data already available (such as audit information by federal program, grantee name, audit resolution status, and audit periods). Although these data elements are currently available in ARTMS, search results are presented by individual audit reports and include limited information. For example, the search results provide only the number of findings associated with a specific audit report but do not provide details of the individual audit findings that would allow program managers to analyze, for example, trends in the types of findings in states in order to ensure that any systemic issues are addressed. We reaffirm our recommendation to ensure that ACF is able to fully utilize ARTMS as a tool to analyze Single Audit findings. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretary of Health and Human Services; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8486 or [email protected]. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of this report were to (1) determine the extent to which the Administration for Children and Families’ (ACF) estimation methodology generated a reasonably accurate and complete estimate of improper payments across the Foster Care program and (2) determine the extent to which ACF’s corrective actions reduced improper payments. To address these objectives, we reviewed the Improper Payments Information Act of 2002 (IPIA) requirements and related Office of Management and Budget (OMB) guidance effective for fiscal year 2010, Department of Health and Human Services (HHS) regulations on Title IV-E eligibility reviews, and ACF’s internal guidance, including policies and procedures on conducting Title IV-E Foster Care eligibility reviews, computing improper payments, implementing corrective action plans for reducing improper payments, and monitoring and resolving audit findings in the Foster Care program. We also reviewed results from the Title IV-E eligibility reviews for the periods 2000 through 2010 and prior GAO and HHS Office of Inspector General (OIG) reports. In addition, we reviewed improper payment information reported in HHS’s fiscal year 2010 agency financial report, Improper Payments Section.
We reviewed these documents to understand ACF’s efforts to address IPIA requirements and to identify previously reported issues with ACF’s improper payment reporting. To further determine the extent to which ACF’s methodology generated a reasonably accurate and complete estimate of improper payments across the Foster Care program, we: Performed an independent analysis of ACF’s sampling methodology, including a review of the sampling plan and other underlying documentation, and evaluated whether ACF’s sampling methodology complied with OMB statistical guidance, GAO guidance, and federal internal control standards as criteria to determine the accuracy and completeness of ACF’s reported fiscal year 2010 improper payment estimate for the Foster Care program. The scope of our review did not include an assessment of individual states’ processes or payment systems that are the underlying data that ACF uses to support the national estimate of Foster Care improper payments. Interviewed ACF officials, such as the Acting Associate Commissioner for the Children’s Bureau, its contractor, and staff at selected regional offices, such as program managers and financial specialists, to gain an understanding of (1) the methodology that ACF uses to estimate improper payments in the Foster Care program in accordance with IPIA requirements and (2) the Administrative Cost Review pilot in five states to develop a methodology for estimating related administrative improper payments. We reviewed available reports for the five pilot reviews to identify what information ACF obtained from these reviews. To further determine the extent to which ACF’s corrective action process reduced improper payments, we: Reviewed ACF policies and procedures to gain an understanding of reported corrective action strategies, including the Title IV-E eligibility review process and development of states’ Program Improvement Plans (PIP) used to address the root causes of improper payments, which are identified from the Title IV-E eligibility reviews. We reviewed applicable states’ PIPs for the period 2001 through 2010. We also inquired of ACF officials from the Program Implementation Division within the Children’s Bureau about other monitoring activities in place for states that did not have a PIP in place for IPIA reporting in fiscal year 2010. Compared the compliance thresholds ACF uses to require states to implement corrective actions against actual performance data to assess the propriety of established performance measures. As part of this analysis, we reviewed our internal control standards as guidance to assess ACF’s evaluation of states’ efforts to implement corrective actions. Conducted a walkthrough of ACF’s Audit Resolution Tracking and Monitoring System (ARTMS) to obtain an understanding of ACF’s monitoring activities to track and resolve states’ Single Audit findings for the Foster Care program. In addition, we interviewed officials in ACF’s Office of Information Services and the Division of Financial Integrity in the Central Office, as well as officials in selected representative regional offices, such as Regional Program Managers and program and fiscal specialists, to determine how ARTMS is used to identify and correct vulnerabilities that could lead to improper payments.
Examined states’ reported Single Audit findings for fiscal years 2008 through 2010 from ARTMS and a listing of HHS OIG reports on the Foster Care program to identify vulnerabilities or weaknesses in states’ operations that may not have been identified through ACF’s Title IV-E eligibility reviews. Reviewed agency policies and procedures, such as ACF’s Title IV-E Foster Care Eligibility Review Guide, issued in March 2006, which includes the Title IV-E Foster Care Eligibility On-Site Review Instrument and Instructions; ACF’s user guide for ARTMS, version 1.2; and ACF’s FY 2010 Corrective Action Plan to Reduce the Estimate Rate of Improper Payments in the Foster Care Program, dated November 12, 2010. In addition, we conducted site visits to three of ACF’s ten regional offices (Philadelphia, PA; Chicago, IL; and San Francisco, CA). These three regional offices provided oversight of states that collectively claimed over half of the total federal share of Foster Care payments made in fiscal year 2009, the most recent data available at the time of our review for site visit selection. We also selected these regional offices to achieve variation in the numbers of error cases and amount of disallowed claims found during Title IV-E eligibility reviews, which ACF conducts to help ensure that states are claiming federal reimbursement only for eligible children. One region had the highest number of error cases found in the Title IV-E eligibility reviews and the highest maintenance payment disallowance. Another region had the largest amount of foster care maintenance payments in fiscal year 2009, and the states within this region had high improper payment rates. The third region had a low number of error cases and improper payment issues relative to the high amount of maintenance payments it made to states in its purview. During these site visits, we interviewed agency personnel such as program managers, regional grants officers, and financial specialists to gain an understanding of how Title IV-E eligibility reviews are conducted and how the regional offices work with states on corrective actions and follow up on Single Audit findings. We also inquired about other ACF monitoring activities over states to address financial management weaknesses. We conducted this performance audit from February 2011 through March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to ACF, Florida’s Title IV-E reviews were suspended pending completion of a statewide Foster Care demonstration project and were therefore not included in the national error rate. In addition to the contact named above, Carla Lewis, Assistant Director; Betty Ward-Zukerman, Assistant Director; Sharon Byrd, Assistant Director; Sophie Brown; Gabrielle Fagan; Vincent Gomes; and Nhi Nguyen made key contributions to this report. Also contributing to this report were Kay Brown; Wilfred Holloway; Francine DelVecchio; Jason Kirwan; and Karen O’Conor.
Each year, hundreds of thousands of the nation’s most vulnerable children are removed from their homes and placed in foster care. While states are primarily responsible for providing safe and stable out-of-home care for these children, Title IV-E of the Social Security Act provides federal financial support. The Administration for Children and Families (ACF) in the Department of Health and Human Services (HHS) is responsible for administering and overseeing federal funding for Foster Care. Past work by the HHS Office of Inspector General (OIG), GAO, and others has identified numerous financial deficiencies associated with the Title IV-E Foster Care program. GAO was asked to determine the extent to which (1) ACF’s estimation methodology generated a reasonably accurate and complete estimate of improper payments across the Foster Care program and (2) ACF’s corrective actions reduced Foster Care program improper payments. To complete this work, GAO reviewed HHS’s fiscal year 2010 improper payments estimation procedures, conducted site visits, and met with cognizant ACF officials. Although ACF has established a process to calculate a national improper payment estimate for the Foster Care program, the estimate is not based on a statistically valid methodology and consequently does not reflect a reasonably accurate estimate of the extent of Foster Care improper payments. In addition, the estimate covers only about one-third of the federal expenditures for Foster Care and is therefore incomplete. ACF’s methodology for estimating Foster Care improper payments was approved by the Office of Management and Budget (OMB) in 2004 with the understanding that continuing efforts would be taken to improve the accuracy of ACF’s estimates of improper payments in the ensuing years. ACF, however, generally continued to follow its initial methodology, which GAO found—when compared to federal statistical guidance and internal control standards—to be deficient in all three of its phases: planning, selection, and evaluation. These deficiencies impaired the accuracy and completeness of the Foster Care program improper payments estimate of $73 million reported for fiscal year 2010. ACF has reported significantly reduced estimated improper payments for its Foster Care maintenance payments, from a baseline of 10.33 percent for fiscal year 2004 to a 4.9 percent error rate for fiscal year 2010. However, the validity of ACF’s reporting of reduced error rates is questionable. GAO found that ACF’s ability to reliably assess the extent to which its corrective actions reduced improper payments was impaired by weaknesses in its requirements for state-level corrective actions. For example, ACF used the number of cases found in error rather than the dollar amount of improper payments identified to determine whether a state was required to implement corrective actions. As such, some states with higher improper payment dollar error rates were not required to implement actions to reduce these rates. GAO also found deficiencies in ACF’s Audit Resolution Tracking and Monitoring System that limited its ability to efficiently track and compare trends across states’ Single Audit findings. GAO is making seven recommendations to help improve ACF’s methodology for estimating improper payments for the Foster Care program and its corrective action process.
HHS agreed that its improper payment estimation efforts can and should be improved; it generally concurred with four of the recommendations and agreed to continue studying the remaining three. GAO reaffirms the need for all seven recommendations.
DOD invests in power sources such as batteries, fuel cells, and capacitors to support the warfighting effort by powering weapon systems and equipment. DOD’s power source investment is expected to rise because of an increased reliance on advanced weapon systems and equipment and ongoing efforts to develop new technologies that are smaller, lighter, and more power dense. Batteries are devices that convert chemical energy into electrical energy. The two main types of batteries are primary (non-rechargeable) and secondary (rechargeable). Primary batteries, which are discarded after their charge has been depleted, are the most common battery type for soldier-carried applications. A subclass of primary batteries called thermal batteries is used for short-term, high-power applications (e.g., missiles). While primary batteries typically self-discharge available energy when not in use, thermal batteries have a longer shelf life because they remain inert until activated. Secondary batteries, which can be reenergized after their charge has been depleted, are less commonly used by deployed units than primary batteries. However, the Army has undertaken educational campaigns to increase their use in light of some cost efficiencies and operational advantages—including overall weight reduction of soldiers’ equipment. Further, the military services are interested in transitioning from non-rechargeable batteries to secondary batteries because their use by deployed units may decrease the number of vehicle convoys needed to supply batteries in war zones. DOD is also interested in limiting the proliferation of battery types to reduce the number of different battery types the soldiers have to carry and limit soldier confusion over which battery is required to operate a device—thus simplifying operations and resupply. See figure 1 for a sample of DOD’s power source inventory. In general, fuel cells and capacitors are less mature technologies than batteries with respect to defense applications. Fuel cells are electrochemical devices that convert the chemical energy in a fuel, such as hydrogen, into electrical energy. Fuel cells look and function very similarly to batteries. However, the available energy of a battery is stored within the battery—and its performance will decline as that energy is depleted—while a fuel cell continues to convert chemical energy to electricity as long as it has a supply of fuel. Capacitors are passive electrical components that store energy and may be used for a wide range of commercial and defense applications. Although most capacitors are used for small, primarily consumer-oriented electronic devices, they are increasingly being developed for high-power weaponry. DOD research organizations have ongoing S&T efforts focused on maturing fuel cell and capacitor technologies so they can be deployed. Given the developmental nature of these technologies—as well as the predominance of batteries among tactically deployed power sources—this report principally discusses batteries. DOD invests in power sources in three broad, interrelated investment categories: (1) S&T efforts related to developing and improving power source technologies, (2) purchasing power sources for logistics support as part of routine warfighter resupply, and (3) developing or purchasing power sources for integration into a weapon system or equipment as part of an acquisition program. Ideally, technologies developed as part of S&T efforts will ultimately be incorporated into new or existing weapon systems or equipment.
These three investment categories are described below. 1. S&T: DOD research, development, test, and evaluation investment is separated into seven discrete investment categories known as budget activities. The first three categories represent basic and applied research and technology development activities and are collectively known as S&T activities. These can include activities such as developing or improving upon different chemical combinations that enhance energy storage or power output capabilities, developing lighter components, and identifying and incorporating novel material components. This research may be conducted by many different entities, including DOD research centers and other government laboratories, power source manufacturers, and academic institutions. According to DOD officials, these projects may be funded through a variety of mechanisms, including a DOD component’s base budget; small business programs, such as the Small Business Innovation Research (SBIR) program; and additions Congress makes to DOD’s budget (i.e., congressional add-ons). 2. Logistics support: This category includes the provision of logistical services, materiel, and transportation required to support the military in the continental United States and worldwide. Power sources are like any other materiel requirements of military units, such as food and clothing, in that they are a consumable commodity that must be reordered and resupplied according to military service needs. Power sources for logistics support are typically purchased through the Defense Logistics Agency (DLA), which is the primary supplying agent for DOD. 3. Acquisition programs: This category includes the selection of a military standard power source, the selection of a commercial-off-the-shelf (COTS) power source, or the design, development, and production of a program-unique power source as part of a DOD acquisition program. This process may be managed by the program office responsible for the weapon system or equipment acquisition, the contractor developing the system, or both. Since virtually all weapon systems and equipment include a power source, most acquisition programs have to undergo this process. For the purpose of this report, we define coordination as any joint activity by two or more organizations that is intended to produce more public value than could be produced when the organizations act alone. As we have previously reported, interagency coordination is important to avoid carrying out programs in a fragmented, uncoordinated way in areas where multiple agencies address a similar mission. Standardization, which is a form of coordination, includes efforts to expand the use of common or interchangeable parts by developing and agreeing on compatible standards. With respect to power sources, this may include developing standard shapes to facilitate the use of common, nonproprietary power sources in a range of weapon systems and equipment. DOD lacks comprehensive, departmentwide data for its total investment in the power sources area, and no single DOD office aggregates these data across all investment categories. Further, availability of complete data varies across the three investment categories: S&T, logistics support, and acquisition programs. We determined that DOD invested at least $2.1 billion in power sources from fiscal year 2006 through fiscal year 2010. While DOD appears to have adequate departmentwide data on S&T efforts, it does not have departmentwide data for all logistics support investments.
DOD has limited data on its investments in power sources when they are developed or purchased for acquisition programs. The $2.1 billion amount includes the investments in S&T and logistics support that we were able to identify but not power source investments as part of acquisition programs because of the difficulty in obtaining investment data in that area. In general, a lack of investment information can adversely affect DOD’s ability to avoid unnecessary duplication; control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent and detect fraud, waste, and abuse; and address pressing management issues. We determined that from fiscal year 2006 through fiscal year 2010 DOD invested approximately $868 million in the development of power source technologies through many individual power source S&T projects. However, this amount is approximate as it may not include all power source S&T project funding. Figure 2 depicts DOD’s approximate investment in power sources S&T by DOD component. In the period from fiscal year 2006 through fiscal year 2010, the Army was the largest investor with a total investment of about $361 million and the Navy was the second largest investor with a total investment of about $342 million. During that same time period, the Air Force invested about $90 million, the Defense Advanced Research Projects Agency (DARPA) invested about $51 million, and the Missile Defense Agency (MDA) invested about $26 million. DOD’s investment is largely concentrated within two power source technology areas: batteries and fuel cells. There is also significant investment in projects that involve more than one type of technology, which we refer to in figure 3 as multiple types. We found that the total investment for capacitor-related research was small relative to the other areas. This may be because capacitors for high-power defense applications are an emerging and still immature technology. Officials informed us that DOD-wide interest in capacitors has increased along with an interest in high-power weaponry. As shown in figure 3, the largest investment—about 36 percent of the total for fiscal year 2006 through fiscal year 2010—was in fuel cells. We identified a suite of DOD-wide information technology resources that includes a database used for tracking DOD-wide S&T activities. This database does not categorize projects in such a way that one could readily and reliably extract all activities for a certain research area (such as batteries). Despite these limitations, we were able to obtain suitable data from each research organization, which enabled us to present an approximate investment figure. We found that DOD invested at least $1.2 billion in power sources for logistics support from fiscal year 2006 through fiscal year 2010. Though DLA supplies the nation’s military services with critical resources needed to accomplish their worldwide missions, there are additional methods outside of DLA’s procurement processes by which the military services may purchase power sources. For example, a service might purchase a power source outside of DLA’s procurement processes if that service is the only consumer of the power source item. However, we found no DOD effort to aggregate and analyze these investments, even though DLA and military service logistics databases track investments using a standard governmentwide federal supply coding system that could be used for this purpose. 
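Because these databases already classify items under a common, governmentwide federal supply coding system, a departmentwide rollup could in principle be built on that shared field. The sketch below illustrates the idea under stated assumptions: the file names, record layout, and CSV export format are hypothetical, and the use of federal supply classes 6135 and 6140 (non-rechargeable and rechargeable batteries) as power source codes is illustrative only, not a description of any actual DOD or DLA system.

```python
# Minimal sketch: aggregating power source purchases across logistics
# databases that share a federal supply classification field.
# Hypothetical: file names, column names, and the CSV export layout.
# FSC 6135/6140 (non-rechargeable/rechargeable batteries) are used here
# as illustrative power source supply classes.
import csv
from collections import defaultdict

POWER_SOURCE_FSCS = {"6135", "6140"}

def aggregate_purchases(export_files):
    """Sum purchase amounts by fiscal year and source system."""
    totals = defaultdict(float)
    for path in export_files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # Keep only rows whose supply class marks a power source.
                if row["fsc"].strip() in POWER_SOURCE_FSCS:
                    key = (row["fiscal_year"], row["source_system"])
                    totals[key] += float(row["amount"])
    return totals

if __name__ == "__main__":
    # Hypothetical exports from DLA and service-level logistics databases.
    totals = aggregate_purchases(["dla.csv", "army.csv", "navy.csv", "air_force.csv"])
    for (fy, system), amount in sorted(totals.items()):
        print(f"FY{fy} {system}: ${amount:,.0f}")
```

The design point is that the rollup keys on a classification field each database already populates, so aggregation would largely be a matter of consolidating existing exports rather than imposing a new reporting requirement on the services.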
We collected data from DLA and military service databases for investments in power sources for logistics support from fiscal year 2006 through fiscal year 2010. We determined that military service purchases through DLA likely account for the majority of logistics support investments captured by DOD databases. However, while the $1.2 billion investment amount we compiled includes data from these databases, DOD officials informed us that not all of these databases track power source purchases made as part of contractor-performed maintenance for weapon systems and equipment—known as contract logistics support. As we have previously reported, DOD has relied extensively on contractors for activities such as logistics support. Thus, the minimum investment amount we generated does not include what is likely a substantial amount of power source investments for logistics support. Figure 4 depicts DOD’s minimum investment in power sources for logistics support.

Though virtually all DOD weapon systems and equipment rely on a power source, DOD has little data on its total investment in power sources for acquisition programs. DOD officials told us that neither the department nor individual DOD components have information showing the total amount invested in power sources for acquisition programs, although this information may be retained by individual program offices. We asked some program offices if they could provide basic cost information on the principal power sources used by their programs. Some program offices provided this information, but others did not. Some offices that could not provide this information offered an explanation; for example, one program office told us that the cost for the power source was built into the overall cost for the system and thus was not broken out as a specific expense. Other program offices simply provided no cost data and no explanation. We also asked a number of senior DOD officials—including officials from OSD and from the services at the assistant or deputy assistant secretary level—whether they could provide data on total investment in power sources for acquisition programs at the departmentwide or service levels, but none were able to do so. Officials from the Office of the Director of Operational Energy Plans and Programs, an office within OSD that serves as the principal advisor to the Secretary of Defense and others regarding operational energy, concurred. They stated that since these costs are not aggregated, DOD would have to require each acquisition program office to identify power source investments and then consolidate them. They added that this would be a labor-intensive data collection effort given the large number of DOD acquisition programs. In order to gain an understanding of how some acquisition programs determined which power sources would be used by their programs, we asked several Army, Navy, and Air Force acquisition program offices to provide us with information on this process. Although these offices provided responses with varying levels of detail, we determined that there are several methods by which a program office may acquire power sources.
For example:

- Selection of existing military standard power sources: The program office for the V-22 Osprey, a tilt rotor aircraft developed by the Navy in the 1980s, followed a mandatory Navy specification for rotary aircraft that required the use of government-furnished batteries made to DOD military standards. According to the program office, the V-22 program tested two military standard batteries already used in two other aircraft and determined that they met the power source requirements of the V-22. As such, the program selected these two batteries for use by the V-22. Because the V-22 selected preexisting batteries, the program incurred no development costs; the combined unit costs provided were $3,688.

- Selection of COTS power sources: Officials from the Navy’s P-8A Multi-mission Maritime Aircraft program told us the program uses a COTS battery as the principal power source for its electronics systems. The P-8A is derived from a Boeing 737 commercial aircraft and has roles in antisubmarine and antisurface warfare as well as intelligence, surveillance, and reconnaissance. The program office assessed the suitability of the power source used by the Boeing 737, found that this COTS solution met the program’s requirements, and selected it for use by the program. Because the P-8A selected a preexisting COTS battery, the program incurred no development costs associated with program-unique power sources. The unit cost provided was $11,500.

- Development of program-unique power sources: Officials in the Joint Air-to-Surface Standoff Missile program office told us that they determined that the program required the design, development, and production of a program-unique thermal battery because of the missile’s strict design parameters in terms of internal space available for the power source. The program developed a new battery, but the program office was only able to provide limited cost information because the costs involved were included in the overall cost of the missile. The unit cost provided was $3,775.

DOD coordination mechanisms for power source S&T activities are generally effective in facilitating coordination across pertinent DOD components and with the Department of Energy (DOE), but opportunities exist for improvement. We also found that DOD’s strategic planning process for appropriately directing S&T investment for power source technologies could be improved. DOD also generally has deficiencies in strategic planning for critical technologies, processes for technology transition, and tools that support transition. Further, S&T planning efforts can be complicated by external factors. For example, congressional additions to DOD’s budget account for just over half of the total S&T funding we identified for power sources. Since this process can be informal and lack transparency, outcomes in this area may be unpredictable and difficult to incorporate into strategic plans.

DOD uses various mechanisms to facilitate the coordination of power source S&T activities across pertinent DOD components, DOE, and in some cases industry. According to DOD power source researchers, the principal means for coordinating is the Chemical Working Group of the Interagency Advanced Power Group (IAPG). The Chemical Working Group is part of the long-standing IAPG and brings together researchers from relevant DOD components, DOE, and other federal stakeholders to exchange information about power source projects and avoid unnecessary duplication of effort.
In addition, the Defense Technical Information Center—an organization responsible for providing information services to DOD—has a number of information technology resources related to S&T that were developed to facilitate information sharing between stakeholders across the DOD research and engineering community. Table 1 lists the principal ways DOD coordinates S&T projects. As an example of the efficacy of these mechanisms, no power source projects presented at the 2010 annual Chemical Working Group meeting were identified as involving duplicative research within DOD or between DOD and DOE, though the meetings have been effective in identifying instances of project duplication in the past. Additionally, DOD and DOE participate together in several other coordinating groups to leverage common efforts, and in July 2010 DOD and DOE signed a memorandum of understanding establishing a framework for cooperation and partnership on energy issues. Both organizations agreed to collaborate on S&T projects at research institutions sponsored by either agency, to synchronize S&T to expand complementary efforts, and to develop joint initiatives for major energy S&T programs of mutual interest. Though we found these mechanisms to be generally effective, agencies may miss opportunities to fully coordinate because attendance at these interagency groups and conferences is voluntary and the level of agency participation varies. Further, conversations with officials from DOD component organizations suggest that there may be limited awareness within the DOD power sources community of the coordination services available through the Defense Technical Information Center. In areas where multiple agencies address a similar mission, interagency coordination is important to collectively meet common goals and avoid carrying out programs in a fragmented, uncoordinated way. As we have previously reported, a lack of coordination can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. Agency officials informed us that the community of power source experts from the federal government, industry, and academia is small and well connected by interpersonal relationships. Although it is not possible to accurately estimate the impact of these often informal relationships, officials believed that such relationships facilitate information sharing, which is beneficial to DOD-wide power source S&T.

We found that though DOD has generally effective S&T coordination mechanisms, its strategic planning process to facilitate the allocation of S&T funds for power source technologies could be improved. Most DOD components generate strategic plans to guide S&T investments, though we found no current Air Force plan. We found that existing military service-level S&T strategic plans are not specific and typically do not discuss investments in power sources in depth, if at all. There have also been several technology roadmaps developed or initiated specifically for the power sources area. However, DOD researchers told us that these roadmaps may quickly become irrelevant without frequent updating because necessary investment levels and the maturity of the pertinent technologies may evolve over time. Further, unless roadmapping efforts are coordinated, DOD cannot be assured that they will be complementary and fully assist agencies in addressing shared technological challenges.
Additionally, though DOD has established the Energy and Power Community of Interest to focus on power source issues as part of its broader Reliance 21 program, representatives of this group told us that it is a relatively new organization and is still finalizing organizational planning. They said that the community of interest will develop strategic planning documents specific to power sources that will enable DOD to better plan in this area. We have previously reported that DOD lacked a single executive-level OSD official accountable for operational energy matters and recommended that one be designated. We also noted that DOD lacked a comprehensive strategic plan for operational energy. As a result, in October 2009 DOD established the position of Director of Operational Energy Plans and Programs. According to officials from this office, it will, among other things, coordinate departmentwide policy, planning, and program activities related to operational energy demand and relevant technologies. Further, officials told us that this office will also include power source technologies in its purview. The Director was recently confirmed, and the office is currently working to gather the personnel required to support its efforts. The Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 requires the office to submit an Annual DOD Energy Management Report on departmentwide operational energy.

We have previously reported that DOD generally faces problems with deficiencies in strategic planning for critical technologies, processes for technology development and transition, and tools that support transition. Similarly, some DOD officials told us about challenges in transitioning a new power source technology from the laboratory to an acquisition program. We identified some efforts that support power source technology transition within the services. However, DOD researchers said that the overall problem still occurs in this area and that promising technologies may be forgotten or overlooked if they are not transitioned into an acquisition program. In addition, DOD’s lack of oversight and comprehensive data on power source investments for acquisition programs may further complicate technology transition efforts.

S&T planning efforts can be complicated by external factors. We found that DOD investments in power source S&T come from several sources, including base budget funds, small business programs (such as the SBIR program), and congressional add-ons—that is, additions Congress makes to DOD’s budget. From the data we collected, we determined that congressional add-ons account for approximately 55 percent of the total DOD investment we identified in power source S&T from fiscal year 2006 through fiscal year 2010. While these add-ons provide funding for S&T, officials at DOD research organizations told us that they may pose a challenge to strategic planning for two reasons. First, research organizations may lack complete discretion over how to apply the funds—while they may be able to accept or decline an add-on, these add-ons do not give them full control over the project. Second, since this process can be informal and lack transparency, outcomes in this area may be unpredictable and difficult to incorporate into strategic plans. Though DOD officials agree that the department needs to increase its emphasis on power source standardization, it lacks a departmentwide policy to emphasize or compel early consideration of standard power sources.
Absent emphasis on early standardization, profit incentives can often lead companies to develop unique, proprietary power sources. The Army has a policy to encourage standardization, but the other services lack comparable policies. Although it is generally more economical to address standardization early in the acquisition process and prior to the deployment of weapon systems or equipment to the field, opportunities may exist to increase standardization by retrofitting weapon systems or equipment for which a proprietary power source has already been developed. This was recently done successfully with the TALON bomb disposal robot. DOD’s lack of emphasis on power source standardization limits opportunities to obtain potential benefits, including reduced item unit costs and a smaller logistical footprint.

It is important to emphasize standardization early in a program before certain system decisions are made. Without early consideration of available standard power sources, the design parameters of a system may become more constrained as other parts are developed and integrated. As a result, remaining space may not be sufficient to fit the shape of appropriate standard power sources. Although in some cases developing a program-unique power source is necessary because of legitimate constraints, such as necessary limitations on the space available for a power source, officials told us that companies may develop program-unique power sources unnecessarily. Not requiring power source standardization can result in unnecessary proliferation that may ultimately have downstream implications in terms of resupplying the warfighter. DOD officials we spoke with agree that the department needs to increase its emphasis on power source standardization. However, DOD lacks a departmentwide policy to help emphasize power source standardization and compel early consideration of standard power sources. We found that without policies requiring standardization, programs may choose to develop or select nonstandard power sources when an existing military standard or other preferred item could have been used, potentially hindering standardization efforts. DOD and industry officials told us that power sources are often not considered by program offices, or are thought of by acquisition officials as a peripheral concern, because of their low costs relative to overall program costs. Additionally, according to the Defense Standardization Program, DOD’s performance-based acquisition policies give contractors primary responsibility for recommending the use of standard components to meet performance requirements. DOD officials and power source company representatives told us that program managers may choose not to exercise oversight of these contractor decisions. Further, during these discussions, we were told that companies have a profit motive to develop proprietary power sources as part of the acquisition of a weapon system or equipment because they would prefer to be sole-source suppliers. Thus, they may not consider standard options that would provide more optimal solutions for DOD customers. According to DOD officials, an instance of a contractor choosing a proprietary power source over an existing battery occurred with the batteries for two radio systems used by the Army and the Marine Corps—the AN/PRC-148 Multiband Inter/Intra Team Radio and the AN/PRC-152 Falcon radio.
Though the radios are functionally similar, they each use a program-unique proprietary battery instead of an existing battery or a battery common to both radios. Further, although the batteries are very similar in design and each will fit in the other device, a superficial design characteristic on one battery prevents it from powering the other manufacturer’s radio. In addition, the charger interfaces are not compatible, so the batteries cannot be charged using a single charger without modification, such as through an adapter. As a result, the service users of the two radios must manage inventories for two types of batteries and chargers, and soldiers in the field have to ensure that they take the correct battery for their radio since the other battery will not be compatible. Also, the military services are unable to competitively procure the batteries because each is a proprietary device and the services must rely on the sole-source supplier of each battery—potentially increasing the risk of item shortages or delays.

Though DOD officials we spoke with in the power sources area agree that the department needs to increase emphasis on power source standardization early in programs, existing organizational efforts lack the authority and resources to implement any policies. For example, DOD’s Defense Standardization Program established the Joint Standardization Board for Power Source Systems to focus specifically on power source standardization. According to the board’s charter, it serves as a standing technical group for power source standardization efforts. Its specific role is to participate in the development of an overarching DOD standardization strategy for power sources and to promote commonality of component parts or interfaces by facilitating a coordinated approach with joint programs. However, the Chairman of the Joint Standardization Board for Power Source Systems told us that though the board is part of the Defense Standardization Program, it does not have the funding it needs to function and thus has had little impact. He added that other joint standardization boards have significant user funding because particular acquisition program managers, or sponsors, have a vested interest in the results of their work. Officials from this board also noted that while emphasizing standardization early in acquisition programs will undoubtedly yield future benefits, DOD lacks both a comprehensive plan for creating an appropriate level of emphasis on power source standardization and a policy for ensuring the achievement of standardization goals. Accordingly, these officials recommended in a Defense Standardization Program publication that DOD establish a plan (in conjunction with power source experts from throughout the federal government, industry, and academia) to create an appropriate level of DOD-wide emphasis on standardization. Further, they recommended that DOD create a policy that addresses the use of nonstandard power sources and that might articulate a process of senior-level review to determine if requests to use nonstandard power sources are justified.

The most significant DOD power source standardization policy we found related to acquisition programs is section 8.8 of Army Regulation (AR) 70-1. Two main objectives of this policy are to decrease the number and types of batteries the Army uses and to limit the development of unique batteries except where necessary.
The regulation prioritizes the use of military or commercial standard batteries in acquisition programs, with a particular emphasis on using rechargeable batteries. Program managers are supposed to coordinate system battery requirements with Army power source subject matter experts, who we were told are currently in the Army Power Division. For programs where military or commercial standard rechargeable battery types are not practical, program offices can choose from a list of military-preferred batteries. The regulation requires program managers to obtain Army acquisition executive approval—which we were told is the responsibility of the Assistant Secretary of the Army for Acquisition, Logistics and Technology—if they intend to use batteries other than those articulated in the regulation. This approval is based on a favorable technical evaluation by Army Power Division officials.

Army Power Division officials stated that there are several difficulties associated with ensuring that acquisition programs consistently follow the regulation. They said that section 8.8 of AR 70-1 can only succeed if there is an effective mechanism for ensuring that acquisition programs comply with it, and they identified challenges that may compromise effective implementation of the regulation. First, Army Power Division officials told us that program managers might not be aware of the requirements. They said that they do not know how many Army acquisition programs comply with section 8.8 of AR 70-1 since they are only aware of the programs to which they provide consulting services as part of the regulation. They could not tell us if any programs did not comply with the regulation and therefore did not request a technical evaluation before developing a program-unique battery. Second, they said that program managers may not comply with AR 70-1 because they do not understand the potential downstream logistical issues that can occur when battery decisions are not made early in the acquisition process. Army Power Division officials said they prefer to get involved with an acquisition program early in the process so they can help identify the best battery solution before system decisions restrict potential choices. They said that to do so they have to earn the respect and trust of program managers so that these programs will seek technical consultation early in the process. They added that the Army Power Division proactively tries to establish and maintain good relationships with the different Army program offices that might have battery needs. Third, the Army Power Division receives approximately half of its funding via customer reimbursement, meaning that it receives funding from program offices when it provides consultative services. This funding arrangement puts the Army Power Division in a difficult position when current and potential acquisition program customers of its technical services request a favorable technical evaluation to support use of a program-unique battery. Army Power Division officials told us that their evaluation may be influenced by their desire to avoid compromising existing relationships with program offices. They added that an unfavorable evaluation may lead the program manager to forgo consultation with the Army Power Division in the future, meaning the Army Power Division would lose a customer and associated funding.
Further, these officials told us that if a program were to request an evaluation of a nonstandard battery late in the weapon system or equipment development process (such as right before the start of production), the Army Power Division might suggest approval of the battery to the Army acquisition executive to avoid delaying production. While Army officials acknowledge compliance issues, the Program Manager-Mobile Electric Power has recently established the position of Product Director for Batteries to provide central coordination and reduce battery proliferation in the Army, in response to a perceived lack of central coordination on battery issues. Because this position has just been established, it has not yet had much impact, but the Product Director for Batteries told us that, pending approval, he intends to eventually take over and update section 8.8 of AR 70-1—including enforcement and the approval or denial of waiver applications—as well as any other Army battery standardization efforts. He told us that because he is a program manager he will have more authority than the Army Power Division to promulgate and enforce policies that increase the emphasis on standardization.

Aside from the Army efforts, we found limited power source standardization efforts in the other military services. In general, these efforts are limited to specific applications, such as aircraft, and do not apply servicewide or departmentwide. The Navy has several platform-specific efforts within the Naval Air Systems Command to develop military performance specifications for multiple battery types to limit proliferation of aircraft battery types. The Marine Corps Systems Command has developed an interactive computer-adaptive tool to help acquisition personnel select appropriate existing batteries for their programs. Also, the Marine Corps Systems Command has a topic paper on electrical connectors—including connectors for batteries—that is intended to reduce proliferation of the connectors that join batteries to weapon systems or equipment. However, use of these tools is voluntary. We did not find any Air Force-wide processes for encouraging the use of existing standard or other preferred power sources.

Although it is generally more economical to address standardization early in the acquisition process and prior to the deployment of weapon systems or equipment to the field, opportunities may exist to increase standardization by retrofitting weapon systems or equipment for which a proprietary power source has already been developed. However, DOD has not undertaken a departmentwide assessment to identify other weapon systems or equipment that use a nonstandard power source but could be retrofitted with a more efficient and lower-cost standard power source with a relatively small investment. Such efforts may provide significant cost savings and operational benefits. For example, Army and Navy research organizations replaced the expensive proprietary batteries used by TALON bomb disposal robots with military standard batteries that are already in the DLA inventory. Army officials noted that their standardization effort for the TALON robot generated a cost savings of about $7,000 per unit of the system. A Navy effort to retrofit TALON robots with military standard batteries extended the robot’s battery life by 23 percent.
Because of the success of the standardization effort in terms of cost and operational advantages, the Marine Corps and the Army replaced proprietary battery packs with the military standard batteries for deployed units of the system. DOD’s lack of emphasis on power source standardization limits opportunities to obtain potential benefits, including reduced item unit costs and a smaller logistical footprint. According to a Defense Standardization Program case study of an effort by the Army to standardize batteries, standardization may enable DOD components to offer manufacturers greater production volumes and avoid reliance on sole-source suppliers for mission-critical items, which may result in a healthier industrial base and improved operational readiness. In general, the military battery industrial base in the United States is characterized by small and midsized companies that operate in an environment with lower sales volume compared to the commercial battery industry. One study characterized the United States military battery industry as struggling for survival, with some companies relying solely on government sales for income. Further, DOD demand is irregular because of fluctuations based on periods of increased or decreased military activity. For example, a surge in demand for some non-rechargeable batteries related to the initiation of combat operations in Iraq exceeded what the industrial base could produce—which threatened to reduce military capability. Though representatives from a major DOD battery supplier told us that they would prefer to develop and be the sole-source supplier of proprietary power sources, they noted that absent this option they would prefer a scenario in which companies could compete to produce standard power sources in order to stabilize their production volumes and revenue. Actions that could contribute to the health of the industrial base—such as providing for greater production volumes through increased standardization—could help DOD ensure the continued availability of military battery producers and mitigate future potential production and supply shortfalls.

The goal of any acquisition program is to provide the warfighter with the best possible weapon system or equipment. However, in light of increasing dependence on power sources, supporting the warfighter’s power needs with more power, longer life, and less weight—as well as ease and sufficiency of supply—is also crucial. The proliferation of unique battery types could become more pronounced and ultimately affect the warfighter as military power demands increase. The current manner in which DOD manages its power source investments and translates them into products that meet warfighter needs is less than optimal. Specifically, DOD cannot efficiently and effectively plan future investments if it lacks comprehensive knowledge of its total power source investment in S&T, logistics support, and acquisition programs. Further, while DOD mechanisms for coordinating power source S&T projects appear effective, their success depends on voluntary participation by all pertinent agencies. When DOD agencies do not fully participate in coordination mechanisms, opportunities to leverage common efforts are limited.
Though DOD has some standardization efforts, decisions on which power sources will go into new equipment, and ultimately into the hands of the warfighter and the supply system, are often not made by DOD program managers, and hence programs may unnecessarily use proprietary power sources. Improving management and coordination of the power sources area could help DOD achieve an optimal return on its investment. Without sufficient departmentwide investment data, more effectively coordinated investments, and increased power source standardization, optimal DOD outcomes in this area cannot be expected.

To increase oversight of power source investments and to allow for enhanced strategic planning, we recommend that the Secretary of Defense consider how to best aggregate departmentwide investment data (from S&T, logistics support, and acquisition programs) in the power sources area and develop a mechanism to aggregate power source investment data across these investment categories at a level sufficient to guide decisions and policy. To ensure a high level of interagency participation and coordination in the power sources S&T area, we recommend that the Secretary of Defense determine methods to strengthen pertinent member agency participation in interagency coordination mechanisms. To increase DOD-wide emphasis on power source standardization both during design of weapon systems and equipment as well as for deployed systems, we recommend that the Secretary of Defense identify and direct the appropriate office(s) to take the following actions:

- Develop a plan to optimize use of standard power sources for weapon system or equipment types that are more amenable to such standardization.

- Develop a DOD-wide policy—based on the above standardization plan—similar to section 8.8 of Army AR 70-1 that requires senior acquisition executive approval before allowing acquisition programs to use a power source that is not standard or preferred. As part of this new policy, consider requiring an independent review of the appropriateness of using the nonstandard or nonpreferred power source.

- Identify opportunities to cost-effectively retrofit deployed weapon systems and equipment that use a proprietary power source with an existing military standard or other preferred power source.

In written comments on a draft of this report, DOD concurred with one of our five recommendations and partially concurred with four. The department stated that it had already taken or plans to take specific actions in response to our recommendations, but it is unclear from DOD’s response what these actions entail. DOD concurred with our recommendation that the Secretary of Defense consider how to best aggregate departmentwide investment data (from S&T, logistics support, and acquisition programs) in the power sources area and develop a mechanism to aggregate power source investment data across these investment categories at a level sufficient to guide decisions and policy. We believe that aggregating these data is important to inform decision making and investment in the power sources area. DOD partially concurred with our recommendation that the Secretary of Defense determine methods to strengthen pertinent member agency participation in interagency coordination mechanisms. DOD commented that existing coordination mechanisms are generally effective and have been improving since the office of the Director, Operational Energy Plans and Programs (DOEPP) was established.
DOD added that the DOEPP office will continue to seek ways to strengthen interagency coordination. However, DOD did not provide specific information on how it believes coordination mechanisms have improved or what additional methods might be used to strengthen coordination. Our review identified voluntary attendance and varying levels of participation in interagency groups; strengthening participation could further improve coordination. DOD also partially concurred with three recommendations related to power source standardization, namely, that the Secretary of Defense (1) identify and direct appropriate office(s) to develop a plan to optimize use of standard power sources for weapon system or equipment types more amenable to standardization; (2) develop a DOD-wide policy similar to section 8.8 of Army AR 70-1 that requires senior acquisition executive approval before allowing acquisition programs to use a power source that is not standard or preferred; and (3) identify opportunities to cost-effectively retrofit deployed weapon systems and equipment that use a proprietary power source with an existing military standard or other preferred power source. DOD indicated that ongoing activities led by the DOEPP office are adequately addressing all these needs and that no expansion of effort is necessary. However, DOD did not provide any details related to specific, ongoing DOEPP activities addressing these needs, and we found no evidence of any such DOD or DOEPP actions while conducting our review. While DOD established the DOEPP office in October 2009, it has only had a Director since June 2010. In late August 2010, DOEPP office officials informed us that they were still writing position descriptions and working to gather the personnel required to support their efforts, but they gave no indication that any substantive work had been undertaken. Our review revealed that there is no DOD-wide plan or policy to emphasize power source standardization, even though DOD officials told us that DOD needs further emphasis in this area. Without a departmentwide plan to emphasize or compel early consideration of standard power sources, the use of unique, proprietary power sources will likely continue and DOD will not be able to obtain the full benefits of standardization, such as reduced item unit costs and a smaller logistical footprint. Because DOD did not identify specific actions it has taken or plans to take to implement our recommendations, we believe that DOD may not have appropriately considered them, and as a result we are concerned that in the coming months it will not seek ways to fully implement these recommendations. DOD’s written comments are reprinted in appendix III.

We are sending copies of this report to the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
For the purposes of this report, we limited “power sources” to tactical power sources used for soldier-portable and vehicle applications (e.g., motorized land vehicles, aircraft, and ships) as well as munitions and satellite power sources. We excluded power sources for operational or strategic applications, including power sources used to support installations such as temporary or permanent military facilities, because of the size and complexity of the tactical power sources portfolio and its significance to the efforts of the warfighter. We focused on batteries, fuel cells, and capacitors based on (1) language in the congressional mandate, (2) the predominance of batteries among tactically deployed power sources, and (3) the recommendations of Department of Defense (DOD) experts.

To determine DOD’s total investment in power sources, we met with officials from the Office of the Secretary of Defense (OSD) and across DOD component organizations to determine an appropriate methodology for collecting as complete a set of investment data as possible. We divided investment into three categories generally based on the three main defense technology life cycle areas: (1) science and technology (S&T); (2) logistics support, or the provision of logistics, materiel, and transportation according to military needs; and (3) power sources for DOD weapon system or equipment acquisition programs. Based on a review of the budget and on discussions with OSD officials, we found that there was no central repository for DOD investments in power source S&T. DOD officials told us that one would have to request the data from each pertinent S&T organization. As a result, we developed a data collection instrument asking each research organization to provide data on all power source projects within our scope. Specifically, we requested project-level information, including the project name, purpose, budget activity, and funding history from fiscal year 2006 through fiscal year 2010. We also requested data on projected future funding, but not all organizations were able to provide this information. The Office of Naval Research (ONR) compiled the data for the Navy since ONR manages all Department of the Navy S&T funds, including those for the Marine Corps. The Army Deputy Director for Technology from the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology’s Research and Technology Division compiled the data from the Army research organizations. The Air Force Research Laboratory compiled data for Air Force power source S&T projects. We assessed the reliability of these S&T data by (1) performing electronic testing of required data elements and (2) obtaining responses from agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of presenting an approximate total of S&T investments in this report. This investment amount is approximate; creating an exhaustive list of all power source S&T projects was not possible given the lack of centralized DOD management of this area and our reliance on data gathered by each research organization. Additionally, since some organizations involved in this area are funded by other DOD customers, it is difficult to accurately track the precise amounts of funding for specific projects. We also interviewed officials from each service and its component research organizations about S&T efforts in the power sources area.
For the Army, we met with the Army Deputy Director for Technology from the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology, Research and Technology Division; officials from the Army Research, Development, and Engineering Command; officials from the Army Research Laboratory; officials from the Army Communications-Electronics Research, Development and Engineering Center’s Army Power Division; and the Program Manager-Mobile Electric Power Product Director for Batteries. The Assistant Secretary of the Army for Installations and Environment and the Army Tank Automotive Research, Development and Engineering Center both provided written responses to our questions. For the Navy, we spoke with officials from ONR; the Naval Surface Warfare Center Crane Division; the Naval Undersea Warfare Center Newport Division; the Naval Air Systems Command’s Power and Energy Division; and the Marine Corps Systems Command. The Assistant Secretary of the Navy for Energy, Installations, and Environment provided written responses to our questions. For the Air Force, we spoke with officials from the Air Force Research Laboratory, and we obtained written responses from the Deputy Assistant Secretary of the Air Force for Energy, Environment, Safety and Occupational Health and the Air Force Materiel Command. We also obtained written responses from the Defense Advanced Research Projects Agency (DARPA). We also spoke with officials from U.S. Special Operations Command and obtained data on their power source S&T investments.

To assess the involvement of the defense power sources industry in DOD investments in power source S&T, we met with representatives of Saft America Inc. (Saft), Advanced Thermal Batteries, and EaglePicher Technologies, LLC (EaglePicher). According to the companies, Saft and EaglePicher are two large DOD battery suppliers. We also attended an annual power sources technology conference as well as two meetings of the National Defense Industrial Association (NDIA) Military Power Sources Committee and spoke with representatives from additional companies, including Alion Science and Technology, Dow Kokam, and Yardney Technical Products. We also gathered information through interviews with and written responses from the membership of the NDIA Military Power Sources Committee in order to gain additional perspective from the industry. We also met with members of the South Carolina Research Authority’s Defense Advanced Battery Manufacturing Coalition.

To determine DOD’s investments in power sources as part of DOD weapon system or equipment acquisition programs, we initially searched DOD budget requests to locate power source investment data related to acquisition programs. This method demonstrated that power sources are typically not broken out as specific cost elements of budget request line items related to acquisition programs. We were told by cognizant DOD officials that this information was not available in an aggregated format. Because we judged that the scope of DOD’s existing acquisition programs, which includes around 100 major defense acquisition programs and smaller programs, was too large for us to obtain information from every program, we obtained information from selected programs. We did not assess the reliability of acquisition program data because we determined that it would not be feasible for DOD to generate these data to enable us to determine the investment in this area for this report.
We selected weapon systems and equipment from each of the military services to provide a cross section of weapon system and equipment types (e.g., aircraft, satellites, ships, vehicles, and portable electronics). As part of this effort, we spoke with program office officials and obtained data from the following programs:

- Army: Patriot/MEADS missile and Joint Light Tactical Vehicle program offices.

- Navy: Joint Program Executive Office for the Joint Tactical Radio System and the DDG 1000 destroyer, AGM-88E Advanced Anti-Radiation Guided Missile, P-8A Poseidon, Joint Multi-mission Submersible, Mine-Resistant Ambush Protected vehicle, and V-22 Osprey program offices.

- Air Force: Joint Air-to-Surface Standoff Missile, Navstar Global Positioning System (GPS) III, and Advanced Extremely High Frequency satellite program offices.

To determine DOD’s investments in logistics support, we requested Defense Logistics Agency (DLA) data on sales of power sources to the military from fiscal year 2006 through fiscal year 2010. Though these data do not include power sources that DLA might have procured as part of its inventory management processes, they do include all power sources that the military services bought from DLA during this period. To obtain data on military service power source procurements that occur outside of DLA, we obtained data from the Air Force Materiel Command, the Naval Supply Systems Command, and the Army Materiel Command. We assessed the reliability of logistics support data by (1) performing electronic testing of required data elements and (2) obtaining responses from agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of presenting a minimum investment in this area in this report. Our investment total for logistics support represents a minimum amount because, as DOD officials informed us, the data we obtained from DLA and military service logistics databases do not capture power source purchases made as part of contract logistics support—a type of contracting activity on which DOD has relied extensively.

To assess the degree to which DOD coordinates power source investments, we spoke with cognizant officials from each of the military services, research organizations across DOD, and DLA—including DLA’s Battery Network group. For information on coordination of S&T investments, we spoke with the Army Deputy Director for Technology from the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology, Research and Technology Division; officials from the Army Research, Development, and Engineering Command; officials from the Army Research Laboratory; officials from the Army Communications-Electronics Research, Development and Engineering Center’s Army Power Division; and the Program Manager-Mobile Electric Power’s Product Director for Batteries. The Assistant Secretary of the Army for Installations and Environment and the Army Tank Automotive Research, Development and Engineering Center both provided written responses to our questions. For the Navy, we spoke with officials from ONR, the Naval Surface Warfare Center Crane Division, the Naval Undersea Warfare Center Newport Division, the Naval Air Systems Command’s Power and Energy Division, and the Marine Corps Systems Command. We also received written responses to our questions from the Assistant Secretary of the Navy for Energy, Installations, and Environment.
For the Air Force, we spoke with officials from the Air Force Research Laboratory, and we obtained written responses from the Deputy Assistant Secretary of the Air Force for Energy, Environment, Safety and Occupational Health. In addition, we obtained written responses from DARPA. We also spoke with officials from the DOD ManTech office and officials involved with the DOD Reliance 21 program and the Energy and Power Community of Interest. We also took part in a training session related to DOD-wide information-sharing resources. To assess the effectiveness of some of DOD’s coordinating mechanisms, we attended the 44th Power Sources Conference, where industry, academic, and DOD power source researchers and other experts discussed ongoing power source S&T efforts. We attended the annual meeting of the Chemical Working Group of the Interagency Advanced Power Group as well as a meeting of the Power Sources Technology Working Group. In addition, we spoke with members of the Lithium Battery Technical/Safety Group. To assess DOD coordination with the Department of Energy (DOE), we spoke with representatives of the Joint DOD/DOE Munitions Technology Development Program and the DOE Office of Vehicle Technologies. We also drew extensively on other GAO work related to interagency coordination.

To assess the extent to which DOD’s policies facilitate the use of standard power sources, we met with cognizant officials from each of the military services, including officials from the Army Communications-Electronics Research, Development and Engineering Center and the Program Manager-Mobile Electric Power’s Product Director for Batteries. We received written responses to questions from an official from the Defense Standardization Program’s Joint Standardization Board for Power Source Systems. We also received written responses from the Assistant Secretary of the Navy for Energy, Installations, and Environment; the Assistant Secretary of the Army for Installations and Environment; and the Deputy Assistant Secretary of the Air Force for Energy, Environment, Safety and Occupational Health. We also reviewed applicable standardization policies and regulations.

We conducted this performance audit from December 2009 to December 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The seven RDT&E budget activities are defined as follows:

1. Basic research: Systematic study directed toward greater knowledge or understanding of fundamental aspects of phenomena without specific applications toward processes or products in mind.

2. Applied research: Systematic study to understand the means to meet a recognized and specific need.

3. Advanced technology development: Development of subsystems and components and efforts to integrate subsystems and components into system prototypes for field experiments, tests in a simulated environment, or both.

4. Advanced component development and prototypes: Efforts necessary to evaluate integrated technologies, representative modes, or prototype systems in a high-fidelity and realistic operating environment.

5. System development and demonstration: Conducting engineering and manufacturing development tasks aimed at meeting validated requirements prior to full-rate production.

6. RDT&E management support: RDT&E efforts and funds to sustain, modernize, or both, the installations or operations required for general RDT&E.
7. Operational system development: Development efforts to upgrade systems that have been fielded or have received approval for full-rate production and anticipate production funding in the current or subsequent fiscal year.

In addition to the contact named above, Art Gallegos, Assistant Director; John Oppenheim, Assistant Director; Frederick K. Childers; John Dell’Osso; Rosa Johnson; John Krump; C. James Madar; Bill Solis; Don Springman; Bob Swierczek; and Mark Viehman made key contributions to this report.
Virtually all Department of Defense (DOD) weapon systems and equipment rely on power sources, such as batteries. In response to a mandate in the National Defense Authorization Act for Fiscal Year 2010, GAO determined (1) DOD's approximate investment in power sources, (2) the extent to which DOD coordinates its power source investments, and (3) the extent to which DOD's policies facilitate the use of standard power sources. To address these objectives, GAO obtained and analyzed DOD investment data, met with DOD officials and industry representatives, and attended DOD conferences aimed at facilitating power source coordination. GAO determined that DOD invested at least $2.1 billion in power sources from fiscal year 2006 through fiscal year 2010. However, DOD lacks comprehensive, departmentwide data for its total investment in the power sources area. Availability of complete data varies across the three investment categories: science and technology (S&T), logistics support, and acquisition programs. While DOD appears to have adequate departmentwide data on S&T efforts, it does not have departmentwide data for all logistics support investments. DOD lacks sufficient data on its investments in power sources when they are developed or purchased for acquisition programs. The $2.1 billion amount includes investments in S&T and logistics support that GAO was able to identify, but not power source investments as part of acquisition programs because of the difficulty in obtaining investment data in that area. This lack of complete, departmentwide investment data hinders DOD's oversight and future planning in the power sources area, adversely affecting its ability to ensure basic accountability, anticipate future funding, and measure performance. DOD's mechanisms for coordinating power source S&T--including interagency working groups, conferences, informal networks, and information technology resources--are generally effective. However, participation by pertinent member agencies in some of these activities is voluntary, and the level of participation varies; as a result, agencies may be missing opportunities to coordinate activities--such as avoiding initiation of similar research projects--and leverage resources. In addition, DOD's strategic planning process to facilitate the allocation of S&T funds for power source technologies could be improved. S&T planning efforts can also be complicated by external factors, such as the additions Congress makes to DOD's budget. Although DOD power source experts GAO spoke with agree that the department needs to increase its emphasis on power source standardization, DOD lacks departmentwide policies to help emphasize power source standardization. Existing policies have demonstrated limited effectiveness because of compliance problems and because they may apply only to specific power source applications. Although it is generally more economical to address standardization early in the acquisition process, according to DOD officials, power sources are generally not considered early in the process, potentially hindering standardization efforts. DOD has also not evaluated departmentwide opportunities for retrofitting deployed weapon systems and equipment with standard or other preferred power sources when cost effective. To increase oversight of power source investments, GAO recommends that DOD consider how to best aggregate departmentwide investment data.
To improve interagency coordination of S&T projects, DOD should determine ways to strengthen agency participation in coordination mechanisms. To increase emphasis on standardization, DOD should develop a standardization plan and enforceable departmentwide policies and identify opportunities to retrofit existing systems with standard power sources when cost effective. DOD concurred with the first recommendation and partially concurred with the other four. It was unclear from DOD's response what actions it plans to take in response to GAO's recommendations.
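GAO's first recommendation concerns aggregating departmentwide investment data. As a rough illustration of the kind of roll-up that recommendation implies, the Python sketch below totals hypothetical power source investment records by category; the record layout and every dollar figure are illustrative assumptions, not DOD data, and the absent acquisition-program records mirror the data gap GAO describes.

```python
from collections import defaultdict

# Hypothetical investment records; the categories mirror the three named in
# this report (S&T, logistics support, acquisition programs). All figures
# are illustrative assumptions, not DOD data.
records = [
    {"category": "S&T", "fiscal_year": 2006, "millions": 120.0},
    {"category": "S&T", "fiscal_year": 2007, "millions": 135.0},
    {"category": "logistics support", "fiscal_year": 2006, "millions": 310.0},
    {"category": "logistics support", "fiscal_year": 2007, "millions": 295.0},
    # No acquisition-program records -- mirroring the data gap GAO found.
]

totals = defaultdict(float)  # total investment by category
for rec in records:
    totals[rec["category"]] += rec["millions"]

for category in ("S&T", "logistics support", "acquisition programs"):
    if category in totals:
        print(f"{category}: ${totals[category]:,.1f} million")
    else:
        print(f"{category}: no departmentwide data available")
```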
The President’s fiscal year 2000 budget, which was submitted to the Congress on February 1, 1999, included nearly $2.8 billion for the Census Bureau to perform the 2000 decennial census. This original budget request reflected the bureau’s plan to gather information based in part on using statistical estimation for nonresponding households and to adjust for undercounting and other coverage errors. However, on January 25, 1999, the Supreme Court ruled that the Census Act prohibits the bureau from using sampling for purposes of congressional apportionment. According to the bureau, it must therefore enumerate an estimated 12 million additional nonresponding households and expand programs designed to address undercounting and limit other coverage errors. The bureau also plans to visit 4 million additional addresses for which the Postal Service has returned the questionnaire because it believes that the housing unit is vacant or nonexistent. In light of the need for the additional enumeration, the bureau requested a substantial increase to its original budget request for fiscal year 2000— approximately $1.7 billion—for a total budget of approximately $4.5 billion.

Thus, for purposes of apportionment, the 2000 census will be done similarly to those for the last several decades in which questionnaires are mailed to the majority of the nation’s households asking the occupants to mail back the completed questionnaires. In addition, for the 2000 census, the bureau plans to incorporate statistical estimation for other purposes, such as providing states with information for redistricting. As part of the 2000 census, the bureau will allow the public to respond in a variety of new ways, such as through a toll-free telephone number, the Internet, and unaddressed questionnaires made available at public locations.

When compared to the 1970 census, the projected full-cycle cost per housing unit of the 2000 census will quadruple (in constant fiscal year 1998 dollars). As shown in figure 1, the cost will nearly double from the previous census (1990). The accuracy of the 1990 census decreased compared to the 1980 census. Specifically, the net undercount for the 1990 census was estimated at 1.6 percent of the population (about 4 million persons) compared to 1.2 percent for the 1980 census. The bureau estimated that about 4.4 million persons were counted twice or otherwise improperly included in 1990, while 8.4 million were missed. Moreover, the sum of these numbers—12.8 million—represents a minimum tally of gross errors since it does not include other errors, such as persons assigned to the wrong locations.

We interviewed bureau officials who provided us with an overview of the budget formulation process for developing the $4.5 billion amended budget request. Bureau officials provided us with copies of the cost models used to develop the original and amended budget requests. The $1.7 billion requested increase represents the difference between the original and amended budget requests. Therefore, to ensure that the difference between the original and amended budget requests was supported by the two models, we reviewed the summarized output and structure of each model and verified the mathematical accuracy of the difference. We reconciled the output from these models to the budget request amounts to determine whether they mathematically agree with the original and amended budget requests. We then isolated the differences in the models by the bureau’s eight program activities or frameworks.
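The arithmetic underlying that reconciliation is simple to restate. The following Python sketch uses only dollar figures cited in this report (the $2.8 billion original request, the $4.5 billion amended request, the $1.723 billion net increase detailed in appendix I, and the $104 million portion the bureau considers unrelated to the sampling decision) to check that the components tie together; it is an illustration of the check we performed, not the bureau's cost model.

```python
# Figures as cited in this report, in billions of dollars.
original_request = 2.8
amended_request = 4.5
net_increase = amended_request - original_request        # about $1.7 billion

total_increase = 1.723   # precise net increase detailed in appendix I
unrelated = 0.104        # portion the bureau considers unrelated to sampling
related = total_increase - unrelated                     # about $1.6 billion

print(f"amended minus original: ${net_increase:.1f} billion")
print(f"related to the sampling decision: ${related:.3f} billion")
# The rounded $1.7 billion figure used in the letter matches the precise
# $1.723 billion total from the bureau's cost models.
assert round(total_increase, 1) == round(net_increase, 1)
```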
After isolating the differences, we selected for further analysis cost increases/decreases and related changes in assumptions that contributed most to each framework’s total net increase or otherwise warranted explanation. We focused our analysis on the “field data collection and support systems” program activity because nearly $1.5 billion of the $1.7 billion requested increase was related to this framework. We obtained from the bureau documentation supporting or explaining the basis for the change in assumptions and how the increase/decrease related to the inability to use statistical sampling. The bureau categorized the components of the $1.7 billion requested increase into those it considered related to the statistical sampling issue and those it considered to be unrelated. We did not conclude whether the bureau properly categorized the components of the $1.7 billion requested increase as related and unrelated.

We did not assess the efficiency or effectiveness of the bureau’s plans to conduct the 2000 census or validate the bureau’s data and assumptions used to develop the budget requests. We performed our work in Washington, D.C., and at the bureau’s headquarters in Suitland, Maryland, from June through September 1999 in accordance with generally accepted government auditing standards.

The bureau’s request for an additional $1.7 billion for fiscal year 2000 primarily involves changes in assumptions related to increased workload, reduced employee productivity, and increased advertising. The following sections discuss the changes in these key assumptions between the original and amended budget requests.

An assumed increase in workload substantially increased costs in the fiscal year 2000 amended budget request for enumerator and support salaries, benefits, travel, data capture requirements, infrastructure, and supplies. The primary reason for the increase in workload is the bureau’s plan to visit an additional 16 million households using a traditional census approach instead of sampling to estimate these housing units. Also adding to the workload are additional programs intended to improve the coverage and quality of the census.

The bureau’s amended budget request assumes that it will enumerate the 12 million nonresponding housing units that under the original budget request would have been statistically estimated. The bureau also estimated that the Postal Service would determine that of the 119 million total estimated housing units, about 6 million would be vacant or nonexistent. The purpose of visiting these 6 million housing units is to confirm that the Postal Service was correct in concluding that a housing unit was vacant or nonexistent. In 1990, the bureau found that about 20 to 30 percent of the housing units identified as vacant or nonexistent were actually occupied. As shown in figure 2, under the original budget request, the bureau assumed that it would visit only 2 million of the 6 million housing units and that the remaining 4 million housing units would be accounted for through statistical estimation. Under the amended budget request, the bureau assumes that it will visit all 6 million housing units. Also contributing to the workload increase are a number of programs intended to improve the coverage and quality of the census that were not in the original budget request.
According to the bureau, these programs were added to improve the accuracy of the 2000 census since the bureau would be using a traditional approach and attempting to obtain information directly from all 119 million estimated households. In the bureau’s original budget request, statistical estimation was intended to improve the accuracy of the 2000 census by adjusting census counts for undercounting and other coverage errors. One additional program involves reinterviewing a sample of housing units for which enumerators had previously completed census questionnaires. This program, which is budgeted to cost $22.7 million, involves sampling housing units with questionnaires completed by enumerators, reinterviewing to confirm responses, and adjusting the responses to the census questionnaires to correct any errors found. Another program, budgeted at $25.2 million, involves attempting to redeliver certain census questionnaires that are returned by the Postal Service. This program is targeted at housing units that have a change in address or zip code between the fall of 1999, when the bureau delivers its address files to its printing vendors, and March 2000, when questionnaires are to be delivered. According to the bureau, the improved coverage and accuracy resulting from these programs would have been accounted for in the original plan through statistical estimation techniques. Although the programs the bureau added in the amended budget request are intended to improve the accuracy of the 2000 census, it is unclear whether these additional programs will result in a 2000 census that is more accurate than the 1990 census.

It is important to note that another key assumption—the average mail response rate for all questionnaire types—remained constant between the original and amended budget requests at 61 percent. Given an estimated 119 million households in the census population, a 1-percent change in the response rate increases or decreases the number of households that must be enumerated by about 1.2 million. The mail response rate has decreased substantially in recent years, dropping from 78 percent in 1970 to 65 percent in 1990. If the response rate is substantially different from the 61 percent that the bureau projects, assuming that all other assumptions prove to be accurate, the bureau’s fiscal year 2000 amended budget request could differ substantially from the bureau’s needs.

A significant factor increasing the bureau’s budget request is a 20-percent reduction in the assumed productivity rate for temporary employee enumerators following up on nonresponding housing units. In the original budget request, the bureau used an average productivity rate of about 1.28 households per hour, which was reduced in the amended budget request to about 1.03. This revised productivity rate was applied to all temporary enumerator employees for nonresponse follow-up—both those to be hired to visit the 30 million housing units under the original budget request as well as those for the additional 16 million housing units added after the use of sampling for nonresponse follow-up was prohibited by the Census Act.
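Because the response rate and productivity assumptions drive so much of the request, a small sensitivity sketch is useful. The Python sketch below uses only the figures above (119 million households, the 61-percent assumed response rate, and the 1.28 and 1.03 households-per-hour productivity rates); it is an illustration of the stated sensitivities, not the bureau's cost model.

```python
households = 119_000_000        # estimated housing units in the census
assumed_rate = 0.61             # assumed mail response rate, both requests

def followup_workload(response_rate):
    """Households to be enumerated because no questionnaire is mailed back."""
    return households * (1 - response_rate)

for rate in (assumed_rate - 0.01, assumed_rate, assumed_rate + 0.01):
    print(f"response rate {rate:.0%}: "
          f"{followup_workload(rate) / 1e6:.1f} million follow-up households")
# Each 1-percentage-point change shifts the workload by about 1.2 million
# households (1 percent of 119 million).

original, amended = 1.28, 1.03  # households enumerated per hour
print(f"productivity reduction: {(original - amended) / original:.0%}")  # ~20%
```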
The bureau provided several explanations for reducing the assumed enumerator productivity rate from 1.28 in the original budget request to 1.03 in the amended budget request. For example, the bureau believes that given the low unemployment rate in the United States, hiring temporary employees to fill hundreds of thousands of new positions is expected to result in the hiring of less productive workers. In addition, the bureau believes enumerators will work more slowly in order to avoid mistakes, knowing that they are being reviewed in a more thorough and formal manner as part of the additional quality control programs. Also, the bureau assumed an increase in rework resulting from added quality control programs. The bureau did not provide any documented internal or external quantitative analysis or other analysis that supported the original or the revised productivity rate. Consequently, the 20-percent reduction in productivity is based on senior management judgments, which the bureau acknowledges are very conservative.

Because of the increased workload and reduced productivity, the bureau increased the total number of temporary field positions from the 780,000 estimated in the original budget to 1,350,000. This increase in positions does not necessarily mean that the bureau is hiring 570,000 additional people. Many positions exist for only a few weeks; thus some individuals can be, and are, hired for more than one position. Of the 570,000 increase, about 200,000 positions (35 percent) are for nonresponse follow-up and 120,000 (21 percent) are for enumeration-related activities such as counting people at homeless shelters; 220,000 positions (38 percent) are for added coverage improvement and quality control programs. (The remaining 6 percent includes various other positions.)

The bureau included nearly $72 million for advertising intended to increase questionnaire responses, including advertising that will be targeted at hard-to-enumerate communities. This additional advertising was primarily intended to educate the public on the 2000 census and to increase the mail response rate for questionnaires and thus reduce the bureau’s workload. The bureau’s original budget request included about $56 million for a paid advertising program in fiscal year 2000. The bureau intends to use the additional $72 million for motivational programs intended to increase the mail-in response rate and for an expanded “Census in the Schools” program. An example of how the bureau plans to spend this money is a planned educational message, delivered several weeks before the data collection period, reminding residents in hard-to-enumerate communities about the benefits of participating in the census process. Also, advertising is to be targeted to run concurrent with the follow-up visits to nonresponding households so that enumerators have a better chance of successfully gaining access to households and completing census forms. The bureau has no data available to support how much, if any, the increased advertising will increase the response rate. As a result, the bureau’s assumed average questionnaire response rate of 61 percent in the original budget request did not increase in the amended budget request. Thus, the bureau has assumed no cost savings in the form of increased response rate and resultant reduced workload from the increased advertising dollars.
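The position arithmetic above can be verified directly. The short Python sketch below recomputes the percentage breakdown of the 570,000-position increase from the counts cited in this report; note that the report's rounded shares are 35, 21, 38, and 6 percent.

```python
increase = 1_350_000 - 780_000          # 570,000 additional field positions
components = {
    "nonresponse follow-up": 200_000,
    "other enumeration activities": 120_000,
    "coverage improvement and quality control": 220_000,
}
for name, positions in components.items():
    print(f"{name}: {positions / increase:.1%}")

remainder = increase - sum(components.values())          # 30,000 positions
print(f"various other positions: {remainder:,} ({remainder / increase:.1%})")
```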
According to the bureau, about $1.6 billion of the $1.7 billion requested increase is related to the inability to use statistical sampling, and the remaining $104 million is not related. Unrelated items include costs not included in the original budget request and revisions of prior estimates. Examples of items unrelated to the decision include the following: $29 million for leasing of common space, $23 million for long-distance telephone service, $16 million for improving address lists and delivering questionnaires, and $10 million for copier paper and map supplies. We did not conclude whether the bureau properly categorized components of the $1.7 billion requested increase as related and unrelated. The briefing slides in appendix I provide a detailed discussion by program activity (or framework) of the components of the $1.7 billion requested increase.

To develop the amended fiscal year 2000 budget request, the bureau used a model that consisted of an extensive set of interrelated software spreadsheets. Both the original and amended budget requests were developed with this cost model, with each estimate being developed independently using different versions of the cost model. The process the bureau used to develop the amended budget request involved (1) revising key assumptions in the cost model supporting the original budget request that were based on the statistical estimation approach for nonresponse follow-up to improve accuracy and (2) incorporating new assumptions based on a traditional census approach. As shown in table 1, the bureau derived the $1.7 billion requested increase by calculating the net difference between the original budget request of $2.8 billion and the amended budget request of $4.5 billion. The output from the bureau’s models mathematically agrees with the original and amended budget requests.

The bureau’s budget estimation model is a set of 16 spreadsheets with thousands of separate formulas that generate estimates for specific census program activities. The formulas generate outputs for items such as salaries and benefits, travel, and advertising based on hundreds of different assumptions. The model serves as a collection point for field costs as well as headquarters and other costs estimated outside the model. Of the $4.5 billion amended budget request, about $1.05 billion (23 percent) was calculated outside the model. This $1.05 billion includes costs for headquarters activities and contracts. The assumptions are developed by program managers and are generally based on either third party evidence, such as independent studies, or senior management’s judgment.

We provided a draft of this report to the Department of Commerce for comment. As requested, the Director of the Bureau of the Census provided written comments on behalf of the department within 2 days. (See appendix II.) We appreciate the bureau’s rapid response to the draft and its overall cooperation and timely responses to our data requests. In commenting on a draft of this report, the bureau concurred with the facts as presented in the report and provided its perspective on four matters, which we address below.

First, the bureau stated that in addition to serving primarily as the provider of apportionment counts, the 2000 census has three other major components of unique usefulness to the American public: providing data to be used for legislative redistricting, federal housing surveys, and distribution of federal funds. We agree with the bureau that these other components of the 2000 census are important. However, the purpose of our report was to analyze the bureau’s overall amended budget request with a focus on the $1.7 billion requested increase.
Given that objective, our report focused on activities that resulted in the substantial increase in the bureau’s original fiscal year 2000 budget request. As our analysis shows, the net $1.7 billion requested increase was primarily related to the bureau’s assumptions of increased workload, reduced employee productivity, and increased advertising. We did not see any substantial cost increases related to redistricting, federal housing surveys, and distribution of federal funds.

Second, the bureau noted that the current plan for the 2000 census contains components of both a traditional and a sampling methodology. In addition, the bureau noted that beginning in November 1997, it devoted a substantial effort in planning (on a dual track basis) for a census based entirely on a traditional methodology. Based on our review of the bureau’s fiscal year 2000 amended budget request, we agree that the 2000 census includes components of both traditional and sampling methodologies. However, the increase in the estimated cost of the 2000 census is driven primarily by factors relating to a traditional census. The $1.7 billion requested increase in the bureau’s fiscal year 2000 budget is net of a decrease of over $200 million related to the Accuracy and Coverage Evaluation (ACE) program. The purpose of ACE in the amended budget request is primarily to estimate the population for purposes such as redistricting by sampling households from an address list developed independently from the list used to perform the census. Under the bureau’s original budget request, the results of ACE (formerly Integrated Coverage Measurement) were to have been statistically combined with the results of enumeration to provide a single, integrated set of population counts. Evaluating the bureau’s planning for a “dual track” census was beyond the scope of our work.

Third, the bureau provided its perspective on the reasons why the full-cycle cost per housing unit of the 2000 census is projected to nearly double when compared to the 1990 census. The bureau pointed to such factors as normal inflationary increases, infrastructure, federal wage increases, declining response rate, paid advertising, postage, and information technology. It is important to note that our analysis eliminates the impact of general inflation and the growth in housing units over the last 4 decades. Thus, our comparison of full-cycle cost per housing unit is an “apples to apples” comparison. The purpose of showing this comparison is to provide a historical perspective on the real increase in the full-cycle cost per housing unit of the decennial census. Providing an analysis of the reasons for the substantial increase of the projected cost of the 2000 census was beyond the scope of our review. We agree with the bureau that factors such as a declining mail response rate would increase the relative costs of the census. However, it is important to point out that the bureau was unable to demonstrate whether the benefits of certain activities, such as increased advertising and additional programs, justify their cost. In addition, the bureau could not say whether a 2000 census costing an estimated $56 per housing unit (in constant 1998 dollars) will result in a census that is more accurate than 1990.

Finally, the bureau stated that it believes it is prudent to assume a temporary enumerator employee productivity rate of 1.03 housing units per hour.
The bureau cited the immovable calendar and the need to hire a large temporary labor force, the size of which is a direct result of assumed lower productivity. The bureau believes our use of “very conservative” leaves an unwarranted impression of management choices made at one end of a spectrum, which could be shifted to the other end of the spectrum without risk. Based on our analysis, we continue to conclude that the bureau’s productivity assumption of 1.03 is based on senior management’s “very conservative” judgment as was represented to us by bureau staff during our review. What we mean by “very conservative” is that the bureau is planning for a worst case scenario. As we reported, the bureau did not provide any documented internal or external quantitative analysis or other analysis to support the initial or revised productivity rates. As such, our only possible conclusion is that the 1.03 was developed based on management’s judgment, which is what bureau staff represented to us during our review. Because the reported productivity rate from the 1990 census was 1.56 and the assumption used in the original budget was 1.28, we conclude that the 1.03 assumption used in the amended budget request, which covers not only the additional 16 million housing units now planned to be enumerated but also the 30 million housing units to be enumerated in the original budget, is very conservative—or a “worst case scenario.” Should the bureau’s productivity assumption prove to be more or less than 1.03 housing units per hour, the bureau’s fiscal year 2000 amended budget request could differ substantially from the bureau’s needs.

We are sending copies of this report to Senator Fred Thompson, Senator Joseph Lieberman, Senator Judd Gregg, Senator Ernest Hollings, Representative Carolyn Maloney, Representative Dan Burton, Representative Henry Waxman, Representative Harold Rogers, and Representative José Serrano in their capacities as Chairs or Ranking Minority Members of Senate and House Committees and Subcommittees. We are also sending copies to the Honorable William M. Daley, Secretary of Commerce; the Honorable Kenneth Prewitt, Director of the Bureau of the Census; the Honorable Jacob J. Lew, Director of the Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. If you have any questions on matters discussed in this letter, please contact me at (202) 512-3406. Other contacts and key contributors to this report are listed in appendix III.

The appendix I briefing slides cover three topics: Objectives, Scope, and Methodology; Components of the $1.7 Billion Requested Increase; and Process Used to Develop Fiscal Year 2000 Budget.

Background
The Constitution requires a decennial census of the population in order to apportion seats in the House of Representatives. The decennial census is the nation’s most expensive data gathering program. Public and private decisionmakers also use census data on population counts and social and economic characteristics for a variety of purposes. The bureau typically performs a “dress rehearsal” several years before the actual census to help in its planning efforts.

Background (cont’d)
The President’s fiscal year 2000 budget request, submitted to the Congress on February 1, 1999, included nearly $2.8 billion for the decennial census. The bureau’s original request reflected its plan to gather information based in part on using statistical estimation for nonresponding households and to adjust for undercounting and other coverage errors.
On January 25, 1999, the Supreme Court ruled that the Census Act prohibits the bureau from using statistical sampling to calculate the population used to apportion the House. Following this ruling, in June 1999, the bureau submitted an amended budget request substantially increasing its original fiscal year 2000 budget request.

Background (cont’d)
In November 1997 in the Department of Commerce and Related Agencies Appropriations Act for 1998, the Congress directed the bureau to begin planning for a census in 2000 without using statistical methods. However, the bureau did not begin detailed budgeting for a nonsampling census until after the Supreme Court’s decision in January 1999. The 2000 census will be done in a manner similar to those for the last several decades, in which questionnaires are delivered to identified households asking the occupants to mail back the completed questionnaires. In addition, for the 2000 census the bureau plans to incorporate statistical estimation for other purposes, such as providing states with information for redistricting. The public will have the option of responding via a toll-free telephone number, the Internet, and unaddressed questionnaires placed in public locations.

Background (cont’d)
Compared to the 1970 census, the projected full-cycle cost per housing unit of the 2000 census will quadruple (in constant fiscal year 1998 dollars). Compared to the 1990 census, the projected full-cycle cost per housing unit will nearly double.

Objectives
Provide an overall analysis of the key changes in assumptions resulting in the $1.7 billion requested increase. Provide details on the components of the $1.7 billion requested increase and which changes, according to the bureau, are related and which are not related to the inability to use statistical sampling. Describe the process used to develop the increase in the original fiscal year 2000 budget request and the overall amended budget request.

Scope and Methodology
Interviewed bureau officials, who provided us with an overview of the budget formulation process. Reviewed copies of the cost models used to develop the original and amended budget requests. Reviewed the summarized output and structure of each model and verified the mathematical accuracy of the difference. Reconciled the output from the models to determine whether they mathematically supported the original and amended budget requests.

Scope and Methodology (cont’d)
Obtained documentation from the bureau supporting or explaining the basis for the changes in assumptions. Had the bureau categorize components of the $1.7 billion requested increase into those related and unrelated to the inability to use statistical sampling. Did not conclude whether the bureau properly categorized components of the $1.7 billion requested increase as related and unrelated. Selected for further analysis cost increases/decreases and related changes in assumptions that contributed most to a framework’s total net increase or otherwise warranted explanation.

Scope and Methodology (cont’d)
Did not assess the efficiency or effectiveness of conducting the planned 2000 census or validate the bureau’s data and assumptions used to develop the original and amended budget requests. Performed our work in Washington, D.C., and at the bureau’s headquarters in Suitland, Maryland, between June 1999 and September 1999 in accordance with generally accepted government auditing standards.
The net $1.7 billion requested increase in the bureau’s original fiscal year 2000 budget request resulted primarily from changes in assumptions relating to increased workload, reduced employee productivity, and increased advertising.

Increased workload: Housing units
The workload change was primarily because planned housing unit visits increased from 30 million to 46 million when sampling for nonresponse follow-up was eliminated. Of the 16 million additional visits, 12 million are for following up on nonresponding households, and 4 million are for housing units the bureau plans to visit because the Postal Service believes they are vacant or nonexistent. In 1990, the bureau found that 20-30% of the housing units identified as vacant or nonexistent were occupied.

Overall Observations: Increased workload: Additional programs
The bureau has added programs to improve the accuracy of the census through coverage and quality control operations that were not in the original budget request. These programs are designed to address coverage errors that statistical estimation was intended to address. It is unclear whether these additional programs will result in a 2000 census that is more accurate than the 1990 census.

Overall Observations: Workload: Average mail response rate
The average mail response rate for all questionnaire types is a key assumption. The bureau used a 61% assumed response rate in the original and amended budget requests. The mail response rate dropped from 78% in 1970 to 65% in 1990. A 1 percent change in the response rate increases/decreases the number of households that must be enumerated by 1.2 million. If the actual response rate is different from 61%, the bureau’s needs could differ substantially from the amended budget request.

The assumed productivity rate for all temporary enumerator employees hired to follow up on the estimated 46 million nonresponding households decreased approximately 20%, from 1.28 to 1.03 households per hour. According to bureau officials, this decrease is because of the uncertainty of hiring a sufficient number of quality temporary workers in a tight labor market; the bureau’s assumption that enumerators will be more careful to avoid mistakes, knowing that they are being reviewed in a more thorough and formal manner as part of the added quality control programs; and an expected increase in rework resulting from added quality control programs. The bureau did not provide us any documented internal or external quantitative analysis or other analysis to support the initial or the revised productivity rates. Consequently, the 20% reduction in productivity is based on bureau management’s judgment, which they acknowledge is very conservative.

Based on the increased workload and reduced productivity, the number of temporary field positions is expected to increase from 780,000 to 1,350,000. The term position does not equate to a person. One staff member may work in several positions since all enumeration operations do not completely overlap one another. 200,000 positions (35%) are for nonresponse follow-up operations and 120,000 positions (21%) are for other enumeration activities such as counting people at homeless shelters. 220,000 positions (38%) are for added coverage improvement and quality control programs.

The bureau increased advertising by $71.9 million. According to the bureau, this advertising is intended to increase public awareness of the census and hopefully increase the mail response rate. The bureau has no data to show whether this advertising will increase the response rate.
The bureau assumed no cost savings in the form of increased response rate and, accordingly, a reduced workload for these increased advertising dollars.

The table below shows the components of the $1.723 billion increase by the eight program activities (frameworks) and reflects the bureau’s categorization of changes as related or unrelated to the inability to use statistical sampling. We did not conclude whether the bureau properly categorized components as related and unrelated. The following slides (see page references) provide information on key components of these changes. Note that only selected portions of each change were analyzed.

[Table, dollars in millions: components of the $1.723 billion requested increase by framework, including 3. Field data collection and support systems; 5. Automated data processing and telecommunications support; 6. Testing, evaluation, and dress rehearsal; 7. Puerto Rico, Virgin Islands and Pacific Areas; and 8. Marketing, communications, and partnerships. Most dollar values did not survive document conversion; N/A indicates that our analysis did not reveal any significant matters related to a framework.]

Requested increase: $3.2 million
The requested increase includes $1.8 million in related travel and training for 25 additional full-time equivalent (FTE) project management staff and $1.4 million in overhead and other costs. The bureau stated that this additional staff is needed to support the additional activities and complexities involved in managing a traditional census. The bureau’s amended budget request states that these funds will also enable revisions to related management information systems. Added positions include program analysts, decennial specialists, and communication positions. According to the bureau, some of these positions would be used to define the changes needed to the Cost and Progress System.

Requested increase: $6.3 million
The bureau’s amended budget request shows no net change for this framework. However, the $6.3 million increase shown here is offset by $6.3 million in decreases shown on slide 40. The contract for the American FactFinder system increased by $3.5 million. This system is used to tabulate and disseminate census results electronically. According to the bureau, this increase is related because the bureau plans to tabulate and release two sets of data, the traditional census results and the results of its Accuracy and Coverage Evaluation program, and provide the Internet interface to do so.

Framework 2 - Data content and products (cont’d)
This framework’s increase also includes $2.8 million for 195 FTEs for processing requests for foreign language questionnaires. The bureau stated that increased promotional efforts by its partners (local governments and community-based organizations) and publicizing the availability of in-language questionnaires in the bureau’s paid advertising will increase the request for and use of these in-language questionnaires. The bureau’s original and amended cost models show no change in the assumed number of additional non-English questionnaires that will be requested. The bureau had not provided us evidence of the increased workload or how it was used to estimate this cost increase by the end of our fieldwork.

For this framework, we analyzed selected activities within the categories listed in the table below. The following slides (see page references) provide our analysis of the key assumptions and data that changed between the original and amended budget requests.
[Table, dollars in millions: framework 3 activities analyzed; recoverable entries include a $6.9 million decrease for one activity and a $208.8 million decrease for the Accuracy and Coverage Evaluation (ACE). Other values did not survive document conversion; N/A indicates that our analysis did not reveal any significant matters related to an activity.]

Framework 3 - Field infrastructure
Requested increase: $168.9 million
This increase is for additional office space and related supplies and services to support the additional workload. To support the increase in workload, the bureau has increased the number of Local Census Offices (LCOs) from 476 to 520. The bureau has assumed the need for 1,500 additional square feet per LCO to support the expanded operations. The amended budget request assumes an increase in average cost per square foot of about $7 for office space rental. The bureau stated that the increase was due to its need for increased space for a larger number of offices in a short time frame and to avoid having to split an LCO between two different buildings.

Framework 3 - Field infrastructure (cont’d)
$39.5 million is included for additional equipment, faxes, copier paper, furniture rental, and other supplies for the LCOs. About $53.8 million is for field staff telephones, “Be-Counted” supplies, and other support costs for local census offices, none of which were included in the original budget request. This total includes $22.6 million for telephone and other employee reimbursements for which there was not a specifically identifiable amount in the original budget request. This total also includes $28.3 million to perform evaluations of the traditional census programs. However, the bureau indicated that it has not finalized these evaluations and did not provide us with evidence as to how this amount was estimated. $18.5 million is for FTS 2000 long-distance telephone costs.

Framework 3 - Field enumeration
Requested increase: $998.9 million
Field enumeration primarily involves the use of temporary census employees who directly contact the public to complete census questionnaires. Most of this increase relates to visiting households that did not mail back a census questionnaire. These visits are referred to as “nonresponse follow-up.” The following slides discuss these follow-up operations and the additional coverage improvement and quality control programs: nonresponse follow-up; additional coverage improvement and quality control programs; and other (update/leave, list/enumerate, special places).

Framework 3 - Field enumeration: nonresponse follow-up
The $685 million increase in the cost of following up on nonresponding households is primarily due to the following two factors: increase in workload for housing units visited and decrease in enumerator productivity. This increase is not due to any substantial change in assumptions for mail response rate (61%), turnover of enumerators (150%), enumerator production hours per day (5), or enumerator hourly wage rates ($10.25 - $15.25).

Framework 3 - Field enumeration: nonresponse follow-up (cont’d)
The workload change was primarily due to an increase from 30 million to 46 million planned visits to housing units when sampling was eliminated. The 16 million increase in the housing units includes: 12 million nonresponding households that would have been statistically estimated under the bureau’s original approach and 4 million addresses for questionnaires returned by the Postal Service as vacant or nonexistent housing units.
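The interaction of workload and productivity can be sketched with the assumptions just listed (46 million planned visits, 5 production hours per enumerator-day, and the 1.28 versus 1.03 households-per-hour rates). The Python sketch below shows how the lower productivity rate inflates the required enumeration hours; it deliberately omits wages, benefits, turnover, and training, so it understates the full budget effect.

```python
workload = 46_000_000       # planned housing unit visits
hours_per_day = 5           # enumerator production hours per day

def hours_needed(households_per_hour):
    """Total enumeration hours required at a given productivity rate."""
    return workload / households_per_hour

original = hours_needed(1.28)    # about 35.9 million hours
amended = hours_needed(1.03)     # about 44.7 million hours
extra_hours = amended - original

print(f"extra hours from lower productivity: {extra_hours / 1e6:.1f} million")
print(f"extra enumerator-days: {extra_hours / hours_per_day / 1e6:.1f} million")
# Applying wage rates, the 150% turnover assumption, training, and benefits
# to these hours multiplies the effect; the report cites $235 million in
# added direct payroll alone from the productivity reduction.
```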
Framework 3 - Field enumeration: nonresponse follow-up (cont’d)
Because of the 16 million increase in visits to nonresponding housing units, the bureau assumed increased wage and benefit costs for an additional 200,000 enumerators, crew leaders, and assistants to spend over 33 million hours collecting information for this operation.

Framework 3 - Field enumeration: nonresponse follow-up (cont’d)
The 18% decrease in productivity rates, from 1.56 to 1.28, resulted from the bureau’s observation of a downward trend in the public’s willingness to cooperate with enumerators. The bureau’s decision to reduce the assumed enumerator productivity rates by another 20% between the original and amended budget requests was based on senior management judgment, which the bureau acknowledged was very conservative.

Framework 3 - Field enumeration: nonresponse follow-up (cont’d)
The bureau’s application of the reduced productivity rate for all 46 million housing units also reflects its desire to be very conservative to compensate for the possibility that it may be unable to hire enough enumerators at the current hourly wage rates or may experience turnover higher than assumed. Since the bureau did not assign a value to each of the factors that contributed to the 20% decline in enumerator productivity from the original budget request, we were unable to determine how much of the decrease in productivity was related to the bureau’s being very conservative. However, reducing the productivity of the enumerators in the amended budget request increased direct payroll costs alone by $235 million.

Framework 3 - Field enumeration: additional programs
Requested increase: $221.3 million
This increase is for additional programs to improve census coverage and quality assurance procedures and thus improve the accuracy of the census. These additional programs, which increase workload and were not included in the sampling census approach, involve, among other things: $145.7 million to improve census coverage through efforts such as revisiting housing units where an enumerator reported the housing unit to be vacant or nonexistent but where the Postal Service was able to deliver a census questionnaire; and $25.2 million to redeliver census questionnaires to housing units where the Postal Service returns questionnaires as undeliverable, potentially because of changes in addresses and zip codes that occur between the fall of 1999, when the bureau’s questionnaire address file is delivered to its printing vendors, and March 2000, when questionnaires are delivered.

Framework 3 - Field enumeration: additional programs (cont’d)
The programs also include $22.7 million to reinterview the occupants of a sample of nonresponding housing units where census questionnaires were completed by enumerators. It is unclear whether these additional programs will result in a 2000 census that is more accurate than the 1990 census.

Framework 3 - Accuracy and Coverage Evaluation (ACE)
Requested decrease: $(208.8) million
This decrease results from reduced staffing and related costs for ACE. The purpose of ACE is to estimate the population for purposes such as redistricting, by sampling households from an address list developed independently from the list used to perform the census. Under the bureau’s original plans, the results of ACE (formerly Integrated Coverage Measurement (ICM)) were to have been statistically combined with results of the enumeration to provide a single, integrated set of population counts.

Framework 3 - Accuracy and Coverage Evaluation (ACE) (cont’d)
The original workload was based on a sample size of 750,000 housing units.
As a result of not using statistical estimation for apportionment, the bureau revised its sample size to 300,000 housing units and decreased enumerator and supervisory staff from 50,000 to 20,000. The 60-percent reduction in sample size proportionally reduced the cost by $208.8 million.

Requested increase: $10.5 million
This framework’s net increase includes a $10.5 million increase for 370 FTEs to determine the correct location of addresses from an expected increase in Be Counted and Telephone Questionnaire Assistance (TQA) responses. The bureau stated that this increase is related because, while the bureau’s methodology for determining the correct geographic location of addresses is the same as that planned under the sampling census design, the bureau expects to handle a larger workload. However, the assumed workload for Be Counted and TQA returns, according to the bureau’s original and amended cost models, remains unchanged at 1 million and 2 million, respectively. The bureau had not provided us evidence of the increased workload or how it was used to estimate this cost increase by the end of our fieldwork.

Framework 7 - Puerto Rico, Virgin Islands, and Pacific Areas
Because the census results for Puerto Rico are not used to apportion seats in the House of Representatives based on state population, Puerto Rico would not be subject to the Supreme Court’s decision. However, according to bureau officials, once sampling could not be used for the U.S., the census method was also changed for Puerto Rico for consistency and timing purposes, and thus the bureau considers these cost increases to be related.

Framework 7 - Puerto Rico, Virgin Islands, and Pacific Areas (cont’d)
The requested increase also includes $6.0 million for data collection and support activities of an expanded nationwide program to assess the accuracy of the new and enhanced operations that contributed to the census counts. However, the bureau acknowledges that it did not have time to formulate a complete evaluation program when the amended budget request was prepared. The bureau is in the process of identifying components for each evaluation. Therefore, the $6.0 million figure is an estimate that could change when the bureau completes designing and costing its evaluation programs. The requested increase also includes $2 million for promotion activities similar to those activities in the field data collection and support systems (framework 3).

Framework 8 - Marketing, communications, and partnerships
The requested increase includes additional advertising intended to increase questionnaire responses, including advertising targeted at hard-to-enumerate communities. As mentioned earlier, the bureau used a 61% assumed response rate in the original and amended budget requests. Consequently, the bureau did not include any cost savings from the additional advertising in its amended budget request.

Framework 8 - Marketing, communications, and partnerships (cont’d)
The increase also includes funds for posters and flyers; $3.9 million in payroll and travel costs for 46 additional community, media, and government partnership specialists; and $7.2 million for expanded evaluation of this program’s activities. The bureau stated that the $7.2 million figure is only an estimate and will remain so until the bureau identifies the evaluations it wants to perform and prepares study plans. The purpose of these evaluations is similar to the $28.3 million and $6.0 million budgeted for in frameworks 3 and 7, respectively.

Requested decrease: $(6.3) million
The bureau’s amended budget request shows no net change for this framework. However, the $6.3 million decrease shown here is offset by related increases of $6.3 million, shown on slide 20. This net decrease includes $5.3 million in reduced headquarters payroll, travel, overhead, and other costs.
The bureau stated that it decided to cut positions originally budgeted for to pay for other increases in this framework. This net decrease also includes $1 million in reduced postage costs resulting from a 1.6 million decrease in the expected number of questionnaires to be mailed back due to a reduction in the expected mail response rate for Puerto Rico.

Requested increase: $98.2 million
According to the bureau, some of the unrelated costs involving items not included in the original budget request or revisions of prior estimates are as follows: $29 million for rental of common areas not included in the original budget request; $23 million for FTS 2000 long distance service costs revised from the estimates in the original budget request; and $16 million attributable to an increase in the number of housing units that will have questionnaires delivered by enumerators as part of the Update/Leave Program. According to the bureau, the addition of 5 million housing units to the 19 million used in the original budget request was based on the results of the address listing operation that ended in January 1999.

Framework 3 - Field data collection and support (cont’d)
Other unrelated items include $10 million for map supplies and copier paper costs revised from the estimates in the original budget request; $10 million for staff visits to special place facilities in January and February 2000 not included in the original budget request; $4 million for field follow-up of newly identified addresses as part of the New Construction Program not included in the original budget request; $3 million for printing and mailing weekly earnings statements not included in the original budget request; and $3 million for costs associated with the unique travel and logistical requirements to conduct enumeration in Alaska not included in the original budget request.

The requested increase also includes funds for Island Area Memorandums of Understanding (MOU) above the $5.3 million in the original budget request. These MOUs represent agreements between the bureau and the Island Areas (excluding Puerto Rico) on the responsibilities each party has for conducting the census of the Islands’ population, as well as the budget for these activities. This increase represents revisions of prior cost estimates based on more recent information for such things as space, telecommunications, shipping, wage rates, and travel allowances. The increase also includes updated information on the enumeration workload provided by the Island Area governments and the addition of an office operation to the work originally planned.

[Flattened graphic: cost model overview showing dollar amounts for program activities, including program development and management; data content and products; and field data collection and support systems. The amounts did not survive document conversion.]

The cost model consists of 16 interrelated software spreadsheets and numerous formulas and assumptions. It serves as a central collection point for field costs, as well as headquarters and other costs estimated outside the model. Of the $4.5 billion amended budget request, about $1.05 billion (23%) was calculated outside the model; the amounts calculated outside the model are for headquarters activities and contract costs (e.g., advertising). The output from the bureau’s models mathematically agrees with the original and amended budget estimates. These budget estimates are based in part and in varying degrees on the judgment of bureau management.

In addition to the contact named above, the following individuals made key contributions to this report: Cindy Brown-Barnes, Joan Hawkins, Theresa Patrizio, Mike Vu, and Sophia Harrison.
Pursuant to a congressional request, GAO provided information on the Bureau of the Census' fiscal year (FY) 2000 budget, focusing on: (1) an overall analysis of the key changes in assumptions resulting in the $1.7 billion requested increase; (2) details on the components of this increase and which changes, according to the bureau, are related and which are not related to the inability to use statistical sampling; and (3) the process the bureau used for developing the increase in the original FY 2000 budget request and the amended budget request. GAO noted that: (1) GAO found that the net $1.7 billion requested increase in the original FY 2000 budget request of $2.8 billion resulted primarily from changes in assumptions relating to a substantial increase in workload, reduced employee productivity, and increased advertising; (2) under the nonsampling design, census costs will increase because the bureau expects to follow up on more nonresponding households than for a sampling-based census and plans to use additional programs to improve coverage because it cannot rely on statistical methods to adjust for undercounting and other coverage errors; (3) the bureau assumes an increased workload because the housing units that the bureau expects to visit have increased from an estimated 30 million to 46 million; (4) the 16 million increase includes visiting 12 million additional nonresponding housing units and 4 million additional housing units that the Postal Service says are vacant or nonexistent; (5) this increased workload, which increased costs for most bureau program activities, relates primarily to additional salaries, benefits, travel, data processing, infrastructure, and supplies; (6) another key factor substantially increasing the FY 2000 budget request is that the bureau reduced the assumed productivity of its temporary enumerator employees; (7) because of the assumed increase in workload and reduction in productivity, the total number of temporary field positions increased from 780,000 in the original budget to 1,350,000 in the amended budget request; (8) the bureau also included nearly $72 million of advertising in the amended budget request to increase public awareness and hopefully increase response rates for mailed questionnaires; (9) according to the bureau, about $1.6 billion of this increase is related and $104 million is not related to the inability to use statistical sampling; (10) the items unrelated to the sampling issue include costs not included in the original budget and revisions of prior estimates; (11) the bureau developed its $4.5 billion amended budget request for FY 2000 using a cost model consisting of a series of interrelated software spreadsheets; (12) the original and amended budget requests were developed using this cost model, with each estimate being developed independently using different versions of the cost model; and (13) the bureau derived the $1.7 billion requested increase by calculating the net difference between the original budget request of $2.8 billion and the amended budget request of $4.5 billion.
The Klamath River Basin, spanning the southern Oregon and northern California borders, covers over 15,000 square miles. The Klamath River originates in the Upper Basin, fed by Oregon’s Upper Klamath Lake, a large, shallow body of water composed of flows from the Sprague, Williamson, and Wood Rivers. The river subsequently flows into the Lower Basin in California, fed by tributaries including the Shasta, Scott, Salmon, and Trinity Rivers, and empties into the Pacific Ocean. River flows and lake levels depend primarily upon snowpack that develops during the winter months, melts in the spring, and flows into the river basin. Rainfall and groundwater from natural springs also contribute to flows. On average, about 1.5 million acre-feet of water pass from the Upper Basin to the Lower Basin annually at Iron Gate Dam.

The Secretary of the Interior authorized construction of the Klamath Project in 1905. Reclamation dammed Upper Klamath Lake, drained and reclaimed Lower Klamath and Tule Lakes, stored the Klamath and Lost Rivers’ flows, and provided irrigation diversion and flood control on the reclaimed land. About 85 percent of the Project lands obtain irrigation water from Upper Klamath Lake and the Klamath River, while Gerber Reservoir, Clear Lake, and the Lost River supply the remainder of the Project. Water is delivered to Project lands using an elaborate system of canals, channels, and drains, including diversions directly from the Klamath River. The distribution system is considered highly efficient, ensuring that water that is diverted for use within the Project is reused several times before it returns to the Klamath River.

Homesteading of the reclaimed lands began in 1917 and continued through 1948. As shown in figure 2, the Project is currently composed of about 207,000 acres of irrigable lands. Historically, about 200,000 acres of Project lands have been in agricultural use annually. For example, in 2003, the most recent year for which data are available, about 202,000 acres were considered to be in agricultural use, of which about 180,000 acres were irrigated and harvested. Crops grown and harvested on the Project include alfalfa, barley, oats, wheat, onions, potatoes, and peppermint, and cattle graze on more than 40,000 acres of irrigated pastureland.

In addition to farm and pastureland, four national wildlife refuges were set aside by executive orders in conjunction with the construction of the Project. The refuges, managed by Interior’s Fish and Wildlife Service (FWS), support many fish and wildlife species and provide suitable habitat and resources for migratory birds of the Pacific Flyway. About 23,000 acres of the two refuges within the Project water delivery area—Tule Lake and Lower Klamath National Wildlife Refuges—are leased for agricultural purposes.

Reclamation, through contracts, provides water for irrigation and hydropower production and must also provide water for the national wildlife refuges. Reclamation has entered into contracts with numerous irrigation districts and individual irrigators on the Project to provide for the repayment of Project costs and the right to receive Project water. The contracts most commonly specify a land acreage amount to be covered by the contract—not a specific water amount to be delivered.
Also by contract with Reclamation, California-Oregon Power Company (now PacifiCorp) obtained the right to use certain amounts of water, after requirements of the Klamath Project are satisfied, for hydropower generation at its privately owned and independently operated dams on the Klamath River downstream of the Project. PacifiCorp’s southernmost hydropower dam, Iron Gate Dam, located about 20 miles downriver of the Oregon-California border, is the last control point before Klamath River flows run freely to the Pacific Ocean. Finally, the national wildlife refuges have federally reserved rights for the water necessary to satisfy the refuges’ primary purposes, and Reclamation must satisfy refuge water needs after its other obligations are met.

Reclamation is also obligated to protect tribal trust resources, such as water and coho salmon. The Klamath River Basin is home to four federally recognized tribes, identified by Reclamation as the Klamath Tribes in the Upper Basin area of Oregon, and the Hoopa Valley Tribe, Yurok Tribe, and Karuk Tribe in the Lower Basin area of California. Each tribe has long-standing cultural ties to the Klamath River, its tributaries, and native fish species. Furthermore, the Klamath, Hoopa, and Yurok tribes have, either by treaty or executive order, reserved rights to sufficient water quality and flows to support all life stages of fish life in protection of tribal fishing rights. As with all federal agencies, Reclamation has a trust responsibility to protect these tribal resources and to consult with the tribes regarding its actions in a government-to-government relationship.

Reclamation must comply with the Endangered Species Act to ensure that any action it authorizes, funds, or carries out is not likely to jeopardize the continued existence of any listed species of plant or animal or adversely modify or destroy designated critical habitat. Interior’s FWS and Commerce’s NMFS are responsible for administering the act. If FWS or NMFS finds that an agency’s proposed activity is likely to jeopardize a threatened or endangered species or destroy or adversely modify its critical habitat, then a “reasonable and prudent alternative” that would avoid such harm must be identified. Three species of fish that are of particular importance to the cultures of the tribes—the threatened southern Oregon/northern California coho, and the endangered Lost River sucker and shortnose sucker—are affected by Project operations. NMFS listed the coho as threatened in 1997, and FWS listed the two species of suckers, which populate Upper Klamath Lake and rivers other than the Klamath, as endangered in 1988.

Drought conditions since 2000 have complicated Reclamation’s efforts to balance the irrigation water demands on the Project with the requirements for specific river flows and lake levels for threatened and endangered species. Reclamation operates the Project according to an annual operations plan that helps the agency to meet its various obligations and responsibilities, given varying hydrological conditions. In 2001, responding to Reclamation’s biological assessment of its proposed Project operations plan, FWS and NMFS issued biological opinions that suggested Reclamation take numerous actions, including maintaining higher water levels in Upper Klamath Lake and two reservoirs on the Lost River and higher flows of the Klamath River below Iron Gate Dam.
Because of the new biological opinions and drought conditions, Reclamation was prohibited from releasing normal amounts of water to most Project irrigators, which impaired or eliminated agricultural production on much of the Project. Subsequently, Reclamation proposed a new 10-year Project operations plan for 2002 through 2011. NMFS reviewed Reclamation’s biological assessment of the plan to determine its effect on listed species and issued a final biological opinion on May 31, 2002, directing Reclamation to establish a multiyear water bank to provide additional river flows. Reclamation incorporated this water bank into its Project operations plan through 2011. NMFS and Reclamation can reconsult on the requirements of the biological opinion as warranted, for example, if new scientific information on river flow requirements for fish is developed. Reconsultation is likely during 2006, when ongoing studies of Klamath River flows are expected to be completed.

Although NMFS’ biological opinion recommended a water bank as an alternative and specified the amounts of water to be provided each year, it provided Reclamation little specific guidance regarding the structure, management, or operation of the water bank. Water banking is broadly defined as an institutional mechanism that facilitates the legal transfer and market exchange of various types of surface water, groundwater, and water storage entitlements. Water banks have been proposed or are operating in almost every western state. However, significant differences exist in the way that each bank operates with respect to market structure, degree of participation, pricing, regulatory oversight, environmental objectives, and other factors. Under Reclamation’s water bank program, participating irrigators would be paid to forego their contractual entitlement to water for one irrigation season in order to make more water available for release into the river. Water acquired by Reclamation would accrue to the water bank over the course of the year as participants refrained from diverting water for irrigation as they normally would. A schedule for delivery of additional flows is determined by NMFS and Reclamation by March 31 each year, with the majority of the water bank provided in the spring and early summer when the water is most needed by the coho. According to Reclamation, the water bank would enable the agency to augment river flows for threatened species and also meet its contractual responsibility to deliver water to Project irrigators until other solutions for balancing water demands were identified. Reclamation was also required to initiate a Conservation Implementation Program that would bring together basin stakeholders, including federal agencies, tribes, and the states, to collaboratively develop long-term solutions, some of which would increase flows, such as surface water storage and groundwater resource development.

Reclamation modified its water bank operations from year to year as its obligations and costs increased. Reclamation acquired water for the water bank by contracting with irrigators for the water needed to augment Klamath River flows as required by the biological opinion. As it gained more experience each year, Reclamation modified its water bank operations to better meet the increasing obligations and to mitigate costs. As its annual obligations increased, Reclamation’s annual water bank expenditures also increased, totaling more than $12 million through 2004.
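The arithmetic behind the cumulative cost projection is straightforward. The short sketch below is a minimal illustration using only the rounded amounts cited in this report (more than $12 million spent through fiscal year 2004 and an estimated $7.6 million annually for fiscal years 2005 through 2011); it shows how the more-than-$65 million figure discussed next follows.

```python
# A minimal projection, assuming the rounded figures reported here: more than
# $12 million spent through fiscal year 2004, plus an estimated $7.6 million
# annually for fiscal years 2005 through 2011. Illustrative arithmetic only.

spent_through_2004 = 12.0          # millions, "more than $12 million"
annual_estimate = 7.6              # millions per year, fiscal years 2005-2011
remaining_years = 2011 - 2005 + 1  # 7 fiscal years

projected_total = spent_through_2004 + annual_estimate * remaining_years
print(f"Projected cumulative cost through fiscal year 2011: "
      f"more than ${projected_total:.0f} million")
```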
Based on Reclamation’s estimated annual cost of about $7.6 million for fiscal year 2005 and onward, the cumulative cost of the water bank could exceed $65 million through fiscal year 2011. Reclamation initiated the Klamath Project water bank program in 2002, as recommended under NMFS’ biological opinion, with the objective of purchasing irrigators’ water entitlement for one irrigation season so that this water could be used to provide additional Klamath River flows for threatened coho salmon. The water bank is not a physical reservoir of stored water but an administrative mechanism through which Reclamation contracts with irrigators both on and off the Klamath Project. Through these contracts irrigators agreed to (1) forego irrigation altogether (crop idling), (2) irrigate using only well water (groundwater substitution), or (3) pump well water into the irrigation canals for others to use (groundwater pumping), thus making water available to augment river flows. Water accrues to the water bank over the course of the irrigation season as water bank contractors forego irrigating their land by crop idling or groundwater substitution and as groundwater is pumped into canals under water bank contracts. However, because Reclamation is required to provide large amounts of water in spring and early summer before sufficient water has accrued to the water bank, it actually “borrows” water for the bank from short-term storage supplies. This water is later replaced by foregone irrigation water over the course of the year.

Reclamation modified its water bank operations each year, changing its composition, selection process, contracting process, and program rules as it gained experience to meet its increasing obligations. In 2002, when the obligation was 30,000 acre-feet, the water bank sources included crop idling off-Project and groundwater pumping; in 2003, when the obligation was 50,000 acre-feet, sources included crop idling on-Project and groundwater substitution; and in 2004, when the obligation was 75,000 acre-feet, all three sources of water were included in the water bank. Reclamation modified the selection process from relying on only two irrigators in 2002—without a public application process—to soliciting applications from any qualified irrigator in both 2003 and 2004. In 2004, Reclamation solicited water bank applicants earlier in the year than it had in 2003, in part, to allow successful applicants more lead time in planning their irrigation. Reclamation also modified the contracting process to obtain more flexibility: in 2004, it competitively bid contract rates rather than paying a fixed rate as in 2003, and it entered into contingency contracts for groundwater pumping that could be activated “as needed” to deliver additional water to meet its increasing water bank obligation and uncertain delivery schedule. These contingency contracts allowed Reclamation to acquire only the amount of water it needed to meet the agreed upon delivery schedule. Finally, Reclamation expanded its program rules to make participation in the water bank more practical and attractive to potential applicants. For example, Reclamation changed the rules for the 2004 water bank to allow harvesting of crops on land under crop idling contracts, reflecting the fact that some crops such as alfalfa can grow with water from subsurface moisture alone. Similarly, Reclamation modified its monitoring process for the water bank over time.
For example, in 2003, Reclamation monitored every participant for compliance with the program rules. Enforcement staff examined and tested each crop idling parcel of land at least once over the course of 2003’s water bank to ensure that no intentional irrigation occurred. In addition, Reclamation relied on self-policing by irrigators who called in tips identifying potential cheaters. A Reclamation official estimated a compliance rate greater than 95 percent in 2003, and the agency terminated the contract of only one participant, who intentionally irrigated fields after deciding to withdraw from the water bank without notifying Reclamation. In contrast, during 2004 Reclamation sought to reduce enforcement costs and increase efficiency by examining and testing crop idling parcels of land only toward the end of the year while following up on tips identifying potential cheaters throughout the year. In 2004, Reclamation found no intentional violations.

Reclamation’s water bank expenditures through fiscal year 2004 exceeded $12 million and could total more than $65 million through 2011. As shown in table 1, Reclamation’s total expenditures have increased annually as the water bank obligation has grown from 30,000 acre-feet in 2002 to 75,000 acre-feet in 2004. Reclamation attributes the water bank’s increasing costs to the growing annual volume of water purchases, as well as to rising administrative costs driven by the large increase from 2002 to 2003 in the number of contracts to manage and by the addition in 2004 of the groundwater pumping program and its associated contract negotiations. Reclamation estimates that the 100,000 acre-foot water bank requirements for fiscal years 2005 through 2011 will cost at least $7.6 million annually, bringing the total water bank costs to more than $65 million. For 2005 and onward, according to Reclamation, the water bank will be a specific budget item in its budget request. Accordingly, Reclamation requested $7.626 million for fiscal year 2005 and plans to gradually increase annual budget requests to about $7.660 million by 2011.

Reclamation’s expenditures fall into five categories: groundwater contract costs, crop idling contract costs, Klamath Basin Rangeland Trust contract costs, administrative costs, and other costs. Reclamation’s largest water bank expenditures were for groundwater contracts with irrigators—for both substitution and pumping—totaling nearly $7 million, or 55 percent of total expenditures from 2002 through 2004. Reclamation’s second largest water bank expenditures were for crop idling contracts with Project irrigators, totaling about $3.3 million, or 27 percent of total expenditures. Reclamation’s contracts with the Klamath Basin Rangeland Trust to forego irrigation of pastureland outside of the Klamath Project totaled more than $1.6 million, or about 13 percent of total water bank expenditures through 2004. Reclamation’s administrative costs—mainly payroll and overhead—for planning and implementing the water bank accounted for about 3 percent of total water bank expenditures. Reclamation also incurred other costs related to the operation of the water bank, such as water quality analysis, contract compliance monitoring, and a contract for assistance from the Oregon Water Resources Department.

Reclamation has met its water bank obligations to provide additional water to supplement Klamath River flows each year since 2002.
However, the manner in which the agency has managed and accounted for the water bank has caused confusion for some stakeholders, such as tribes and irrigators, and has reduced the transparency of the water bank’s status and operation. According to NMFS and Reclamation officials, Reclamation’s obligation is to both acquire the amount of water required in the biological opinion each year and deliver the water—some of it or all of it—in accordance with the schedule mutually agreed to by both agencies. Regarding the acquisition of water, NMFS concluded, and our analysis of Reclamation contract records verified, that Reclamation met its obligation to acquire 30,000 acre-feet in 2002, 50,000 acre-feet in 2003, and 75,000 acre-feet in 2004, by contracting for about 47,000; 59,000; and 111,000 acre-feet, respectively. Appendix II provides detailed information on water bank applications and contracts. According to Reclamation officials, they contracted to acquire more water than required, in part to serve as a buffer against unexpected changes in water conditions and as insurance against uncertainty about how much water is actually obtained from crop idling. Regarding the delivery of water to augment flows, NMFS concluded, and our analysis of USGS river flow records verified, that Reclamation met its obligation each year as established in the schedule agreed upon with NMFS. We found that, in total, Reclamation augmented Klamath River flows by approximately 30,000 acre-feet within the brief 2002 water bank time frame—meeting its 30,000 acre-feet schedule requirement; by more than 71,000 acre-feet in 2003—surpassing its 50,000 acre-feet schedule requirement; and by more than 95,000 acre-feet in 2004—surpassing its 74,373 acre-feet schedule requirement. According to Reclamation officials, these augmented flows represent water provided per water bank requirements plus additional releases of water purchased and stored to meet tribal trust obligations.

Because the water bank is not a physical pool of water allowing the constant measurement and monitoring of deposits and withdrawals, estimating the status of water bank accruals or deliveries and differentiating water bank deliveries from tribal trust deliveries during the year are neither precise nor easy. Reclamation views water bank deliveries as simultaneously meeting both its requirement to augment river flows under the biological opinion and its tribal trust responsibilities. However, to account for its annual deliveries, Reclamation officials have generally counted augmented flows as first satisfying the water bank requirement and considered excess flows, such as the approximately 20,000 acre-feet delivered above the water bank requirement in 2003 and 2004, as tribal trust deliveries.

Augmented flow is defined as the volume of water in excess of base flows measured at Iron Gate Dam. Klamath River base flows are determined according to “water-year types.” Based on an April 1 forecast of snowpack and runoff, Reclamation initially classifies each year as Wet, Above Average, Average, Below Average, or Dry in accordance with the biological opinion. Each classification requires a specific base flow of water at Iron Gate Dam. Forecasts are updated at least monthly, incorporating actual water conditions as the year progresses and providing Reclamation with increasingly accurate data with which to determine whether the water-year type needs to be reclassified during the year.
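Because augmented flow is simply the volume by which measured flows at Iron Gate Dam exceed the required base flow for the water-year type, the calculation can be sketched in a few lines. The following is a minimal illustration: the base flow and daily gauge readings are hypothetical values, not actual Klamath River data, and the conversion factor is the standard one (1 cubic foot per second sustained for a day is about 1.98 acre-feet).

```python
# A minimal sketch of the augmented-flow calculation: daily flow at Iron Gate
# Dam in excess of the base flow for the water-year type, converted to
# acre-feet and accumulated. Base flow and daily readings are hypothetical,
# not actual gauge data.

CFS_DAY_TO_ACRE_FEET = 86_400 / 43_560  # ~1.98; 1 cfs for one day, in acre-feet

base_flow_cfs = 1_000                   # hypothetical base flow requirement
daily_flows_cfs = [1_400, 1_350, 1_500, 1_250, 1_600]  # hypothetical readings

augmented_af = sum(
    max(flow - base_flow_cfs, 0) * CFS_DAY_TO_ACRE_FEET
    for flow in daily_flows_cfs
)
print(f"Augmented flow over {len(daily_flows_cfs)} days: "
      f"{augmented_af:,.0f} acre-feet")
```

As the sketch suggests, which base flow applies on a given day depends on the water-year type, which is why a midyear reclassification changes the year-to-date augmented-flow tally.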
In 2002, the water bank operated from May 1 to May 31, during which time Reclamation met its water bank delivery obligation by augmenting flows by approximately 30,000 acre-feet, as shown in figure 3. NMFS released its final biological opinion on May 31, directing Reclamation to operate a water bank through 2011. In 2003, Reclamation met its water bank delivery obligation by augmenting flows by at least 50,000 acre-feet as agreed in its schedule with NMFS. Heavier than expected rainfall in early spring prompted Reclamation to move the official start of the water bank from April 1 to May 21, as shown in figure 4. The water bank operated between May 21 and October 31, during which time Reclamation reclassified the water-year type from Dry to Below Average due to better than expected water conditions. In 2004, Reclamation met its water bank delivery obligation by augmenting flows by at least 74,373 acre-feet as agreed in its schedule with NMFS. As shown in figure 5, the 2004 water bank started on April 1 and ended on October 31, as planned. The water-year type was reclassified from Below Average to Dry on May 7, shortly after the water bank began, because of a smaller than expected water supply.

Although Reclamation met its annual water bank obligations each year, the manner in which the agency managed and accounted for the water bank confused stakeholders. Specifically, for two issues where the biological opinion is silent—how to count spill water released to prevent flooding, and whether Reclamation can reclassify the water-year type designation midyear due to changing water conditions—Reclamation has not been clear in communicating related management and accounting decisions. Furthermore, Reclamation has not provided stakeholders with systematic and clear information concerning the water bank’s status or operations, and its decision to use river flow data unavailable to stakeholders limited stakeholders’ ability to independently monitor water bank activities. This has led to confusion and doubts among stakeholders on whether Reclamation actually met its water bank obligations.

Reclamation’s management of the water bank during 2003’s spill condition—when water was released by dams to prevent overflow or flooding—and its lack of clear communication with stakeholders caused significant confusion. Heavier than expected rainfall in early spring of 2003 caused a “spill condition” to exist on April 1, when the water bank was set to begin. However, the biological opinion does not specify how much, if any, spill water can be counted as a water bank delivery. In the absence of specific guidance from NMFS, Reclamation could have counted spill toward water bank deliveries in one of three ways: (1) up to the amount already scheduled for delivery during the spill; (2) in its entirety, including water above the scheduled amount; or (3) not at all. According to Reclamation officials, they eventually decided to reset the water bank’s start to May 21—when the spill condition ended—and started counting augmented flows as of that date. However, Reclamation did not clearly communicate its decisions to stakeholders, leading to confusion about how, or whether, Reclamation was meeting its water bank obligations. For example, according to some tribal representatives, Reclamation provided a preliminary status report in July stating that over 20 percent of the water bank—over 11,000 acre-feet—was delivered during the spill.
This contradicted a Reclamation official’s statement that the agency had retroactively reset the water bank’s start date to after the spill conditions ceased. Reclamation officials concede that stakeholder confusion as a result of these actions was understandable. Subsequently, NMFS and Reclamation agreed that, beginning in 2004, spill water would be counted only up to the amount already scheduled for delivery by the water bank.

Similarly, Reclamation’s reclassification of the water-year type—which determines the base river flow requirement—has also caused confusion for stakeholders. Like spill conditions, the biological opinion is silent on whether the water-year type can be reclassified midyear after its initial determination in April. NMFS and Reclamation officials contend that reclassifying the water-year type is necessary to reflect the most current and accurate data available on changing water conditions. However, some stakeholders, such as the tribes, contend that midyear reclassification is not allowed under the biological opinion and could lead to the improper manipulation of water bank delivery schedules. While Reclamation issued a press release informing the public of its reclassification of the water-year type in 2003, the impact of such changes on the water bank was not clearly articulated. For example, Reclamation did not mention that it would also change its estimate of year-to-date water bank deliveries as a result of the midyear reclassification in water-year type, leading to stakeholder concerns that water deliveries were being manipulated to benefit irrigators at the expense of fish. Reclamation officials believe that reclassifying water-year types is in compliance with the biological opinion and that the related confusion stems from Reclamation’s attempt to incorporate the most recent and accurate data on water conditions in its water bank delivery schedules.

Reclamation also has not clearly or systematically communicated the water bank’s status and operations, further increasing stakeholder confusion. Specifically, Reclamation does not have a systematic mechanism to communicate information regarding the water bank to all stakeholders. Rather than regularly providing updated calculations of year-to-date deliveries to all stakeholders simultaneously through a single mechanism, such as a Web site or regularly scheduled press releases, Reclamation provides information on the water bank and its status “upon request” and through occasional press releases. Consequently, different stakeholders receive different information at different times. According to Reclamation officials, they meet regularly with the tribes and discuss the water bank’s status. However, Reclamation does not systematically seek feedback on the operation of the water bank from all stakeholders, limiting the opportunities to clarify misunderstandings. As a result, after several years of operation, questions continue to persist among stakeholders, including some Project irrigators and tribes, on basic topics such as the purpose of the water bank. Reclamation placed some information about the water bank application process on its Web site; however, Reclamation has not made other water bank information—such as the year-to-date status—available since that time, in part, because Reclamation has been reluctant to release status information that will almost certainly require revision later in the year.
Finally, Reclamation’s use of river flow data generated by PacifiCorp to estimate the water bank’s river flow augmentation has reduced the transparency of the water bank and limited the ability of stakeholders to independently monitor the operation of the water bank. The PacifiCorp data used by Reclamation to calculate actual Klamath River flows are not available to the public. Therefore, interested stakeholders must use a different source—the publicly available USGS data on actual Klamath River flows—to calculate year-to-date water bank deliveries. The PacifiCorp and USGS flow data differ because each uses a different formula to calculate the average daily flow. Thus, Reclamation and stakeholders will arrive at different augmented flow calculations, depending upon which data source they use. For example, we found that, in 2003, augmented flows appeared to be about 2,500 acre-feet greater when using USGS data than when using PacifiCorp data. Furthermore, Reclamation, using PacifiCorp data, would calculate that it had met its water bank obligation on a different date than a stakeholder would using USGS data, creating the potential for stakeholder confusion and doubt regarding the status of water bank deliveries. Reclamation officials told us that, as of October 2004, they had begun using the publicly available USGS data to calculate and communicate the water bank’s status.

Reclamation’s water bank appears to have increased the availability of water to enhance river flows by reducing irrigation water use on the Project, but there is uncertainty regarding the extent of its impacts on river diversions and groundwater resources. In 2003, when the water bank primarily relied on crop idling to obtain water, there was a significant increase in the amount of land not using irrigation water compared with recent years. While it was likely that a reduction in river and lake diversions for Project irrigation resulted, a university study funded by Reclamation found that the reduction attributable to the water bank alone was highly uncertain due to the lack of effective flow measurement equipment and monitoring data for the Project. Because Reclamation was uncertain about how much water crop idling actually provided to the water bank, it shifted to groundwater substitution and pumping as the primary sources for the 2004 water bank. However, USGS and Oregon state officials have since found evidence that groundwater aquifers under the Project, already stressed by drought conditions, are being pumped by an increasing number of wells and refilling at a slower than normal rate, prompting Reclamation to consider lessening its future reliance on groundwater substitution and pumping.

In 2003, Reclamation obtained about 60 percent of its water bank acquisitions by contracting with irrigators for crop idling on nearly 14,500 acres of land, based on the assumption that water foregone from irrigation on those lands would be available to enhance river flows. Crop idling contributed to a significant increase in the amount of land not irrigated in 2003, compared with recent years. For example, according to Reclamation’s 2003 crop report, a total of 20,335 Project acres were not irrigated, which is about a 60 percent increase over 2002, when 12,546 acres of land were not irrigated, and well exceeds the average of 7,665 acres of Project land not irrigated due to agricultural fallowing practices from 1998 through 2000—the three years preceding Reclamation’s restriction of irrigation water in 2001.
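As a quick arithmetic check on the acreage comparison above, the sketch below reproduces the cited percentage increase from the crop report figures; it uses only numbers that appear in this report.

```python
# A quick arithmetic check of the acreage comparison above, using only the
# crop report figures cited in the text.

acres_2003 = 20_335      # Project acres not irrigated in 2003
acres_2002 = 12_546      # Project acres not irrigated in 2002
avg_1998_2000 = 7_665    # average acres not irrigated, 1998 through 2000

increase_vs_2002 = (acres_2003 - acres_2002) / acres_2002
print(f"2003 vs. 2002: {increase_vs_2002:.0%} increase")        # ~62 percent
print(f"2003 vs. 1998-2000 average: {acres_2003 / avg_1998_2000:.1f}x")
```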
Although the number of acres of crop land idled is a useful indication of the water bank’s impacts, it does not provide a reliable estimate of the true extent to which irrigation water has been made available for river flows. According to Reclamation officials, the precise impact of the water bank cannot be determined because of year-to-year variation in irrigation demand and its determining factors such as temperature, precipitation, and crop types. Moreover, throughout the life of the water bank, Reclamation has used varying assumptions about the amount of water that can be saved by crop idling as more research and information have become available about this practice. Specifically, Reclamation assumed in 2002 that it could obtain about 5 acre-feet of irrigation water per acre of crop idling, assumed 2.5 acre-feet in 2003 and 2004, and currently assumes 2 acre-feet per acre.

To help it quantify the actual results of the water bank, Reclamation has turned to other organizations for assistance. For example, after the 2002 water bank was completed, Reclamation engaged USGS to review the assumptions and results for its off-Project crop idling. In February 2004, USGS reported to Reclamation that, based on the available data, the amount of water actually obtained per acre of crop land idled during the 2002 water bank was most likely in the range of 0.9 to 1.3 acre-feet of water per acre. Similarly, in 2003, Reclamation was again unable to obtain precise information on the measurable impacts of the water bank for the year, so it contracted with California Polytechnic State University to study this issue. This study concluded that without effective flow measurement equipment and monitoring data for the Project, it could not precisely estimate the impact of the water bank in reducing Upper Klamath Lake and Klamath River diversions to the Project. According to the study, in 2003 the reduction in diversions compared with 2000 may have ranged from 11,000 to 71,000 acre-feet and, moreover, this reduction may have been attributable to numerous other factors in addition to the water bank, such as heavy rainfall, a large amount of groundwater pumping, changes in irrigation district operations, and awareness among Project irrigators of the need to reduce water use. Based on subsequent university analysis, Reclamation now estimates that it actually obtained about 2 acre-feet of water per acre from crop idling in 2003 and 2004.

Despite the ongoing uncertainty regarding the impact that reducing the amount of irrigated land has on the availability of water for river flows, Reclamation officials told us they must continue to rely on crop idling for a significant portion of the water bank. While some stakeholders favor taking farmland out of irrigation, they are also uncertain of the extent to which crop idling reduces diversions for irrigation. For example, both tribal and fishing industry representatives told us that they doubt that Reclamation can accurately estimate how much additional water is actually made available to the river. Some irrigators question the effectiveness and accountability of crop idling as a strategy for the water bank, and also are concerned about the economic impacts of taking farmland out of production.
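The sensitivity of these estimates to the per-acre yield assumption can be illustrated with a short sketch that applies each assumption cited above to the roughly 14,500 acres idled under 2003 contracts. Pairing the 2003 acreage with every assumption, including the USGS estimates for 2002 off-Project idling, is a simplification for illustration only; actual yields vary by land, year, and location.

```python
# A sketch of how the per-acre yield assumption drives estimates of water
# obtained from crop idling. It applies each assumption cited in the text to
# the roughly 14,500 acres idled under 2003 contracts; mixing years this way
# is purely illustrative.

idled_acres = 14_500

yield_assumptions_af_per_acre = {
    "2002 planning assumption": 5.0,
    "2003-2004 planning assumption": 2.5,
    "current assumption": 2.0,
    "USGS low estimate (2002 off-Project)": 0.9,
    "USGS high estimate (2002 off-Project)": 1.3,
}

for label, af_per_acre in yield_assumptions_af_per_acre.items():
    print(f"{label}: {idled_acres * af_per_acre:,.0f} acre-feet")
```

The spread in the output, from roughly 13,000 to over 70,000 acre-feet on the same acreage, illustrates why the measurable impact of crop idling remained so uncertain.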
Because of the uncertainty regarding the measurable impact of crop idling, Reclamation shifted to groundwater for most of its water bank acquisitions in 2004; however, the impact of groundwater pumping on basin aquifers during ongoing drought conditions is largely unknown, and continued reliance on it may not be sustainable. Reclamation obtained over 70 percent of the 2004 water bank deliveries by pumping nearly 60,000 acre-feet of water, either to substitute for irrigation water or to fill canals for use by others. Figure 6 below shows a groundwater pump delivering water into a canal for the water bank. According to Reclamation officials, in the absence of stored water, groundwater pumping is the only way to meet required flows in the spring and early summer because land idling provides little water in the April through June time period. An advantage of groundwater pumping for the water bank is that, unlike crop idling, flow meters on pumps and wells allow the exact measurement of the amount of groundwater being used in place of river diversions for irrigation.

The impact of groundwater pumping on Upper Basin aquifers, however, is not well understood, and its use during drought conditions is a matter of growing concern for Reclamation and others. The basin has suffered drought conditions since 2000, resulting in less rain and snowmelt to fill lakes, rivers, and aquifers. Recognizing that water demand would cause more users to turn to groundwater but that there is little reliable information on the groundwater hydrology of the Upper Klamath Basin, USGS and the Oregon Water Resources Department initiated a cooperative study in 1998 to characterize and quantify the Upper Basin’s previously unknown groundwater flow system. The study, funded in part by Reclamation, is expected to be substantially completed in 2005. Nevertheless, USGS and Oregon Water Resources Department officials have found evidence that groundwater aquifers in the Upper Basin, already stressed by drought conditions, are being pumped by an increasing number of newly drilled wells and refilling at slower than normal rates in recent years. According to state officials, well drilling sharply increased after 2000, and an increasing number of domestic wells have needed to be deepened—a symptom of dropping water levels—in Klamath County during that same time frame. According to state records for Klamath County, Oregon, from 1998 to 2000, 14 irrigation wells were drilled; from 2001—when Project deliveries were restricted—through 2003, 124 irrigation wells were drilled. From 1998 to 2000, 21 domestic wells were deepened; from 2001 to 2003, 30 domestic wells were deepened; and in 2004, another 13 were deepened. Furthermore, USGS officials have identified wells in various parts of the Upper Basin, within and outside the Project boundaries, that have shown significant water level declines. For example, wells outside the Project have shown declines of up to 10 feet since 2000, thought to be primarily attributable to climatic conditions. Wells within the Project have shown a variety of responses to pumping—some wells seem to decline during irrigation season and then recover substantially during winter months, while other wells have shown steady year-to-year declines, some dropping more than 15 feet.

Reclamation engaged USGS in May 2004 to conduct an assessment of its current water bank strategies and any potential strategies that could help the agency meet its obligations.
Specifically, Reclamation asked USGS to (1) document current and planned water bank activities, (2) assess the effectiveness of the 2003 and 2004 water banks, (3) determine whether sufficient information is available to assess the impact of the water bank on Klamath River flows, and (4) develop a matrix of water bank management options, including their potential positive or negative consequences. In December 2004, USGS officials briefed Reclamation officials on their assessment, presenting the pros and cons of various management options to assist Reclamation’s 2005 water bank planning. Reclamation officials are considering lessening their reliance on groundwater pumping and substitution for the 2005 water bank but are uncertain whether they can meet their water bank obligations, particularly for spring flows, while significantly increasing their reliance on crop idling.

While several alternative approaches for achieving the water bank’s objectives have been identified by Reclamation and other stakeholders, limited information is available with which to reliably judge the feasibility or costs of these alternatives. Possible alternatives to the water bank include permanently retiring Project land from irrigation, expanding Upper Klamath Lake storage, or building a new reservoir separate from the lake. A large amount of Project land was offered for retirement by willing sellers in 2001, and a number of storage options have been evaluated to some extent, but implementation is not imminent for any of these alternatives. Although one of the objectives of the Conservation Implementation Program, required under NMFS’ 2002 biological opinion, is the collaborative study of the feasibility of water storage and groundwater development alternatives, Reclamation and other stakeholders are still developing the framework for that process. In the interim, Reclamation and NMFS have an ongoing dialogue regarding water bank management and will likely reconsult on Klamath Project operations, including the water bank, in 2006.

As an alternative to the water bank, permanently retiring a large area of irrigated Project land could provide 100,000 acre-feet of water to enhance Klamath River flows, but little reliable information is available to comprehensively assess this option. The amount of irrigated land that would need to be retired to replace the water bank, how much irrigated land could actually be obtained from willing sellers, and the price at which it could be obtained are not known with any certainty. Furthermore, while this option is viewed positively by some Klamath River stakeholders, the potential impacts on the agricultural economy from retiring a large portion of Project lands are cause for concern in the farming community.

The amount of irrigated land that would need to be retired to reduce irrigation and enhance river flows by 100,000 acre-feet can be roughly estimated at about 50,000 acres but is not precisely known. As discussed earlier in this report, estimates of foregone irrigation water can prove to be much less than expected, and the lack of reliable water flow information on the Project makes it difficult to accurately determine the specific effects of crop idling—which is the short-term equivalent of permanent land retirement—and other strategies for reducing river diversions. Reclamation, irrigators, and tribal representatives told us that they believe that retiring irrigated land would reduce river diversions, but none are certain as to precisely by how much.
Nevertheless, Reclamation, based on its most recent estimate of the amount of irrigation water obtained from crop idling for the water bank, assumes that irrigation is reduced by about 2 acre-feet of water per acre idled. Using this assumption, Reclamation estimates that at least 50,000 irrigated acres—about 30 percent of the acreage currently irrigated by water from Upper Klamath Lake and the Klamath River—would need to be retired to reduce irrigation by 100,000 acre-feet. However, according to Reclamation officials, because crop idling provides little water from April to June, such land retirement by itself will not provide sufficient water to meet spring river flow requirements under the biological opinion. Furthermore, the actual reduction in irrigation would depend upon factors such as the extent of irrigation on the land before it was retired and how it is used after retirement.

Although there may be a fairly large number of potential willing land sellers on the Project, the amount of irrigated land actually available for purchase and permanent retirement is not known. In 2001, the American Land Conservancy (Conservancy)—a national, nonprofit organization involved in land conservation efforts—obtained 1-year agreements with 78 different landowners to purchase over 25,000 acres of irrigated land for the purpose of land retirement. The Conservancy made agreements with willing sellers—who, according to Conservancy officials, were generally aging and fearful of future drops in property values—expecting that the federal government would purchase the land for retirement. However, according to the Conservancy, Reclamation was not interested because the land was not in a single block. Moreover, according to Reclamation officials, the federal government is not interested in acquiring more land in the Klamath Basin. Subsequently, the Conservancy’s agreements with the sellers lapsed. Whether a coalition of willing sellers could be put together again is unknown. An incentive to potential sellers could come from an expected increase in power rates in 2006. According to a recent Oregon State University economic study, an increase in power rates could raise agricultural production costs by an average of $40 per sprinkler-irrigated acre, potentially making agriculture unprofitable on as much as 90,000 acres of Project land. This scenario could make more land available for sale and might even result in some voluntary land retirement due to lack of profitability, thus increasing river flows.

Additionally, the price at which land might be obtained for retirement is unknown. According to the Conservancy, the appraised value of the potential willing sellers’ land in 2001 was $3,000 per acre. However, based on 2001 estimates from an Oregon State University and University of California economic study, the market value for Project irrigated land can range from $300 per acre for Class V soils—the lowest quality for agricultural purposes—to $2,600 for Class II soils—some of the better agricultural soil on the Project. In addition, Project landowners are concerned that property values may have decreased due to the uncertainty of water deliveries for irrigation after the 2001 water restriction. Using the 2001 price estimates from the universities’ study, the total cost to retire 50,000 acres, assuming the land is available from willing sellers, could range from $15 million to $130 million, depending upon the mixture of low and high valued land offered for sale.
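The cost range above follows directly from Reclamation's 2-acre-feet-per-acre assumption and the universities' 2001 per-acre land values; the sketch below reproduces that arithmetic and is illustrative only.

```python
# A rough reproduction of the land retirement arithmetic above, assuming
# Reclamation's 2 acre-feet-per-acre figure and the universities' 2001
# per-acre land values. Illustrative only.

target_reduction_af = 100_000   # desired reduction in irrigation diversions
yield_af_per_acre = 2.0         # Reclamation's current crop idling assumption

acres_needed = target_reduction_af / yield_af_per_acre   # 50,000 acres

low_value_per_acre = 300        # Class V soils, lowest agricultural quality
high_value_per_acre = 2_600     # Class II soils, among the better Project soils

print(f"Acres to retire: {acres_needed:,.0f}")
print(f"Cost if all low-value land:  ${acres_needed * low_value_per_acre / 1e6:.0f} million")
print(f"Cost if all high-value land: ${acres_needed * high_value_per_acre / 1e6:.0f} million")
```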
Finally, while tribal representatives and others favor significant irrigated land retirement as a means to reduce demands on the river, the extent of impacts on the agricultural economy is cause for concern in the Project farming community. Tribal representatives and downstream fishing representatives told us that irrigated land retirement is essential to restoring the balance between the supply of and demand for water in the basin. However, according to Klamath irrigators, the Klamath agricultural economy is fragile and must maintain close to current levels of agricultural acreage in production to sustain its infrastructure. Irrigators argue that retiring large amounts of irrigated farmland on the Project could eliminate or adversely impact key aspects of agricultural infrastructure, such as fuel, transportation, equipment, and fertilizer suppliers, and could disrupt many other aspects of the agricultural community. However, according to the study by Oregon State University and the University of California, retiring lands with the least productive soils and, therefore, the lowest agricultural value would have the smallest potential negative effect on the region’s agricultural economy.

Adding water storage capacity in the Klamath River Basin could provide an alternative to the water bank for river flow augmentation, and several options to either expand Upper Klamath Lake or build a separate reservoir have been considered or pursued to various extents. In general, Klamath River stakeholders—irrigators, tribes, federal entities, and others—view either option favorably as a potential solution to help balance competing water demands. However, the extent and reliability of information regarding the total cost for each water storage option, the amount of water potentially provided, the certainty and sustainability of water storage, and the environmental impacts are largely unknown.

Upper Klamath Lake (including adjoining Agency Lake) is the primary source of water for the Project and the Klamath River. To satisfy the water contracts of irrigators, as well as river flow and lake level requirements, Upper Klamath Lake must be full at the start of the irrigation season. However, when the lake exceeds its maximum storage capacity—generally due to heavy runoff before the irrigation season begins—the lake goes into “spill condition,” releasing water into the Klamath River (and eventually into the Pacific Ocean) to avoid flooding the surrounding area. Expanding the lake’s capacity by purchasing and flooding adjoining properties with water that would otherwise be spilled would enable Reclamation to preserve this water for peak demand periods—late spring to early fall—for both fish and irrigators. It would also reduce irrigation demand from these lands, leaving more water in the lake for Project and other uses.

In 1998, Reclamation prepared a report identifying numerous options for expanding the lake, but only six options were evaluated for their feasibility for water storage development. Collectively, the six options have the potential to provide approximately 100,000 gross acre-feet of water; however, according to Reclamation officials, evaporation losses would reduce the net usable water storage to about half that amount.
As shown in figure 7, the six water storage options—listed roughly from north to south and by proximity to each other—include Agency Lake Ranch, Barnes Ranch, Wood River Ranch, the Williamson River Delta Preserve, Caledonia Marsh, and Running Y Marsh.

Agency Lake Ranch is a 7,125-acre, Reclamation-owned marshland located on the west side of Agency Lake. Reclamation purchased the land in 1998 to store spill water during periods of high inflow to the lake. Agency Lake Ranch currently has the capacity to store about 13,000 gross acre-feet without flooding neighboring properties. However, it has the potential to store up to 35,000 gross acre-feet of water if the existing levees surrounding the land are raised, at an unknown cost.

Barnes Ranch is a privately owned 2,671-acre pasture bordering the west side of Agency Lake Ranch with the capacity to store 15,000 gross acre-feet of water if the levees surrounding the property are improved. If Barnes Ranch were acquired and Reclamation removed the levees bordering Agency Lake Ranch and Agency Lake, a combined total of approximately 40,000 gross acre-feet of water could be stored, and the area would potentially fill to this capacity in most years. In January 2004, Reclamation had Barnes Ranch appraised at $5.9 million, but the owners and Reclamation have not yet agreed on a purchase price.

Wood River Ranch is an approximately 3,000-acre site on the north end of Agency Lake, adjacent to Agency Lake Ranch. The Bureau of Land Management (BLM) purchased Wood River Ranch in 1994 to restore as a wetland, among other objectives. Because of its proximity to Agency Lake Ranch and Barnes Ranch, Reclamation officials would like to convert the land to store approximately 7,500 gross acre-feet of water. However, local BLM managers believe that this would not be compatible with the existing goals and objectives of the Klamath Resource Management Plan, telling us that converting the land to water storage would destroy wildlife habitat and reverse a 10-year, multimillion-dollar restoration effort accomplished with many private contributors.

The Williamson River Delta Preserve is a 7,440-acre site, located at the southern end of Agency Lake, that was converted from wetland to farmland in the 1930s and 1940s. The Nature Conservancy purchased two properties—Tulana Farms in 1996 and Goose Bay Farms in 1999—and is developing a restoration plan for the combined site. With the encouragement and financial support of Reclamation, the Nature Conservancy has considered the option of returning the properties to Upper Klamath Lake. Reclamation estimates that the preserve would add 35,000 gross acre-feet of water storage capacity, at relatively low cost with the Nature Conservancy’s collaboration.

Caledonia Marsh is a privately owned 794-acre farm on the southern end of Upper Klamath Lake with the potential capacity to store nearly 5,000 gross acre-feet of water. According to Reclamation, the owner has expressed interest in selling; however, the surrounding levees would need to be improved and the Highway 140 road bed raised to protect the neighboring property, Running Y Marsh. The cost of these improvements has not been determined.

Running Y Marsh is a privately owned 1,674-acre farm and wetland area adjacent to Caledonia Marsh with the potential to store about 10,000 gross acre-feet of water if converted to lake storage. However, because of the high value crops grown there, the owner is not currently interested in selling the property to Reclamation.
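As a rough consistency check, the sketch below tallies the gross capacities listed above against the approximately 100,000 gross acre-feet total and the roughly 50 percent evaporation loss noted earlier. Treating Agency Lake Ranch and Barnes Ranch at their combined 40,000 acre-feet, rather than summing their separate potential capacities, is a simplifying assumption.

```python
# A quick tally of the gross storage capacities listed above against the
# approximately 100,000 gross acre-feet total, with net usable storage about
# half after evaporation losses. Using the combined 40,000 acre-feet figure
# for Agency Lake Ranch plus Barnes Ranch is a simplifying assumption.

gross_capacity_af = {
    "Agency Lake Ranch + Barnes Ranch (combined)": 40_000,
    "Wood River Ranch": 7_500,
    "Williamson River Delta Preserve": 35_000,
    "Caledonia Marsh": 5_000,
    "Running Y Marsh": 10_000,
}

gross_total = sum(gross_capacity_af.values())
print(f"Gross storage: {gross_total:,} acre-feet")            # 97,500
print(f"Net after ~50% evaporation losses: {gross_total // 2:,} acre-feet")
```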
For all of these options, while it would be relatively easy to determine the amount of additional water storage provided by measuring changes in the lake surface area, there are a number of associated uncertainties and constraints. For example, since these storage areas are essentially extensions of the lake itself, filling the additional capacity is dependent upon adequate flows into the lake—if the lake does not fill to capacity, neither would the storage areas. In addition, use of the additional stored water in these areas would be constrained by the minimum lake level requirements set out by the FWS biological opinion for Upper Klamath Lake to protect the two species of sucker. As an extension of the lake, the new storage areas could not be drained below these minimum levels. Finally, the environmental impacts of developing water storage areas vary and would need to be addressed by Reclamation as part of the water storage development process.

The development of a separate reservoir would create a long-term storage area in the Klamath Basin that could far surpass the capacity of the water bank as a source of flows for the river, potentially benefiting all Klamath River stakeholders and protected species. Evaluation of such potential water storage areas has focused on Long Lake Valley, located southwest of Upper Klamath Lake. Developing Long Lake Valley into a reservoir would enable water to be stored that would otherwise be spilled into the Klamath River when Upper Klamath Lake’s water level exceeds the maximum lake elevation. Reclamation, irrigators, and others generally agree that Long Lake Valley is the most viable option currently available for new reservoir development. According to Reclamation, converting Long Lake Valley into a reservoir could yield up to 250,000 acre-feet of water, with a depth of 250 to 300 feet when full. Thus, Long Lake represents “deep” water storage, which generally contains colder water—beneficial to fish—than shallow Upper Klamath Lake can provide. Reclamation indicated that the reservoir’s 250,000 acre-foot capacity would be filled by pumping water from Upper Klamath Lake to Long Lake between March and June, using the piping system shown in figure 8. However, much like the Upper Klamath Lake expansion options, the certainty of Long Lake’s water supply depends entirely upon the availability of spill water to fill it and, according to NMFS officials, the impacts on the river of diverting these flows to a reservoir need to be studied. Once filled, Long Lake could provide a sustainable supply of water to supplement river flows. In addition, the amount of water stored by Long Lake and delivered to enhance river flows could easily be measured by metering water flow in the pipeline to and from the lake or, potentially, in a pipeline emptying directly into the Klamath River. In March 2004, Reclamation completed an initial study of the geology of Long Lake Valley, which determined that the valley floor would provide a good barrier against water leakage. Geologic investigations of Long Lake Valley are continuing in 2005. To date, Reclamation has not conducted a full feasibility study for Long Lake development, and it will not do so until a funding plan has been established. Reclamation estimates that a feasibility study would take three years to complete and would cost approximately $12 million. Subsequently, reservoir construction funds would need to be obtained.
Reliable cost estimates are not yet available, but Reclamation’s most recent projection of construction costs is about $350 million, not including real estate acquisition costs. The Long Lake development project would take at least 10 years to complete, which means that Long Lake would not address any immediate water demand issues in the Klamath Basin. Based on Reclamation’s initial study, if Reclamation can address funding, technical, and environmental impact requirements, Long Lake may offer a promising long-term storage option for the Klamath Basin.

Storage options and other potential long-term solutions to water quantity, quality, and wildlife resource issues are expected to receive greater attention in coming years under Reclamation’s Conservation Implementation Program. In addition to the water bank, NMFS’ 2002 biological opinion required Reclamation to establish such a program, and Reclamation and other stakeholders began developing the framework for future collaboration in 2003. One of the objectives of the program is the development and implementation of feasibility studies to identify opportunities for increased water storage and groundwater development alternatives. The Governors of the states of California and Oregon and heads of the Departments of the Interior, Agriculture, and Commerce, as well as the Environmental Protection Agency, signed an agreement in October 2004 to coordinate their efforts to achieve program objectives, and Reclamation is currently preparing a third draft program document for stakeholder review.

Reclamation and NMFS will have the opportunity to discuss revising some elements of the biological opinion, including the water bank, when they meet for an expected reconsultation in 2006. Reconsultation could address the following potential changes to the biological opinion, affecting Reclamation’s responsibility for river flows, its water bank obligation, and how it operates the water bank:

• Adjusting Reclamation’s level of responsibility for ensuring Klamath River flows to reflect information currently being developed regarding the water quality and quantity requirements of Klamath River fish, as well as historic natural flows of the Klamath River. Based on a recent USGS study of irrigated acreage in the Upper Basin, Reclamation—currently held responsible for ensuring 57 percent of needed flows—may suggest reducing that number to about 40 percent. Such an adjustment would not directly alter Reclamation’s water bank obligations; however, it would decrease Reclamation’s overall responsibility for ensuring Klamath River base flows by increasing the responsibilities of other basin stakeholders, such as the states and other federal agencies. According to NMFS, such a change would need to be considered within the context of the U.S. District Court’s 2003 criticism of the allocation of responsibility for providing flows.

• Not requiring a water bank in Above Average or Wet water years, thus eliminating the cost and effort of obtaining and managing the water bank when natural flows are abundant.

• Changing the method for determining water-year types from a five-tier system to a more incrementally adjustable method that would cause less dramatic changes in flow requirements, thus addressing one of the concerns raised by stakeholders. Currently being piloted by Reclamation with FWS for managing Upper Klamath Lake levels, this method would reduce the magnitude of changes and the need for significant water bank delivery recalculations.
Water shortages in the Klamath River Basin have created serious conflicts and placed Reclamation in the difficult position of balancing competing demands for water among numerous stakeholders. Over the last three years, Reclamation has demonstrated commitment and resourcefulness in this task, particularly under drought conditions, by implementing and meeting the obligations of the temporary water bank. However, whether Reclamation can continue meeting its water bank obligation using current methods is unclear, given the uncertain results of crop idling and the unknown sustainability of groundwater pumping. This uncertainty adds urgency to Reclamation and stakeholder efforts to collaboratively identify and evaluate long-term solutions. In the meantime, because the water bank acts as the primary mechanism for balancing competing demands for water, Reclamation must be able to clearly communicate to stakeholders how the water bank is managed and how water is accounted for. This information will make the management of and accountability for this public resource more transparent to all those who rely on and are affected by the water bank.

We recommend that Reclamation take steps to improve the information provided to stakeholders regarding water bank management and accounting by regularly and systematically providing—through media such as a water bank Web link or a monthly or biweekly press release—public information on the rationale and effects of management decisions related to forecasted water availability, unexpected spill conditions, or other significant events, as well as regularly updated information regarding the water bank’s status, including the amount of water bank deliveries to date.

We provided copies of our draft report to the Departments of Agriculture, Commerce, and the Interior for their review and comment. We received a written response from the Under Secretary of Commerce for Oceans and Atmosphere that includes comments from the National Oceanic and Atmospheric Administration (NOAA) and a response from Interior’s Assistant Secretary, Policy, Management and Budget that includes comments from Reclamation and BLM. Overall, NOAA stated that the report accurately reflects the history of the water bank, and Reclamation expressed appreciation for GAO’s efforts to report on the complex Klamath River Basin situation. We requested comments from Agriculture, but none were provided. Reclamation agreed with our recommendation to improve the information provided to stakeholders regarding water bank management and accounting, and it agreed to implement steps to enhance water bank communications by systematically providing stakeholders with information regarding the water bank. Reclamation said that it would add a new page to its Web site exclusively for the water bank, which will include background information, new information as it becomes available, links to relevant Web resources such as USGS’ Klamath River gauge at Iron Gate Dam, and graphics showing the status of water bank flow augmentation. This information will be updated at least biweekly, with notices posted to direct stakeholders to updated information. Reclamation plans to complete these changes to its Web site by June 30, 2005. NOAA, Reclamation, and BLM provided comments of a factual and technical nature, which we have incorporated throughout the report as appropriate. Because of the length of the technical comments provided by Reclamation and BLM, we did not reproduce them in the report.
Interior’s transmittal letter and response to our recommendation are presented in appendix III, and NOAA’s comments are presented in appendix IV. We are sending copies of this report to the Secretaries of Agriculture, Commerce, and the Interior, appropriate congressional committees, and other interested Members of Congress. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Key contributors to this report are listed in appendix V.

To determine how the Bureau of Reclamation (Reclamation) operated the water bank and how much it cost, we analyzed Reclamation’s water bank planning, contracting, and expenditure documentation. We researched and analyzed laws, regulations, the National Marine Fisheries Service’s (NMFS) biological opinion, and related court cases pertinent to the water bank and how it operates. For each year of the water bank, we reviewed and analyzed data on applications and contracts in comparison with the biological opinion requirements. We reviewed and analyzed expenditures for contracts and program administration, as well as future budget request estimates, for total costs incurred to date and expected future costs of the water bank. Finally, we interviewed staff from Reclamation, NMFS, and other relevant agencies, as well as stakeholders—including representatives from tribal, commercial fisheries, and irrigator groups—on water bank program obligations, operations, and monitoring.

For each year of the water bank program, we reviewed and analyzed data on water bank contracts to determine whether Reclamation met its water bank acquisition obligations, and we reviewed and analyzed scheduled base Klamath River flows, as well as the daily average Klamath River flows, using both U.S. Geological Survey (USGS) and PacifiCorp-generated data to calculate the augmented flows to determine whether Reclamation met its water bank delivery obligations. We interviewed staff from Reclamation and other relevant agencies, as well as stakeholders—including representatives from tribal, commercial fisheries, and irrigator groups—on water bank program obligations, operations, and monitoring.

To describe the water bank’s impact on water availability and use in the Klamath River Basin, we interviewed staff from Reclamation, USGS, the Oregon Water Resources Department, California Polytechnic State University, and the Klamath Basin Rangeland Trust. We gathered and analyzed Reclamation crop reports, a USGS study of irrigation water use, and a California Polytechnic State University study of the 2003 water bank to describe the impact of crop idling on river flows. To describe the impacts of groundwater use, we collected and analyzed Oregon Water Resources Department information on groundwater pumping, well drilling, and well deepening in Klamath County, Oregon, and USGS information on well levels in the Upper Basin. We also collected descriptions of the joint USGS/Oregon Water Resources Department study of Upper Basin groundwater and the USGS study of Reclamation’s water bank. In addition, we interviewed and obtained relevant documentation from stakeholders, including irrigators, tribes, and commercial fisheries. We did not review the water bank’s impact on fish species because the short history of the water bank makes it difficult to obtain reliable information.
To describe alternative approaches to the water bank, we collected information and interviewed staff from Reclamation and the Bureau of Land Management, as well as potential land sellers, irrigators, irrigation experts, economists, and conservationists. We also toured the Klamath Project area by plane and car to visit and observe potential irrigated land retirement options and water storage areas. In addition, we collected and analyzed documentation of potential water storage locations, a study of options for increasing water storage, and a Reclamation study of a potential new reservoir. Finally, we reviewed the requirements for coordinated efforts among stakeholders in NMFS' biological opinion and the status of basinwide planning to increase river flows.

To assess the reliability of the noncomputerized data we received, we interviewed the officials most knowledgeable about the collection and management of each data set. We assessed the relevant general and application controls and found them adequate. In addition, we reviewed the methodology of the economic and water use studies and interviewed the authors to discuss their scope, data quality, and results. Finally, we conducted tests of the reliability of computerized data. On the basis of these interviews, tests, and reviews, we concluded that the data from the various sources and studies were sufficiently reliable for the purposes of this report. We performed our work between May 2004 and February 2005 in accordance with generally accepted government auditing standards.

As shown in table 2, the total number of applications from irrigators seeking participation in the water bank decreased from 2003 to 2004; Reclamation did not solicit applications in 2002. The total number of contracts for participation has fluctuated since the inception of the water bank. Reclamation shifted its contracting emphasis from primarily crop idling in 2003 to primarily groundwater contracts in 2004. As such, the number of groundwater contracts (groundwater pumping plus groundwater substitution) has grown to represent a larger proportion of all contracts as Reclamation's water bank obligation increased, as shown in figure 9. The volume of water (acre-feet) offered in water bank applications increased by almost 50 percent from 2003 to 2004, and the volume of water Reclamation acquired through contracts has more than doubled since the water bank's inception, as shown in table 3. As shown in figure 10, from 2002 to 2004, Reclamation increased the volume of groundwater as a proportion of the total water bank acquired by contract. As shown in table 4, the total irrigated land acreage offered in water bank applications and accepted under contracts has increased since the inception of the water bank.

In addition to those individuals named above, Brad C. Dobbins, David A. Noguera, and Tama R. Weinberg made key contributions to this report. Also contributing to the report were John W. Delicath, Philip G. Farah, Curtis L. Groves, Julian P. Klazkin, Kim M. Raheb, and Monica L. Wolford.
Drought conditions along the Oregon and California border since 2000 have made it difficult for the Bureau of Reclamation (Reclamation) to meet Klamath Project irrigation demands and Klamath River flow requirements for threatened salmon. To augment river flows and avoid jeopardizing the salmon's existence, Reclamation established a multiyear water bank as part of its Klamath Project operations for 2002 through 2011. Water banks facilitate the transfer of water entitlements between users. This report addresses (1) how Reclamation operated the water bank and its cost from 2002 through 2004, (2) whether Reclamation met its annual water bank obligations each year, (3) the water bank's impact on water availability and use in the Klamath River Basin, and (4) alternative approaches for achieving the water bank's objectives.

Reclamation has changed how it operates the Klamath Project water bank as it has gained more experience, to help it meet its growing obligations and mitigate costs. For example, Reclamation initially obtained most of the water for the water bank by contracting with irrigators to either forgo irrigation altogether (crop idling) or use only well water (groundwater substitution). It later added the option to pump well water into the irrigation canals for others to use (groundwater pumping). For the period 2002 through 2004, Reclamation's water bank expenditures totaled over $12 million, and the cumulative cost could exceed $65 million through 2011.

GAO's analysis of water bank contracts and river flow records found that Reclamation met its water bank obligations by acquiring and delivering the required amount of water for 2002 through 2004. However, Reclamation has not provided stakeholders with systematic and clear information concerning the water bank's management and status, and its decision to use river flow data that are not publicly available has limited stakeholders' ability to monitor water bank activities. This has led to confusion and doubt among stakeholders about whether Reclamation met its water bank obligations.

The water bank appears to have increased the availability of water to enhance river flows by reducing the amount of water diverted for irrigation, but the actual impacts are difficult to quantify because Reclamation lacks flow measurement equipment and monitoring data for the Klamath Project. Reviews by external experts of the impacts of the 2002 and 2003 crop idling contracts indicate that significantly less water may have been obtained from these contracts than Reclamation estimated. Given the uncertainty surrounding how much water can be obtained from crop idling, in 2004 Reclamation officials decided to rely primarily upon metered groundwater wells for the water bank. However, Reclamation has since learned that groundwater aquifers under the Klamath Project, already stressed by drought conditions, have shown significant declines in water levels and have been refilling at a slower-than-normal rate in recent years. As a result, Reclamation is considering lessening its reliance on groundwater for the 2005 water bank but is uncertain whether it can meet its water bank obligations, particularly for spring flows, while increasing its reliance on crop idling.

Although several alternative approaches for achieving the water bank's objectives have been identified by Reclamation and other stakeholders, limited information is available regarding their feasibility or costs.
Some alternatives to the water bank include permanently retiring Klamath Project land from irrigation and adding new short-term or long-term storage. Each alternative has been considered to varying degrees, but significant analysis is still needed on most alternatives before any implementation decisions can be made. Meanwhile, Reclamation and the National Marine Fisheries Service are engaged in an ongoing dialogue regarding the water bank and will likely reconsult on Klamath Project operations, including the water bank, in 2006.
The Social Security Administration (SSA) manages two major federal disability programs that provide cash benefits to people with long-term disabilities: the Disability Insurance (DI) and Supplemental Security Income (SSI) programs. The DI program was enacted in 1956 and provides monthly cash benefits to severely disabled workers. SSI was enacted in 1972 as an income assistance program for aged, blind, or disabled individuals whose income and resources fall below a certain threshold. For both programs, disability for adults is defined as an inability to engage in any substantial gainful activity because of a severe physical or mental impairment. Both programs also use the same procedures for determining whether the severity of an applicant's impairment qualifies him or her for disability benefits. In 1998, almost 11 million people received a total of over $73 billion in disability benefits from these programs.

SSA's complex process for determining whether an individual qualifies for a disability benefit—the disability claims process—has been plagued by a number of long-standing problems. For example, claimants who have been dissatisfied with the initial determination and have filed an appeal frequently have had to wait more than 1-1/2 years for a final decision. Moreover, as many as two-thirds of these appealed claims were subsequently allowed by an administrative law judge (ALJ). In the early 1990s, SSA had difficulty keeping up with a rapidly growing workload, and backlogs of appealed cases waiting for a hearing grew. In response to these problems, SSA concluded that minor improvements to the disability claims process would be insufficient and embarked on an effort to fundamentally reengineer, or redesign, its process. In 1994, the agency issued an ambitious plan for redesigning the process within 6 years. However, 2 years into implementing the redesign plan, SSA had not made much progress, and we and SSA concluded that the scope of the plan was too large. The agency reevaluated its approach and, in February 1997, issued a scaled-back plan with revised milestones.

SSA's disability claims process has long been recognized as complex and fragmented. The decision about whether an individual is disabled is based on standards set forth in the Social Security Act and extensive SSA regulations and rulings. Moreover, disability decisions involve a multilevel process that spans many diverse components, including SSA's 1,298 field offices, 54 state agencies, and 140 hearing offices. This organizationally complex structure has contributed to a number of problems. For example, through the years a high percentage of claimants who were dissatisfied with their initial determinations received favorable decisions on appeal. Claimants have also waited a long time for final decisions on their eligibility. In the early 1990s, these problems were aggravated by mounting workloads, as applications for disability benefits escalated at the same time that SSA was experiencing a decline in its workforce. This, in turn, caused workloads to back up and increased the time it took claimants to receive decisions on their claims.

SSA's disability claims process, which has not changed fundamentally in over 40 years, is inherently complex and fragmented. The process contains several opportunities for appeal, and the organizational unit involved, the professional background of the adjudicator, and the procedures for making a decision on appeal all differ from those of the initial determination.
Each organizational unit has separate lines of authority and goals, without responsibility for the overall outcome of the process.

The claims process starts when an individual contacts one of SSA's 1,298 field offices across the country to apply for benefits. Field office personnel help claimants complete their applications; obtain a detailed medical and work history; and identify other, nonmedical eligibility factors. Field office personnel then forward the claims to one of 54 disability determination service (DDS) agencies that are administered by the 50 states and the District of Columbia, Guam, Puerto Rico, and the Virgin Islands. Under a unique federal-state arrangement, SSA pays state DDSs to determine whether claimants are disabled. At the DDS, a team consisting of a specially trained disability examiner and an agency physician or psychologist reviews the available medical evidence and gathers additional medical evidence, if necessary. In making the disability determination, the team follows official guidance found in SSA's Program Operations Manual System (POMS), which is based on applicable laws and SSA's regulations and rulings and also includes detailed instructions for processing cases.

If the claimant is dissatisfied with the initial determination, the claimant may request a reconsideration review within 60 days of receiving the determination. Reconsideration is also performed by the DDSs and is based on the same guidance as the initial determination but is carried out by a new adjudicative team. If the claimant is dissatisfied with the reconsidered determination, he or she has 60 days to appeal and request a hearing before an ALJ. ALJs are hearing officers located at 140 hearing offices around the country that are administered by SSA's Office of Hearings and Appeals (OHA). ALJs review the file to determine if additional medical evidence is needed, conduct a hearing, and render a decision. ALJs conduct de novo hearings; that is, ALJs may consider or develop new evidence and are not bound by DDS determinations. These hearings often present the first opportunity for face-to-face contact between claimants and the individuals deciding their eligibility. In rendering a decision, ALJs do not follow the POMS but rely directly on applicable laws and SSA regulations and rulings. ALJs are subject to the Administrative Procedure Act, which affords them some independence in making a disability decision.

Finally, if the ALJ denies the claim, the claimant has 60 days to request a review by the Appeals Council, an independent review group attached to OHA and composed of administrative appeals judges. The Appeals Council may decide to dismiss the request for review, grant the request and issue its own decision, or remand the case back to the ALJ. The Appeals Council is the claimant's fourth and final level of administrative review. Upon exhausting these administrative remedies, the claimant may file a complaint with a federal court. Figure 1.1 shows the four decision points in SSA's current disability claims process.

SSA's approach to reviewing the quality of the disability decision reflects the complex and fragmented nature of the process. As we have previously reported, current quality assurance reviews focus on DDS determinations and ALJ decisions in isolation from one another, and the approach for reviewing DDS determinations differs from the approach for reviewing ALJ decisions. Reviews of DDS determinations are conducted by staff from SSA's Office of Quality Assurance (OQA).
These reviews focus heavily on DI claims that have been allowed. In conducting their quality review, OQA staff use the same approach, policy, and procedures that the DDSs use in reaching a determination; that is, they rely on the POMS. In contrast, only a small number of ALJ allowance decisions are selected for review by SSA's Appeals Council. Reviews of ALJ decisions consist predominantly of claims denied by ALJs and appealed by claimants to the Appeals Council. In reviewing ALJ decisions, the Appeals Council relies on the same laws and SSA regulations and rulings as those used by ALJs.

SSA's disability claims process has long suffered from problems associated with its complexity and fragmentation. Among these problems are the high allowance rates by ALJs of appealed DDS determinations. In fiscal year 1993, before SSA issued its redesign plan, 68 percent of determinations that were appealed received favorable decisions at the hearing level. High ALJ allowance rates have been attributed to a number of factors. According to SSA, an ALJ might arrive at a different decision than a DDS because the claimant's condition has worsened, or because ALJs are more likely than DDS decisionmakers to meet with claimants face-to-face, and thus have access to more or different information. However, SSA studies have also found that DDS and ALJ adjudicators often arrive at different conclusions even when presented with the same evidence. Disability decisions require difficult judgments, and adjudicators sometimes reach different conclusions. Further, DDS and ALJ adjudicators use medical expertise differently and rely on different documents for guidance when making decisions. Finally, training has not been delivered consistently or simultaneously to all groups of decisionmakers.

This high rate of allowances at the hearing level has raised questions about the fairness, integrity, and cost of SSA's disability program. In fiscal year 1998, the cost of making a determination at the DDS level was $547 per case, while the cost of an ALJ decision was an additional $1,385. In general, the costs of administering these disability programs reflect the demanding nature of the process: in fiscal year 1998, SSA spent about $4.3 billion, or almost 66 percent of its administrative budget, on the disability programs, even though disability beneficiaries account for only 21 percent of the agency's total number of beneficiaries.

Another long-recognized problem with SSA's claims process is that many claimants must wait a long time for their final decisions. Because of the multiple levels and decision points in the process, a great deal of time passes while a claimant's file is passed from one employee or office to another. Delays are also caused by the need to obtain extensive medical evidence from health care providers to document the basis for disability. One SSA study conducted in 1993 showed that an average claimant waited up to 155 days from initial contact with SSA until receiving an initial determination notice, during which time 16 to 26 employees might have handled the claim. Only 13 hours of these 155 days were spent on "task time"—that is, time spent working directly on the case. Further, the study found that it could take up to 550 days from initial contact to receipt of a hearing decision, with only 32 hours of this time spent on task time.
As a result of these multiple handoffs and the general complexity of the process, SSA believes claimants do not understand the process and have had difficulty obtaining meaningful information about the status of their claims.

In the early 1990s, SSA's problems with its disability claims process came to the fore as the growing workload placed additional pressure on SSA's already inefficient process. The number of initial claims had been rising steadily, but it increased dramatically between fiscal years 1991 and 1993—from about 3 million to 3.9 million, or almost 32 percent. Moreover, future increases were expected. At the same time, SSA had to manage this growing workload with staffing levels that had been falling since the 1980s. As a result, SSA's disability workload began to accumulate during this period. Most dramatically, the number of pending hearings almost doubled between 1991 and 1993—from 183,471 to 357,564.

To address these long-standing problems and dramatically improve customer service, SSA embarked on a plan in 1994 to radically redesign its disability claims process by completing 83 initiatives over 6 years. We concluded in a 1996 report, however, that 2 years into the plan, SSA had yet to achieve significant progress. SSA's slow progress was due in part to the overly ambitious nature of the redesign plan, the complexity of the redesign initiatives, and inconsistent stakeholder support and cooperation.

Concerned about the inefficiency of the disability claims process and its effect on the quality of service to the public, SSA's leadership decided in 1993 that the agency needed a strategy for radically improving the process. SSA reviewed reengineering efforts and approaches in other organizations and concluded that process reengineering was critical to achieving its strategic objective of providing world-class service. SSA then created a Disability Process Redesign Team composed of 18 SSA and state DDS employees with varied experience and backgrounds and charged it with fundamentally rethinking and redesigning SSA's claims process from start to finish. Consistent with commonly held reengineering principles, the team collected extensive information on the process itself and options for improving it. These efforts culminated in a redesign proposal that was widely distributed throughout SSA and the state DDSs and to interested public and private individuals and organizations to solicit comments, concerns, and ideas for improvement. The proposal was also published in the Federal Register, and a comment period elicited 6,000 written responses, which were considered as SSA finalized its initial redesign proposal.

In September 1994, SSA issued its vision for fundamentally redesigning the disability claims process. SSA's vision included five objectives for the redesigned process: (1) making the process "user-friendly," (2) allowing claims that should be allowed at the earliest possible level, (3) making the disability decision as quickly as possible, (4) making the process efficient, and (5) providing a satisfying work environment for employees. SSA's vision was based on more consistent guidance and training for all adjudicators; an automated and simpler claim intake and appeal process; a single, comprehensive quality review process; and a simplified method for making disability decisions. From the claimant's perspective, the redesigned process was to offer a range of options for filing a claim; provide a single point of contact; and have fewer decision points, as shown in figure 1.2.
SSA had high expectations for its proposed redesigned process. The agency projected that the combined changes to the process would, by fiscal year 1997, result in a 25-percent improvement in productivity and customer service over projected fiscal year 1994 levels, and a further 25 percent by the end of fiscal year 2000—all without a decrease in decisional accuracy. SSA did not expect the overall redesigned process to alter total benefits paid to claimants, but it estimated that the changes would result in administrative cost savings of $704 million through fiscal year 2001, and an additional $305 million annually thereafter.

After putting forth its broad vision, SSA issued in November 1994 a more detailed plan for developing, testing, and finally implementing proposed disability process improvements. The plan originally included 83 initiatives to be accomplished over 6 years. SSA recognized in its implementation plan that most, if not all, of the proposed process changes were interdependent, and that the development, testing, and implementation of related changes would need to be properly sequenced. For example, SSA recognized that all activities and associated benefits were dependent on improvements to its computer system, which were not expected to be completed until the end of the 6-year time frame.

In 1996 and 1997, we issued several reports that raised concerns regarding SSA's redesign effort. These concerns included, among other things, a lack of progress and demonstrable results. For example, we reported that SSA had not fully completed any of the 38 near-term initiatives it had hoped to accomplish in the first 2 years. As a result, SSA did not have any concrete results available to demonstrate the efficacy of its proposed initiatives.

SSA's slow progress was due in part to its overly ambitious redesign plan and the complexity of some of its redesign initiatives. We reported that SSA did not follow best practices when it decided to take on a large number of initiatives concurrently: successful reengineering calls for focusing on a small number of initiatives at one time, whereas SSA decided to tackle 38 initiatives in the first 2 years of its redesign effort. Moreover, some of these initiatives were large in scope and very complex. For example, scheduled implementation of SSA's large and complicated initiative for redesigning its computer system was delayed because of problems identified during testing.

Some aspects of SSA's redesign plan faced considerable opposition. As part of its redesign effort, SSA had identified over 100 individual groups—both internal and external to SSA—as having a stake in the process; in many cases, their involvement was critical to the entire disability claims process. These stakeholder groups—which included various SSA employee unions and associations, state entities and organizations, congressional committees, other federal agencies, and advocacy groups—had a wide variety of views on SSA's plan, and some opposed specific initiatives. For example, SSA's plan called for a new position—a disability claims manager (DCM)—that would combine the duties of field office and DDS personnel into one position. The DCM represented a significant change to the current process, and SSA faced numerous challenges in obtaining stakeholder cooperation for this key initiative.
In light of these difficulties and in order to increase SSA's chance of success, we recommended in our December 1996 report that SSA reduce the scope of its redesign effort by focusing on those initiatives considered most crucial to improving the process and testing those initiatives together, in an integrated fashion, at a few sites. In another 1996 report, we recommended that, concurrent with the first phase of its DCM test, SSA test alternatives that we believed were more feasible and compare their relative costs and benefits with those of the DCM before deciding to increase the number of DCM test positions. Later, we supported SSA's redesign efforts associated with its initiative to improve the consistency of disability decision-making and recommended, among other things, that SSA establish a performance goal for this key redesign initiative.

As a result of our input, the overall lack of progress, and stakeholder concerns, SSA reassessed its approach to redesign and issued a revised plan in February 1997. The new plan focused on eight key initiatives, each one intended to effect a major change to the system. The plan also included updated tasks and milestones for each key initiative and expanded the time frame for the entire redesign project from 6 to 9 years, ending in 2003. The eight initiatives and their milestones are described in figure 1.3.

As shown in figure 1.3, five of the eight initiatives had relatively near-term deadlines—that is, before the end of fiscal year 1998—for completing a key test or beginning implementation. Two of these initiatives involve testing new positions and, if test results warrant, implementing them on a stand-alone basis—that is, independently of other, related initiatives. One new position, the single decision maker (SDM), would expand the DDS disability examiner's authority to determine certain claims without relying on the DDS physician; the SDM would instead use the physician as a consultant on an as-needed basis. The SDM was expected to make the initial determination process faster and more efficient by eliminating handoffs to DDS physicians in those cases in which the appropriate determination was clear. Another new position, the adjudication officer (AO), would review cases that were appealed to the hearing level. The AO was to help claimants understand the appeals process and would have authority to grant disability benefits in cases in which it was clear that the claim merited a fully favorable decision. In all other cases, the AO was to make sure that all pertinent information was included in the case file and was fully explained, thus facilitating its use by the ALJ at the next level of appeal. By performing these tasks, the AO was expected to improve customer service and make the appeals process faster and more efficient.

A third near-term initiative is the full process model (FPM) test. The FPM combines five proposed changes into a single test to investigate their interactive effects on creating a more efficient process and better customer service.
The five tested changes are (1) creating the SDM position; (2) creating the AO position; (3) establishing a new predecision interview, in which the SDM would interview claimants when the evidence did not support a fully favorable determination in order to obtain any additional information before making the final determination; (4) eliminating the reconsideration step; and (5) eliminating the Appeals Council step—that is, removing the claimant's option to request a review by the Appeals Council of an ALJ decision.

The two other near-term initiatives—process unification and quality assurance—are considered essential elements for achieving correct decisions in the new disability claims process. The intent of the process unification initiative was to achieve similar results on similar cases at all stages of the process. To this end, SSA planned a number of activities, including conducting ongoing training; clarifying policies; and developing unified guidance, called a single presentation of policy, for making disability decisions across all levels of the process. SSA also planned to complete eight additional subinitiatives—all designed to help reduce inconsistencies in decision-making between the DDS and ALJ levels.

SSA's quality assurance initiative included near-term activities in two areas. First, as part of each of the other major redesign initiatives, SSA planned to develop and test "in-line" quality assurance approaches—such as training, mentoring, and peer review—in order to build quality into the process before decisions are made. Second, SSA planned to develop and test a single "end-of-line" quality review mechanism that covered the entire adjudicatory process from beginning to end and provided data on problems or failures in a component's in-line quality assurance process. Appendix I provides additional information on SSA's five near-term redesign initiatives.

The Chairman of the House Subcommittee on Social Security, Committee on Ways and Means, asked us to (1) assess SSA's efforts to redesign its disability claims process and (2) identify any actions needed to better ensure future progress. We agreed to focus our work on the five initiatives in SSA's scaled-back plan that have relatively near-term dates for testing, implementation, or both: the SDM, AO, FPM, process unification, and quality assurance initiatives.

In assessing SSA's redesign experience, we obtained documents from and interviewed SSA officials responsible for planning, managing, and evaluating redesign efforts. We visited several DDS and hearing office test sites and interviewed test participants and managers in Richmond, California; Brooklyn, New York; Raleigh, North Carolina; and Providence and Warwick, Rhode Island. We also interviewed SSA regional officials with responsibility for overseeing or coordinating redesign efforts within their regions, as well as representatives of nine major stakeholder groups, to obtain their views on SSA's specific initiatives and general approach for redesign. Finally, we reviewed the literature and interviewed experts on business process reengineering. We conducted our work between August 1997 and November 1998 in accordance with generally accepted government auditing standards, with the following exception: we did not independently verify agency data, including test data on redesign initiatives. We did obtain information from SSA on the steps it took to obtain and verify the test data and any problems associated with them.
We have noted our concerns regarding the validity and reliability of the data in the report, where appropriate. We obtained comments from SSA officials responsible for the redesign tests, which we have summarized in chapter 4.

After 4 years of redesign efforts, SSA has made only limited progress toward improving its disability claims process. While narrowing the focus of its redesign plan has helped, SSA has continued to miss milestones and has not clearly demonstrated that the proposed initiatives would significantly improve the current process. As a result, SSA has had to defer service improvements and reduce estimated savings. The agency's limited progress has resulted, in part, from SSA's overly ambitious strategy for testing and implementing its redesign initiatives. Conducting a number of tests and other redesign activities simultaneously proved to be too difficult to keep on track. In addition, problems with SSA's approach to designing and managing its tests of new initiatives contributed to marginal and inconclusive test results and made it more difficult for SSA to discern how a tested initiative would operate if implemented on a widespread basis.

SSA has made only limited progress in improving its disability claims process, despite having fewer initiatives in its revised redesign plan than in the original plan. The agency has not met most of its adjusted milestones for testing and implementing its five near-term initiatives. Moreover, results from SSA's stand-alone tests of two new decisionmaker positions, the SDM and the AO, were not compelling and did not support broader implementation. Therefore, SSA decided to wait for preliminary results of its integrated test, which has in fact produced some promising results. In addition, SSA has made progress under its process unification initiative, such as providing training and clarifying policy, and agency officials believe the actions taken thus far have had a positive effect on customer service. Overall, however, as a result of missed milestones and disappointing test results, SSA has deferred many other process improvements and reduced its redesign expectations for administrative savings.

Even under its scaled-back plan, SSA continues to experience delays. As of October 1998, the agency was behind schedule on all five of the plan's near-term initiatives. After more than 3 years of testing, SSA had yet to complete its test of the AO decisionmaker position and, for reasons discussed in the next section, has delayed its decision on whether to implement the SDM and AO positions. Also, SSA did not complete its assessment of the FPM test results in fiscal year 1998 as scheduled.

SSA has completed some of its planned activities under its process unification initiative, but it has missed other key implementation deadlines. The agency has clarified key policies and, since 1996, has issued policy instructions in the same format for all adjudicators. SSA has also provided an unprecedented training program involving 15,000 decisionmakers and quality reviewers from key components of the disability claims process and has adopted process unification principles for its ongoing training program by providing the same training to all adjudicators. However, SSA has experienced delays in several other planned activities. For example, the agency is behind schedule on a test to study the effect of requiring DDS adjudicators to more fully document the rationale they used in making particular disability determinations.
SSA hopes this more detailed explanation will reduce decisional inconsistencies.

SSA has begun work on its quality assurance initiative, but this effort has also been delayed. As part of its tests of other redesign initiatives, SSA has been exploring "in-line" quality assurance approaches—such as training and mentoring—that are intended to build quality into the process before decisions are made. SSA planned to institute these practices nationwide when it implemented the other redesign initiatives; however, delays in implementing the initiatives have delayed the widespread use of these quality assurance practices. In addition, SSA is more than a year behind in developing a single end-of-line review mechanism. The agency planned to develop one quality standard for its end-of-line reviews in fiscal year 1997 and to test its use in fiscal year 1998. However, as of the end of fiscal year 1998, SSA had not reached internal agreement on what that single quality review standard should be. Key milestones and the status of SSA's five near-term initiatives are summarized in table 2.1. Additional information on SSA's efforts to meet near-term milestones—including specific actions taken to date and the nature and extent of delays—can be found in appendix II.

As of October 1998, SSA had not clearly demonstrated that its proposed changes would achieve the desired improvements in the disability claims process. SSA had expected the new SDM and AO positions to significantly improve efficiency and processing time without sacrificing the quality of decision-making. However, results from the stand-alone tests of these positions have been largely disappointing and, in some cases, inconclusive. As a result, SSA decided to postpone implementation decisions on these two initiatives until results from the agency's integrated FPM test were available.

For example, SSA had hoped that permitting the SDM to make disability determinations independently, using the DDS physician only on an as-needed basis, would reduce the time spent on the determination process. However, early test results revealed that the SDM position would, on average, reduce by only 1 day the time claimants waited for an initial determination and by only 3.6 minutes the time personnel actually spent working on the case. Moreover, SDM determinations for certain impairment categories were less accurate than under the current system. However, early results from the test of the FPM initiative, in which the SDM was tested with other process changes, have shown more promise for the SDM. SSA's final evaluation of the FPM test for four of the five process changes will not be available before October 1999. Table 2.2 shows the final or most recent results of the tests of the three initiatives. Appendix III contains more detailed information on test results for these three initiatives.

SSA has not made enough progress on its two other near-term initiatives, process unification and quality assurance, to fully assess their efficacy. Although SSA has not completed many of its planned measures for these two initiatives, some of the early process unification actions may have had a positive effect on customer service. SSA reported that it has accurately paid benefits to approximately 90,000 people 500 days earlier in the process than otherwise might have been the case.
While SSA generally did not test these process unification initiatives before implementing them, officials believe that the increase in allowances made earlier in the process is in large part due to the agency's process unification efforts. At the same time, these officials noted that other factors can influence allowance rates. Therefore, without conducting carefully structured tests, it is difficult to isolate the effects of actions taken by the agency.

As a result of the delays and disappointing test results, SSA has decreased projected administrative savings and postponed the anticipated date for realizing any savings. In 1997, SSA projected savings of 12,086 staff-years for fiscal years 1998 through 2002 from implementing several process changes. SSA planned to use some of these staff-year savings to help with other workloads. Instead, in 1998, SSA both decreased its savings projections and postponed the date it expected to realize savings, reducing its projected staff-year savings to 7,207 through fiscal year 2003. Table 2.3 shows how SSA's projected staff-year savings changed from its 1998 to its 1999 President's budget.

More important, test results have not provided a compelling case for SSA to make these changes and thereby improve customer service as quickly as it had hoped. Overall processing times have not changed significantly since the beginning of redesign; that is, while processing times have decreased at the initial level, they have increased at the ALJ level. On the other hand, more allowances are being made earlier in the process, and SSA attributes this to its process unification efforts, which were intended to improve customer service without significantly increasing the overall cost of providing benefits.

SSA's difficulties in achieving appreciable improvements in its disability claims process have been caused, in part, by the scope of SSA's revised plan and the agency's strategy for testing its proposed process changes. Much like its original plan, SSA's February 1997 plan was designed to achieve quick and major improvements on many fronts simultaneously in response to the pressing problems with the claims process. However, as with the original plan, SSA's revised plan proved to be too ambitious and difficult to manage within established time frames. Moreover, SSA's decision to conduct stand-alone tests contributed to marginal SDM and AO test results, and weaknesses in the design and management of the AO test made some of its results unreliable. Finally, difficulties SSA experienced with testing changes in an operational environment raised questions about how a tested initiative would operate if implemented.

Like its original plan, SSA's revised plan was designed to provide a number of near-term, visible improvements, while also laying the foundation for long-term changes. To accomplish this, SSA acted to make progress on a number of fronts simultaneously. For example, hoping to alleviate growing workloads at the appellate level, SSA began testing and planned to implement the AO position independently of other initiatives, even though certain changes that could support the position, such as the redesigned computer system, were still being developed. SSA also began testing and planned to implement the SDM position by itself because SSA believed it could achieve quick and decisive improvements through this position. The agency believed these quick improvements would build momentum for redesign and increase stakeholder support.
While the AO and SDM tests were still ongoing, SSA began its FPM test, which investigated the interactive effects of five process changes together: creating the new SDM and AO decisionmaker positions, adding the predecision interview with the claimant, eliminating the reconsideration step, and eliminating the claimant's option to request a review by the Appeals Council. In addition, SSA began testing and developing subinitiatives under its process unification and quality assurance initiatives and under the three remaining longer-term initiatives. Given the urgent need to fix the process, SSA considered this ambitious approach appropriate as well as consistent with reengineering theory, which at that time generally called for short time lines for testing and implementing major process changes.

In addition to this multifaceted approach, SSA decided for several reasons to conduct its tests of the proposed redesign changes at many sites and to involve numerous participants. First, officials believed this approach would build trust among employees and other stakeholders, who feared that redesign would negatively affect them. Second, SSA believed it needed to use a large number of test cases to produce statistically valid information in key areas. For example, SSA wanted sufficient data to determine the impact of redesign initiatives on the accuracy of SDM determinations in each major category of impairment. Finally, SSA wanted to have enough data to demonstrate the impact that changes to the process would have on benefit outlays. SSA officials told us that Office of Management and Budget officials were concerned that the proposed changes to the claims process could result in large, unanticipated increases in benefit outlays. Because of the size of the disability programs, even a small increase in the percentage of claimants awarded benefits can result in a significant increase in program costs. For example, SSA officials have roughly estimated that a 1-percent increase in allowances in the disability programs for a period of 10 years could result in an increase of $11 billion in the total benefits paid to beneficiaries—that is, program costs—during that same period.

As a result of SSA's decision to conduct many tests simultaneously, at one point SSA was testing four near-term initiatives and training test participants for another, longer-term initiative, the DCM. These tests were being conducted at more than 100 sites and involved over 1,000 participants. Table 2.4 shows SSA's testing schedule, including numbers of sites and participants.

Despite SSA's good intentions, its scaled-back plan still proved to be too ambitious, and the agency had difficulty keeping it on track. Conducting several large tests that overlapped in time consumed a great deal of management attention and resources. In addition to developing the test plan, implementing and monitoring the test, and collecting and analyzing data, each test involved negotiating and coordinating activities with test sites, test participants, employee unions, and other stakeholders. This large array of testing and evaluation activities made it difficult for SSA to stay on schedule and simultaneously maintain sufficient focus on other redesign efforts—such as process unification and quality assurance. Unrealistic milestones for specific initiatives also contributed to missed deadlines.
For example, SSA allowed itself only 17 months to conduct the FPM test and assess the results, even though it can take up to 21 months for a test case to make its way through the entire disability claims process. In addition, SSA's milestones for the eight process unification subinitiatives were probably too ambitious (they did not include sufficient time to conduct needed tests or make procedural changes), especially given the overall magnitude of SSA's redesign efforts and the complexity of the problems these subinitiatives are intended to address. Moreover, some of the factors that contributed to differences between decisions made by DDS adjudicators and ALJs have evolved over a number of years and involve sensitive legal issues.

Finally, other competing workloads placed considerable strain on SSA's ability to manage the overall redesign effort. Besides the redesign initiatives, disability program officials and staff had to cope with additional unanticipated duties and responsibilities. For example, legislation that reformed the nation's welfare program in 1996 also required that children receiving benefits under the SSI program meet a stricter definition of disability than had been applied in the past. As a result, during fiscal year 1997, when many redesign initiatives were being tested, SSA's disability staff also had to plan and execute a review of the eligibility of over 288,000 children receiving SSI benefits.

SSA's decision to test its AO and SDM initiatives independently of related initiatives contributed to the disappointing test results. SSA conducted these stand-alone tests because it wanted to institute the two positions quickly. However, as initially envisioned, these initiatives were expected to result in process improvements and administrative savings in concert with other initiatives. Tested alone, the positions did not demonstrate potential for significantly improving the process. To illustrate, at the very early stages of its redesign effort, SSA developed expectations for AO productivity assuming that the AO would be operating in a completely redesigned environment. However, the AO test did not include supporting initiatives, such as a redesigned computer system, and, consequently, AO productivity was far below SSA's expectations.

Similarly, the SDM was expected to be operating in a redesigned environment that included, among other changes, the new responsibility of conducting a predecision interview with claimants. The results of the stand-alone SDM test indicated a decline in the accuracy of initial determinations; on the other hand, the integrated FPM test indicated that adding the predecision interview to the SDM's responsibilities may improve accuracy, as compared with the current process. This improved accuracy may have resulted because SDMs collected more or better data during the predecision interview or because SDMs performed their job more thoroughly in preparation for a meeting with the claimant. While SSA could not have predicted the precise impact of not including a particular process change in its stand-alone tests, the agency understood from the outset of its redesign effort that proposed changes were closely linked and that they depended on each other—especially on computer supports—to dramatically improve the process.
Overall, the decision to conduct stand-alone tests caused delays, did not result in the efficient use of resources, and did not achieve the agency's goal of quickly building trust and enthusiasm among those who resisted the changes. For example, despite the improved performance of the SDM in the FPM test, pockets of opposition to the SDM, particularly among groups representing some DDS physicians, still existed. While groups that perceive themselves to be negatively affected by change may not be swayed on the basis of clear and positive test results, marginal or inconclusive results provide detractors with a firmer basis to oppose change.

The AO test suffered from a number of design and management flaws that raised questions about the reliability of certain test results. For example, to ensure that AO sites were staffed with the best employees possible, SSA selected test participants from a national pool and temporarily relocated them to their preferred locations. Because the test lasted for some time, many of these employees decided to return to their home units, and SSA had to replace them with new, less experienced employees. Replacing test participants created instability in the test environment that negatively affected the test results. In addition, SSA did not arrange for AOs to have necessary supports (such as computers, clerical assistance, supervisors, or feedback from ALJs), which contributed to poor results. Consequently, SSA took steps to refine the AO test and provided additional supports, including training and feedback, to test participants. Accuracy and productivity subsequently improved, although productivity did not reach the level originally expected by SSA.

Despite these improvements, other problems with how the test cases were handled made it difficult for SSA to assess the efficacy of the AO position. Under the proposed process, an AO cannot deny a claim; when an AO does not allow a case, the AO is required to make sure that all pertinent information is properly arranged in the case file and to prepare a thorough explanation of all medical evidence so that the case can move expeditiously to an ALJ hearing. To fairly assess the impact that the AO had on processing time at the appellate level, SSA planned to compare cases prepared by AOs with a small group of control cases in which no AO had been involved. The two groups were to be handled in a comparable manner; for example, both sets of cases were to be promptly scheduled for hearings. However, in many instances OHA staff did not follow instructions concerning how the cases were to be handled. Because the number of control cases was relatively small, excluding the improperly handled cases from the analysis left too few usable control cases to permit a valid comparison. In addition, SSA did not design its test to determine the overall impact of the AO-prepared cases on the quality of decisions at the next appellate level. Without reliable data on its control group or sufficient data on the impact on quality, SSA could not fully assess the effect of the AO position on the claims process.

SSA's other tests—of the SDM and FPM—suffered from design problems that stemmed largely from difficulties with trying to conduct a test in a "real world" operational environment.
While the SDM and FPM tests provided information and insight into the efficacy of these two concepts, operational limitations made it difficult for SSA to conduct a statistically valid test and conclusively demonstrate how a tested initiative would operate if implemented.

To a lesser extent than with the AO test, SSA's test of its SDM initiative also provided incomplete information and limited assurance that the initiative would perform as tested. For example, under the current process, 50 percent of DDS allowance determinations are reviewed by regional quality assurance staff, and errors are returned to the adjudicator for correction. Under the SDM stand-alone test, however, 100 percent of all determinations—allowances and denials—were reviewed by SSA quality assurance staff, with errors returned for correction. As a result, a large number of cases were returned to SDM adjudicators even though, on average, there was not a large difference in error rates between the SDM and the current process. SDM test participants and other DDS officials told us that this 100-percent review probably caused test participants to rely more heavily on agency physicians than they might have otherwise. In addition, because SSA does not have administrative control over state DDS programs—which are under the direction of state governors—the agency was not able to select a strictly random group of test sites or participants; nevertheless, SSA officials believe that the participant selection methods they used came as close to random as possible, given these constraints. Moreover, because workloads and production capacity varied at the sites, SSA could not dictate the number of test cases at each site and was therefore unable to distribute the test caseload in a representative manner. Finally, the test was not initially designed to collect data on test cases as they moved beyond the initial determination level to the appeals level—data that would have helped determine the impact of the SDM on overall appeal rates, processing time, efficiency, and quality of determinations. SSA has since modified its approach to collect some of this information.

In designing the FPM test, SSA overcame some, but not all, of the problems experienced with previous tests. For example, on the basis of an analysis of state characteristics, SSA was able to persuade states that it believed were nationally representative to participate in the test. SSA also decided to track cases through the entire disability claims process, rather than through the initial determination level only. To further ensure a sound test design, SSA hired a consulting firm to independently evaluate the design of the test. While the firm found the test design to be basically sound, it made several suggestions to improve the test and better ensure stakeholder confidence in the validity of the test results. SSA was not able to make all the recommended changes, however. For example, because of state union-management agreements, SSA was unable to obtain data on the qualifications of employees to ensure that test participants were representative of all employees, as recommended by the consultant. In addition, contrary to the consultant's recommendation, SSA did not mitigate the impact of the 100-percent review of SDM determinations for quality, which may lead to some of the same problems experienced with the SDM stand-alone test.
While SSA has experienced, and continues to face, many difficulties with its redesign effort, the agency can still take actions to increase its chances for future progress. As SSA continues its redesign work, it has an opportunity to apply lessons learned from its 4 years of reengineering experience, as well as from other commonly accepted reengineering and management best practices. SSA has already begun to apply some lessons it has learned, such as strengthening executive oversight of its redesign effort. However, the extraordinary difficulty of the task at hand and the performance shortcomings previously experienced suggest that these steps might not be enough. Other fundamental changes in SSA's approach will probably be necessary. In particular, although more focused than its original plan, SSA's current redesign plan is still very large in scope and difficult to manage, and the successful completion of key initiatives will likely require that SSA scale back its near-term efforts even further. SSA can also modify its testing approach to avoid pitfalls encountered in the past. As it moves to implement changes that appeared efficacious in a testing environment, SSA can ensure that it has adequate performance measures and goals to assess changes to the process and to provide early warning of problems, as well as adequate quality assurance processes to guard against unanticipated results.

The need for SSA to improve its disability claims process continues today. SSA's large pending workload persists, especially at the hearing level. The pending workload at the hearing level grew from 357,564 cases in 1993 to 483,712 in 1997. In addition, the average length of time it took to receive a hearing decision upon appeal also grew in the 1990s—from 238 days in 1993 to 386 in 1997. The dramatic growth in initial applications for disability benefits that contributed to these increases and exacerbated long-standing problems has ended. In fact, in recent years the number of individuals applying for disability benefits has declined, which has in turn helped reduce the 1998 appellate backlog to 384,313 cases and appellate processing time to 371 days. However, no one knows how long this decline will last. An economic downturn could, at any time, increase unemployment and drive up both the demand for disability benefits and the number of applications. Moreover, the number of applications for disability benefits can be dramatically affected by court cases and changes in the law, such as the possibility of congressional action to increase the retirement age. Finally, SSA expects claims to increase again beginning in 2000, when the eligibility age for full Social Security retirement benefits begins to rise from 65 to 67 years, and more dramatically by 2010, as the baby boom generation approaches its disability-prone years. Taken together, present and future workloads highlight the continuing pressure on SSA to move expeditiously to improve its disability claims process.

Many steps remain to be taken under the agency's February 1997 redesign plan. As of October 1998, the agency was continuing to test the AO, SDM, and FPM initiatives. Should SSA decide to implement any of these positions or process changes, it will face numerous steps ahead. For example, SSA will need to seek changes in the law or develop new regulations for many of the changes it is considering, a time-consuming and multistep process.
For some of the initiatives, such as the SDM, SSA will also need to provide for training, facilities, equipment, and various clerical and managerial supports. In some cases, SSA will need to develop plans for implementing changes in phases, such as installing new positions at a small number of sites each month. In addition, SSA must guard against unwanted effects that could result from making changes to one part of the process without adequately addressing their impact upon other parts. For example, should SSA decide to eliminate the reconsideration step, it will need to anticipate, and take steps to guard against, the development of larger backlogs at OHA caused by the speedier movement of cases through the process to that level. While SSA has made important progress, much remains to be accomplished on two other important near-term initiatives: process unification and quality assurance. For example, under its process unification initiative, SSA intends to review and revise established regulations to develop its planned single presentation of policy—a time-consuming task to which SSA has not yet been able to devote adequate resources. Also under process unification, SSA intends to continue providing systematic ongoing training to adjudicators at all decision-making levels and to continue work on several remaining subinitiatives. Under its quality assurance initiative, SSA still needs to ensure that adequate in-line quality assurance procedures are in place for any changes it makes to the process. SSA is also still trying to reach agreement on a single “end-of-line” quality review mechanism for the whole disability claims process. Once agreement is reached, SSA will need to test this mechanism. As discussed in chapter 2, developing and testing initiatives can involve a substantial amount of time and effort and require the cooperation of numerous stakeholders. If SSA continues its redesign effort as planned, the agency has even more matters to contend with for its three longer-term initiatives: the DCM; a simplified methodology for making disability decisions; and the Redesigned Disability System (RDS), SSA’s new disability computer system. All three involve major operational changes and are the furthest from implementation. The DCM combines the duties of SSA field office personnel and state disability examiners and will require legislative changes before it can be implemented. As of October 1998, SSA was still conducting the first of three lengthy test phases that precede full implementation of the DCM. The simplified decision methodology initiative is still in the developmental stage, and much more research needs to be accomplished before SSA can begin laboratory testing. Finally, SSA is experiencing a number of problems with its proposed RDS, a system that is viewed as key to many of the planned process efficiencies. In January 1998, we reported that software development problems had delayed the scheduled implementation of RDS by more than 2 years. Later in 1998, we reported that SSA had experienced problems and delays in its RDS pilot effort, initiated in August 1997 to assess the performance, costs, and benefits of RDS. For example, systems officials stated that, using RDS, the reported productivity of claims representatives in the SSA field office dropped. Systems officials also stated that because the RDS software had not performed as anticipated, SSA had engaged an independent contractor to evaluate and recommend options for proceeding with RDS.
This effort is expected to further delay SSA’s national rollout of the new disability computer system. See table 3.1 for the steps that have yet to be taken under the revised plan. Even if SSA successfully tests and implements all of the redesign initiatives included in the February 1997 updated plan, it is unlikely that all of the problems that gave rise to SSA’s redesign effort in the first place will be satisfactorily resolved. As we have noted, test results to date show only modest improvements in operations, and budgetary savings will not be as large as originally anticipated. Moreover, except for the AO initiative, most of SSA’s redesign efforts to date are focused on improving the process at the initial determination level, leaving problems at the ALJ level largely unresolved. These problems, which include lengthy processing times and large backlogs of cases at hearing offices, are among the most pressing that SSA faces with the claims process and will require additional solutions. SSA has already taken actions to revise its approach and apply some lessons learned from its early efforts with redesign, including formalizing high-level executive oversight, working to improve test design, and rethinking its strategy for communicating with stakeholders. However, these efforts may not be enough to ensure success. Because of the unique barriers to change inherent in governmental operations, redesign is particularly challenging for government agencies, and SSA may need to consider additional changes in its approach to improve its chances of making tangible future progress. In its 1997 plan, SSA established a new management structure to oversee redesign efforts in order to make its senior managers more accountable and involved. Specifically, SSA centralized authority for redesign efforts by creating an Executive Committee for Disability Redesign, chaired by the principal deputy commissioner. Such high-level oversight is critical, given the organizational complexity of the disability claims process. It is also consistent with government and industry best practices, which provide that the individual in charge of a reengineering effort be responsible for the entire process and its performance. Strengthening executive oversight has already had a positive effect on the progress of redesign. For example, by promoting timely processing of cases for the FPM test, the Executive Committee has helped to expedite analysis of test results. SSA officials told us they believe that Executive Committee oversight has helped provide new momentum by working to ensure that activities stay on schedule and that critical policy decisions receive sufficient and early high-level attention. SSA is also applying some valuable lessons learned from conducting the AO test. Because the AO test results were inconclusive as a result of problems with the design and management of the test, SSA has taken greater care with the design and management of subsequent tests. For example, SSA enlisted the services of an independent consulting firm to review its proposals for both the FPM and DCM tests. Also, for the DCM test, which is similar to the AO test in that it is lengthy and involves testing the efficacy of a new position, SSA is taking steps to ensure that test participants receive adequate training and support and that the testing environment remains stable.
SSA officials told us they have also learned a great deal about balancing the need for open communication with stakeholders with the need to keep initiatives on track and make tough and sometimes unpopular decisions. According to reengineering experts, effective stakeholder communication is essential. Its importance was recently underscored by a private sector survey of 102 private and government organizations, which found that sending inconsistent signals and communicating too little with stakeholders were among the five most serious mistakes top management sponsors made during a major change. However, communicating with stakeholders is different from obtaining consensus on proposed changes, a practice that can sometimes lead to management paralysis. The proposed changes in SSA’s redesign plan affect most aspects of the disability claims process, and it is unlikely that the agency can achieve across-the-board support from all parties affected by the change. Early in its redesign effort, SSA leadership took extraordinary steps to reach out to key stakeholders to build acceptance and consensus for its redesign initiatives. SSA officials told us they now understand that they cannot expect to satisfy all stakeholders and believe they lost valuable momentum early in the redesign effort trying to do so. Agency officials have continued with their efforts to communicate with various stakeholder groups, however, and our review showed that, although stakeholders do not unanimously support all of SSA’s redesign initiatives, many of the stakeholders we contacted were satisfied with the level of communication from SSA. Nevertheless, these positive efforts and lessons learned may prove insufficient for achieving appreciable progress. Even with strengthened executive oversight since February 1997, milestones have continued to slip. Compelling test results and improvements to the disability claims process have also proven elusive. With so much remaining to be accomplished, and many barriers to overcome, SSA will need to take additional steps to keep its redesign effort on track and achieve further improvements to the disability claims process. SSA is not the only government agency that has had trouble reengineering its operations. According to reengineering experts, many federal, state, and local agencies have failed in their reengineering efforts. One reason for this high failure rate is the unique environment of the government workplace, which adds considerable complexity. For example, the flexibility to reengineer a process is often constrained by laws or regulations that require that processes follow certain procedures—such as the requirement, in some cases, that a physician participate in disability cases involving children or mental impairments. Also, government agencies, unlike their private sector counterparts, cannot choose their customers and stakeholders. Agencies must serve multiple customers and stakeholders, who often have competing interests. In addition, following government procedures, such as drafting and issuing new regulations and complying with civil service rules, makes it difficult to implement changes at the quick pace often considered vital for successful reengineering efforts. Finally, public agencies must also cope with frequent leadership turnover and changes in the public policy agenda.
For example, as discussed in chapter 2, SSA faced several policy changes in recent years, such as the need to redetermine the eligibility of thousands of children receiving SSI benefits, at the same time that the agency was trying to conduct large tests of process changes. According to experts in the field, reengineering requires sharp focus and enormous discipline, and organizations are more likely to succeed if they concentrate their efforts on a small number of initiatives at any given time. One way of focusing a reengineering effort is by prioritizing process improvement objectives and identifying those initiatives most likely to achieve those objectives. Basic reengineering precepts suggest that an agency should decide which process or major subprocess should have highest priority for agency action. This decision should be based on selecting process changes that (1) have strong links to the agency’s mission and would have a high impact on customers, (2) are likely to provide a large return on invested resources, (3) enjoy a strong consensus, (4) are feasible given the available resources and infrastructure, or (5) can be achieved within a short period of time in order to gain experience in reengineering. SSA’s own experience strongly underscores the need for focus. As discussed in chapter 1, SSA realized early on that it could not effectively manage the large number of initiatives in its original redesign plan within established time frames; scaling back the plan in February 1997 was a step in the right direction. SSA’s experience was not unlike that of others. Early reengineering theory called for large systemwide changes over a short period of time, but experts now suggest that achieving significant change takes longer and costs more than generally believed several years ago. However, SSA has continued to miss milestones and, with much remaining to be accomplished, additional focus may be necessary to achieve significant and concrete improvements to the process. As we reported in December 1996, process unification, quality assurance, and enhanced information systems are among those initiatives most crucial to producing significant improvements in the process. Other initiatives could be explored on a limited basis or undertaken at a later date once progress was ensured for critical initiatives or when additional resources became available. Concern over the scope of SSA’s plan and the resources used for redesign activities was similarly expressed by the independent, bipartisan Social Security Advisory Board in an August 1998 report. The Board concluded that the costs of the redesign project were significant and could not be sustained indefinitely. The cost of SSA’s redesign efforts is difficult to calculate. According to SSA officials, the agency spent approximately $16.7 million from 1995 through 1998 on redesign activities—mostly on travel associated with relocating test participants around the country, but also on training, rent, supplies, and equipment. In addition to these expenditures, the Advisory Board pointed out that the redesign effort consumed the time and attention of a considerable number of the most experienced and knowledgeable staff within both SSA and the DDSs, diverting them from the routine disability claims process.
In the context of constrained administrative resources, the Board advised that resources that had been diverted be returned as soon as possible to their usual functions so that SSA and the state agencies could fulfill their basic program responsibilities. Prioritizing its key redesign objectives might help SSA to better focus its efforts. As discussed in chapter 1, SSA’s redesign effort currently has five key objectives: allowing claims that should be allowed at the earliest possible level and improving efficiency, speed, customer satisfaction, and employee satisfaction. However, these objectives can work at cross purposes; an improvement in one area can result in a deterioration of performance in another. For example, focusing on efforts that speed up the process and improve efficiency might reduce the amount of attention given to developing evidence and documenting decisions. This, in turn, might result in incorrect allowances (or denials) earlier in the process. On the other hand, focusing on the objective of making the right decision at the earliest possible level could add time at the initial level, which might result in more accurate initial determinations and fewer appeals, which in turn might improve the speed and efficiency of the overall process. SSA officials told us that if they were to begin again, they would consider dividing the redesign effort into smaller, more manageable segments. This would be one way for the agency to better focus on specific initiatives and perhaps achieve more visible near-term gains. In fact, SSA may end up taking this approach during the implementation phase by rolling out small segments of the redesign plan one at a time. Reengineering best practices, as well as SSA’s own experience to date, suggest that modifications to SSA’s testing approach could help the agency more efficiently demonstrate the likely results of proposed changes. Conducting smaller and more integrated tests could free up resources to address critical initiatives while effectively demonstrating the efficacy of interrelated changes. In addition, some of SSA’s redesign initiatives face considerable barriers to implementation because they represent significant change, affect jobs, or depend on other changes or supports to be effective. SSA could more effectively explore the viability of such initiatives—as well as of alternative approaches—on a small scale or wait until essential supports have been developed before investing significant resources in testing these initiatives. Many reengineering experts believe that entities undergoing reengineering, such as SSA, should conduct small tests of proposed initiatives. Reengineering best practices caution against moving directly from concept to large-scale testing or implementation and suggest that methods such as limited pilot tests and prototyping are cost-effective means for evaluating the effectiveness and workability of proposed changes. As we recommended in our 1996 report on SSA’s reengineering effort, SSA would benefit from first testing initiatives on a smaller, more manageable scale at only a few sites across the country. SSA’s own experience with the AO and SDM tests confirmed that small-scale testing is prudent. Significant resources and time were devoted to large-scale tests of the AO and SDM, only to discover that their efficacy in a stand-alone environment was marginal.
The AO test in particular—which lacked good design, disciplined management, and key supports—proved costly and failed to validate the AO concept. SSA moved quickly from concept to large-scale testing because it wanted to definitively demonstrate the positive impact of these proposed changes so they could be immediately implemented. But test results did not support immediate implementation. Instead, the outcome has been continual testing that has drained agency resources and energies. In hindsight, SSA could have discovered the marginal efficacy of the stand-alone initiatives with a much smaller commitment of resources. As noted in chapter 2, SSA officials continue to believe that the agency must conduct tests involving a large number of cases. Given SSA’s desire to collect a sufficiently large amount of data and move quickly to change the claims process, SSA officials believe their approach for the AO and SDM tests was correct and that, had the test results been positive, the approach would have been vindicated. However, we believe that SSA took a costly risk that may have eroded support for the initiatives. SSA officials have said that in the future they would consider reducing the number of sites used in tests by concentrating test sites in a few states or within one SSA region to permit more efficient use of resources and easier test management and oversight. SSA’s current plans involve testing other initiatives, such as the DCM, on a large scale. The DCM test currently under way has a start-up cost of $20 million and involves 210 federal and state participants at 33 sites across the country. Given the uncertainties inherent in this new position, as well as SSA’s past experience with large-scale testing of the AO and SDM initiatives, SSA runs the risk of learning on a large and expensive scale that the DCM does not meet the agency’s redesign objectives. It would be more cost-effective to test this initiative on a small scale and move on to a large-scale test only if initial results suggested the potential for significant gains. In the event of unforeseen difficulties or poor test results, it would be easier and less costly to make any necessary adjustments to a small-scale test than to a larger one. According to a reengineering expert we consulted, stand-alone testing of interrelated initiatives is inefficient and unnecessary because it provides no synergy or learning across the whole process. In addition, as shown by reengineering research, effectively evaluating the overall impact of a redesign effort requires studying the entire business unit or process being reengineered. In fact, we recommended in our 1996 report that SSA combine key initiatives into an integrated process and test that process at a few sites. SSA’s experience confirms the importance of integrated testing. Projected benefits from reengineering were predicated on the assumption that most process changes and supporting initiatives would be operational simultaneously. However, as discussed in chapter 2, SSA has been testing initiatives independently and without the benefit of some key supports. SSA officials maintain that they have learned a great deal from the large-scale, stand-alone tests, such as how to better run a test. They also maintain that the stand-alone tests provided a baseline of information; for example, testing the SDM in a stand-alone environment provided data to compare with the SDM’s performance in an integrated environment.
SSA officials also believe that the tests contributed to improved communication among operational units and opened the door for important cultural changes needed to support redesign. Although SSA may have learned from its stand-alone tests, these tests did not demonstrate dramatic improvements to the process or provide valuable insight on how the AO and SDM would ultimately work in concert with other initiatives. For example, only when SSA began the FPM test did it become apparent that the SDM might have performed differently if it had been tested in an integrated environment. Rather than conducting large-scale testing of individual initiatives, such as the SDM and AO, moving directly into integrated testing, even on a small scale, or waiting until key supports were in place, might have been more efficient. SSA has the opportunity to apply these lessons in future tests of initiatives. For example, the agency recently began testing expanded rationales, an effort designed to more fully document, at the initial level, the reasons a claim has been denied. These tests have been conducted outside of the FPM test, even though expanded rationales are closely related to other process changes in the FPM. Officials are now taking steps to incorporate this feature into the FPM. SSA will be conducting small pilot tests in four states to gather information regarding the impact of expanded rationales when they are added to other FPM process changes. Folding the expanded rationales test into the FPM test will provide more valuable information on the efficacy of this change in the environment in which it was intended to be implemented. Similarly, a new simplified disability decision methodology and computer software support are considered essential to the success of the DCM position. However, since these important support initiatives are not scheduled to be available in time to meet the current schedule for testing the DCM, it is not clear what or how much SSA will learn from this test about the viability or effectiveness of the DCM in a redesigned environment. Reengineering best practices suggest that, before selecting a specific process change for implementation, an organization should develop several possible alternatives to the existing work process and consider the costs and benefits of each. These alternatives should then be explored in order to (1) convincingly demonstrate the potential of each option to achieve the desired performance goals; (2) fully describe the types of technical and organizational changes necessary to support each goal; and (3) if possible, test key assumptions. Also, as part of a cost-benefit analysis, an agency should take into consideration any barriers to and risks in implementing each alternative. SSA might have avoided some of the problems it is now experiencing with the AO initiative, which has engendered strong opposition, had alternative work subprocesses also been explored on a small scale before large-scale AO testing. Alternatives to the AO initiative for improving the appellate process exist, such as SSA’s temporary program to permit senior staff attorneys in hearing offices to allow benefits in clear-cut cases. However, SSA did not adequately assess the merits of the alternatives by obtaining concrete and comparable data on their relative costs, benefits, and risks. After 3 years of testing, SSA must decide whether to abandon the AO initiative, begin seriously exploring other solutions to pressing problems at the appellate level, or both.
Compounding matters, opponents of the AO concept have pointed to its marginal test results to support their own favored, albeit untested, alternatives. SSA officials agreed that they did not fully prepare themselves for the possibility that their proposed changes might not work and thus did not adequately pursue alternatives earlier in the redesign process or develop contingency plans. SSA may still be able to apply this important lesson in a remaining area by more fully exploring feasible alternatives to the DCM initiative. As with the AO, the DCM initiative faces strong opposition and perhaps even more barriers to full implementation. According to one high-level SSA official, test results would have to be very compelling to support implementation of the DCM initiative. Nevertheless, SSA has begun a 3-year large-scale test of the DCM without adequately exploring feasible alternatives. For example, SSA could have—as we recommended in our 1996 report—systematically tested alternatives such as sequential interviewing to compare their relative effects on the process before beginning the large-scale DCM test. Instead of testing this concept, SSA allowed the individual operating units to decide whether they would adopt this approach. SSA officials believe few, if any, units are actively pursuing it. There is still time for SSA to explore such alternatives to the DCM while the agency conducts its protracted test. As of October 1998, SSA was considering widespread implementation of several changes to the disability claims process on the basis of some promising results from its FPM test. While SSA has encountered considerable challenges in testing its initiatives, the risk of further difficulty during implementation is very high. The experience of other public and private organizations that have attempted business process reengineering strongly indicates that implementing changes to a process is more difficult than developing or testing them. Moreover, it is possible that certain process changes may not perform as expected outside the test environment. SSA, therefore, needs adequate performance goals and measures for key initiatives and objectives to monitor and assess the impact of any changes made to the process. SSA also needs an adequate quality assurance process in place to ensure the quality and accuracy of decisions. Experience has shown that implementation of a new process is extremely difficult and, compared with development and testing, is the most failure-prone phase of a reengineering effort. During implementation, an organization’s natural resistance to change must be overcome. According to a reengineering expert we consulted, many reengineering efforts fail because too little time and effort are allotted to implementation. The numerous issues that need to be considered and planned for include identifying all tasks, time frames, and needed resources for an orderly transition; structuring the rollout of the new process in a way reasonably suited to the nature of the process and the work and structure of the organization; assigning roles and responsibilities for implementation to the individuals who will do the work of the new process; providing a means for collecting and sharing information on implementation problems and solutions; and providing for close monitoring during implementation. SSA’s implementation plans issued in 1994 and 1997 do not address many of the above considerations.
For example, the plans do not address the key roles, responsibilities, and reporting relationships required by the new process. In our discussions with stakeholders, we found increasing anxiety that key organizational decisions, such as work space arrangements, which unit would be responsible for managing the proposed AO positions, and other infrastructure issues, had not yet been made. Nor do the implementation plans address how SSA will monitor the process to ensure successful implementation and optimum improvements. Recognizing that its current implementation plan is lacking in many specifics, SSA plans to develop more detailed implementation plans as key decisions are made. In order to effectively monitor the results of its process changes during implementation, SSA will need adequate performance goals and measures. Researchers for the Harvard Business Review found that failure to measure a new process can be particularly damaging to a reengineering effort because, without a comprehensive measurement system that can track the new process’s performance, it is impossible to tell if implementation is succeeding or failing. A National Academy of Public Administration report similarly found that measuring and tracking performance continuously was one of six critical success factors in reengineering in the government sector. The report cites performance management as a key characteristic of successful organizations because it offers the only way for them to assess whether or not reengineering is achieving the results they desire. SSA currently collects a large amount of data related to the disability claims process, but these data could be improved or better tracked for the purpose of determining progress toward redesign goals. Key indicators that SSA uses or could use to measure progress are fragmented, incomplete, or entirely missing. For example, for its agencywide performance plan, SSA is using separate performance measures for disability claims processing times at the initial and appeal levels. This fragmented approach ignores the interrelationship between the two levels; that is, reducing processing time at the initial level might result in premature or poor determinations; cause more cases to be appealed; and, thus, cause overall processing times to increase. Conversely, implementing steps that result in a longer initial processing time but also permit earlier correct allowances could shorten the overall average processing time by reducing appeals. In addition, although SSA has said that process unification is the “cornerstone” in the foundation of the redesigned disability claims process, SSA’s performance plan does not contain a goal for this important initiative; rather, SSA continues to measure performance in a disjointed manner. SSA is collecting some appropriate data for its tests but still needs to make sure they are linked to the agency’s strategic goals and integrated into the agencywide performance measurement system. As stressed by the Chief Financial Officers Council, an organization composed of representatives of federal departments and agencies, government entities should integrate all reform activities, including reengineering, into the framework of the Government Performance and Results Act (commonly known as the Results Act). According to the Council, one of the reasons this is important is to ensure consistency and reduce duplication of effort.
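The interrelationship between the two processing-time measures described above can be illustrated with a simple model. The figures below are hypothetical, not SSA data; the sketch assumes that the average total time per claim is the initial-level time plus the appeal rate multiplied by the appellate time, and it shows how a change that looks like an improvement on the initial-level measure alone can worsen the overall average once induced appeals are counted.

    # Illustrative only: hypothetical processing times and appeal rates.
    def avg_total_days(initial_days, appeal_rate, appeal_days):
        """Average total processing time per claim across both levels."""
        return initial_days + appeal_rate * appeal_days

    baseline = avg_total_days(initial_days=90, appeal_rate=0.25,
                              appeal_days=370)
    # Faster initial processing looks better in isolation, but if weaker
    # case development drives the appeal rate up, the overall average rises.
    faster_initial = avg_total_days(initial_days=70, appeal_rate=0.35,
                                    appeal_days=370)

    print(f"Baseline overall average:     {baseline:.0f} days")        # ~182
    print(f"Faster initial, more appeals: {faster_initial:.0f} days")  # ~200

A measure that tracked only the initial level would record a 20-day improvement in this scenario, while the overall average actually deteriorated; a comprehensive, cross-process measure would capture the difference.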
Our review of SSA’s fiscal year 1999 performance plan pointed out that SSA’s reengineering effort is not fully integrated into its Annual Performance Plan. Although the Plan noted SSA’s efforts to improve the disability claims process, it did not include any useful discussion of SSA’s major initiative to completely redesign its disability claims process, nor did it indicate whether changes or improvements expected to result from this effort were factored into the performance measures or goals. SSA cannot be certain that its initiatives will perform the same under “real world” conditions as they did in an artificial test environment, and the agency will need to take additional steps to guard against the possibility of unintended results. For example, SSA’s test of the SDM included a quality review of all cases decided under the test, whereas currently, far fewer cases, most of which involve allowance determinations, are reviewed. In the absence of this 100-percent review, the SDM might perform differently, which could have a significant effect on the accuracy of determinations, the number of allowances and appeals, appellate workloads, and overall benefit outlays. When test results are marginal, there is a greater chance that expected process improvements might not materialize. SSA needs to be sure that, when implementing a change in the process such as the SDM, an adequate quality assurance process is in place to ensure that benefit eligibility decisions are accurate. Accuracy is important because incorrect decisions can result in wrongful benefit payments, unnecessary appeals, or hardship to the claimants caused by incorrect denials. Under its quality assurance initiative, SSA is seeking to build quality into the decision-making process using tools such as training, mentoring, peer review, and feedback. SSA has been exploring approaches to in-line quality assurance as part of its SDM phase II test, allowing individual test sites to set up their own processes. During implementation of the SDM, and in the absence of a uniform approach, SSA will need to take steps to ensure that individual state processes are sufficient to maintain quality. Ultimately, SSA will need to establish a final quality assurance process that will both identify systemic problems with case decisions and measure the success of SSA’s efforts to build quality into the process. As discussed in chapter 1, current reviews of DDS determinations and ALJ decisions are conducted in isolation from each other. SSA has recently instituted a review of ALJ decisions that will help identify inconsistencies in decision-making between the two levels. However, SSA has yet to develop a single quality review mechanism applicable to both levels. SSA has had particular difficulty getting its initial and appellate decision-making levels to agree on a consistent quality assurance approach that cuts across all phases of decision-making, including agreement on what constitutes a correct decision. More than 4 years after releasing its original redesign blueprint, SSA is still struggling to make significant improvements to its disability claims process. While the agency has made some progress with process unification, SSA has missed many of its redesign milestones, and the results of early tests did not support implementation of specific proposed changes.
The agency is still conducting a number of tests, including yet another large, nonintegrated test at numerous sites. Also, top agency officials would like to begin making some implementation decisions about new decisionmaker positions and other proposed changes. With so much left to do, SSA still has a window of opportunity, which will not be open for long, to apply some lessons learned to help the agency achieve important improvements to its disability claims process. (SSA is no longer experiencing a dramatic growth in applications for disability benefits, but the agency can expect applications to increase again as the baby boom generation ages or if the economy suffers a downturn.) SSA’s ability to learn from past experience will be an important ingredient in the success of future efforts. For example, the size of SSA’s tests and the scope of redesign initiatives slowed SSA’s progress under its original 1994 redesign plan. When the agency revised its redesign plan in 1997 to include fewer initiatives and increased executive oversight, similar problems continued to limit progress. Even this revised plan required the agency to move forward on a number of varied fronts simultaneously, and SSA continued to miss key milestones. Again, the agency may have underestimated the challenges of managing stakeholder input and keeping such an ambitious effort on course. Strong project oversight should continue, but it will probably not be enough to ensure timely progress. Therefore, SSA needs to further focus its efforts by prioritizing its objectives and concentrating its resources on the efforts most likely to achieve those objectives. Such efforts should include those that help to improve consistency in decision-making, ensure accurate results, and achieve large efficiencies through the use of technology. Past experience has shown that a large-scale test of an individual initiative, while providing an abundance of information on how well that initiative performs in isolation from other changes, does not clearly demonstrate how the initiative would function in a redesigned process and is not the most efficient and effective use of resources. Moreover, while SSA hoped that this testing approach would help gain the support of key stakeholders likely to be affected by the changes, it has not done so. To help free up resources and effectively demonstrate the efficacy of proposed changes, SSA should conduct relatively small tests that integrate several of the proposed changes to the process. Smaller tests will allow SSA to more efficiently identify promising concepts before moving to larger-scale testing or implementation. Integrated testing—testing related concepts together and with key supports in place—will help SSA to demonstrate whether proposed changes will perform as intended under the new process. SSA’s experience with the AO test has also shown the risks inherent in devoting considerable time and resources to a single unproven approach or change. Results of AO tests have been consistently disappointing, and SSA now finds itself facing the same long-standing problems the AO was intended to remedy, without a tested alternative solution in hand. Therefore, in the future, before investing significant time and resources on any initiative, SSA should explore feasible alternatives for changing the process on a small scale. For example, as we have recommended before, SSA should explore sequential interviewing as a feasible and less risky alternative to the controversial DCM position.
Exploring alternatives and conducting small, integrated tests of related initiatives before making large investments are sound reengineering and management practices, the wisdom of which has been underscored by SSA’s experience to date. Since other organizations have found implementation of process changes to be the most failure-prone phase of a redesign effort, SSA is also likely to encounter numerous pitfalls as it attempts to effect process changes in such a complex environment. As a result, it is especially important for SSA to take action to closely monitor the results of changes it makes to the process and watch for early warnings of problems. It is possible that process changes may not operate as expected outside the test environment. It is also possible that some stakeholders who do not support specific changes may act to undermine their success. If process changes do not operate as expected, the results could include inaccurate decisions, unanticipated program costs, increased appellate workloads, and a lack of improvement in service to the claimant. Therefore, SSA should immediately establish a comprehensive set of performance goals and measures—a set that cuts across the whole process and is also linked to SSA’s overall strategic and performance plans—in order to assess and monitor the results of changes to the process. Finally, SSA’s tests of process changes have provided only limited assurance that these changes would not degrade the quality of disability decisions. Specifically, SSA’s tests included artificial steps, such as a quality review of all test cases, that are not likely to be used outside the test environment. Quality is perhaps the most critical aspect of the decision-making process because each inappropriate disability decision does a disservice to claimants, taxpayers, or both. A wrongful denial burdens the claimant and could result in unnecessary administrative costs if the claimant appeals the decision, whereas a wrongful allowance results in a continuous stream of inappropriate benefit payments. Therefore, as changes are made to the process, SSA should ensure that it has a quality assurance process in place that both promotes and monitors the quality of disability decisions. As SSA proceeds with further exploration and testing of redesign initiatives and considers implementation options, it should take the following steps to improve the likelihood of making key improvements to the disability claims process: further focus resources on those initiatives, such as process unification, quality assurance, and computer support systems, that offer the greatest potential for achieving SSA’s most critical redesign objectives; test promising concepts at a few sites in an integrated fashion; establish key supports and explore feasible alternatives before committing significant resources toward the testing of specific initiatives, such as the DCM; develop a comprehensive set of performance goals and measures to assess and monitor changes in the disability claims process; and ensure that quality assurance processes are in place that both monitor and promote the quality of disability decisions. SSA generally agreed with our report’s observations and the thrust of our recommendations. Specifically, SSA agreed that the tests conducted took longer than anticipated and did not result in the budgetary and operational efficiencies originally hoped for in the 1994 redesign plan.
SSA also agreed that it should focus on those areas that will make the greatest contributions to improving the quality and timeliness of decisions. As we have recommended, SSA intends to pursue additional process unification and quality assurance activities. The agency also indicated it will pursue elements of the FPM that will significantly improve customer service. Finally, SSA agreed that systems technology must continue to be an important focus of resources. SSA took issue with our critique of its testing strategy. SSA believes that stand-alone and FPM testing were both needed to gather data and experience that are essential for making responsible decisions. Moreover, SSA believes that testing at fewer sites would not have provided the required information or allowed the agency to complete the tests in less time. While we understand the agency’s desire to conduct large tests in order to obtain statistically valid results, we continue to believe that exploring the efficacy of initiatives initially on a smaller scale before moving to large-scale testing or implementation would result in a better use of resources. Also, because the various initiatives are interdependent, we believe that integrated testing would provide more complete and useful information on how the initiatives will perform in the new process. SSA also stated that its current approach to testing the DCM is consistent with our concerns and recommendations, in that it recognizes and builds upon what SSA has learned from previous testing experiences. However, we still have reservations about SSA’s current approach to testing the DCM. First, SSA continues to test this new position on a rather large scale without having explored the position’s potential efficacy through prototyping or limited pilot testing. Second, SSA is testing this initiative without the benefit of the key supports (such as a new simplified disability decision methodology and computer software support) upon which its efficacy relies. Finally, SSA is moving forward with the DCM test without having explored the feasibility of alternative approaches. While agreeing to focus on certain key initiatives, SSA believes that changes to the decision-making process should precede major computer system changes to enable technological developments to be crafted in the manner most supportive of the new process. Similarly, SSA stated that changes to the decision-making process should precede the development of a new quality assurance process, the purpose of which will be to evaluate the quality of the new process. However, we believe that SSA can make substantial progress toward developing these critical supports before finalizing the process changes. For example, certain key aspects of SSA’s quality assurance initiative—such as ensuring the consistent application of policy across all levels of the process and developing agreement on what constitutes a correct decision—need not rely on final changes to the process of making a decision. Finally, SSA pointed out, and we agree, that the agency’s monitoring and evaluation systems currently capture a significant amount of data related to the disability claims process. However, as our report indicates, these data are not always translated into comprehensive and complete performance goals and measures that look at the efficiency and effectiveness of the process as a whole. 
As we asserted in our report, SSA’s use of separate performance measures for disability claims processing times at the initial and appeals levels in its agencywide performance plan ignores the interrelationship between the two levels, thereby reducing the usefulness of the performance measures. We also noted the lack of integration of SSA’s redesign objectives with those found in the agencywide performance plan. We believe SSA can do more to make better use of the large amount of data it collects through a carefully crafted set of performance goals and measures.
Pursuant to a congressional request, GAO: (1) reviewed the Social Security Administration's (SSA) efforts to redesign its disability claims process; and (2) identified actions that SSA could take to better ensure future progress. GAO noted that: (1) even with its scaled-back plan, SSA has been unable to keep its redesign activities on schedule and to demonstrate that its proposed changes will significantly improve the claims process; (2) the inability to keep on schedule was caused, in part, by SSA's overly ambitious plan and its strategy for testing proposed changes; (3) other problems with the design of its tests weakened SSA's ability to predict how the initiatives would operate if implemented; (4) the problems that led to SSA's redesign effort persist, and as SSA continues its efforts to improve the disability claims process the agency has an opportunity to learn from its experience and the best practices of other organizations with reengineering experience; (5) SSA could improve its chances of making future progress by further scaling back its near-term efforts to include only initiatives that are critical to improving the disability claims process; (6) in addition, by testing related process changes together, rather than on a stand-alone basis, and at a smaller number of sites, SSA could free up resources while still obtaining valuable data; (7) SSA should also explore feasible alternatives before committing significant resources toward the testing of specific initiatives; (8) because a process change might function differently under actual operational conditions than it did in a test environment, SSA will need to revise its performance measures to better monitor and more fully assess the impact of changes on the process; and (9) moreover, SSA will need to ensure that an adequate quality assurance process is in place so that any changes SSA makes to the process do not compromise the quality of decisions.
The 89 recommendations in the panel report are largely consistent with our past work and recommendations. I will now discuss each of the seven areas the panel reviewed, the general thrust of the panel’s recommendations, and our views on them. The first area the panel reviewed was commercial practices. According to the panel, the bedrock principle of commercial acquisition is competition. The panel found that defining requirements is key to achieving the benefits of competition because procurements with clear requirements are far more likely to produce competitive, fixed-price offers that meet customer needs. Further, the panel found that commercial organizations invest the time and resources necessary to understand and define their requirements. They use multidisciplinary teams to plan their procurements, conduct competitions for award, and monitor contract performance. Commercial organizations rely on well-defined requirements and competitive awards to reduce prices and obtain innovative, high-quality goods and services. Hence, practices that enhance and encourage competition were the basis of the panel’s recommendations. The panel recommended, among other things, that the requirements process be improved and competitive procedures be strengthened. Our work is generally consistent with the panel’s recommendations, and we have issued numerous products that address the importance of a robust requirements definition process and the need for competition. For example, in January 2007, we testified that poorly defined or broadly described requirements have contributed to undesired services acquisition outcomes. To produce desired outcomes within available funding and required time frames, our work has shown that DOD and its contractors need to clearly understand acquisition objectives and how they translate into the contract’s terms and conditions. The absence of well-defined requirements and clearly understood objectives complicates efforts to hold DOD and contractors accountable for poor acquisition outcomes. This has been a long-standing issue. Regarding competition, we have stated that it is a fundamental principle underlying the federal acquisition process. Nevertheless, we have reported numerous times on the lack of competition in DOD’s acquisition of goods and services. For example, we noted in April 2006 that DOD awarded contracts for security guard services supporting 57 domestic bases, 46 of which were let on an authorized sole-source basis. DOD awarded the sole-source contracts despite recognizing that it was paying about 25 percent more than it had previously paid when the contracts were awarded competitively. The second area the panel reviewed was improving the implementation of performance-based acquisitions. The panel reported that performance-based acquisition (PBA) has not been fully implemented in the federal government even though OMB has encouraged greater use of it—setting a general goal in 2001 of making performance-based contracts 40 percent or more of all eligible service acquisitions for fiscal year 2006. The panel reported that agencies were not clearly defining requirements, not preparing adequate statements of work, not identifying meaningful quality measures and effective incentives, and not effectively managing the contract. The panel noted that a cultural emphasis on “getting to award” still exists within the government, an emphasis that precludes taking the time to clarify agency needs and adequately define requirements.
The panel recommended that OFPP issue more explicit implementation guidance and create a PBA “Opportunity Assessment” tool to help agencies identify when they should consider using PBA contracts. Like the panel, we have found that agencies have faced a number of issues when using PBA contracts. For example, we reported in April 2003 that there was inadequate guidance and training, a weak internal control environment, and limited performance measures and data that agencies could use to make informed decisions on when to use PBA. We have made recommendations similar to the panel’s. For example, we have recommended that the Administrator of OFPP work with agencies to periodically evaluate how well agencies understand PBA and how they can apply it to services that are widely available in the commercial sector, particularly more unique and complex services. The panel’s concern that agencies are not properly managing PBA contracts is also consistent with our work on surveillance of service contracts. In a March 2005 report, we found that proper surveillance of service contracts, including PBAs, was not being conducted, leaving DOD at risk of being unable to identify and correct poor contractor performance. Accordingly, we recommended that the Secretary of Defense ensure the proper training of personnel in surveillance and their assignment to contracts no later than the date of contract award. We further recommended the development of practices to help ensure accountability for personnel carrying out surveillance responsibilities. We have also found that some agencies have attempted to apply PBA to complex and risky acquisitions, a fact that underscores the need to maintain strong government surveillance to mitigate risks. The third area the panel reviewed was interagency contracting. The panel found that reliance on interagency contracts is significant. According to the panel report, 40 percent of the total 2004 obligations, or $142 billion, was obligated through the use of interagency contracts. The panel also found that a significant reason for the increased use of these contracts has been reductions in the acquisition workforce accompanied by increased workloads and pressures to reduce procurement lead times. Accordingly, the panel made numerous recommendations to improve the use of interagency contracts with the intent of enhancing competition, lowering prices, improving the expertise of the acquisition workforce, and improving guidance for choosing the most appropriate interagency contract for procurements. Our work is generally consistent with the panel’s recommendations on interagency contracting. In fact, 15 of our reports on interagency contracting were cited in the panel report. These included numerous recommendations that are consistent with the panel’s recommendations. Our reports recognize that interagency contracts can provide the advantages of timeliness and efficiency by leveraging the government’s buying power and providing a simplified and expedited method of procurement. However, our prior work has found that agencies involved in the interagency contracting process have not always obtained required competition, evaluated contracting alternatives, or conducted adequate oversight. A number of factors render the use of interagency contracts high risk; these factors include their rapid growth in popularity, their use by some agencies that have limited expertise with this contracting method, and the number of parties that might be involved. 
Taken collectively, these factors contribute to a much more complex procurement environment—one in which accountability is not always clearly established. In 2005, because we found that interagency contracts can pose risks if they are not properly managed, we designated the management of interagency contracting a governmentwide high-risk area. The fourth area the panel reviewed was small business. The panel made recommendations to change the guidance to contracting officers for awarding contracts to small businesses. These recommendations are intended to improve the policies and, hence, address the socioeconomic benefits derived from acquiring services from small businesses. OFPP has taken the position that all but one of the recommendations require legislation to implement. While our work on small business has addressed a number of policy issues, we have not made recommendations for statutory and regulatory changes when arguments for such changes are based on value judgments, such as those related to setting small business contracting goals. The fifth area the panel reviewed was the federal acquisition workforce. The panel recognized a significant mismatch between the demands placed on the acquisition workforce and the personnel and skills available within the workforce to meet those demands. The panel found, for example, that demands on the federal acquisition workforce have grown substantially while, at the same time, the complexity of the federal acquisition system as a whole has increased. Accordingly, the panel made a number of recommendations designed to define, assess, train, and collect data on the acquisition workforce and to recruit talented entry level personnel and retain its senior workforce. Our work is generally consistent with the panel’s findings and recommendations on the acquisition workforce. On the basis of observations made by acquisition experts from the federal government, private sector, and academia, we reported in October 2006 that agency leaders have not recognized or elevated the importance of the acquisition profession within their organizations. These experts further noted that a strategic approach had not been taken across government or within agencies to focus on workforce challenges, such as creating a positive image essential to successfully recruit and retain a new generation of talented acquisition professionals. In September 2006, we testified that while the amount, nature, and complexity of contract activity have increased, DOD’s acquisition workforce, the largest component of the government’s acquisition workforce, has remained relatively unchanged in size and faces certain skill gaps and serious succession planning challenges. Further, we testified that DOD’s acquisition workforce must have the right skills and capabilities if it is to effectively implement best practices and properly manage the goods and services it buys. In July 2006, we reported that in the ever-changing DOD contracting environment, the acquisition workforce must be able to rapidly adapt to increasing workloads while continuing to improve its knowledge of market conditions, industry trends, and the technical details of the goods and services it procures. Moreover, we noted that effective workforce skills were essential for ensuring that DOD receives fair and reasonable prices for the goods and services it buys and identified a number of conditions that increased DOD’s vulnerabilities to contracting waste and abuse.
The sixth area the panel reviewed was contractors supporting the federal government. The panel reported that, in some cases, contractors are solely or predominantly responsible for the performance of mission-critical functions that were traditionally performed by government employees, such as acquisition program management and procurement, policy analysis, and quality assurance. Further, the panel noted that this development has created issues with respect to the proper roles of, and relationships between, federal employees and contractor employees in the “blended” workforce. The panel stated that although federal law prohibits contracting for activities and functions that are inherently governmental, uncertainty about the proper scope and application of this term has led to confusion, particularly with respect to service contracting outside the scope of OMB’s Circular A-76, which provides guidance on competing work for commercial activities via public-private competition. Moreover, according to the panel, as the federal workforce shrinks, there is a need to ensure that agencies have sufficient in-house expertise and experience to perform inherently governmental functions by being in a position to make critical decisions on policy and program management issues and to manage the performance of contractors. The panel recommended (1) that the FAR Council consider developing a standard organizational conflict-of-interest clause for solicitations and contracts that sets forth a contractor’s responsibility concerning its employees and those of its subcontractors, partners, and any other affiliated organization or individual; (2) that OFPP update the principles for agencies to apply in determining which functions government employees must perform; and (3) that OFPP ensure that the functions identified as those that must be performed by government employees are adequately staffed. On the basis of our work, we have concerns similar to those expressed by the panel, and our work is generally consistent with the panel’s recommendations on the appropriate role of contractors supporting the federal acquisition workforce. We have testified and reported on the issues associated with an unclear definition of what constitutes inherently governmental functions, inadequate government experience and expertise for overseeing contractor performance, and organizational conflicts of interest related to contractor responsibilities. We found that there is a need for paying greater attention to the type of functions and activities that could be contracted out and those that should not, for reviewing the current independence and conflict-of-interest rules relating to contractors, and for identifying the factors that prompt the government to use contractors in circumstances where the proper choice might be the use of government employees or military personnel. In our recent work at DHS, we found that more than half of the 117 statements of work we reviewed provided for services that closely supported the performance of inherently governmental functions. We made recommendations to DHS to improve control and accountability for decisions resulting in buying services that closely support inherently governmental functions. Accordingly, our work is consistent with panel recommendations to update the principles for agencies to apply in determining which functions government employees must perform and to ensure that the functions identified as those that must be performed by government employees are adequately staffed.
Finally, the seventh and last area the panel reviewed was federal procurement data. The Federal Procurement Data System-Next Generation (FPDS-NG) is the federal government’s primary central database for capturing information on federal procurement actions. Congress, Executive Branch agencies, and the public rely on FPDS-NG for a wide range of information, including agencies’ contracting actions, governmentwide procurement trends, and how procurement actions support socioeconomic goals and affect specific geographical areas and markets. The panel reported that FPDS-NG data, while insightful when aggregated at the highest level, continue to be inaccurate and incomplete at the detailed level and cannot be relied on to conduct procurement analyses. The panel believes the processes for capturing and reporting FPDS-NG data need to be improved if the data are to meet user requirements. As a result, the panel made 15 recommendations aimed at increasing the accuracy and the timeliness of the FPDS-NG data. For example, the panel recommended that an independent verification and validation be undertaken to ensure all other validation rules are working properly in FPDS-NG. Our work has identified concerns similar to those expressed by the panel. In fact, the panel cited our work numerous times in its report. Like the panel, we have pointed out that FPDS-NG data accuracy has been a long-standing problem and have made numerous recommendations to address this problem. As early as 1994, we reported that the usefulness of federal procurement data for conducting procurement policy analysis was limited. More recently, in 2005, we again raised concerns about the accuracy and timeliness of the data available in FPDS-NG. We have also reported that the use of the independent verification and validation function is recognized as a best business practice and can help provide reasonable assurance that the system satisfies its intended use and user needs. OFPP representatives told us the office agrees with almost all of the 89 panel recommendations and has already acted on some, while potential actions are pending on others. OFPP identified legislative actions and FAR cases that could address over one-third of the recommendations. OFPP expects to address at least 51 of the remaining recommendations and plans to work with the chief acquisition officer or senior procurement official within each agency to do so. In some cases, OFPP has established milestones and reporting requirements to help provide it with visibility over the progress and results of implementing the recommendations. Although OFPP has taken some steps to track the progress of selected recommendations, it does not have an overall strategy or plan to gauge the successes and shortcomings in how the panel’s recommendations are implemented and how they improve federal acquisitions. Table 1 shows how OFPP expected the 89 recommendations to be implemented. In October 2007, OFPP representatives noted that while the panel directed 17 recommendations to Congress, legislative actions could address as many as 23 panel recommendations. Panel recommendations directed to Congress include potential legislative changes such as authorizing the General Services Administration to establish a new information technology schedule for professional services and enacting legislation to strengthen the preference for awarding contracts to small businesses.
An example of the latter is amending the Small Business Act to remove any statutory provisions that appear to provide for a hierarchy of small business programs. According to the panel, this is necessary because an agency would have difficulty meeting its small business goal if any one small business program takes priority over the others. Since October 2007, some panel recommendations have been addressed by legislative actions. For example, the panel recommended that protests of task and delivery orders valued in excess of $5 million be permitted. Section 843 of the National Defense Authorization Act for Fiscal Year 2008 allows for such protests but raises the dollar threshold to orders valued in excess of $10 million. For those recommendations that were expected to be addressed by legislative actions but have not yet been the subject of congressional action, OFPP representatives told us the office could take administrative actions, such as issuing a policy memorandum or initiating a FAR case, to implement most of them. In closing, the SARA Panel, like GAO, has made numerous recommendations to improve federal government acquisition—from encouraging competition and adopting commercial practices to improving the accuracy and usefulness of procurement data. Our work is largely consistent with the panel’s recommendations, and we believe that, taken as a whole and implemented effectively, the recommendations can bring needed improvements in the way the federal government buys goods and services. OFPP, as the lead office for responding to the report, is now in a key position to sustain the panel’s work by ensuring that panel recommendations are implemented across the federal government in an effective and timely manner. To do this, we recommended in our recent report that OFPP work with the chief acquisition officers and senior procurement officials across all the federal agencies to lay out a strategy or plan that includes milestones and reporting requirements that OFPP could use to establish accountability, exercise oversight, and gauge the progress and results of implementing the recommendations. Mr. Chairman and members of the subcommittee, this concludes my statement. I would be pleased to respond to any questions you might have. For questions regarding this testimony, please call John P. Hutton at (202) 512-4841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this testimony include James Fuquay, Assistant Director; Daniel Hauser; John Krump; Robert Miller; and Robert Swierczek.
A growing portion of federal spending is related to buying services such as administrative, management, and information technology support. Services accounted for about 60 percent of total fiscal year 2006 procurement dollars. The Services Acquisition Reform Act (SARA) of 2003 established an Acquisition Advisory Panel to make recommendations for improving acquisition practices. In January 2007, the panel proposed 89 recommendations to improve federal acquisition practices. GAO was asked to testify on how the panel recommendations compare to GAO's past work and identify how the Office of Federal Procurement Policy (OFPP) expects the recommendations to be addressed. This statement is based on GAO's analysis of the advisory panel's report. GAO's analysis is included in its December 2007 report, Federal Acquisition: Oversight Plan Needed to Help Implement Acquisition Advisory Panel Recommendations (GAO-08-160). The SARA Panel, like GAO, has made numerous recommendations to improve federal government acquisition--from encouraging competition and adopting commercial practices to improving the accuracy and usefulness of procurement data. The recommendations in the SARA Panel report are largely consistent with GAO's past work and recommendations. The panel and GAO have both pointed out the importance of a robust requirements definition process and the need for competition; the need to establish clear performance requirements, measurable performance standards, and a quality assurance plan to improve the use of performance-based contracting; the risks inherent in the use of interagency contracts because of their rapid growth and the potential for improper management; stresses on the federal acquisition workforce and the need for a strategy to assess these workforce needs; concerns about the role of contractors engaged in managing acquisition and procurement activities traditionally performed by government employees and the proper roles of federal employees and contractor employees in a "blended" workforce; and the adverse effects of inaccurate and incomplete federal procurement data, such as not providing a sound basis for conducting procurement analyses. The panel also made recommendations that would change the guidance for awarding contracts to small businesses. While GAO's work has addressed some small business policy issues, GAO has not made recommendations that would change the guidance to be used for awarding contracts to small businesses. OFPP representatives told GAO that OFPP agrees with almost all of the panel recommendations and expected that most of the 89 panel recommendations would be implemented through one of the following means: congressional actions; changes to the Federal Acquisition Regulation; OFPP actions, such as issuing new or revised policy; and federal agency actions. OFPP has already acted on some SARA recommendations, while other actions are pending or under consideration. Milestones and reporting requirements are in place to help OFPP gauge the implementation status of some recommendations but not for others. Moreover, OFPP does not have a strategy or plan to allow it to exercise oversight and establish accountability for implementing all of the panel's recommendations and to gauge their effect on federal acquisitions.
Beginning January 1, 2014, PPACA required most citizens and legal residents of the United States to maintain health insurance that qualifies as minimum essential coverage for themselves and their dependents or pay a tax penalty. Most Medicaid coverage and private health insurance coverage purchased through the exchanges qualifies as minimum essential coverage. To expand individuals’ access to minimum essential coverage, PPACA provided states the option to expand eligibility for Medicaid coverage, with increased federal financing for the newly eligible population. As of January 2014, 25 states had expanded their Medicaid programs, and an additional 4 states had done so as of March 2015. Beginning in October 2013, individuals were able to shop for private health insurance qualifying as minimum essential coverage through the exchanges, with coverage effective beginning as early as January 1, 2014. As of March 2015, the federal government operated an FFE in 34 states, and 17 states were approved to operate SBEs (see fig. 1). States with SBEs may use the FFE IT systems for eligibility and enrollment functions. In 2014, two states with SBEs used the FFE IT systems for eligibility and enrollment, while in 2015 three states with SBEs did so. PPACA also created federal subsidies for exchange coverage, most notably the premium tax credit available to eligible individuals with household incomes between 100 and 400 percent of the FPL. Individuals eligible for Medicaid or other minimum essential coverage, such as qualifying employer-sponsored coverage, are not eligible for the premium tax credit. The tax credit is refundable and is generally paid to issuers in advance to reduce enrollees’ premium costs for exchange plans. Advance payments of this tax credit are known as advance premium tax credits (APTC) and are calculated based on an eligible individual’s family size and anticipated household income relative to the cost of premiums for a benchmark plan. According to HHS, approximately 87 percent of individuals selecting a plan for the 2015 coverage year in FFE states qualified for the APTC, with an average monthly per-person APTC ranging from $155 in Arizona to $534 in Alaska, and an average reduction in premiums of about 72 percent. In addition to the premium tax credit, PPACA provides for cost-sharing reductions to reduce out-of-pocket costs, such as deductibles and copayments, for eligible individuals with household incomes between 100 and 250 percent of the FPL. PPACA required the establishment in all states of a coordinated eligibility and enrollment process for Medicaid and the exchanges. Since the enactment of the law in March 2010, CMS has issued regulations and technical guidance outlining aspects of this coordination. In particular, exchanges and state Medicaid agencies must enter into agreements with one another to ensure prompt eligibility determinations and enrollment of individuals in the appropriate programs regardless of where they apply, and must transmit individuals’ account information—that is, their records—via secure electronic interface. However, the mechanisms through which this coordination occurs may vary depending on the state. In FFE states, CMS has established an account transfer process through which accounts for individuals enrolled in or applying for exchange or Medicaid coverage are electronically transmitted between CMS and state Medicaid agencies where appropriate.
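Before turning to how those transfers work, the APTC mechanics described above can be made concrete with a minimal sketch. The FPL table and sliding-scale contribution percentages below are simplified placeholders, not the actual indexed values for any benefit year, and the function names are ours:

```python
# Illustrative only: placeholder FPL table and contribution percentages,
# not the actual values used to compute the credit in any year.
FPL_BY_FAMILY_SIZE = {1: 11_490, 2: 15_510, 3: 19_530, 4: 23_550}

def monthly_aptc(income: float, family_size: int,
                 benchmark_monthly_premium: float) -> float:
    """Return a rough monthly advance premium tax credit."""
    pct_fpl = 100.0 * income / FPL_BY_FAMILY_SIZE[family_size]
    if not 100 <= pct_fpl <= 400:
        return 0.0  # outside the statutory income window for the credit
    # Expected contribution slides from roughly 2% of income at 100% FPL
    # to roughly 9.5% at 400% FPL (a placeholder interpolation, not the
    # actual statutory schedule).
    applicable_pct = 0.02 + (0.095 - 0.02) * (pct_fpl - 100) / 300
    expected_contribution = income * applicable_pct / 12
    return max(0.0, benchmark_monthly_premium - expected_contribution)
```

Under these placeholder figures, a single filer with $23,000 in income and a $350 benchmark premium would receive roughly $264 a month; the credit absorbs most of the premium, consistent with the large average premium reductions HHS reported.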
If individuals apply for coverage in an FFE state, CMS is responsible for determining or assessing individuals’ eligibility for Medicaid and determining eligibility for exchange coverage, including exchange subsidies and, if applicable, facilitating enrollment in an exchange plan. If CMS determines or assesses that an individual is or may be eligible for Medicaid, it must transfer the individual’s account to the appropriate state Medicaid agency for enrollment, where appropriate. Individuals may also apply for coverage directly through the state Medicaid agency. In this case, the state is responsible for determining eligibility for Medicaid and, for individuals determined ineligible, transferring accounts to CMS for a determination of eligibility to enroll in subsidized exchange coverage. Conversely, states with SBEs are responsible for determining eligibility for both Medicaid and exchange coverage, including exchange subsidies, as well as enrolling individuals in the appropriate programs. There are differences in eligibility and enrollment policies for Medicaid and exchange coverage. Medicaid. Individuals may enroll in Medicaid coverage at any point during the year, with their coverage effective as of the date of application, reported eligibility change, or earlier. Individuals enrolled in Medicaid are generally required to report any changes—such as changes to income or household composition—that may affect their Medicaid eligibility. Outside of self-reported changes, eligibility for Medicaid must generally be redetermined every 12 months. When individuals are determined ineligible for Medicaid, states are required to send them notification that their coverage will be terminating at least 10 days prior to their Medicaid termination date. In addition, states may opt to extend Medicaid coverage through the end of the month if it would otherwise be terminated earlier in the month. Exchange coverage. Individuals’ options for enrollment in exchange coverage are generally restricted to an annual open enrollment period that starts near the end of the calendar year, unless they experience a change that qualifies them for a special enrollment period. Exchange coverage is generally prospective, meaning that individuals must select an exchange plan by a certain date in order to have coverage effective the following month. If individuals choose to end their exchange coverage, they must generally provide advance notice at least 14 days before the requested termination date. As with Medicaid, individuals enrolled in subsidized exchange coverage are required to report any changes that may affect their eligibility. Eligibility for subsidized exchange coverage is redetermined during open enrollment and any time an individual reports a change, regardless of when coverage began during the year. If individuals are determined ineligible for continued subsidized exchange coverage, such subsidies must be terminated, or the individuals may be held liable for repayment of the APTC as part of the IRS reconciliation process. The coordination of federal payments for individuals transitioning between Medicaid and subsidized exchange coverage is addressed through Medicaid’s third party liability rule and IRS’s reconciliation process for the APTC. Specifically: Third party liability in Medicaid. Where individuals are enrolled in Medicaid along with another form of coverage, Medicaid operates as the payer of last resort.
This means that the other source of coverage must pay to the extent of its liability before Medicaid pays, referred to as third party liability. For example, for individuals enrolled in both Medicaid and exchange coverage for some period of time, the issuer of exchange coverage is required to pay to the extent of its liability before Medicaid does. Reconciliation of the APTC with the IRS. Individuals enrolled in exchange coverage and receiving the APTC must file federal income tax returns with the IRS to reconcile the amount of the premium tax credit allowed with the amount received in advance, and may be liable to pay back any excess credits received during the taxable year. For individuals transitioning from exchange coverage to Medicaid during the year, this reconciliation could include repayment of APTC received after an individual was determined eligible for Medicaid. Most state Medicaid programs have implemented managed care systems, under which the state pays contracted issuers a set amount per beneficiary per month to arrange for all covered services and the issuer assumes the risk for the cost of providing those services. In states that offer managed care in their Medicaid programs, issuers have the potential to participate in both Medicaid and the exchange market. Issuers approved to offer Medicaid managed care, exchange coverage, or both, must comply with applicable state and federal requirements for the respective programs. For example, issuers offering Medicaid managed care must comply with any applicable state and federal restrictions on marketing their plans to Medicaid beneficiaries. In addition, some states may require issuers contracting with the Medicaid program to offer such coverage statewide, while in other states issuers may offer their Medicaid coverage statewide or to enrollees in selected geographic regions within the state. Issuers of exchange coverage have the option of offering their exchange plans statewide or within selected geographic regions. Information from CMS and selected states and issuers indicates that individuals transitioning from Medicaid to exchange coverage may experience coverage gaps, and that duplicate coverage is occurring under several scenarios. CMS and our selected states had a number of enrollment policies, IT mechanisms, and consumer education efforts that minimize the potential for coverage gaps and duplicate coverage; however, our assessment of CMS’s policies and procedures for FFE states found that additional controls are needed. Officials from CMS and four of our eight selected states told us that individuals may experience gaps in coverage when transitioning between Medicaid and exchange coverage, though they did not have information on the extent to which such gaps were occurring. Specifically, as Medicaid coverage is effective as of the date an eligibility change is reported or earlier, officials from two states explained that coverage gaps should generally not occur for individuals who lose eligibility for exchange coverage and are transitioning to Medicaid. However, as exchange coverage is generally prospective, coverage gaps could occur in the other direction. In particular, officials from one state told us that individuals who lose eligibility for Medicaid toward the end of a month may be more likely to experience coverage gaps because they would have a short window of time to enroll in exchange coverage so that coverage is effective the first day of the following month. 
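The timing mismatch just described, retroactive Medicaid effective dates against prospective exchange effective dates, can be sketched in a few lines. The fifteenth-of-the-month cutoff encoded here is the regular rule discussed later in this report, and the helper names are ours:

```python
from datetime import date

def medicaid_effective_date(reported: date) -> date:
    # Medicaid coverage is effective as of the application or reported-change
    # date (states may even grant earlier, retroactive coverage).
    return reported

def exchange_effective_date(plan_selected: date) -> date:
    # Exchange coverage is prospective: selecting a plan by the 15th makes
    # coverage effective the 1st of the next month; selecting later pushes
    # it to the 1st of the month after that (the regular cutoff rule).
    months_ahead = 1 if plan_selected.day <= 15 else 2
    idx = plan_selected.month + months_ahead - 1
    return date(plan_selected.year + idx // 12, idx % 12 + 1, 1)
```

Under this rule, someone whose Medicaid coverage ends September 30 but who selects a plan on September 20 would not have exchange coverage until November 1, a one-month gap, which is why the special transition rule described later in this report matters.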
Individuals who experience gaps in coverage may decide to forgo necessary care rather than pay out-of-pocket, which could negatively affect health outcomes and result in sicker individuals enrolling in exchange coverage. Information from selected states and issuers indicated that duplicate coverage—that is, enrollment in both Medicaid and subsidized exchange coverage—was occurring under the three scenarios outlined below, the first of which is permitted under federal law. However, the full extent to which duplicate coverage was occurring was unknown. Scenario 1: Individuals who are completing the transition from subsidized exchange to Medicaid coverage. According to officials from three of our eight selected states, some amount of duplicate coverage may be expected for individuals transitioning from subsidized exchange coverage to Medicaid. For example, if an individual with subsidized exchange coverage reports a change and is determined eligible for Medicaid on September 16th, the individual could have duplicate coverage for the period of September 16th through September 30th. This is primarily due to differences in the effective dates of coverage. Medicaid coverage is effective as of the date an eligibility change is reported or earlier, while exchange coverage can generally only be terminated prospectively, with at least 14 days advance notice. The period of duplicate coverage could be extended if the Medicaid eligibility determination takes longer—and per federal regulations it can take up to 45 days for applicants not applying on the basis of disability. This transitional period of duplicate coverage is permitted under the law; that is, individuals are permitted to be enrolled in both types of coverage through the end of the month of the Medicaid eligibility determination. Scenario 2: Individuals who do not end their subsidized exchange coverage after being determined eligible for Medicaid. One of our selected states identified that 3,500 individuals had duplicate coverage at some point from January to July 2014, in part because some of the individuals did not end their subsidized coverage after being determined eligible for Medicaid. Individuals may not end subsidized exchange coverage for a variety of reasons, including that, depending on their income level and plan selection, some individuals receiving subsidies may not have to make a premium payment and thus may not realize they are still enrolled and need to take steps to end their coverage. If individuals do not end coverage, but stop paying premiums once Medicaid coverage begins, the APTC must still be paid out for a 3-month grace period after premium payments have ceased, though issuers must return the APTC amount for the final 2 months of this period under certain circumstances. Scenario 3: Individuals who enroll in subsidized exchange coverage when already enrolled in Medicaid. One of our selected issuers reported that a small number of individuals enrolled in one of the issuer’s Medicaid plans and later also obtained subsidized coverage through one of its exchange plans—18 individuals as of February 2015. Officials from the Medicaid agency in the state where this issuer operates also told us that they had identified cases of duplicate coverage by selecting a small sample of individuals from one of their Medicaid issuers, and that they had heard from some other issuers in the state that they had members enrolled in both coverage types.
Additionally, another of our selected issuers reported that one of its plans had experienced a number of instances of duplicate coverage—which tended to last for many months—and that the volume had increased during 2015 open enrollment for exchange coverage, likely because Medicaid coverage was not identified. To the extent duplicate coverage occurs, there could be financial implications for the federal government. In cases where the state Medicaid program has identified that an individual is enrolled in exchange coverage—and Medicaid is operating as the payer of last resort—there may not be a significant difference in federal costs for the individual during the period of duplicate coverage compared with what would have been spent if duplicate coverage had not occurred. However, evidence suggests that some states may face challenges identifying exchange coverage. We recently found that states face challenges identifying whether Medicaid enrollees have other sources of coverage, which could include exchange coverage. In addition, officials from our four selected FFE states told us that they do not currently have access to exchange enrollment information, and that such information could help them better identify information on Medicaid enrollees’ other sources of coverage. CMS officials told us that CMS has provided exchange enrollment data to one state that requested it for third-party liability purposes, and the agency would consider the appropriateness of providing such data to other states if requested. If the state is not aware of an individual’s exchange coverage, the federal government could be paying twice—that is, subsidizing exchange coverage and reimbursing states for Medicaid spending for the same individual. The risk of duplicate payments may be higher in states with higher Medicaid managed care penetration, as the state pays issuers a monthly fee for each enrolled individual, regardless of whether services are received. The tax reconciliation process for the APTC has the potential to reduce the financial implications of any duplicate payments. However, according to IRS officials, the IRS will generally not have the information necessary to identify duplicate coverage as part of reconciling the amount of the APTC an individual may owe until 2016—that is, the tax filing season for tax year 2015—when states are required to report Medicaid enrollment data to IRS. Officials told us that once IRS begins receiving the data, their ability to identify the need for repayment due to duplicate coverage will depend on the quality of the data and the IRS’s available resources. Officials said that depending on resources, they may check for Medicaid coverage for each individual receiving the APTC or for a sample of individuals. Duplicate coverage could also have financial implications for individuals. As long as individuals end subsidized exchange coverage upon receiving their Medicaid eligibility determination, they would generally not be liable for repaying the APTC received during the transitional period of duplicate coverage discussed in the first scenario above; however, according to CMS officials, individuals would be responsible for their portion of the exchange premiums during this period. To the extent duplicate coverage occurs outside of the transitional period and the IRS identifies duplicate coverage during the tax reconciliation, individuals may be liable for repaying all or a portion of the APTC received.
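Putting scenario 1 together with the reconciliation process, a short sketch shows which months of APTC are at risk of repayment. The income-based repayment caps that can limit an individual's liability are omitted, and the function names are ours:

```python
import calendar
from datetime import date

def permitted_overlap_end(determination: date) -> date:
    # Scenario 1: duplicate coverage is permitted through the end of the
    # month in which the Medicaid eligibility determination is made.
    last_day = calendar.monthrange(determination.year, determination.month)[1]
    return determination.replace(day=last_day)

def aptc_months_at_risk(determination: date, aptc_months: list) -> int:
    # Months of subsidized exchange coverage after the permitted transitional
    # window may have to be repaid at reconciliation (income-based repayment
    # caps, omitted here, can limit the amount actually owed).
    return sum(1 for m in aptc_months if m > determination.month)
```

For the September 16 determination in scenario 1, the permitted overlap runs through September 30; if the enrollee kept subsidized coverage through December, three months of APTC would be at risk.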
CMS and our selected states had policies and procedures that minimize the potential for coverage gaps or duplicate coverage when individuals transition between Medicaid and the exchanges. Enrollment-related: CMS and selected states had enrollment policies and procedures that minimize the potential for coverage gaps by facilitating alignment of Medicaid and exchange coverage periods. For example, for individuals transitioning from Medicaid to exchange coverage, CMS requires that, as long as individuals select an exchange plan on or before the day that Medicaid coverage ends, exchanges must ensure that coverage is effective on the first day of the following month. In contrast, most individuals enrolling in exchange coverage must select a plan by the fifteenth of the month in order to have a coverage effective date for the first day of the following month. Additionally, in February 2015, CMS adopted a new regulation governing premium payments in FFE states, giving individuals transitioning from Medicaid 30 calendar days after enrolling in exchange coverage to pay their first premium. At the state level, officials from one state told us they moved the deadline for mailing notification of Medicaid coverage termination to 20 days prior to termination instead of the required minimum of 10, so that individuals have more time to shop for a plan on the exchange. Additionally, officials from all of our selected states reported extending Medicaid coverage to at least the end of a month even when an individual becomes ineligible for Medicaid coverage earlier in the month. IT-related: CMS and selected SBE states also had IT-related policies and procedures that minimize the potential for coverage gaps as well as duplicate coverage. For example, in FFE states, when individuals are determined potentially eligible for subsidized exchange coverage, CMS conducts automated checks of state IT systems to determine if individuals already have Medicaid coverage, thus helping to prevent duplicate coverage. At the state level, officials from all four of our selected SBE states reported that their states had implemented integrated eligibility and enrollment systems for Medicaid and exchange coverage that, among other things, helped avoid gaps in coverage by making eligibility determinations in real time: in other words, at the time an individual reports a change. Officials also said that these integrated systems included system rules that help prevent duplicate coverage by not allowing an individual to be determined eligible for Medicaid and exchange subsidies simultaneously. In addition, officials from three of these states noted that their systems automatically terminate subsidized exchange coverage once individuals are determined eligible for Medicaid, while officials in the fourth state said their systems would have this ability beginning in September 2015. Consumer education-related: Both CMS and an SBE state reported including guidance on exchange websites that could help individuals avoid coverage gaps and duplicate coverage during the transition between Medicaid and exchange coverage. For example, CMS has added guidance on coverage transitions on the FFE website that outlines the steps individuals must take when they have subsidized exchange coverage and are later determined eligible for Medicaid, including that they are responsible for ending subsidized exchange coverage. CMS also notifies individuals in FFE states of this responsibility when they are enrolling in exchange coverage.
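The enrollment-related alignment rules above lend themselves to a small sketch, assuming the two effective-date rules work as described (helper names are ours):

```python
from datetime import date

def first_of_next_month(d: date) -> date:
    return date(d.year + d.month // 12, d.month % 12 + 1, 1)

def transition_effective_date(plan_selected: date, medicaid_ends: date) -> date:
    # Special rule for people leaving Medicaid: selecting a plan on or before
    # the day Medicaid coverage ends makes exchange coverage effective the
    # first of the following month, bypassing the regular fifteenth-of-the-
    # month cutoff and closing the potential gap.
    if plan_selected <= medicaid_ends:
        return first_of_next_month(medicaid_ends)
    # Otherwise fall back to a simplified version of the regular cutoff.
    months_ahead = 1 if plan_selected.day <= 15 else 2
    idx = plan_selected.month + months_ahead - 1
    return date(plan_selected.year + idx // 12, idx % 12 + 1, 1)
```

With Medicaid ending September 30 and a plan selected September 20, this rule yields October 1 coverage instead of the November 1 date the regular cutoff would produce.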
On the consumer education front, officials from one of our SBE states similarly said that they have tried to improve the clarity of instructions on their exchange website, because most individuals are making eligibility changes online. Despite the steps CMS has taken, its current policies and procedures do not sufficiently minimize the potential for coverage gaps and duplicate coverage in the 34 states that had an FFE in 2015. According to federal internal control standards, in its responsibilities for administering and overseeing Medicaid and the exchanges, CMS should design and implement necessary policies and procedures to achieve agency objectives and assess program risk. These policies and procedures should include internal controls, such as conducting monitoring to assess performance over time, that provide reasonable assurance that an agency has effective and efficient operations and that program participants are in compliance with applicable laws and regulations. We identified a number of weaknesses in CMS’s controls for minimizing coverage gaps and duplicate coverage for individuals transitioning between Medicaid and exchange coverage in FFE states. With regard to coverage gaps, we found that CMS’s controls do not provide reasonable assurance that the accounts of individuals transitioning from Medicaid to exchange coverage in FFE states are transferred by states in near real time, which puts individuals in these states at greater risk of experiencing such gaps. Specifically, federal regulations require that state Medicaid agencies transfer accounts to CMS promptly and without undue delay. However, according to CMS officials, as of July 2015, the agency was not monitoring the timeliness of account transfers from states, and thus CMS would not be aware if account transfers from FFE states were happening promptly. CMS officials told us that account transfers are not happening in real time, but their understanding was that states typically send transfers at least daily. Officials from three of our four selected FFE states reported that account transfers were occurring at least daily, while officials from the remaining state reported that transfers were sent to CMS three times per week. Given the number of steps involved in the transition from Medicaid to exchange coverage, individuals may be more likely to have gaps in coverage to the extent account transfers from states to CMS are not happening in a timely fashion. For example, if a state sends a notification of termination on September 20, individuals could have just over a week to have their accounts transferred, apply for exchange coverage, and select a plan to avoid a coverage gap (see fig. 2). With regard to duplicate coverage, we found weaknesses in CMS’s controls for preventing, detecting, and resolving duplicate coverage in FFE states. Vulnerabilities in methods to prevent individuals from maintaining subsidized exchange coverage after being determined eligible for Medicaid. Individuals in FFE states might not end subsidized exchange coverage when they are determined eligible for Medicaid. According to CMS officials, in April 2015, the agency revised the notice individuals receive when they are determined eligible or potentially eligible for Medicaid to make clear that individuals are responsible for doing so. However, individuals who apply for Medicaid directly through their state Medicaid agency may not receive such notification.
In addition, CMS does not have procedures to automatically terminate subsidized exchange coverage when individuals are determined eligible for Medicaid, though CMS officials told us that they are considering options for doing so in the future. Vulnerabilities in methods to prevent individuals enrolled in Medicaid from enrolling in subsidized exchange coverage. While CMS generally checks for Medicaid coverage before initially determining someone eligible for subsidized exchange coverage, officials recognized that there are limitations to this check. Specifically, officials said these checks identify only whether the person is enrolled in Medicaid at the point in time of the check. Thus, if, for example, the Medicaid determination was pending, CMS would not know that from the check. Also, according to CMS officials, CMS is not able to conduct checks for Medicaid for the small percentage of individuals who do not provide social security numbers on their applications. Further, CMS did not perform a check for Medicaid coverage for the 1.96 million individuals who were auto-reenrolled in exchange coverage during 2015 open enrollment for FFE states. The absence of such a check increases the risk that duplicate coverage occurring during the year would continue when individuals are enrolled in subsidized exchange coverage for another year. No methods to detect and resolve duplicate coverage. As of July 2015, CMS did not have procedures to detect and resolve cases of duplicate coverage in FFE states. Further, CMS had generally not provided FFE states with exchange enrollment information that they would need to identify cases of duplicate coverage. While CMS has not conducted a formal risk assessment to identify the potential causes of duplicate coverage in FFE states, CMS officials told us that the agency has a number of planned steps to address the risk. The planned approach focuses on taking steps to identify and resolve rather than prevent duplicate coverage. Specifically, CMS has plans to implement periodic checks for duplicate coverage starting in the summer of 2015, and CMS officials told us in July 2015 that the first check would occur later that month. CMS officials estimated that the first check will take about 2 to 3 weeks to perform and will involve, among other steps, querying each FFE state’s Medicaid system. According to the officials, after the first check is complete CMS will notify individuals found to have duplicate coverage that they must contact the FFE to update their coverage information. Further, in 2016, if CMS can build the IT functionality to do so, the agency plans to begin automatically terminating exchange subsidies if individuals identified through the checks do not respond within 30 days of being notified. CMS officials told us that they are considering performing the periodic checks ahead of future open enrollment periods for exchange coverage, which could help prevent duplicate coverage among those automatically reenrolled in exchange coverage. CMS officials told us that the planned checks and notification process are a more efficient way of detecting and resolving duplicate coverage compared to providing exchange enrollment information to states and requiring them to identify duplicate coverage, which CMS would then need to resolve. The effectiveness of CMS’s plans to address duplicate coverage will depend in part on how frequently the checks are conducted and, as of July 2015, CMS had not yet decided the frequency.
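Based on the planned steps CMS officials described, the periodic check could be sketched roughly as follows; the record fields, the per-state lookup, and the auto-termination step are our rendering of those plans, not CMS code:

```python
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=30)

def run_periodic_check(aptc_enrollees, medicaid_enrolled, today: date):
    """Sketch: flag APTC enrollees found in a state Medicaid system, notify
    them, and end subsidies if they do not respond within 30 days."""
    for person in aptc_enrollees:
        # Stand-in for querying each FFE state's Medicaid system.
        if not medicaid_enrolled(person["state"], person["id"]):
            continue
        if person.get("notified_on") is None:
            person["notified_on"] = today          # step 1: notify
        elif (today - person["notified_on"] >= RESPONSE_WINDOW
              and not person.get("responded")):
            person["subsidy_terminated"] = True    # step 2: end the subsidy
```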
CMS officials told us that they are considering performing the checks on a regular basis—possibly quarterly—but said the frequency of the checks will depend in part on the agency’s analysis of the first check, including the level of effort required by state Medicaid agencies. Determining the frequency of the checks after completing an analysis of the first check is reasonable and could provide CMS with important insights. However, until CMS establishes the frequency of its checks, the risk of duplicate coverage going undetected continues to exist. Further, the less frequently the checks are conducted, the longer duplicate coverage could last if individuals do not independently take steps to end their subsidized exchange coverage. For example, for individuals who have subsidized exchange coverage and are determined eligible for Medicaid, if the checks are conducted monthly, duplicate coverage could last up to 2 months longer than what might be expected during the transition period; if quarterly, up to 4 months; and if biannually, up to 7 months (see fig. 3). In addition, while CMS officials told us that they intend to monitor the results of the periodic checks, they do not have a specific plan to routinely monitor the effectiveness of their planned checks and other procedures. According to CMS officials, the agency is exploring metrics to help measure the success of the periodic checks, such as identifying the number of people who received notification of duplicate coverage and subsequently ended their subsidized exchange coverage. However, CMS has not set a level of duplicate coverage that it deems acceptable, both in terms of the time period for which individuals have duplicate coverage and the proportion of Medicaid or exchange enrollees that experience duplicate coverage within a given time frame. Without such thresholds, it will be difficult for the agency to determine whether its procedures are sufficient or additional steps are needed. Data from three of our selected states—Kentucky, New York, and Washington—indicated that collectively over 70,000 individuals transitioned between Medicaid and exchange coverage in 2014. Specifically, the three states—all of which were SBE states that had expanded Medicaid—reported that about 73,000 individuals transitioned in 2014 (see table 1). These individuals accounted for between 7.5 percent and 12.2 percent of exchange coverage enrollment and less than 1 percent of Medicaid enrollment in those states. Data from the three states also indicated that most individuals transitioned to or from subsidized exchange coverage, rather than unsubsidized exchange coverage. While states were not able to provide data on the demographics of those transitioning, New York officials told us that those transitioning were likely mostly adults, because children have access to CHIP. In New York, CHIP covers children up to 400 percent of FPL—the same income limit as that set for the premium tax credit—compared with the Medicaid limit for adults of 133 percent of FPL. While individuals transitioning accounted for a relatively small percentage of enrollment, the total number of individuals transitioning across states could be significant. Of the 25 states that had expanded Medicaid as of January 2014, we estimate that Kentucky, New York, and Washington accounted for 22.9 percent of total Medicaid and CHIP enrollment and 18.3 percent of total exchange enrollment in 2014.
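Returning briefly to the duplicate-coverage checks, the durations cited above follow a simple pattern if "biannually" is read as every six months: the worst case is one full check interval plus roughly a month for the notification window. The small calculation below is our reading of those figures, not a CMS formula:

```python
def max_extra_months(check_interval_months: int,
                     response_window_months: int = 1) -> int:
    # Worst case: Medicaid eligibility begins just after a check runs, so
    # duplicate coverage persists for a full interval plus the roughly
    # month-long window the enrollee has to respond before subsidies end.
    return check_interval_months + response_window_months

# Reproduces the figures in the text:
assert max_extra_months(1) == 2   # monthly checks
assert max_extra_months(3) == 4   # quarterly checks
assert max_extra_months(6) == 7   # checks every six months
```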
The data from the three states may understate the extent to which transitions between Medicaid and exchange coverage could occur in those states in future years. In particular, the number of individuals moving from Medicaid to exchange coverage may be greater in future years than in 2014. Individuals newly eligible for and enrolled in Medicaid in early 2014 would not have gone through their first annual redetermination of Medicaid eligibility, and officials in one state told us that they did not expect to see a lot of movement from Medicaid to exchange coverage until those redeterminations began. In addition, the number of individuals moving from exchange coverage to Medicaid in the three states may be greater in future years. Annual redeterminations of eligibility for subsidized exchange coverage are to occur during the annual open enrollment period for exchange coverage, which may extend from the end of a calendar year through the beginning of the following calendar year. As 2014 was the first year of exchange coverage, the data for this year reflected, at most, only changes resulting from annual redeterminations of eligibility during the end of the calendar year—the beginning of the open enrollment period for 2015 exchange coverage. Where selected SBE states were not able to provide data on transitions between Medicaid and exchange coverage, officials told us they were developing or improving the functionality to track those data. In Colorado, which was not tracking transitions in 2014, officials told us that tracking transitions was considered a high priority. Officials told us that, as of July 2015, the state had made changes to its IT system that would provide the functionality to track transitions and they anticipated being able to do so later that year. In New York, officials reported being in the process of developing the functionality to track transitions from Medicaid to exchange coverage, and, in July 2015, the officials told us that they had recently started tracking these transitions. In Washington, a state already tracking transitions, officials told us that, as of July 2015, they had a project underway to begin looking at the demographics of those transitioning, including age and gender. Selected states and CMS could not provide data on the extent to which individuals are transitioning between Medicaid and exchange coverage in FFE states. Officials from all four of our selected FFE states told us that the state did not have access to exchange enrollment information, and therefore the state was not able to provide data on transitions between Medicaid and exchange coverage. Similarly, as of July 2015, CMS could not provide data on transitions between Medicaid and exchange coverage in FFE states. CMS officials told us that the FFE and state Medicaid IT systems are not integrated in a way that would allow for real-time tracking of transitions. Additionally, though CMS has access to both exchange and Medicaid enrollment data for FFE states, officials told us that, as of July 2015, they could not use those data to determine the number of individuals transitioning retrospectively. Officials explained that, for example, there was no single, unique identifier for an individual common to both data sets, making it difficult to match people across them.
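Absent a shared unique identifier, matching the two data sets typically means building a composite key from quasi-identifiers. The sketch below assumes hypothetical record fields (name, date of birth, ZIP code); real linkage work would also need fuzzy matching and manual review of near misses:

```python
import unicodedata

def _norm(s: str) -> str:
    # Strip accents, case, and punctuation so cosmetic differences
    # (for example, "O'Brien" vs "OBrien") do not block a match.
    s = unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode()
    return "".join(ch for ch in s.lower() if ch.isalnum())

def match_key(rec: dict) -> tuple:
    # Hypothetical composite key standing in for the missing identifier.
    return (_norm(rec["last_name"]), _norm(rec["first_name"])[:1],
            rec["dob"], rec["zip"])

def candidate_transitions(medicaid_recs, exchange_recs):
    # Pair Medicaid records with exchange records sharing the same key.
    index = {match_key(r): r for r in exchange_recs}
    return [(m, index[match_key(m)])
            for m in medicaid_recs if match_key(m) in index]
```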
CMS officials told us that, as of May 2015, representatives from CMS as well as from the Office of the Assistant Secretary for Planning and Evaluation had been working for about a year on a methodology for examining transitions. Officials said these efforts have primarily focused on analyzing transitions in SBE states, but that the findings may inform how to perform such an analysis for FFE states. Information from our selected states and CMS indicated that most states with Medicaid managed care had one or more Medicaid issuers that also offered coverage through the state’s exchange. Seven of our eight selected states—all but Iowa—reported having at least one issuer offering both Medicaid and exchange coverage in the state in 2014, ranging from 2 to 13 issuers. These results are consistent with an analysis completed by CMS indicating that, in the 40 states with Medicaid managed care, the majority—33—had one or more issuers offering both Medicaid and exchange coverage in 2014. CMS did not identify any issuers offering both types of coverage in the remaining seven states. However, information from our selected states also indicated that in some states, the majority of Medicaid and exchange enrollees may not be enrolled with issuers offering both types of coverage. In the seven selected states with issuers offering both types of coverage, the issuers accounted for between 8 and 76 percent of Medicaid enrollment and between 19 and 74 percent of exchange enrollment where data were available from states (see table 2). The proportion of Medicaid enrollees in plans offered by issuers that also offer exchange coverage is affected by the proportion of Medicaid enrollees who participate in managed care in the state, as enrollees in fee-for-service Medicaid would not be enrolled with an issuer. For example, in Colorado, which had a relatively low percentage of Medicaid enrollees in plans offered by issuers also offering exchange coverage, the majority, or about two-thirds, of Medicaid enrollees were in fee-for-service as of February 2015, according to state officials. Additionally, not all individuals enrolled with issuers offering both types of coverage would be able to remain with their issuer when transitioning, due to differences in issuers’ service areas for their Medicaid and exchange coverage. For example, one of the two issuers that offered both types of coverage in Kentucky in 2014 offered Medicaid coverage statewide, but offered exchange coverage in just 15 of the 120 counties in the state, representing about 41 percent of the state’s population. The other issuer offered exchange coverage statewide and Medicaid coverage in 111 counties, representing about 76 percent of the population. In seven counties, representing about 5 percent of the population, neither of the issuers offered both Medicaid and exchange coverage. A larger proportion of individuals may have the opportunity to remain with their issuer when transitioning between the coverage types in future years. In 2015, the total number of issuers offering both Medicaid and exchange coverage increased in three of our selected states. In addition, information from selected states indicated that in some cases, issuers that already offered both Medicaid and exchange coverage in some counties within a state began to do so in additional counties in 2015.
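The Kentucky service-area figures amount to simple set arithmetic over counties. The sketch below, with hypothetical county sets as inputs, shows the computation behind that kind of tally:

```python
def overlap_report(medicaid_counties: dict, exchange_counties: dict,
                   all_counties: set) -> dict:
    """For each issuer, the counties where it offers both coverage types,
    plus the counties where no issuer offers both."""
    both = {issuer: medicaid_counties[issuer] & exchange_counties[issuer]
            for issuer in medicaid_counties}
    covered = set().union(*both.values()) if both else set()
    return {"both_by_issuer": both, "neither": all_counties - covered}
```

Applied to Kentucky's 120 counties, this kind of tally would surface the seven counties in which neither issuer offered both Medicaid and exchange coverage.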
Evidence from selected issuers also suggests that a growing number of individuals may have the opportunity to remain with their issuer moving forward—for example, representatives from one issuer reported that the number of states in which the issuer offered both types of coverage grew from 3 states in 2014 to 16 states in 2015. Representatives from another issuer told us that, given the complexities of offering two new types of coverage, it had so far chosen not to offer exchange coverage in some states in which it was newly participating in Medicaid but anticipated beginning to offer exchange coverage in those states in future years. While a growing number of individuals may have the opportunity to remain with their issuer when transitioning between the coverage types, the extent to which individuals will choose to do so will likely depend on a number of factors, including the following: Desire to change plans. Studies suggest that some individuals are likely to change plans—which may be offered by different issuers— when provided the option to do so. This change may be positive, such as in cases where the new plan better addresses the individual’s health care needs. Cost of exchange plans. Individuals may be less likely to remain with their issuer when transitioning from Medicaid to the exchange if issuers offering both types of coverage are unable to offer competitive premiums for their exchange plans. Representatives from two selected issuers that offered both types of coverage reported that they had relatively low exchange market share in 2014 most likely because they were unable to offer competitive premiums, but said they were able to offer lower premiums in 2015 and have seen or expected to see increased enrollment. Awareness of issuer participation in both types of coverage. Individuals transitioning between coverage types may not be aware that their issuer also offers plans in the new coverage type. For example, in some states Medicaid managed care marketing restrictions may prohibit issuers from marketing their exchange plans to existing Medicaid enrollees. For instance, representatives from one selected issuer reported piloting an outreach program in some states to inform Medicaid members whose coverage was terminating about the issuer’s exchange plans, but noted that the issuer was not permitted to operate this program in at least one state. In addition, issuers may operate under different names in Medicaid and for their exchange coverage, which could make it difficult for individuals to identify whether their issuer operates in the new coverage type. Auto-assignment in Medicaid managed care. Many states with managed care auto-assign individuals to issuers either at the initial eligibility determination or if an individual does not select his or her own plan within a certain time period. While such individuals may have the opportunity to change their Medicaid issuer after auto-assignment, they may choose not to do so or may not be aware of this ability, which may affect their likelihood of remaining with their issuer when transitioning from exchange coverage. Finally, for individuals transitioning between Medicaid and exchange coverage, the benefits of remaining with the same issuer for continuity of care are uncertain.
Representatives of some selected issuers reported that covered benefits, cost-sharing, and drug formularies for their Medicaid and exchange plans differed to some extent due in part to differences in state and federal requirements for Medicaid and exchange coverage, with Medicaid requiring coverage of additional services and lower cost-sharing as compared to exchange coverage. These differences will likely persist regardless of whether individuals remain with the same issuer. However, officials from some selected states told us that remaining with the same issuer when transitioning may allow individuals to keep their health care providers, which could lead to improved continuity of care. There is some evidence to suggest that certain issuers offering both Medicaid and exchange coverage offer similar provider networks. Specifically, representatives of three selected issuers that traditionally offered Medicaid coverage reported leveraging their existing Medicaid provider networks when expanding to the exchange, and two of the issuers noted that most providers elected to participate. At the same time, some officials told us that provider networks for issuers offering both types of coverage could differ. Whether individuals transitioning between the coverage types are able to keep their providers may depend in part on the specific exchange plan they choose, as issuers often offer multiple plan options on the exchange, some of which may have more similar provider networks to Medicaid than others. Through the creation of subsidized exchange coverage and the state option to expand Medicaid eligibility under PPACA, many low-income individuals have a new pathway to maintain health coverage despite changes in income or other factors. Federal and state Medicaid and exchange policies and procedures influence the extent to which individuals are able to seamlessly transition between coverage types, including whether they are able to transition without a gap in coverage and whether they end up enrolled in both Medicaid and subsidized exchange coverage for extended periods of time. To the extent coverage gaps and duplicate coverage occur, individuals may decide to forgo needed care or may unnecessarily be paying their remaining share of exchange premiums after application of the APTC when they should only be enrolled in Medicaid. Additionally, duplicate coverage could mean that the federal government is paying for both Medicaid and subsidized exchange coverage for some individuals. SBE states are better positioned to minimize the potential for coverage gaps and duplicate coverage to the extent they are able to share enrollment data across Medicaid and the exchange as well as build controls into their IT systems to prevent duplicate coverage. For FFE states as well as SBE states using the FFE IT systems, CMS implemented several policies and procedures and has additional controls planned that represent positive steps toward minimizing coverage gaps and duplicate coverage. However, measured against federal internal control standards, those plans do not sufficiently address the risks. In particular, CMS does not currently track and has no plans to track the timeliness of account transfers from states, which could increase the potential that individuals transitioning from Medicaid to the exchange will experience coverage gaps.
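What such timeliness tracking could look like is straightforward to sketch; the one-day threshold below is a hypothetical standard, since neither CMS nor the regulations define "promptly" in days:

```python
from statistics import median

def slow_transfer_states(transfers, max_days: float = 1.0) -> dict:
    """transfers: iterable of (state, determined_at, received_at) tuples of
    datetimes. Returns states whose median transfer latency, measured from
    the ineligibility determination to receipt of the account transfer,
    exceeds max_days."""
    latency = {}
    for state, determined_at, received_at in transfers:
        days = (received_at - determined_at).total_seconds() / 86_400
        latency.setdefault(state, []).append(days)
    return {s: round(median(v), 2)
            for s, v in latency.items() if median(v) > max_days}
```

Measuring from the determination rather than from the previous batch send is the design point: it captures timeliness, not just how often batches are transmitted.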
Additionally, CMS has not determined the frequency of its planned checks for duplicate coverage, a factor that will be critical to their effectiveness, and does not have a plan—including target levels of duplicate coverage the agency deems acceptable—for monitoring the checks and other procedures. Despite the addition of the checks, vulnerabilities related to preventing duplicate coverage are likely to persist because, for example, the automated check for Medicaid during eligibility determinations for subsidized coverage will continue to have limitations. Thus, given the potential financial implications, if the checks identify that duplicate coverage is occurring at a significant rate, additional steps could be needed to protect the federal government and individuals from unnecessary and duplicative expenditures.

Our findings indicate that a relatively small proportion of Medicaid and exchange enrollees may be transitioning between coverage types, and thus the incidence of coverage gaps and duplicate coverage could be limited. However, to the extent that transitions increase in the future—particularly if exchange enrollment continues to grow and if additional states expand Medicaid—improvements to CMS controls to minimize coverage gaps and duplicate coverage for these individuals will be increasingly important.

To better minimize the risk of coverage gaps and duplicate coverage for individuals transitioning between Medicaid and the exchange in FFE states, we recommend that the Administrator of CMS take the following three actions:

1. Routinely monitor the timeliness of account transfers from state Medicaid programs to CMS and identify alternative procedures if near real time transfers are not feasible in a state.

2. Establish a schedule for regular checks for duplicate coverage and ensure that the checks are carried out according to schedule.

3. Develop a plan, including thresholds for the level of duplicate coverage it deems acceptable, to routinely monitor the effectiveness of the checks and other planned procedures to prevent and detect duplicate coverage, and take additional actions as appropriate.

We provided a draft of this report to HHS and IRS for comment. In its written comments—reproduced in appendix II—HHS concurred with our recommendations. With regard to our first recommendation, HHS commented that it monitors and reviews account transfers through standard weekly reporting and that, if there are concerns with the frequency of transfers, it resolves any issues with the states. However, knowing the frequency of account transfers—that is, how often the state sends them electronically to HHS—may not provide enough information unless HHS also has information on the timeliness of states' transfers—that is, the amount of time it takes the state to transfer an individual's account after determining that the individual is no longer eligible for Medicaid. Thus, HHS's weekly reporting process has the potential to meet the intent of our recommendation if it monitors not only the frequency of transfers but also their timeliness.

With regard to our other recommendations, HHS stated that its first check for duplicate coverage was under way in August 2015 and that it will analyze the rate of duplicate coverage identified, and gather input from states on the level of effort needed to conduct the check, in order to establish the frequency of checks going forward. HHS also stated that it will monitor the rate of duplicate coverage identified in periodic checks.
Finally, HHS stated that it is working to implement additional internal controls to reduce duplicate coverage, including automatically ending subsidized exchange coverage for individuals who are found to have been determined eligible for Medicaid or CHIP and who have not ended their exchange coverage themselves. HHS also provided technical comments, which we incorporated as appropriate. IRS had no comments on the draft report.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staffs have any questions about this report, please contact Carolyn L. Yocom at (202) 512-7114 or [email protected] or John E. Dicken at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

In addition to the contacts named above, Susan Barnidge, Assistant Director; Priyanka Sethi Bansal; Keith Haddock; Laurie Pachter; Vikki Porter; Rachel Svoboda; and Emily Wilson made key contributions to this report.
Due to changes in income and other factors, it is likely that under the Patient Protection and Affordable Care Act (PPACA) many low-income individuals will transition between Medicaid and subsidized exchange coverage. Federal regulations require that state Medicaid agencies and exchanges coordinate to facilitate these transitions, including transferring individuals' accounts to the appropriate form of coverage when eligibility changes occur. However, given the complexity of coordinating policies and procedures for both coverage types, challenges could arise during the transition process, resulting in individuals experiencing coverage gaps or duplicate coverage.

GAO was asked to review information related to transitions between Medicaid and exchange coverage. In this report, among other objectives, GAO examines the extent to which the federal government has policies and procedures that minimize the potential for coverage gaps and duplicate coverage. GAO reviewed relevant federal regulations, guidance, FFE documentation, and federal internal control standards, and interviewed CMS officials. GAO also collected information from eight states selected, in part, to include four FFE states.

CMS's policies and procedures do not sufficiently minimize the potential for coverage gaps and duplicate coverage in federal exchange states. GAO found that individuals transitioning from Medicaid to exchange coverage—that is, private health insurance purchased through the exchanges created under PPACA—may experience coverage gaps, for example, if they lose Medicaid eligibility toward the end of a month. Individuals who experience coverage gaps may decide to forgo necessary care. In addition, GAO found that some individuals had duplicate coverage, that is, were enrolled in Medicaid while also receiving federal subsidies for exchange coverage. While some amount of duplicate coverage may be expected during the transition from exchange to Medicaid coverage and is permissible under federal law, GAO found that duplicate coverage was also occurring under other scenarios. Individuals may be held liable for repaying certain exchange subsidies received during the period of duplicate coverage. Further, the federal government could be paying twice, subsidizing exchange coverage and reimbursing states for Medicaid spending for those enrolled in both.

While the Centers for Medicare & Medicaid Services (CMS), the agency within the Department of Health and Human Services (HHS) that operates a federally facilitated exchange (FFE) in 34 states, has implemented policies and procedures that help minimize the potential for coverage gaps and duplicate coverage, GAO identified weaknesses in CMS's controls for FFE states based on federal internal control standards. Specifically:

GAO found that CMS's controls do not provide reasonable assurance that accounts—that is, records—for individuals transitioning from Medicaid to exchange coverage in FFE states are transferred in near real time. CMS regulations require that such transfers occur promptly to facilitate eligibility determinations and enrollment. However, as of July 2015, CMS was not monitoring the timeliness of transfers. CMS officials told GAO that transfers are not happening in real time, but their understanding was that states typically send transfers at least daily.
Officials from three of the four selected FFE states reported that account transfers were occurring at least daily; officials from the remaining state reported that transfers were sent to CMS three times per week. To the extent that transfers do not happen in a timely fashion, individuals may be more likely to have gaps in coverage.

GAO found weaknesses in CMS's controls for preventing, detecting, and resolving duplicate coverage in FFE states. For example, as of July 2015, CMS did not have procedures to detect cases of duplicate coverage. According to CMS officials, CMS planned to implement periodic checks for duplicate coverage beginning later that month. However, CMS had not yet determined the frequency of the checks, which will be key to their effectiveness. In addition, CMS had no specific plan for monitoring the effectiveness of the checks and other planned procedures, making it difficult for the agency to provide reasonable assurance that its procedures are sufficient or to determine whether additional steps are needed to protect the federal government and individuals from duplicative and unnecessary expenditures.

GAO recommends that CMS take three actions, including routinely monitoring the timeliness of account transfers from states, establishing a schedule for regular checks for duplicate coverage, and developing a plan to monitor the effectiveness of the checks. HHS concurred with GAO's recommendations.
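To make the mechanics of such a periodic check concrete, the following is a minimal sketch, not CMS's actual procedure: the person identifiers and enrollment spans are hypothetical, and the 31-day allowance stands in for whatever overlap is permissible during transitions. The sketch flags individuals whose Medicaid and subsidized exchange enrollment overlap beyond that allowance.

    from datetime import date

    # Hypothetical enrollment spans (start, end) keyed by a shared person
    # identifier. A real check would have to match on identity data and
    # handle open-ended spans.
    medicaid = {
        "person-1": (date(2015, 1, 1), date(2015, 6, 30)),
        "person-2": (date(2015, 3, 1), date(2015, 12, 31)),
    }
    exchange_subsidized = {
        "person-1": (date(2015, 7, 1), date(2015, 12, 31)),
        "person-2": (date(2015, 1, 1), date(2015, 12, 31)),
    }

    GRACE_DAYS = 31  # assumed allowance for permissible overlap during a transition

    def overlap_days(a, b):
        """Days (inclusive) that two (start, end) spans overlap; 0 if disjoint."""
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return max(0, (end - start).days + 1)

    for person, m_span in medicaid.items():
        e_span = exchange_subsidized.get(person)
        if e_span and overlap_days(m_span, e_span) > GRACE_DAYS:
            print(f"{person}: {overlap_days(m_span, e_span)} days of duplicate coverage")

How often a check of this kind runs determines how long duplicate coverage, and the associated duplicate federal spending, can persist before it is caught, which is why the frequency of the checks is key to their effectiveness.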
NARA's mission is to ensure "ready access to essential evidence" for the public, the President, Congress, and the courts. NARA is to make the permanently valuable records of the government—in all media—available for reference and research. In addition to the best known documents, such as the Declaration of Independence, the Constitution, and the Bill of Rights, NARA preserves billions of pages of textual documents and numerous maps, photographs, videos, and computer records. Each citizen has a right to access the official records of the government. During fiscal year 1998, 2.6 million people (including genealogists, historians, librarians, and veterans) visited NARA's facilities to browse and do research, and NARA received over 56 million "hits" on its Web site from scholars, students, and other inquirers.

Under the Federal Records Act, federal records include "all books, papers, maps, photographs, machine readable materials, or other documentary materials, regardless of physical form or characteristics, made or received by an agency…under federal law or in connection with the transaction of public business and preserved or appropriate for preservation…as evidence of the organization, functions, policies, decisions, procedures, operations, or other activities of the Government or because of the informational value of data in them."

NARA and agency staff work together to identify and inventory an agency's records to appraise the value of the records and determine how long they should be kept and under what conditions. The formal approval of this work is called scheduling. Agency records must be scheduled through either records schedules specific to each agency or a general records schedule (GRS), which is issued by the Archivist and authorizes disposal, after a specified period of time, of records of a specified form or character common to several or all federal agencies. Records of permanent value (such as final budget submissions and calendars of senior staff) must be preserved and eventually transferred to NARA for archival and research purposes. Other records deemed of insufficient value to warrant their preservation, such as payroll or travel records, are considered temporary records and must be preserved by the agency for only a specified length of time.

In addition to the Federal Records Act, several other laws, such as the Paperwork Reduction Act, Privacy Act, Freedom of Information Act, Electronic Freedom of Information Act, and Government Paperwork Elimination Act, also address records management requirements for both paper and electronic records. Also, the General Services Administration, the Office of Management and Budget, and individual agencies issue records management regulations.

NARA and federal agencies are confronted with many electronic records management (ERM) challenges, particularly technological issues. NARA must be able to receive electronic records from agencies, store them, and retrieve them when needed. Agencies must be able to create electronic records, store them, properly dispose of them when appropriate, and send permanently valuable electronic records to NARA for archival storage. All of this must be done within the context of the rapidly changing technological environment. As stated in NARA's 10-year strategic plan covering the period of 1997 to 2007, NARA's goals are to determine how to (1) preserve electronic records that the nation will need, (2) manage change, (3) stay abreast of technologies in federal agencies, and (4) use technologies to safeguard valuable information and make it more readily accessible.
NARA's plan also points out that it must meet the public's need for "on-line" access to information and work in partnership with other entities that are struggling with the same problems. According to our research and discussions with NARA officials and other records management professionals, NARA is faced with a number of challenges to ensure that federal records in electronic format are preserved.

NARA officials told us that NARA needs to expand its capacity to accept the increasing volume of electronic records from the agencies. Over the past quarter century, NARA has taken in approximately 90,000 electronic data files. NARA has estimated that federal agencies, such as the Department of State and the Department of the Treasury, are individually generating 10 times that many electronic records annually just in E-mail, many of which may need to be preserved by NARA. One of the items in NARA's fiscal year 2000 budget request would allow NARA to begin development of a system to save large volumes of E-mail messages and other small data files that agencies are increasingly creating. Some of the initial research and development for this system is being done in fiscal year 1999.

In addition to the increasing volume, the increasing variety of electronic records (e.g., word processing documents, E-mail messages, databases, digital images, and Web site pages) complicates NARA's mission to preserve these records. NARA must address some definitional problems, such as what constitutes an electronic record, when an E-mail message is a record, and when Web site "virtual records" are considered records. Also, electronic records are generated as files that require compatible hardware playback devices and the correct software for retrieving, viewing, and transmitting. Because agencies follow no uniform hardware standards, NARA must be capable of accepting various formats (hardware and software) from the agencies and maintaining a continued capability to read those records.

The long-term preservation and retention of those electronic records is also a challenge, since providing continued access to archived records over many generations of systems is difficult. The average life of a typical software product is 2 to 5 years. There are currently only three alternatives for maintaining accessibility over time: (1) maintain records in a software-independent format, (2) reformat and migrate records to new software systems, or (3) maintain the records in the original format along with the hardware and software needed to make them accessible. Another concern is the deterioration of storage media over time, and NARA must consider the permanency of the formats used by agencies (such as floppy disks and CD-ROMs). These and other media that are used now and that are being developed must remain readable over a long period of time, or the records must be migrated to different media.

Finally, another challenge is NARA's ability to offer guidance to the agencies regarding the orderly management of electronic records, especially relating to the authenticity and reliability of electronic records that eventually will be transferred into NARA's custody. Current electronic security measures, encryption, and authentication techniques could increase the reliability and authenticity of electronic records; however, there has been little analysis of the risk or the costs and benefits of implementing those measures for agencywide systems.
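The first of the three preservation alternatives listed above, keeping records in a software-independent format, can be combined with the authentication techniques just mentioned. The following is a minimal sketch; the JSON serialization, the metadata fields, and the use of a SHA-256 digest are illustrative assumptions, not NARA requirements.

    import hashlib
    import json

    def preserve(record_id, text, metadata):
        """Serialize a record to a software-independent form (JSON over plain
        text) and attach a digest that supports later authenticity checks."""
        payload = {"record_id": record_id, "metadata": metadata, "text": text}
        serialized = json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
        return json.dumps({"payload": payload, "sha256": digest}, sort_keys=True)

    def verify(archived):
        """Recompute the digest to detect alteration or accidental corruption."""
        wrapper = json.loads(archived)
        serialized = json.dumps(wrapper["payload"], sort_keys=True)
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest() == wrapper["sha256"]

    archived = preserve("1999-0001", "Final budget submission ...",
                        {"agency": "Example Agency", "year": 1999})
    assert verify(archived)

A plain-text serialization of this kind does not depend on any vendor's software to read, and the digest allows an archivist to demonstrate that a record has not been altered since accessioning, though it does not by itself solve the media-deterioration or migration problems described above.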
It is important to note that a properly maintained electronic recordkeeping system provides more security and accountability than a comparable paper-based system because the electronic system can record details on access, revision, and deletion.

Records management is initially the responsibility of the agency staff member who creates a record, whether the record is paper or electronic. Preservation of and access to that record then also become the responsibility of agency managers and agency records officers. Electronic records are now frequently created on a personal computer, yet electronic recordkeeping responsibilities are often overlooked by the staff member who creates the record. The staff member should be made aware of what constitutes an electronic record, how to save it, and how to archive it for future use. Decentralized control over electronic documents is changing the face of records management because records can easily be deleted without records managers even being aware that the record existed. The agencies are challenged with informing employees what is required of them and how to accomplish their records management responsibilities.

Agencies receive guidance from NARA, but they must put their own recordkeeping systems in place. Some agencies continue to experience confusion over what constitutes an electronic record and who has responsibility for preserving the record. Questions also arise regarding how to handle multiple copies or versions of documents and whether drafts are official records. Agencies' employees send and receive huge volumes of E-mail in performing their official duties and responsibilities. Agencies must determine which of these E-mail messages are records. When E-mail messages are determined to be official records, agencies must assign records management responsibility, control multiple versions, and archive the messages. Also, because much internal business deliberation is conducted via E-mail, these messages must be reviewed for privacy reasons before being released to the public.

Agencies' ERM efforts are competing for attention and resources with other information technology priorities, particularly in those agencies dealing with the Year 2000 problem. NARA officials believe that ERM activities may be slowed by agencies' concentration on other priorities, such as system upgrades and Year 2000 compliance. Regarding Year 2000 compliance, the old technology that created some electronic records might not be Year 2000 compliant, and this noncompliance could cause future retrieval difficulties for the agencies and NARA.

On the basis of our discussions with NARA officials and officials of the four previously mentioned selected agencies and discussions at governmentwide conferences on the subject of records management, we learned that agencies vary in their records management programs and in their capabilities to implement ERM. Some agencies are waiting for more specific guidance from NARA, while others are moving forward by looking for ways to better manage their electronic records. However, there has been no recent governmentwide survey of agencies' compliance with the archival provisions of the Federal Records Act or agencies' ERM activities. NARA is planning a business process reengineering (BPR) effort that will collect limited information from some agencies but will not include a complete governmentwide baseline survey. In the interim, NARA has begun to revise its ERM guidance to provide some immediate guidance and direction for the agencies.
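As noted at the start of this discussion, an electronic recordkeeping system can record details on access, revision, and deletion, and records schedules determine how long records must be kept. A minimal sketch of those two functions follows; the identifiers, categories, retention periods, and events are hypothetical and are not taken from any actual general records schedule.

    import uuid
    from datetime import date, timedelta

    # Assumed retention periods by category; actual periods come from records
    # schedules approved by NARA, not from this sketch.
    RETENTION = {
        "temporary": timedelta(days=3 * 365),
        "permanent": None,  # permanent records are eventually transferred to NARA
    }

    class Record:
        """A record with a unique computer-generated identifier and an audit trail."""

        def __init__(self, title, category, created):
            self.record_id = str(uuid.uuid4())  # unique, computer-generated code
            self.title, self.category, self.created = title, category, created
            self.audit_trail = [("created", created)]

        def log(self, action, when):
            """Record details of access, revision, or deletion."""
            self.audit_trail.append((action, when))

        def disposition(self, today):
            """Report whether the record is eligible for destruction or transfer."""
            period = RETENTION[self.category]
            if period is None:
                return "permanent: transfer to the archives when no longer needed"
            if today >= self.created + period:
                return "eligible for destruction, pending approval"
            return "retain"

    r = Record("Travel voucher", "temporary", date(1996, 1, 15))
    r.log("accessed", date(1998, 6, 1))
    print(r.record_id, r.disposition(date(1999, 6, 1)))

The DOD standard discussed next imposes measurable requirements in this same spirit, including unique computer-generated record identifiers and notification when records become eligible for destruction or transfer; the sketch above is an illustration of those concepts, not an implementation of that standard.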
Our discussions with NARA officials and officials from the four judgmentally selected agencies indicated that agencies vary in how they are implementing their ERM programs. NARA officials directed us to the Department of Defense (DOD) as one of the agencies that is most advanced in its ERM efforts. NARA has been working with DOD for several years to develop DOD's ERM software standard, which is intended to help DOD employees determine what constitutes a record and how to preserve records properly. NARA endorsed the DOD standard in November 1998 as a tool that other agencies could use as a model until a final policy is issued by NARA. The endorsement does not mandate that agencies use the DOD standard; instead, NARA said that the standard conforms to the requirements of the Federal Records Act and establishes baseline requirements for managing electronic records. NARA also said that while the DOD standard is an appropriate basis for records management, there might be other equally valid ways to address ERM. The DOD standard is intended as a starting point that must be tailored to a specific agency's needs. NARA said that each agency must still address ERM within the context of its own computer and policy environments.

The DOD standard is a tool that is intended to help agencies develop automated systems to file, track, and preserve or destroy their electronic records. DOD's standard has a series of requirements that are measurable and testable and based on various laws and NARA regulations. The standard, which is mandatory for all DOD components, provides implementation and procedural guidance on the management of records in DOD. ERM information systems that were in place before the approval of this standard must comply with the standard by November 1999. The DOD standard (1) sets forth baseline functional requirements for records management application software that is used by DOD components in the implementation of their records management programs; (2) defines required system interfaces and search criteria to be supported by records management application software; and (3) describes the minimum records management requirements that must be met, based on current NARA regulations.

The DOD standard also requires that records management software perform several functions, including the following:

- Assign each record a unique, computer-generated code that identifies the document.
- Treat filed E-mail messages, including attachments, as records.
- Allow records to be searched, screened, and viewed on the basis of record profiles.
- Identify records that can be sent to NARA for storage.
- Notify users when a document is eligible for destruction or transfer, and destroy or transfer it after approval.

As of June 2, 1999, nine companies had records management application products that were certified by DOD as meeting its standard. Some products are ERM software, while others integrate their document management or workflow products with ERM software from another vendor. Two agencies that are testing ERM software that meets the DOD standard are the National Aeronautics and Space Administration (NASA) and the Department of the Treasury's Office of Thrift Supervision (OTS). NASA did a limited test of an early version of one ERM product and found it difficult to use and time-consuming to install. The software did not perform well with NASA's varying hardware and software platforms.
According to an agency official, the NASA test did, however, give NASA a better understanding of its requirements, including its records management program in particular. NASA plans to evaluate a newer version of this software later in fiscal year 1999. OTS is testing ERM software that differs from the product NASA used. OTS' test is meant primarily to organize its electronic files. According to the manager of OTS' records branch, it is important that ERM software require users to make no more than two or three extra keystrokes, and that users realize there is a benefit to this additional "burden."

Even though NARA is aware of the efforts of DOD, NASA, OTS, and various other agencies, it does not now have governmentwide data on the records management capabilities and programs of all federal agencies. NARA had planned to do a baseline assessment survey to collect such data on all agencies by the end of fiscal year 2000. According to NARA officials, this survey was needed to identify best practices at agencies and collect data on (1) program management and records management infrastructure, (2) guidance and training, (3) scheduling and implementation, and (4) electronic recordkeeping. NARA had planned to determine how well agencies were complying with requirements for retention, maintenance, disposal, retrieval/accessibility, and inventorying of electronic records. Early results from the pilot test of the survey at a limited number of agencies showed that most of the pilot agencies lacked adequate employee guidance regarding electronic records.

The Archivist has decided to put the baseline survey on hold primarily because of what he believes are other higher priority activities, such as NARA's BPR effort, which could change NARA's regulations and thereby affect the data that NARA would need to acquire from the agencies. NARA's BPR effort to address its internal processes, as well as guidance to and interactions with the agencies, is expected to begin before the end of fiscal year 1999. This BPR effort should take 18 to 24 months. However, NARA will now proceed without the rich baseline of information from across the federal government that was originally planned. NARA officials could not give us a time frame for when the survey effort would be reinitiated.

In the interim, according to a NARA Policy and Communications official, NARA will continue to gather additional information about the status of records management through a targeted assistance program, which will focus on helping agencies that have the most urgent records management needs. This effort, by definition, will not provide a baseline across all agencies. Currently, NARA does in-depth studies of two to four agencies a year in which it looks at the agencies' records management policies and then recommends areas for improvement. Since some individual agencies have not been reviewed for several years, this method of collecting information on agencies has not yielded a current governmentwide look at the situation. Thus, this effort does not achieve NARA's strategic planning goal to "stay abreast of technologies in the agencies."

Historically, NARA's ERM guidance has been geared toward mainframes and databases, not personal computers. In addition to NARA's planned BPR, NARA is taking some immediate action to revise its guidance to be more appropriate in today's workplace environment.
NARA's electronic records guidance to agencies is found in the Code of Federal Regulations, which establishes the basic requirements for creation, maintenance, use, and disposition of electronic records. In 1972, before the widespread use of personal computers in the government workplace, NARA issued GRS 20 to provide guidance on the preservation of electronic records. However, agency records officers, data processing staff, and even NARA staff had trouble understanding and applying the first version of GRS 20. Subsequently, GRS 20 underwent several major revisions, culminating in the 1995 revision, which authorized, among other things, the deletion of electronic records created in an office automation environment or in computer centers once the records had been placed in a recordkeeping system—electronic, paper, or microfilm.

NARA's ERM guidance under the 1995 version of GRS 20 was challenged in a December 1996 lawsuit filed in the United States District Court for the District of Columbia by a public interest group. In an October 1997 decision, the court found that the Archivist exceeded the scope of his statutory authority in promulgating GRS 20. First, the court stated that GRS 20 did not differentiate between program records, which are possibly subject to preservation, and administrative "housekeeping" records, which the court found were the only records allowed to be disposed of through GRSs. Second, the court found that electronic records did not lose their status as program records once they were preserved on paper; they are considered to be unique records and distinct from printed versions of the same record. The court also held that by categorically determining that no electronic records have value, the Archivist failed to carry out his statutory duty to evaluate the value of records for disposal. Moreover, the court determined that GRS 20 violated the Records Disposal Act because it failed to specify a period of time for retention of records that are to be disposed of through a GRS. The court thus declared GRS 20 "null and void." The government filed an appeal of this ruling in December 1997.

In March 1998, NARA issued a bulletin informing agencies that NARA had established a working group with a specific time frame to propose alternatives to GRS 20. The same public interest group that initially challenged GRS 20 went back to court when it realized that the Archivist was informing agencies that they could continue to rely on GRS 20 even after the court had ruled it "null and void." The court, in a subsequent ruling, found that the Archivist had "flagrantly violated" the court's October 1997 order and ordered, among other things, the NARA working group to have an implementation plan to the Archivist by September 30, 1998.

In September 1998, on the basis of recommendations made by the NARA-sponsored electronic records working group, the Archivist decided to take several steps. Specifically, the Archivist agreed to (1) issue a NARA bulletin to give guidance to agencies on how to schedule the retention of program and unique administrative records in all formats; (2) modify other GRSs to authorize the deletion of electronic source records for administrative records after a recordkeeping copy has been produced; (3) publish guidance on a new GRS for information technology records in the Federal Register by March 15, 1999; and (4) form a follow-on group by January 1999 to continue work on electronic recordkeeping guidance issues.
On September 29, 1998, after the Archivist notified the court of NARA's intended actions, the court ordered that the Archivist was authorized to state that agencies could continue to follow current disposition practices for electronic records until they receive other disposition schedule approval from NARA, notification by NARA that the government's appeal has been resolved and NARA has provided further guidance as a result of the appellate court's decision, or further order of the court.

In response to the court's ruling, as of May 1999, NARA had taken the following actions:

- Issued NARA Bulletin 99-04 on March 25, 1999, to guide agencies on scheduling how long to keep electronic records of their program activities and certain administrative functions formerly covered under GRS 20. Agencies have until February 1, 2000, to submit to NARA either new records schedules for their electronic copies or a detailed plan for scheduling the records. Agencies that submit a plan must commit to scheduling their electronic copies within 2 years, unless NARA approves a different time frame. NARA is also offering no-cost training to agency records officers to assist in developing schedules or plans.
- Issued a revision in the general records schedules on December 21, 1998, to authorize agencies' disposal of certain administrative records (such as personnel, travel, and procurement) regardless of physical format, after creating an official recordkeeping copy.
- Drafted a new general records schedule for certain administrative records to document the management of information technology. NARA has received comments from agencies on this draft, has made revisions, and will send the draft out for agencies' comments again. NARA plans to incorporate the agencies' comments and send the draft to OMB for comment.
- Initiated a follow-on study group in January 1999—Fast Track Guidance Development Project (FastTrack)—intended to answer the immediate questions of agencies about ERM. FastTrack is intended to answer agencies' questions that can be resolved relatively quickly without major research. FastTrack staff consists of NARA staff, agency officials, and consultants. NARA's plan is to disseminate information to agencies over its Web site and include best practices and frequently asked questions.

Our review of the ERM activities in four states and three foreign governments showed that approaches to ERM differ. These entities often did things differently from each other and/or NARA. Some state governments are making decisions regarding the same ERM challenges that face NARA and federal agencies, while some are waiting to see what works for other governments.

Our interviews with officials from four states (Florida, Oklahoma, Oregon, and Texas) revealed that these states approach some issues differently than the federal government or each other. In general, the four state archiving agencies provide centralized policies and procedures that are described in either state law or administrative rules. State archiving agencies that take physical custody of the actual records do so when the records are no longer needed by the individual agencies but are of archival value. In these cases, the states do not have the capability to maintain the records in electronic format but require nonelectronic copies. The four states have relied on record-tracking systems, which allow them to determine where specific records are located.
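A record-tracking system of the kind these states rely on can be sketched, at its simplest, as a searchable index from descriptive information to a holding location. The series, agencies, and locations below are invented for illustration; a production system would sit on a database and expose a search interface.

    # A minimal locator: an index of record series searchable by keyword.
    # All entries are invented.
    holdings = [
        {"series": "Governor's correspondence, 1975-1980",
         "agency": "Office of the Governor",
         "location": "State Archives, Box 112"},
        {"series": "Highway construction contracts",
         "agency": "Department of Transportation",
         "location": "Agency records center"},
        {"series": "Land grant records",
         "agency": "General Land Office",
         "location": "Online database"},
    ]

    def locate(keyword):
        """Return the holdings whose series or agency mentions the keyword."""
        keyword = keyword.lower()
        return [h for h in holdings
                if keyword in h["series"].lower() or keyword in h["agency"].lower()]

    for hit in locate("land"):
        print(f"{hit['series']} ({hit['agency']}): {hit['location']}")

Whether the index itself is available to the public, as with the Internet-accessible locators described next, or only to archivists changes the interface, not the underlying structure.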
Two of the four states that we selected emphasized the use of the Internet as a mechanism that allows both the archivist and the general public to determine where records may be found. While the state officials indicated that state law and the administrative rules that they issue guide their records management requirements, they also interact with NARA and other states to assist in determining their states' policies. The state archiving officials we interviewed were all aware, to varying degrees, of the recent federal actions and activities dealing with the archiving of electronic records. However, some of the states are moving forward independently and have been doing so for several years.

For example, according to state officials, during the past 10 years, Texas has continually revised its records management manual, records management statutes, and administrative rules. Further, according to state officials, Texas continues to study ways of providing better support to agencies' records management programs. In November 1998, the Texas Electronic Records Research Committee completed a legislatively directed report that made several recommendations to help agencies manage their electronic records as required and make state agency documents in electronic formats readily available to the public. The committee's recommendations include guidelines to enable better coordination among records management, archives, and information systems staff within agencies. The recommendations to the Texas State Library and Archives Commission and the Department of Information Resources included:

- establishing administrative procedures and training to ensure that all staff work together to identify and manage electronic records to meet retention and archival preservation requirements,
- making library and archives standards applicable to all state records maintained in electronic format,
- seeking a legislative change in the Local Government Records Act so that the rules for managing electronic records can be amended to make these standards applicable to all local government records maintained in electronic format,
- jointly establishing and publishing guidelines for using standard functional requirements for electronic recordkeeping systems,
- studying the issues of retaining electronic records of enduring value for historical and research purposes to identify available options and associated costs with the intent of proposing legislative action,
- developing cost models for providing information to the public on-line, and
- working with the Office of the Attorney General to jointly establish rules and guidelines for providing and managing access to publicly available government information without compromising the privacy of citizens.

Similarly, according to state officials, Florida's current records management policy is based primarily on 10 to 15 years of legislatively directed studies and reports on information management as well as experience gained through Florida's archive and historical records program, which it has operated since 1967. In September 1998, a consultant's report on access to state government electronic information of long-term or archival value recommended that, among other things, the Florida State Archives take custody of electronic records when an agency is defunct and has no successor agency or when the records of an ongoing agency have archival value.
The report also recommended that the Florida State Archives (1) serve as a "locator" for information about archived electronic records; (2) review the agencies' annual reports on information systems; (3) assist in detailed reviews of the records policies and procedures of individual agencies; and (4) contract with an outside party to maintain the electronic records, including storing, providing access, and regularly migrating data to meet preservation requirements.

From our interviews with officials in the four states and review of documentation, we learned that some states have arrived at decisions on how to address ERM issues. For instance:

Policies and procedures. According to state officials, the state archives agencies in the four states we surveyed generally provide centralized policies and procedures, described in either state legislation or administrative rules, that are the catalysts for policy development. Other considerations mentioned by the state archives officials were federal laws, recommendations made by internal and state auditors, observations of other states and the federal government, and private business practices.

Guidance. The records management regulations in Texas, Florida, and Oregon provide specific guidance to state agencies. For example, Texas provides guidance on (1) standardized definitions for terms related to managing electronic records; (2) minimum requirements for the maintenance, use, retention, storage, and destruction of electronic records; (3) records management program administration that incorporates ERM objectives into agency directives, ensures that training is provided, ensures the development and maintenance of up-to-date electronic systems documentation, and specifies the location and media on which electronic records are maintained; (4) security of electronic records; and (5) public access to electronic records. Although some differences exist in content or approach, state code or administrative rules for Florida and Oregon provide equally detailed, and often closely parallel, guidance to state agencies for managing electronic documents.

Electronic records retention. State agencies in Texas, Florida, and Oklahoma retain archiving responsibility and custody of electronic records. When paper or microfilm records are no longer needed at the agency level, those of archival value are transferred to a central storage facility. In Texas, ERM and archiving system design are functions that are decentralized to state agencies, while Florida establishes minimum electronic recordkeeping requirements for all state agency records management and archiving systems. Texas has implemented an automated inventory tracking system to facilitate access to nonelectronic records maintained by the archives. Florida is considering using a contractor to develop and maintain storage and access for electronic archival records, including migration and software requirements.

Development of a governmentwide information locator system. While Oregon and Oklahoma use what is basically a manual system to provide the public with access to archived records, Texas and Florida have developed Internet-accessible government information locator systems. The Texas Record and Information Locator Service is an on-line resource for accessing government information statewide—the next version will identify, describe, and locate individual state government information resources as well as print publications, individual documents, and databases available to the public on the Internet.
The Florida Government Information Location System provides public Internet access to the location of electronic and nonelectronic public records.

Training. All four states sponsor organized records management training programs or workshops for state employees.

Enforcement of records management requirements. Authority to enforce mandated records management requirements varies among the states. For example, according to state officials, Oregon can impound records in danger of being lost, and citizens of Florida can request a state attorney investigation when they think that records may have been prematurely destroyed. Florida is also currently considering a requirement for formal statements of compliance from all state agencies. The Texas State Code establishes requirements for state agencies to transfer archival records to the State Archives or preserve them within the agency.

The National Historical Publications and Records Commission (NHPRC), the grantmaking affiliate of NARA, provides funds to state and local archives, colleges and universities, libraries and historical societies, and other nonprofit organizations to help locate, preserve, and provide public access to documents and other historical materials. NHPRC has made several grants to states in recent years to assist them in their ERM efforts.

NARA is working with Australia, Canada, and the United Kingdom on common ERM challenges. Our review of public documents showed that, although these countries share common challenges, they each have taken somewhat different approaches to making ERM decisions. The Australian, Canadian, and United Kingdom governments differ from each other, as well as from NARA, in how they archive national records. For example, Australia has strong central authority and decentralized custody. Due to this decentralized custody, Australia must rely on a government-maintained information locator system to determine where the records are located. Since agencies within the governments can have various software systems, decentralized custody places the responsibility on the agencies, not the national archives, to ensure that records are retrievable regardless of any changes in hardware or software technology requirements. Use of the Internet is being integrated into these governments' systems for search, retrieval, and requests for information.

Australia has somewhat detailed records retention guidance to which its agencies must adhere. Since it does not have direct custody of electronic records, the Australian central archiving agency has compliance audit authority to ensure that individual agencies follow records management and archiving policies and laws. Implementation of an automated records management software system is under way.

Canada's national archives takes a somewhat different approach. Canada established "vision statements," rather than specific policies, and the individual agencies maintain their own electronic records until they have no more operational need for them. At that point, records of archival value are transferred to the national archives. Also, Canada offers use of the Internet for searching, requesting, and retrieving pertinent records.

The United Kingdom established broad guidelines, which are put into practice by its individual agencies or departments in a partnership arrangement with its national archives. These guidelines address all types of records, including electronic records.
Currently, the Public Record Office has several study groups addressing management of electronic records and overall strategy for E-mail and office desktop systems. Case studies in five different departments are currently in progress to identify alternative practices for electronic recordkeeping.

NARA is also part of two ongoing international initiatives that are to study and make recommendations regarding ERM. The first effort—International Research on Preservation of Authentic Records in Electronic Systems (INTER PARES)—is made up of archivists from seven countries (United States, Canada, Ireland, Italy, Netherlands, Sweden, and United Kingdom) and six research teams (United States, Canada, Northern Europe, Italy, Australia, and the Collaborative Electronic Notebook Systems Association). INTER PARES first met in Washington, D.C., in June 1998. The second effort is made up of English-speaking countries (United States, United Kingdom, Australia, and Canada). This group first met in London, England, in July 1998.

NARA and federal agencies are being challenged to effectively and efficiently manage electronic records in an environment of rapidly changing technology and increasing volume of electronic records. On the basis of our discussions with officials from NARA and four judgmentally selected agencies, we determined that ERM programs vary greatly across agencies. NARA had planned to conduct a baseline survey intended to obtain governmentwide information on agencies' ERM programs, but NARA has now postponed the survey because it believes that it should first complete a BPR effort to improve guidance and assistance to agencies. Because the BPR effort would be more likely to result in changes that are practical and functional for the agencies if it included an assessment of where the agencies stand today in terms of ERM, we believe the survey should not be postponed.

In order for NARA to have the best information to make decisions during its BPR effort and, thereby, improve ERM in the federal government, we recommend that the Archivist, National Archives and Records Administration, conduct a baseline assessment survey now and use the information as input into the BPR effort, rather than postpone the survey until after the effort is completed.

On June 7, 1999, we provided the Archivist with a draft of this report for comment. We received his comments in a letter dated June 22, 1999, which is reprinted in appendix II. In commenting on our draft report, the Archivist said that we have ably outlined significant electronic records challenges faced by NARA and federal agencies. The Archivist also commented, however, that he did not concur with our recommendation to conduct a baseline assessment survey now and use the information as input into the BPR effort. The Archivist stated that the survey has been put "on hold only temporarily," and that he is "committed to conducting it in a timely fashion, and in a way that provides the greatest benefit to NARA and the agencies in improving Federal records management programs." While there is general agreement that the baseline survey is needed and should be done, we disagree with the Archivist over the timing of the survey. During our review, we looked for justification for conducting the survey before, during, or after the BPR effort.
Conducting the baseline survey now could provide valuable information for the BPR effort, while also accomplishing the survey's intended purpose of providing baseline data on where agencies are with regard to records management programs. Because agencies vary in their implementation of ERM programs, the baseline survey would provide much richer data than the limited information collection effort outlined by the Archivist in his response letter and would fulfill an agency strategic goal. NARA would also be in a better position in later years to assess the impacts of its BPR effort, as well as to assess progress toward achieving its long-range performance targets as outlined in the Archivist's letter.

Finally, we are concerned about how long it may take to complete the baseline survey if it is put on hold until after the BPR effort. Given that this effort is expected to take 18 to 24 months after it is started and the baseline survey is expected to take about 2 years, the baseline of governmentwide records management programs may not be established until perhaps sometime in calendar year 2003. There is also the possibility that the baseline survey would be further delayed while the BPR initiatives have a chance to gain a foothold throughout the government. For these reasons, we continue to believe that the baseline survey should be done now, as the BPR effort gets under way.

We are sending copies of this report to the Honorable Joseph Lieberman, Ranking Minority Member of this Committee, and the Honorable John W. Carlin, Archivist of the National Archives and Records Administration. We will make copies available to others upon request. Major contributors to this report are acknowledged in appendix III. If you have any questions, please call me at (202) 512-8676.

To obtain information on the challenges that confront the National Archives and Records Administration (NARA) and federal agencies as a result of their increased reliance on electronic media, we interviewed NARA and agency officials from four judgmentally selected agencies—the Environmental Protection Agency (EPA), the General Services Administration (GSA), the National Aeronautics and Space Administration (NASA), and the Department of the Treasury's Office of Thrift Supervision (OTS). We also (1) interviewed other electronic records management (ERM) professionals from educational institutions and records managers' organizations and (2) reviewed documents and papers written on the subject by these professionals and others. We also attended ERM seminars, conferences, and meetings where NARA and many agencies were represented, and these challenges were discussed.

To obtain information on the status of agencies' and NARA's implementation of ERM, we made limited contacts at the previously mentioned agencies to obtain information on their policies and procedures. We interviewed records management officials at these agencies and reviewed pertinent documentation. We selected EPA because it has an active, progressive records management program; we selected GSA because it has oversight records management responsibilities in addition to operating its own records management program. We chose NASA and OTS because they are piloting ERM software to help them manage electronic records. We also obtained and reviewed the Department of Defense's (DOD) ERM software standard. In addition, we interviewed NARA staff and reviewed NARA's guidance and oversight responsibilities.
We also interviewed an official of the Office of Management and Budget (OMB) to determine how OMB assists NARA in providing guidance to agencies.

To obtain information on ERM policies and procedures of some other governments (state and foreign), we judgmentally selected three states (Florida, Oregon, and Texas) on the basis of recommendations from records management professionals who said that these states are considered leaders in ERM. We also contacted another state near our Dallas Field Office that was not mentioned by these professionals (Oklahoma). At the four states, we interviewed officials and reviewed documentation of their policies and procedures. (See footnote 9 in this report.) In addition, we obtained policies, procedures, and other public documentation from the Internet Web sites of three judgmentally selected foreign countries (Australia, Canada, and the United Kingdom) that records management professionals identified as being advanced in ERM. These three countries also work with NARA on various ERM initiatives.

Carol M. Hillier
Pursuant to a congressional request, GAO provided information on the preservation of electronic records, focusing on: (1) the challenges that confront the National Archives and Records Administration (NARA) and federal agencies as a result of their increased reliance on electronic media; (2) the status of selected agencies' and NARA's implementation of electronic records management (ERM); and (3) the ERM policies and procedures of selected other governments (state and foreign).

GAO noted that: (1) NARA and federal agencies are faced with the substantial challenge of preserving electronic records in an era of rapidly changing technology; (2) in addition to handling the burgeoning volume of electronic records, NARA and the agencies must address several hardware and software issues to ensure that electronic records are properly created, permanently maintained, secured, and retrievable in the future; (3) also, NARA's and the agencies' ERM efforts are competing with other information technology priorities, particularly the need to ensure that their computers are year 2000 compliant; (4) NARA is responsible for providing guidance and assistance to agencies on how to maintain their official government records and for archiving those records once they are transferred to NARA; (5) the agencies are responsible for ensuring that records are created and preserved in accordance with the Federal Records Act; (6) no centralized source of information exists to document the extent to which agencies are fulfilling their ERM responsibilities under the act; (7) on the basis of GAO's discussions with officials from NARA and four judgmentally selected agencies, GAO found that plans and capabilities for ERM vary greatly across agencies; (8) NARA has recently postponed a planned baseline survey that was intended to obtain governmentwide information on agencies' ERM programs because NARA believes that it should first complete a business process reengineering (BPR) effort; (9) this BPR effort, which is intended to assess and potentially alter NARA's guidance to and interaction with agencies, is expected to take 18 to 24 months; (10) GAO believes that the baseline survey information is critical to ensuring that the BPR results are relevant to the ERM situations at agencies and the survey should not be postponed; (11) these baseline data are needed to meet one of NARA's stated strategic planning goals to stay abreast of technologies in the agencies; (12) even while planning its BPR effort, NARA is taking some immediate action to address the agencies' needs for ERM guidance and direction; (13) state and foreign governments are addressing similar ERM challenges; and (14) from GAO's limited judgmental sample of state and foreign governments, it is clear that these governments and the federal government often differ in: (a) the organization of their archival activities; (b) their philosophies on centralization versus decentralization of recordkeeping responsibilities; and (c) their computer hardware and software capabilities.
EPA administers and oversees grants primarily through the Office of Grants and Debarment, 10 program offices in headquarters, and program offices and grants management offices in EPA's 10 regional offices. Figure 1 shows EPA's key offices involved in grants activities for headquarters and the regions.

The management of EPA's grants program is a cooperative effort involving the Office of Administration and Resources Management's Office of Grants and Debarment, program offices in headquarters, and grants management and program offices in the regions. The Office of Grants and Debarment develops grant policy and guidance. It also carries out certain types of administrative and financial functions for the grants approved by the headquarters program offices, such as awarding grants and overseeing the financial management of these grants. On the programmatic side, headquarters program offices establish and implement national policies for their grant programs and set funding priorities. They are also responsible for the technical and programmatic oversight of their grants. In the regions, grants management offices carry out certain administrative and financial functions for the grants, such as awarding grants approved by the regional program offices, while the regional program staff provide technical and programmatic oversight of their grantees.

As of June 2003, 109 grants specialists in the Office of Grants and Debarment and the regional grants management offices were largely responsible for administrative and financial grant functions. Furthermore, 1,835 project officers were actively managing grants in headquarters and regional program offices. These project officers are responsible for the technical and programmatic management of grants. Unlike grants specialists, however, project officers generally have other primary responsibilities, such as using the scientific and technical expertise for which they were hired.

In fiscal year 2002, EPA took 8,070 grant actions totaling about $4.2 billion. These awards were made to six main categories of recipients, as shown in figure 2. EPA offers two types of grants—nondiscretionary and discretionary:

Nondiscretionary grants support water infrastructure projects, such as the drinking water and clean water state revolving fund programs, and continuing environmental programs, such as the Clean Air Program for monitoring and enforcing Clean Air Act regulations. For these grants, Congress directs awards to one or more classes of prospective recipients who meet specific eligibility criteria; the grants are often awarded on the basis of formulas prescribed by law or agency regulation. In fiscal year 2002, EPA awarded about $3.5 billion in nondiscretionary grants. EPA has awarded these grants primarily to states or other governmental entities.

Discretionary grants fund a variety of activities, such as environmental research and training. EPA has the discretion to independently determine the recipients and funding levels for grants. In fiscal year 2002, EPA awarded about $719 million in discretionary grants. EPA has awarded these grants primarily to nonprofit organizations, universities, and government entities.

The grant process has the following four phases:

Preaward. EPA reviews the application paperwork and makes an award decision.

Award. EPA prepares the grant documents and instructs the grantee on technical requirements, and the grantee signs an agreement to comply with all requirements.

Postaward.
After awarding the grant, EPA provides technical assistance, oversees the work, and provides payments to the grantee; the grantee completes the work, and the project ends. Closeout of the award. EPA ensures that all technical work and administrative requirements have been completed; EPA prepares closeout documents and notifies the grantee that the grant is completed. EPA’s grantees are subject to the same type of financial management oversight as the recipients of other federal assistance. Specifically, the Single Audit Act requires grantees to have an audit of their financial statements and federal awards, or a program-specific audit, if they spend $300,000 or more in federal awards in a fiscal year. (The Office of Management and Budget, as authorized by the act, increased this threshold to $500,000 in federal awards as of June 23, 2003.) Grantees submit these audits to a central clearinghouse operated by the Bureau of the Census, which then forwards the audit findings to the appropriate agency for any necessary action. However, the act does not cover all grants and all aspects of grants management and, therefore, agencies must take additional steps to ensure that federal funds are spent appropriately. In addition, EPA conducts in-depth reviews to analyze grantees’ compliance with grant regulations and specific grant requirements. Furthermore, to determine how well offices and regions oversee grantees, EPA conducts internal management reviews that address grants management. EPA’s Inspector General had previously identified EPA’s oversight of grantees, including grant closeouts, as a material weakness—an accounting and internal control system weakness that the EPA Administrator must report to the President and Congress. EPA’s fiscal year 1999 Federal Managers’ Financial Integrity Act report indicated that this oversight material weakness had been corrected, but the Inspector General testified that the weakness continued. In 2002, the Inspector General again recommended that EPA designate grants management as a material weakness. The Office of Management and Budget (OMB) also recommended in 2002 that EPA designate grants management as a material weakness. In its fiscal year 2002 Annual Report, EPA ultimately decided to maintain this issue as an agency-level weakness, which is a lower level of risk than a material weakness. EPA reached this decision because it believes its ongoing corrective action efforts will help to resolve outstanding grants management challenges. However, in adding EPA’s grants management to our list of EPA’s major management challenges in January 2003, we signaled our concern that EPA has not yet taken sufficient action to ensure that it can manage its grants effectively. We identified four key challenges that EPA continues to face in managing its grants. These challenges are (1) selecting the most qualified grant applicants, (2) effectively overseeing grantees, (3) measuring the results of grants, and (4) effectively managing grant staff and resources. In the past, EPA has taken a series of actions to address these challenges by, among other things, issuing policies on competition and oversight, conducting training for project officers and nonprofit organizations, and developing a new data system for grants management. However, these actions had mixed results because of the complexity of the problems, weaknesses in design and implementation, and insufficient management attention. EPA has not selected the most qualified applicants despite issuing a competition policy.
The Federal Grant and Cooperative Agreement Act of 1977 encourages agencies to use competition in awarding grants. To encourage competition, EPA issued a grants competition policy in 1995. However, EPA’s policy did not result in meaningful competition throughout the agency, according to EPA officials. Furthermore, EPA’s own internal management reviews and a 2001 Inspector General report found that EPA has not always encouraged competition. Finally, EPA has not always engaged in widespread solicitation of its grants, which would provide greater assurance that EPA receives proposals from a variety of eligible and highly qualified applicants who otherwise may not have known about grant opportunities. EPA has not always effectively overseen grant recipients despite past actions to improve oversight. To address oversight problems, EPA issued a series of policies starting in 1998. However, these oversight policies have had mixed results in addressing this challenge. For example, EPA’s efforts to improve oversight included in-depth reviews of grantees but did not include a statistical approach to identifying grantees for review, standard information collection from the reviews, or a plan for analyzing the results to identify and act on systemic grants management problems. EPA, therefore, could not be assured that it was identifying and resolving grantee problems and using its resources more effectively to target its oversight efforts. EPA’s efforts to measure environmental results have not consistently ensured that grantees achieve them. Planning for grants to achieve environmental results—and measuring results—is a difficult, complex challenge. However, as we pointed out in an earlier report, it is important to measure outcomes of environmental activities rather than just the activities themselves. Identifying and measuring the outcomes of EPA’s grants will help EPA better manage for results. EPA has awarded some discretionary grants before considering how the results of the grantees’ work would contribute to achieving environmental results. EPA has also not developed environmental measures and outcomes for all of its grant programs. OMB found that four EPA grant programs lacked outcome-based measures—measures that demonstrated the impact of the programs on improving human health and the environment—and concluded that one of EPA’s major challenges was demonstrating program effectiveness in achieving public health and environmental results. Finally, EPA has not always required grantees to submit work plans that explain how a project will achieve measurable environmental results. In 2002, EPA’s Inspector General reported that EPA approved some grantees’ work plans without determining the projects’ human health and environmental outcomes. In fact, for almost half of the 42 discretionary grants the Inspector General reviewed, EPA did not even attempt to measure the projects’ outcomes. Instead, EPA funded grants on the basis of work plans that focused on short-term procedural results, such as meetings or conferences. In some cases, it was unclear what the grant had accomplished. In 2003, the Inspector General again found that project officers had not negotiated environmental outcomes in work plans. The Inspector General found that 42 percent of the grant work plans reviewed—both discretionary and nondiscretionary grants—lacked negotiated environmental outcomes. EPA has not always effectively managed its grants staff and resources despite some past efforts.
EPA has not always appropriately allocated the workload for staff managing grants, provided them with adequate training, or held them accountable. Additionally, EPA has not always provided staff with the resources, support, and information necessary to manage the agency’s grants. To address these problems, EPA has taken a number of actions, such as conducting additional training and developing a new electronic grants management system. However, implementation weaknesses have precluded EPA from fully resolving its resource management problems. For example, EPA has not always held its staff—such as project officers—accountable for fulfilling their grants management responsibilities. According to the Inspector General and internal management reviews, EPA has not clearly defined project officers’ grants management responsibilities in their position descriptions and performance agreements. Without specific standards for grants management in performance agreements, it is difficult for EPA to hold staff accountable. It is therefore not surprising that, according to the Inspector General, project officers faced no consequences for failing to effectively perform grants management duties. Compounding the accountability problem, agency leadership has not always emphasized the importance of project officers’ grants management duties. EPA’s recently issued policies on competition and oversight and a 5-year grants management plan to address its long-standing grants management problems are promising and focus on the major management challenges, but these policies and plan require strengthening, enhanced accountability, and sustained commitment to succeed. EPA’s competition policy shows promise but requires a major cultural shift. In September 2002, EPA issued a policy to promote competition in grant awards by requiring that most discretionary grants be competed. The policy also promotes widespread solicitation for competed grants by establishing specific requirements for announcing funding opportunities in, for example, the Federal Register and on Web sites. This policy should encourage selection of the most qualified applicants. However, the competition policy faces implementation barriers because it represents a major cultural shift for EPA staff and managers, who have had limited experience with competition, according to EPA’s Office of Grants and Debarment. The policy requires EPA officials to take a more planned, rigorous approach to awarding grants. That is, EPA staff must determine the evaluation criteria and ranking of these criteria for a grant, develop the grant announcement, and generally publish it at least 60 days before the application deadline. Staff must also evaluate applications—potentially from a larger number of applicants than in the past—and notify applicants of their decisions. These activities will require significant planning and take more time than awarding grants noncompetitively. Oversight policy makes important improvements but requires strengthening to identify systemic problems. EPA’s December 2002 policy makes important improvements in oversight, but it still does not enable EPA to identify systemic problems in grants management.
Specifically, the policy does not (1) incorporate a statistical approach to selecting grantees for review so EPA can project the results of the reviews to all EPA grantees, (2) require a standard reporting format for in-depth reviews so that EPA can use the information to guide its grants oversight efforts agencywide, and (3) maximize use of information in its grantee compliance database to fully identify systemic problems and then inform grants management officials about oversight areas that need to be addressed. Grants management plan will require strengthening, sustained commitment, and enhanced accountability. We believe that EPA’s grants management plan is comprehensive in that it focuses on the four major management challenges—grantee selection, oversight, environmental results, and resources—that we identified in our work. For the first time, EPA plans a coordinated, integrated approach to improving grants management. The plan is also a positive step because it (1) identifies goals, objectives, milestones, and resources to achieve the plan’s goals; (2) provides an accompanying annual tactical plan that outlines specific tasks for each goal and objective, identifies the person accountable for completing the task, and sets an expected completion date; (3) attempts to build accountability into grants management by establishing performance measures for each of the plan’s five goals; (4) recognizes the need for greater involvement of high-level officials in coordinating grants management throughout the agency by establishing a high-level grants management council to coordinate, plan, and set priorities for grants management; and (5) establishes best practices for grants management offices. According to EPA’s Assistant Administrator for Administration and Resources Management, the agency’s April 2003 5-year grants management plan is the most critical component of EPA’s efforts to improve its grants management. In addition to the goals and objectives, the plan establishes performance measures, targets, and action steps with completion dates for 2003 through 2006. EPA has already begun implementing several of the actions in the plan, or actions meant to support it; these actions address previously identified problems. For example, EPA now posts its available grants on the federal grants Web site http://www.fedgrants.gov. In January 2004, EPA issued an interim policy to require that grant funding packages describe how the proposed project supports the goals of EPA’s strategic plan. Successful implementation of the new plan requires all staff—senior management, project officers, and grants specialists—to be fully committed to, and accountable for, grants management. Recognizing the importance of commitment and accountability, EPA’s 5-year grants management plan has as one of its objectives the establishment of clear lines of accountability for grants oversight. The plan, among other things, calls for (1) ensuring that performance standards established for grants specialists and project officers adequately address grants management responsibilities in 2004; (2) clarifying and defining the roles and responsibilities of senior resource officials, grant specialists, project officers, and others in 2003; and (3) analyzing project officers’ and grants specialists’ workload in 2004. In implementing this plan, however, EPA faces challenges to enhancing accountability.
Although the plan calls for ensuring that project officers’ performance standards adequately address their grants management responsibilities, agencywide implementation may be difficult. Currently, project officers do not have uniform performance standards, according to officials in EPA’s Office of Human Resources and Organizational Services. Instead, each supervisor sets standards for each project officer, and these standards may not include grants management responsibilities. Once individual project officers’ performance standards are established for the approximately 1,800 project officers, strong support by managers at all levels, as well as regular communication on performance expectations and feedback, will be key to ensuring that staff with grants management duties successfully meet their responsibilities. Furthermore, it is difficult to implement performance standards that will hold project officers accountable for grants management because these officers have a variety of responsibilities and some project officers manage few grants, and because grants management responsibilities often fall into the category of “other duties as assigned.” Although EPA’s current performance management system can accommodate development of performance standards tailored to each project officer’s specific grants management responsibilities, the current system provides only two choices for measuring performance—satisfactory or unsatisfactory—which may make it difficult to draw meaningful distinctions in performance. Such an approach may not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. EPA will also have difficulty achieving the plan’s goals if all managers and staff are not held accountable for grants management. The plan does not call for including grants management standards in managers’ and supervisors’ performance agreements. In contrast, senior grants managers in the Office of Grants and Debarment as well as other Senior Executive Service managers have performance standards that address grants management responsibilities. However, middle-level managers and supervisors also need to be held accountable for grants management because they oversee many of the staff that have important grants management responsibilities. According to Office of Grants and Debarment officials, they are working on developing performance standards for all managers and supervisors with grants responsibilities. In November 2003, EPA asked key grants managers to review all performance standards and job descriptions for employees involved in grants management, including grants specialists, project officers, supervisors, and managers, to ensure that the complexity and extent of their grant management duties are accurately reflected. Further complicating the establishment of clear lines of accountability, the Office of Grants and Debarment does not have direct control over many of the managers and staff who perform grants management duties—particularly the approximately 1,800 project officers in headquarters and regional program offices. The division of responsibilities between the Office of Grants and Debarment and program and regional offices will continue to present a challenge to holding staff accountable and improving grants management, and will require the sustained commitment of EPA’s senior managers.
If EPA is to better achieve its environmental mission, it must more effectively manage its grants—which account for more than half of its annual budget. While EPA’s new 5-year grants management plan shows promise, given EPA’s historically uneven performance in addressing its grants management challenges, congressional oversight is important to ensure that the Administrator of EPA, managers, and staff implement the plan in a sustained, coordinated fashion to meet the plan’s ambitious targets and time frames. To ensure that EPA’s recent efforts to address its grants management challenges are successful, in our August 2003 report, we recommended that the Administrator of EPA provide sufficient resources and commitment to meeting the agency’s grants management plan’s goals, objectives, and performance targets within the specified time frames. Furthermore, to strengthen EPA’s efforts, we recommended incorporating appropriate statistical techniques in selecting grantees for in-depth reviews; requiring EPA staff to use a standard reporting format for in-depth reviews so that the results can be entered into the grant databases and analyzed agencywide; developing a plan, including modifications to the grantee compliance database, to use data from its various oversight efforts—in-depth reviews, significant actions, corrective actions taken, and other compliance information—to fully identify systemic problems, inform grants management officials of areas that need to be addressed, and take corrective action as needed; modifying its in-depth review protocols to include questions on the status of grantees’ progress in measuring and achieving environmental outcomes; incorporating accountability for grants management responsibilities through performance standards that address grants management for all managers and staff in headquarters and the regions responsible for grants management and holding managers and staff accountable for meeting these standards; and evaluating the promising practices identified in the report and implementing those that could potentially improve EPA grants management. To better inform Congress about EPA’s achievements in improving grants management, we recommended that the Administrator of EPA report on the agency’s accomplishments in meeting the goals and objectives developed in the grants management plan and other actions to improve grants management, beginning with its 2003 annual report to Congress. EPA agreed with our recommendations and is in the process of implementing them as part of its 5-year grants management plan. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. For further information, please contact John B. Stephenson at (202) 512-3841. Individuals making key contributions to this testimony were Carl Barden, Andrea W. Brown, Christopher Murray, Paul Schearf, Rebecca Shea, Carol Herrnstadt Shulman, Bruce Skud, and Amy Webbink.
The Environmental Protection Agency (EPA) has long faced problems managing its grants, which constitute over one-half of the agency's annual budget, or about $4 billion. EPA uses grants to implement its programs to protect human health and the environment and awards grants to thousands of recipients, including state and local governments, tribes, universities, and nonprofit organizations. EPA's ability to efficiently and effectively accomplish its mission largely depends on how well it manages its grants resources. This testimony, based on GAO's August 2003 report Grants Management: EPA Needs to Strengthen Efforts to Address Persistent Challenges, GAO-03-846, focuses on the (1) major challenges EPA faces in managing its grants and how it has addressed these challenges in the past, and (2) extent to which EPA's recently issued policies and grants management plan address these challenges. EPA continues to face four key grants management challenges, despite past efforts to address them. These challenges are (1) selecting the most qualified grants applicants, (2) effectively overseeing grantees, (3) measuring the results of grants, and (4) effectively managing grant staff and resources. In the past, EPA has taken a series of actions to address these challenges by, among other things, issuing policies on competition and oversight, conducting training for project officers and nonprofit organizations, and developing a new data system for grants management. However, these actions had mixed results because of the complexity of the problems, weaknesses in design and implementation, and insufficient management attention. EPA's recently issued policies and a 5-year grants management plan to address longstanding management problems show promise, but these policies and plan require strengthening, enhanced accountability, and sustained commitment to succeed. EPA's September 2002 competition policy should improve EPA's ability to select the most qualified applicants by requiring competition for more grants. However, effective implementation of the policy will require a major cultural shift for EPA managers and staff because the competitive process will require significant planning and take more time than awarding grants noncompetitively. EPA's December 2002 oversight policy makes important improvements in oversight, but it does not enable EPA to identify systemic problems in grants management. For example, the policy does not incorporate a statistical approach to selecting grantees for review so that EPA can project the results of the reviews to all EPA grantees. Issued in April 2003, EPA's 5-year grants management plan does offer, for the first time, a comprehensive road map with objectives, goals, and milestones for addressing grants management challenges. However, in implementing the plan, EPA faces challenges in holding all managers and staff accountable for successfully fulfilling their grants management responsibilities. Without this accountability, EPA cannot ensure the sustained commitment needed for the plan's success. While EPA has begun implementing actions in the plan, GAO believes that, given EPA's historically uneven performance in addressing its grants challenges, congressional oversight is important to ensure that EPA's Administrator, managers, and staff implement the plan in a sustained, coordinated fashion to meet the plan's ambitious targets and time frames.
The HCFAC program was established under HIPAA to (1) coordinate federal, state, and local law enforcement efforts to control fraud and abuse associated with health plans; (2) conduct investigations, audits, evaluations, and inspections of delivery and payment for health care in the United States; (3) facilitate the enforcement of federal health care fraud and abuse laws; (4) provide guidance to the health care industry in the form of advisory opinions, safe harbor notices, and special fraud alerts; and (5) establish a national database of adverse actions against health care providers. HIPAA requires that HHS and DOJ issue a joint annual report to Congress that outlines the amounts returned to the Medicare Trust Funds for the previous fiscal year under various categories, such as amounts of criminal fines and civil monetary penalties—penalties for certain activities, such as knowingly presenting a Medicare claim that is not medically necessary. Additionally, HHS and DOJ are required to report the amounts deposited into and expended from the Medicare Trust Funds to conduct HCFAC activities during the previous fiscal year and the justification for those expenditures. In addition to the mandatory appropriations provided under HIPAA, which Congress increased in 2010, DOJ and HHS-OIG have received discretionary funding through annual appropriations for the HCFAC program since fiscal year 2009. The annual HCFAC report includes a summary of the key HCFAC activities that the agencies and their components carried out and provides information on the outputs or outcomes of those activities. For example, the report includes information on the amount of money returned to the Medicare Trust Funds as a result of HCFAC activities. Additionally, the report includes sections that describe the activities conducted by each agency and component that received HCFAC funding. These sections provide information on the outputs of each component’s activities. For example, DOJ’s USAO section highlights the number of new criminal investigations initiated and the number of civil matters pending. HHS and DOJ receive funding from several appropriations to conduct their HCFAC program activities. Figure 1 describes HCFAC appropriations to HHS, HHS-OIG, and DOJ. Mandatory funds are appropriated by HIPAA from the Medicare Trust Funds, and are available until expended, meaning that the funds can be spent in other years. A large portion of these funds is appropriated to HHS-OIG; the law appropriates the remainder to both HHS and DOJ, which must determine together how to allocate the funds—referred to as the wedge—between the agencies. In each fiscal year, beginning with fiscal year 2009, Congress appropriated discretionary funding to DOJ and HHS-OIG to finance activities conducted under the HCFAC program. In addition, Congress also appropriated discretionary funds to CMS for program integrity activities it conducts in Medicare and Medicaid, which was outside the scope of our review. Although the FBI is a component of DOJ and was allocated a portion of DOJ’s discretionary HCFAC funding (about $3.4 million), the FBI also received mandatory funding under HIPAA to conduct health care fraud and abuse activities. This mandatory funding was appropriated from the general fund of the U.S. Treasury. In addition to the HCFAC mandatory and discretionary funding that HHS, DOJ, and its components receive, the agencies use funding from other appropriations to support HCFAC activities.
For example, HHS’s Office of General Counsel (OGC) uses appropriations from HHS's General Departmental Management appropriation to support its HCFAC activities. HHS and DOJ components conduct a variety of activities under the HCFAC program using mandatory and discretionary HCFAC funding. Among other activities, HHS components identify and investigate fraud through programs, including the Administration for Community Living’s (ACL) Senior Medicare Patrol programs, which are designed to educate and train Medicare beneficiaries to identify fraud. HHS’s OGC supports a variety of program integrity work, including assisting DOJ on False Claims Act cases. HHS’s Food and Drug Administration (FDA) conducts the Pharmaceutical Fraud Program, which is designed to detect pharmaceutical, biologics, and medical device fraud. CMS uses a portion of HHS’s HCFAC funding to improve its financial oversight of the Medicaid program and CHIP, and for a pilot project related to fraud in community mental health centers. CMS also uses its portion of HCFAC funding to support its efforts related to the Medicare Fraud Strike Force (Strike Force) teams, which consist of investigators and prosecutors who use advanced data analysis techniques to identify, investigate, and prosecute potentially fraudulent billing patterns in geographic areas with high rates of health care fraud. HHS-OIG conducts a variety of activities to identify and reduce fraud, waste, and abuse. For example, HHS-OIG assesses civil monetary penalties and imposes other administrative penalties—such as excluding individuals and entities from participating in federal health care programs—against individuals and entities for certain types of conduct. Each of HHS-OIG’s components receives HCFAC funding for the work it conducts. Among other activities: The Office of Investigations (OI) coordinates and conducts investigations of allegations of fraud, waste, and abuse in Medicare and Medicaid. The Office of Evaluation and Inspections (OEI) conducts national evaluations on issues related to preventing fraud, waste, and abuse, and promoting economy, efficiency, and effectiveness of HHS programs. The Office of Audit Services (OAS) conducts independent audits of HHS programs, grantees, and contractors. The Office of Counsel to the Inspector General (OCIG) exercises the authority to impose civil and administrative penalties related to health care fraud, as well as issue advisory opinions. The Office of Management and Policy (OMP) provides management, guidance, and resources in support of the other HHS-OIG components. DOJ’s components have the primary role in enforcing U.S. laws related to health care fraud and abuse, including both criminal and civil matters. For example: The Criminal Division prosecutes criminal health care fraud and leads the Strike Force teams. The Civil Division represents the U.S. in civil fraud matters, such as False Claims Act cases, and has the authority to bring criminal charges under the Federal Food, Drug, and Cosmetic Act. The USAOs litigate or prosecute civil and criminal health care fraud cases in their 94 districts throughout the country and are part of the Strike Force teams. The Civil Rights Division enforces several laws related to cases of abuse, substandard care, or needless institutionalization of certain individuals. The Justice Management Division (JMD) provides financial oversight of the DOJ components.
The FBI serves as an investigative agency with jurisdiction in both federal and private health insurance programs, and participates in task forces and undercover operations to investigate health care fraud. Although the agencies and components conduct certain activities without assistance from other agencies and components, HHS, CMS, HHS-OIG, and DOJ—including the FBI—frequently collaborate to investigate and prosecute fraud in federal health care programs. For example, HHS-OIG, FBI, and DOJ investigators and prosecutors comprise Strike Force teams. Table 2 in appendix I provides further detail on these activities. In fiscal year 2012, HHS, HHS-OIG, and DOJ obligated approximately $583.6 million to fund HCFAC activities. About 78 percent of obligated funds were from mandatory HCFAC appropriations, 11 percent of obligated funds were from discretionary HCFAC appropriations, and 12 percent of obligated funds were from other appropriations. Most of the obligations for HCFAC activities were for personnel costs; some agencies reported obligating funds for services under contract and supplies. Additionally, HHS-OIG and DOJ obligated over 8 percent of their HCFAC funds to support Strike Force teams located in 9 cities nationwide. In fiscal year 2012, HHS, HHS-OIG, and DOJ reported $583.6 million in obligations for HCFAC activities. This total includes obligations of mandatory (about 78 percent) and discretionary (about 11 percent) HCFAC appropriations and other appropriations not specific to the HCFAC program (about 12 percent). HCFAC mandatory funds are available until expended, while discretionary HCFAC funds are available for 2 fiscal years. Other appropriations that agencies use for HCFAC activities vary in how long they are available. Because agencies reported in fiscal year 2012 obligating funds that were carried over from prior fiscal years, and because agencies obligated funds from other appropriations not specific to the HCFAC program, the obligations the agencies reported for HCFAC activities in fiscal year 2012—$583.6 million—exceed the HCFAC funds appropriated to the agencies for that year. For example, HHS, HHS-OIG, and DOJ were appropriated $486.1 million in HCFAC mandatory and discretionary funding for fiscal year 2012. However, for fiscal year 2012, these agencies reported HCFAC obligations of $583.6 million, including over $67 million in obligations of other appropriations, as well as obligations of funds appropriated in prior fiscal years. In fiscal year 2012, DOJ incurred about 48 percent of the agencies’ total HCFAC obligations (about $280.3 million), while HHS-OIG incurred about 44 percent ($258.8 million), and HHS incurred the remaining 8 percent ($44.4 million). See figure 2 for the distribution of HCFAC obligations by appropriations type—HCFAC mandatory, HCFAC discretionary, and other appropriations—by HHS, HHS-OIG, and DOJ’s components for fiscal year 2012. See table 3 in appendix II for the distribution of HCFAC obligations by appropriations type—HCFAC mandatory, HCFAC discretionary, and other appropriations—for HHS, HHS-OIG, and DOJ’s components for fiscal years 2008 through 2012. A portion of the mandatory HCFAC appropriation that supports HHS and DOJ’s HCFAC activities—or wedge funds—is allocated to each agency. According to an HHS official, in fiscal year 2010, the departments reached a standing agreement for the following allocations: approximately 38 percent for HHS and 62 percent for DOJ.
Prior to fiscal year 2010, HHS and DOJ negotiated each year how to divide the wedge funds between the two agencies, which an HHS official described as time-consuming. HHS distributes its wedge funds to HHS components based on their annual funding requests that the Secretary approves. In fiscal year 2012, HHS distributed mandatory funding to ACL for the Senior Medicare Patrol programs, OGC to support program integrity work of its clients, FDA to support the Pharmaceutical Fraud Program, and CMS to support Medicaid and CHIP financial specialists and a pilot project related to fraud in community mental health center providers in Texas, Florida, and Louisiana. According to a DOJ official, DOJ distributes mandatory HCFAC funds—its portion of wedge funds—to its components to carry out their HCFAC activities, and the distribution of such funds has not varied much since the inception of the program. Separately, HHS-OIG receives a mandatory appropriation for its HCFAC activities. This appropriation is HHS-OIG’s primary source of funding for Medicare and Medicaid fraud investigations, as well as for audits, evaluations, and inspections it conducts related to the Medicare and Medicaid programs. In fiscal year 2012, DOJ and HHS-OIG obligated discretionary HCFAC appropriations. According to the information each agency reported to us, each DOJ component received a share of DOJ’s discretionary HCFAC appropriation for their HCFAC activities. A DOJ official told us that DOJ components generally received the same amount of funding from the agency’s discretionary HCFAC appropriation for their HCFAC activities in fiscal year 2012 as they had in prior fiscal years. Additionally, the official indicated that a large portion of DOJ’s HCFAC discretionary appropriation supports the Strike Force teams because DOJ believes that these teams reduce fraud. One component in HHS-OIG—OMP—received discretionary HCFAC appropriations. Most obligations of these funds are for overhead expenses for the HHS-OIG components that are handled by OMP (such as rent and utilities). HHS, HHS-OIG, and DOJ components obligated over $67 million in funds from other appropriations in addition to the mandatory and discretionary HCFAC appropriations they obligated for HCFAC activities in fiscal year 2012. Within HHS, one component—OGC—used other appropriations to supplement its HCFAC funding. To carry out its HCFAC activities, OGC obligated funds from the annual HHS General Department Management appropriation, which accounted for almost half of its overall obligations for HCFAC activities in fiscal year 2012. ACL, CMS, and FDA did not use other appropriations to support their HCFAC activities. OGC’s reported obligations of other appropriations were estimates and also included reimbursements for attorney services provided to OGC clients within HHS that supported HCFAC activities. Each of HHS-OIG’s components obligated funds from other appropriations to support HCFAC activities in fiscal year 2012. HHS-OIG obligated about $18.9 million from other appropriations for HCFAC activities in fiscal year 2012, which represented about 7 percent of its overall HCFAC obligations. HHS-OIG reported that the other appropriations used to support HCFAC activities included funds appropriated specifically to support HHS-OIG’s Medicare and Medicaid program integrity work.
For example, in fiscal year 2012, each HHS-OIG component reported obligating funds appropriated in section 6034 of the Deficit Reduction Act (which, among other things, established the Medicaid Integrity Program and provided HHS-OIG with increased funding for Medicaid fraud and abuse control activities) to conduct HCFAC activities. Although some DOJ components reported obligating funds from other appropriations for HCFAC activities, they also reported carrying over some of their HCFAC funding into other fiscal years. A DOJ official told us that funds are often carried over to a new fiscal year, such as in the situation of a continuing resolution, which may shorten the number of months in which they are able to obligate the appropriated funds. DOJ components also reported using other appropriations for health care fraud work, such as to investigate qui tam cases alleging false claims and to prepare cases for trial. The Civil Rights Division reported using DOJ's Salaries and Expenses, General Legal Activities appropriation to fund the rent for office space used by personnel, and the FBI reported using its annual appropriation to cover personnel expenses for investigators working health care fraud cases beyond those covered by HCFAC funds. HHS, HHS-OIG, and DOJ reported that most of their HCFAC obligations were for personnel costs in fiscal year 2012, with some exceptions based on the type of HCFAC activities each component performs (see table 3 of appendix II for HCFAC obligations for fiscal years 2008 through 2012 for HHS, HHS-OIG, and DOJ’s components). A large portion of most HHS components’ HCFAC obligations were for personnel costs. The same was true for HHS-OIG and DOJ. Each agency relied on personnel to conduct HCFAC activities—HHS-OIG employed investigators to examine potential fraud cases and DOJ employed investigators, attorneys, and other support personnel to investigate and prosecute fraud cases. Additionally, HHS-OIG employed auditors and evaluators to study issues related to the Medicare and Medicaid programs, including issues related to fraud in these programs, as well as a variety of other issues. HHS, HHS-OIG, and DOJ components also reported that their next largest amount of HCFAC obligations was for contractual services and supplies. Components reported using these contractual services and supplies for transportation, rent, supplies, or other contractual services—such as for litigation consultants (for example, medical experts) and litigation support (for example, paralegals to review case documentation), among other things. Obligations for personnel and contracted services and supplies generally accounted for almost all of a component’s HCFAC obligations. Specifically, for HHS’s components, obligations for personnel costs represented the largest portion of FDA’s, CMS’s, and OGC’s obligations for HCFAC activities for fiscal year 2012. In contrast, most of ACL’s obligations in fiscal year 2012 were for expanding grants to the Senior Medicare Patrol programs. Each of HHS-OIG’s components, with the exception of OMP, reported obligations for personnel costs as their largest HCFAC obligations for fiscal year 2012, devoting 87 percent or more of their obligations to personnel in fiscal year 2012. For OMP, over 70 percent of its obligations were devoted to rent, communication, utilities, equipment, printing, and other contractual services. OMP officials told us that certain overhead expenses incurred by the HHS-OIG components—for example, rent payments—are handled by OMP.
About half or more of DOJ components’ obligations for HCFAC activities were for personnel costs. In fiscal year 2012, the USAOs, Civil Division, Criminal Division, Civil Rights Division, and FBI reported that obligations for personnel costs ranged from 47 percent (Civil Division) to 84 percent (USAOs) of their obligations. For example, for the Civil Division, obligations for contractual services and supplies represented 53 percent of its HCFAC obligations; officials told us that they use contracted services for litigation consultants (such as medical experts to review medical records or to prepare exhibits to be used at trial) and for litigation support (such as paralegals to review case documentation). In fiscal year 2012, HHS-OIG and DOJ obligated over $47 million in HCFAC funds to support Strike Force teams. This represented about 8.1 percent of the $583.6 million in obligations for HCFAC activities. DOJ officials told us that Strike Force teams are an important and valuable tool for identifying potential health care fraud schemes. (See table 1 for the HCFAC obligations by Strike Force location for fiscal year 2012, and see appendix II, table 4 for information on HCFAC obligations devoted to Strike Force teams for fiscal years 2008-2012.) In fiscal year 2012, DOJ and HHS-OIG obligated over $12.9 million for the Strike Force team in Miami, which represented over 27 percent of funding for all Strike Force teams. The first Strike Force team was officially launched in Miami in fiscal year 2007, based in part on an HHS-OIG evaluation that found aberrant claims patterns for infusion therapy for Medicare beneficiaries with HIV/AIDS that differentiated South Florida Medicare providers and beneficiaries from the rest of the country. Additionally, obligations for Miami’s Strike Force team were more than twice as much as in Detroit, the team with the second highest obligations for fiscal year 2012 ($5.1 million). Based on the obligations reported for fiscal year 2012, HHS-OIG’s Office of Investigations accounted for 45 percent of the total obligations used for Strike Force teams. The FBI incurred 25 percent, DOJ’s Criminal Division incurred 17 percent, and the USAOs incurred 13 percent of obligations for the Strike Force teams. HHS-OIG’s Office of Investigations and the FBI’s agents conduct investigations and gather evidence, such as through surveillance for Strike Force cases, while DOJ’s Criminal Division and the USAOs’ attorneys are the primary prosecutors of Strike Force cases. Additionally, although not reflected in the table above, CMS obligated approximately $350,656 in discretionary HCFAC appropriations to support Strike Force teams. CMS’s HCFAC obligations were not associated with any one individual Strike Force city. Since fiscal year 2010, the USAOs have used some of their HCFAC discretionary appropriation for three Special Focus teams—in San Francisco, Boston, and Philadelphia. These Special Focus teams are similar to the Strike Force teams, but handle pharmaceutical civil cases rather than criminal cases. Approximately $2.8 million of the USAOs’ HCFAC obligations in fiscal year 2012 were for these Special Focus teams. This amount was in addition to the HCFAC obligations they used for the Strike Force teams. HHS, HHS-OIG, and DOJ use several indicators to assess HCFAC activities as well as to inform decision-makers about how to allocate resources. These indicators include those listed in the annual HCFAC report as well as others outlined in agency reports.
For example, FDA assesses the work of its Pharmaceutical Fraud Program by tracking the number of criminal investigations opened and the outcomes of criminal convictions obtained, among other indicators. Additionally, many of the indicators that HHS, HHS-OIG, and DOJ use reflect the collective work of multiple agencies since they work many health care fraud cases jointly. Outputs from some of these key indicators have changed in recent fiscal years. For example, the return-on-investment has increased from $4.90 returned for every $1.00 invested for fiscal years 2006-2008 to $7.90 returned for every $1.00 invested for fiscal years 2010-2012. HHS, HHS-OIG, and DOJ officials reported using several indicators to assess HCFAC activities and that those indicators serve multiple purposes. Several indicators are included in the annual HCFAC report, while other indicators are reported in agency documents or used internally. Additionally, some indicators are collective—in that they reflect the work of multiple agencies—and other indicators outline the activities conducted by a particular agency or component. Appendix III, tables 5 through 8, provides detailed information on indicators used to assess the activities conducted using HCFAC funding, including those outlined in the HCFAC report, as well as other indicators the agencies use, by agency and component. Each HHS component conducts unique activities related to health care fraud and abuse. As a result of these different types of activities, the indicators that each HHS component uses to highlight the accomplishments of its HCFAC activities vary. FDA uses indicators associated with its Pharmaceutical Fraud Program—which focuses on detecting, prosecuting, and preventing pharmaceutical, biologic, and medical device fraud—including the number of criminal investigations opened during a fiscal year and the outcomes of criminal convictions obtained (such as amount of jail time, probation, or amount of restitution). FDA officials told us that the indicators they use are outlined in the annual HCFAC report. For example, FDA reported in the fiscal year 2012 HCFAC report that it had opened 42 criminal investigations since the inception of the Pharmaceutical Fraud Program, and 17 investigations during fiscal year 2012. ACL primarily uses indicators that track information related to the Senior Medicare Patrol (SMP) programs—which train senior volunteers to inform fellow beneficiaries on how to detect and prevent fraud, waste, and abuse in the Medicare program—such as indicators related to beneficiary education and training, outreach activities, and events the SMP programs conduct, and cases that were referred for investigation. For instance, ACL tracks the number of group education sessions the SMPs conduct and the estimated number of beneficiaries who attended the sessions. Many of the indicators ACL uses are outlined in an annual HHS-OIG report on the SMP programs, as well as the annual HCFAC report. According to ACL officials, ACL has hired a contractor to assess the adequacy of the current indicators used by ACL and to determine if the indicators are appropriate for evaluating the performance of the SMPs. HHS’s OGC uses several indicators to assess the HCFAC activities it conducts. These indicators include amounts of recoveries for matters on which OGC has assisted—such as False Claims Act matters and civil monetary penalties—and the number of physician self-referral disclosures on which OGC advised. These indicators are outlined in the annual HCFAC report.
According to the fiscal year 2012 HCFAC report, OGC advised CMS on the new voluntary Self Referral Disclosure Protocol established by the Patient Protection and Affordable Care Act (Pub. L. No. 111-148, § 6409(a), 124 Stat. at 772). Under this protocol, providers of services and supplies may self-disclose actual or potential violations of the physician self-referral law, commonly known as the Stark law. The Stark law prohibits physicians from making certain referrals for “designated health services” paid for by Medicare to entities with which the physician (or immediate family members) has a financial relationship, unless the arrangement complies with a specified exception, such as in-office ancillary services (42 U.S.C. § 1395nn(a)(1), (b)(2)). CMS officials told us that one indicator they use is the drop in the number of claims for particular services, which they believe coincides with the efforts of the Strike Force teams to investigate and prosecute fraud. For example, according to information that CMS provided to us, payments for home health services dropped by nearly one-half from 2008 to 2011 in Miami-Dade County, which officials believe was, in part, due to the Strike Force team’s efforts focused on reducing fraud in home health care. HHS-OIG uses a variety of indicators to assess the work it conducts using HCFAC funds. Some of these indicators reflect the collective work of HHS-OIG’s components and some are unique to the activities conducted by a particular component. For example, HHS-OIG tracks the health care savings attributable to HHS-OIG investigations, audits, and evaluations. This indicator includes work from nearly all HHS-OIG components, including the Office of Investigations, the Office of Audit Services, and the Office of Evaluation and Inspections. Among many other indicators, HHS-OIG’s Office of Counsel to the Inspector General tracks the number of corporate integrity agreements monitored for compliance, which is specific to the work of that office. HHS-OIG officials told us that the indicators they use to assess HCFAC activities are reported in the annual HCFAC report and in other HHS-OIG reports (such as its semi-annual reports to Congress). DOJ uses several indicators to assess the work it conducts with HCFAC funding. The indicators it uses relate to the activities that each DOJ component conducts to enforce health care fraud and abuse laws. For example, the USAOs use indicators related to criminal prosecutions, including the number of defendants charged and the number of convictions. In addition to those measures, the USAOs also track information related to civil matters, such as the number of pending civil investigations. In addition to the indicators listed in the annual HCFAC report, officials from DOJ’s components told us that they use other indicators to assess the work they conduct related to health care fraud and abuse. Officials told us that these indicators are tracked at the departmental level and aggregate the work of multiple DOJ components. For example, DOJ tracks the percentage of criminal and civil cases resolved favorably. These indicators include health care fraud cases, as well as other cases that DOJ components handle. Officials from HHS, HHS-OIG, and DOJ told us that they use indicators to inform decision-makers about how to allocate resources. For example, officials from DOJ’s Civil Rights Division told us that they use indicators to help determine what resources they need to handle their current caseload.
The Civil Rights Division considers the number of cases the division is currently working on, along with the number of remedial agreements with facilities that the division needs to monitor in the upcoming year, when developing requests for funding. Additionally, officials from FDA told us that they review the preceding year’s number of investigations and the costs associated with those investigations when requesting annual funding. HHS, HHS-OIG, and DOJ officials also indicated that they use data to inform their decisions about which activities to prioritize, including what cases or studies to undertake, as well as where to locate specific resources. For example, officials from HHS-OIG told us that they use Medicare claims data to identify which service areas to target for investigations, audits, or evaluations, as well as which geographic regions to focus their efforts on. Officials said that they continually review whether HHS-OIG staff are located in the most appropriate geographic areas and have relocated staff to enhance the efficiency of HHS-OIG resources. HHS-OIG officials also told us that the agency uses several indicators for internal management purposes. Additionally, officials from DOJ’s Criminal Division told us that one factor they consider when deciding how to prioritize cases is to review data analyses to focus on cases with large amounts of alleged fraudulent billing. Since HHS-OIG and DOJ’s components work many health care fraud cases jointly, many of the indicators included in the annual HCFAC report highlight the work of both HHS-OIG and DOJ, as well as various components within each agency. For example, the report includes information on the results of HCFAC activities, such as the dollar amount recovered as a result of fraud cases, which HHS-OIG and DOJ officials say reflects the investigative work done by HHS-OIG and FBI, as well as the work of DOJ’s components in prosecuting the cases. Additionally, the report presents several indicators related to the work of the Strike Force teams, such as the number of indictments and complaints involving charges that were filed, the outcomes of the cases, and the total amount of alleged billing to Medicare as a result of these Strike Force cases. The return-on-investment is another indicator that reflects the work of multiple agencies and has changed in recent years. We have recognized that agencies can use a return-on-investment as a valuable tool for assessing a program’s activities and for determining how best to target resources. The return-on-investment is included in the annual HCFAC report and compares the amount of funds that were returned to the Medicare Trust Funds, such as restitution and compensatory damages awarded, with the amount of appropriations for HCFAC activities. Specifically: The total returns—the numerator—include deposits to the Medicare Trust Funds. The calculation includes amounts that were deposited into the Medicare Trust Funds rather than amounts that were ordered or negotiated in health care fraud and abuse cases, but not yet transferred to the Medicare Trust Funds. Officials reported that although there may be large amounts of restitution ordered or agreed upon in health care fraud cases, the amounts actually returned to the Medicare Trust Funds may be lower. By including only those funds that have been returned to the Medicare Trust Funds, the return-on-investment is not artificially inflated.
For example, officials told us that although a defendant convicted of health care fraud may be ordered to pay restitution and penalties in a specific amount, the defendant may pay less than what is ordered, as the ability to pay often affects how much is actually received. Many cases discussed in the annual HCFAC report include settlements reached with pharmaceutical and device manufacturers for criminal and civil liabilities. For example, the fiscal year 2012 HCFAC report describes many settlements reached with pharmaceutical and device manufacturers, with settlements ranging from about $200,000 to $3 billion. The total investment—the denominator—includes mandatory and discretionary HCFAC funds that were appropriated to HHS, HHS-OIG, and DOJ (including the FBI’s mandatory funds devoted to health care fraud and abuse reduction activities) and does not include funding from other appropriations. DOJ officials told us that the HCFAC funding that CMS receives through HHS’s wedge fund is included in the return-on-investment calculation, as is the small portion of HCFAC discretionary funds that CMS uses to support the Strike Force teams. Return-on-investment is calculated using a 3-year moving average. To account for differences between years in the amounts returned to the Medicare Trust Funds, the calculation averages both returns and appropriations over 3 years. For example, a case may have been investigated in fiscal year 2010 but not settled until fiscal year 2012, and thus the funds received from that case would not be deposited until 2012. Similarly, although agencies may carry over HCFAC appropriations into future fiscal years, the amount of appropriations included in the calculation is also based on a 3-year average, with carryover amounts included in the year in which they were appropriated. According to the annual HCFAC report, the return-on-investment for fiscal years 2010-2012 was $7.90 returned to the Medicare Trust Funds for every $1.00 of HCFAC funds appropriated for HCFAC activities. The return-on-investment increased steadily from fiscal year 2008 to 2012. In fiscal years 2006-2008, the return-on-investment was $4.90 to $1.00, and in fiscal years 2010-2012, the return-on-investment was the highest at $7.90 to $1.00. See figure 3 for additional information on the return-on-investment for fiscal years 2008-2012. A review of other key outputs listed in the annual HCFAC reports from 2008 through 2012—outputs that reflect accomplishments of activities conducted by multiple agencies using HCFAC funding—shows that some key outputs have generally increased and some have remained stable. During the same time period, HCFAC obligations and funding from other appropriations used to support HCFAC activities increased about 38 percent. See figure 4 for data on selected key outputs for fiscal years 2008 to 2012, and see appendix IV, table 9 for additional detailed information on the key outputs for fiscal years 2008 to 2012. One key output that has increased since fiscal year 2008 is the number of defendants convicted of health care fraud, which rose from 588 in fiscal year 2008 to 826 in fiscal year 2012 (a 40 percent increase). Some key outputs did not change between fiscal years 2008 and 2012, and while funding has increased since 2008, there has not been a consistent pattern of increasing outputs.
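To illustrate the 3-year moving-average computation described above, the following minimal sketch averages deposits to the Medicare Trust Funds and HCFAC appropriations over the same 3-year window and takes their ratio. The dollar amounts and the function name are hypothetical placeholders for illustration, not figures or code from the HCFAC report.

```python
# Minimal sketch of the 3-year moving-average return-on-investment
# calculation. All dollar amounts below are hypothetical.

def three_year_roi(returns_by_year, appropriations_by_year, end_year):
    """Average Medicare Trust Funds deposits over the 3 fiscal years
    ending in end_year, divided by average HCFAC appropriations over
    the same 3 years (carryover counted in the year appropriated)."""
    years = range(end_year - 2, end_year + 1)
    avg_returns = sum(returns_by_year[y] for y in years) / 3
    avg_appropriated = sum(appropriations_by_year[y] for y in years) / 3
    return avg_returns / avg_appropriated

# Hypothetical totals, in millions of dollars.
returns = {2010: 2900, 2011: 3100, 2012: 4200}
appropriations = {2010: 400, 2011: 420, 2012: 486}

print(f"${three_year_roi(returns, appropriations, 2012):.2f} "
      "returned per $1.00 appropriated")
```

Because both the numerator and the denominator are averaged over 3 years, a single large settlement deposited in one year raises the reported ratio for three consecutive reporting periods, which smooths the year-to-year variation the report describes.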
For example, the number of new criminal health care fraud investigations opened increased from 957 in fiscal year 2008 to 1,131 in fiscal year 2012, and the number of new civil health care fraud investigations opened did not vary much between 2008 (843 cases) and 2012 (885 cases). HHS-OIG and DOJ officials indicated that a number of factors might contribute to these trends. DOJ officials told us that the complexity of fraud cases has increased in recent years and that complex cases require more substantial resources to investigate and prosecute than less-complex cases. Officials stated that this limits the amount of resources they are able to commit to other cases. HHS-OIG and DOJ officials also cited other factors, including external factors (such as an increase in the number of defendants opting to go to trial) and significant changes to federal health care programs (such as the implementation of the Medicare Part D prescription drug program), which might influence these trends. Nonetheless, HHS-OIG and DOJ officials indicated that they consider the increase since 2008 in some of the key outputs to be significant. For example, HHS-OIG officials noted that there was an increase of 42 civil fraud investigations from 2008 to 2012, and they consider this increase significant given the complexity of fraud schemes and the resources needed to handle these civil cases. Additionally, DOJ officials told us that they consider the 18 percent increase in the number of new criminal fraud investigations opened to be significant. DOJ officials also indicated that several key outputs related to the Strike Force teams have increased since 2008. See appendix IV for detailed information on key outputs related to HCFAC activities, including the Strike Force teams. The indicators used by agencies to track the outputs of HCFAC activities provide information on the accomplishments of HCFAC activities, not on the effectiveness of the activities in reducing health care fraud and abuse. HHS, HHS-OIG, and DOJ officials reported that they consider the indicators to be the outputs or accomplishments of the HCFAC activities they conduct and that, in that sense, the indicators provide a composite picture of the achievements of the HCFAC program. However, difficulty in establishing a causal link between HCFAC activities and output indicators, difficulty in determining the deterrent effect HCFAC activities may have on potential health care fraud and abuse, limited research on the effectiveness of health care fraud interventions, and the lack of a health care fraud baseline hinder a broader understanding of the effectiveness of the HCFAC program in reducing health care fraud and abuse. The indicators that HHS, HHS-OIG, and DOJ use to track HCFAC activities offer insights on the accomplishments and outputs of HCFAC activities, but they do not measure the effectiveness of the HCFAC program in reducing health care fraud and abuse. HHS, HHS-OIG, and DOJ officials reported that they consider the indicators they use to be the accomplishments or outputs of the HCFAC activities they conduct. For example, the key program outputs discussed earlier in this report reflect accomplishments of activities agencies conduct using HCFAC funding.
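As a check on the percentage changes cited for these outputs, here is a minimal Python sketch using the counts reported above; the helper function is our own illustration, not an agency tool.

# Verify the percentage changes cited for fiscal years 2008-2012.
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(round(pct_change(588, 826)))   # defendants convicted: ~40 percent increase
print(round(pct_change(957, 1131)))  # new criminal investigations opened: ~18 percent increase
print(885 - 843)                     # new civil investigations opened: 42 more cases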
Officials from HHS, HHS-OIG, and DOJ told us that these indicators can be used to provide insights on program activities or the number of actions a component has been able to accomplish in a specific time frame (e.g., the number of defendants convicted in a fiscal year). However, several HHS and DOJ agency officials told us that they do not consider these indicators to be measures of the performance or the effectiveness of the HCFAC program in reducing health care fraud. The return-on-investment is an example of an indicator that describes program results but does not measure program effectiveness. We found that the return-on-investment provides information on the accomplishments of HCFAC activities in relationship to the amount of funds appropriated for these activities, but does not provide information on the extent to which the HCFAC program reduces health care fraud. Additionally, most of the indicators used to track HCFAC activities do not have targets or goals associated with them. Although standard practices for internal controls indicate that ongoing performance monitoring should include comparison of performance indicator data against planned targets, our previous work has recognized that establishing measures and setting specific targets in the law enforcement area can be challenging. Officials from HHS, HHS-OIG, and DOJ told us that they intentionally do not set performance targets for indicators such as the number of health care fraud investigations or prosecutions undertaken because such targets could cause the public to perceive law enforcement as engaging in "bounty hunting" or pursuing arbitrary targets merely to meet particular goals. Officials stated that they want law enforcement actions to be based on merit and to avoid the appearance that they strive to achieve certain numerical quotas. HHS, HHS-OIG, and DOJ officials, as well as literature we reviewed, indicate that several factors make assessing the effectiveness of the HCFAC program in reducing health care fraud and abuse challenging. It is difficult to establish whether the HCFAC program has a direct relationship to changes in the amount of health care fraud and abuse. HHS, HHS-OIG, and DOJ officials told us that HCFAC activities—as well as other efforts by federal agencies and others, including non-government entities—may have helped reduce health care fraud; however, the effect that any of these actions may have had on health care fraud and abuse is difficult to isolate. For example, HHS-OIG officials stated that compliance training and guidance provided by the HHS-OIG to health care organization directors—an activity conducted with HCFAC funding—may have had an effect on health care fraud but that it is difficult to isolate how much of an effect the activity has had. At the same time, according to HHS-OIG officials, a rise in the number of provider compliance programs established by hospital organizations in response to shareholder interest in improving compliance with federal and state health care program requirements may also contribute to reductions in health care fraud. Moreover, many efforts within CMS aim to reduce health care fraud and abuse, in addition to those identified as HCFAC activities, and it is difficult to know which CMS program or activity has had an effect on the incidence of fraud. For example, CMS has implemented a number of initiatives to prevent health care fraud and abuse that are not funded with HCFAC funds.
One such effort is a change to the provider enrollment process, which is designed to better ensure that only legitimate providers and suppliers are allowed to bill Medicare. However, it is difficult to isolate the effect that either HCFAC activities or broader CMS efforts may have had in reducing health care fraud and abuse. Another factor that limits understanding of the effectiveness of the HCFAC program in reducing health care fraud and abuse is the difficulty in quantifying the HCFAC program's effect in deterring health care fraud and abuse. DOJ officials provided anecdotal evidence that HCFAC activities help to deter would-be offenders. For example, a Justice Management Division official asserted that DOJ prosecutions that result in doctors being sentenced to prison for health care fraud and abuse deter other doctors who are contemplating committing fraud. Other DOJ officials reported that cooperating witnesses in health care fraud investigations have told officials of instances where a provider committing potentially fraudulent acts had ceased operations because of the pressure brought by Strike Force prosecutions. DOJ officials stated that they could recall about a dozen examples of specific individuals who have said they were deterred from committing fraud or ceased a fraudulent operation because they saw another individual get caught. However, these examples are anecdotal, and DOJ and HHS-OIG officials stated that it is difficult to know how much health care fraud is deterred as a result of HCFAC activities. Research on the effectiveness of health care fraud and abuse interventions, and on ways to measure that effectiveness, has been limited. We found that none of the 49 articles we selected to review for this study evaluated the effectiveness of the HCFAC program specifically, and few studies examined the effectiveness of health care fraud and abuse interventions in general. A recent review of literature conducted by experts in the field found similar results. Another challenge that limits the ability to determine whether HCFAC activities are effective in reducing health care fraud and abuse is the lack of a baseline for the amount of health care fraud that exists at any point in time. Having such a baseline could provide information on the amount of health care fraud and how much it has changed in a given year or over time. We have previously reported that there currently is no reliable baseline estimate of the amount of health care fraud in the United States. Several experts told us about, or have written on, the importance of establishing a baseline when assessing the effectiveness of law enforcement programs. A baseline estimate could provide an understanding of the extent of fraud and, with additional information on program activities, could help to inform decision-making related to the allocation of resources to combat health care fraud. HHS and CMS have taken steps to try to establish a health care fraud baseline because, according to the fiscal year 2012 HCFAC report, they recognize that a baseline would allow the agencies to evaluate the success of fraud prevention activities. HHS officials stated that the Assistant Secretary for Planning and Evaluation initiated work to establish a baseline measurement, and that work was subsequently transferred to CMS's Center for Program Integrity.
According to the fiscal year 2012 HCFAC report, the project is designed to measure probable fraud in home health care agencies and will pilot test a measurement approach and calculate an estimate of probable fraud for specific home health care services. CMS and its contractor will collect information from home health care agencies, the referring physicians, and Medicare beneficiaries selected in a national random sample of home health care claims. The pilot will rely on the information collected, along with a summary of the service history of the home health care agency, the referring provider, and the beneficiary, to estimate the percentage of total payments and the percentage of all claims that are associated with probable fraud for Medicare fee-for-service home health care. CMS reports that after completion of the pilot, it will determine whether the measurement approach should be expanded to other areas of health care. Officials from the Center for Program Integrity stated that as of May 2013, they were beginning the data collection phase of the fraud baseline measurement pilot, which they expect will last 2 years. Some HCFAC-funded agencies have attempted to determine the effect of HCFAC activities on specific types of fraud in certain locations. DOJ officials provided examples of reductions in billings for certain services in specific locations and told us that they believe these reductions are associated with the work of the Strike Force teams. For example, DOJ officials reported assessing the amount of home health care billings in certain Strike Force cities before the Strike Force began operations and then again after the Strike Force had begun operations. Because the amount of home health care billing was measured before and after the Strike Force was implemented, HHS, HHS-OIG, and DOJ officials can estimate the effect that the Strike Force team had on the amount of billing in those areas. For example, in a May 14, 2013, press conference, the Attorney General noted that after the Detroit Strike Force began investigating cases of potential group-psychotherapy fraud, claims for this type of treatment in Detroit had dropped by more than 70 percent since January 2011. Making progress in preventing and reducing health care fraud and abuse is an essential yet challenging task. HHS and DOJ use a number of indicators to assess the activities they conduct to reduce health care fraud and abuse. However, the indicators do not provide information about the effectiveness of the program, and little is known about whether and how well the HCFAC program reduces health care fraud. While positive results on the program's return-on-investment can be seen as an indication of program success, the return-on-investment does not indicate the extent to which the program is reducing fraud. For example, the increasing returns from the fraud that is being investigated and prosecuted may indicate that HCFAC programming is effective in detecting or deterring potentially fraudulent schemes, or may simply reflect an increase in potentially fraudulent activity. CMS's recent effort to establish a home health care fraud baseline is a good first step toward understanding the extent of the problem and, if implemented as planned, could provide policymakers with information on how much fraud exists and, in coming years, on how potentially fraudulent activity has increased or decreased over time.
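To illustrate the kind of estimate such a baseline pilot could produce, the following is a minimal Python sketch that assumes a simple random sample of claims, each carrying a payment amount and a reviewer's probable-fraud determination. The data, field names, and underlying fraud rate are hypothetical and do not represent CMS's actual design or results.

import math
import random

# Hypothetical sample: 2,000 reviewed claims, each with a payment amount
# and a simulated probable-fraud determination.
random.seed(1)
sample = [{"payment": random.uniform(500, 5000),
           "probable_fraud": random.random() < 0.08}
          for _ in range(2000)]

n = len(sample)
share_of_claims = sum(c["probable_fraud"] for c in sample) / n
share_of_payments = (sum(c["payment"] for c in sample if c["probable_fraud"])
                     / sum(c["payment"] for c in sample))
# 95 percent confidence interval for the claim share (normal approximation).
half_width = 1.96 * math.sqrt(share_of_claims * (1 - share_of_claims) / n)

print(f"claims associated with probable fraud:   {share_of_claims:.1%} (+/- {half_width:.1%})")
print(f"payments associated with probable fraud: {share_of_payments:.1%}")

Estimating the payment-weighted share separately from the claim share matters because a small fraction of claims can account for a disproportionate share of dollars.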
However, CMS has not yet determined whether the methodology used to establish a baseline of probable fraud in home health care could be used to assess the amount of fraud in other health care services. Additionally, even with a baseline estimate of the total amount of probable fraud, there will likely be continuing challenges in understanding the effectiveness of the HCFAC program, such as isolating the program's ability to reduce or prevent fraud and abuse. Despite these inherent challenges, if a health care fraud baseline is established more broadly, it may become feasible to study how individual HCFAC activities, and possibly the program as a whole, affect changes in health care fraud. Results from these studies could provide HHS and DOJ with additional information regarding which activities are the most effective in reducing health care fraud and abuse, and could potentially inform agency decisions about how best to allocate limited resources. We provided a draft of this report to HHS and DOJ. In its written comments, reproduced in appendix V, HHS discussed its program integrity efforts to reduce fraud, waste, and abuse. HHS also provided examples of CMS's efforts to reduce fraud, waste, and abuse in Medicare. We did not include these examples in our review because the activities they describe were not supported by the funding used to calculate the return-on-investment for the HCFAC program. While not commenting specifically on our report, DOJ sent us examples of reductions in Medicare billings for specific services (such as durable medical equipment, home health services, and community mental health center services) in certain Strike Force cities. In their comments, DOJ officials stated that, based on these examples, the Strike Force efforts have produced lasting savings in Medicare payments. In addition, HHS and DOJ provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, the Attorney General, the Inspector General of HHS, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Kathleen M. King at (202) 512-7114 or [email protected] or Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. The activities listed in table 2 below represent only activities that are supported with HCFAC funds (as reported in agency documents or interviews with agency officials). The table does not include other activities conducted by the agencies that are not related to health care fraud and abuse control. Table 3 summarizes the HCFAC obligations for the Department of Health and Human Services (HHS), including the HHS Office of Inspector General, and Department of Justice components for fiscal years 2008 through 2012 by type of appropriations. An obligation is a definite commitment that creates a legal liability of the government for payment of goods and services ordered or received. The table includes obligations of mandatory HCFAC appropriations, discretionary HCFAC appropriations, and other appropriations used to support HCFAC activities.
Mandatory HCFAC appropriations refer to the HCFAC budgetary resources controlled by a law, principally the Health Insurance Portability and Accountability Act of 1996, rather than by appropriations acts. Discretionary HCFAC appropriations refer to budgetary resources provided in annual appropriations acts, other than those that fund mandatory programs. Congress appropriated mandatory funding for HCFAC activities beginning in fiscal year 1997 and appropriated discretionary funding for HCFAC activities beginning in fiscal year 2009. Other appropriations include funding from appropriations not specific to the HCFAC program that the agencies used, in addition to the HCFAC funds, to carry out activities related to health care fraud and abuse. In addition, the table shows the percentage of HCFAC obligations for personnel services and for contracted services and supplies. Table 4 summarizes HCFAC obligations for Strike Force teams for fiscal years 2008 through 2012 by the geographic location of the Strike Force teams. Strike Force teams consist of investigators and prosecutors who use data analysis techniques to identify, investigate, and prosecute potentially fraudulent activities in geographic areas with high rates of fraud.

Appendix III: Indicators Used by Agencies to Assess Health Care Fraud and Abuse Control (HCFAC) Program Activities

In the source tables for this appendix, symbols indicate whether a measure is included in the fiscal year 2012 annual HCFAC report or in the Department of Health and Human Services' Office of Inspector General (HHS-OIG) July 2013 report on Senior Medicare Patrol programs. Unless otherwise noted, the information on outcomes/output for each measure is for fiscal year 2012. For the outcomes and outputs for indicators associated with ACL's Senior Medicare Patrol program, we used the most currently available data, which were calendar year 2012 data obtained from the July 2013 HHS-OIG report on the Senior Medicare Patrol program. The outcomes and output for these indicators are also included in the fiscal year 2012 HCFAC report; however, those figures are calendar year 2011 data. In HHS-OIG's July 2013 report on the Senior Medicare Patrol program, the indicator for Medicare and Medicaid funds recovered attributable to the programs was expanded to account for both expected and actual funds recovered; in the fiscal year 2012 report, the indicator included only actual funds recovered.

Indicators used by HHS-OIG to assess activities (associated target, if applicable), with fiscal year 2012 outcomes/output where reported:
- Amount of expected recoveries, including audit receivables and investigative receivables and non-HHS investigative receivables resulting from work in areas such as states' shares of Medicaid restitution. Reported outcome: $6.9 billion, consisting of $923.8 million in audit receivables and $6 billion in investigative receivables (which includes $1.7 billion in non-HHS investigative receivables resulting from work in areas such as the states' shares of Medicaid restitution).
- Ratio of expected return on investment measuring the efficiency of HHS-OIG's health care oversight efforts (Target: $12.0)
- Questioned cost recommendations (dollar value)
- Funds put to better use recommendations
- Timeliness of draft reports (or final reports if issued without a draft) (Target: 63 percent)
- Audit receivables (disallowed questioned cost recommendations)
- Number of evaluations started (Target: 57 evaluations)
- Percentage of final reports completed within a year (Target: 55 percent)
- Complaints received (Target: 6,290 complaints)
In the source table, symbols indicate whether each of the indicators above is included in the fiscal year 2012 annual HCFAC report, in at least one of HHS-OIG's two Semiannual Reports to Congress for fiscal year 2012, or in HHS-OIG's Fiscal Year 2014 Justification of Estimates for Appropriations Committees, which reports outcomes/output of indicators for fiscal year 2012.

Indicators used by DOJ components and the FBI to assess activities (associated target, if applicable), with reported outcomes/output where available:
- Number of new civil health care fraud investigations opened
- Number of civil health care fraud matters pending at the end of the fiscal year
- Number of investigations completed per Department of Justice attorney working on financial fraud and health care fraud cases (Target: 11.92 investigations per attorney)
- Percentage of civil cases favorably resolved for litigating divisions (Target: 80 percent of civil cases favorably resolved)
- Number of cases favorably resolved for litigating components (Target: 80 percent of civil cases favorably resolved)
- Average number of months of prison sentences in health care fraud cases
- Amount secured through court-ordered restitution, forfeiture, and fines
- Number of investigations completed per Department of Justice attorney working on financial fraud and health care fraud cases (Target: 11.92 investigations per attorney)
- Percentage of criminal cases favorably resolved for litigating divisions (Target: 90 percent of criminal cases favorably resolved)
- Number of new health care fraud investigations initiated by the FBI (Targets vary by field office)
- Number of pending health care fraud investigations (Targets vary by field office)
- Number of FBI health care fraud investigators and analysts that received training (Targets vary by field office)
- Number of dismantled criminal enterprises engaging in white-collar crime (Target: 360 criminal enterprises)
- Reported outcomes in fiscal year 2011: $1.2 billion in restitutions; $1 billion in fines; $96 million in seizures; $320 million in civil restitution; and over $1 billion in civil settlements
- Number of federal health care fraud related convictions
- Number of new civil health care fraud investigations opened
- Number of civil health care fraud investigations pending
- Number of investigations completed per Department of Justice attorney working on financial fraud and health care fraud cases (Target: 11.92 investigations per attorney)
- Percentage of criminal cases favorably resolved for litigating divisions (Target: 90 percent of criminal cases favorably resolved; 80 percent of civil cases favorably resolved)
- Percent of white collar crime cases concerning mortgage fraud, health care fraud, and official corruption favorably resolved (Target: 90 percent of white collar cases favorably resolved). Reported outcome: 92.2 percent of white collar crime cases favorably resolved in fiscal year 2010.

In the source table, symbols indicate whether each measure is included in the fiscal year 2012 annual HCFAC report, in DOJ's Performance and Accountability Report, in DOJ's Performance Plan for Fiscal Year 2012, or in the FBI's Financial Crimes Report to the Public for Fiscal Years 2010-2011. Unless otherwise noted, the information on outcomes/output for each measure is for fiscal year 2012. The outputs for these indicators are included in the summary of the HCFAC report and in the section regarding USAO activities; we report the outputs in the Civil Division and USAO sections of this table because the outputs include civil matters handled by the USAOs and/or the Civil Division.
These measures are reported at the departmental level for DOJ, to which several DOJ components contribute, and they include health care fraud cases in addition to other cases. Some of the outcomes/outputs associated with the Strike Force teams are subsets of outcomes/outputs reported for the HCFAC program as a whole. For example, the number of defendants charged in Strike Force cases is a subset of the total number of defendants in health care fraud-related cases where criminal charges were filed. As a result, the outcomes/outputs reported in this table may be duplicative. In addition to the contacts named above, Martin T. Gahart, Assistant Director; Tom Jessor, Assistant Director; Christie Enders; Sandra George; Drew Long; Lisa Rogers; and Meghan Squires made significant contributions to the work.
GAO has designated Medicare and Medicaid as high-risk programs partly because their size, scope, and complexity make them vulnerable to fraud. Congress established the HCFAC program and provided funding to HHS and DOJ to help reduce fraud and abuse in Medicare and Medicaid. GAO was asked to examine how HHS and DOJ are using funds to achieve the goals of the HCFAC program, and to examine performance assessments and other metrics that HHS and DOJ use to determine the program's effectiveness. This report (1) describes how HHS and DOJ obligated funds for the HCFAC program, (2) examines how HHS and DOJ assess HCFAC activities and whether key program outputs have changed over time, and (3) examines what is known about the effectiveness of the HCFAC program in reducing health care fraud and abuse. To describe how HHS and DOJ obligated funds, GAO obtained financial information from HHS and DOJ for fiscal year 2012. To examine how HHS and DOJ assess HCFAC activities and whether key outputs have changed over time, GAO reviewed agency reports and documents, and interviewed agency officials. To examine what is known about the effectiveness of the HCFAC program, GAO conducted a literature review and interviewed experts. In comments on a draft of this report, HHS noted examples of CMS's efforts to reduce health care fraud, though these examples were not included in the HCFAC return-on-investment calculation. Additionally, HHS and DOJ provided technical comments, which GAO incorporated as appropriate. In fiscal year 2012, the Department of Health and Human Services (HHS), HHS Office of Inspector General (HHS-OIG), and the Department of Justice (DOJ) obligated approximately $583.6 million to fund Health Care Fraud and Abuse Control (HCFAC) program activities. About 78 percent of obligated funds were from mandatory HCFAC appropriations (budgetary resources provided in laws other than appropriation acts), 11 percent were from discretionary HCFAC appropriations (budgetary resources provided in appropriation acts), and 12 percent were from other appropriations that HHS, HHS-OIG, and DOJ used to support HCFAC activities. HCFAC funds were obligated to support a variety of activities, including interagency Medicare Fraud Strike Force Teams--which provide additional investigative and prosecutorial resources in geographic areas with high rates of health care fraud--located in 9 cities nationwide. HHS, HHS-OIG, and DOJ use several indicators to assess HCFAC activities, as well as to inform decision-makers about how to allocate resources and prioritize those activities. For example, in addition to other indicators, the United States Attorneys' Offices use indicators related to criminal prosecutions, including the number of defendants charged and the number of convictions. Additionally, many of the indicators that HHS, HHS-OIG, and DOJ use--such as the dollar amount recovered as a result of fraud cases--reflect the collective work of multiple agencies since these agencies work many health care fraud cases jointly. Outputs from some key indicators have changed in recent years. For example, according to the fiscal year 2012 HCFAC report, the return-on-investment--the amount of money returned to the government as a result of HCFAC activities compared with the funding appropriated to conduct those activities--has increased from $4.90 returned for every $1.00 invested for fiscal years 2006-2008 to $7.90 returned for every $1.00 invested for fiscal years 2010-2012.
Several factors contribute to a lack of information about the effectiveness of HCFAC activities in reducing health care fraud and abuse. The indicators agencies use to track HCFAC activities provide information on the outputs or accomplishments of HCFAC activities, not on the effectiveness of the activities in actually reducing fraud and abuse. For several reasons, assessing the impact of the program is challenging. For example, it is difficult to isolate the effect that HCFAC activities, as opposed to other efforts such as changes to the Medicare provider enrollment process, may have in reducing health care fraud and abuse. It is also difficult to estimate a health care fraud baseline--a measure of the extent of fraud--which is needed to track whether the amount of fraud has changed over time as a result of HCFAC or other efforts. HHS has a project under way to establish a baseline of probable fraud in home health care, and will determine whether this approach to estimating a baseline of fraud should be expanded to other areas of health care. Results from this project and other studies could provide HHS and DOJ with additional information regarding which activities are the most effective in reducing health care fraud and abuse, and could potentially inform agency decisions about how best to allocate limited resources.
Created in 1961, the Peace Corps is mandated by statute to help meet developing countries' need for trained manpower while promoting mutual understanding between Americans and other peoples. Volunteers commit to 2-year assignments in host communities where they work on projects such as teaching English, strengthening farmer cooperatives, or building sanitation systems. By developing relationships with members of the communities in which they live and work, volunteers contribute to greater intercultural understanding between Americans and host country nationals. Volunteers are expected to maintain a standard of living similar to that of their host community colleagues and coworkers. They are provided with stipends based on local living costs and with housing similar to that of their hosts. Volunteers are not supplied with vehicles. Although the Peace Corps accepts older volunteers and has made a conscious effort to recruit minorities, the current volunteer population has a median age of 25 years and is 85 percent white. More than 60 percent of the volunteers are women. The Peace Corps emphasizes community acceptance as the key to maintaining volunteer safety and security. The agency has found that volunteer safety is best ensured when volunteers are well integrated into their host communities and treated as extended family members and contributors to development. While emphasizing protection measures such as locks and window bars, the Peace Corps generally avoids measures such as housing volunteers in walled compounds, which would reduce volunteer integration into the community. The agency also typically withdraws from countries in which breakdowns in civil authority require strong protection or deterrence measures to protect volunteers. To the extent that they share the Peace Corps' commitment to advancing intercultural understanding, other organizations that face similar security and safety challenges also tend to emphasize community acceptance as an underlying principle. Appendix II presents in greater detail the safety and security practices of some of these organizations. The Peace Corps' Office of Medical Services created and operates a system for recording and analyzing crime information that focuses primarily on assault crimes. Peace Corps reports show that reported rates of assault nearly doubled from the early 1990s to the latter part of the decade. Agency officials note that this may be attributable to a number of factors, including agency efforts to improve data collection and volunteer reporting. The Peace Corps has used its data analyses to gain insight into the characteristics of assaults against volunteers and to shape volunteer training programs. However, the full extent of crime against volunteers is unclear because recent volunteer surveys show that volunteers may significantly underreport crime. Additional analyses would enhance the agency's ability to understand trends in crime and apply this understanding to its crime prevention and intervention strategies. Since 1990, the Office of Medical Services has collected information on assaults from post medical staff around the world and has produced analyses of incidence rates and characteristics of assaults, such as time and place of occurrence, weapons employed, and injuries sustained. Medical staff also collect summary information on the number of nonassault crimes, such as burglaries and thefts, occurring at posts each month.
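The assault statistics discussed below are normalized to volunteer-years of exposure. The following minimal Python sketch shows that normalization; the incident and exposure counts are hypothetical, chosen only to land near the reported late-1990s rate.

# Incidence rate per 1,000 volunteer-years; counts are hypothetical.
def incidence_rate(incidents, volunteer_years, per=1000):
    return incidents / volunteer_years * per

# e.g., 110 reported major physical assaults over 6,500 volunteer-years
print(f"{incidence_rate(110, 6500):.1f} per 1,000 volunteer-years")  # ~16.9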
The office periodically publishes reports containing its analytical results and distributes these reports to senior staff, country directors, and post medical officers. Appendix III provides additional information on the processes employed to gather information and produce these reports. Reported incidence rates for most types of assaults have been higher in recent years, as shown in figures 1 and 2. For example, the reported incidence rate for major physical assaults nearly doubled from an average of about 9 per 1,000 volunteer years for 1991 through 1993 to an average of about 17 per 1,000 volunteer years for 1998 through 2000. Reported incidence rates also increased for minor physical assaults and, to a lesser extent, for minor sexual assaults. The reported rate of major sexual assaults decreased from about 10 incidents per 1,000 female volunteer years at the beginning of the 1990s to an average of slightly more than 8 per 1,000 female volunteer years at the end of the decade. According to agency officials, the decreasing incidence of major sexual assaults in the face of increases in minor sexual assaults suggests that the decline in major sexual assaults is a true decline rather than a reporting artifact. Appendix III provides more information on crime rates and trends. According to Peace Corps officials, the general increase in reported assaults may reflect an actual increase in the number of such incidents suffered by volunteers, better efforts by the agency to ensure that all medical officers report all assault events, or an increased willingness among volunteers to report incidents. Through its volunteer satisfaction surveys, the agency is aware that the level of underreporting is significant. For example, according to the 1998 survey, volunteers did not report 60 percent of rapes and 20 percent of nonrape sexual assaults. Underreporting reduces the Peace Corps' ability to state crime rates with certainty and to develop well-informed plans for addressing crime problems. The agency has taken steps to encourage volunteers to report incidents. For example, the coordinator for volunteer safety and security stated that he is developing training materials for medical officers to ensure that they transmit clear messages to volunteers about incident reporting. The Peace Corps is also including questions about underreporting in its current volunteer satisfaction survey. Volunteers may not report criminal incidents for a variety of reasons, including embarrassment, fear of repercussions, concern about confidentiality, and the belief that Peace Corps staff could not help. Volunteers may decline to report minor incidents when, aside from offering counseling, it is unclear what Peace Corps staff can do for them. In addition, volunteers are sometimes unclear about what to report, and staff observed that definitions for reportable nonassault crimes, in particular, need clarification. The Peace Corps' system for gathering and analyzing data on crime against volunteers has produced useful insights, but opportunities for additional analyses may help the agency develop better-informed intervention and prevention strategies. In addition, the results of agency analyses could be more broadly shared. Some post medical officers we interviewed stated that they use headquarters analyses of crime data during volunteer training to illustrate the risks volunteers face. These analyses also have influenced the content of the Peace Corps' volunteer training programs.
For example, agency analyses of the circumstances surrounding rape incidents have shown that nearly 60 percent of such crimes from 1993 to 1999 were perpetrated by volunteers' friends, coworkers, or acquaintances, and that more than 50 percent occurred in a home environment. The Peace Corps' coordinator for volunteer safety and security stated that the agency was revising volunteer rape awareness training materials to reflect these insights. In recent years, the Peace Corps has made a number of improvements in its crime data collection and analysis system. In 1999, the agency revised its assault reporting form to include information on victim and assailant alcohol use and on whether victims were alone when incidents occurred. Additional analyses would enhance the Peace Corps' ability to identify other characteristics of crimes and crime risk factors and develop better-informed prevention and intervention strategies. For example, as shown in figure 3, we found that the number of reported assaults is highest among volunteers in their first few months of service. Nearly a third of all reported assaults after 1993 occurred in the volunteer's 4th to 8th months of service—immediately after volunteers have completed training and taken up residence at their assigned sites. This finding could be explored and the results considered in developing volunteer training materials. Medical staff and safety and security staff at the Peace Corps agreed that the agency could benefit from additional research on crime against volunteers but observed that neither the medical office nor the coordinator for volunteer safety and security had staff available to perform such research. The new initiatives that the Peace Corps has stated it will implement include hiring a statistician to perform additional analyses of crime data. The Peace Corps distributes its crime data analyses to agency officials but does not make this information available to potential volunteers. For example, it does not post the results, or a summary thereof, on the agency's Web site. Most volunteers we interviewed in the field stated that they had been provided little or no specific information on crime incidents before their arrival in the country for preservice training. The Peace Corps' safety and security initiatives include efforts to more fully inform applicants and recruits of the safety and security challenges they are likely to face as volunteers. Volunteer health, safety, and security is the Peace Corps' highest priority, according to the agency. To address this commitment, the agency has adopted policies for monitoring and disseminating information on the security environments in which the agency operates, training volunteers, developing safe and secure volunteer housing and work sites, monitoring volunteers, and planning for emergencies such as evacuations. Headquarters is responsible for providing guidance, supervision, and oversight to ensure that agency policies are implemented effectively. The Peace Corps relies heavily on country directors—the heads of agency posts in foreign capitals—to develop and implement practices that are appropriate for specific countries. Country directors, in turn, rely on program managers to develop and oversee volunteer programs. Volunteers are expected to follow agency policies and exercise some responsibility for their own safety and security.
Peace Corps headquarters is responsible for establishing the agency's safety and security policy framework and supports posts in implementing these policies through (1) guidance and training and (2) supervision and oversight. According to agency officials, the Peace Corps has long regarded volunteer safety and security as its highest priority. The agency maintains this focus in its current strategic planning documents, prepared under the provisions of the Government Performance and Results Act. In 1999, the Peace Corps established a policy framework that outlines the agency's principles for maintaining volunteer safety and security. These agencywide policies are broadly phrased to give country directors flexibility in developing procedures that suit conditions in countries as diverse as Belize and Kazakhstan. Peace Corps policies cover the following:

- Monitoring and disseminating information on the security environment in Peace Corps countries. Volunteers should be provided with a clear understanding of the risks they face (including an overall assessment of the risks facing volunteers and information on country-specific conditions) so that they can make informed decisions about their own safety.
- Training volunteers. Volunteers should be provided with training that prepares them to "adopt culturally appropriate lifestyles and exercise judgment that promotes safety and reduces risk in their home, at work, and while traveling."
- Developing volunteer housing and work sites. Volunteers should be placed in "appropriate, safe, and secure housing and work sites." Criteria for selecting sites include the potential for volunteers' obtaining and maintaining "acceptance" in the communities where they will live and work.
- Monitoring sites and volunteers and responding to safety concerns and criminal incidents. Post staff should make periodic visits to volunteer sites and respond to volunteer safety and security concerns and incidents, including crimes against volunteers.
- Planning for emergencies. Posts must maintain accurate contact information on all volunteers and develop and annually test emergency action plans (EAP) to guide staff and volunteers in the event of a natural disaster, political unrest, or other emergency. Headquarters is to review the EAPs and the EAP test results.

Headquarters has developed written guidance and training for headquarters and field staff to support implementation of safety and security policies. In collaboration with other agency officials, the coordinator for volunteer safety and security has developed a variety of guidance materials for posts, including information on "best practices" in safety and security operations from posts around the world, crisis management and rape response handbooks, and training modules that posts can apply in preparing volunteer safety and security training programs. These materials are generally nonprescriptive and can be adapted to country-specific conditions. Peace Corps staff, including country directors and program managers at posts, are given training in safety and security procedures as part of their introduction to their positions. For example, all new program managers attend a 4-week overseas staff training session in Washington, D.C., that addresses safety and security issues and other aspects of their work. Agency staff also attend periodic in-service training events that may include safety and security matters. Headquarters also provides supervision and oversight.
Three regional directors, each assisted by a small staff of country desk officers, supervise Peace Corps posts abroad. Agency policies state that these regional directors are to ensure that country directors establish effective volunteer safety and security support systems. The regional directors, with their country desk officers, monitor post operations in all areas—including safety and security—by E-mail, telephone, and occasional country visits. This informal dialogue is supplemented by formal submission and review of post EAPs and EAP test results. In addition to these regional directors, Peace Corps' Office of Volunteer Safety and Overseas Security (headed by a coordinator for volunteer safety and security) and the Office of the Inspector General contribute to headquarters' supervision and oversight of post practices. A field-based regional safety and security officer works in each of the three regions. At the request of regional or country directors, these officers review and provide advisory reports on post safety and security practices. The Office of the Inspector General, among other things, reviews safety and security operations at posts and issues formal recommendations that require an official post response. As previously mentioned, Peace Corps country directors are responsible for developing procedures to ensure that the agency's broadly phrased policies are implemented effectively in specific countries. For example, country directors develop safety and security criteria for prospective volunteer sites and procedures for ensuring that sites meet these criteria before volunteers arrive. They also develop and provide volunteer safety and security training programs in accordance with agency policies. Volunteers are expected to exercise responsibility for their own safety and security. They are expected to reduce the level of risk they face at their sites and while traveling by complying with post policies and exercising good judgment. They do this in part through the relationships they build with sponsoring organizations and elements of the local community. Peace Corps posts employ a number of program managers who work with local organizations to develop programs in areas such as education and health and to identify housing and work assignments for volunteers. After 3 months of in-country training, volunteers move to diverse sites, often far from Peace Corps posts, where they live in a community and work with local counterpart organizations such as schools and municipal governments. Program managers are expected to monitor volunteers once they arrive at their sites and to provide support when needed. Volunteers do not work directly for or have daily contact with agency staff, however. They are not considered U.S. government employees for most purposes, nor do they have diplomatic immunity. Peace Corps' efforts to implement its safety and security policies have produced varying results. We found mixed performance in key areas, which may expose some volunteers to risk. Volunteers are generally satisfied with the safety and security information and training they receive. We identified a number of instances of uneven performance in developing safe and secure housing and work sites and responding to volunteers' safety concerns.
In addition, while all posts have developed an EAP that they test at least annually, the plans and tests vary in quality and comprehensiveness, and the Peace Corps does not have information about how long it would take to reach its volunteers in case of an emergency. A number of factors, including unclear guidance, inadequate staff training, uneven application of supervisory and oversight mechanisms, and staff turnover, hamper Peace Corps efforts to ensure high-quality performance for the agency as a whole. Posts are responsible for monitoring the host country’s safety and security environment and for keeping volunteers informed about safety and security issues. Numerous volunteers we met with were generally satisfied with post efforts in this area. The Peace Corps does not require country directors to prepare formal assessments of the security environment. In general, country directors stay informed about the security environment through regular discussions with local Department of State security officials, information on crime reported by volunteers, and other means. Posts use various mechanisms, such as newsletters, E-mail, and memorandums to disseminate safety information to volunteers. Although posts vary in how and when they disseminate such information, volunteers at the posts we visited said they were fairly satisfied with the level of information they receive about safety and security. According to the 1998 and 1999 volunteer satisfaction surveys, over 80 percent of volunteers found that the Peace Corps kept them adequately or well-informed regarding safety and security, while around 14 percent said that they were not at all informed or poorly informed. Training is central to the Peace Corps’ approach to volunteer safety. Volunteers are generally satisfied with the safety training that the agency provides. Posts have considerable latitude in the design of their safety training programs, but all provide volunteers with 3 months of preservice training that includes information on safety and security. Posts also provide periodic in-service training sessions that cover technical issues. Many of the volunteers we interviewed said that the safety training they received before they began service was useful and cited testimonials by current volunteers as one of the more valuable instructional methods. In both the 1998 and 1999 volunteer satisfaction surveys, over 90 percent of volunteers rated safety and security training as adequate or better; only about 5 percent said that the training was not effective. Some regional safety and security officer reports have found that improvements were needed in post training practices. The inspector general has reported that volunteers at some posts said cross-cultural training and presentations by the U.S. embassy’s security officer did not prepare them adequately for safety-related challenges they faced during service. Some volunteers stated that the Peace Corps did not fully prepare them for the racial and sexual harassment they experienced during their service. Some female volunteers at posts we visited stated that they would like to receive self-protection training. Although many volunteers are provided with housing that meets Peace Corps standards and well-defined work assignments, some volunteers do not have this experience. We found that volunteer housing is not always inspected before the volunteer arrives, some housing does not meet posts’ standards, and some posts have unclear or nonexistent guidance for selecting volunteer housing. 
In addition, vaguely defined work assignments and unsupportive counterparts may also increase volunteers' risk by limiting their ability to build a support network in their host communities. We also found that posts did not maintain documentation of safety and security information and problems by site location, which limits the ability of Peace Corps staff to make informed decisions about future placements and could lead to placing volunteers at sites that have previously experienced safety problems. Peace Corps policies call for posts to ensure that housing is inspected and meets post safety and security criteria before the volunteers arrive to take up residence. Nonetheless, some volunteers arrive at their sites to find that their housing is not ready, has not been inspected, or does not meet post standards. At all of the posts we visited, we found instances of volunteers who began their service in housing that had not been inspected and had various shortcomings. For example, one volunteer spent her first 3 weeks at her site living in her counterpart's office. She later found her own house; however, post staff had not inspected this house even though she had lived in it for several months. In other cases, volunteers and staff said that housing was approved despite deficiencies, with the understanding that the community would rectify the problems before the volunteer arrived. The community failed to comply, however, and staff did not revisit the sites to ensure that problems had been resolved. Several inspector general safety assessments reported instances where the Peace Corps' failure to inspect housing resulted in volunteers' not having appropriate housing when they arrived at their sites. According to recent Peace Corps reports, some posts have unclear or nonexistent criteria for selecting a house, which can result in volunteers living in inappropriate housing. For example, the Peace Corps' review of one post found that unclear housing standards led to multiple instances of volunteers' living in inadequate housing. In one case, a volunteer lived in a one-room apartment with her counterpart and the counterpart's boyfriend. Poorly defined assignments and unsupportive counterparts may also increase volunteers' risk by limiting their ability to build a support network in their host communities. Our previous work in this area has shown that the Peace Corps has had difficulty providing volunteers with well-structured assignments. At the posts we visited, we met volunteers whose counterparts had no plans for them when they arrived at their sites, and only after several months and much frustration did the volunteers find productive activities. Several inspector general reports support this finding. For example, at one post volunteers reported that their coworkers were poorly prepared, or not prepared at all, for their arrival. Some volunteers had no real job to do or had not been assigned a counterpart. Senior Peace Corps officials agreed that poorly defined assignments pose a safety risk because volunteers who lack the routine a job provides may spend time away from their sites and have difficulty integrating into their communities. While 76 percent of volunteers in the 1999 volunteer satisfaction survey said that their assignment responsibilities were moderately or mostly clear, 24 percent said these responsibilities were somewhat or not at all clear. Peace Corps policy requires posts to maintain site history files documenting the placement of volunteers at specific sites.
Staff thus should have a record of the safety and security environment at volunteer placement sites to help ensure that other volunteers are not placed at sites with significant problems. Four of the five posts we visited did not fully comply with Peace Corps requirements—most of these kept records of safety and security problems in the volunteers' personal files, thereby making it difficult for program managers to access information about specific sites. Inadequate or nonexistent site history files can affect staff's ability to make informed decisions about future placements and could lead to placements in areas where volunteers have previously experienced safety problems. For example, at one post we visited, two female volunteers who experienced severe sexual harassment at their site were reassigned to new sites. Records of the incident were kept in their personal files, but the post had no geographically organized file to track the problem after the volunteers were moved. A female volunteer from another program area was later placed in a nearby assignment that required her to travel regularly through the site where the difficulties had occurred. Reports by Peace Corps' inspector general and regional safety and security officers have also cited problems with posts' site history files. Peace Corps guidance does not specify how its posts should monitor volunteers. Peace Corps policy allows each post flexibility in establishing the frequency of required staff visits to volunteer sites. Posts conduct site visits to assist volunteers and monitor their activities. We found that there is variation in the frequency of staff contact with volunteers. In addition, volunteers have mixed views on staff responsiveness to safety and security concerns and criminal incidents. We reviewed about 25 percent of all site visit policies established by posts and found that the required frequency of staff visits to volunteer sites ranged from once per year to four times during the first year of service. Volunteers may have more frequent contact with Peace Corps staff if they wish. At the five posts we visited, we found that staff made regular site visits to most volunteers, in accordance with each post's policies. In the 1998 volunteer satisfaction survey, 68 percent of volunteers reported that the frequency of site visits was adequate or better; 21 percent said that the frequency of site visits was inadequate. Many volunteers at the posts we visited were satisfied with the frequency of site visits. Many Peace Corps staff told us that it is sometimes difficult for them to stay abreast of volunteers' whereabouts when volunteers are away from their sites. Some staff also said that volunteers face safety risks when they are away from their sites because the volunteers are outside their supportive network and because public transportation may be unsafe. The posts we visited have policies to keep track of volunteers who leave their sites, but we found that volunteers' compliance with these policies was uneven. Many volunteers we interviewed said that they do not always inform the Peace Corps when they leave their sites, but they may inform other people such as neighbors. One reason volunteers may not report their whereabouts is that Peace Corps policy states that volunteers are "on duty" 7 days per week. Although posts may not follow this policy in practice, some volunteers said they are reluctant to inform the post when they plan to leave their sites because they worry that the post may deduct vacation days.
This practice may make it difficult for the Peace Corps to contact volunteers in an emergency. Volunteers had mixed views about the Peace Corps' responsiveness to safety and security concerns and criminal incidents. (Appendix IV describes Peace Corps provisions for responding to criminal incidents.) The few volunteers we spoke with who said that they were victims of assault expressed satisfaction with staff response when they reported the incidents. However, at four of the five posts we visited, some volunteers described instances in which staff were unsupportive when the volunteers reported nonassault safety concerns. For example, one volunteer we interviewed informed Peace Corps staff several times that she needed a new housing arrangement because her doorman repeatedly locked her in or out of her dormitory. The volunteer said staff were unresponsive, and she had to find new housing without the Peace Corps' assistance. In the 1998 and 1999 volunteer satisfaction surveys, 60 percent of volunteers stated that they were satisfied with safety and security support provided by Peace Corps staff, and about 35 percent reported that they were dissatisfied or only somewhat satisfied with this support. According to the 1998 survey, 64 percent of volunteers said that staff response to issues raised during site visits was adequate or better, but 26 percent of volunteers said staff response was inadequate. Senior Peace Corps officials recognize the importance of responding to volunteer safety concerns, and one acknowledged the need to improve staff responsiveness, particularly to nonassault incident reports. At two posts we visited, country directors attributed unsupportive responses to poor communications between volunteers and staff and to staff attitudes toward volunteers. Posts must be well prepared in case an evacuation becomes necessary—the Peace Corps evacuated more than 1,600 volunteers from 26 posts from 1993 to 2001. Peace Corps policy requires that all posts develop an EAP, test it annually, and submit it and the test results to headquarters. We found that posts complied with these requirements. However, we also found that some posts' EAPs lacked key information, and none of the EAPs contained all of the dimensions listed in the EAP guidance for developing effective emergency plans. Moreover, the Peace Corps has not defined criteria for a successful EAP test, nor is there a standard format for reporting test results. Both factors make it difficult for the Peace Corps to assess posts' emergency drills. The Peace Corps' EAP policy requires posts to develop an EAP tailored to the conditions at that post and to test it annually; we found that all posts had done so. To guide posts through the development of an EAP, the Peace Corps has created a suggested format designed to assist the posts in formulating effective emergency plans. This format, a checklist of about 25 dimensions, includes items such as alternate transportation plans; maps demarcating assembly points; a description of the embassy warden system; a host government collaboration agreement that lists other government offices that could be used as a resource during an emergency; and methods for emergency communications. In our review of 65 EAPs (over 90 percent of total EAPs), we found that none of the EAPs we examined contained all of the dimensions listed in the EAP checklist, and, as illustrated in figure 4, many lacked key information.
Recent Peace Corps reviews and inspector general evaluations have also identified numerous deficiencies in post EAPs, including inadequate emergency contact information, undeveloped emergency communication networks, and insufficient or nonexistent collaborative arrangements with the host country government—items called for in the EAP checklist. A Peace Corps official stated that some of the checklist items were not included in the EAPs because they were not applicable. However, we found that the submitted EAPs did not explain why this information was not applicable. Peace Corps policy requires that all posts test their EAPs but does not establish detailed criteria for evaluating the results of the tests or for recording the results uniformly. The agency allows country directors discretion in making decisions in these areas. According to the EAP guidelines, making contact with volunteers is one of the first steps in responding to a crisis. In some cases, posts set time frame goals for reaching volunteers, either through communication technology or by travel to the volunteer, as benchmarks for measuring the test's success. For example, of the five country directors we interviewed, two had set targets of reaching at least 90 percent of their volunteers within 24 hours; both country directors achieved their goals. Our review of EAP test results showed that most tests are limited to sending a message to all volunteers during business hours and requesting that volunteers respond when they receive the message. According to a senior Peace Corps official, this does not indicate how the plan would work in a real emergency. As shown in figure 5, in our analysis of 63 EAP test results (over 90 percent of all results) submitted to headquarters, we found that 40 percent of posts did not provide information to headquarters on the length of time it took them to contact volunteers. Several factors contribute to the uneven implementation of the Peace Corps' safety and security policies. These factors include unclear guidance and weaknesses in safety and security training for staff and volunteer leaders, uneven application of supervision and oversight mechanisms, and turnover among U.S. direct hire staff. The Peace Corps' safety and security framework outlines general requirements that posts are expected to comply with but often does not specify required activities, documentation, or criteria for judging actual practices. This may make it difficult for staff to understand what is expected of them. Many posts have not developed clear reporting and response procedures for incidents such as sexual harassment. The agency's coordinator for volunteer safety and security said that unclear procedures make it difficult for senior staff, including regional directors, to establish a basis for judging the quality of post practices. The coordinator also observed that, at some posts, regional safety and security officers had found that staff members did not understand what had to be done to ensure compliance with agency policies. Although the Peace Corps provides new staff with training on safety and security procedures, evidence suggests that staff training may not always be adequate. In addition, volunteer leaders and wardens who are assigned safety and security responsibilities are not always provided with relevant training. Program managers with whom we spoke found their initial 4-week overseas staff training useful.
However, some country directors said that provisions could be strengthened for training lower-level staff with significant safety and security responsibilities and for continuing the education of long-time program managers. A senior Peace Corps official agreed with the latter observation, noting that assessment of staff members' long-term training experience was warranted. Peace Corps reports have also found that some volunteer leaders who assist in site selection and volunteer monitoring and who act as contact points in the event of an emergency do not receive adequate training and are not prepared to discharge their safety-related duties. Our interviews with volunteer leaders and wardens at five posts support this finding. For example, we visited one post where staff members relied on six volunteer leaders to play a significant role in developing sites and responding to volunteer concerns. Four of these volunteer leaders had held the position for several months, but the Peace Corps had not yet trained them for their duties. All of them expressed concern that post staff expected them to take the lead in site development even though they had not been trained to do this. At another post, we visited a volunteer warden whose site is a consolidation point in the event of volunteer evacuation to a neighboring country. She said that the Peace Corps had provided her with no training on her responsibilities in case of an emergency. Informal supervisory mechanisms and a limited number of staff hamper the Peace Corps' efforts to ensure effective supervision and oversight. The agency has some formal mechanisms for documenting and assessing post practices, including the annual evaluation and testing of post EAPs and regional safety and security officer reports on post practices. Nonetheless, regional directors and country directors rely primarily on informal supervisory mechanisms, such as staff meetings, conversations with volunteers, and e-mail, to ensure that staff are doing an adequate job of implementing the safety and security framework. Several country directors and a former regional director stated that overreliance on informal communications can hinder adequate oversight of staff performance in key areas. One country director observed, for example, that it is difficult to oversee program managers' site development or monitoring activities because the post does not have a formal system for doing so. The Peace Corps' limited use of written or computerized records compounds difficulties in supervising staff at posts and in identifying implementation problems, including noncompliance; moreover, records that are kept are not always updated. Officials from the Inspector General's office noted that their work revealed important disparities among posts in their ability to maintain computerized records, especially site histories and volunteer files. For example, one post we visited had created computerized record-keeping systems that permitted easy access to information on site visits and volunteer concerns, greatly facilitating effective supervisory review of the quality of staff support for individual volunteers over time. Another post was in the initial stages of creating such a system. Other posts, however, had no such systems and did not require staff to complete site visit reports to be filed by volunteer or location. Some posts we visited did not formally document nonassault crimes unless the volunteer reported the incident to the medical office.
The Peace Corps' regional safety and security officers and staff from the Inspector General's office play an important role in helping posts implement the agency's safety and security framework. However, the number of staff in these offices limits their ability to provide input to posts. Staff at headquarters and at the posts where the agency's three regional safety and security officers have provided assistance view these officers as a resource for enhancing volunteer safety and security. Officers' visits to posts can include activities such as leading workshops with volunteers and post staff to assess security practices; training post staff and volunteers on safety and security issues; assisting posts in testing their EAPs and providing feedback on the results; and helping posts respond to specific safety and security challenges, such as preparing for national elections or reevaluating the security situation in light of changing country conditions. However, according to the Peace Corps, the officers provided input to only about one-third of the agency's posts between October 2000 and May 2002. Oversight by the inspector general's staff is also limited because of staffing levels. From December 1999 through December 2001, the inspector general issued reports containing findings on safety and security practices at 12 posts. In addition, the Peace Corps has no system to track post compliance with inspector general recommendations to ensure that they are properly implemented. According to agency officials, however, the agency is working to develop such a system. One factor that may contribute to the Peace Corps' difficulty in implementing its safety and security policies is turnover among key managers. According to a June 2001 Peace Corps workforce analysis, turnover among U.S. direct hires was extremely high, ranging from 25 to 37 percent in recent years. This report found that the average tenure of these employees was 2 years, that the agency spent an inordinate amount of time selecting and orienting new employees, and that frequent turnover produced a situation in which agency staff are continually "reinventing the wheel." The report attributed much of the problem to the 5-year employment rule, which statutorily restricts the tenure of U.S. direct hires, including regional directors, country desk officers, country directors and assistant country directors, and inspector general and safety and security staff. Several Peace Corps officials said that turnover affects the agency's ability to maintain continuity in oversight of post operations. In addition, the lack of documentation described above, combined with high turnover, means that the agency is losing opportunities to apply lessons learned from previous staff tenures. In May 2002, the Peace Corps informed us of a number of initiatives that the agency had already taken or intended to take to improve its current safety and security practices. Peace Corps officials noted that these initiatives were generated through an agencywide safety and security review that began in fall 2001. The agency's initiatives are intended to address many of the issues we identified and may lead to improved safety and security practices. However, the Peace Corps faces important challenges in implementing these initiatives, and their impact on agency practices remains to be seen. The Peace Corps' initiatives are intended to improve the agency's safety and security practices and make them more uniform.
(See figure 6 for an overview of the Peace Corps' initiatives announced in May 2002.) For example, they are intended to clarify guidance, strengthen supervision and oversight mechanisms, and provide human resources to help maintain documentation and perform research into patterns and trends in crime against volunteers. To support country directors' efforts, the agency plans to hire additional safety and security staff at all levels. At headquarters, the agency has stated that it will hire an associate director for safety and security who will have responsibility for overseeing all agency safety and security activities. To assist the new associate director, the Peace Corps increased its staff of field-based regional safety and security officers from three to seven in June 2002. The agency plans to add five more officers in 2003. To strengthen the agency's ability to analyze and apply information on crime against volunteers, the Peace Corps has stated that it will provide the new associate director with a safety and security data manager/analyst who will research crime trends and related issues, in collaboration with the Office of Medical Services. To assist regional directors in supervising country director activities, the Peace Corps plans to provide each of the regional directorates with a headquarters-based security officer who will work with the country desk units to monitor and assist post efforts to ensure that their safety and security systems meet agency expectations. To provide full-time assistance at the country level, all posts have been authorized to hire safety and security administrative associates. The agency expects at least 35 posts to create such positions within a year. Among other things, these new staff members will assume responsibility for ensuring that posts maintain accurate and complete records on site histories, site visits, and criminal incident reports. To improve staff understanding of agency safety and security policies and requirements, a 2-year cycle of safety and security training has been authorized. This training will be delivered through an ongoing series of subregional workshops, each attended by six staff members from each post and led by field-based regional safety and security officers. A series of training sessions for country desk officers and other headquarters staff will be led by headquarters-based regional security officers. In addition, the agency has provided easier access to its safety and security guidance by placing all relevant materials in a single location on its agencywide intranet. Posts that do not have easy access to the Internet were provided with these materials on a compact disc, produced in February 2002. As the Peace Corps begins to implement its recently announced initiatives, it will face a number of important challenges. The agency has yet to fully clarify the criteria to be applied in evaluating the adequacy of agency practices or the mechanisms to be used in documenting and sharing information on its progress in attaining compliance with agency policies. The agency's response to these challenges will have a major impact on its ability to ensure that its initiatives have their desired effect. Effective implementation of these initiatives is the key to the Peace Corps' developing a safety and security framework that achieves its goals. Criteria for assessing whether the revised policies are being adequately implemented have yet to be fully defined.
The Peace Corps has taken steps to clarify its policies and has improved its guidance on implementing these policies and provided easier access to it. However, greater clarity could be provided without imposing detailed requirements that may be impractical or inappropriate in some countries. For example, revised agency guidance requires posts to include formal risk assessments in their EAPs. The agency has guidance available on preparing such risk assessments but does not have models available for posts to use. Similarly, the initiatives include authorization for posts to hire administrative associates who will be assigned various safety and security support tasks, including ensuring that the posts' filing systems provide ready and complete access to relevant records. However, the agency has not developed criteria or examples for judging the adequacy of these filing systems. The Peace Corps is embarking on a major expansion of its volunteer workforce during a time of heightened risk for Americans living abroad. Providing safety and security for its volunteers is the Peace Corps' highest priority. Our review of the agency's efforts to ensure compliance with its basic safety and security policies and guidelines shows that there are cases of uneven implementation of key elements of the safety and security framework that could pose risks to volunteers. These include uneven performance in developing safe and secure housing and work sites, responding to volunteer concerns, and planning for emergencies. The Peace Corps has recently announced several new initiatives to improve overall compliance with its safety and security policies. We believe that, if effectively implemented, the new initiatives can reduce potential risks facing volunteers. However, it is not yet clear how the Peace Corps will document its progress in achieving compliance or will share information about better practices. While the Peace Corps does generate reports on practices at individual posts, the agency does not currently have a means to (1) document the overall quality of its safety and security practices or (2) assess changes in the quality of these practices over time. The initiatives do not contain provisions for formal assessments or for documenting progress in implementing them so that this information can be shared with staff. Moreover, the Peace Corps has not indicated what action, if any, it intends to take in addressing the issue of staff turnover. We believe that the Peace Corps will need to address the implications of staff turnover if it is to effectively implement its new initiatives designed to ensure the safety and security of its volunteers. To help ensure that the Peace Corps' initiatives have their intended effect, we recommend that the Director develop indicators to assess the effectiveness of the initiatives and include the results of these initiatives in the agency's annual reports under the Government Performance and Results Act. We also recommend that the Director develop a strategy to address staff turnover as the agency implements its initiatives. Among other things, this strategy could include proposals to Congress to change the law concerning the 5-year limit on employment of U.S. direct hire staff. In written comments on a draft of this report, reprinted in appendix V, the Peace Corps concurred with our findings and provided additional information on the agency's safety and security initiatives, as well as technical comments that we incorporated as appropriate.
In response to our first recommendation, the Peace Corps agreed to report on the results of its safety and security initiatives in its annual reports under the Government Performance and Results Act. In response to our second recommendation, the Peace Corps stated that it had developed a strategy for mitigating the effects of high staff turnover as it implements its safety and security initiatives, but that unless the law concerning the 5-year rule is changed the agency cannot effectively address the difficulties presented by staff turnover. Given the agency's position on this matter, we modified our recommendation to suggest that the Peace Corps submit a proposal to Congress for changes in the 5-year rule that would facilitate agency efforts to improve its safety and security practices. We are sending this report to interested congressional committees and the Director of the Peace Corps. We will also make copies available to other interested parties on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4268 if you or your staff have any questions concerning this report. An additional GAO contact and staff acknowledgments are listed in appendix VI. You requested that we evaluate the Peace Corps' safety and security practices. In response, we (1) described rates and trends in crime against volunteers and reviewed the agency's system for generating such information, (2) described the agency's framework for maintaining volunteer safety and security, (3) evaluated the Peace Corps' implementation of this framework and identified factors affecting this implementation, and (4) evaluated the agency's initiatives to improve current practices. To address our first objective, we (a) examined agency reports on crime trends and characteristics of assaults from 1991 to 2001; (b) reviewed agency guidelines and interviewed medical services staff at headquarters and in the field to clarify the Peace Corps' processes for gathering, analyzing, disseminating, and applying information; and (c) performed independent analyses of Peace Corps data to determine the extent to which agency findings accurately reflect information from the field and to explore opportunities for additional useful analyses. To perform our independent analyses, we obtained computer files containing original crime data for 1990 through 2001 and excerpts from the Peace Corps' administrative database on the numbers of volunteers serving during this period and characteristics such as age, gender, and date of entry into service. We used these data to replicate the Peace Corps' analyses of crime rates and characteristics of assaults, finding that our results were consistent with the Peace Corps'. We also examined the data for missing elements, mislabeled data, and related problems. We found a number of technical problems; for example, inconsistencies in coding sometimes made it difficult to distinguish between missing values and those that were incorrectly coded. However, these problems did not materially affect the Peace Corps' analyses. To obtain information on underreporting, we reviewed relevant portions of the Peace Corps' volunteer satisfaction surveys for 1998 and 1999 and interviewed agency staff and volunteers. We interviewed agency staff on the potential usefulness of additional analyses and explored the data made available to us to identify trends or relationships that merit further inquiry.
We did not attempt to verify the accuracy or completeness of data collection among medical officers at individual posts. To present a clear description of the agency's framework for maintaining volunteer safety and security, we reviewed agencywide policies and guidance materials that are provided to post staff, such as handbooks and examples of best practices. We also examined materials that the agency uses in training staff to carry out their safety and security responsibilities. We interviewed key headquarters staff, including regional managers, country desk officers, general counsel and medical office officials, and the agency's coordinator for volunteer safety and security about their roles and responsibilities and the manner in which agency policies and guidance materials are applied in practice. To obtain broader perspectives on safety and security challenges in developing countries and options for responding to those challenges, we spoke with security specialists at the Department of State in Washington, D.C., and with U.S. embassy security officers in the countries we visited, listed below. We also spoke with headquarters or field-level staff, or both, from a number of organizations that face security and safety challenges similar to those faced by the Peace Corps, including the Japanese Overseas Cooperation Volunteer Program, the British Volunteer Service Organization, and the United Nations Volunteers program. We attended a conference on security practices for nongovernmental organizations sponsored by the American Red Cross. To evaluate the Peace Corps' implementation of its safety and security framework, we obtained documents from and interviewed headquarters and field-level staff and volunteers. We visited posts in Bulgaria, El Salvador, Kenya, Senegal, and Ukraine to examine safety and security practices. At these posts, we interviewed agency staff with significant safety and security responsibilities, including country directors, program managers, and medical officers, and the three regional safety and security officers employed by the Peace Corps at the time of our work. We examined post record-keeping procedures and relevant files. We spoke with more than 150 volunteers, visiting more than 30 at their sites and speaking with their local counterparts when possible. To broaden our understanding of Peace Corps practices beyond the countries we were able to visit, we consulted the results of the Peace Corps' worldwide volunteer satisfaction surveys for 1998 and 1999, all 12 reports issued by the agency's inspector general between December 1999 and December 2001 that contained findings on safety and security issues, and reports on relevant issues at 24 posts generated by the agency's safety and security staff between September 2000 and November 2001. We examined nine assessments of the security environment in individual countries prepared between 1996 and 2001. In addition, we obtained and analyzed documentation on specific safety and security functions at multiple posts when it was available. For example, we examined 65 post emergency action plans (EAPs) and headquarters' feedback on these plans, and we reviewed site development criteria and procedures from 18 posts in the Peace Corps' Inter-America/Pacific region, in addition to those from the posts we visited.
To evaluate the Peace Corps' recently announced safety and security initiatives, we obtained and reviewed documentation on the initiatives and the Peace Corps' efforts to clarify and provide easier access to agency policies and guidance materials. We met with the Peace Corps' Director and other senior staff to discuss the substance and intent of the proposed measures. We conducted our work from July 2001 through May 2002 in accordance with generally accepted government auditing standards. Organizations that assign personnel to live and work abroad can draw from three basic strategies to develop safety and security procedures: acceptance—reducing the risk level by integrating into a host community; protection—reducing vulnerability by employing protective devices, such as walls and locks; and deterrence—eliminating threats by posing a counterthreat, for example, by employing armed guards. Organizations that emphasize person-to-person cultural exchange as a major goal tend to rely on the acceptance approach to safety and security; they seek to enhance safety and security primarily by ensuring that individuals are accepted as members of host communities. Nonetheless, these organizations may differ substantially in the details of their approach. As organizations become less concerned with establishing person-to-person ties within a host community and more concerned with achieving specific technical or development goals, they may place more emphasis on protection and, sometimes, deterrence measures. The following are descriptions of strategies employed by organizations that face safety and security challenges similar to those faced by the Peace Corps—the Volunteer Service Organization, the Japanese Overseas Cooperation Volunteers, the United Methodist Volunteers in Mission, the foreign mission program of the Church of Jesus Christ of Latter-day Saints, the United Nations Volunteers, and Save the Children. The Volunteer Service Organization is a British nongovernmental organization whose goals and safety and security approach are similar to the Peace Corps', with a few key differences. The organization maintains 2,000 volunteers in 71 countries for average tours of 2 years, mostly in rural areas or provincial towns. Like the Peace Corps, the agency seeks to fight poverty and promote international understanding. In contrast to the Peace Corps, the organization advertises and recruits on a job-by-job basis in response to specific requests from counterpart organizations in developing countries. The organization thus faces less of a challenge than the Peace Corps in finding productive employment and supportive organizations for volunteers. Volunteers average 38 years of age and are often experienced. Although its approach to identifying housing and monitoring volunteers is similar to the Peace Corps', the organization provides less safety and security training. It provides general risk-awareness training before volunteers' departure for their country of service and limited country- and placement-specific risk awareness and management training upon volunteers' arrival in the country. In contrast to the Peace Corps, which has EAPs in all of the countries where it operates, the organization has EAPs only in countries where such plans are deemed necessary. The Japanese Overseas Cooperation Volunteers program also resembles the Peace Corps in its goals and approach to safety and security, with some differences.
The organization operates in more than 70 countries under the aegis of the Japan International Cooperation Agency, that country's bilateral development agency. Similar to the Peace Corps, this program sends volunteers to spend 2 years working in agriculture, civil engineering, health, and other program areas. Unlike Peace Corps volunteers, the Japanese volunteers are considered government employees. Like the Volunteer Service Organization, the Japanese program recruits volunteers for individual jobs and therefore has fewer difficulties with finding suitable jobs for its volunteers. The program does not have a formal policies and procedures manual, although it has been consulting with the Peace Corps on the development of such a manual. The organization uses a five-step classification system to assess risks in specific countries and develops actions to take on the basis of risk level. Program officials stated that the agency provides volunteers with a 3-month training program in Japan, which includes some safety and security training, but the agency provides little, if any, in-country training. Volunteers might use cell phones, satellite phones, radios, or other communication tools; the organization strives to ensure that each volunteer can be reached within 6 hours. The Peace Corps has no such minimum standard. Program officials participate in their parent organization's EAPs. The United Methodist Volunteers in Mission, while citing intercultural exchange and relationship building as a goal, differs significantly from the Peace Corps in that volunteers generally serve only 1 to 6 months and thus have less time to integrate into a community. This church-sponsored organization, part of the United Methodist Committee on Relief, recruits volunteers to work in areas such as education and construction. Unlike the Peace Corps, these volunteers pay a fee to the Committee on Relief to cover costs, including housing and food, while in the country where they are placed. Most Methodist-sponsored volunteers are middle-aged through retirement age. A program official indicated that the safety and security training the organization provides is not as intense as the Peace Corps' because volunteers are generally in the country for only a short time; the organization provides some information on cultural sensitivity before volunteers' departure and an orientation when they arrive in country. Although it is not always possible for volunteers to be in daily contact with office staff, one individual who accompanies the volunteers is responsible for them on a 24-hour basis and can contact the office whenever needed. The Church of Jesus Christ of Latter-day Saints sends volunteers to do mission work worldwide. The majority of the volunteers are male and all are young—the upper age limit is 26. A church official indicated that the church provides little training in safety and security. The church monitors volunteers frequently to ensure their safety. Unlike the Peace Corps, church volunteers always travel and live in pairs and report to the in-country mission on a weekly or daily basis, depending on the risk level of the country. Volunteers also have support from local church members in the community in which they serve. Most volunteers have telephone lines in their apartments, but they are not supposed to have cell phones or radios because officials think these items could make volunteers targets for theft and assault. United Nations Volunteers operate under the auspices of the United Nations Development Program.
Volunteers generally work on a program project alongside program staff and, much like the program's regular employees, are chosen for a specific job. United Nations Volunteers are not asked to build intercultural relationships. About 5,000 of these volunteers are currently working in about 150 countries; many are native to the country in which they work. Volunteers usually serve for 2 years, although the program uses some short-term volunteers in times of crisis. Unlike Peace Corps volunteers, these volunteers usually live in the same communities as other United Nations or government staff, often in capital cities or urban areas; many bring their families and are given the use of a vehicle. Program officials stated that they do not perform formal risk assessments, but they added that they do not place volunteers in countries or areas that are considered dangerous. Program officials indicated that they provide little safety and security training, although the United Nations provides a safety and security handbook to staff members and volunteers in the United Nations system. There is little formal monitoring of volunteers. Volunteers typically have telephones in their homes and may also have cell phones or radios for project-related reasons. Save the Children is a development-oriented nongovernmental organization with offices in about 31 countries; its staff focus on specific jobs, not on intercultural exchange. In contrast to Peace Corps volunteers, most expatriate staff have had overseas experience and are typically in their 30s. Much of the organization's funding is from the U.S. Agency for International Development, and staff typically work closely with agency and U.S. embassy staff. The organization has not made it a practice to conduct formal risk assessments but instead relies on other nongovernmental organizations and the U.S. embassy for information. However, headquarters is beginning to task overseas offices with responsibility for conducting risk assessments. Although program officials indicated that the organization provides little training in safety and security, they have asked the Peace Corps and other U.S. government agencies for advice on training. Unlike Peace Corps volunteers, Save the Children staff live in the expatriate community and may have radios, cell phones, or both, depending on job needs and risk. They have frequent contact with other nongovernmental organizations and U.S. government employees, who live and work in the same area. In addition, country directors prepare weekly reports on staff members' current and future locations and vacation schedules. The Peace Corps has established two reporting systems for collecting information on crimes against volunteers. The agency's medical staff operates both systems. As described in this report, Peace Corps data show that, with the exception of major sexual assaults, reported rates of assault against volunteers have been higher in recent years than in the early 1990s. Historical data for aggravated assaults and rapes—the most consistent data available to Peace Corps analysts—support these overall findings. Reported rates of nonassault crimes, in contrast, have remained essentially unchanged since 1990. Post medical officers are tasked with collecting detailed information on each assault incident reported by volunteers and submitting this information to headquarters through the Peace Corps' assault notification and surveillance system. In 1997, the medical office refined the reporting categories employed in this system.
Formerly asked to differentiate among only four types of assaults, field medical staff are now asked to submit reports on five types of sexual assault and five types of physical assault. When filling out reporting forms, medical officers are asked to ascertain a variety of details on victims, assailants, and the circumstances surrounding each assault, such as time and location of the incident. Medical officers are also asked to submit monthly counts of four types of nonassault crimes through the Peace Corps' epidemiologic surveillance system, a reporting system that focuses primarily on gathering statistics on volunteer injuries and illnesses. These reports do not provide any details on the reported events. Aggravated assault and rape are the only two categories of crime for which reporting definitions remained unchanged when the Peace Corps revised its system for categorizing and recording crimes in 1997. Therefore, data on these crimes may be the most consistent available to the Peace Corps. As shown in figure 7, the reported rate of aggravated assault against volunteers has been consistently higher since 1996 than in earlier years. As shown in figure 8, reports of rape have varied from year to year, most recently declining from a median rate of about 4.6 per 1,000 female volunteer years in 1996–1998 to a median rate of about 3 per 1,000 female volunteer years in 1999–2001. Table 1 shows the actual numbers of aggravated assaults and rapes that were reported. Since the numbers of assaults, especially sexual assaults, are small, there is some question about the practical significance of these changes. Rates of nonassault crimes have varied little since 1993, when the agency began collecting information on incidents of burglary, theft, and robbery. Figure 9 shows a slight decrease in reported robberies and burglaries since 1993, while figure 10 shows a slight increase in reported thefts. Peace Corps policy requires posts to develop procedures for responding to all safety and security incidents reported by volunteers. The agency has not developed clear guidance for posts to apply in responding to minor incidents. However, the Peace Corps does have well-defined notification and response protocols for major sexual assaults, and posts follow similar procedures when volunteers report major physical assaults. In addition, when a volunteer decides to prosecute, the Peace Corps' Office of General Counsel and the Office of the Inspector General's investigations unit may provide assistance. The Peace Corps' Rape Response Handbook, developed in 1999, establishes a protocol to ensure timely notification of appropriate staff at posts and at headquarters and describes the roles and responsibilities of post and headquarters staff in responding to a rape or attempted rape. In addition to giving guidance for reporting the incident to agency headquarters as previously described in this report, the handbook clearly establishes that the post's medical officer is responsible for providing medical care to the volunteer who has been assaulted and for collecting forensic evidence in case the volunteer decides to prosecute. The country director is responsible for ensuring that the victim, as well as other volunteers and trainees, is safe; preserving the option to prosecute (e.g., by advising the volunteer of her legal rights and preserving evidence); and notifying the security office at the U.S. embassy of the assault while protecting the volunteer's identity unless identification is essential.
Embassy security staff are expected to support the Peace Corps in any investigation or prosecution following the incident. The Peace Corps follows similar notification and response protocols when a volunteer reports a major physical assault. The medical officer reports the assault to the Office of Medical Services at headquarters and provides medical treatment to the volunteer. As with a rape incident, the medical officer notifies the country director of the assault, although in the interest of medical confidentiality the volunteer's identity and details of the incident may not be disclosed. The country director is responsible for informing the U.S. embassy security officer of the assault and may work with the embassy if the volunteer decides to prosecute. According to Peace Corps data, 18 percent of volunteers who experienced a major sexual assault and 26 percent of volunteers who reported a major physical assault between 1997 and 1999 said that they intended to prosecute. When a volunteer decides to prosecute, the Peace Corps' Office of General Counsel covers the cost of legal counsel in the country where the assault happened. The Office of the Inspector General's investigations unit, in conjunction with other federal agencies, may also provide support in investigations of crimes against volunteers. For example, inspector general staff may conduct interviews with Peace Corps staff and local authorities, escort volunteers who are asked to identify suspects, or arrange for examination of forensic evidence. In addition to Ms. Anderson, key contributors to this report were Wendy Ahmed, Kriti Bhandari, Lynn Cothern, Suzanne Dove, Bruce Kutnick, Michael McAtee, James Strus, and Christina Werth.
About 7,000 Peace Corps volunteers now serve in 70 countries, often living in areas with limited access to reliable communications, police, or medical services. Moreover, as Americans, they may be viewed as relatively wealthy and hence good targets for criminal activity. The Peace Corps has reported rising numbers of assaults against its volunteers since it began collecting data in 1990; agency officials are not certain of the reasons for this trend, but they have stated that efforts to improve the system for collecting crime data may have led to higher reported rates. To reduce risks to its volunteers, the Peace Corps has adopted policies that address monitoring and disseminating information on the security environment; volunteer training; development of safe and secure housing and work sites for volunteers; monitoring volunteers and responding to incidents and concerns; and planning for emergencies, such as evacuations. Volunteer surveys and GAO visits to five overseas posts indicate that volunteers are generally satisfied with agency training programs and other efforts designed to emphasize safety and security awareness. However, the Peace Corps' record is mixed when it comes to developing safe and secure housing and work sites for volunteers, monitoring volunteers and responding to security concerns or criminal incidents, and preparing for emergencies. In May 2002, the Peace Corps told GAO of several initiatives to improve current safety and security practices. Although these initiatives are directed at many of the obstacles to improved performance, they do not address staff turnover.
Although a number of companies manufacture various non-lethal weapons, such as stun guns, the only company that manufactures Tasers is Taser International in Scottsdale, Arizona. First developed in the 1970s for use by police departments, Tasers differ from stun guns in that they can be fired from a distance and do not require contact with skin in order to work. Taser International has produced various models of Taser weapons, including Air Tasers and the M-18, M-18L, M-26, X-26, and X-26C models. The M-18 and X-26C models are available to the civilian market. The M-26 and X-26 models are sold only to law enforcement agencies and the military and, more recently, have been made available for use in maintaining aviation security. Both models, while varying in size, operate in the same manner and deliver approximately the same electrical charge. For the purposes of this report, the term Taser refers to the M-26 and X-26 models. Figure 1 shows a picture of an M-26 model Taser, and figure 2 shows a picture of an X-26 model Taser. The Taser fires two metal barbs that are attached to wires, which can cover a distance of up to 25 feet. Once the barbs are embedded in an individual or on the individual's clothing, the weapon delivers an electrical charge of 50,000 volts through the wires to the barbs. This charge causes the muscles of the individual to contract involuntarily, which immediately incapacitates the individual for the duration of the shock, usually about 5 seconds. The barbs need not be embedded in an individual's body in order for the weapon to function. Because of the high voltage, an individual will be shocked even if the barbs are attached to an outer layer of clothing, such as a coat. If the barbs penetrate the skin, it is impossible to predict how deeply they will embed because of various factors, including wind speed and a subject's weight and muscle mass. The manufacturer estimated that the barbs will generally penetrate bare skin no more than half an inch. Once the Taser weapon's shock subsides, the individual can recover completely in about 10 seconds. If the weapon is fired correctly and the barbs hit the individual, no collateral damage occurs to the surrounding environment. The Taser can be reactivated numerous times as long as the barbs remain in the individual or the individual's clothing. Secondary electric shocks also last about 5 seconds, and the operator can shut the weapon off, thus ending the charge. A data port on the latest Taser models records the date, time, and duration of each instance the Taser was fired; this information can be downloaded to a computer. A visual battery level indicator is located on the back of the hand guard. The Taser also uses a laser sight system, which enables the operator, even with limited experience, to direct the barbs to the desired location on the individual. The seven law enforcement agencies we contacted have attempted to ensure proper deployment of the Taser weapon by establishing and employing use-of-force policies, training requirements, operational protocols, and safety procedures. Although none of the seven agencies had separate use-of-force policies that specifically addressed Tasers, all of the agencies incorporated the use of such weapons into their existing policies so that police officers would have guidance on the circumstances in which the use of Tasers may be appropriate.
A use-of-force policy provides police officers with a clearly defined set of rules or guidance to follow when encountering a subject, based on the subject's actions, the officer's perception of the situation, and the available types of officer responses. The use-of-force model—frequently referred to by law enforcement officials as the use-of-force continuum—was developed using federal law enforcement training guidelines established by FLETC. According to FLETC, the continuum serves as a visual tool to help explain the application of the use-of-force policy. Specifically, the continuum establishes for a police officer various options to use in responding to a subject's actions, while employing the minimum amount of force necessary under the circumstances. Generally, an officer should employ more forceful means to control a subject only if the officer determines that a lower level of force is inadequate. Officials in the seven law enforcement agencies we contacted told us that they rely on the continuum to help provide officers with guidance in carrying out their law enforcement responsibilities. As shown in figure 3, the use-of-force continuum includes five levels of potential subject actions and corresponding officer responses. For example, if a subject is compliant, an officer should use only "cooperative controls," such as verbal commands, to control the subject. On the other hand, the guidelines provide that if a subject is assaultive and an officer perceives a threat of serious physical injury or death—a lethal situation on the use-of-force continuum—the officer may use deadly force to control the subject. Officials in the seven law enforcement agencies we contacted stated that each agency has a use-of-force policy in which all officers are trained. Each of the seven agencies has incorporated the Taser into its existing use-of-force policy, although the placement of the Taser on the agencies' use-of-force continuums varied. Specifically, we found that the seven agencies placed the Taser at three different levels on their use-of-force continuums. As shown in table 1, two agencies—the Sacramento Police Department and the Sacramento Sheriff's Department—permit the use of Tasers when a police officer perceives the situation as potentially harmful, as when a subject engages in assaultive behavior that creates a risk of physical injury to another. Impact weapons, such as night sticks and batons, can also be used in these situations, which include, for example, instances in which a subject attacks or threatens to attack an officer by fighting and kicking. Four other police departments—the Austin Police Department, the Ohio Highway Patrol, the Phoenix Police Department, and the San Jose Police Department—allow the use of Tasers at a lower level in the use-of-force continuum, in situations that the officer perceives as volatile. This occurs, for example, when a subject is actively resisting arrest but not attacking the officer. The use of chemical sprays to subdue the subject is another option in such a situation. Finally, one agency—the Orange County Sheriff's Department—allows the use of Tasers in situations that an officer perceives as tactical, such as when a subject is "passively resisting" by not responding to the lawful, verbal commands of the officer. "…it is of paramount importance that officers expect and receive the same results from one Taser to another.
Their confidence in the weapon is based on the knowledge that all Tasers will operate the same each and every time and will achieve the same desired results each and every time." In all seven agencies, the training cycle begins by disseminating the previously discussed use-of-force policy. Police officers also receive mandatory firearms training. As shown in table 2, three of the agencies we contacted—the Sacramento Police Department, the Sacramento Sheriff's Department, and the San Jose Police Department—require a minimum of 100 hours of such training; three agencies—the Ohio Highway Patrol, the Orange County Sheriff's Department, and the Phoenix Police Department—require a minimum of 80 hours; and one agency—the Austin Police Department—requires a minimum of 60 hours. In addition, all seven agencies require Taser-specific training. This training stresses such matters as how to (1) properly handle the weapon, (2) locate the shot, (3) safeguard the Taser, (4) conduct proper function tests, (5) overcome system malfunctions in a timely fashion, and (6) perform post-Taser deployment actions. Three agencies require 8 hours of Taser training, while three require 5 hours and one requires 4 hours. All seven agencies require officers to demonstrate physical competency with the weapon, and three agencies also require written tests generally consisting of approximately 10 true-or-false questions related to the application of the use-of-force policy, proper use of the weapon, and appropriate safety measures. Furthermore, six of the seven agencies require yearly recertification in the use of Tasers. One agency—the San Jose Police Department—does not require yearly recertification for Tasers and is not currently considering the establishment of such recertification. However, an official from the San Jose Police Department told us that the department includes Tasers in its annual use-of-force simulations training, in which officers are trained in the use of Tasers that would be considered appropriate in various law enforcement scenarios. We also discussed with officials from the seven agencies how training other Taser users may differ from training law enforcement personnel in Taser use. All the officials agreed that the length and intensity of training must be increased for users who have no law enforcement experience or firearms training. The officials also stressed that any civilian training curriculum should have a very explicit use-of-force policy. Unlike police officers, civilians are not generally experienced in deciding whether the use of force is justified and, if so, to what extent. Therefore, the officials told us that it should be the goal of any civilian training curriculum to remove the need for independent decision-making as much as possible. Officials from all seven agencies agreed that training for non-traditional law enforcement individuals should involve as many "real life" scenarios as possible so that the trainee understands what level of force is appropriate. The seven law enforcement agencies we contacted have operational protocols, which are written policies and procedures that address and provide guidance on the daily activities of a law enforcement agency's officers. These protocols address a wide range of issues such as deployment of law enforcement personnel and weapons, inspection techniques, proper use of weapons, and post-incident procedures.
Regarding Tasers, the protocols in the seven agencies require, among other things, that Tasers be visually inspected on a daily basis, be appropriately safeguarded, and, in some cases, be tested on a weekly basis or at the beginning of an officer's shift.

With regard to Taser deployment, three of the seven agencies we contacted issued the Taser to all of their officers. Three others deployed Tasers only to patrol officers, because they were considered the personnel most likely to have use for the device during the course of their work. The remaining agency issued Tasers to its patrol officers and to members of some specialized police units, such as narcotics.

Regarding inspections, all seven agencies we contacted required a daily function test for Tasers. Officials in the seven agencies told us that this test generally consists of visually inspecting the weapon for any signs of damage or unusual wear and tear, inspecting the firing cartridge to ensure that there is no damage or obvious tampering, and checking the battery strength indicator located on the rear of the weapon. Furthermore, one of the seven agencies required that officers conduct a weekly test fire of Tasers, in which the officer initiates an arcing of the electric probes by pulling the trigger of a Taser that does not contain a firing cartridge. In addition, two of the seven agencies require that each officer conduct such a test at the beginning of the officer's shift. All of the agencies mandated that the Taser be safeguarded in the same fashion as a firearm issued by the agency.

Once the law enforcement agency's internal policies and procedures were satisfied, including compliance with the use-of-force policy, the method and manner prescribed for Taser use did not significantly differ among the agencies we contacted. Officials in the seven agencies stated that the Taser is to be aimed at the center of an individual's largest amount of body mass, which is oftentimes the chest or, in some circumstances, the back. Shots to the neck or face are not advisable unless a significant danger exists to the officer or others and this area is the only target area presented.

All seven agencies we contacted required the officer involved in a use-of-force incident to complete an official form detailing the type of force used. As shown in table 3, three of the agencies required the officer to complete a specific form whenever a Taser was used. These forms included a description of barb placement, the effects achieved, and the subject's behavior before and after the Taser deployment. Following the use of the Taser, all seven agencies required that the subject be restrained, with handcuffs or an emergency restraint belt, to ensure that there would be no further threat of physical aggression.

As one official cautioned: "…the officer runs the risk of injuring the intended target. A Taser is by nature a weapon and carries with it inherent dangers."

As shown in table 4, the seven agencies' safety guidelines provide that the Taser should not be used on children or pregnant suspects, or near bystanders or flammable liquids. All the agencies we contacted require an emergency room physician to examine the subject in the event of Taser barb placement in the face or neck. The Orange County Sheriff's Department also requires any female subject shot in the breast or groin area to be seen by an emergency room doctor.
Six of the seven agencies provide officers with the discretion to remove the barbs themselves or to request that emergency medical technicians (EMT) respond to the scene. Once removed, the barbs should be placed in a "Sharps" container to ensure safe and hygienic disposal. For these agencies, if the officer observes an adverse reaction to the electrical shock, he or she can request that the subject be transported to a local hospital emergency room; no other medical follow-up is required. The remaining agency—the San Jose Police Department—does not provide its officers with the discretion to remove Taser barbs. Instead, the department calls for officers to transport subjects hit with Taser barbs to a hospital so that medical personnel can remove the barbs. Also, San Jose officers do not routinely call EMTs to the scene of Taser use; they do so only if the subject has life-threatening injuries or requires other medical treatment. If such treatment is not needed, the officer transports the subject to a hospital for medical clearance before booking at the county jail.

In reviewing various laws, including statutes, regulations, and ordinances, we found that Tasers were addressed in some federal, state, and local jurisdictions. We also found that these jurisdictions had different requirements for regulating Tasers. In some instances, the extent to which Tasers are regulated in these jurisdictions may depend on whether the Taser is classified as a firearm. For example, at the federal level, ATF has not classified the Taser as a firearm, which exempts it from federal firearms requirements. However, we identified other federal agencies, such as the Army, that have established regulations governing the possession, use, and sale of Tasers. In addition, TSA has identified the Taser as a prohibited weapon that cannot be brought past airport security checkpoints by unauthorized personnel. TSA also has authority to approve the use of Tasers by flight crews on commercial aircraft. We also found that the state of Indiana and the city of Chicago, Illinois, regulate the sale or possession of Tasers by non-law enforcement persons by requiring that the same restrictions that apply to firearms also apply to Tasers. Other states, such as California, prohibit Tasers from being carried into public facilities such as schools and airports.

At the federal level, we found that ATF—the federal agency responsible for determining whether a weapon should be classified as a firearm, which would make the weapon subject to federal firearms regulations—does not classify the Taser as a firearm. Thus, the Taser is not subject to any federal regulations regarding the distribution, sale, and possession of firearms, and Tasers can be manufactured and distributed domestically without federal restriction. However, we identified some federal agencies that have established regulations that specifically prohibit the sale, possession, and transfer of Tasers. For example, Army regulations prohibit the sale, possession, carrying, or transportation of Tasers on or within specific installations in Georgia, including Fort Gordon and Fort Stewart (which includes the Hunter Army Airfield). In addition, TSA has a regulation that prohibits unauthorized individuals from carrying weapons, explosives, and incendiaries beyond airport security checkpoints.
To help provide guidance in implementing its regulation, TSA has developed a chart outlining specific items that are prohibited in carry-on baggage and has identified Tasers as a prohibited weapon. TSA also has broad authority under the Aviation and Transportation Security Act, as amended by section 1405 of the Homeland Security Act of 2002, to approve the use of less-than-lethal weapons by flight deck crew members, as long as TSA prescribes "…rules requiring that any such crew member be trained in the proper use of the weapon…" and "…guidelines setting forth the circumstances under which such weapons may be used." Based on this authority, in October 2004, TSA approved a request from Korean Airlines that specially trained cabin attendants be permitted to use Tasers on commercial flights in U.S. airspace. TSA officials told us they anticipate that other airlines will also submit requests to deploy less-than-lethal weapons in the future.

In reviewing various state and local laws, we identified some state statutes and municipal ordinances that specifically regulate the sale or possession of Tasers by non-law enforcement persons within their state or municipal boundaries. For example, in the state of Indiana, Tasers are subject to the same licensing requirements as other handguns. Therefore, in order to lawfully possess a Taser in Indiana, prospective purchasers are required to meet certain license requirements and consent to a criminal history background check. In addition, dealers in Indiana cannot sell a Taser until after requesting and receiving criminal history information on prospective purchasers. Similarly, in Chicago, Illinois, prospective purchasers are required to obtain a permit to lawfully purchase Tasers. Also, in the state of Pennsylvania and the city of Wilmington, Delaware, it is unlawful for non-law enforcement persons to manufacture, make, sell, or possess a Taser. In addition, individuals in various states, including California, Illinois, and Virginia, are prohibited from carrying Tasers in such areas as airports, courthouses, schools, prisons, or public buildings.

The seven law enforcement agencies we contacted have established policies and procedures to help ensure the proper use of Tasers. Specifically, the agencies employ use-of-force policies, training requirements, operational protocols, and safety procedures, although specific practices vary from agency to agency. For example, the seven agencies place the threshold at which Taser use may be deemed appropriate at three different levels on their use-of-force continuums. However, even when these policies are strictly enforced, each situation in which a Taser may be used is unique. An officer must rely on prior experience and training and exercise good judgment to determine whether using the Taser constitutes an appropriate level of force. Consequently, officials in the seven law enforcement agencies we contacted stressed that proper training is essential for successful deployment. If Taser use becomes more widespread, particularly among non-law enforcement personnel who have little or no firearms experience, we believe that this training will become even more critical for safe, effective, and appropriate use of the weapon.

We received written comments on a draft of this report from TSA, which are included in appendix II. In its comments, TSA stated that it generally concurred with the information in the report.
Also, TSA stated that it agreed that training and oversight are essential for the use of Tasers. In addition, TSA discussed its authority to approve the use of less-than-lethal weapons by air carriers. Among other things, TSA explained that under the Aviation and Transportation Security Act, as amended by section 1405 of the Homeland Security Act of 2002, air carriers are to contact TSA to request permission to carry less-than-lethal weapons aboard their aircraft. TSA would review the air carrier's request as well as the training program that the air carrier would provide for the proposed use of the weapon. After TSA approves the air carrier's request, an amendment to the air carrier's security program must be made to allow for the weapon's use while the aircraft is in flight. Requirements could also be imposed for storage of the weapon while the aircraft is on the ground at an airport. Furthermore, TSA stated that it has received a number of requests from air carriers as they attempt to enhance aircraft security and will continue to evaluate such requests and review training programs provided by air carriers. In addition, TSA and FLETC provided technical comments that we incorporated into this report where appropriate.

We also received comments from Taser International and the seven law enforcement agencies we contacted. They generally agreed with the information in the report. In addition, Taser International and three of the seven law enforcement agencies—the Austin, Texas, Police Department; the Phoenix, Arizona, Police Department; and the San Jose, California, Police Department—provided some technical comments that we incorporated into this report where appropriate.

As agreed with your office, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its issuance date. At that time, we will send it to the Chairmen and Ranking Members of the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Government Reform. We will also send it to the Chairman and Ranking Member of the House Committee on Homeland Security and the Ranking Member of the Subcommittee on National Security, Emerging Threats and International Relations, Committee on Government Reform. We will also provide copies to the Administrator of the Transportation Security Administration and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Key contributors to this report are listed in appendix III. If you or your staff have any questions concerning this report, please contact me at (202) 512-7455 or at [email protected].

For this report, our first objective was to obtain information on the policies and procedures related to the issues of use of force, training, operations, and safety for selected law enforcement agencies that have purchased and used Tasers. We conducted this work for the purpose of providing information about the policies, procedures, and practices these agencies use to help ensure safe and successful deployment of the Taser; we did not attempt to draw conclusions about whether Tasers are in fact safe. Our second objective was to obtain information on federal, state, and local laws that specifically address Tasers, including the Transportation Security Administration's (TSA) authority to regulate Tasers on aircraft.
To address the first objective, we used Taser International Incorporated's (Taser International) customer database to identify all U.S. law enforcement agencies that had purchased Tasers. As the sole manufacturer of Tasers, Taser International maintained the only centralized database from which we could obtain this information. Around the time we began our work in May 2004, Taser International reported that a total of over 7,000 law enforcement agencies had purchased Tasers. Time constraints did not permit us to contact all of these agencies. Thus, we determined that the most reasonable approach for selecting law enforcement agencies to contact would be to focus on those agencies that had had the largest number of Tasers for the longest period of time. To do this, we identified two key data elements for each agency—the date that the agency made its first Taser purchase and the total number of Tasers that the agency purchased. The initial purchase date told us how long ago various agencies had begun buying Tasers. We focused on this date because we determined that, by the time we began our work, the agencies that had made the earliest Taser purchases would have been more likely to have established policies and procedures to help ensure the safe and appropriate use of Tasers. In addition to the initial purchase date, we identified the total number of Tasers that each agency had purchased. We determined that the agencies that had purchased a significant number of Tasers would have been more likely to deploy them widely, increasing the chances that more law enforcement personnel would have used Tasers in training and field situations. As such, we reasoned that, to help ensure that Tasers would be safely and appropriately used, these law enforcement agencies would have taken steps as quickly as possible to establish Taser-related policies and procedures.

Using these two data elements, we identified seven law enforcement agencies that had deployed the largest number of Tasers for the longest period of time: the Austin, Texas, Police Department; the Ohio Highway Patrol; the Orange County, Florida, Sheriff's Department; the Phoenix, Arizona, Police Department; the Sacramento, California, Police Department; the Sacramento, California, Sheriff's Department; and the San Jose, California, Police Department. Our efforts in selecting the seven agencies constituted a case-study approach. Because we conducted case studies rather than a statistical survey, the results of our work apply only to the seven agencies we contacted and cannot be generalized to all law enforcement agencies that, according to Taser International's data, have purchased Tasers.

With the assistance of GAO methodologists, we drafted a series of questions related to use-of-force policies, training requirements, operational protocols, and safety procedures. We asked officials in all seven agencies the same questions to ensure that we could compare their responses.

To address the second objective, we researched various federal and state laws, including statutes and regulations, to determine whether Tasers are regulated at the federal and state levels. In addition, we reviewed information obtained from the Department of Justice's Bureau of Alcohol, Tobacco, Firearms, and Explosives on local ordinances that regulate Tasers. We also researched various published local ordinances to determine whether Tasers are regulated at the local level.
In addition, we reviewed the Aviation and Transportation Security Act to ascertain federal requirements for approving the use of Tasers on board aircraft. We conducted our work from May 2004 through February 2005 in accordance with quality standards for investigations as set forth by the President's Council on Integrity and Efficiency.

In addition to the individual named above, Jennifer Costello, Richard Egan, Joseph Funk, Barbara Lewis, Latesha Love, John Ryan, and Barry Shillito made key contributions to this report.
Emerging domestic and international threats have generated a growing interest in the use of less-than-lethal weapons by government and law enforcement agencies and other entities such as commercial airlines. One such weapon--the Taser--is a hand-held weapon that delivers an electric shock via two stainless steel barbs, effectively incapacitating an individual. According to the manufacturer--Taser International, Incorporated (Taser International)--Tasers are currently used by over 7,000 of the 18,000 law enforcement agencies in the United States, with more than 140,000 Tasers in use by police officers in the field and an additional 100,000 Tasers owned by civilians worldwide. Tasers have been used on over 100,000 volunteers, including individuals involved in training seminars and research experiments, and have been involved in over 70,000 actual field uses during police encounters. In light of the expanding interest in the Taser, GAO was asked to provide information on (1) the policies and procedures related to the issues of "use-of-force," training, operations, and safety for selected law enforcement agencies that have purchased and used Tasers and (2) federal, state, and local laws that specifically address Tasers, including the Transportation Security Administration's (TSA) authority to regulate Tasers on aircraft.

The seven law enforcement agencies we contacted have established use-of-force policies, training requirements, operational protocols, and safety procedures to help ensure the proper use of Tasers. Although none of the agencies have separate use-of-force policies that specifically address Tasers, all seven agencies have incorporated the use of Tasers into their existing policies. Taser training is required for officers who use the weapons, and agency officials said that training for officers and other non-law enforcement persons who are allowed to use Tasers is critically important to help ensure their safe use. Operational protocols require that Tasers be visually inspected daily, appropriately safeguarded, and, in some cases, tested weekly or at the beginning of an officer's shift. Safety procedures require that Tasers not be used on children, pregnant suspects, or near bystanders or flammable liquids and that individuals hit in specific body areas with Taser barbs, such as the neck or face, be examined by a physician.

Some federal, state, and local jurisdictions have laws that address Tasers, but their requirements differ. For example, at the federal level, the Army prohibits Tasers from being brought into selected military installations in Georgia. Also, TSA may approve the use of Tasers on aircraft but must prescribe training rules and guidance on appropriate circumstances for using Tasers. At the state and local levels, the state of Indiana and the city of Chicago, Illinois, regulate the sale or possession of Tasers by non-law enforcement persons by subjecting Tasers to the same restrictions that apply to firearms. Other states, such as California, prohibit Tasers from being carried into public facilities such as airports.

GAO observes that as the Taser becomes more widely used, especially by non-law enforcement persons, training is critical to help ensure its safe, effective, and appropriate use. TSA, Taser International, and the seven law enforcement agencies we contacted generally agreed with the information in this report.
The natural gas and electricity industries perform three primary functions in delivering energy to consumers: (1) producing the basic energy commodity, (2) transporting the commodity through pipelines or over power lines, and (3) distributing the commodity to the final consumer. Historically, many local utilities in the electricity sector built their own systems of power plants and electricity transmission and distribution lines to serve the needs of all consumers in their local areas. Similarly, natural gas companies built networks of pipelines to deliver natural gas from areas where it was produced to the markets where local distribution companies served all local customers. These local monopolies were overseen by regulators, who restricted the entry of new companies, approved investments and the prices paid by customers, and determined the utilities' profits. However, due to rising electricity prices and technological, economic, and policy developments beginning in the 1970s, the electricity and natural gas industries have restructured from a regulated environment to one that places greater reliance on competition to determine entry, investment, prices, and profits. The passage of the Natural Gas Policy Act of 1978 and the Natural Gas Wellhead Decontrol Act of 1989, together with subsequent FERC orders in 1985 and 1992, opened access to pipelines and required pipeline companies to completely separate transportation, storage, and sales services, all of which facilitated the shift of natural gas to more competitive markets. Similarly, the passage of the Public Utility Regulatory Policies Act of 1978 and the Energy Policy Act of 1992 facilitated restructuring in the electricity industry. FERC built upon these efforts through major regulatory actions in 1996 and 1999 that, among other things, required utilities under its jurisdiction to provide nonutility companies that generated electricity with access to the utilities' interstate transmission lines and encouraged utilities to join in creating independent organizations to operate the transmission system, such as Independent System Operators (ISO) and Regional Transmission Organizations (RTO).

Under federal statutes, FERC is the principal federal agency that regulates the natural gas and electricity industries to ensure that wholesale electricity and natural gas prices are fair. FERC is responsible for developing and maintaining the regulatory framework that approves or otherwise influences utilities' terms, conditions, and rates for the sale or resale and transmission of natural gas and electricity in interstate commerce. Historically, to ensure that the prices these utilities charged were just and reasonable, FERC regulated rates by basing prices on the utilities' costs to provide service plus a fair return on investment. Now, FERC seeks to ensure that wholesale natural gas and electricity prices are just and reasonable by promoting competitive markets, issuing market-related rules that encourage efficient competition, and enforcing and correcting market rules as needed.

In the newly restructured markets, many energy market participants rely on price information obtained from various sources, including price indices published in the trade press, because some companies can be reluctant to freely provide data on purchases and sales.
Private companies develop these price indices by collecting information about market prices from market participants in a variety of ways, including phone calls to individuals within energy trading companies. Market participants use these indices to, among other things, help them make informed decisions about buying and selling natural gas and electricity. For example, energy market participants use price indices as a benchmark in reviewing the prudence of gas and electricity purchases and often reference price indices in the contracts they develop for gas and electricity purchases. As part of its market oversight efforts, FERC also monitors these price indices to detect anticompetitive behavior.

Other federal agencies have roles affecting the electricity and natural gas markets. The Commodity Futures Trading Commission (CFTC) oversees markets and transactions related to the sale of commodity and financial futures and options, while the Federal Trade Commission (FTC) and the Department of Justice police deceptive selling practices. In addition to these federal agencies, states also oversee aspects of natural gas and electricity delivery, often through public utility commissions.

Since 2003, FERC has undertaken a series of efforts to improve the availability and accuracy of price information, including specifically addressing price indices. In 2000 and 2001, during the energy crisis in the West, some market participants knowingly misreported data to index providers in order to influence these indices for financial gain. Following these events, FERC convened a series of conferences and workshops that included regulators, energy market participants, price index publishers, and industry experts. One of these events included participation by the CFTC, and another included participation by the National Association of Regulatory Utility Commissioners (NARUC). As a result of these efforts, FERC staff developed a better understanding of the characteristics that market participants desire in price indices and of the behavior of other market participants. These conferences and workshops also revealed some practical short- and long-term solutions to problems, such as how market price indices are developed and why energy trading activity had declined.

Using the information that it developed through its conferences and workshops, FERC developed new standards and rules of conduct, both for market participants submitting trade data and for price index publishers, to help ensure that price indices were more accurate and reliable and to strengthen market participants' confidence in price indices. FERC outlined the standards that energy market participants and index developers should follow in a 2003 policy statement. According to FERC, these standards were designed to encourage standardization in the voluntary reporting of price and other market information, among other things, and to assure companies that they will not be subject to administrative penalties for inadvertent errors in reporting. These standards also encourage energy market participants to report not only prices but also the volume of the traded commodity and the date and time of the transaction, and encourage the entities that publish price indices (e.g., Platts, Natural Gas Intelligence, and Dow Jones) to publish this relevant market information as well.
In addition, FERC standards encourage index publishers to verify the price data obtained from companies that provide price data, to indicate when a published price is an estimate made by the publisher rather than data reflecting only the results of actual trades, and to monitor the data to identify attempts to manipulate energy price indices. Finally, FERC standards encourage price index publishers to explain to users how the index is developed and to include the formulas used to calculate the index.

With regard to rules of conduct, FERC issued two orders in November 2003 designed to establish clear guidelines for sellers of wholesale electricity and natural gas subject to its jurisdiction. These guidelines prohibit actions that do not have a legitimate business purpose and are capable of manipulating prices. For example, they prohibit submitting false or misleading information to FERC or to price index publishers.

FERC has also taken steps to improve its ability to monitor price indices and enforce related market rules. Recently, we reported that FERC had made significant efforts to revise its oversight approach to better align with its new role in overseeing restructured markets. In particular, we have reported that, through the establishment of its Office of Market Oversight and Investigations in 2002, FERC had taken a more proactive approach to monitoring by reviewing large amounts of data, including wholesale prices, for anomalies that could indicate potential market problems. In addition, FERC, which oversees the operators of electricity grids, including ISOs and RTOs, has worked with these organizations' market monitoring units—many of which collect substantial amounts of information on prices and other data—to determine, among other things, whether prices are the result of fair competition or appear to be a result of market manipulation.

Finally, the Energy Policy Act of 2005 included statutory changes that FERC had proposed to address misconduct by market participants by increasing the civil penalties imposed on companies that engage in anticompetitive behavior or manipulate the market. These changes increase FERC's ability to levy civil penalties under existing laws, raising potential fines to as much as $1 million per day per violation for as long as the violation continues. A FERC official said that increasing civil penalties would allow the agency to more effectively deter market manipulation and misconduct that is damaging to competitive markets. Moreover, FERC officials said that the changes would lead to greater certainty for market participants, thereby increasing participation in markets. The Energy Policy Act also gives FERC authority to collect transaction information if necessary to ensure price transparency. A FERC official said that this authority would give FERC additional tools if the current voluntary system of reporting prices to price index publishers proves inadequate. In addition, in response to requirements in the Energy Policy Act, FERC and the CFTC entered into a memorandum of understanding to share and coordinate requests for information, which they say will allow FERC to more readily identify and sanction market manipulation.

Many industry stakeholders report that they are now reasonably confident in short-term price indices, although some concerns remain about the transparency of long-term electricity markets.
To assess its efforts to improve price indices, FERC surveyed industry participants in March 2004, asking them to rate their confidence in price indices—with 1 representing no confidence and 10 representing total confidence that price indices accurately represent market pricing. Average confidence ranged from 6.7 for marketers to 7.5 for gas utilities, with nearly half of respondents reporting a confidence of 8 or greater. (See fig. 1.) In addition, in 2004, FERC reported that price index publishers had submitted information showing that the volume and number of reported transactions had increased significantly since 2002, an increase influenced by at least two factors. First, companies that had been reporting transactions began reporting more transactions to publishers of price indices. Second, companies that had not been reporting began reporting transactions to publishers of price indices. Furthermore, many of the companies reporting in 2004 were among the industry's larger and more active participants.

Consistent with what FERC found, industry trade and research organizations and others that we interviewed reported to us that their members have few significant concerns about the short-term, also called spot, price indices or about long-term natural gas indices. They report that, overall, FERC's efforts to improve the transparency of spot price indices achieve sufficient oversight without being heavy-handed. In addition, industry participants told us that the quality of data being provided to publishers of price indices has improved since 2002. For example, according to a major price index publisher, the reporting of price information has significantly improved in the last 2 years, as have the quality of analysis and the reliability of the prices that publishers report. Finally, publishers are providing more information about the market, such as the number of transactions and the amounts of energy bought and sold at specific trading locations. For example, a major publisher reported to us that, as of August 2004, it includes volume and transaction data for each pricing point in the spot market.

Despite their general satisfaction with most price indices, some stakeholders reported concerns about price indices for long-term electricity markets. In particular, representatives of one trade organization told us that while data regarding spot prices and long-term natural gas prices have improved, they still have concerns about electricity prices involving long-term purchase arrangements and similar long-term contracts (e.g., forward and futures markets, where long-term contracts for electricity and related financial instruments are bought and sold). Stakeholders are now able to see that these markets witness fewer transactions and, as a result, are less developed than others. One factor affecting price transparency in these long-term markets is that trading in them collapsed in 2002 over concerns that prices were being manipulated. This collapse, in turn, has resulted in fewer market participants and a market that is less developed, making it difficult for those still wanting to participate in these markets to find a willing trading partner. In addition, two stakeholders told us that there are not many options for obtaining data regarding longer-term energy market transactions. Complicating this concern, FERC does not have jurisdiction over futures markets and has only a limited direct role in long-term markets.
As a result, FERC does not formally collect extensive data on futures or long-term markets, and one energy market participant reported that it relies on limited data when developing or valuing long-term electricity contracts. In the absence of a mature and reliable long-term electricity market and information about prices, market participants noted that, for now, they rely on long-term natural gas markets and indices, which are more developed. These market participants told us that because natural gas is used extensively to generate electricity, the prices of the two commodities often change together. They also said that the availability and use of these natural gas markets only partly mitigates the lack of robust electricity markets, because electricity and natural gas prices can, and do, sometimes move independently.

The move away from regulators setting prices and toward markets where prices are increasingly a function of competition has raised the importance of price indices as a mechanism to communicate information to the market. In recent years, market participants have used these indices in structuring their transactions, and regulators have used them to judge how the market is performing. As a result, it is important that the indices accurately and reliably reflect actual prices. The federal government has taken a number of steps to encourage improved availability and accuracy of price indices, which has increased industry confidence in price and other market information provided in spot price indices. Although federal efforts appear to have had a positive impact on short-term (spot) price indices, some concerns remain about price indices for long-term electricity markets. It does not appear that there is an easy way to improve reporting on these long-term electricity markets until the markets themselves mature. Because of the importance of price indices, it will be important for FERC, Congress, and others to remain vigilant in their monitoring of existing price indices and attentive to alternatives for addressing the remaining issues in longer-term markets.

We provided a copy of our draft report to FERC for comment. FERC provided written comments, which are presented in appendix I. In its comments, FERC generally agreed with our findings and conclusions. In addition, FERC provided a variety of technical and other comments, which we incorporated as appropriate.

To obtain information about the efforts FERC has taken to improve natural gas and electricity price indices, we reviewed reports and other documents describing federal efforts to improve price transparency and examined literature on price transparency in the natural gas and electricity markets. In addition, we interviewed government officials at FERC, representatives of trade associations, and industry and academic experts in the field. We assessed the reliability of FERC's confidence survey data by reviewing the survey instrument and the methodology used to tabulate results, interviewing relevant agency officials knowledgeable about the data to understand any limitations of the data, and corroborating results by interviewing some of the entities surveyed. We conducted our work from June 2005 to November 2005 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Chairman of FERC as well as other appropriate congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix II. In addition to the individual named above, Jon Ludwigson, Kristen Massey, Frank Rusco, Barbara Timmerman, Alison O’Neill, Chris Pacheco, and Kim Wheeler-Raheb made key contributions to this report.
Since the 1970s, the natural gas and electricity industries have each undergone a shift toward greater competition, referred to as restructuring. This restructuring has moved these industries from regulated monopolies to markets in which competitors vie for market share and wholesale prices are largely determined by supply and demand. Amid this restructuring, private companies have published information about these markets, including reports of market prices in various locations--referred to as price indices. These indices, whether for short-term "spot" or long-term "forward" markets, are developed by surveying selected market participants who voluntarily supply price information. Market participants rely on these price indices to help them make informed decisions about trading these commodities and to evaluate new investments. In recent years, confidence in price indices has been shaken due to misreporting and other abuses. During the energy crisis in the West in 2000-2001, several market participants were found to have purposefully misreported prices in order to manipulate these indices for financial gain. In this context, GAO agreed to answer the following questions: (1) What federal regulatory and statutory efforts have been taken to improve price indices in electricity and natural gas markets? (2) Have federal efforts improved industry stakeholders' confidence in these price indices?

Since 2003, the federal government has undertaken a series of regulatory and statutory efforts to improve the availability and accuracy of price information in price indices. First, FERC issued standards on voluntary price reporting in a July 2003 policy statement and rules of conduct in two November 2003 orders. Second, FERC has taken steps to improve its ability to monitor price indices and enforce market rules by (1) reviewing wholesale prices for anomalies that could indicate market problems and (2) collaborating with other entities, such as the Commodity Futures Trading Commission (CFTC), and with the independent market monitoring units that monitor organized electricity markets to detect market manipulation. Third, the Energy Policy Act--enacted in August 2005--increases the amount and types of civil penalties that FERC may impose on companies that participate in anticompetitive behavior, including knowingly misreporting price information to index developers, and gives FERC authority to collect additional transaction information if such information is necessary to ensure price transparency. Fourth, FERC and the CFTC entered into a memorandum of understanding to share and coordinate requests for information, which they say will allow FERC to more readily identify and sanction market manipulation.

Many industry stakeholders reported that they now have greater confidence in most price indices, but some expressed concern about price indices for long-term electricity markets. FERC reported that stakeholders are generally satisfied with current price indices and that the quality of information has improved. For example, in a recent survey FERC found that two-thirds of respondents reported their confidence in price indices, on a scale of 1 to 10 (10 being most confident), as a 7 or greater. Further, FERC reported that since 2002 the quality of information has improved because (1) more companies are reporting data to publishers and (2) major publishers are providing more information about the number of transactions and the volume of electricity and natural gas trades. GAO's own investigations corroborated what FERC found in its survey.
Specifically, natural gas and electricity industry stakeholders reported that, in general, they are reasonably confident in the short-term prices now reported by trade publications and in the improved quality of overall information. While stakeholders expressed general satisfaction with most price indices, some reported concerns about price indices in long-term electricity markets. Furthermore, stakeholders are now able to see that some of these markets witness fewer transactions and, as a result, are less developed than others. In the absence of a reliable long-term electricity market and information about prices, market participants noted that they rely on long-term natural gas markets and indices, which are more developed. Stakeholders told GAO that, because natural gas is widely used to generate electricity, gas and electricity prices often move together and, therefore, natural gas forward prices can substitute, to some extent, for electricity futures prices. They also said that the use of these natural gas markets only partly mitigates the lack of robust long-term electricity markets, because electricity and natural gas prices sometimes move independently.
SBA is charged with providing support to the nation's small businesses, including those in urban and rural areas. Its support takes several forms. First, it ensures access to credit, primarily by guaranteeing loans through various loan guarantee programs. Second, it provides entrepreneurial assistance through partnerships with private entities that offer small business counseling and technical assistance. Third, SBA administers various small business development and procurement programs that are designed to assist small and disadvantaged businesses in obtaining federal contracts. Finally, SBA makes loans to businesses as well as individuals trying to recover from major disasters. Although most SBA disaster loans are processed at the SBA loan processing center in Sacramento, California, SBA has a network of 68 field offices nationwide.

SBA administers several business loan programs, including the Basic 7(a) Loan Guaranty Program, 504/CDC Loan Program, 7(m) Micro Loan Program, and the Small Business Investment Company (SBIC) Program. Recently, it added the Small/Rural Lender Advantage Pilot Program, under its 7(a) Program, specifically for small businesses in rural areas (see fig. 1). Appendix II provides a more detailed description of each program.

In addition to its loan programs, SBA offers grant programs that support nonprofit organizations. These grant programs are generally designed to expand and enhance nonprofit organizations that provide small businesses with management, technical, or financial assistance. For example, SBA's Women's Business Development Center Program is an SBA grant program available to private, nonprofit organizations to run women's business centers. The program was established by the Women's Business Ownership Act of 1988 after Congress found that existing assistance programs for small business owners were not addressing women's needs. The program, which specifically targets economically and socially disadvantaged women, provides long-term training, counseling, networking, and mentoring to women who own businesses or are potential entrepreneurs. The program's ultimate goal is to add more well-trained women entrepreneurs to the U.S. business community.

Additionally, SBA's Small Business Development Center (SBDC) Program, which was created by Congress in 1980, provides management and technical assistance to individuals and small businesses. SBDC services include, but are not limited to, assisting prospective and existing small businesses with financial, marketing, production, organization, engineering, and technical problems and feasibility studies. Each state and U.S. territory has a lead organization that sponsors and manages the SBDC program there. The lead organization coordinates program services offered to small businesses through a network of centers and satellite locations at colleges, universities, vocational schools, chambers of commerce, and economic development corporations. Nationwide, 63 lead SBDCs and more than 1,000 satellite locations have contracted to conduct SBDC services.

USDA's Rural Development is responsible for leading and coordinating federal rural development assistance. Rural Development administers over 40 development programs for rural communities, most of which provide assistance in the form of loans, loan guarantees, and grants, through a network of 47 state offices and about 500 area or local field offices.
Rural Development has three agencies: the Rural Housing Service (RHS), the Rural Utilities Service (RUS), and the Rural Business and Cooperative Service (RBS). RHS helps rural communities and individuals by providing loans, grants, and technical assistance for housing and community facilities. It provides funding for single-family homes; apartments for low-income persons, the elderly, and farm laborers; and various community facilities such as fire and police stations, hospitals, libraries, and schools. RUS is responsible for administering electric, telecommunications, and water programs that help finance the infrastructure necessary to improve the quality of life and promote economic development in rural areas.

RBS administers programs that provide business planning and financial and technical assistance to rural businesses and cooperatives. Specifically, RBS' guaranteed loans and other loan and grant programs work in partnership with private sector and community-based organizations to meet the business and credit needs of rural businesses. Recipients of RBS' services include individuals, farmers, producers, corporations, partnerships, public bodies, nonprofits, American Indian tribes, and private companies. The primary business programs include the Business and Industry (B&I) Guaranteed Loan Program, the Intermediary Relending Program (IRP), the Rural Business Enterprise Grant Program (RBEG), the Rural Business Opportunity Grant Program (RBOG), the Rural Economic Development Loan and Grant Program (REDLG), and the Renewable Energy Systems and Energy Efficiency Improvements Guaranteed Loan and Grant Program (see fig. 2). Appendix II provides a more detailed description of each program.

Rural Development business programs are available in areas that meet each program's definition of rural—for the B&I program, for example, any area other than a city or town with a population of more than 50,000 and the area contiguous and adjacent to such a city or town. As a result, in general, only individuals and businesses in identified areas with 50,000 or fewer people are eligible for most of these programs. One exception is the Intermediary Relending Program, which is available only to businesses in rural areas with 25,000 or fewer people.

Some SBA loan programs and Rural Development business programs are complementary, providing a rationale for the agencies to collaborate. Both types of programs can fund start-up and expansion projects, equipment purchases, and working capital for rural borrowers and, in some cases, the eligibility requirements for the programs are comparable. However, the various programs have different and sometimes unique strengths—for example, larger loan amounts, shorter processing times, or targeting of different market segments. According to SBA and Rural Development officials, collaborative efforts could allow each agency to leverage the strengths of the other. For example, Rural Development can finance larger projects than SBA and can lend to nonprofit organizations, something SBA cannot do, while SBA can offer entrepreneurs a faster turnaround in loan processing. Similarly, officials noted that certain SBA and Rural Development loan products complement one another and have been used jointly to finance individual projects. To the extent that SBA's resource partners are considered part of SBA's rural presence, both agencies have a strong rural presence, which provides another rationale for the agencies to collaborate.
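To illustrate the population-based eligibility thresholds described above, the sketch below encodes them as a simple check. The program names come from this report, but the function and its structure are hypothetical and deliberately omit statutory nuances such as the treatment of areas contiguous and adjacent to a city or town.

```python
# Illustrative sketch of the "rural" population thresholds described above.
# Real eligibility determinations apply additional statutory criteria;
# this simplified check is for illustration only.

THRESHOLDS = {
    "B&I Guaranteed Loan": 50_000,             # most RD business programs
    "Intermediary Relending Program": 25_000,  # stricter rural definition
}

def is_rural_eligible(program, population):
    """Return True if a community's population is at or below the
    program's rural threshold."""
    return population <= THRESHOLDS[program]

# A town of 30,000 qualifies for the B&I program but not for the
# Intermediary Relending Program.
print(is_rural_eligible("B&I Guaranteed Loan", 30_000))             # True
print(is_rural_eligible("Intermediary Relending Program", 30_000))  # False
```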
SBA and Rural Development, which share a similar mission of increasing economic opportunity and improving the quality of life for people in underserved markets, including rural America, serve the same rural geographic areas and communities and have some programs that offer similar products to borrowers for comparable purposes. For example, SBA's 504 Loan Program and Rural Development's Intermediary Relending Program both offer economic development loans that can support the growth of rural small businesses and help enhance rural communities through business expansion and job creation. The 504 and Intermediary Relending programs both also provide financing for the acquisition and improvement of land, buildings, and equipment, particularly when such funding will help create or retain jobs.

According to SBA, the agency provided a 504 loan to the owner of a health care business to purchase a new $7.2 million headquarters building. Two Native American sisters from Lumberton, North Carolina, launched the business in 2000 and were named the 2007 National Small Business Persons of the Year. When it first opened, the health care business had only one cell phone, two patients, and a certified nursing assistant. Today, according to SBA, the business provides a broad range of services, employing 301 professionals and serving 760 patients daily, with annual sales of over $9 million.

Both agencies' loan and business programs are designed to help local entrepreneurs start up or expand their businesses. For instance, SBA's 7(a) Loan Guaranty Program and Rural Development's Business and Industry Guaranteed Loan Program both provide financing that can be used to establish a new business or to assist in the operation, acquisition, or expansion of an existing business. Specifically, the 7(a) program provides funding for business start-ups, expansion, equipment, working capital, and real estate acquisition. Similarly, the Business and Industry Guaranteed Loan Program provides funding for start-up and expansion purposes, including acquisition, inventory, real estate, working capital, equipment, construction, and enlargement or modernization of rural businesses. These programs are provided through loan guarantees that limit the risk to lenders: private lenders underwrite and service the loans and make the decisions to approve or not approve loan requests, and SBA and Rural Development decide whether to guarantee a portion of the outstanding loan balance if the borrower defaults.

According to USDA, the Southeast Iowa Regional Planning Commission sought financing for a revolving loan fund to serve Des Moines, Henry, Lee, and Louisa counties in Southeast Iowa. In response to this request, Rural Development, through its Intermediary Relending Program, awarded the commission $600,000 to provide low-interest loans to public and nonprofit organizations that, in turn, would relend those funds to support business and community development. As a result of the project, according to USDA, 7 businesses were assisted, 200 jobs were created, and 259 jobs were saved.

Further, both agencies offer programs that provide technical assistance to eligible borrowers. While SBA does not offer grants to start or grow a business, it has resource partners, such as its SBDCs and Women's Business Centers, that provide management and technical assistance to prospective small business owners. Rural Development offers grant programs that provide management and technical assistance to rural borrowers; its Rural Business Enterprise Grant and Rural Business Opportunity Grant programs provide technical assistance for business development and for conducting economic planning in rural areas.
In addition, some of the loan and business programs have similar eligibility requirements. For example, in administering its Renewable Energy Systems and Energy Efficiency Improvements Guaranteed Loan and Grant Program, Rural Development relies on SBA's definition of eligible small businesses, including sole proprietorships, partnerships, corporations, and cooperatives. Borrowers must also meet SBA's small business size standards for the type of industry, number of employees, or annual revenue. Moreover, some of SBA's and Rural Development's programs have established comparable credit criteria for the borrower. SBA's 7(a) Loan Guaranty, 504, and Micro Loan programs and Rural Development's Business and Industry Guaranteed Loan Program all use similar criteria that are based on the type of project being funded and the borrower's ability to meet normal commercial lending standards and provide a personal guaranty, if necessary.

SBA and Rural Development officials we spoke to stated that there was little overlap or duplication between the two agencies' loan and business programs, in part because of several key differences. First, Rural Development can finance larger projects than SBA: the maximum loan amount for SBA's 7(a) loan is $2 million, compared with a maximum of $25 million for Rural Development's Business and Industry loan. Second, the 7(a) and Business and Industry programs offer different loan guaranties. The maximum guaranty for 7(a) loans is 85 percent for loans up to $150,000 and 75 percent for loans over $150,000. The maximum guaranty percentage for Business and Industry loans is 80 percent for loans up to $5 million, 70 percent for loans between $5 million and $10 million, and 60 percent for loans of more than $10 million. Third, the costs, fees, and loan terms differ for the two types of loans. For example, SBA charges a guaranty fee of 2 percent for loans up to $150,000, 3 percent for loans between $150,000 and $700,000, and 3.5 percent for loans of more than $700,000, plus an additional quarter of a percent on the guaranteed portion over $1 million. Rural Development charges an initial guaranty fee not to exceed 2 percent of the guaranteed portion of the loan. The maximum loan terms for SBA 7(a) loans are determined by the following: (1) the shortest appropriate term, depending on the borrower's ability to repay; (2) 10 years or less, unless the loan finances or refinances real estate or equipment with a useful life exceeding 10 years; and (3) a maximum of 25 years, including extensions. By contrast, the maximum loan terms for Rural Development's Business and Industry loans are 7 years for working capital, 15 years for equipment, and 30 years for real estate.

Each program also offers some unique strengths. While Rural Development's fees tend to be lower than SBA's, SBA usually processes its loans faster. In general, SBA's average processing time is 5 to 7 business days, while Rural Development's business programs take 10 to 60 days, depending on the scope of the project and the completeness of the application. SBA can offer a shorter turnaround in loan processing, particularly for its 7(a) program (which sometimes takes as little as 2 business days), because of its various express loan options, preapproved lenders, and consolidated loan processing center. Rural Development makes credit and underwriting decisions itself rather than relying on preapproved lenders, and its loans can take as long as 60 days to process.
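To make the guaranty and fee tiers described above concrete, the following is a minimal sketch that computes them. It assumes, for illustration, that the 7(a) fee rates apply to the guaranteed portion of the loan; the function names are hypothetical, not agency terminology.

```python
# Minimal sketch of the loan guaranty and fee tiers described above.
# Assumption: the tiered 7(a) fee rates are assessed on the guaranteed
# portion of the loan. All names here are illustrative only.

def sba_7a_guaranteed_portion(loan):
    """Maximum SBA 7(a) guaranty: 85% for loans up to $150,000,
    75% for larger loans (the loan amount itself is capped at $2 million)."""
    rate = 0.85 if loan <= 150_000 else 0.75
    return loan * rate

def sba_7a_guaranty_fee(loan):
    """Tiered 7(a) guaranty fee: 2% up to $150,000, 3% up to $700,000,
    3.5% above that, plus 0.25% on any guaranteed amount over $1 million."""
    guaranteed = sba_7a_guaranteed_portion(loan)
    if loan <= 150_000:
        rate = 0.02
    elif loan <= 700_000:
        rate = 0.03
    else:
        rate = 0.035
    fee = guaranteed * rate
    if guaranteed > 1_000_000:
        fee += (guaranteed - 1_000_000) * 0.0025
    return fee

def bi_guaranty_percentage(loan):
    """Maximum Rural Development B&I guaranty: 80% up to $5 million,
    70% up to $10 million, 60% above $10 million."""
    if loan <= 5_000_000:
        return 0.80
    if loan <= 10_000_000:
        return 0.70
    return 0.60

# Example: a $500,000 7(a) loan carries a 75% guaranty ($375,000) and a
# 3% fee on that portion ($11,250); a $6 million B&I loan could be
# guaranteed at 70%.
print(sba_7a_guaranty_fee(500_000))       # 11250.0
print(bi_guaranty_percentage(6_000_000))  # 0.7
```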
Moreover, Rural Development has certain restrictions on the maximum dollar amount of loans that its field offices can approve; these loan approval limits typically vary by state. Therefore, Business and Industry loans above a state's loan approval limit must be approved by Rural Development headquarters officials, resulting in additional loan processing time. While both agencies serve rural areas, their programs differ in the types of entities they serve. SBA's loan programs serve only the for-profit sector, focusing on individual entrepreneurs and small businesses. Rural Development's business programs, in contrast, focus on individual entrepreneurs and small and mid-size businesses, as well as nonprofits. Appendix III further illustrates some of the similarities and differences between SBA's and Rural Development's loan and business programs. According to SBA and Rural Development officials who are engaged in collaborative relationships, collaboration allows the agencies to leverage the unique strengths of each agency's programs and increase the number of financing options to better promote economic development. For instance, SBA and USDA officials in North Dakota said that SBA's 504 program and Rural Development's Intermediary Relending Program were frequently coupled in loan packages. In those cases, the 504 program provided funding for land and buildings, and the Intermediary Relending Program provided funding for machinery, equipment, working capital, and other uses. The officials estimated that about one of every four 504 loans in rural North Dakota communities with populations of less than 25,000 residents had been used jointly with Intermediary Relending loans to finance individual projects. Examples of businesses in North Dakota that have received joint financing from SBA and Rural Development include an agricultural retail service that sells chemicals and fertilizer and employs 7 workers and a manufacturer of electric thermal storage heating equipment that employs 140 workers. In each of these examples, the businesses used SBA's 504 program to acquire a building and used the Intermediary Relending Program to acquire machinery and equipment. Other officials with whom we spoke cited further rationale for the agencies to collaborate. In one instance, a Rural Development official in New Mexico noted that collaboration with SBA allowed him to tap into SBA's preexisting constituency of banks, expanding the number of lenders that could help provide Rural Development loans to potential borrowers. Similarly, SBA officials in New Mexico said that collaboration with Rural Development allowed SBA to provide additional assistance to small businesses after Rural Development provided initial financing for a community's infrastructure. The officials involved in the limited instances of collaboration that we identified acknowledged that working together allowed both agencies to coordinate the delivery of their loan and business programs to solve specific credit needs. SBA and Rural Development officials in North Dakota also told us that by collaborating they were able to provide borrowers with more financing options than they could by acting alone, thereby improving service to borrowers. Moreover, according to officials in North Dakota and New Mexico, collaboration also created a synergistic effect that better promoted economic development in rural areas. Finally, while some consolidation has occurred over time, both agencies have a strong presence in rural areas.
Prior to its 1994 reorganization, USDA had field staff in almost every rural county. Consistent with its reorganization, and as we reported in September 2000, USDA closed or consolidated about 1,500 county offices into USDA service centers and transferred more than 600 Rural Development field positions to the St. Louis Centralized Servicing Center. The number of Rural Development offices across the nation is now closer to the number of SBA offices—47 Rural Development state offices and 68 SBA district offices (see fig. 3). In addition to its state offices, Rural Development also has about 500 field offices, including area, subarea, and other local offices in rural areas. SBA officials we spoke to in headquarters believe that SBA has a similar presence in rural communities because of its more than 950 SBDC locations in the 50 states, U.S. territories, and the District of Columbia. In contrast to SBA's view, Rural Development officials believe that the presence of its 500 field offices in rural areas is unique because each office is staffed by USDA employees. Although SBA's SBDCs may provide services that differ from those provided by Rural Development field offices, to the extent that SBDCs are considered part of SBA's rural presence, both agencies have a strong rural presence that provides another rationale for the agencies to collaborate. Overall, in the areas where SBA and Rural Development were collaborating, the efforts were sporadic, were initiated and administered at local levels, and appeared to be dependent on established working relationships among those involved. The results of a query by Rural Development and SBA officials asking their offices whether collaborative efforts were under way also indicated that such efforts were sporadic. We found that the extent of the collaboration that was taking place and the level of formality—that is, the use of cooperative agreements, such as MOUs and other mechanisms to collaborate—varied across the agencies' field offices. For example, in North Dakota, SBA and Rural Development collaborated frequently and on a relatively formal basis by communicating at least weekly, hosting several joint lender training sessions yearly, and establishing an MOU to deliver financing and technical assistance at one location. In other states we visited, such as Nebraska and New Mexico, SBA and Rural Development worked with each other less frequently and on a more informal basis. In a number of other states, such as Arizona, Colorado, and Georgia, no collaborative efforts appeared to be under way. Federal agencies that do collaborate often are required by statute to do so, but no such requirement exists for SBA and Rural Development. As a result, we found that most ongoing collaborative efforts between the agencies had been initiated at the local level and were based on established working relationships among the individuals involved. For example, some SBA and Rural Development field office officials at the three sites we visited told us that they frequently collaborated with each other because they had held the same positions within their respective agencies, had worked together for many years, and thus had established a rapport. Other officials told us that they were involved in collaborative efforts because they had initiated the efforts on their own or had prior experience in partnering with other agencies and had chosen to continue similar efforts.
SBA and Rural Development headquarters officials conducted a query of their respective field office staff to determine the extent to which these offices were involved in any formal or informal collaborative efforts. In addition to information we obtained from the three locations we visited, the query results showed that collaborative efforts developed sporadically among a limited number of offices. For example, of SBA's 68 district offices, only about half reported having ongoing collaborative efforts with Rural Development. Similarly, only about half of Rural Development's 47 state offices reported having ongoing collaborative efforts. Of those Rural Development offices that reported not having any ongoing efforts, a few indicated that they had partnered with SBA in the past. Each agency's query also showed that some SBA and Rural Development field offices seemed to have good working relationships that had been established over the years by the specific individuals involved. Our site visits and the results of the query of field offices identified a few SBA and Rural Development offices, such as those in North Dakota, Ohio, and Washington state, that appeared to be collaborating frequently. These offices used formal mechanisms such as MOUs to establish a framework for their efforts. In North Dakota, for example, SBA and Rural Development offices offered at least eight joint lender trainings each year and held quarterly meetings. In addition, in North Dakota the agencies had established an MOU that created the Entrepreneur Centers of North Dakota (ECND), a single entity involving SBA, Rural Development, and other state and local stakeholders. According to officials at the center, the ECND provides "one-stop" access to a variety of products and services, a concept that has been widely used by USDA in its service centers for over 10 years and that was a cornerstone of the agency's reorganization efforts. Through the ECND, a prospective small business borrower in North Dakota can work with the five ECND partners to obtain financing and technical assistance from any of the more than 15 programs that are offered. ECND partners work with the borrower from the initial point of contact, continue their assistance through the process of securing the appropriate financing, and may stay involved until a project is completed. Borrowers can also work with "resource partners," including SBA's SBDC and the North Dakota Women's Business Center (i.e., the Center for Women and Technology), to obtain technical assistance in areas such as business management, marketing, production, and the development of feasibility studies. According to SBA and Rural Development officials in North Dakota, the ECND is one of the best examples of teamwork and has proven beneficial in helping to provide a high level of customer service to rural borrowers. The SBA and Rural Development offices in Ohio also reported ongoing collaborative efforts. The officials reported having an MOU, established in the late 1990s, to guide various joint activities and to promote the use of each other's programs in marketing and outreach efforts. Under the MOU, which is still used today, the offices provide referrals, conduct periodic meetings to update program information, and engage in forums and joint lender training sessions to educate lenders on their programs. The SBA and Rural Development offices in Washington reported having annual forums to share updated program information.
They also said that they had sponsored three joint lender training sessions and a regional lender conference to educate lenders on the various aspects of their loan and business programs. The SBA and Rural Development offices plan to conduct a series of joint lender workshops in 2008 and to establish an MOU that will guide their efforts and cover advertising for the workshops. The two agencies reported several other instances of collaboration, but these were less extensive and formal than those in North Dakota, Ohio, and Washington state. For example, Nebraska SBA and Rural Development officials reported conducting joint lender training sessions to educate loan officers on the agencies' various loan and business programs and provide information on the technical resources that are available to small businesses throughout the state. In New Mexico, SBA and Rural Development officials reported conducting joint monthly meetings and community outreach sessions, or "Access to Capital" forums. The forums are 1-day events during which Rural Development, SBA, and SBDC officials and other local economic development professionals make presentations on the various types of loan programs that are available to small businesses. The forums' goal is to involve local economic and political leaders in assisting small businesses in rural areas of the state and to obtain their buy-in and support for SBA and Rural Development programs. SBA and Rural Development officials in other locations reported that they were involved in informal collaborative efforts. In Arkansas, Missouri, and Virginia, these activities were based on referrals. According to officials in these areas, SBA and Rural Development field personnel often refer applicants in need of financing to each other's agency if the other agency's programs seem better suited to the applicants' needs. SBA and Rural Development offices in Massachusetts also reported that they had recently sponsored a joint educational event on renewable energy and energy efficiency grants and loans and had held meetings to exchange program information. Additionally, in New Hampshire, Rhode Island, and Vermont, the offices reported that they had informal relationships and generally kept each other up to date on their respective programs. In many states, however, SBA and Rural Development do not appear to be collaborating at all or to have formal or informal mechanisms to facilitate collaboration. These states include, among others, Arizona, Colorado, Georgia, Maine, North Carolina, Utah, and West Virginia. Because of this lack of collaboration, SBA and Rural Development offices in these states may be missing opportunities to work together to better serve entrepreneurs and small businesses in their local communities. SBA and Rural Development have collaborated in the past with each other and with other agencies. Generally speaking, these efforts enabled the agencies to achieve results that they could not have achieved acting alone. For example, SBA and Rural Development collaborated with each other under the Rural Business Investment Program (RBIP). Section 6029 of the Farm Security and Rural Investment Act of 2002 required USDA to establish the program. The purpose of the program was twofold: first, to promote economic development and create jobs in rural areas by encouraging investments of venture capital to help develop small rural businesses; and second, to establish a developmental venture capital program to address the unmet equity investment needs of small rural businesses.
RBIP was modeled after SBA's Small Business Investment Company program and its New Markets Venture Capital program, and Rural Development was expected to draw upon the experience that SBA had gained in administering these programs. Under an interagency agreement required by the act, Rural Development had oversight responsibility for RBIP, and SBA had the day-to-day responsibility for managing and operating the program using its own staff, procedures, and forms. According to both SBA and Rural Development officials, the success of RBIP was limited by a lack of funding, in part because the Deficit Reduction Act of 2005 rescinded fiscal year 2007 and subsequent funding for the program. Both agencies also encountered challenges during planning and implementation. For instance, it took about 2 years from the time the law was enacted in 2002 to finalize and sign the operating agreements, establish interim final rules, and announce funding availability in 2004. Prior to the loss of funding in 2006, only one company was able to raise the necessary capital (i.e., private equity matching dollars) for full approval to become licensed as a rural business investment company under RBIP. According to SBA and Rural Development officials, the agencies have also collaborated with other agencies, and the results have reportedly been beneficial for both SBA and USDA. For instance, SBA and Rural Development have each collaborated with the Farm Credit Administration (FCA) to examine specialized lending institutions. Specifically, SBA oversees small business lending companies (SBLC), which are nondepository lending institutions licensed by SBA that play a significant role in SBA's 7(a) Loan Guaranty Program. However, SBLCs are not generally regulated or examined by financial institution regulators. SBA entered into a contractual agreement with FCA in 1999 that tasked FCA with conducting safety and soundness examinations of the SBLCs. Under the agreement, FCA would conduct examinations of SBLCs on a full cost-recovery basis, and the agencies would have the option to terminate or extend the agreement after 1 year. Rural Development also collaborated with FCA under an Economy Act agreement to conduct examinations of its nontraditional lenders (i.e., lenders that provide loans to borrowers that do not meet the traditional credit criteria) that participate in Rural Development's Business and Industry (B&I), Renewable Energy Systems and Energy Efficiency Improvements, and Community Facilities Guaranteed Loan Programs. Under the agreement, FCA conducts, on a full cost-recovery basis, examinations of the lending institutions' safety and soundness, lending practices, and regulatory compliance. These agreements have allowed both SBA and Rural Development to take advantage of FCA's expertise in examining specialized financial institutions and have offered FCA the opportunity to broaden its experience through exposure to different lending environments. Additionally, Rural Development and the Federal Emergency Management Agency (FEMA) collaborated in providing disaster assistance to Hurricane Katrina victims. Through this collaborative effort, Rural Development assisted victims of Katrina by (1) making multifamily rental units available nationwide; (2) providing grants and loans for home repair and replacement; and (3) providing mortgage relief through a foreclosure moratorium and mortgage payment forbearance.
Over the years, Rural Development’s Housing and Community Facilities Program and HUD have routinely collaborated with each other to provide affordable housing assistance in rural communities, and the working relationship still exists today. Rural Development and HUD have together created a voucher program, modeled after HUD’s Housing Choice Voucher program that provides rental assistance to families in rural areas. They have also developed cooperative agreements for their multifamily housing assistance programs that allow tenants to use HUD vouchers in USDA subsidized multifamily housing units. We were told that each of the collaborative efforts allowed the agencies to establish common approaches to working together, clarify priorities as well as roles and responsibilities, and align their resources to accomplish common outcomes. SBA and Rural Development have not had a lasting approach to guide them in collaborating with one another more effectively. Our October 2005 report on key practices that can help enhance and sustain collaboration among federal agencies identified a number of practices critical to successful collaboration and identified other factors such as leadership, trust, and organizational culture that are necessary elements of an effective working relationship. In December 2000, SBA and Rural Development entered into an MOU that provided an approach to collaboration. The MOU incorporated three of the key practices we have identified. The MOU expired in 2003 and SBA and Rural Development do not appear to have implemented the MOU when it was active. The ineffective implementation of the MOU has likely contributed to the sporadic and limited amount of collaboration that is taking place between the two agencies. SBA and Rural Development also do not have formal incentives focused on collaboration and do not track the results or impact of collaborative efforts. As a result, the agencies are unable to share information on the benefits of working together and encourage additional efforts to do so. Without a formal approach to encourage further collaboration, the agencies will be less likely to fully leverage each other’s unique strengths to help improve small business opportunities and encourage economic development in rural communities. In our October 2005 report, we identified eight key practices federal agencies could undertake to enhance and sustain their collaborative efforts. These practices included the following: Define and articulate a common outcome—to overcome significant differences in agency cultures and established ways of doing business, collaborating agencies must have a clear and compelling rationale to work together. Establish mutually reinforcing or joint strategies—to achieve a common outcome, collaborating agencies need to establish strategies that work in concert with those of their partners or are joint in nature. Identify and address needs by leveraging resources—collaborating agencies should identify the human, information technology, physical, and financial resources needed to initiate or sustain their collaborative effort. By assessing their relative strengths and limitations, agencies can look for opportunities to address resource needs by leveraging each others’ resources. Agree on agency roles and responsibilities—collaborating agencies should work together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. 
Establish compatible policies, procedures, and other means to operate across agency boundaries—to facilitate collaboration, agencies need to address the compatibility of standards, policies, procedures, and data systems that will be used in the collaborative effort. Develop mechanisms to monitor, evaluate, and report on results—agencies involved in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. Reinforce agency accountability for collaborative efforts through agency plans and reports—collaborating agencies should ensure that goals are consistent and, as appropriate, that program efforts are mutually reinforced through tools such as strategic and annual performance plans. Reinforce individual accountability for collaborative efforts through performance management systems—collaborating agencies should use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on fostering the necessary collaboration both within and across organizational boundaries to achieve results. In comparing SBA and Rural Development's efforts with these key practices, we found that the agencies had taken steps in the past that were consistent with three of the key practices. In particular, the agencies entered into a cooperative agreement—an MOU—in December 2000 that (1) defined and articulated a common outcome; (2) reached agreement on roles and responsibilities; and (3) established a mechanism to monitor, evaluate, and report on results. Specifically, the MOU defined and articulated a common purpose, including to better serve rural areas by coordinating the delivery of programs; increase the number of small business loans guaranteed by both agencies; and develop relationships with federal, state, county, and local agencies, private organizations, and commercial and financial institutions to facilitate and support the development of strong rural businesses. In addition, the MOU described the respective roles and responsibilities each agency would maintain in providing training on their programs, credit analysis techniques, and processing and servicing policies. Finally, the MOU stated that, at least annually, SBA's Associate Administrator for Field Operations, SBA's Associate Administrator for Financial Assistance, and Rural Development's Deputy Administrator for Business Programs, or their designees, would monitor and evaluate the previous year's joint activities and plan any future work. The MOU, signed in December 2000, was to become active on the date of execution and remain in effect for 3 calendar years, at which time the two agencies had the option to extend it for an additional 2 years by written agreement. SBA's Deputy Administrator and USDA's Undersecretary for Rural Development signed the MOU, and it expired in 2003. Both SBA and Rural Development officials recently confirmed that the MOU was not renewed. Although SBA and Rural Development's December 2000 MOU contained provisions that are consistent with some of our key practices as described above, the agencies do not appear to have implemented the MOU when it was active. Based on our analysis, there are two potential reasons for this lack of implementation. First, SBA and Rural Development may not have implemented the 2000 MOU when it was active because of a lack of direction and focus from high levels of each agency emphasizing the need for and importance of collaboration.
Rural Development officials confirmed that a change in USDA administration occurred after the 2000 MOU was signed and that the officials who signed the MOU were no longer in the positions they occupied at the time of the signing. This explanation is consistent with what others told us about barriers to more effective collaboration between federal agencies. For example, a representative of a rural community development organization with whom we spoke stated that the initial momentum for some collaborative efforts may come from officials in management-level positions of a federal agency, but after the responsible officials leave the agency, or a change in administration occurs, the momentum for a collaborative effort may drop off and not be resumed by the officials' successors. Second, the 2000 MOU may not have been fully implemented because neither agency appeared to be actively monitoring the extent to which collaboration was ongoing. For instance, when we began our work for this review, we asked SBA and Rural Development officials in headquarters to provide examples of formal or informal efforts the agencies have undertaken to work together. The officials were not able to provide any descriptions of such efforts and told us that ongoing collaborative efforts were likely to be sporadic and occurred only as needed in the agencies' field offices. Because we could not obtain information on the extent and nature of SBA and Rural Development's collaborative efforts, we asked each agency to query its field offices to provide us with this information. As discussed previously, based on the results of each agency's query, we found a few locations where SBA and Rural Development are involved in frequent and formal collaborative efforts, some locations where the agencies are involved in informal efforts, and many locations where the agencies appear not to be working together at all. SBA and Rural Development officials did not cite the December 2000 MOU when we began work for this review, and, for a period of months, the agencies did not appear to be in agreement as to whether the MOU was active. In March 2008, Rural Development officials informed us that they were operating as though the MOU was active, even though it had expired. However, when we asked about the December 2000 MOU during some of our visits to locations where SBA and Rural Development were collaborating, some officials at those locations were unfamiliar with it. During the course of our review, neither SBA nor Rural Development officials cited actions taken, past or present, in response to the provisions contained in the MOU. Had SBA and Rural Development implemented the MOU, the agencies would have had a framework to guide and improve upon their collaborative efforts. Based on our analysis, we found that SBA and Rural Development field offices do not have formal incentives to encourage collaboration and do not track the results of their efforts. As mentioned above, one of the key practices we identified in our October 2005 report involved ensuring that the agencies' goals are consistent and that their program efforts are mutually reinforced through strategic and annual performance plans. Specifically, federal programs contributing to the same or similar results should collaborate and use their strategic and annual performance plans as tools to drive their efforts to work together.
Such plans can reinforce accountability for the collaboration by establishing complementary goals and measures for achieving results and aligning them with the goals and measures of the collaborative efforts. SBA and Rural Development's performance goals and measures do not focus on their efforts to work together collaboratively. Specifically, in describing their performance goals for district offices, SBA officials stated that each office has goals for technical assistance, including activities such as training, marketing, and outreach. The officials noted that each SBA district office also has goals and measures for the number of loans to be made in underserved markets, which may include rural areas. While these goals and measures focus on participation in SBA's programs and may encourage offices to partner with others, they do not focus specifically on collaboration with Rural Development. Similarly, Rural Development's program performance measures, particularly for the B&I program, do not focus on collaboration with another agency. Rural Development's goals and measures focus on employment opportunities (i.e., jobs created or saved) and community economic benefits (i.e., value added to a community as a result of the economic impact of Rural Development's programs). Both SBA and Rural Development officials stated that performance goals and measures focused on collaboration could provide an incentive to collaborate. Once established, such goals and measures could provide both agencies with a mechanism to encourage interagency working relationships and reward those efforts already occurring. Additionally, SBA and Rural Development officials at the three locations we visited said that they are not currently tracking the results of some collaborative efforts, such as the joint training of lenders and community outreach sessions. The officials did, however, view these collaborative efforts as beneficial in increasing awareness of each agency's respective programs. According to Rural Development officials in New Mexico, while they are satisfied with the attendance at their "Access to Capital" forums targeted at local economic and political leaders and lenders, they have not been able to document a loan resulting from the forums. Rural Development officials in Nebraska said that they have received phone calls from some lenders after the lenders have attended a joint training session. In these cases, according to the officials, Rural Development has been active in meeting with lenders one-on-one to provide assistance. However, the officials said that they could do a better job of proactively contacting lenders after the training to solicit feedback and determine whether a lender has initiated any new loans as a result of having attended the training session. SBA and Rural Development officials stated that one way to document the benefits of collaboration would be to prepare "success stories" of ventures that SBA and Rural Development had jointly undertaken. The officials further stated that because each agency already prepares success stories based on participation in its individual programs, this practice could be used to document positive benefits stemming from collaborative efforts between the two agencies. Moreover, the officials said that locations where SBA and Rural Development were not currently working together would be more likely to begin doing so if they were made aware of specific, tangible benefits that could be realized through collaboration.
The complementary nature of some SBA loan programs and Rural Development business programs provides a rationale for the agencies to collaborate. SBA and Rural Development officials engaged in collaborative working relationships said that, when collaboration has occurred, they have been able to work together to offer rural borrowers more financing options and better services, as well as to improve efforts to promote economic development in rural areas. However, SBA and Rural Development's collaborative efforts to date have been sporadic and mostly self-initiated by specific officials in each agency's field offices. Officials of each agency worked together frequently in some locations and infrequently in others. In many areas, SBA and Rural Development appear neither to be collaborating at all nor to have formal or informal mechanisms to guide their collaboration. For SBA and Rural Development, working together to encourage economic development in rural areas is not a new concept. Both agencies entered into earlier cooperative agreements to work collaboratively. However, when comparing these past efforts with our criteria for effective interagency collaboration, we found that the agencies could take further steps to facilitate collaboration by establishing and implementing a formal approach. Such an approach could help SBA and Rural Development establish the guidance, direction, and incentive structure needed to bring about a productive working relationship on a more systematic basis. Our previous work in this area shows that adopting key practices—such as defining and articulating a common outcome; specifying roles and responsibilities; establishing a mechanism to monitor, evaluate, and report on results; and reinforcing agency accountability for collaborative efforts—can help federal agencies enhance and sustain their collaborative efforts. One way SBA and Rural Development can adopt these key practices is to enter into a written cooperative agreement and, just as important, implement that agreement and take appropriate steps to monitor and report on results. Moreover, by creating formal incentives, such as performance goals and measures specifically focused on collaboration, or by preparing success stories to document the benefits of their collaborative efforts, SBA and Rural Development can share and publicize information that would help encourage the two agencies to work together. Such an approach can help SBA and Rural Development effectively leverage each other's unique strengths to help improve small business opportunities and promote economic development in rural communities. To improve SBA and Rural Development's collaborative efforts, we recommend that the Administrator of SBA and the Secretary of Agriculture take steps to adopt a formal approach to encourage further collaboration in support of common economic development goals in rural areas. Such steps could include establishing and implementing a written agreement; defining and articulating a common outcome for rural economic development; specifying roles and responsibilities to ensure proper coordination; establishing mechanisms to monitor, evaluate, and report on results; and reinforcing accountability for collaborative efforts. We provided a copy of our draft report to the Acting Administrator of the Small Business Administration and the Secretary of Agriculture for review and comment. Both agencies provided technical comments, which we incorporated into the final report where appropriate.
We are sending copies of this report to other interested congressional committees as well as the Administrator of the Small Business Administration and the Secretary of Agriculture. We also will make copies of this report available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The Small Business Administration (SBA) programs in our scope (see fig. 4) include the major business loan programs—the Basic 7(a) Loan Guaranty, 504/Certified Development Company Loan, and 7(m) Micro Loan programs—as well as the Small Business Investment Company (SBIC) and Rural Lender Advantage Pilot programs. The Department of Agriculture (USDA) Rural Development programs in our scope include the primary business programs: the Business and Industry Guaranteed Loan, Intermediary Relending, Rural Business Enterprise Grant, Rural Business Opportunity Grant, Rural Economic Development Loans and Grants, and Renewable Energy Systems and Energy Efficiency Improvements Guaranteed Loan and Grant programs. In this report, we define collaboration as any joint activity that is intended to produce more public value than can be produced when the agencies act alone. It can include interagency activities that others have previously defined as cooperation, coordination, integration, or networking. To determine the extent to which SBA and Rural Development's primary loan and business programs are complementary and to identify the rationale for SBA and Rural Development to collaborate, we reviewed the mission and structure of SBA and Rural Development offices. We reviewed relevant agency documents and examined laws, regulations, and policies on each agency's loans, grants, and other business programs. We reviewed eligibility requirements and the type of assistance (i.e., direct loan, loan guaranty, grant, etc.), funding levels, and eligible uses of program funds, as well as information about each agency's loan processes and procedures, participation requirements, number of awarded loans and grants, and loan processing times. We also interviewed agency officials on the similarities and differences between the two agencies' primary loan and business programs and whether the similarities may have an effect on collaboration. We reviewed our prior work on interagency collaboration and key practices that can help enhance and sustain collaborative efforts. We obtained input from SBA and USDA agency officials, SBA resource partners, lenders, and nonprofit organizations involved in the rural economic development process on the goals and common outcomes they envision from increased collaboration between SBA and Rural Development. Also, using information collected on the mission and structure of SBA and Rural Development offices and the purpose, eligible use, and terms and conditions of their primary business programs, we assessed whether factors such as complementary mission or task, compatible geographic location and organizational structure, common client base, program overlap and duplication, or similarities and differences in statutory authority provide a rationale for the two agencies to work together.
As collaboration between SBA and USDA Rural Development is not specifically required by law or regulation, we relied on established practices and the views of agency officials and stakeholders in examining the rationale for why SBA and USDA should collaborate. To determine the types of collaborative efforts currently taking place and that have taken place in the past between SBA and Rural Development, we reviewed internal documents, such as memorandums of understanding (MOU) and training documentation, showing ongoing and past collaborative efforts between SBA and Rural Development. We requested that both SBA and Rural Development conduct a query of their respective district offices and state offices regarding all formal or informal efforts to work collaboratively with the other agency. We received responses from about half the SBA district offices and all of the Rural Development state offices that either described the extent of their collaborative efforts with the other agency or reported that there were no collaborative efforts ongoing. Of those SBA and Rural Development district and state offices that reported they were working together, we selected three locations and conducted site visits and interviews with knowledgeable staff at each location to obtain a thorough understanding of ongoing collaborative efforts. We selected the sites to visit based on the reported amount of collaboration and degree of formality of the effort. We defined formality by the presence of a written document, such as an MOU, that served as a guide for collaborative efforts. The goal of our selection approach was to obtain information on a range of collaborative efforts, from frequent and formal to infrequent and informal. The locations that we selected and visited were Lincoln, Nebraska; Bismarck, North Dakota; and Albuquerque, New Mexico. For two of these locations, we also spoke with lenders that have participated in both SBA and Rural Development programs. To determine the types of collaborative efforts that have taken place between SBA and other agencies, and between Rural Development and other agencies, we reviewed documentation describing each collaborative effort. We examined the mechanisms (e.g., contractual work agreement, MOU or other cooperative agreement, statutory provision, etc.) the agencies used to collaborate. Additionally, we interviewed agency officials on their knowledge of any past collaborative efforts. To determine the opportunities to facilitate and remove barriers to more effective collaboration between SBA and Rural Development, we reviewed our prior work on key practices that can help enhance and sustain collaboration and address barriers to more effective collaboration. We also obtained the views and experience of agency officials, SBA resource partners, lenders, and select nonprofit organizations regarding rural economic issues and opportunities and barriers to more effective collaboration. We used certain characteristics, such as personnel at both agencies, budget, training, and management, to evaluate opportunities or barriers to collaboration. We also assessed the potential for Rural Development offices to help market SBA programs and services by making information available through their field offices and whether SBA could play a similar role for Rural Development programs. Finally, we compared SBA and Rural Development's policies, practices, and performance goals with key practices that can help federal agencies enhance and sustain their collaborative efforts.
We conducted this performance audit from October 2007 to September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Both SBA and USDA Rural Development have several loan and business programs that provide funds to start or expand businesses in rural areas. Through these programs, the two agencies work with individual entrepreneurs, existing or start-up small businesses, and state, local, and tribal governments, as well as cooperatives and nonprofit agencies, to increase economic opportunity and improve the quality of life for people in rural communities across the country. The following sections describe the primary SBA loan programs and Rural Development business programs. Basic 7(a) Loan Guaranty Program serves as the primary business loan program to help qualified small businesses obtain financing. It can be used for a variety of general business purposes, including working capital, machinery and equipment, land and building (including purchase, renovation, and new construction), leasehold improvements, and certain debt refinancing. SBA sets the guidelines for the loans and backs each loan with a guaranty, while lenders make the loans to the small businesses. SBA offers multiple variations of the Basic 7(a) Loan Program to accommodate targeted needs. For example, the Patriot Express Loan Program, which is specifically geared toward veterans, members of the military community, and their spouses, and the Community Express Loan Program, which is aimed at women, minorities, and veterans in underserved communities who want to start or expand a small business, are both expedited versions of the Basic 7(a) Loan Program. 504/Certified Development Company (CDC) Loan Program provides long-term, fixed-rate financing to small businesses to acquire real estate, machinery, or equipment for expansion or modernization. The 504/CDC Loan Program cannot be used for working capital or inventory, consolidating or repaying debt, or refinancing. Typically, a 504/CDC project includes a loan secured by a lien from a private-sector lender, a loan secured by an additional lien from a certified development company (covering up to 40 percent of the total cost), and a contribution of at least 10 percent equity from the borrower. CDCs are private, nonprofit corporations set up to contribute to the economic development of their communities or regions. The program is designed to enable small businesses to create and retain jobs—the CDC's portfolio must create or retain one job for every $35,000 provided by the SBA. 7(m) Micro Loan Program provides short-term loans of up to $35,000 to small businesses and not-for-profit child-care centers for working capital or the purchase of inventory, supplies, furniture, fixtures, machinery, or equipment. The average loan size is about $13,000, and proceeds can be used for typical business purposes such as working capital, machinery and equipment, inventory, and leasehold improvements. The proceeds cannot be used to pay existing debts or to purchase real estate.
Under this program, SBA makes funds available to intermediaries (nonprofit community-based organizations with experience in lending) that, in turn, make loans directly to entrepreneurs. The intermediary lenders also provide entrepreneurs with management and technical assistance. SBIC Program provides venture capital to small independent businesses, both new and already established. The structure of the program is unique in that SBICs are privately owned and managed investment funds, licensed and regulated by SBA, that use their own capital plus funds borrowed with an SBA guarantee to make equity capital and long-term loans to qualifying small businesses. In addition to investments and loans, SBICs also provide management assistance to small businesses. Small/Rural Lender Advantage Pilot Program, a part of SBA's 7(a) loan program, is aimed at encouraging rural lenders to finance small businesses by streamlining the application and approval processes. Specifically, the Small/Rural Lender Advantage offers a simplified application form for loans of $350,000 or less, the ability to apply online, expedited loan processing, and limited documentation requirements. SBA will guarantee 85 percent of the loan amount for loans of $150,000 and less and 75 percent of loans above $150,000. It is part of a broader initiative to boost economies in areas that face unique challenges due to factors such as declining population or high unemployment. The pilot program was initiated and tested in SBA's Region VIII (North Dakota, South Dakota, Colorado, Wyoming, Utah, and Montana) in January 2008. Following enhancements to further streamline it, SBA is now extending the initiative to Region V, which covers Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin. SBA also plans to expand the initiative nationwide by the end of fiscal year 2008. Business and Industry (B&I) Guaranteed Loan Program (often referred to as the B&I program) provides financial assistance to rural businesses in the form of a loan guarantee for up to 80 percent of the loan amount. Borrowers work with a local lending agency (e.g., a bank or credit union), which in turn seeks a guarantee from Rural Development. A borrower may be an individual; a cooperative organization, corporation, partnership, or other legal entity organized on a profit or nonprofit basis; an American Indian tribe or other federally recognized tribal group; or a public body (i.e., a town, community, state agency, or authority). Loan purposes must be consistent with the purpose of the program, which is to improve, develop, or finance business, industry, and employment and improve the economic climate in rural communities. They include, but are not limited to, the following: business and industrial acquisitions under certain conditions; business conversion, enlargement, repair, modernization, or development; purchase and development of land, easements, buildings, or facilities; and purchase of equipment, leasehold improvements, machinery, supplies, or inventory or working capital. The total loan amount available to any one borrower under this program is limited to $25 million. An exception to the limit, for loans up to $40 million, may be granted for rural cooperative organizations that process value-added agricultural commodities. B&I loans are available to borrowers in rural areas, which include all areas other than cities or towns of more than 50,000 people and the contiguous and adjacent urbanized area of such cities or towns.
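As a rough illustration of the B&I dollar and rural-area limits just described, the following sketch is hypothetical only; the program's regulations impose additional eligibility criteria not modeled here, and the adjacent-urbanized-area test is omitted.

```python
# Illustrative only: B&I rural-area and dollar limits cited above.
def bi_within_limits(loan_amount, town_population,
                     value_added_ag_cooperative=False):
    """Check a B&I request against the limits described in this report.

    Rural area: outside cities or towns of more than 50,000 people
    (the adjacent-urbanized-area test is omitted from this sketch).
    Cap: $25 million, or $40 million by exception for rural cooperatives
    processing value-added agricultural commodities.
    """
    if town_population > 50_000:
        return False, "not a rural area for B&I purposes"
    cap = 40_000_000 if value_added_ag_cooperative else 25_000_000
    if loan_amount > cap:
        return False, f"exceeds the ${cap:,} limit"
    return True, "within B&I limits"

print(bi_within_limits(8_000_000, 12_500))
# -> (True, 'within B&I limits')
print(bi_within_limits(30_000_000, 12_500))
# -> (False, 'exceeds the $25,000,000 limit')
```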
The B&I Guaranteed Loan Program, with a fiscal year 2007 funding level of $953 million, is Rural Development's largest business program. Intermediary Relending Program (IRP) finances business and economic development activities that seek to create or retain jobs in disadvantaged and remote communities. Under the IRP, loans are provided to local organizations (intermediary lenders) for the establishment of revolving loan funds that provide loans to ultimate recipient borrowers. The revolving loan funds are used to assist borrowers with financing business facilities and community development projects. Projects must be located in a rural area, which for this program excludes cities with a population of 25,000 or more. Some examples of eligible projects are as follows: business and industrial acquisitions under certain conditions; business construction, conversion, enlargement, and repair; purchase and development of land, easements, rights-of-way, buildings, or facilities; purchase of equipment, leasehold improvements, machinery, and supplies; start-up operating costs and working capital; transportation services; and debt refinancing. Intermediary lenders may first borrow up to $2 million and then up to $1 million each time thereafter, not to exceed a total aggregate loan amount of $15 million. An ultimate recipient borrower may borrow the lesser of $250,000 or 75 percent of the total cost of the ultimate recipient's project for which the loan is being made. Private nonprofit corporations, public entities (i.e., towns, communities, state agencies, and authorities), American Indian tribes or other federally recognized tribal groups, and some cooperatives are eligible to become intermediaries. Borrowers that are generally eligible to apply for loans from intermediary lenders include individuals, corporations or partnerships, trusts or other profit-oriented or nonprofit organizations, and public entities. Rural Business Enterprise Grant Program (RBEG) provides grants to public bodies, including American Indian tribes and other federally recognized tribal groups, and private nonprofit corporations to finance and facilitate the development of small and emerging private businesses in rural areas (i.e., any area other than a city or town that has a population of greater than 50,000 and the urbanized area contiguous and adjacent to such a city or town). Small and emerging private businesses are those that will employ 50 or fewer new employees and have less than $1 million in projected gross revenues. Grants may be used for easements and rights-of-way; construction, conversion, or modernization of buildings, facilities, machinery, roads, parking areas, utilities, and pollution control and abatement; loans for start-up operating costs and working capital; technical assistance for private business enterprises; training, when necessary, in connection with technical assistance; and production of television programs to provide information on issues of importance to farmers and rural residents. There is no maximum level of grant funding under RBEG; however, smaller projects are given higher priority.
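The IRP borrowing limits described above lend themselves to a simple illustration. The sketch below is a hypothetical calculation only; the program's regulations contain additional requirements that are not modeled.

```python
# Illustrative only: IRP borrowing limits cited above.
def irp_ultimate_recipient_max(project_cost):
    """Lesser of $250,000 or 75 percent of total project cost."""
    return min(250_000, 0.75 * project_cost)

def irp_intermediary_can_borrow(prior_irp_debt, requested):
    """First loan up to $2 million, later loans up to $1 million each,
    subject to a $15 million aggregate cap."""
    per_loan_cap = 2_000_000 if prior_irp_debt == 0 else 1_000_000
    return (requested <= per_loan_cap
            and prior_irp_debt + requested <= 15_000_000)

print(irp_ultimate_recipient_max(200_000))    # 150000.0 (75% binds)
print(irp_ultimate_recipient_max(1_000_000))  # 250000 (cap binds)
print(irp_intermediary_can_borrow(0, 2_000_000))           # True
print(irp_intermediary_can_borrow(14_500_000, 1_000_000))  # False
```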
Rural Business Opportunity Grant Program provides grants to public entities, nonprofit corporations, cooperatives, and American Indian tribes and other federally recognized tribal groups for training, technical assistance, and planning activities in rural areas (i.e., any area other than a city or town that has a population of greater than 50,000, and the urbanized area contiguous and adjacent to such a city or town). Grants may be used to identify and analyze business opportunities that will use local rural materials or human resources; identify, train, and provide technical assistance to existing or prospective rural entrepreneurs and managers; establish business support centers; conduct local community or multicounty economic development planning; establish centers for training, technology, and trade; and conduct leadership development training. The maximum grant for a project serving a single state is $50,000. The maximum grant for a project serving two or more states is $150,000. Rural Economic Development Loan and Grant Program (REDLG) provides funding to rural projects through local utility organizations. Under the loan program, Rural Development provides zero-interest loans to lending utility organizations that, in turn, make loans to for-profit or nonprofit businesses and public entities (i.e., ultimate recipient borrowers) for projects that will create and retain employment in rural areas. The ultimate recipient borrower must repay the lending utility directly, and the lending utility is responsible for repayment to Rural Development. Under the grant program, Rural Development provides grant funds to local utility organizations, which may only use the funding to establish revolving loan funds. Loans are made from the revolving loan fund to projects that will create or retain jobs in rural areas. When the revolving loan fund is terminated, the grant is repaid to Rural Development. Eligible project costs include start-up venture costs, including working capital; project feasibility studies; and advanced telecommunications services and computer networks for medical, educational, and job training services. The maximum loan and grant to any eligible recipient under the Rural Economic Development Loan and Grant Program is established on an annual basis. Renewable Energy Systems and Energy Efficiency Improvements Guaranteed Loan and Grant Program (renamed the Rural Energy for America Program) provides loan guarantees and grants to eligible small businesses, farmers, and ranchers to assist in developing renewable energy systems and to make energy efficiency improvements. The types of energy projects include biofuel, wind, solar, geothermal, and hydrogen-based projects. Projects must be located in a rural area (i.e., any area other than cities or towns of greater than 50,000 population and the immediate and adjacent urbanized areas of such cities or towns). Under the loan program, borrowers work with local lenders in applying for a loan guaranty of up to 85 percent of the loan, depending on the amount of the loan. The loan cannot exceed 50 percent of the project cost, and the project must use commercially proven technology. The maximum loan amount is $10 million per project, and the minimum is $5,000. Grants are limited to a maximum of $500,000 and a minimum of $2,500 for renewable energy systems, and a maximum of $250,000 and a minimum of $1,500 for energy efficiency improvements. Eligible applicants are agricultural producers or rural small businesses.
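The loan and grant bounds for the renewable energy program can be illustrated with a short calculation. The sketch below is hypothetical and uses only the figures cited above; other regulatory limits on grant share and eligible costs are omitted, and the $3 million wind project is an assumed example.

```python
# Illustrative only: loan and grant bounds cited above.
def energy_loan_bounds(project_cost):
    """Guaranteed loan cannot exceed 50% of project cost, with a
    $5,000 program minimum and a $10 million per-project maximum."""
    max_loan = min(0.50 * project_cost, 10_000_000)
    return (5_000, max_loan) if max_loan >= 5_000 else None

GRANT_LIMITS = {
    # category: (minimum grant, maximum grant)
    "renewable_energy_system": (2_500, 500_000),
    "energy_efficiency": (1_500, 250_000),
}

cost = 3_000_000  # hypothetical wind project
print(energy_loan_bounds(cost))                 # (5000, 1500000.0)
print(GRANT_LIMITS["renewable_energy_system"])  # (2500, 500000)
```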
Small businesses must meet SBA’s small business size standards. Small business, for profit corporation, partnership, or proprietorship that will create and/or retain jobs through long- term financing. Start-up and existing micro business that can meet basic lending requirement. Borrowers may be required to attend meetings/classes with technical assistance providers. including individual, cooperative, corporation, partnership, tribal group, government entity, and agency. Any legal entity including individual, public, and private organization, government entity, and agency. Rural electric cooperatives and rural telephone cooperatives. Rural small business, individual, agricultural producer, or group of agriculture producers. Must meet SBA’s small business size standards. Through CDCs, SBA can fund up to 40% of the total project costs, from $50,000 to $1,500,000, or in certain cases up to $2,000,000. Maximum loan amount is $35,000. Development can guarantee up to $25 million. Maximum loan amount is $740,000. Maximum renewable energy grant is $500,000. Maximum grant amount is $300,000. 70%-$5 to $10 million 60%-over $10 million No minimum loan. Intermediaries can make loans to qualified applicants for up to 75% of eligible project. Maximum loan is $250,000. Maximum energy efficiency grant is $250,000. Subject to change annually. Minimum for both grants is $10,000. Maximum loan is $10,000,000. Long-term financing of real estate and equipment. Working capital, inventory, and small equipment. working capital, hard asset acquisition, real estate, equipment and limited refinancing. Up to 50% of loan. New and existing business, equipment purchase, or lease and working capital. Business start-up or expansion projects that create rural jobs. Grants may only establish a revolving loan fund. Purchase equipment, construction energy audits, feasibility studies, business plans, and permit/professional service fees. Renewable Energy Systems & Energy Efficiency Improvements Guaranteed Loan and Grant Program 10–45 business days. 10–60 business days depending on scope of project. Subject to in-state loan approval limit. 10–45 business days. 3 months to 1 year. Subject to national funding competition. Subject to national funding competition. CDC origination fee of 2.25% portion and .5% on bank portion. Nominal fees to cover costs of loan closing. guaranty fee not to exceed 2% of guaranteed portion of the loan and .25% annual renewal fee. 1% origination fee of intermediary loan amount plus closing costs. Varies and is negotiated with cooperatives. Typically, 1% of guaranteed portion of the loan and .125% annual servicing fee. Available anywhere. An SBA program administered by a CDC. Commercial lender required. Available anywhere. A direct loan from an SBA intermediary. rural areas with a population of less than 50,000. Generally negotiated between the commercial lending institution and the borrower. Available only in rural areas with a population of less than 25,000. Rural areas with populations of 2,500 or less are given priority. The rural utility cooperatives provide loans to small businesses. Available only in rural areas with a population of less than 50,000. Requires 75% minimum applicant match for grants, and 50% maximum project level for guaranteed loans. H.R. 6124, the Food, Conservation, and Energy Act of 2008, (the 2008 Farm Bill) became law on June 18, 2008. 
The 2008 Farm Bill contains 15 titles covering, among other things, support for commodity crops, horticulture and livestock production, conservation, nutrition, trade and food aid, agricultural research, farm credit, rural development, energy, forestry, and other related programs. The 2008 Farm Bill guides most federal farm and food policies through fiscal year 2012. Section 6028 of the 2008 Farm Bill requires the Secretary of Agriculture to establish a new Rural Collaborative Investment Program to support comprehensive regional investment strategies for achieving rural competitiveness. The purpose of the program is to provide rural areas with a flexible investment vehicle, allowing for local control with federal oversight, assistance, and accountability; provide rural areas with incentives and resources to develop and implement comprehensive strategies for achieving regional competitiveness, innovation, and prosperity; foster multisector collaborations that will optimize the asset-based competitive advantages of rural regions, with particular emphasis on innovation, entrepreneurship, and the creation of quality jobs; foster collaborations necessary to provide the professional technical expertise, institutional capacity, and economies of scale that are essential for the long-term competitiveness of rural regions; and better use USDA and other federal, state, and local governmental resources, and leverage those resources with private, nonprofit, and philanthropic investments, in order to achieve measurable community and economic prosperity, growth, and sustainability. The Act also directed the Secretary to establish within USDA the National Rural Investment Board. The Board's duties are to provide advice to regional boards on issues, best practices, and emerging trends relating to rural development; to provide advice to the Secretary and the National Institute on Regional Rural Competitiveness and Entrepreneurship, also created by the Act, on the development and execution of the program; and to provide advice to the Secretary on, and subsequently review, the design, development, and execution of the National Rural Investment Plan. The National Rural Investment Plan is expected to, among other things, create a framework to encourage and support a more collaborative and targeted rural investment portfolio in the United States; and cooperate with state and local governments, organizations, and entities to create and enhance the pool of resources committed to rural community and economic development. Section 6028 of the 2008 Farm Bill is one of many actions taken by Congress over the years to encourage the coordination of rural policies and programs. It further demonstrates Congress' commitment to promoting rural entrepreneurship and community development through collaboration across federal, state, and local agencies. A total of $135 million in funding has been authorized for the new program. In addition to the individual named above, Paul Schmidt, Assistant Director; Charles Adams; Michelle Bowsky; Tania Calhoun; Emily Chalmers; Elizabeth Curda; Ronald Ito; Marc Molino; and Carl Ramirez made key contributions to this report.
The Small Business Administration (SBA) and the Rural Development offices of the U.S. Department of Agriculture both work in rural areas to foster economic development by promoting entrepreneurship and community development. This report discusses (1) the complementary nature of some SBA and Rural Development programs and the extent to which it provides a rationale for the agencies to collaborate, (2) past and current efforts by SBA and Rural Development to work together and with other agencies, and (3) opportunities for the agencies to improve their collaborative efforts. In completing its work, GAO analyzed agency documentation and prior reports on collaboration, conducted site visits at locations where SBA and Rural Development were working together, and interviewed agency and selected economic development officials. The complementary nature of some SBA loan programs and Rural Development business programs provides a rationale for the agencies to collaborate. SBA and Rural Development have similar economic development missions, and their programs provide financing for similar purposes, including start-up and expansion projects, equipment purchases, and working capital for small businesses. According to SBA and Rural Development officials currently involved in collaborative working relationships, working together allows the agencies to leverage the unique strengths of each other's programs, increase the number of financing options available to borrowers in rural areas, and ultimately better promote economic development in these areas. However, collaboration between SBA and Rural Development to date has been sporadic and mostly self-initiated by officials in field offices. GAO found that the extent of the collaborative efforts and use of formal agreements such as memorandums of understanding (MOU) varied across locations. The two agencies worked together frequently in a few locations, infrequently in others, and not at all in many locations. The SBA and Rural Development offices in North Dakota that GAO visited collaborated frequently and had formal agreements in place. Officials there established an MOU with other community development organizations to provide "one-stop" shopping assistance to borrowers at a single location. The SBA and Rural Development offices in Nebraska and New Mexico that GAO visited worked with each other less frequently and more informally, conducting community outreach sessions and holding periodic meetings and joint training sessions. But many other locations—about half of SBA and Rural Development's field offices—did not appear to be collaborating at all or to have an established framework to facilitate collaboration. Opportunities exist for SBA and Rural Development to improve their collaborative efforts. In an October 2005 report, GAO identified key practices that could help federal agencies enhance and sustain their collaborative efforts. In comparing SBA and Rural Development's efforts with these criteria, GAO found that the agencies could take steps to improve their efforts by implementing a more formal approach to encourage collaboration. This approach would provide the agencies with a mechanism that reflected several of GAO's key practices—to define and articulate a common outcome, agree on roles and responsibilities, monitor key progress and results, and reinforce accountability for collaborative efforts.
With such an approach, SBA and Rural Development could more effectively leverage each other's unique strengths and help to improve small business opportunities in rural communities.
In August 2014, we reported that, on the basis of our review of land-use agreement data for fiscal year 2012, VA did not maintain reliable data on the total number of land-use agreements, nor did it accurately estimate the revenues those agreements generated. According to the land-use agreement data provided to us from VA's Capital Asset Inventory (CAI) system—the system VA utilizes to record land-use agreements—VA reported that it had over 400 land-use agreements generating over $24.8 million in estimated revenues for fiscal year 2012. However, when one of VA's administrations—the Veterans Health Administration (VHA)—initiated steps to verify the accuracy and validity of the data it originally provided to us, it made several corrections to the data that raised questions about their accuracy, validity, and completeness. Examples of these corrections include the following: at one medical center, one land-use agreement was recorded 37 times, once for each building listed in the agreement; and VHA also noted that 13 agreements included in the system should have been removed because those agreements were terminated prior to fiscal year 2012. At the three VA medical centers we reviewed, we also found examples of errors in the land-use agreement data. Examples of these errors include the following: VHA did not include 17 land-use agreements for the medical centers in New York and North Chicago, collectively. VHA incorrectly estimated the revenues it expected to collect for the medical center in West Los Angeles. VHA revised its estimated revenues from all land-use agreements in fiscal year 2012 from about $700,000 to over $810,000. However, our review of VA's land-use agreements at this medical center indicated that the amount that should have been reflected in the system was approximately $1.5 million. VA policy requires that CAI be updated quarterly until an agreement ends. VA's approach to maintaining the data in CAI relies heavily on timely and accurate data entry by staff at each local medical center; however, we found that VA did not have a mechanism to ensure that the data in CAI are updated quarterly as required and that the data are accurate, valid, and complete. By implementing a mechanism that will allow it to assess whether medical centers have entered the appropriate land-use agreement data into CAI in a timely manner, and working with the medical centers to correct the data as needed, VA would be better positioned to reliably account for land-use agreements and the associated revenues that they generate. In our August 2014 report, we also found weaknesses in the billing and collection processes for land-use agreements at three selected VA medical centers due primarily to ineffective monitoring. Inadequate billing: We found inadequate billing practices at all three medical centers we visited. Specifically, we found that VA had billed partners in 20 of 34 revenue-generating land-use agreements for the correct amount; however, the partners in the remaining 14 agreements were not billed for the correct amount. On the basis of our analysis of the agreements, we found that VA underbilled by almost $300,000 of the approximately $5.3 million that was due under the agreements, a difference of about 5.6 percent. For most of these errors, we found that VA did not adjust the revenues it collected for inflation.
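The inflation errors described above lend themselves to an automated check. The sketch below escalates a base rent by each year's inflation adjustment and flags agreements billed below the escalated amount; the escalation clause, rates, and agreement records shown are hypothetical assumptions for illustration, not VA's actual billing rules or data.

```python
# Illustrative check: compare amounts billed against rent escalated for
# inflation, as the agreements' terms would require. All data are made up.

def amount_due(base_rent: float, annual_adjustments: list[float]) -> float:
    """Rent due after applying each year's inflation adjustment in turn."""
    due = base_rent
    for rate in annual_adjustments:
        due *= 1 + rate
    return due

agreements = [
    # (agreement id, base annual rent, adjustments since signing, billed)
    ("A-101", 100_000.0, [0.02, 0.03, 0.02], 100_000.0),  # never escalated
    ("A-102", 250_000.0, [0.02, 0.02], 260_100.0),        # billed correctly
]

for agreement_id, base, adjustments, billed in agreements:
    due = amount_due(base, adjustments)
    if billed + 0.01 < due:  # small tolerance for rounding
        print(f"{agreement_id}: underbilled by ${due - billed:,.2f}")
```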
We also found that the West Los Angeles medical center inappropriately coded the billing so that the proceeds of its sharing agreements, which totaled over $500,000, were sent to its facilities account rather than the medical-care appropriations account that benefits veterans, as required. VA officials stated that the department did not perform systematic reviews of the billings and collections practices at the three medical centers, which we discuss in more detail later. A mechanism for ensuring transactions are promptly and accurately recorded could help VA collect revenues that its sharing partners owe. Opportunities for improved collaboration: At New York and North Chicago, we found that VA could improve collaboration among key internal staff, which could enhance the collection of proceeds for its land-use agreements. For example, at the New York site, the VA fiscal office created spreadsheets to improve the revenue collection for more than 20 agreements. However, because the contracting office failed to inform the fiscal office of the new agreements, the fiscal office did not have all of the renewed contracts or amended agreements that could clearly show the rent due. According to a VA fiscal official at the New York office, repeated requests were made to the contracting office for these documents; however, the contracting office did not respond to these requests by the time of our visit in January 2014. By taking additional steps to foster a collaborative environment, VHA could improve its billing and collection practices. No segregation of duties: On the basis of a walkthrough of the billing and collections process we conducted during our field visits, and an interview with a West Los Angeles VA official, we found that West Los Angeles did not properly segregate duties. Specifically, the office responsible for monitoring agreements also prepares invoices, receives collections, and submits the collections to the agent cashier for deposit. Because of the lack of appropriate segregation of duties at West Los Angeles, the revenue-collection process is more vulnerable to potential fraud and abuse. This concentration of roles and responsibilities in one office was not typical of the sites we examined. At the other medical centers we visited, these same activities were separated among a few offices, as outlined in VA's guidance on deposits. VA headquarters officials informed us that program officials located at VA headquarters do not perform any systematic review to evaluate the medical centers' processes related to billing and collections at the local level. VA officials further informed us that VHA headquarters also lacks critical data—the actual land-use agreements—that would allow it to routinely monitor billing and collection efforts for land-use agreements across the department. One VA headquarters official told us that the agency is considering the merits of dispatching small teams of staff from program offices located at VA's headquarters to assist the local offices with activities such as billing and collections. However, as of May 2014, VA had not implemented this proposed action or any other mechanism for monitoring the billing and collections activity at the three medical centers. Until VA performs systematic reviews, VA will lack assurance that the three selected medical centers are taking all required actions to bill and collect revenues generated from land-use agreements.
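The segregation-of-duties weakness described above also lends itself to a simple automated test: flag any office that performs more than one of the three functions (billing, receiving collections, and depositing) at the same medical center. The office assignments below are hypothetical, and the sketch illustrates the control concept rather than VA's actual procedures.

```python
# Illustrative segregation-of-duties check over hypothetical assignments.
from collections import defaultdict

assignments = [
    # (medical center, function, responsible office)
    ("West Los Angeles", "billing", "Monitoring Office"),
    ("West Los Angeles", "collections", "Monitoring Office"),
    ("West Los Angeles", "deposit", "Monitoring Office"),
    ("New York", "billing", "Fiscal Office"),
    ("New York", "collections", "Revenue Office"),
    ("New York", "deposit", "Agent Cashier"),
]

functions_by_office = defaultdict(set)
for center, function, office in assignments:
    functions_by_office[(center, office)].add(function)

for (center, office), functions in sorted(functions_by_office.items()):
    if len(functions) > 1:  # one office holding multiple duties is a red flag
        print(f"{center}: {office} performs {sorted(functions)}; "
              f"duties are not segregated")
```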
In our August 2014 report, we found that VA did not effectively monitor many of its land-use agreements at the New York and West Los Angeles medical centers. We found problems with unenforced agreement terms, expired agreements, and instances where land-use agreements did not exist. Examples include the following: In West Los Angeles, VA waived the revenues in an agreement with a nonprofit organization—$250,000 in fiscal year 2012 alone—due to financial hardship. However, VA policy does not allow revenues to be waived. In New York, one sharing partner—a local school of medicine—with seven expired agreements remained on the property and occupied the premises without written authorization during fiscal year 2012. Our review of VA's policy on sharing agreements showed that VA did not have any specific guidance on how to manage agreements before they expired, including the renewal process. In New York, we observed more antennas on the roof of a VA facility than the New York medical center had recorded in CAI. After we brought this observation to their attention, New York VA officials researched the owners of these antennas and could not find written agreements or records of payments received for seven antennas. According to New York VA officials, now that they are aware of the antennas, they will either establish agreements with the tenants or disconnect the antennas. The City of Los Angeles has used 12 acres of VA land for recreational use since the 1980s without a signed agreement or payments to VA. An official said that VA cannot negotiate agreements in this case due to an ongoing lawsuit, brought on behalf of homeless veterans, concerning VA's land-use agreement authority. We found that VA had not established mechanisms to monitor the various agreements at the West Los Angeles and New York medical centers. VA officials stated that they had not performed systematic reviews of these agreements and had not established mechanisms to enable them to do so. Without a mechanism for accessing land-use agreements to perform needed monitoring activities, VA lacks reasonable assurance that the partners are meeting the agreed-upon terms, agreements are renewed as appropriate, and agreements are documented in writing, as required. This is particularly important if sharing partners are using VA land for purposes that may increase VA's liability risk (e.g., an emergency situation that might occur at the park and fields in the city of Los Angeles). Finally, with lapsed agreements, VA not only forgoes revenue, but it also misses opportunities to provide additional services to veterans in need of assistance and to enhance its operations. Our August 2014 report made six recommendations to the Secretary of Veterans Affairs to improve the quality of the data collected on specific land-use agreements (i.e., sharing, outleases, licenses, and permits), enhance the monitoring of its revenue process and of its agreements, and improve the accountability of VA in this area.
Specifically, we recommended that VA develop a mechanism to independently verify the accuracy, validity, and completeness of VHA data for land-use agreements in CAI; develop mechanisms to monitor the billing and collection of revenues for land-use agreements to help ensure that transactions are promptly and accurately recorded at the three medical centers; develop mechanisms to foster collaboration between key offices to improve billing and collections practices at the New York and North Chicago medical centers; develop mechanisms to access and monitor the status of land-use agreements to help ensure that agreement terms are enforced, agreements are renewed as appropriate, and all agreements are documented in writing as required, at the selected New York and West Los Angeles medical centers; develop a plan for the West Los Angeles medical center that identifies the steps to be taken, timelines, and responsibilities in implementing segregation of duties over the billing and collections process; and develop guidance on managing expiring agreements at the three medical centers. After reviewing our draft report, VA concurred with all six of our recommendations. VA's comments are provided in full in our August 2014 report. In November 2014, VA provided us an update on the actions it is taking to respond to these recommendations in our August 2014 report. These actions include (1) drafting CAI changes to improve data integrity and to notify staff of expiring or expired agreements, (2) updating guidance and standard operating procedures for managing land-use agreements and training staff on the new guidance, and (3) transitioning oversight and operations of the West Los Angeles land-use agreement program to the regional level. If implemented effectively, these actions should improve the quality of the data collected on specific land-use agreements, enhance the monitoring of VA's revenue process and agreements, and improve accountability for these agreements. Chairman Coffman, Ranking Member Kuster, and members of the subcommittee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. For further information on this testimony, please contact Stephen Lord at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Matthew Valenta, Assistant Director; Carla Craddock; Marcus Corbin; Colin Fallon; Olivia Lopez; and Shana Wallace. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA manages one of the nation's largest federal property portfolios. To manage these properties, VA uses land-use authorities that allow VA to enter into various types of agreements for the use of its property in exchange for revenues or in-kind considerations. GAO was asked to examine VA's use of land-use agreements. This report addresses the extent to which VA (1) maintains reliable data on land-use agreements and the revenue they generate, (2) monitors the billing and collection processes at selected VA medical centers, and (3) monitors land-use agreements at selected VA medical centers. GAO analyzed data from VA's database on its land-use agreements for fiscal year 2012, reviewed agency documentation, and interviewed VA officials. GAO also visited three medical centers to review the monitoring of land-use agreements and the collection and billing of the associated revenues. GAO selected medical centers with the largest number of agreements or highest amount of estimated revenue. The site visit results cannot be generalized to all VA facilities. According to the Department of Veterans Affairs' (VA) Capital Asset Inventory system—the system VA utilizes to record land-use agreements and revenues—VA had hundreds of land-use agreements with tens of millions of dollars in estimated revenues for fiscal year 2012, but GAO's review raised questions about the reliability of those data. For example, one land-use agreement was recorded 37 times, once for each building listed in the agreement; 13 agreements terminated before fiscal year 2012 had not been removed from the system; and more than $240,000 in revenue from one medical center had not been recorded. VA relies on local medical center staff to enter data in a timely and accurate manner, but lacks a mechanism for independently verifying the data. Implementing such a mechanism and working with medical centers to make corrections as needed would better position VA to reliably account for its land-use agreements and the associated revenues they generate. GAO found weaknesses in the billing and collection processes for land-use agreements at three selected VA medical centers due primarily to ineffective monitoring. For example, VA incorrectly billed its sharing partners for 14 of 34 agreements at the three centers, which resulted in VA not billing $300,000 of the nearly $5.3 million owed. In addition, at the New York center, VA had not billed a sharing partner for several years' rent that totaled over $1 million. VA began collections after discovering the error; over $200,000 was outstanding as of April 2014. VA stated that it did not perform systematic reviews of the billing and collection practices at the three centers and had not established mechanisms to do so. VA officials at the New York and North Chicago centers stated that information on the status of agreements is also not shared with the offices that perform billing in a timely manner, due to a lack of collaboration. Until VA addresses these issues, VA lacks assurance that it is collecting the revenues owed by its sharing partners. VA did not effectively monitor many of its land-use agreements at two of the centers. GAO found problems with unenforced agreement terms, expired agreements, and instances where land-use agreements did not exist. Examples include the following: In West Los Angeles, VA waived the revenues in an agreement with a nonprofit organization—$250,000 in fiscal year 2012 alone—due to financial hardship. However, VA policy does not allow revenues to be waived.
In New York, one sharing partner—a local school of medicine—with seven expired agreements remained on the property and occupied the premises without written authorization during fiscal year 2012. The City of Los Angeles has used 12 acres of VA land for recreational use since the 1980s without a signed agreement or payments to VA. An official said that VA cannot negotiate agreements due to an ongoing lawsuit brought on behalf of homeless veterans about its land-use agreement authority. VA does not perform systematic reviews and has not established mechanisms to do so, thus hindering its ability to effectively monitor its agreements and the use of its properties. GAO is making six recommendations to VA, including recommendations to improve the quality of its data, foster collaboration between key offices, and enhance monitoring. VA concurred with the recommendations.
Under SAFETEA-LU, FTA's primary source of funding for new fixed-guideway capital projects or extensions to existing fixed-guideway-transit systems was the Capital Investment Grant program. Within the Capital Investment Grant program, project sponsors typically applied for funding as either a New Starts or Small Starts project. FTA's New Starts projects under SAFETEA-LU were defined as new fixed-guideway capital projects, or extensions to existing fixed-guideway capital projects, with a total capital cost of $250 million or more or a Capital Investment Grant program contribution of $75 million or more. The Small Starts program was created by SAFETEA-LU in 2005 to provide a more streamlined evaluation and rating process for lower-cost and less complex projects, defined as new fixed-guideway projects, extensions to fixed guideways, or corridor-based bus projects whose estimated capital cost was under $250 million and whose Capital Investment Grant program contribution was under $75 million. Within the Small Starts program, as defined in SAFETEA-LU, FTA created a category for very low cost Small Starts projects, known as Very Small Starts. These projects must contain the same elements as Small Starts projects and must also (1) be located in corridors with more than 3,000 existing riders per average weekday who will benefit from the proposed project, (2) have a total capital cost of less than $50 million for all project elements, and (3) have a per-mile cost of less than $3 million, excluding rolling stock (such as train cars and buses). As part of the application process, sponsors of New Starts, Small Starts, and Very Small Starts projects are expected to identify local sources of funding to contribute to the project along with federal funding provided through both the Capital Investment Grant program and potentially other sources of federal funding. The steps in the development process depend on whether a project is a New Starts project or a Small or Very Small Starts project (see fig. 1). New Starts. Under SAFETEA-LU, sponsors of New Starts projects were required by statute to go through a planning and development process. In the alternatives analysis phase, project sponsors identified the transportation needs in a specific corridor and evaluated a range of alternatives to address the locally identified problems in that corridor. Project sponsors completed the alternatives analysis phase by selecting a locally preferred alternative (LPA). Subsequently, during the preliminary-engineering phase, project sponsors refined the design of the locally preferred alternative and its estimated costs, benefits, and impacts. Further, under the National Environmental Policy Act of 1969 (NEPA), as amended, and implementing regulations, New Starts project sponsors were required to complete the NEPA environmental review process to receive Major Capital Investment program funding. When the preliminary-engineering phase was completed and federal environmental requirements were satisfied, FTA could approve the project's advancement into final design if the project obtained an acceptable rating under the statutory evaluation criteria and met other readiness requirements. For a project to receive funding, FTA needed to recommend it for a full funding grant agreement (FFGA) in the President's budget. Small Starts. Under SAFETEA-LU, the development process for Small Starts was condensed by combining the preliminary-engineering and final-design phases into one "project development" phase.
When projects applied to enter project development, FTA evaluated and rated them according to the statutory criteria. Under SAFETEA-LU, there were fewer statutory criteria specified for Small Starts projects compared to New Starts projects. Either using annual appropriations or existing FTA appropriations that remained available, FTA provided funding for Small Starts projects in one of two ways: through project-construction grant agreements (PCGA) or, when the Small Starts funding request was less than $25 million, single-year construction grants. For a project to receive funding, FTA needed to recommend it in the President's budget. Very Small Starts. Very Small Starts projects also progressed through a single project-development phase and were evaluated and rated on the same project criteria as Small Starts projects. However, they qualified for automatic medium or better ratings, which required submittal of less data to FTA, because they had sufficient existing transit ridership in the corridor and met low-cost parameters that warranted them for satisfactory ratings. FTA provided funding for Very Small Starts projects through PCGAs or single-year construction grants. For a project to receive funding, FTA needed to recommend it in the President's budget. Under SAFETEA-LU, any transit project that fit the definition of a new fixed-guideway capital project or extension to an existing fixed-guideway project was eligible to compete for funding under the Capital Investment Grant program, which provided funding for New Starts, Small Starts, and Very Small Starts projects. Such projects included: Commuter rail—systems that operate along electric- or diesel-propelled railways and provide train service for local, short-distance trips between a central city and adjacent suburbs. Heavy rail—systems that operate on electric railways with high-volume traffic capacity and are characterized by separated rights-of-way, sophisticated signaling, high platform loading, and high-speed, rapid-acceleration rail cars operating singly or in multi-car trains on fixed rails. Light rail—systems that operate on electric railways with light-volume traffic capacity and are characterized by shared or exclusive rights-of-way, low or high platform loading, single- or double-car trains, and overhead electric lines that power rail vehicles. Streetcars—systems that are similar to light rail, but distinguishable because they are usually smaller and designed for shorter routes, more frequent stops, and lower travel speeds. Bus rapid transit (BRT)—bus systems that vary in design, but generally include service enhancements to attract riders and provide transit-related benefits similar to those of rail transit, characterized by improvements such as dedicated bus lanes, improved stations, improved vehicles, off-vehicle fare collection, special branding of the service, and frequent service, among other things. As noted previously, this report describes the project development process in effect from October 2005 through March 2013, prior to the implementation of changes from MAP-21 (Pub. L. No. 112-141, § 20008, 126 Stat. 405 (2012)). FTA has not yet implemented these changes fully, but has issued some guidance on how they will affect the program. FTA plans to conduct additional rule-making on MAP-21 topics in the future, though FTA officials told us that there is no firm date on when the various policy changes will take effect.
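Under the SAFETEA-LU-era criteria described earlier, the Very Small Starts screen reduces to three numeric tests. The sketch below encodes those thresholds (more than 3,000 existing weekday riders, a total capital cost under $50 million, and a per-mile cost under $3 million excluding rolling stock); the function name and example inputs are illustrative assumptions, not FTA's.

```python
# Illustrative encoding of the three Very Small Starts features from the
# text. The thresholds come from the report; the names are assumptions.

def qualifies_as_very_small_start(existing_weekday_riders: int,
                                  total_capital_cost: float,
                                  cost_excluding_rolling_stock: float,
                                  corridor_miles: float) -> bool:
    return (existing_weekday_riders > 3_000
            and total_capital_cost < 50_000_000
            and cost_excluding_rolling_stock / corridor_miles < 3_000_000)

# A 6-mile corridor with 4,500 weekday riders, a $40 million total cost,
# and $16 million in non-rolling-stock costs would pass all three tests.
print(qualifies_as_very_small_start(4_500, 40_000_000, 16_000_000, 6.0))  # True
```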
FTA and its project-management oversight contractors are to provide oversight during the development process. FTA maintains its headquarters in Washington, D.C., with 10 regional offices throughout the continental United States, and employs about 500 employees to oversee and provide funding for a variety of transit programs and initiatives, including New Starts, Small Starts, and Very Small Starts projects. FTA and its contractors are to conduct oversight reviews throughout the project's planning and design as well as before FTA recommends a project for funding; these reviews are to include an evaluation of the project's risk, scope, cost, schedule, and project management plan, as well as the project sponsor's technical capacity and capability. Project sponsors also submit periodic updates to FTA on different aspects of major projects, such as the cost, expected schedule, and projected ridership of the project. In addition, proposed projects are required to complete the NEPA environmental review process in order to receive federal funding. Specifically, NEPA and implementing regulations require, among other things, an environmental review document with information on alternative courses of action and a consideration of the social, economic, and environmental impacts of the proposed transportation improvement. Among the 32 transit projects we reviewed, we found significant variation in the length of time sponsors of New Starts, Small Starts, and Very Small Starts projects needed to complete the development process (see fig. 2). Specifically, for the approved projects we examined, the development process ranged from 2 years for a 6.8-mile bus rapid transit project in Monterey, California, to over 14 years for a 30-mile commuter rail project in Denver, Colorado. The wide range of time needed to plan, design, and secure funding for these transit projects prior to construction is similar to the range of time generally considered necessary to plan and design other types of capital projects. For example, studies have suggested that the activities leading up to the construction of a highway may take from 1 year for a minor project to 7 to 14 years for a major project. The variation across projects is attributable, in part, to conditions and factors specific to each project. For some projects, the development process was lengthy. For example, the development process for the Eagle Commuter Rail Line project lasted over 14 years, beginning with the selection of the locally preferred alternative in 1997 and ending when the project was awarded an FFGA. Project sponsors stated that they did not pursue entry into preliminary engineering until after completing further investment studies that expanded the scope of the project in the early 2000s and securing funding through a local referendum in 2004. In addition, prior to entering the Capital Investment Grant pipeline, officials worked to finalize the technology for the project and secure approval from the project sponsor's board of directors. Officials stated that once the project was approved into the pipeline in 2009, the project progressed quickly through the remainder of the process, and the project sponsor secured an FFGA approximately two and a half years after acceptance into preliminary engineering.
For sponsors of the Mason Corridor project, which successfully completed a Small Starts bus rapid-transit line in Fort Collins, Colorado, the development process extended over 11 years as a result of challenges related to, among other things, securing funding for the project and obtaining agreement for the project among local stakeholders. However, for other projects, the length of the development process was comparatively shorter. For example, the development process for the 7.3-mile, 10-station Portland-Milwaukie light-rail New Starts project lasted about 4 years. In this case, project sponsors stated that they encountered no major obstacles during this time, though they noted that the process was extended by 6 months while the project sponsor identified additional local funds and reduced the project's scope in response to lower-than-anticipated federal funding for the project. We will discuss in more detail the general types of factors that affected the length of the development process later in the report. In general, larger projects, such as those that applied for funding as New Starts projects, required more time to progress through the development process than smaller projects, such as those that applied for funding as Small and Very Small Starts projects. On average, the development process was 17 months longer in duration for New Starts projects than for Small Starts projects and 12 months longer than for Very Small Starts projects. Specifically, according to our analysis of FTA and project sponsor data, we found that New Starts projects took about 3 to 14 years to complete the development process, Small Starts projects took about 3 to 12 years, and Very Small Starts projects took about 2 to 11 years. According to FTA officials, the length of the development process is unique to each project and generally depends on the project's specific characteristics, such as scope, corridor location, and availability of local funding, among other factors. Some of the variability across the New Starts, Small Starts, and Very Small Starts projects resulted from activities that took place later in the process, after the locally preferred alternative was selected and before the project was formally accepted into FTA's pipeline. The "pipeline" is a sub-component of the overall development process and is defined as the period of time between when a project is accepted into the preliminary-engineering (New Starts) or project-development (Small and Very Small Starts) phase and the final award of construction funding by FTA. Depending on the project, the time between the selection of the locally preferred alternative and entry into the pipeline took from as little as a few months to over a decade. According to project sponsors, activities during this period included revising the project scope, securing local funding, and preparing to enter into the project pipeline, among other things. Once a project had been accepted into the pipeline, we found that the length of the process was similar across all three project categories, generally lasting from 2 to 5 years and averaging about 3 years (see fig. 3). However, within each of the three types of projects, the length of time in the pipeline for an individual project varied widely depending on the project's specific characteristics.
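The durations above are straightforward date arithmetic over project milestones. The sketch below shows one way to compute overall and in-pipeline durations from milestone dates; the projects and dates are hypothetical stand-ins, not the FTA and project sponsor data we analyzed.

```python
# Illustrative duration arithmetic over hypothetical project milestones.
from datetime import date
from statistics import mean

projects = [
    # (name, LPA selected, entered pipeline, construction funding awarded)
    ("Project A", date(2005, 6, 1), date(2008, 3, 1), date(2011, 2, 1)),
    ("Project B", date(2007, 1, 1), date(2007, 9, 1), date(2010, 6, 1)),
]

def years_between(start: date, end: date) -> float:
    return (end - start).days / 365.25

pipeline = [years_between(entered, awarded)
            for _, _, entered, awarded in projects]
overall = [years_between(lpa, awarded)
           for _, lpa, _, awarded in projects]

print(f"pipeline: {min(pipeline):.1f} to {max(pipeline):.1f} years, "
      f"mean {mean(pipeline):.1f}")
print(f"overall: {min(overall):.1f} to {max(overall):.1f} years")
```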
Furthermore, as previously discussed, FTA officials stated it is difficult to characterize an "average" project, as each proposed transit project has its own unique project characteristics, physical-operating environment, and challenges. While there was substantial variation in the length of the overall development process within and across transportation modes, the variation in the Capital Investment Grant pipeline duration was similar across the modes of the projects we reviewed. (See table 1.) For example, light rail projects required 3 to 10 years to complete the development process, while commuter rail projects required 5 to 14 years to complete this process. As previously noted, a portion of the variability in the length of the development process was due to activities that occurred after a locally preferred alternative was selected but before FTA accepted the project into the pipeline. However, projects generally required 2 to 5 years to progress through the pipeline, regardless of the mode proposed by a project sponsor. Our review found that local factors, specific to each project, were generally the primary elements that determined the development process's length. Furthermore, our prior work has found that some of these factors—particularly obtaining project funding and community support—also commonly affect the length of time to complete other types of capital projects, including highway projects. Local Financing: Project sponsors noted that securing local funding, such as through local sales taxes and referendums, can be challenging. We previously found that local funding remains a substantial component of the overall funding for New Starts, Small Starts, and Very Small Starts projects. Sponsors of 17 of the 32 projects we reviewed stated that activities to secure local funding contributed to the length of the development process. For example, the project sponsor of the Mason Corridor BRT stated that securing local commitment was particularly challenging and extended the development process by about 7 years. The project sponsor selected bus rapid transit as the locally preferred alternative in 2000, but was unable to secure local funding until 2007. The time needed to identify and secure local funding was a significant factor in extending the development process over 11 years, as federal funding for the project was contingent upon the project's sponsor securing a local funding source. Similarly, the project sponsor of the Mid-Jordan light rail project in Salt Lake City, Utah, stated that securing local funding for the project delayed its development by about a year. According to the project's sponsor, while it selected the locally preferred alternative in 2005, the project sponsor did not enter into preliminary engineering until 2007, after the project had secured funding through a 2006 local referendum that increased the local sales tax. Local Community Support: The development process can also be extended as a result of efforts project sponsors undertake to secure local community approval for a project. Sponsors of 12 of the 32 projects we reviewed stated that community support for their project affected the length of the development process. For example, project sponsor officials who oversaw the development of a BRT project in Northern California stated that the major hurdle in the development of the project was overcoming some community opposition to the planned route that arose in 2008 after the selection of the locally preferred alternative.
Specifically, the alignment of the project was scheduled to go through a residential area where residents had historically opposed the location of a bus route. According to the project sponsor, the change provoked some community opposition to the project, and as a result, the alignment of the project was modified. Officials estimated that the development process was extended by about 3 to 6 months. However, community support can take a significant amount of time to develop and sustain. For example, officials from the project sponsor overseeing the construction of the High Capacity Transit Corridor heavy rail project in Honolulu worked for 10 years—starting before the locally preferred alternative was selected—to develop support for the project. Stakeholder Coordination: Coordinating with other local government agencies as well as other transportation providers can also be challenging and may affect the length of the development process. Specifically, for 8 of the 32 projects we reviewed, sponsors stated that the process of coordinating with other stakeholders extended the development process. For example, project sponsors overseeing the development of a light rail project in Charlotte, North Carolina, had to coordinate with local freight-rail operators and Amtrak to relocate service to minimize disruption to Amtrak's existing service. The project's design was modified as a result of these negotiations, which extended the project's development nearly 7 months, according to the project's sponsor. Project sponsor officials stated that they did not anticipate the complexity of the negotiations with the railroad operators and, accordingly, noted that earlier coordination with these operators might have accelerated the project timeline modestly. Environmental Review: The effect of a proposed project on the local environment, as well as steps required by law to mitigate environmental impacts from the proposed project, may also affect the duration of the development process. Specifically, under the NEPA environmental review process, project sponsors may measure the impact of different alternatives by the extent to which the alternative meets the project's purpose and need and is consistent with the goals and objectives of any local urban planning. The NEPA environmental review process also requires federal agencies to evaluate, and in some cases prepare detailed statements assessing, the environmental impact of and alternatives to major federal actions significantly affecting the environment. However, according to FTA officials and project sponsors, this process can be time-consuming. Specifically, both FTA officials and project sponsors for 8 of the 32 projects we reviewed noted that the required NEPA environmental review process may add time to the development process. For example, officials from one project sponsor stated that it took nearly two and a half years to complete the NEPA process. Scope and Configuration Changes: The factors described above may also result in revisions to the project's scope and configuration, which may in turn extend the development process. Revisions to a project's design ranged from minor alterations to pedestrian access to changes to a project's proposed alignment or service route. Project sponsors for 7 of the 32 projects we reviewed identified changes in scope as a factor affecting the development process, which sometimes resulted from one of the factors described above.
For example, for the Charlotte light rail project described above, in addition to the design changes arising from coordination with local stakeholders, recession-related reductions in the sales-tax revenue funding the project forced the project sponsors to further revise the project, thus extending the overall development process. According to some project sponsors we interviewed, FTA assistance is generally helpful in completing the development process, though they noted that the duration of some oversight reviews can be lengthy. We have previously found that FTA and its oversight approach have improved sponsors' management of their projects. However, for 12 of the 32 projects we reviewed, project sponsors stated that some types of oversight reviews can be time-intensive and extend the development process, sometimes by weeks or months. For example, project sponsors for 2 of the 32 projects we reviewed cited FTA's risk assessment as a requirement that affected the length of the development process. In addition, sponsors of 4 of the 16 Very Small Starts projects we reviewed speculated that some of the longer review times for smaller projects may have been a result of FTA's initial uncertainty in how it would implement the simplified review process for Very Small Starts projects. Conversely, one project sponsor noted that because much of the development process is driven by local factors, there was not much FTA could have done to accelerate the process. Finding the right balance between protecting federal investments through project management oversight and advancing projects through the development process is challenging. We have previously found that a balance exists between expediting project development and maintaining the rigor and accountability over the development of New Starts, Small Starts, and Very Small Starts projects. Furthermore, we have previously found that FTA's oversight efforts help the agency ensure that a federally funded transit project's scope, schedule, and cost are well developed and that the project's design and construction conform to applicable statutes, regulations, and guidance. We also previously found that reviews may take longer than expected, because project sponsors sometimes provide information that is incomplete or inaccurate, resulting in additional review time and delays. While FTA has acknowledged that the process can be lengthy and frustrating, FTA has taken some steps over the last several years to further streamline the development process. In its January 2013 final rule implementing some MAP-21 changes, FTA eliminated the requirement for the development of a baseline alternative, removing the requirement to compare a proposed project to a hypothetical alternative. Project sponsors for 3 of the 32 projects we reviewed stated that development of a baseline alternative was a time- and resource-consuming part of the development process. The January 2013 final rule also allows proposed projects to automatically receive a satisfactory rating on certain evaluation criteria based on the project's characteristics or the characteristics of the project corridor. For example, for Small Starts projects, if the operating and maintenance cost of the proposed project is less than 5 percent of the current system-wide operating and maintenance cost, the project qualifies for an automatic medium or better rating on its local financial commitment evaluation.
FTA officials told us that they plan to explore expanding the types of projects that may prequalify for automatic ratings. In September 2013, FTA introduced a new tool to assist project sponsors in estimating ridership on their projects. According to FTA officials, the tool, known as the Simplified Trips-on-Project Software (STOPS), may help to significantly shorten the time project sponsors need to develop ridership estimates. We will discuss ridership estimation for projects later in this report. FTA estimates these changes could reduce the development process time for projects by six months or more. As more MAP-21 requirements are formally implemented through the rule-making process, FTA may identify additional efficiencies in the development process. We found that capital cost estimates for New Starts, Small Starts, and Very Small Starts projects generally did not change substantially during the development process prior to the award of federal funding. Project sponsors told us that cost estimate changes occurred as a result of changing market conditions, FTA's application of additional project contingencies, and scope modifications, among other factors. However, most estimates did not change much from the initial capital cost estimated upon entry into the development process. The majority of the cost estimates of the projects we reviewed did not change significantly. For 23 of the 32 projects we reviewed, the original cost estimated upon entry into the Capital Investment Grant pipeline was within 10 percent of the final cost estimated prior to receiving federal funding. The original capital cost estimates for the remaining 9 projects ranged from 41 percent lower to 55 percent higher than the estimates used at the end of the development process. Of those projects, 4 were New Starts, 3 were Small Starts, and 2 were Very Small Starts projects. Figure 4 shows the range of cost changes for these projects. While the majority of the capital cost estimates did not change significantly during the development process, some estimates did change. However, we did not assess project sponsors' cost-estimating procedures, or related FTA policies, and how they might have contributed to the cost estimates that did change. As noted in our previous reports, federal agencies have experienced challenges in conducting cost estimating; some of the agencies' programs cost more than expected and deliver results that do not satisfy all requirements. Project sponsors may experience some of those same challenges. Reliable capital-cost estimates are necessary for the New Starts program for a number of reasons: to support decisions about funding one capital improvement project over another, to develop annual funding requests to Congress, to evaluate resource requirements at key project-development decision points, and to develop performance measurement baselines. We plan to examine FTA's and project sponsors' implementation of best practices for developing and managing capital program costs in future work on the Capital Investment Grant program. Our review identified a number of factors that led to cost estimate changes during the development process, as described below. In some cases, a combination of factors contributed to cost estimate changes. Economic and Market Conditions: Nine project sponsors stated that economic conditions, such as the recession from 2007 to 2009, likely increased competition for some of their contracts and created a bidding environment favorable to the sponsoring agencies for reducing costs.
For example, Livermore Amador Valley Transit Authority (located in Livermore, California) officials stated that because of the recession, companies submitted lower bids than initially anticipated on each of the four major construction contracts associated with the project. According to the officials, the project finished about $4.5 million under its approved budget, due in large part to the recession. The Utah Transit Authority (UTA) also stated that the recession affected the cost estimate over the development process. The officials said that the recession created competition that helped reduce the construction costs associated with these projects due to a reduced demand for construction and contracting services. However, these types of projects are also sensitive to changes in material prices. For example, right before FTA awarded the grant for the Mid-Jordan project, the cost of steel increased substantially, adding $1.5 million to the cost of the overall project. Contingency Levels: According to project sponsors, capital cost estimates for 6 projects increased as a result of FTA's risk and contingency reviews. For example, officials at the Valley Transportation Authority (VTA) (located in Santa Clara, California) stated that, as part of FTA's risk assessment review, the project-management oversight contractor recommended an increase in the contingency amount for the project by $100 million. VTA officials further stated that contingency amounts fluctuated throughout the development process as the design of the project was further refined. We have previously found that FTA's risk reviews have helped to improve project sponsors' controls over project costs and provided FTA with a better understanding of the issues surrounding projects, such as the potential problems that could lead to cost increases. FTA uses these risk reviews to analyze whether the project sponsor has included a sufficient level of contingency within its cost estimate. Scope and Configuration Changes: Project sponsors stated that the scope of 12 projects was reduced or increased significantly during the development process, changes that led to capital cost-estimate changes. For example, the Minneapolis Metropolitan Council stated that its project had a $24 million increase from preliminary engineering to FFGA, with $15.6 million of the increase in capital cost attributable to the inclusion of three at-grade infill stations. Refined Cost Estimate as Project Progressed: Because the majority of project estimates were developed in the planning stage, they will continue to change as part of the development process. For example, the Denver Regional Transportation District stated that the capital cost estimate for its project decreased because, as the project advanced through the project development process, the cost estimators had a better idea of the project's scope and design, which led to more accurate cost estimating. Generally, the more information that is known about a project, the more accurate and less variable the estimate is expected to be. We have previously found that cost estimates are based on many assumptions and are expected to change as project requirements are clarified. Project sponsors rely on support from metropolitan planning organizations (MPO) to develop their ridership forecasts. According to FTA officials, most travel-forecasting procedures are maintained by MPOs. The MPOs produce travel forecasts as they prepare transportation plans for metropolitan areas and assess the plans' conformity with federal air-quality requirements.
According to a Transportation Research Board (TRB) study on metropolitan travel forecasting, MPOs estimate future travel demand and analyze the impacts of alternative transportation investment situations using computerized travel-demand-forecasting models. According to this study, forecasts derived from travel models enable policy makers to make informed decisions on investments and policies relating to the transportation system. In a 2009 report, we found that these MPO travel models are complex and require inputs of extensive current information on roadway and transit system characteristics and operations, as well as current and forecasted demographic information. Creating and operating the models requires a high degree of technical training and expertise. However, we also found in 2009 that some MPOs face challenges in travel demand forecasting, including a lack of the technical capacity and data necessary to conduct the complex transportation modeling required to meet planning needs. The TRB also noted that MPOs face a much broader and more complex set of requirements and needs in their travel modeling. By and large, the New and Small Starts project sponsors whom we interviewed use the regional travel models of their local MPO to forecast ridership. Eight out of the nine New Starts project sponsors reported using MPO travel models. For example, officials from the Regional Transportation District (Denver, CO) said that the local MPO's (Denver Regional Council of Governments) approved regional travel-demand model is used to develop the Regional Transportation District's ridership forecasts. Officials from the Utah Transit Authority (Salt Lake City, Utah) also used a regional travel model maintained by the Wasatch Front Regional Council—the MPO for the Salt Lake City area. The model incorporates information from highway usage, rail, and other mass transit ridership, as well as transit rider surveys. However, one project sponsor, Sound Transit (Seattle, WA), used the incremental method to forecast its ridership. This method essentially uses actual transit ridership data, which include, among other data, observed origins and destinations of transit users and surveys of region-wide transit riders. Three out of four Small Starts project sponsors use travel models developed by the local MPO. For example, for the Portland, Oregon, Streetcar Loop project, the Tri-County Metropolitan Transportation District of Oregon (TRIMET) used travel forecasts prepared by the Portland Metropolitan Planning Organization. According to TRIMET officials, the model includes and is continually updated with employment and population data, as well as data on roadway and transit routes. According to these officials, the MPO travel model is one of the more sophisticated ridership models for an urban area. One project sponsor used a statewide travel model to forecast ridership instead of a local MPO travel model. According to the Montachusett Regional Transit Authority (Fitchburg, MA), it used a local travel model that was a component of the overall Massachusetts state travel model to forecast ridership. Project sponsors that use regional travel models to forecast transit ridership for New Starts and Small Starts projects are required to test the forecasts for accuracy against current data describing actual transit ridership, per FTA requirements.
FTA procedures permit Very Small Starts project sponsors to document current transit volumes in the project corridor and thereby avoid the need to prepare ridership projections for the project. As previously mentioned, according to FTA, one of the key requirements for a Very Small Starts project is that at least 3,000 existing transit riders will use the proposed project on an average weekday. Through this requirement, FTA can ensure that the proposed project will have sufficient ridership and produce enough travel benefits to be considered cost-effective, without requiring detailed travel forecasts or other complicated analysis to demonstrate that the project is justified. To adequately document the required number of existing transit riders, the sponsoring agency must conduct a detailed count of riders of existing public transportation in the project corridor and estimate the number of existing riders that will use the Very Small Starts project. FTA guidance requires that the counts be conducted on existing routes serving the project corridor that either (1) operate on the street segments where the Very Small Starts project will operate or (2) operate on parallel or nearby streets and will be rerouted to operate on the Very Small Starts street segments after the project is completed. For example, the Los Angeles County Metropolitan Transportation Authority (Metro) developed the ridership projections for the two Very Small Starts bus rapid transit projects we reviewed based on actual experience with another bus rapid transit service. According to Metro officials, in order to validate ridership projections, Metro used data collected by its Automatic Passenger Counter system on the existing bus rapid transit service. Metro officials told us that automatic passenger counters are installed on every bus in Metro's fleet to provide accurate passenger ridership data.

FTA has endorsed two alternative approaches for developing ridership forecasts that rely less on travel models and more on current data on actual travel patterns.

1. Incremental methods rely on rider survey data to describe current transit ridership patterns. These methods focus on the changes in transit ridership caused by proposed projects and by growth in population and employment. According to FTA officials, in corridors where transit is well established, incremental methods offer a quick, and possibly more reliable, ridership-forecasting approach.

2. The Simplified Trips-on-Project Software (STOPS) package, which FTA released in September 2013, is an approach that local agencies can use instead of, or in conjunction with, metro-area models. STOPS uses data from the Census Transportation Planning Package (currently from the 2000 decennial census) to replace some component models and provides already-calibrated models of transit-versus-auto choice.
For local agencies whose travel models are not ready to provide reliable forecasts for transit projects, STOPS offers an alternative that can avoid the need for project sponsors to perform data collection and model updates, processes that can sometimes take as long as 2 years to complete. Instead, using STOPS, developing ridership forecasts can take as little as 2 weeks.

We did not assess the adequacy of any of these travel models. However, the TRB study noted that there is no single approach to travel forecasting or set of procedures that is correct for all applications or all MPOs. Additionally, the study stated that FTA is to be commended for taking steps to ensure quality in the travel-forecasting methods used for major project planning. In particular, the study noted that FTA's initiatives to ensure the quality of New Starts ridership forecasting have been useful in uncovering weaknesses and that FTA has taken a strong role in improving modeling practice.

According to FTA officials, regardless of the approach project sponsors use to forecast ridership, all ridership forecasts have uncertainties. FTA officials identified at least two areas of uncertainty:

Data inputs that are forecasts. Travel models require information on population, employment, household incomes, transit service levels, transit fares, highway capacity, and other influences on travel patterns. Consequently, ridership forecasts for future years are grounded in predicted future conditions rather than data on actual conditions. For distant years and in rapidly growing metro areas, uncertainties in these predictions can be large.

Optimism. Sponsors and planners of new transit projects anticipate good outcomes. As a result, optimistic assumptions are common on such things as operating speeds, accessibility to stations, and the amount of new development within a given area. Travel models tend to compound this across-the-board optimism in many ways, leading to forecasts that may be much more optimistic than any one of the inputs, and this optimism may lead sponsors to reject less-than-hoped-for ridership projections and search for ways to increase the projections.

Some project sponsors we interviewed also identified the following challenges affecting ridership estimates:

The difficulty of developing accurate population and employment growth estimates.

The unpredictable effect of gas prices on ridership. For instance, higher prices encourage ridership, while a large decline in prices discourages it. One project sponsor told us that the economy has a significant effect on ridership; more specifically, the economy affects the price of gas and the cost of parking, which in turn affect ridership.

FTA has taken a number of actions to support the development of ridership forecasts. These include the following:

Funding. According to FTA officials, the agency contributes funding to state agencies and MPOs to support, among many other activities, the collection of travel data and the development of travel-forecasting procedures. MPOs receive annual funding from both the Federal Highway Administration and FTA, in addition to state matching funds. Nationally, FTA's share of this funding is about $129 million for fiscal year 2014.
Technical support. FTA told us that, since the inception of the Capital Investment Grant program, it has filled at least one staff position with a nationally recognized expert in travel forecasting who is responsible for assisting project sponsors in the development of travel forecasts and for oversight of Capital Investment Grant project ridership forecasts. FTA has also allocated approximately two full-time staff to oversight activities. These activities include the following:

Technical assistance in travel-forecasting methods development. According to FTA, at the invitation of local agencies, FTA staff provide comments, participate in peer-review panels, and engage in ongoing discussions with local project sponsors and their contractors during the development of new travel-forecasting procedures for metropolitan areas.

Early reviews of methods and assumptions. FTA officials also stated that the agency encourages project sponsors and their contractors to meet with FTA staff early in the preparation of forecasts in support of proposed projects. These officials said that this early engagement identifies potential problems with forecasting methods and planning assumptions at a point when the issues can be dealt with efficiently, essentially avoiding late surprises after project sponsors have finished their forecasts.

Reviews of final travel forecasts. Before a proposed project is approved for entry into preliminary engineering (New Starts) or project development (Small Starts), FTA staff review the travel forecasts submitted by project sponsors in support of these projects. Staff document any significant uncertainties found in the forecasts and make recommendations to FTA's Office of Planning and Environment regarding acceptance of the forecasts as sufficiently reliable for the agency's use in project evaluation and rating.

We interviewed 13 New Starts and Small Starts project sponsors, and a majority (7) said that FTA's technical assistance, which includes reviewing ridership forecasts, was generally helpful. For example, an official from the Metropolitan Council (Minneapolis and St. Paul, MN) told us that he has found it useful that FTA reviews the Council's ridership forecasts for different projects. In particular, having FTA ask probing questions about forecasts gives project sponsors a quality check on the veracity of their ridership-forecast numbers. In another example, officials from Valley Metro (Phoenix, AZ) told us that FTA provided them assistance for 9 months as the ridership forecasting was being developed, assistance that helped them deliver a credible document for evaluation and rating. Furthermore, another project sponsor said that the FTA team that reviewed its ridership projections was both thorough and timely with its reviews.

Requirement for testing of travel models. In 2007, FTA required that local travel models used to forecast transit ridership for New Starts and Small Starts projects be tested for accuracy against current data describing actual transit ridership. According to FTA, the requirement ensures that the local methods used to prepare ridership forecasts submitted to FTA have been demonstrated to have a basic grasp of current local transit ridership. FTA officials said that the 2013 policy guidance on the Capital Investment Grant program continues this requirement.

We provided DOT with a draft of this report for review and comment. DOT provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to interested congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix IV.

The Moving Ahead for Progress in the 21st Century Act (MAP-21) mandated GAO to biennially review FTA's processes and procedures for evaluating, rating, and recommending new fixed-guideway capital projects and core capacity improvement projects and the Department of Transportation's (DOT) implementation of such processes and procedures. In this report, we identify (1) the extent to which the length of the development process varies across New Starts, Small Starts, and Very Small Starts projects and what factors affect the length of this process; (2) the extent to which capital cost estimates for New Starts, Small Starts, and Very Small Starts projects change throughout the development process, and what factors contribute to the changes; and (3) how project sponsors forecast ridership, including any support that FTA provides in helping them develop these forecasts.

To address all of these objectives, we reviewed and summarized relevant laws, such as the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), FTA circulars and policy guidance, as well as our body of work on the Capital Investment Grant program. To determine the extent to which the length of the development process varies across New Starts, Small Starts, and Very Small Starts projects and the extent to which capital cost estimates for these projects change throughout this process, we collected and analyzed project milestone data and cost-estimate data (see apps. II and III) from FTA staff and from FTA Annual Reports on Funding Recommendations for fiscal years 2008 through 2014. We included all 32 New Starts, Small Starts, and Very Small Starts projects that had been (1) approved to enter preliminary engineering following SAFETEA-LU (October 2005) and (2) awarded a grant agreement prior to the implementation of MAP-21 (March 2013). To verify and assess the reliability of the data compiled by FTA, we compared them with project data we received from the project sponsors we interviewed. We resolved any data discrepancies with FTA headquarters staff, and we determined that the data were sufficiently reliable for the purposes of this report.

To provide insight on the factors contributing to projects' timeline trends and challenges and to project cost-estimate changes, and to obtain information on how ridership forecasts are developed, we interviewed 23 project sponsors representing 30 of the 32 projects. Table 2 lists the New Starts, Small Starts, and Very Small Starts project sponsors we interviewed for our review. The information obtained from these interviews is not generalizable to all New Starts, Small Starts, and Very Small Starts projects. We also interviewed FTA officials to determine the support that FTA provides to help project sponsors develop ridership forecasts. We conducted this performance audit from August 2013 to May 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Key Milestone Dates and Cost Estimates for Selected New Starts Projects Reviewed by GAO (Dollars in Millions)

Appendix III: Key Milestone Dates and Cost Estimates for Selected Small and Very Small Starts Projects Reviewed by GAO (Dollars in Millions)

In addition to the contact named above, Brandon Haller (Assistant Director), Antoine Clark, Catherine Colwell, Dwayne Curry, Geoffrey Hamilton, Terence Lam, Jason Lee, Karen Richey, Amy Rosewarne, Kelly Rubin, and Matt Voit made key contributions to this report.
FTA provides funds to transit project sponsors through the Capital Investment Grant program to build new fixed-guideway transit systems or extensions to existing systems. This program funds New, Small, and Very Small Starts projects; the funding categories are based partly on the project's total estimated cost. For example, for New Starts, project capital costs exceed $250 million or the program contribution exceeds $75 million; for Small Starts, capital costs are less than $250 million and the program contribution is less than $75 million. The pre-construction development process for these projects includes various steps between the time when a project sponsor identifies the project to be funded and the formal award of FTA construction funds. During this process, the scope, capital cost, and ridership estimates can change. The Moving Ahead for Progress in the 21st Century Act mandated that GAO biennially review these types of projects.

This report describes (1) the length of the development process across these projects and the factors affecting the length, (2) capital cost-estimate changes throughout this process and the factors contributing to the changes, and (3) how project sponsors forecast ridership, including support that FTA provides. GAO analyzed pertinent laws, regulations, agency guidance, and FTA data for the 32 New, Small, and Very Small Starts projects initiated and funded from 2005 to 2013, prior to recent changes in program processes. GAO interviewed FTA staff and project sponsors. DOT reviewed a draft of this report and provided technical comments, which were incorporated as appropriate.

For the 32 New Starts, Small Starts, and Very Small Starts projects funded from 2005 to 2013 that GAO reviewed, the length of the development process varied substantially, from as little as 2 years to as long as 14 years, based on GAO's analysis of data from the Federal Transit Administration (FTA) and project sponsors. GAO found that the development process took 3 to 14 years to complete for New Starts projects, 3 to 12 years for Small Starts projects, and 2 to 11 years for Very Small Starts projects. The length of the process is generally driven by factors that are often unique to each project, including (1) the extent of local planning activities prior to formal approval for funding, (2) the extent and availability of local political and financial support, and (3) the extent of FTA oversight activities. For example, sponsors of 17 of the 32 projects GAO reviewed stated that activities to secure local funding contributed to the length of the development process. FTA has taken some steps to streamline this process. For example, in January 2012, FTA eliminated the requirement for the development of a hypothetical alternative that served as a basis of comparison to evaluate a proposed project.

GAO found that capital cost estimates for New Starts, Small Starts, and Very Small Starts projects generally did not change substantially during the development process prior to the award of federal funding. For 23 of the 32 projects GAO reviewed, the final cost estimate prior to receiving federal funding was within 10 percent of the original cost estimate. Estimates for the remaining 9 projects ranged from 41 percent lower to 55 percent higher than the estimates used at the end of the development process. Several project sponsors told GAO that, when changes did occur, they resulted from changing market conditions and from FTA's recommending that sponsors increase project costs to cover unforeseen events, among other factors.
For example, officials at the Valley Transportation Authority, located in Santa Clara, California, stated that FTA recommended that it increase the project's contingency by $100 million to cover unforeseen events.

New Starts and Small Starts project sponsors whom GAO interviewed generally forecast ridership using regional travel models prepared by metropolitan planning organizations (MPO). Specifically, 8 of the 9 New Starts project sponsors and 3 of the 4 Small Starts project sponsors GAO spoke with use these travel models. For example, for a Portland, Oregon, streetcar project, the project sponsor used travel forecasts prepared by the Portland MPO. Of the other project sponsors, the remaining New Starts sponsor uses actual transit-ridership data from surveys of regional transit riders, and the remaining Small Starts sponsor uses a statewide travel model. On the other hand, FTA procedures permit sponsors of Very Small Starts projects to demonstrate, through a detailed count of riders of existing public transportation in the project's corridor, that the proposed project will serve at least 3,000 transit riders on an average weekday. FTA has taken a number of actions to support the development of ridership forecasts. These include, among other actions, providing funding to state agencies and MPOs to help them collect travel data and develop forecasting procedures and providing technical support, such as reviews of final forecasts. GAO interviewed 13 New Starts and Small Starts project sponsors, and most said that FTA's technical assistance, which includes reviewing the ridership forecasts, was generally helpful.
Human trafficking occurs worldwide and often involves transnational criminal organizations, violations of labor and immigration codes, and government corruption. Although their circumstances vary, fraud, force, or coercion typically distinguishes trafficking victims from people who are smuggled. Moreover, most trafficking cases follow the same pattern: people are abducted or recruited in the country of origin, transferred through transit regions, and then exploited in the destination country. People may also be trafficked internally, that is, within the borders of their own country. Trafficking victims include agricultural workers who are brought into the United States, held in crowded, unsanitary conditions, threatened with violence if they attempt to leave, and kept under constant surveillance; child camel jockeys in Dubai who are starved to keep their weight down; Indonesian women who may be drawn to a domestic service job in another country, are not paid for their work, and are left without the resources to return home; child victims of commercial sexual exploitation in Thailand; and child soldiers in Uganda.

During the 1990s, the U.S. government began drawing attention to the problem of human trafficking before various international forums and gatherings. In 1998, a presidential memorandum called on U.S. government agencies to combat the problem through prevention of trafficking, victim assistance and protection, and enforcement. This approach came to be known as "the three p's": prevention, protection, and prosecution.

In 2000, Congress enacted the Trafficking Victims Protection Act (TVPA) and has since reauthorized and amended the act twice. The act defines victims of severe forms of trafficking as those persons subject to (1) sex trafficking in which a commercial sex act is induced by force, fraud, or coercion, or in which the person induced to perform such acts is under age 18, or (2) the recruitment, harboring, transportation, provision, or obtaining of a person for labor or services, through the use of force, fraud, or coercion, for the purpose of subjection to involuntary servitude, peonage, debt bondage, or slavery. The TVPA does not specify movement across international boundaries as a condition of trafficking; it does not require the transportation of victims from one locale to another. Under the TVPA, an alien who is identified as a victim of a severe form of trafficking in the United States and meets additional conditions is eligible for special benefits and services.

The TVPA, as amended, provides a framework for current U.S. antitrafficking efforts. It addresses the prevention of trafficking, protection and assistance for victims of trafficking, and the prosecution and punishment of traffickers. The TVPA also laid out minimum standards for eliminating trafficking to be used in the Secretary of State's annual assessment of foreign governments' antitrafficking efforts. It authorized U.S. foreign assistance for efforts designed to meet these standards and established sanctions (withholding nonhumanitarian, nontrade-related assistance) that could be applied against governments of countries not in compliance with the standards and not making significant efforts to bring themselves into compliance.

Responsibility for implementing U.S. government antitrafficking efforts domestically and abroad is shared by the Departments of State, Justice, Labor, Health and Human Services (HHS), Homeland Security (DHS), and the U.S. Agency for International Development (USAID). Each agency addresses one or more of the three prongs of the U.S.
antitrafficking approach. Some agencies have more responsibility for implementing international trafficking efforts than others. Figure 1 shows the agencies and task forces with responsibilities for antitrafficking efforts, including several coordinating mechanisms the government has created for these efforts. The TVPA directed the President to establish the Interagency Task Force to Monitor and Combat Trafficking in Persons, comprised of various agency heads and chaired by the Secretary of State, to coordinate the implementation of the act, among other activities. Furthermore, the TVPA authorized the Secretary of State to create the Department of State's Office to Monitor and Combat Trafficking in Persons (Trafficking Office) to provide assistance to the task force. Subsequently, TVPA 2003 established the Senior Policy Operating Group, which addresses interagency policy, program, and planning issues regarding TVPA implementation. TVPA 2003 directed the Director of the Office to Monitor and Combat Trafficking in Persons to serve as chair of the group. In addition, the Intelligence Reform and Terrorism Prevention Act of 2004, passed in December 2004, established the Human Smuggling and Trafficking Center, to be jointly run by the Departments of State, Justice, and Homeland Security. This center houses several agency data systems to collect and disseminate information to build a comprehensive picture of certain transnational issues, including human trafficking.

Since 2001, the U.S. government has obligated approximately $375 million for international projects to combat trafficking in persons. For example, in fiscal year 2005, the U.S. government supported more than 265 international antitrafficking programs in about 100 countries. State, Labor, and USAID are the three largest providers of international assistance to target trafficking (see table 1). During an address to the U.N. General Assembly in September 2003, the President declared trafficking in persons a humanitarian crisis and announced that the U.S. government was committing $50 million to support organizations active in combating sex trafficking, sex tourism, and the rescue of women and children. In 2004, eight priority countries for the initiative were identified: Brazil, Cambodia, India, Indonesia, Mexico, Moldova, Sierra Leone, and Tanzania. The initiative centered on developing the capacity of each country to rescue women and children; to provide emergency shelters, medical treatment, rehabilitation services, vocational training, and reintegration services; and to conduct law enforcement investigations and prosecutions.

Existing estimates of the scale of trafficking at the global level are questionable, and improvements in data collection have not yet been implemented. The accuracy of the estimates is in doubt because of methodological weaknesses, gaps in data, and numerical discrepancies. For example, the U.S. government's estimate was developed by one person who did not document all of his work, so the estimate may not be replicable, casting doubt on its reliability. Moreover, country data are generally not available, reliable, or comparable. There is also a considerable discrepancy between the numbers of observed and estimated victims of human trafficking. The U.S. government has not yet established an effective mechanism for estimating the number of victims or for conducting ongoing analysis of trafficking-related data that reside within various government agencies.
While trafficking data collection in the United States is fragmented, the database created by the International Organization for Migration (IOM) provides a useful, systematic profile of victims and traffickers across countries. The U.S. government and three international organizations gather data on human trafficking, but methodological weaknesses affect the accuracy of their information. Efforts to develop accurate trafficking estimates are further frustrated by the lack of country-level data. Finally, there is a considerable discrepancy between the numbers of observed and estimated victims of human trafficking.

The U.S. government and three international organizations have gathered data on global human trafficking. However, these organizations face methodological weaknesses and institutional constraints that cast doubt on the accuracy of the collected data. The four organizations with databases on global trafficking in persons are the U.S. government, the International Labor Organization (ILO), IOM, and the United Nations Office on Drugs and Crime (UNODC). The U.S. government and ILO estimate the number of victims worldwide, IOM collects data on victims it assists in the countries where it has a presence, and UNODC traces the major international trafficking routes of victims. The databases provide information on different aspects of human trafficking, since each organization analyzes the problem based on its own mandate. For example, IOM looks at trafficking from a migration and rights point of view and ILO from the point of view of forced labor. Although the databases use different methodologies for data collection and analysis and have various limitations, some common themes emerge. For example, the largest percentage of estimated victims is trafficked for sexual exploitation. In addition, women constitute the majority of estimated victims. However, the estimated percentage of victims who are children ranges from 13 to 50 percent. Table 2 describes the victim profiles that emerge from the data.

Methodological weaknesses and limitations cast doubt on the U.S. estimate of global trafficking flows. We identified several important limitations:

Estimate not entirely replicable. The U.S. government agency that prepares the trafficking estimate is part of the intelligence community, which makes its estimation methodology opaque and inaccessible. During a trafficking workshop in November 2005, the agency provided a one-page overview of its methodology, which allowed for only a very limited peer review by the workshop participants. In addition, the U.S. government's methodology involves interpreting, classifying, and analyzing data, work that was performed by one person who did not document all of it. Thus the estimate may not be replicable, which raises doubts about its reliability.

Estimate based on unreliable estimates of others. The biggest methodological challenge in calculating an accurate number of global trafficking victims is how to transition from reported to unreported victims. The U.S. government does not directly estimate the number of unreported victims but relies on the estimates of others, adjusting them through a complex statistical process. It essentially averages the various aggregate estimates of reported and unreported trafficking victims published by nongovernmental organizations (NGO), governments, and international organizations, estimates that are themselves not reliable or comparable due to different definitions, methodologies, data sources, and data validation procedures.
Moreover, the methodologies used to develop these estimates are generally not published or available for professional scrutiny.

Internal trafficking data not included. The U.S. government does not collect data on internal trafficking, which could be a significant problem in countries such as India, where forced labor is reportedly widespread. According to the 2005 Trafficking in Persons Report, many nations may be overlooking internal trafficking or forms of labor trafficking in their national legislation. In particular, what is often absent is involuntary servitude, a form of severe trafficking. The report also noted that the TVPA specifically includes involuntary servitude in the U.S. definition of severe forms of trafficking. Nonetheless, the U.S. government estimate does not account for it, because the government collects data only on trafficking that crosses national borders.

Estimate not suitable for analysis over time. The U.S. government methodology provides an estimate of trafficking flows for a 1-year period and cannot be used to analyze trafficking over time to determine whether it is increasing, decreasing, or staying the same. Therefore, the estimate cannot help in targeting resources and evaluating program effectiveness.

Methodological weaknesses also raise questions about the accuracy of trafficking information from international organizations. For example, UNODC's methodology attempts to identify global trafficking flows across international borders. It tracks and totals the number of different source institutions that have reported a country as having a trafficking incident. However, whether a trafficking incident involved 5 or 500 victims is irrelevant to UNODC's methodology. In addition, by classifying countries into five categories based on the frequency of reporting, UNODC might rank a country very high as, say, a destination country due to the country's heightened public awareness, transparency, and recognition of trafficking as a serious crime. In contrast, ILO's methodology provides a global estimate of trafficking victims. However, it attempts to overcome the gap between reported and unreported victims using an extrapolation that is based on assumptions and observations that have not been rigorously tested and validated. Moreover, the global databases are based on data sources drawn from reports from a limited number of countries or restricted geographically to specific countries. For example, IOM's data come only from countries where IOM has a presence, which are primarily countries of origin, and the organization is constrained by issues related to the confidentiality of victim assistance. Finally, although the three international organizations are trying to collaborate in the area of data collection and research, they are having difficulty mobilizing the necessary resources for their efforts. This fragmentary approach therefore prevents the development of a comprehensive and accurate view of global trafficking. (See app. II for additional information about the different methodologies, analytical assumptions, data validation, and data sources used by the international organizations and the U.S. government.)

The quality of existing country-level data varies due to limited availability, reliability, and comparability. Table 3 summarizes the main limitations of trafficking data identified in our review of the literature on human trafficking.

The availability of data is limited by several factors.
Trafficking victims are a hidden population because trafficking is a clandestine activity, similar to illegal migration and labor exploitation. This limits the amount of data available on victims and makes it difficult to estimate the number of unreported victims. Trafficking victims are often in a precarious position and may be unwilling or unable to report to, or seek help from, relevant authorities. Moreover, HHS reported that victims live daily with inhumane treatment, physical and mental abuse, and threats to themselves or their families back home. Victims of human trafficking may fear or distrust the government and police because they are afraid of being deported or because they come from countries where law enforcement is corrupt and feared. In such circumstances, reporting to the police or seeking help elsewhere requires courage and knowledge of local conditions, which the victims simply might not have. In addition, some governments give low priority to human trafficking violations and do not systematically collect data on victims.

In most countries where trafficking data are gathered, women and children are seen as victims of trafficking, and men are predominantly seen as migrant workers, reflecting a gender bias in existing information. Men are also perceived as victims of labor exploitation, which may be seen not as a crime but rather as an issue for trade unions and labor regulators. Thus, data collection and applied research often miss the broader dimensions of trafficking for labor exploitation. For example, the demand for cheap labor, domestic service, slavery, and child labor has not been sufficiently investigated as a factor affecting the scale of human trafficking.

The reliability of existing data is also questionable. In developing countries, which are usually countries of origin, capacity for data collection and analysis is often inadequate. In countries of destination, human trafficking convictions are often based on victim testimony. Moreover, estimates of trafficking are extrapolated from samples of reported cases, which are not random. Therefore, it is difficult to determine how representative those cases are of the general population of all human trafficking victims and what biases have been introduced.

Data quality is further constrained by limited data comparability. Countries and organizations define trafficking differently. A practice that is considered trafficking in one country may be considered culturally and historically acceptable in another. For example, in West African countries, people, in particular children, commonly move within and across borders in search of work and are placed in homes as domestic servants or on farms and plantations as laborers. Due to economic deprivation and an abundant supply of children from poor families, a child may be sold by his or her parents based on promises of job training and a good education or may be placed with a creditor as reimbursement. The incompatibility of definitions for data collection is exacerbated by the intermingling of trafficking, smuggling, and illegal migration in official statistics. Countries have used different definitions regarding the scope and means of trafficking; the activities involved, such as recruitment, harboring, transportation, and receipt of victims; the purpose; the need for movement across borders; and the consent of victims. For example, there are discrepancies in the collection of data on sex trafficking.
Under the TVPA, participation of children under the age of 18 in commercial sex is a severe form of trafficking. However, some countries define children as people under the age of 16, and, according to U.S. government officials, this difference has implications for how countries collect data on children engaged in commercial sex. Finally, data are often program- and institution-specific and focus on the needs of individual agencies. Estimates may be developed for the purpose of advocacy. For example, some NGOs record all victims based on the first contact made with them, regardless of whether they subsequently meet the criteria for receiving assistance such as legal counsel, shelter, financial support, or support during a trial, while others record only those who receive assistance. Data are also collected for operational purposes within criminal justice systems, and individual authorities use their own definitions and classifications.

There is a significant discrepancy between the number of estimated victims and the number of observed victims, which include officially reported, certified, registered, and assisted victims. For example, the U.S. government estimated that the number of people trafficked into the United States ranged from 14,500 to 17,500 in 2003. Despite concerted U.S. government efforts to locate and protect victims, the government certified fewer than 900 victims in the United States during the 4½ years between March 2001 and September 2005. The June 2006 Attorney General's Annual Report to Congress on U.S. Government Activities to Combat Trafficking in Persons for Fiscal Year 2005 indicates that the 14,500 to 17,500 figure may be overstated because it was an early attempt to quantify a hidden problem. The number of certified victims may not reflect the total number of victims identified. For example, some alien victims need not seek certification because they can remain in the United States through family connections. The Justice Department indicates that further research is under way to determine a more accurate figure based on more advanced methodologies and a more complete understanding of the nature of trafficking. Similarly, the U.S. government estimated that a total of 600,000 to 800,000 people were trafficked across transnational borders worldwide annually. Yet, since 1999, fewer than 8,000 victims in 26 countries have received IOM assistance.

Organizations may also publish estimates that incorrectly characterize the data reported by others. For example, in a 2001 report, a Cambodian nongovernmental organization stated that there were 80,000 to 100,000 trafficked women and children nationwide. However, this statement is based on a report that discusses 80,000 to 100,000 sex workers in the country, who may or may not be trafficking victims. Moreover, the latter report uses two other sources that did not corroborate this estimate.

Several factors could explain the differences between the numbers of observed and estimated victims, but it is unclear to what extent any single factor accounts for the differences. For example, the 2005 Trafficking in Persons Report cited cases in which victims reported by law enforcement were deported before they reached an assistance agency. In addition, agencies may not make sufficient efforts to identify and help victims or may be constrained by certain assistance requirements. Victims assisted by IOM missions are those willing to go back to their country of origin. However, if there are other opportunities available in the country of destination, such as receiving a residence permit, victims may not be willing to accept IOM assistance. In the United States, one requirement for receiving official certification is that victims of human trafficking must be willing to assist with the investigation and prosecution of trafficking cases. According to an HHS official, this requirement may work to limit the number of recorded victims. Given the weaknesses in data and methods, it also cannot be dismissed that the estimates may overstate the magnitude of human trafficking.
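The size of the gap between estimated and observed victims can be made concrete with a rough calculation using the figures cited above. The sketch below is illustrative only; in particular, the assumption that the 2003 annual estimate applied uniformly across the March 2001 to September 2005 period is ours, made solely for this illustration.

```python
# Rough comparison of estimated vs. certified victims (figures cited above).
# Assumption (for illustration only): the 2003 estimate of 14,500-17,500
# victims trafficked into the United States per year applied uniformly
# across the roughly 4.5-year certification period.

est_low = 14_500     # low end of the annual estimate
years = 4.5          # March 2001 through September 2005
certified = 900      # upper bound on victims certified in that period

implied = est_low * years    # ~65,250 victims implied by the low estimate
share = certified / implied  # certified victims as a share of that total
print(f"Implied victims (low estimate): {implied:,.0f}")
print(f"Certified share: under {share:.1%}")  # under ~1.4 percent
```

Under these assumptions, even the low end of the estimate implies a victim population roughly 70 times the number of victims actually certified during the period, which is the discrepancy described above.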
The U.S. government has not yet established an effective mechanism for estimating the number of victims or for conducting ongoing analysis of trafficking-related data that reside within various government agencies. The TVPA 2005, passed in January 2006, called on the President, through various agencies, to conduct research into the development of an effective mechanism for quantifying the number of victims of trafficking on a national, regional, and international basis. Since 2005, the U.S. government has funded a project to develop a transparent methodology for estimating the number of men, women, and children trafficked into the United States for purposes of sex or labor trafficking. To date, the modeling has been limited to 10 countries of origin (Colombia, Venezuela, Ecuador, Peru, El Salvador, Guatemala, Nicaragua, Mexico, Haiti, and Cuba) and one arrival point in the United States, the southwest border. The firm developing this methodology is in the early stages of the effort and plans to continue to refine and test its approach; thus, it is too early to assess the methodology. The U.S. government also recently funded an outside contractor to improve future global trafficking estimates. To date, however, the U.S. government has funded few projects to improve estimates of trafficking on a regional or international basis.

In addition, the Intelligence Reform and Terrorism Prevention Act of 2004 established the Human Smuggling and Trafficking Center to serve, among other responsibilities, as a clearinghouse for all relevant information and to convert that information into tactical, operational, and strategic intelligence to combat trafficking in persons. The center collects trafficking information from U.S. government agencies and sends it to other agencies that have an interest in it for law enforcement purposes. Center officials stated that they receive and collate trafficking information from federal government agencies but do not systematically analyze it and lack the human and financial resources to do so.

In addition, we identified eight entities within the federal government that possess some information related to domestic and international trafficking. The Justice Department alone has four different offices that possess domestic trafficking information. None of the federal agencies systematically shares its international data with the others, and no agency analyzes the existing data to help inform international program and resource allocation decisions. (See app. III for information on the type of trafficking data available within agencies.) Furthermore, based on our analysis of agency data sets, we found that federal agencies do not have data collection programs that could share information or include common data fields.
As a result, it is difficult to use existing agency trafficking data to compile a profile of trafficking victims. In previous work, we have reported that it is good practice for agencies to establish compatible policies, procedures, standards, and data systems to enable them to operate across agency boundaries. Although some information exists, agencies were unable to provide an account of the age, gender, type of exploitation suffered, and origin and destination of victims trafficked into the United States. Moreover, some agencies with law enforcement missions were generally unwilling to share demographic trafficking data with us and would release statistics for law enforcement purposes only. The U.S. National Central Bureau was able to extract limited profile information from its case management system.

While the information on trafficking victims collected by U.S. agencies is fragmented, the database created by IOM allows for the development of a useful, in-depth profile of traffickers and their victims across 26 countries. Although IOM's data are limited to countries where IOM provides direct assistance to trafficking victims, have a short history of about 7 years, and may not be easily generalizable, the IOM database is the only one of the four that contains data obtained directly from victims. Drawing from more than 7,000 cases, it includes information about the victims' socioeconomic profile, movement, exploitation, abuse, and duration of trafficking. Moreover, the database tracks victims from the time they first requested IOM assistance, through their receipt of assistance, to their subsequent return home. Importantly, it also tracks whether victims were subsequently retrafficked. These factors provide information that could assist U.S. efforts to compile better data on trafficking victims.

As shown in figure 2, the victims IOM assisted were often enticed by a trafficker's promise of a job; most believed they would be working in legitimate professions; and many were subjected to physical violence. In addition, based on cases with available data on the duration of the trafficking episode, the average stay in the destination country before seeking help from IOM was more than 2 years. Most of the sexual exploitation victims worked 7 days a week and retained a small fraction of their earnings. Moreover, about 54 percent of the victims paid a debt to the recruiter, transporter, and/or other exploiters, and about 52 percent knew they had been sold to other traffickers at some stage of the trafficking process. The database also contains information about the recruiters' and traffickers' networks, nationality, and relationship to victims. It thus provides insights into the traffickers and the mechanisms they used to identify and manipulate their victims. For example, in 77 percent of the cases, contact with the recruiter was initiated based on a personal relationship. Moreover, the correlation between the nationality of the recruiter and that of the victim was very high (0.92). Trafficking networks may have a complex organization, with the recruiter being only one part of the whole system. The organization may involve investors, transporters, corrupt public officials, informers, guides, debt collectors, and money launderers. The extent of information on victims and traffickers in the database improves the overall understanding of the broader dimensions of trafficking.
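As an illustration of what "common data fields" across agencies might look like, the sketch below defines a minimal shared victim-record structure. This is a hypothetical design of ours, not a schema used by any agency; the fields are drawn from the profile elements discussed above (age, gender, type of exploitation, origin, and destination) and from the kinds of follow-up data IOM tracks.

```python
# Hypothetical shared record for trafficking-victim data; illustrative only.
# Fields mirror the profile elements discussed above, not any agency schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VictimRecord:
    case_id: str                      # agency-assigned identifier
    reporting_agency: str             # e.g., "DOJ", "HHS" (hypothetical values)
    age: Optional[int]                # age at identification, if known
    gender: Optional[str]
    exploitation_type: Optional[str]  # e.g., "sex", "labor"
    country_of_origin: Optional[str]
    destination_country: Optional[str]
    assistance_received: bool = False # whether the victim received services
    retrafficked: Optional[bool] = None  # IOM-style follow-up field

# With a common structure, records from different agencies could be pooled
# to compile the victim profile that agencies could not provide.
records = [
    VictimRecord("A-001", "HHS", 17, "female", "sex", "Moldova", "United States"),
    VictimRecord("B-014", "DOJ", 32, "male", "labor", "Guatemala", "United States"),
]
by_type = {}
for r in records:
    by_type[r.exploitation_type] = by_type.get(r.exploitation_type, 0) + 1
print(by_type)  # {'sex': 1, 'labor': 1}
```

The design choice here is the point: once every agency populates the same fields, aggregation across agency boundaries becomes a simple query rather than a reconciliation exercise.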
While federal agencies have undertaken activities to combat trafficking in persons, the U.S. government has not developed a coordinated strategy to combat human trafficking abroad, as called for in a presidential directive. The U.S. government has established an interagency task force and a working group on human trafficking, which have focused on complying with U.S. policy on prostitution and on avoiding duplication of effort; however, they have not developed and implemented a systematic way for agencies to clearly delineate roles and responsibilities in relation to one another, identify targets of greatest need, and leverage overseas activities to achieve greater results. In addition, the governmentwide task forces have not developed measurable goals and associated indicators to evaluate the overall effectiveness of efforts to combat trafficking abroad, nor have they outlined an evaluation plan to gauge results. As a result, the U.S. government is unable to determine the effectiveness of its efforts abroad or to adjust its assistance to better meet needs.

Despite the mandate to evaluate progress, the Interagency Task Force has not developed a plan to evaluate overall U.S. government efforts to combat trafficking abroad. In the TVPA of 2000, Congress called on the Interagency Task Force to measure and evaluate the progress of the United States and other countries in preventing trafficking, protecting and providing assistance to victims, and prosecuting traffickers. However, the Task Force has not developed an evaluation plan or established governmentwide performance measures against which the U.S. government can evaluate the overall impact of its international antitrafficking efforts. In previous work, we have reported that monitoring and evaluating efforts can help key decision makers within agencies, as well as clients and stakeholders, identify areas for improvement. Further, in its 2005 annual assessment of U.S. government activities to combat human trafficking, the Department of Justice recommended that the U.S. government begin measuring the impact of its antitrafficking activities. Although the project-level documentation that we reviewed from agencies such as USAID and the Department of Labor included measures to track activities on specific projects, officials stated that USAID's agency-level aggregate indicators are intended as a way of communicating agency outputs, not as a means of evaluating the effectiveness of programs. In addition, according to the 2005 State Department Inspector General report, State's Trafficking Office needs to better identify relevant, objective, and clear performance indicators to compare progress in combating trafficking from year to year. Officials from State's Trafficking Office recognized the need to establish mechanisms to evaluate grant effectiveness. However, officials stated that the office lacks the personnel to monitor and evaluate programs in the field and that it relies on U.S. embassy personnel to assist in project monitoring. In early 2006, the Trafficking Office adopted a monitoring and evaluation tool to assist embassy personnel in monitoring its antitrafficking programs, but it is too soon to assess its impact.

Our review of Department of State documentation and discussions with agency officials found little evidence of the impact of various antitrafficking efforts. For example, the 2005 Trafficking in Persons Report asserts that legalized or tolerated prostitution nearly always increases the number of women and children trafficked into commercial sex slavery but does not cite any supporting evidence.
However, apart from a 2005 European Parliament-sponsored study on the link between national legislation on prostitution and the trafficking of women and children, we found few studies that comprehensively addressed this issue. In addition, the State Inspector General report noted that some embassies and academics questioned the credentials of the organizations, and the findings of the research, that the Trafficking Office funded. The Inspector General recommended that the Trafficking Office submit research proposals and reports to a rigorous peer review to improve oversight of research efforts. In addition, according to agency officials in Washington, D.C., and in the field, there is little or no evidence to indicate the extent to which different types of efforts (such as prosecuting traffickers, abolishing prostitution, increasing viable economic opportunities, or sheltering and reintegrating victims) affect the level of trafficking or the extent to which rescued victims are being retrafficked.

As required by the TVPA, the Department of State issues an annual report that analyzes and ranks foreign governments' compliance with minimum standards to eliminate trafficking in persons. This report has increased global awareness of trafficking in persons, encouraged action by some governments that failed to comply with the minimum standards, and raised the threat of sanctions against governments that did not make significant efforts to comply with these standards. The Department of State includes explanations of the rankings in the report, though they are not required under the TVPA. However, the report's explanations for these ranking decisions are incomplete, and agencies do not consistently use the report to influence antitrafficking programs. Information about whether a country has a significant number of trafficking victims may be unavailable or unreliable, making the justification for some countries' inclusion in the report debatable. Moreover, in justifying the tier rankings for these countries, State does not comprehensively describe foreign governments' compliance with the standards, many of which are subjective. This lessens the report's credibility and hampers its usefulness as a diplomatic tool. In addition, incomplete country narratives reduce the report's utility as a guide to help focus U.S. government resources on antitrafficking programming priorities.

Each year since 2001, State has published the congressionally mandated Trafficking in Persons Report, placing countries into categories, or tiers, based on the Secretary of State's assessment of foreign governments' compliance with four minimum standards for eliminating human trafficking, as established in the TVPA. These standards reflect the U.S. government's antitrafficking strategy of prosecuting traffickers, protecting victims, and preventing trafficking. The first three standards deal with countries' efforts to prohibit severe forms of trafficking and prescribe penalties for trafficking crimes, while the fourth standard relates to government efforts to eliminate trafficking. The TVPA instructed the Secretary of State to place countries that are origin, transit, or destination countries for a significant number of victims of severe forms of trafficking in one of three tiers. In 2003, State added a fourth category, the tier 2 watch list, consisting of tier 2 countries that require special scrutiny in the coming year (see fig. 3). Governments of countries placed in tier 3 may be subject to sanctions by the United States.
In addition to the rankings, each Trafficking in Persons Report contains country narratives intended to provide the basis for each country's tier placement. Although the narratives are not required by the TVPA, they describe the scope and nature of the trafficking problem, explain the reasons for the country's inclusion in the report, and describe the government's efforts to combat trafficking and comply with the minimum standards contained in U.S. legislation. For countries placed in the lowest two tiers, State develops country action plans to help guide governments in improving their antitrafficking efforts.

The Trafficking in Persons Report has raised global awareness of human trafficking and spurred some governments that had failed to comply with the minimum standards to adopt antitrafficking measures. According to U.S. government and international organization officials and representatives of trafficking victim advocacy groups, this is due to the combination of a public assessment of foreign governments' antitrafficking efforts and potential economic consequences for those that fail to meet minimum standards and do not make an effort to do so. U.S. government officials cited a number of cases in which foreign governments improved their antitrafficking efforts in response to their tier placements. For example, State and USAID officials cited the case of Jamaica, a source country for child trafficking into the sex trade, which was placed in tier 3 in the 2005 report. The country narrative noted deficiencies in Jamaica's antitrafficking measures and reported that the government was not making significant efforts to comply with the minimum standards: Jamaica failed to investigate, prosecute, or convict any traffickers during the previous year, despite the passage of a law to protect minors. In response, the Jamaican government created an antitrafficking unit within its police force and conducted raids that led to nine trafficking-related arrests. In addition, the 2004 report placed Japan on the tier 2 watch list, and the country narrative noted that Japan is a destination country for large numbers of foreign women and children who are trafficked for sexual exploitation. It highlighted weaknesses in Japan's law enforcement efforts; for example, the lack of scrutiny of Japan's entertainer visas reportedly allowed traffickers to use them to bring victims into the country. The country narrative also mentioned Japan's failure to comply with minimum standards for protecting victims, noting that Japan deported foreign trafficking victims as undocumented aliens who had committed a crime by entering the country illegally. According to State officials and the 2005 report, the Japanese government responded to the report's criticisms by tightening the issuance of entertainer visas and ceasing the criminal treatment of trafficking victims.

Governments of countries placed in tier 3 that do not implement the recommendations in the country action plan may be subject to sanctions or other penalties. The United States, for example, may oppose assistance for the country from international financial institutions such as the International Monetary Fund. Since 2003, full or partial sanctions have been applied to eight countries, most of which were already under sanctions from the United States.

According to the presidential directive and the 2005 Trafficking in Persons Report, the annual report is intended as a tool to help the United States engage foreign governments in fighting human trafficking. According to U.S.
government officials, the report's effectiveness as a diplomatic tool for discussing human trafficking with foreign governments depends on its credibility. The country narratives used as the basis for ranking decisions should provide clear and comprehensive assessments of foreign governments' antitrafficking efforts and demonstrate consistent application of the standards. Our analysis of the 2005 report found limitations in the country narratives, and State officials in the Regional Bureaus expressed concerns that these limitations detract from the report's credibility and usefulness. These limitations include some countries' inclusion in the report based on unreliable data, incomplete explanations of compliance with the minimum standards by some of the highest-ranked countries, and country narratives that did not clearly indicate how governments complied with certain standards and criteria. We also found criticisms of the process for resolving disputes about country inclusion and tier rankings. The TVPA requires State to rank the antitrafficking efforts of governments of countries that are sources, transit points, or destinations for a "significant number" of victims of severe forms of trafficking. Since 2001, State has used a threshold of 100 victims to determine whether or not to include a country in the Trafficking in Persons Report. However, as discussed earlier in this report, reliable estimates of the number of trafficking victims are generally not available. For example, according to State officials, one country was included in the report because a junior political officer stated that at least 300 trafficking victims were in the country and that the government's efforts to combat trafficking should be assessed. According to these officials, this statement was based on the political officer's informal survey of brothels in that country. Since then, other embassy officials, including the ambassador, have argued that the country does not have a significant number of victims, but it continues to appear in the report. In addition, State officials cited Estonia as a country that was included in the report based on an IOM official's informal estimate of more than 100 victims. State officials said that a subsequent embassy-funded study of trafficking in Estonia found that the country had around 100 confirmed victims in a 4-year period, but internal discussions have not led to the removal of Estonia from the Trafficking in Persons Report. However, the country narrative for Estonia in the 2005 report was modified from previous years to state that Estonia is a source and transit country for a "small number" of trafficking victims. Our review of country narratives in the 2005 report revealed some cases in which it was not clear how the situations used to justify the country's inclusion in the report constituted severe forms of trafficking under U.S. law. For example, the country narratives for Algeria, Saudi Arabia, and Singapore described cases in which human smugglers abandoned people, domestic workers were abused by their employers, and foreign women engaged in prostitution. The narratives either did not clearly establish whether the situation involved victims of severe forms of trafficking or failed to provide enough information about the magnitude of the problem to show that the number of victims had reached the 100-victim threshold.
According to State officials, inclusion of human rights abuses or labor issues in the description of foreign countries' human trafficking problem can damage the report's credibility with foreign governments. Some State officials have suggested abandoning the threshold of 100 victims and including all countries in the report. Our analysis of the 2005 report found that many narratives did not clearly state whether and how the government met the minimum standard regarding stringency of punishment for severe forms of trafficking (see app. I for a description of the methodology used to analyze the 2005 report). This standard requires that prescribed penalties for severe forms of trafficking be sufficiently stringent to deter such trafficking and that they reflect the heinous nature of the offense. The Trafficking Office has not defined a threshold for what constitutes "sufficiently stringent" punishment. Our analysis showed that in over one-third of cases, the 2005 report's country narratives did not characterize the prescribed penalties as sufficiently stringent. Moreover, in many cases the narratives did not state whether or not the government met this minimum standard. State officials agreed that this subjectivity makes it difficult for reports staff and foreign governments to know what constitutes compliance, negatively affecting the report's credibility and utility as a diplomatic tool. Our analysis of the 2005 report found that many country narratives did not provide a comprehensive assessment of foreign governments' compliance with the minimum standards, resulting in incomplete explanations for tier placements. Although the 2005 report discusses the importance of imposing strict penalties on traffickers, we found that only 2 of the 24 tier 1 country narratives clearly explained compliance with the second minimum standard established in the TVPA, which, among other things, calls for governments to prescribe punishment for sex trafficking that is commensurate with that for grave crimes such as forcible sexual assault. The narratives for 17 (71 percent) of the tier 1 countries provided information on penalties for sex trafficking but did not compare these with the governments' penalties for other grave crimes. The narratives for 5 (21 percent) of the tier 1 countries did not mention at all whether the governments complied with this standard. Our analysis of the tier 1 country narratives in the 2005 report also showed that, while most explained how these governments fully met the core criteria for the fourth minimum standard, related to government efforts to eliminate severe forms of trafficking, some did not. A senior official at the Trafficking Office confirmed this finding. We found that country narratives for 11 (46 percent) of the 24 tier 1 countries raised concerns about the governments' compliance with key parts of core criteria used to determine if the government is making a serious and sustained effort to eliminate severe forms of trafficking. However, the narratives failed to explain whether and how the governments' success in meeting the other core criteria outweighed these deficiencies and justified their placement in tier 1. For example, the 2005 report described France, a tier 1 country, as a destination for thousands of trafficked women and children. Although the report states that the French government fully complied with the minimum standards, our analysis of the narrative found that the first three standards were not mentioned.
Furthermore, the narrative also discussed the French government's failure to comply with the criterion on protecting trafficking victims, one of the key objectives of U.S. antitrafficking legislation. The narrative discussed a French law under which trafficking victims were harmed by being arrested, jailed, and fined. Senior officials at the Trafficking Office were concerned about France's lack of compliance with the victim protection criterion. The narrative, however, did not balance the discussion of these deficiencies by explaining how the government's compliance with the other core criteria allowed it to meet the fourth minimum standard and thus be placed in tier 1. Similarly, the country narratives for two tier 1 countries stated that the governments were not taking steps to combat official corruption, which the 2004 report highlighted as a major impediment to antitrafficking efforts. For example, the narrative for Nepal, a source country for women and children trafficked to India and the Middle East, stated that the government fully complied with the minimum standards. However, the narrative noted that the government had not taken action against immigration officials, police, and judges suspected of benefiting from trafficking-related graft and corruption, and it did not explain how the deficiency in this core criterion was outweighed by Nepal's efforts with other core criteria. According to State officials, there are a considerable number of disagreements within State about the initial tier placements proposed by the Trafficking Office. These disagreements are not surprising, given that the Trafficking Office focuses exclusively on antitrafficking efforts while the Regional Bureaus manage bilateral relations, which comprise a wide range of issues. However, it is important that the process for resolving these conflicts be credible. Some disagreements on tier rankings are resolved in meetings between the Trafficking Office and the Deputy Assistant Secretaries of the Regional Bureaus, but most are elevated to the undersecretary level. A few disagreements are even referred to the Secretary of State for resolution. According to State officials, some disputes are worked out by clarifying misunderstandings or providing additional information. Although Trafficking Office staff said that these discussions are constructive, staff in State's Regional Bureaus said that many disagreements over tier rankings are resolved by a process of "horsetrading," whereby the Trafficking Office agrees to raise some countries' tier rankings in exchange for lowering others. In these cases, political considerations may take precedence over a neutral assessment of foreign governments' compliance with minimum standards to combat trafficking. Senior officials at the Trafficking Office acknowledged that political considerations sometimes come into play when making tier ranking decisions. The Trafficking Office's implementation plan and the 2005 Trafficking in Persons Report state that the report should be used as a guide to target resources to prosecution, protection, and prevention programs. However, we found that U.S. government agencies do not systematically link the programs they fund to combat trafficking overseas with the tier rankings or the deficiencies that are identified in the report's country narratives. For example, U.S. agencies did not use the report when they selected high-priority countries to participate in the 2-year, $50 million Presidential Initiative to Combat Trafficking in Persons.
Moreover, we found that many of the country narratives describing deficiencies in foreign governments' antitrafficking efforts were incomplete, making it difficult to use them to guide programming. Officials from State's Trafficking Office acknowledged that the management processes and staff responsible for producing the report are not linked with those managing overseas assistance programs. State's Inspector General reported in November 2005 that the lack of synchronization between the Trafficking Office's grants cycle (January and February) and reporting cycle (June) makes it difficult to address the shortcomings identified in the report and the countries' programming needs. In addition, most of the State requests for grant proposals that we reviewed were generic in scope and were not tailored to address a specific problem or priority. For example, one request for proposals was directed broadly at prevention and protection programs in Africa, the Caribbean, and Latin America. In addition, officials from State's Regional Bureaus said that most of their requests for grant proposals are sent to all the embassies in their region and are not targeted to those countries on lower tiers. However, officials from one Regional Bureau stated that they sent a request for grant proposals dealing with law enforcement issues only to those countries on the tier 2 watch list to ensure the programs were targeted where they were most needed. The presidential directive stated that agencies are to develop a consensus on the highest-priority countries to receive antitrafficking assistance through interagency consultation and in consultation with U.S. missions overseas. The Trafficking Office's implementation plan called for using the annual Trafficking in Persons Report as a guide to target assistance, with priority to countries ranked in the lowest tiers and assistance to only those tier 1 and 2 countries with limited resources and whose governments showed a clear commitment to combat trafficking. In fiscal year 2005, the U.S. government obligated about $96 million to support more than 265 international antitrafficking programs in about 100 countries. Only one-fourth of this money went to countries ranked in the lowest two tiers (see fig. 4). Through the Senior Policy Operating Group, in January 2004 agencies selected eight countries to target their efforts for the presidential initiative to combat trafficking in persons; however, documentation of the decision-making process does not mention that the Trafficking in Persons Report's tier rankings or country narratives influenced this selection. Officials from the Trafficking Office and the documents we reviewed stated that the Group selected countries based on several factors, including anticipated host government commitment and the ability to start implementation in a short time frame. The eight countries selected were ranked in tier 2 in the 2003 Trafficking in Persons Report, suggesting that their governments showed some commitment to combating trafficking by making efforts to comply with the minimum standards and criteria outlined in the TVPA. However, it was not clear how the Group applied the criteria in selecting the countries. For example, host government commitment to combat trafficking did not necessarily translate into a willingness to receive U.S. assistance. Department of State cables indicate that the governments in Brazil and India did not support U.S. efforts to fund antitrafficking programs under the presidential initiative.
In addition, despite an emphasis on selecting countries in which the United States could start implementation in a short time frame, agreements necessary to conduct law enforcement projects were not in place in Brazil and Mexico, causing these initiatives to be delayed. Also, according to an agency official and documents we reviewed, Tanzania was included because a senior official had just traveled there and thought trafficking might be a problem. The country narratives' incomplete assessments of deficiencies in foreign governments' efforts to combat trafficking diminish the Trafficking in Persons Report's utility as a programming guide. Our analysis of the 2005 report found that many country narratives failed to include information on the governments' compliance with some standards and core criteria, making it difficult for U.S. government officials to use the report as a programming guide. For example, all narratives for countries in the lowest two tiers contained some discussion of government efforts to protect trafficking victims. However, we found that 80 percent failed to mention key aspects of the victim protection criterion, including whether victims were encouraged to cooperate with law enforcement, whether the government provided legal alternatives to deportation, and whether victims were protected from inappropriate treatment as criminals (see fig. 5). In addition, 92 percent of country narratives for tier 2 countries, which receive the largest share of U.S. government antitrafficking funds, did not mention compliance with certain standards and criteria. The United States has placed trafficking on the international agenda and has spurred governments and organizations into action through its funding of international programs and the publication of the annual Trafficking in Persons Report. Additionally, the development of a victim-centered approach based upon prevention, protection, and prosecution programs has provided an operational framework for both governments and practitioners in the field. However, more than 5 years since the passage of the TVPA, the U.S. government lacks fundamental information on the nature and extent of the global trafficking problem and an overall strategy for agencies to target their programs and resources abroad. As the United States and other countries work to identify victims of trafficking, the scope of the global trafficking problem remains unknown in terms of overall numbers within countries of origin; victims' gender, age, and type of exploitation suffered; and the profile and methods of the perpetrators. The United States has provided about $375 million in antitrafficking assistance since 2001 for projects in about 100 countries. However, the lack of an overall government strategy that ties together and leverages agencies' program expertise and resources with knowledge of victims' identities and locations raises questions about whether antitrafficking activities are targeted where they are most needed. Furthermore, little evaluation research has been conducted to determine which international antitrafficking activities are working or how best to tailor them to meet specific needs. The fight against human trafficking will almost certainly require years of effort and the continued monitoring of governments' actions. For the annual Trafficking in Persons Report to be useful as a diplomatic tool, its narratives and country rankings must be viewed as credible by governments and by informed human rights and country observers.
However, the report does not comprehensively or clearly describe how decisions about tier rankings were reached. Moreover, problems identified in the report provide the means to better identify program needs and allocate resources, but agencies have not linked their activities to identified deficiencies. To improve efforts to combat trafficking in persons abroad, we recommend that the Secretary of State, in her capacity as Chair of the Interagency Task Force to Monitor and Combat Trafficking, consider the following actions: 1. Work closely with relevant agencies as they implement U.S. law calling for research into the creation of an effective mechanism to develop a global estimate of trafficking. This could include assigning a trafficking data and research unit to serve as an interagency focal point charged with developing an overall research strategy, collecting and analyzing data, and directing research. 2. In conjunction with relevant agencies, develop and implement a strategic approach that would delineate agency roles and responsibilities in relation to each other, strengthen mechanisms for integrating activities, and determine priorities, measurable goals, time frames, performance measures, and a methodology to gauge results. 3. To improve the credibility of State's annual report on trafficking in persons, ensure that the report clearly documents the rationale and support for tier rankings, and improve the report's usefulness for programming by making the narratives more comprehensive. We requested comments on a draft of this report from the Secretaries of State, Justice, Health and Human Services, Homeland Security, and Labor; the Administrator of USAID; the U.S. government agency that prepares the trafficking estimate; and cognizant officials at the ILO, IOM, and UNODC, or their designees. We received written comments from State, which are reprinted in appendix V along with our responses to specific points. State generally agreed with our recommendations. State agreed with our first recommendation to work closely with relevant agencies as they implement U.S. law calling for research into the creation of an effective mechanism to develop a global estimate of trafficking and provided detailed suggestions for areas of future research that are consistent with our findings. Regarding our second recommendation that the Secretary of State develop and implement a strategic approach, State recognized the need for better performance measures and enhanced interagency coordination while also stating that roles and responsibilities have been established. In response, we clarified our recommendation to state that agencies' roles and responsibilities should be delineated in relation to each other, consistent with our report findings. In response to our third recommendation, State said that while its annual Trafficking in Persons Report can improve, it has become a much richer, more useful product since first published in 2001. State also said our report includes some useful recommendations that the department will explore integrating with ongoing efforts in light of available resources. In addition, State commented that its 2006 Trafficking in Persons Report offers a greater and more consistent examination of the minimum standards as they apply to each country. We conducted a selective review of 26 tier 1 country narratives in the 2006 report and found that many of the concerns we cited in our report remain.
For example, none of the tier 1 country narratives clearly explained whether or not the government complied with the second minimum standard established in the TVPA, which, among other things, calls for governments to prescribe punishment for sex trafficking that is commensurate with that for grave crimes such as forcible sexual assault. In oral comments, the U.S. government agency that prepares the trafficking estimate fundamentally concurred with our characterization of the U.S. global estimate of trafficking flows. The agency stated that it has sought to improve upon the 2004 estimate's accuracy and utility by working with an outside contractor with the intention of thoroughly documenting and vetting a methodology, as well as preparing detailed recommendations for improving future estimates. According to the agency, many of this contractor's initial recommendations have been in line with those delineated in our report. Despite these efforts and the inherent difficulty of preparing estimates of hidden populations, the agency agreed with our overall findings—particularly with the idea that housing the estimate in the intelligence community makes it opaque and inaccessible. The agency stated that it believes that other U.S. government agencies are best positioned to produce the global trafficking estimate in the future, because they have access to the same unclassified data, would be better able to vet the methodology, and could provide additional information to allow for a closer link between international and domestic human trafficking flow estimates. State, Justice, Labor, USAID, the U.S. government agency that prepared the trafficking estimate, and the ILO, IOM, and UNODC submitted technical comments, which we have incorporated into this report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of State, Justice, Health and Human Services, Homeland Security, and Labor; the Administrator of USAID; the U.S. government agency that prepares the trafficking estimate; ILO; IOM; and UNODC; and interested congressional committees. Copies of this report will also be made available to other interested parties on request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to examine (1) estimates of the extent of global trafficking in persons, (2) the U.S. government's strategy to combat trafficking in persons abroad, and (3) the Department of State's (State) process for evaluating foreign governments' antitrafficking efforts. To examine estimates of the extent of global human trafficking, we conducted an analysis of the global trafficking databases developed and maintained by the U.S. government, the International Organization for Migration (IOM), the International Labor Organization (ILO), and the U.N. Office on Drugs and Crime (UNODC).
We met with officials from each organization, assessed the reliability of their global trafficking data, reviewed documents and evaluated their methodologies for collecting and analyzing human trafficking data, and analyzed the data collected by IOM. We examined ILO, UNODC, and IOM reports. We also reviewed the existing relevant literature on data and methodologies used in global human trafficking research. We collected reports, journal articles, conference presentations, U.S. government-sponsored studies, and books that discuss human trafficking. We read and analyzed these documents and used them to identify issues that affect the quality of data on trafficking. We grouped these issues into three major categories: availability, reliability, and comparability. To examine the U.S. government's strategy for combating human trafficking abroad, we reviewed U.S. laws and presidential directives describing actions that various U.S. government entities were to undertake in combating trafficking. These include the Trafficking Victims Protection Act (TVPA) of 2000 and its reauthorizations in 2003 and 2005, Executive Order 13257, and National Security Presidential Directive 22. We also analyzed documents and interviewed officials from the Departments of Health and Human Services (HHS), Homeland Security (DHS), Justice, Labor, and State, as well as the United States Agency for International Development (USAID). Documents we reviewed include each agency's plan to implement the presidential directive, agency and project-level monitoring and evaluation documents, project proposals, interagency coordination guidance, the Bureau Performance Plan from State's Office to Monitor and Combat Trafficking in Persons, and USAID's strategy to combat trafficking in persons, as well as regional and country-level strategic framework documents. To examine State's process for evaluating foreign governments' antitrafficking efforts, we reviewed 122 country narratives in the 2005 Trafficking in Persons Report. We examined the narratives for all 66 countries in tier 1, the tier 2 watch list, and tier 3. Of the 77 narratives in tier 2, we reviewed all 35 narratives for countries whose tiers had changed from the previous year's report. For the remaining 42 country narratives, we drew a random probability sample of 21 countries. With this probability sample, each narrative in the 2005 report had a nonzero probability of being included, and that probability could be computed for any member. Each sample element was subsequently weighted in the analysis to account statistically for all the narratives in the 2005 report, including those not selected. Because we followed a probability procedure based on a random selection of tier 2 countries, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the intervals in this report will include the true values in the study population. All percentage estimates from the narrative review have margins of error of plus or minus 7 percentage points or less, unless otherwise noted.
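To illustrate the weighting and confidence interval computations described above, the following is a minimal Python sketch of a finite-population-corrected 95 percent confidence interval for a sampled proportion. All counts in the sketch are hypothetical placeholders, not GAO's actual sample data, and the sketch is not the statistical software used in the analysis.

import math

# Hypothetical stratum mirroring the design described above: 42 tier 2
# narratives remained after the certainty reviews, and a simple random
# sample of 21 was drawn from them. The "hits" count is a placeholder.
N = 42      # narratives in the sampled stratum
n = 21      # narratives sampled
hits = 8    # hypothetical number of sampled narratives meeting a criterion

p_hat = hits / n    # estimated proportion in the stratum
weight = N / n      # each sampled narrative statistically stands for N/n narratives

# Standard error for a proportion under simple random sampling, with the
# finite population correction applied because the sampling fraction is large.
fpc = (N - n) / (N - 1)
se = math.sqrt(fpc * p_hat * (1 - p_hat) / n)

margin = 1.96 * se  # half-width of the 95 percent confidence interval
print(f"weight per sampled narrative: {weight:.1f}")
print(f"estimate: {p_hat:.2f}, 95% CI: {p_hat - margin:.2f} to {p_hat + margin:.2f}")
print(f"margin of error: plus or minus {margin * 100:.0f} percentage points")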
In addition, we systematically compared the country narratives describing these governments' antitrafficking efforts with the minimum standards and five core criteria in the legislation and determined whether or not the country narrative mentioned each standard or criterion. If the country narrative did not mention a standard or criterion, we coded that as "not mentioned." If the country narrative did mention a standard or criterion, we determined whether the narrative showed that the government complied or did not comply with the standard or criterion. If we determined that the narrative showed that the government complied with the standard or criterion, we coded that as "yes." If we determined that the narrative showed that the government did not comply with the standard or criterion, we coded that as "no." In some cases, the narrative mentioned a standard or criterion, but we could not determine conclusively whether or not the narrative demonstrated the government's compliance. We coded those cases as "not clear." Finally, elements of some criteria were not applicable to certain countries. For example, if the report described a country as a source of trafficking victims rather than as a destination for victims, the criterion regarding provision of victims with legal alternatives to deportation would not apply. We coded these cases as "not applicable." We then tallied the number of responses in each category. To ensure analytical validity and reliability, our analysis involved multiple phases of checking and review of analytical procedures, categories, and results. Two GAO analysts reviewed a selection of country narratives, independently coded them, and agreed on the basis for the coding decisions. Next, one GAO analyst performed the coding for the remaining country narratives. A second GAO analyst reviewed a number of these coding decisions, and the two analysts discussed them. Finally, a third GAO analyst performed a review of all coding decisions and tabulations. In addition, to ensure the reliability of the funding data used, we reviewed the information collected by the State Department on each agency's funding obligations. We then checked with each individual agency to verify that the amounts State reported were correct. We conducted our review from September 2005 to May 2006 in accordance with generally accepted government auditing standards. This appendix describes the data sources, data validation, methodology, and key assumptions used by the U.S. government, ILO, UNODC, and IOM to collect data on and/or estimate the extent of human trafficking as well as the limitations of these databases. (See tables 4 and 5.) This appendix describes the data on human trafficking maintained by eight U.S. government entities. (See table 6.) The following are GAO's comments on the Department of State's letter, dated June 30, 2006. 1. State agreed that more research would help in the fight against human trafficking. State said that its Trafficking in Persons office (G/TIP) continues to pursue better estimates of the scope of trafficking; that the Senior Policy Operating Group (the Group) has established a subcommittee on trafficking research to, among other things, ensure regular interagency communication on research and close the most important data gaps; that G/TIP plans to set aside a substantial portion of its program budget for trafficking research; and that G/TIP funds IOM's database.
We recognize two ongoing projects to develop better estimates of trafficking and note that it is too early to assess the results of these projects. The Group subcommittee began meeting within the past year and, at the time of our review, had not established research priorities. During our review, G/TIP staff expressed concern about the limited amount of funding available for research, including continued funding for IOM's database, which G/TIP partially funds. 2. State said that a better global estimate of the number of trafficking victims should not be the primary focus of additional research initiatives. State said a more valuable approach would be information on the comparative severity of trafficking in particular regions, countries, or localities; on the methods used by traffickers; and on the effectiveness of antitrafficking programs. We believe our recommendation is consistent with State's comments. We agree that additional research in these areas is valuable, as discussed in the report. We report that reliable and comparable country data do not exist. We also report that U.S. agencies collect information on traffickers and their victims but do not share their information or analyze the information to identify trends and compile a profile of victims. We also describe the value of IOM's database in providing information on traffickers' routes and nationalities and the mechanisms they use to identify and manipulate their victims. We also agree that more information on the effectiveness of antitrafficking programs is needed, and we note that little or no evidence is currently available to indicate the extent to which different types of efforts affect the level of trafficking. Our recommendation calls upon the Secretary of State, in her capacity as Chair of the Interagency Task Force to Monitor and Combat Trafficking in Persons, to consider assigning a trafficking data and research unit but does not call for setting up a new unit as State's comments suggest. 3. State agreed with the need for better performance measures, said that the Group is looking at how to reconcile the different agency grants processes so as to achieve an earlier exchange of information, and said that State will address enhanced interagency coordination in its upcoming G/TIP office strategy. State said that roles and responsibilities of government agencies in combating trafficking in persons have been established. We have clarified our recommendation to state that agencies' roles and responsibilities should be delineated in relation to each other, consistent with our report findings. State also said that the Group creates an active forum where interagency representatives work together to identify strengths and weaknesses of the U.S. approach to combating trafficking in real time. State also said that the Attorney General's annual report and several of the Group's subcommittees focus on improving efforts to combat trafficking in persons. We reported findings from the Attorney General's report. We also reported that the Group, through the work of its various subcommittees, served as a forum for agency officials to discuss trafficking policy and programs. However, based on information from the other Group members, we believe that our report remains accurate in also stating that the Group has not developed or implemented a systematic way for agencies to identify priorities and target efforts abroad to complement each other's activities to achieve greater results than if the agencies were acting alone.
4. The Department of State agreed with our finding that its annual Trafficking in Persons Report could provide more comprehensive and clearer explanations for the tier ranking decisions. The Department of State said that the 2006 report offers a greater and more consistent examination of the minimum standards as they apply to each country. We conducted a selective review of 26 tier 1 country narratives in the 2006 report and found that many of the concerns we cited in our report remain. For example, none of the tier 1 country narratives clearly explained how the government complied with the second minimum standard established in the TVPA, which, among other things, calls for governments to prescribe punishment for sex trafficking involving force, fraud, or coercion that is commensurate with that for grave crimes such as forcible sexual assault. Also, as in the 2005 report, our review found that some tier 1 country narratives in the 2006 report described governments' failure to comply with certain core criteria, but the narratives did not explain how the governments' success in meeting the other core criteria outweighed these deficiencies and justified their placement in tier 1. We acknowledge in our report that the Department of State is not legislatively mandated to include country narratives in the annual Trafficking in Persons Report. However, the 2006 Trafficking in Persons Report and reports from previous years characterize the country narratives as "an assessment of the government's compliance with the minimum standards … as laid out in the TVPA of 2000, as amended." According to the report, the narratives are also intended to explain the basis for the tier ranking decisions. 5. State said that under G/TIP's current guidelines to keep narratives short, readable, and focused on deficiencies, the Trafficking in Persons Report does not provide (and the law does not require) an exhaustive examination of compliance with all of the minimum standards' criteria. According to State, such an approach would create lengthy country narratives that would lose their readability, effectiveness, and policy relevance and would significantly increase the size of the report. As described in our report, we did not assess whether the 2005 report's country narratives considered all 10 criteria for the fourth minimum standard, and we do not criticize the Department of State for failing to provide an exhaustive examination of governments' compliance with all 10 of these criteria. Instead, our analysis focused on the four minimum standards required by the TVPA; and for the fourth standard, we looked only at whether the narratives explained governments' compliance with the five core criteria identified by the Trafficking Office. We believe these issues can be discussed while maintaining a concise reporting format. 6. State said the TVPA requires the Trafficking in Persons Report to include countries with a "significant number of victims of severe forms of trafficking." As a matter of policy, the minimum "significant number of victims" has been defined as 100. As discussed in our report, our interviews with State officials as well as our review of the 2005 report's country narratives indicated that some countries' inclusion in the report was questionable. State acknowledges that many countries have not analyzed their crime statistics through the prism of trafficking in persons, making the available data unreliable.
7. State said the law does not clearly define what constitutes a sentence that is sufficient to deter or that adequately reflects the heinous nature of the offense. The department has defined "sufficiently stringent" punishment to mean time in jail, preferably at least several years in jail. We recognize the subjectivity of the third minimum standard in our report. Even though some narratives indicate that countries prescribe jail time, State's report does not explicitly state the department's definition that sufficiently stringent means some jail time, nor did some of the narratives state that the punishment was sufficiently stringent. Thus, it is unclear whether or how governments complied with this minimum standard. Cheryl Goodman, Assistant Director; Suzanne Dove; Christina Werth; Gergana Danailova-Trainor; Bruce Kutnick; Barbara Stolz; and Patrick Dickriede made key contributions to this report. In addition, Lynn Cothern, Martin de Alteriis, Etana Finkler, and Mary Moutsos provided technical or legal assistance.
Human trafficking is a worldwide form of exploitation in which men, women, and children are bought, sold, and held against their will in involuntary servitude. In addition to the tremendous personal damage suffered by individual trafficking victims, this global crime has broad societal repercussions, such as fueling criminal networks and imposing public health costs. In 2000, Congress enacted the Trafficking Victims Protection Act (TVPA) to combat trafficking and has since reauthorized the act twice. This report reviews U.S. international antitrafficking efforts by examining (1) estimates of the extent of global trafficking, (2) the U.S. government's strategy for combating the problem abroad, and (3) the Department of State's process for evaluating foreign governments' antitrafficking efforts. The U.S. government estimates that 600,000 to 800,000 persons are trafficked across international borders annually. However, such estimates of global human trafficking are questionable. The accuracy of the estimates is in doubt because of methodological weaknesses, gaps in data, and numerical discrepancies. For example, the U.S. government's estimate was developed by one person who did not document all his work, so the estimate may not be replicable, casting doubt on its reliability. Moreover, country data are not available, reliable, or comparable. There is also a considerable discrepancy between the numbers of observed and estimated victims of human trafficking. The U.S. government has not yet established an effective mechanism for estimating the number of victims or for conducting ongoing analysis of trafficking-related data that reside within government entities. While federal agencies have undertaken antitrafficking activities, the U.S. government has not developed a coordinated strategy for combating trafficking abroad or developed a way to gauge results and target its overall assistance. The U.S. government has established coordination mechanisms, but they do not include a systematic way for agencies to clearly delineate roles and responsibilities in relation to each other, identify needs, or leverage activities to achieve greater results. Further, the U.S. government has not established performance measures or conducted evaluations to gauge the overall impact of antitrafficking programs abroad, thus preventing the U.S. government from determining the effectiveness of its efforts or adjusting its assistance to better meet needs. The Department of State assesses foreign governments' compliance with minimum standards to eliminate trafficking in persons, but the explanations for ranking decisions in its annual Trafficking in Persons Report are incomplete, and the report is not used consistently to develop antitrafficking programs. The report has increased global awareness, encouraged government action, and raised the risk of sanctions against governments that did not make significant efforts to comply with the standards. However, State does not comprehensively describe compliance with the standards, lessening the report's credibility and usefulness as a diplomatic tool. Further, incomplete country narratives reduce the report's utility as a guide to help focus U.S. government resources on antitrafficking programming priorities.
Through its disability compensation program, VA pays monthly benefits to veterans with service-connected disabilities. Under VA's BDD program, any member of the armed forces who has seen active duty—including those in the National Guard or Reserves—may apply for VA disability benefits prior to discharge. The program allows veterans to file for and potentially receive benefits earlier and faster than under the traditional claims process because medical records are more readily accessible and key forms needed to process the claim can be signed immediately. Establishing that the claim is related to the member's military service may also be easier under the BDD program because the member is still on active duty status. In 2008, VA and DOD offered the program at 142 bases, providing access to over 70 percent of servicemembers who were discharged in fiscal year 2007. In July 2008, VA issued policy guidance allowing servicemembers being discharged from any military base to initiate BDD claims at other locations where VA personnel were located, such as at all of its 57 regional offices. VA also established an alternative predischarge program, now called Quick Start, to provide members who cannot participate in the BDD program an opportunity to initiate claims before discharge. Last year, over 51,000 claims were filed through the BDD and Quick Start programs. To participate in the BDD program, servicemembers generally must meet six requirements: (1) be in the process of being honorably discharged from military service, (2) initiate their application for VA disability benefits between 60 and 180 days prior to their discharge date, (3) sign a Veterans Claims Assistance Act (VCAA) form, (4) obtain and provide copies of their service medical records to local VA personnel, (5) complete a VA medical exam, and (6) remain near the base until the exam process is done. The 60- to 180-day time frame is intended to provide sufficient time prior to discharge for local VA personnel at BDD intake sites to assist members with their disability applications, including scheduling exams. While VA has examination requirements for those applying for disability compensation, DOD also has examination requirements for those leaving military service. For all servicemembers leaving the military, the military services generally require health assessments that consist of a questionnaire about the member's general health and medical history, among other topics. In some cases, members who are separating from the military may receive a physical exam to obtain evidence for a particular medical problem or problems that might exist. The exam is intended to obtain information on the individual's medical history and includes diagnostic and clinical tests, depending on the types of disabilities being claimed. VA's exam for disability compensation is more comprehensive and detailed than the military services' separation exams: the separation exams are intended to document continued fitness for duty, whereas the VA exam is intended to document disability or loss of function. Under the BDD program, DOD and VA coordinate efforts to perform exams for servicemembers being discharged that satisfy requirements of both the military and VA. Because of variation in the availability of local resources, such as physicians trained to use VA's exam protocols, DOD and VA agreed that local military bases should have flexibility to determine whether VA or military physicians or some combination of both will conduct the exam.
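As a minimal illustration of the 60- to 180-day initiation window described above, the following Python sketch checks whether a hypothetical claim initiation date falls within the window. The dates and function names are illustrative assumptions and are not drawn from VA's systems or records.

from datetime import date

def days_before_discharge(initiation: date, discharge: date) -> int:
    # Number of days between claim initiation and the scheduled discharge.
    return (discharge - initiation).days

def within_bdd_window(initiation: date, discharge: date) -> bool:
    # The BDD rule described above: the claim must be initiated between
    # 60 and 180 days before the scheduled discharge date.
    return 60 <= days_before_discharge(initiation, discharge) <= 180

# Hypothetical dates for illustration.
discharge = date(2009, 5, 1)
print(within_bdd_window(date(2009, 1, 15), discharge))  # True: 106 days before discharge
print(within_bdd_window(date(2009, 4, 15), discharge))  # False: only 16 days before discharge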
In 2004, the agencies signed a memorandum of agreement (MOA) delineating their roles and responsibilities. The national agreement delegates authority to VA regional offices and individual military bases to create memorandums of understanding (MOU) that detail how the exam process will be implemented at the local level. VA's Veterans Benefits Administration (VBA) is responsible for administering and monitoring the BDD program. VBA personnel assemble claims-related information and send the claims to be processed at one of two regional offices. VBA is also responsible for the paperless BDD claims process, an initiative intended to improve efficiency by converting claims-related information stored in paper folders into electronic format, as part of VA's effort to have all claims processed electronically by the end of 2012. VA has established a performance goal to increase the percentage of first-time disability claims filed through the BDD program. Servicemembers generally learn of the BDD program through VA-sponsored benefits briefings conducted at military bases as part of TAP sessions. Led primarily by the Department of Labor, TAP consists of about 3 to 4 days of briefings on a variety of topics related to benefits and services available to servicemembers as they are discharged and begin life as veterans. Generally, servicemembers are required to attend a short introductory briefing, while all other sessions—including the VA benefits segment in which members learn about BDD—are optional. In addition to its participation goal for the BDD program, VA has three general goals for the timeliness and accuracy of all disability claims: average days pending (i.e., waiting for a final decision), average days to complete all work to reach a final decision, and average accuracy rate (percentage of claims with no processing errors). In 2009, VA reached its performance goal for one measure: average days to complete claims was 161 days compared with a goal of 168 days. However, VA fell short of two goals last year: average days pending was 117 days compared with a goal of 116 days, and national accuracy rates were 83 percent compared with a goal of 90 percent. VA has established one performance measure for the BDD program that tracks participation in the program. Since fiscal year 2005, VA has tracked the percentage of all disability claims filed through the BDD program within 1 year of discharge. VA's most recent data for fiscal year 2008 indicate that 59 percent of claims filed within 1 year of discharge were filed through the BDD program—9 percentage points higher than its fiscal year 2008 goal of 50 percent. VA recently revised this measure so that it accounts only for claims filed by members who are discharging from bases covered by the BDD program. Although VA fine-tuned its measure for BDD program participation, VA does not adequately measure the timeliness of BDD claims. VA tracks the days it takes to process traditional claims starting with the date a veteran first files a claim, whereas it tracks days to process BDD claims starting with the date a servicemember is discharged. This approach highlights a key advantage of the BDD program—that it takes less time for the veteran to receive benefits after discharge.
However, the time VA spends developing a claim before a servicemember's discharge—at least 60 days according to VA—is not included in its measures of timeliness for processing BDD claims, even though claims development is included in VA's timeliness measures for traditional disability claims. VA officials told us the agency does not measure the timeliness of BDD claims development for three reasons: (1) VA lacks legal authority to provide compensation until a servicemember is discharged and becomes a veteran; (2) VA officials perceive most development activities, such as obtaining the separation exam and medical records, to be outside of their control; and (3) VA officials said that a primary objective of the program was to shorten the time from when the member became entitled to benefits—by definition, after discharge—to the time he or she actually received them. While it is useful to know how soon after discharge servicemembers begin receiving benefits, excluding the time VA personnel spend on developing BDD claims limits VA's information on challenges in this stage of the process and may inhibit VA from taking action to address them. Personnel at 12 of the 14 BDD intake sites we reviewed indicated significant challenges with claims development activities, such as scheduling and completing sometimes multiple exams for servicemembers who leave an area. Challenges such as these may delay the development of servicemembers' claims, putting them at risk of having to drop out of the BDD program. The fact that the servicemember is not yet a veteran does not relieve VA of responsibility for tracking the time and resources spent developing BDD claims, which could ultimately help VA identify and mitigate program challenges. As for lack of control over the claims development process, VA faces similar limitations with traditional disability claims, because VA must rely on veterans to submit their applications and on other agencies or medical providers for records associated with the claim. Nevertheless, VA tracks time spent developing these claims and could also do this for BDD claims. VA implemented two initiatives to improve the BDD program but did not fully evaluate either. In 2006, VA finished consolidating claims processing activities for BDD into two regional offices—Salt Lake City, Utah, and Winston-Salem, North Carolina—to improve the consistency and timeliness of BDD ratings. In fiscal year 2007, each office completed about 11,000 BDD claims. Although VA reported to us that it monitors claims workloads between these offices and, in one case, sent claims from one office to the other so that claims could be processed more quickly, VA had not conducted an evaluation to determine whether consistency improved compared with prior practices. VA also has not evaluated a second BDD initiative, known as the paperless claims processing initiative, which is intended to increase the timeliness of claims processing and the security of BDD claims information. Since our report, VA told us that all BDD claims have been processed in the paperless environment since August 2008, and that it continues to monitor the BDD paperless initiative by hosting monthly teleconference calls with all 57 regional offices, intake sites, and area offices to provide ongoing guidance and training, as well as address any issues or problems the field may be experiencing. However, VA has not evaluated the extent to which this initiative improved overall timeliness or security.
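To make concrete the difference between the BDD and traditional timeliness measures discussed above, the following minimal sketch contrasts the two definitions. The dates are hypothetical placeholders and do not represent actual VA claims data or VA's processing systems.

from datetime import date

# Hypothetical milestones for a single BDD claim.
claim_initiated = date(2009, 1, 15)   # servicemember initiates the claim
discharge = date(2009, 5, 1)          # servicemember is discharged and becomes a veteran
decision = date(2009, 7, 10)          # VA completes the rating decision

# Traditional-style measure: counts from the date the claim was filed,
# so it includes the pre-decision development period.
from_filing = (decision - claim_initiated).days

# BDD-style measure: counts from the discharge date, so the pre-discharge
# development period is excluded from the reported figure.
from_discharge = (decision - discharge).days

print(f"days measured from filing:    {from_filing}")     # 176
print(f"days measured from discharge: {from_discharge}")  # 70
print(f"development days excluded:    {(discharge - claim_initiated).days}")  # 106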
We identified gaps related to VA's monitoring of the BDD program, but VA has since taken some steps to address those gaps. For example, we found that between September 2002 and May 2008, VA conducted reviews of BDD operations in only 16 of the 40 offices it visited. Further, in 10 of the offices that were reviewed, VA personnel did not document the extent to which BDD claims were fully developed before being passed on to the processing office, pursuant to VA policy. We also found that the review protocol did not prompt reviewers to verify the extent to which claims were being fully developed before being sent to the processing office. In addition, for 14 offices, reviewers did not address whether agreements related to processing BDD claims existed between the processing office and the relevant regional office, even though VA's BDD operations review protocol specifically prompts reviewers to check for such agreements. In response to our recommendations, VA officials reported that they have increased the number of BDD oversight visits, including visits to sites that had not been reviewed in several years, such as Honolulu, Hawaii, and Louisville, Kentucky. Furthermore, VA revised its protocol to require a review of BDD operations as part of its site visits to monitor regional offices. Although the BDD program is designed to provide most servicemembers with access, some members may be unable to initiate a claim 60 to 180 days prior to discharge or remain within the vicinity of the base long enough to complete their exams. According to VA officials, this is a particular challenge for demobilizing servicemembers of the National Guard and Reserves, who typically remain at a base for only 2 to 5 days before returning home and are generally unable in this brief time to complete requisite exams or obtain required copies of their service medical records. Servicemembers located in remote locations until just a few days prior to discharge may also be unable to participate. Finally, we were told that servicemembers going through the DOD Medical Board process are ineligible for the BDD program because they typically are not given a firm discharge date in advance of the 60- to 180-day discharge window, and a firm date is required to avoid having servicemembers return to active duty after completing the claims process. In April 2007, VA established an alternative predischarge program, now known as Quick Start, to provide members who cannot participate in the BDD program an opportunity to initiate disability claims before they are discharged. Under this program, local VA personnel typically develop servicemembers' claims as much as possible prior to discharge and then send the claims to the San Diego or Winston-Salem regional offices, which were designated as consolidated processing sites for Quick Start claims in August 2009. In addition, in 2009, VA created a predischarge Web site, which allows servicemembers to initiate either a BDD or Quick Start claim electronically, although exams must still be completed in person. We found VA lacked data to assess the extent to which servicemembers benefit from the alternative predischarge program. Specifically, we found that VA was unable to assess participation in the Quick Start program by National Guard and Reserve servicemembers because they could not be distinguished from other servicemembers.
In response to our recommendation, the agency reported that it has updated its data system to distinguish between National Guard/Reserve and full-time active duty servicemembers who file such claims. We also found that, as with BDD claims, timeliness measures for Quick Start claims do not include days spent developing the claim prior to discharge. According to VA officials, the timeliness of Quick Start claims may vary substantially from that of both BDD and traditional claims. For example, servicemembers who are on base only a few days prior to discharge, such as members of the National Guard and Reserves, may have enough time only to fill out the application before returning home and may need to schedule the VA exam necessary to fully develop their claim after discharge. Overall, this will most likely result in less timely receipt of VA disability compensation than through the BDD program, but more timely receipt than through the traditional claims process. On the other hand, servicemembers with more time before discharge may be able to complete more or all of the claim development process, including the VA exam. Because VA does not adequately track the timeliness of Quick Start claims, it may be unable to identify trends and potential challenges associated with developing and processing these claims. However, as with BDD claims, VA told us it has no plans to measure time spent developing these particular claims, and we continue to believe it should. VA and DOD have coordinated to provide servicemembers with information about the BDD program through VA benefits briefings and other initiatives, but attending these briefings is optional for most servicemembers. According to DOD and VA personnel, most servicemembers learn about the program through VA benefits briefings conducted as part of TAP sessions, although some may also learn about BDD through base television spots, papers, and word of mouth. However, the Marine Corps is the only service branch to require servicemembers to attend VA benefits briefings. For the other service branches, participation requirements may vary by base and command. We found that commanders' and supervisors' support for transition services, such as VA-sponsored benefits briefings, can vary by base. Even though DOD policy requires commanders to allow servicemembers to attend TAP sessions upon the member's request, we were told at one base that servicemembers have on occasion not been released from their duties to attend the briefings, resulting in VA personnel going up the chain of command to obtain permission for the members to attend. At two bases, VA officials considered outreach to be difficult—because of conflicting missions between VA and DOD and a lack of support from some base commanders—often resulting in servicemembers being called away from the briefings. Although some military officials recommended that servicemembers be required to attend TAP sessions, DOD decided in August 2007 not to mandate attendance but instead to establish a goal that 85 percent of separating servicemembers and demobilizing National Guard and Reserve members participate in TAP sessions, including VA benefits briefings. We recommended that DOD establish a plan with a specific time frame for meeting this goal, but DOD has not developed such a plan. We continue to believe that DOD should establish a plan for meeting its goal.
In the course of our review, we also learned that TAP participation data may be inaccurate or overstated because unique identifiers were not used to document servicemembers' attendance, and servicemembers who attend more than one briefing could be double-counted. Currently, the Department of Labor (DOL), VA, and DOD track participation in their respective TAP sessions separately. We recommended that DOD establish an accurate measure of servicemembers' participation in TAP, including VA benefits briefings. DOD recently reported it is working in collaboration with DOL and VA to determine what improvements can be made in measuring servicemembers' participation in all components of TAP.

Most BDD sites employ local MOUs to establish a cooperative exam process, and implementation of the exam process varies significantly. According to data provided by VA during our review, more than 60 percent of bases offering the BDD program had local MOUs that called for the exclusive use of VA physicians, 30 percent used VA contractors to conduct exams, and 7 percent used a sequential process involving resources and exams from both VA and DOD. At bases offering the BDD program overseas, VA exams were conducted by physicians under contract with DOD because VA does not have physicians at these bases.

At several bases we visited, we identified resource constraints and communication challenges that have affected servicemembers' access to the program. Resource challenges we identified at five bases included having no designated VA exam provider for more than 7 months, difficulties hiring physicians, and staff displaced because of construction. At seven bases, we identified communication challenges or a lack of awareness of the local cooperative exam MOU, generally caused by the deployment of a key local DOD official or changes in command leadership. In one case, DOD and VA personnel communicated inconsistently, if at all. Such constraints and challenges have caused delays in servicemembers' exams or otherwise made it difficult to meet the time frames required by the BDD program.

At the time of our review, DOD and VA had provided some guidance on implementing and maintaining local MOUs; however, personnel at some sites we visited were interested in learning about promising practices at other bases. We recommended that VA and DOD identify and disseminate information on promising practices that address challenges local officials commonly face in ensuring servicemembers have full access to a cooperative exam. DOD officials recently reported collaborating with VA on a September 2009 conference focusing on seamless transition. DOD officials planned to work with conference sponsors to identify best practices for dealing with the cooperative exam process as it relates to the challenges local personnel commonly face.

The BDD program appears to be an effective means for thousands of separating servicemembers to receive their disability benefits faster than if they had filed a claim under VA's traditional process. Despite BDD's inherent advantages, VA has not followed through on opportunities to ensure accountability and to optimize results. Similarly, although DOD and VA have made significant progress in increasing servicemembers' access to the BDD and Quick Start programs, opportunities to further ensure or improve access remain.
At a time when so many servicemembers are being discharged with injuries, it is more important than ever to process benefits as efficiently and effectively as possible. The BDD and Quick Start programs have great potential to achieve these goals, as long as VA maintains a sharp focus on accountability and both DOD and VA follow through on recommended actions. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have.
Through the Benefits Delivery at Discharge (BDD) program, the Department of Veterans Affairs (VA) collaborates with the Department of Defense (DOD) to streamline access to veterans' disability benefits by allowing some servicemembers to file a claim and undergo a single collaborative exam process prior to discharge. BDD is designed for servicemembers with conditions that, while disabling, do not generally prevent them from performing their military duties. The program can shorten the time it takes for veterans to receive benefits by several months. GAO was asked to discuss issues surrounding VA's and DOD's BDD program and the related Quick Start program, and to identify ways VA and DOD could improve these programs for transitioning servicemembers. This statement is based on GAO's September 2008 report (GAO-08-901) that examined (1) VA efforts to manage the BDD program and (2) how VA and DOD are addressing challenges servicemembers face in accessing the BDD program. GAO updated some information to reflect the current status of claims processing and improvement initiatives in the BDD program.

Although VA awards disability benefits more quickly under BDD than through its traditional disability claims process, gaps in program management and accountability remain. For example, VA does not separately measure the total time its personnel spend developing BDD claims. As a result, VA has limited information on potential problems and improvement opportunities regarding BDD claims. GAO continues to believe that VA should measure BDD development time; however, VA told GAO it has no plans to capture this information. GAO also found that VA implemented two initiatives to improve the BDD program, consolidating BDD processing in two offices and instituting paperless processing of BDD claims to increase efficiency and improve the security of information, but did not evaluate whether, or the extent to which, the desired improvements resulted. Finally, GAO found that VA was not completely or consistently monitoring BDD operations at all locations. VA has since taken steps to review BDD operations at more sites and has revised its protocols to ensure more consistent reviews of BDD operations.

VA and DOD have taken steps to improve servicemembers' access to the BDD program; however, opportunities remain for further improvement. For servicemembers, such as members of the National Guard and Reserves, who are generally unable to complete the BDD claims process within the required time frame, VA established an alternative predischarge program called Quick Start. Under this program, servicemembers may still initiate a disability application prior to discharge but can complete the claims process, including medical exams, at another location after discharge. In response to GAO's recommendation, VA has taken steps to collect additional data to determine the extent to which the Quick Start program is helping those with limited or no access to the BDD program. However, as with BDD claims, VA told GAO it has no plans to measure time spent developing these particular claims, and GAO continues to believe it should. VA and DOD have coordinated to increase BDD program awareness through VA benefits briefings for servicemembers, and DOD established a goal that 85 percent of servicemembers attend these non-mandatory briefings. GAO continues to believe that DOD should establish a plan with a specific time frame for meeting this goal, but DOD has not developed such a plan.
Finally, GAO found that some bases faced difficulties maintaining local agreements intended to prevent redundancy and inconvenience for servicemembers in obtaining required medical exams. In response to GAO's recommendation, DOD reported that it is working with VA to identify best practices to address local challenges to implementing their cooperative exam process.
On October 6, 2015, the Bureau released the first version of its 2020 Census Operational Plan, which is intended to outline the design decisions that drive how the 2020 Decennial Census will be conducted, and which are expected to dramatically change how the Bureau conducts the Decennial Census. The plan outlines 350 design decisions that the Bureau has either made or is planning to make. The Bureau has determined that about 51 percent of these decisions are either IT-related or partially IT-related (84 IT-related and 94 partially IT-related), and it reported that, as of April 2016, it had made about 58 percent of them (48 IT-related and 55 partially IT-related). Examples of decisions that have been made include the following:

Internet response—For the first time on a nationwide scale, the Bureau will allow individuals and households to respond to the census on the Internet from a computer, mobile device, or other Internet-connected device.

Non-ID processing with real-time address matching—The Bureau will provide each household with a unique ID by mail. However, users may also respond to the online survey without the unique ID by entering their address. This operation includes conducting real-time matching of respondent-provided addresses (illustrated in the sketch following this list).

Non-response follow-up—If a household does not respond to the census by a certain date, the Bureau will send employees to visit the home. These enumerators will use a census application, on a mobile device provided by the Bureau, to capture the information gathered during in-person interviews. The Bureau will also manage the case workload of these enumerators using an operational control system that automatically assigns, updates, and monitors cases during non-response follow-up.

Administrative records—As we reported in October 2015, the Bureau is working on obtaining and using administrative records from other government agencies, state and local governments, and third-party organizations to reduce the workload of enumerators in their non-response follow-up work. For example, the Bureau plans to use administrative records to, among other things, identify vacant housing units to remove from enumerators' workloads.

Mobile devices—The Bureau plans to award a contract under which a vendor would provide commercially available mobile phones, along with the accompanying service, to enumerators on the Bureau's behalf; enumerators will use these devices to collect census data. This approach is referred to as the device-as-a-service strategy.

Cloud computing—The Bureau plans to use a hybrid cloud solution where feasible, and has decided it will use cloud services for the Internet response option as well as for non-ID processing with real-time address matching.

Address canvassing—The Bureau has decided to reengineer its address canvassing process to reduce the need to employ field staff to walk every street in the nation to update its address list and maps. For example, the Bureau plans to first conduct in-office address canvassing using aerial imagery, administrative records, and commercial data before sending staff into the field.

Figure 1 provides an overview of additional decisions and assumptions for the 2020 Census resulting from the October 2015 operational plan.
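To make the non-ID address-matching decision concrete: conceptually, real-time matching is a lookup of a normalized respondent-provided address against the Bureau's master address list. The sketch below is a minimal illustration under that assumption, not a description of the Bureau's actual system; the normalization rules, sample addresses, and identifiers are all hypothetical.

```python
import re

# Hypothetical extract of a master address file mapping normalized
# addresses to census identifiers; the real list covers every housing
# unit in the nation.
MASTER_ADDRESS_FILE = {
    "123 MAIN ST APT 4 ANYTOWN MD 20901": "CENSUS-ID-0001",
    "45 OAK AVE SPRINGFIELD VA 22150": "CENSUS-ID-0002",
}

def normalize(address: str) -> str:
    """Illustrative normalization: uppercase, strip punctuation,
    collapse whitespace, and standardize a few common abbreviations."""
    address = re.sub(r"[.,#]", " ", address.upper())
    address = re.sub(r"\s+", " ", address).strip()
    abbreviations = {"STREET": "ST", "AVENUE": "AVE", "APARTMENT": "APT"}
    return " ".join(abbreviations.get(word, word) for word in address.split())

def match_respondent_address(address: str) -> str | None:
    """Return the census ID for a respondent-provided address, or None
    if no real-time match is found (which would trigger follow-up)."""
    return MASTER_ADDRESS_FILE.get(normalize(address))

print(match_respondent_address("123 Main Street, Apt. 4, Anytown MD 20901"))
# prints: CENSUS-ID-0001
```

A production matcher would need far more robust standardization and fuzzy matching; the point here is only the shape of the real-time lookup.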
The decisions made to date have been informed by several major field tests, including the 2014 Census Test, which was conducted in the Maryland and Washington, D.C., areas to test new methods for conducting self-response and non-response follow-up; the 2015 Census Test in Arizona, which tested, among other things, the use of a field operations management system to automate data collection operations and provide real-time data, the ability to reduce the non-response follow-up workload using data previously provided to the government, and the use of enumerators' personally owned mobile devices to collect census data; and the 2015 Optimizing Self-Response Test in Savannah, Georgia, and the surrounding area, which was intended to explore methods of encouraging households to respond using the Internet, such as using advertising and outreach to motivate respondents and enabling households to respond without a Bureau-issued identification number.

The following are examples of decisions that had not been finalized as of April 2016:

Invalid return detection and non-ID response validation—The Bureau has not decided on its approach for identifying whether fraudulent returns have been submitted for the 2020 Census, or on the criteria and thresholds for deciding whether further investigation, such as field follow-up, may be needed.

Solutions architecture—While the Bureau has established a notional solutions architecture for the 2020 Census, it has not decided on the final design.

Internet response for island areas—The Bureau has not decided on the extent to which the Internet self-response option will be available for island area respondents.

Additional uses of cloud—While Bureau officials have decided on select uses of cloud-based solutions, decisions remain on additional possible uses. For example, the Bureau is exploring whether it will use a cloud service provider to support a tool for assigning, controlling, tracking, and managing enumerators' caseloads in the field.

Several of the key systems needed to support the 2020 Census redesign are expected to be provided as CEDCAP enterprise systems under the purview of the Bureau's IT Directorate. According to Bureau officials, the remaining systems (referred to as non-CEDCAP systems) are to be provided by the 2020 Census Directorate's IT Division or other Bureau divisions. The two programs are mutually dependent: CEDCAP relies on the 2020 Census program to be one of the biggest consumers of its enterprise systems, and the 2020 Census program relies heavily on CEDCAP to deliver key systems supporting its redesign. Thus, CEDCAP is integral to helping the 2020 Census program achieve its estimated $5.2 billion cost-savings goal. Accordingly, as reported in the President's Budget for Fiscal Year 2017, over 50 percent of CEDCAP's funding for fiscal year 2017 ($57.5 million of the requested $104 million) is expected to come from the 2020 Census program. The CEDCAP program, which began in October 2014, is intended to provide data collection and processing solutions (including systems, interfaces, platforms, and environments) to support the Bureau's entire survey life cycle, including survey design; instrument development; sample design and implementation; data collection; and data editing, imputation, and estimation.
The program consists of 12 projects, which have the potential to offer numerous benefits to the Bureau's survey programs, including the 2020 Census program, such as enabling an Internet response option; automating the assignment, control, and tracking of enumerator caseloads; and enabling a mobile data collection tool for field work. Eleven of these projects are intended to deliver one or more IT solutions. The twelfth project, IT Infrastructure Scale-Up, is not intended to deliver IT capabilities, solutions, or infrastructure; rather, it is expected to provide funding to the other relevant projects to acquire the hardware and infrastructure necessary to enable 2020 Census systems to scale to accommodate the volume of users. Table 1 describes the objectives of each project.

The 11 projects are to provide functionality incrementally over the course of 13 product releases. The product releases are intended to support major tests and surveys at the Bureau through 2020. Of the 13 product releases, 7 are intended to support the 6 remaining major tests the 2020 Census program is conducting as it prepares for the 2020 Census, as well as 2020 Census live production. The remaining 6 releases support other surveys, such as the American Community Survey (ACS) and the Economic Census. Most recently, the CEDCAP program has been working on delivering the functionality needed for the third product release, which is to support a major census test, referred to as the 2016 Census Test, conducted by the 2020 Census program to inform additional decennial design decisions. The 2018 Census end-to-end test (mentioned previously) is critical to testing all production-level systems and operations in a census-like environment to ensure readiness for the 2020 Census. The 2020 Census program plans to begin this test in August 2017. Figure 2 identifies which of the 13 CEDCAP product releases support the 2020 Census versus other surveys, as of May 2016.

The Bureau's past efforts to implement new approaches and systems have not always gone as planned. As one example, during the 2010 Census, the Bureau planned to use handheld mobile devices to support field data collection, including following up with nonrespondents. However, due to significant problems identified during testing of the devices, cost overruns, and schedule slippages, the Bureau decided not to use the handheld devices for non-response follow-up and reverted to paper-based processing, which increased the cost of the 2010 Census by up to $3 billion and significantly increased risk, as the Bureau had to fall back to paper-based operations. Due in part to these technology issues, we designated the 2010 Census a high-risk area in March 2008. We have also reported, on numerous occasions, concerns about the Bureau's IT internal controls, its IT preparations for the 2020 Census, and its looming deadline. Accordingly, we identified CEDCAP as an IT investment in need of attention in our February 2015 High-Risk report. Further, we testified in November 2015 that key IT decisions needed to be made soon because the Bureau was less than 2 years away from end-to-end testing of all systems and operations to ensure readiness for the 2020 Census, and there was limited time to implement them.
We emphasized that the Bureau had deferred key IT-related decisions and that it was running out of time to develop, acquire, and implement the systems it will need to deliver the redesign and achieve its projected $5.2 billion in cost savings. In addition to the IT issues I am testifying on today, there are other risks and uncertainties facing a successful headcount that we are monitoring at the request of Congress. For example, in October 2015, we reported on actions the Bureau needs to take to ensure it fully realizes the potential cost savings associated with its planned use of administrative records. Likewise, we are assessing the reliability of the Bureau's estimate of the cost of the 2020 Census and anticipate issuing that report to Congress later this month. We also have ongoing work evaluating the 2016 Census Test, which is currently taking place in Harris County, Texas, and Los Angeles County, California.

As part of our ongoing work, we determined that the 12 CEDCAP projects are at varying stages of planning and design. Nine of the projects began when the program was initiated in October 2014, two began later, in June 2015, and the twelfth project, IT Infrastructure Scale-Up, has not started. The 11 ongoing projects have efforts under way to deliver 17 solutions, which are in different phases of planning and design. For 8 of the 17 solutions, the Bureau recently completed an analysis of alternatives to determine whether it would acquire commercial off-the-shelf (COTS) solutions or build them in-house to deliver the needed capabilities. On May 25, 2016, the Bureau issued a memorandum documenting its decision to acquire these capabilities using a COTS product. The memorandum also described the process used to select the commercial vendor. For the remaining 9 IT solutions, the Bureau has identified the sourcing approach (e.g., buy, build, or use/modify an existing system) and has either identified the solution to be implemented or is in the process of evaluating potential solutions. For example, the Electronic Correspondence Portal project is working on combining an existing government-off-the-shelf product with an existing COTS product. All projects are scheduled to end by September 2020.

In 2013, the CEDCAP program office estimated that the program would cost about $548 million to deliver its projects from 2015 to 2020. In July 2015, the Bureau's Office of Cost Estimation, Analysis, and Assessment completed an independent cost estimate for CEDCAP that projected the program to cost about $1.14 billion from 2015 to 2020 ($1.26 billion through 2024). Bureau officials reported that, as of March 2016, the projects had collectively spent approximately $92.1 million, or 17 percent of the program office estimate and 8 percent of the independent cost estimate. According to Bureau officials, the program used the 2013 program cost estimate to establish its current budget and to track project costs.

We determined that the three selected CEDCAP projects we reviewed—the Centralized Operational Analysis and Control project, the Internet and Mobile Data Collection project, and the Survey (and Listing) Interview Operational Control project—did not fully implement best practices for project monitoring and control, which are critical for making sure that projects are meeting their goals and that action can be taken to correct problems in a timely fashion.

Determining progress against the plan.
This involves comparing actual cost and schedule against the documented plan for the full scope of the project and communicating the results. While the three projects meet weekly to monitor the current status of each project and produce monthly reports that document cost and schedule progress, their plans did not include sufficient detail against which to monitor progress. For example, project planning documents for the three projects did not include key information, such as when build-or-buy decisions were to be made or when final systems are to be released. This is especially problematic given that the production systems these projects are expected to deliver must be in place in time for the 2018 end-to-end system integration test, which begins in August 2017, in less than a year and a half. Bureau officials agreed with our concerns and in June 2016 stated that they were updating the project plans and expected to finish by August 2016. It will be important that these plans include the full scope of these projects to enable the project managers and the CEDCAP program manager to determine progress relative to the full scope of the projects.

Documenting significant deviations in performance. Projects should identify and document when deviations from planned cost and schedule occur that, if left unresolved, would preclude the project from meeting its objectives. The Bureau's monthly progress reports capture schedule and cost variances and document when these variances exceed the threshold for significant deviation, which is 8 percent (a minimal sketch of this check appears after the discussion of corrective actions below). For example, the Internet and Mobile Data Collection project had a cost variance of 20 percent in September 2015 and the Survey (and Listing) Interview Operational Control project had a cost variance of 25 percent in September 2015, both of which the projects flagged as exceeding the significant-deviation threshold. However, the projects are measuring deviations against their budgeted amounts, which are based on the 2013 CEDCAP program office cost estimate. That estimate was developed based on very early assumptions and limited details about the program and is thus out of date. In the absence of an up-to-date cost estimate, the program lacks a basis for monitoring true deviations in performance. Accordingly, our draft report includes a recommendation that the Bureau update the CEDCAP program office cost estimate to reflect the current status of the program as soon as appropriate information becomes available.

Taking corrective actions to address issues when necessary. Projects should take timely corrective actions, such as revising the original plan, establishing new agreements, or including additional mitigation activities in the current plan, when cost or schedule deviates significantly from the plan. The CEDCAP program has established a process for taking corrective actions when needed, and, as of April 2016, Bureau officials stated they had not needed to take any corrective actions to address CEDCAP program issues. While we found several significant deviations in cost and schedule for the three projects in the monthly progress reports, these did not require corrective actions because they stemmed from delays in contract payments, contract awards, and other obligations for hardware and software outside the control of the CEDCAP program office.
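As an illustration of the significant-deviation check referenced above, a monthly report in essence compares actual spending with the budgeted baseline and flags variances beyond the 8 percent threshold. The sketch below is illustrative only: the budget figures are hypothetical, although the 20 and 25 percent variances mirror those the projects reported for September 2015.

```python
SIGNIFICANT_DEVIATION = 0.08  # the Bureau's 8 percent threshold

def cost_variance(budgeted: float, actual: float) -> float:
    """Cost variance as a fraction of the budget (positive = over budget)."""
    return (actual - budgeted) / budgeted

def report(project: str, budgeted: float, actual: float) -> None:
    """Print the variance, flagging it when it exceeds the threshold."""
    variance = cost_variance(budgeted, actual)
    flag = " <- SIGNIFICANT DEVIATION" if abs(variance) > SIGNIFICANT_DEVIATION else ""
    print(f"{project}: {variance:+.0%}{flag}")

# Hypothetical monthly figures, in millions of dollars.
report("Internet and Mobile Data Collection", budgeted=10.0, actual=12.0)
report("Survey (and Listing) Interview Operational Control", budgeted=8.0, actual=10.0)
report("Centralized Operational Analysis and Control", budgeted=5.0, actual=5.2)
```

The sketch also makes the underlying point concrete: the flag is only as meaningful as the budgeted baseline it is measured against, which is why an out-of-date 2013 estimate undermines the check.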
Monitoring the status of risks periodically. This practice can result in the discovery of new risks, revisions to existing risks, or the need to implement a risk mitigation plan. The three projects monitor the status of their risks in biweekly project status meetings and monthly risk review board meetings, have established risk registers, and regularly update the status of risks in their registers. However, although Bureau officials said the projects are to document status updates in their respective risk registers, the Internet and Mobile Data Collection and Survey (and Listing) Interview Operational Control projects did not consistently do so. For example, these projects had not updated the status of medium-probability, medium-impact risks for several months. Bureau officials recognized the need to document updates in the risk registers more consistently and stated that efforts are under way to address this, but they did not have an estimated completion date. Until these efforts are complete, the Bureau will not have comprehensive information on how risks are being managed. Accordingly, our draft report includes a recommendation that the Bureau ensure that updates to the status of risks are consistently documented for CEDCAP's Internet and Mobile Data Collection and Survey (and Listing) Interview Operational Control projects.

Implementing risk mitigation plans. Risk mitigation plans that include sufficient detail, such as start and completion dates and trigger events and dates, provide early warning that a risk is about to occur or has just occurred and are valuable in assessing risk urgency. As of October 2015, the three projects had developed basic risk mitigation steps for each risk that required a mitigation plan. However, these plans lacked important details such as start or completion dates. Additionally, two projects did not have any trigger events defined for risks that exceeded a predefined exposure threshold. Bureau officials recognized these issues and told us they had revised their risk management process to address them, but it was unclear to what extent the revised process has been implemented. Without detailed risk mitigation plans and trigger events, officials will be hindered in their ability to identify potential problems and mitigate their impacts. Therefore, our draft report includes a recommendation that the Bureau consistently implement detailed risk mitigation plans for the three projects.
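To picture what sufficient detail means for a mitigation plan, the sketch below shows the fields such a register entry would carry, including the start and completion dates and trigger events the projects' plans lacked. This is a minimal illustration; the risk, dates, and trigger shown are hypothetical, not drawn from the projects' actual registers.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MitigationPlan:
    steps: list[str]        # concrete mitigation activities
    start: date             # when mitigation work begins
    completion: date        # when mitigation should be finished
    trigger_event: str      # early warning that the risk is occurring
    trigger_date: date      # date by which the trigger is evaluated

@dataclass
class Risk:
    title: str
    probability: str        # e.g., "low", "medium", "high"
    impact: str
    plan: MitigationPlan

    def is_triggered(self, today: date) -> bool:
        """A dated trigger lets officials assess risk urgency."""
        return today >= self.plan.trigger_date

# Hypothetical register entry for illustration only.
risk = Risk(
    title="Systems cannot scale to decennial volumes",
    probability="medium",
    impact="high",
    plan=MitigationPlan(
        steps=["Complete load test", "Provision additional capacity"],
        start=date(2016, 7, 1),
        completion=date(2016, 12, 1),
        trigger_event="Load test exceeds response-time limit",
        trigger_date=date(2016, 9, 1),
    ),
)
print(risk.is_triggered(date(2016, 10, 1)))  # True -> time to escalate
```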
Despite significant interdependencies between the CEDCAP and 2020 Census programs, our ongoing audit work determined that the Bureau is not effectively managing these interdependencies. About half of CEDCAP's major product releases (7 of 13) are to align with and support the remaining 6 major 2020 Census tests, as well as the operations of the 2020 Census itself. Accordingly, the CEDCAP and 2020 Census programs have both established master schedules, containing thousands of milestones and tens of thousands of activities through 2020 Census production, and have identified major milestones within each program that are intended to align with each other. In addition, both program management offices have established processes for managing their respective master schedules.

However, the CEDCAP and 2020 Census programs maintain their master schedules in different software in which dependencies between the two programs are not automatically linked and thus are not dynamically responsive to change, as called for by best practices identified in our Schedule Assessment Guide. Consequently, the two programs have been manually identifying activities within their master schedules that are dependent on each other, and rather than establishing one dependency schedule, as best practices dictate, each program has developed its own dependency schedule, and the programs meet weekly with the intent of coordinating the two. Our schedule guide also indicates that constantly updating a schedule manually defeats the purpose of a dynamic schedule and can make the schedule particularly prone to error. In addition, the programs' dependency schedules include only near-term schedule dependencies, not future milestones through 2020 Census production. For example, as of February 2016, the dependency schedules included only tasks associated with the CEDCAP product release supporting the 2020 Census program's 2016 Census Test through July 2016. According to Bureau officials, they are currently working to incorporate activities for the next set of near-term milestones, which are to support the 2016 Address Canvassing Test.

This practice of maintaining separate dependency schedules that must be manually reconciled has proven ineffective, as it has contributed to misalignment between the programs' schedules. For example:

The CEDCAP program originally planned to complete build-or-buy decisions for several capabilities by October 2016, while the 2020 Census timeline specified that these decisions would be ready by June 2016. In November 2015, CEDCAP officials stated that they recognized this misalignment and decided to accelerate certain build-or-buy decisions to align with 2020 Census needs.

As of April 2016, while CEDCAP's major product releases need to be developed and deployed to support the delivery of the 2020 Census program's major tests, CEDCAP's release milestones and the 2020 Census program's major test milestones were not always aligned to ensure that CEDCAP releases would be available in time. For example, development of the seventh CEDCAP release, which is intended to support the 2017 Census Test, is not scheduled to begin until almost a month after the 2017 Census Test is expected to begin (December 2016), and is not planned to be completed until about 2 months after the 2017 Census Test ends (July 2017). Bureau officials acknowledged that CEDCAP release dates need to be revised to accurately reflect the program's current planned time frames and to align with 2020 Census time frames. Officials stated that these changes would be made by the end of May 2016.

Adding to the complexity of coordinating the two programs' schedules, several key decisions by the 2020 Census program are not planned to be made until later in the decade, as we testified in November 2015. This may impact CEDCAP's ability to deliver those future requirements and have production-ready systems in place in time to conduct end-to-end testing, which is to begin in August 2017. For example, the Bureau does not plan to decide on the full complement of applications, data, infrastructure, security, monitoring, and service management for the 2020 Census—referred to as the solutions architecture—until September 2016.
The Bureau also does not plan to finalize the expected response rates for all self-response modes, including how many households it estimates will respond to the 2020 survey using the Internet, telephone, and paper, until October 2017. Figure 3 illustrates several IT-related decisions that are not scheduled to be made until later in the decade and that may impact CEDCAP's ability to prepare for the end-to-end test and the 2020 Census.

Further exacerbating these difficulties, as of April 2016 (a year and a half into the CEDCAP program), the programs had not documented their process for managing these dependencies, contrary to our schedule guide, which indicates that if manual schedule reconciliation cannot be avoided, the parties should define a process to preserve integrity between the different schedule formats and to verify and validate the converted data whenever the schedules are updated. Program officials stated that they aim to document this process by June 2016, but this would at best document a process that has not been effective, likely leading to additional misalignment in the future. We concluded in our draft report that without an effective process for ensuring alignment between the two programs, the Bureau faces increased risk that capabilities for carrying out the 2020 Census will not be delivered as intended. Thus, our draft report (which is with Commerce and the Bureau for comment) includes a recommendation that the Bureau define, document, and implement a repeatable process to establish complete alignment between the CEDCAP and 2020 Census programs by, for example, maintaining a single dependency schedule.
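A single dependency schedule of the kind recommended would link each CEDCAP release directly to the 2020 Census milestone that depends on it, so that misalignment surfaces automatically on every update rather than through manual reconciliation. The minimal sketch below uses the release 7 example described above; the exact dates are approximations for illustration only.

```python
from datetime import date

# One schedule holding milestones from both programs; dates approximate
# the release 7 misalignment described in this statement.
milestones = {
    "CEDCAP release 7 completed": date(2017, 7, 31),
    "2017 Census Test begins": date(2016, 12, 1),
}

# Each dependency: the first milestone must finish before the second starts.
dependencies = [
    ("CEDCAP release 7 completed", "2017 Census Test begins"),
]

def check_alignment() -> None:
    """Flag any dependent milestone scheduled before its predecessor
    finishes; in a linked schedule this check runs on every update."""
    for predecessor, successor in dependencies:
        if milestones[predecessor] > milestones[successor]:
            print(f"MISALIGNED: '{successor}' ({milestones[successor]}) "
                  f"precedes '{predecessor}' ({milestones[predecessor]})")

check_alignment()
```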
The CEDCAP and 2020 Census programs were also not effectively managing risks common to the two programs. Both programs have taken steps to collaborate on identifying and mitigating risks. For example, both have processes in place for identifying and mitigating risks that affect their respective programs, facilitate risk review boards, and have representatives attend each other's risk review board meetings to help promote consistency. However, our preliminary findings indicate that the programs do not have an integrated list of risks (referred to as a risk register) with agreed-upon roles and responsibilities for tracking them, as called for by best practices identified by GAO for collaboration and by leading practices in risk management. This decentralized approach introduces two key problems.

First, there are inconsistencies in tracking and managing interdependent risks. Specifically, selected risks were recognized by one program's risk management process and not the other, including the following examples as of March 2016:

The CEDCAP program identified the lack of real-time schedule linkages as a high-probability, high-impact risk in its risk register, which as of March 2016 had been realized and was considered an issue for the program. However, the 2020 Census program had not recognized this as a risk in its risk register.

While CEDCAP had identified the ability of systems to scale to meet the needs of the Decennial Census as a medium-probability, high-impact risk in its risk register, the 2020 Census program had not recognized this as a risk in its risk register.

The CEDCAP program had identified the need to define how the Bureau will manage and use cloud services, to ensure successful integration of cloud services with existing infrastructure, as a low-probability, high-impact risk in its risk register; however, the 2020 Census program had not recognized the adoption of cloud services as a formal risk in its risk register. This is especially problematic because the 2020 Census program recently experienced a notable setback regarding cloud implementation. Specifically, the 2020 Census program was originally planning to use a commercial cloud environment in the 2016 Census Test, which would have been the first time the Bureau used a cloud service in a major census test to collect census data from residents in parts of the country. However, leading up to the 2016 Census Test, the program experienced stability issues with the cloud environment. Accordingly, in March 2016, the 2020 Census program decided to cancel its plans to use the cloud environment in the 2016 Census Test. Officials stated that they plan to use the cloud in future census tests.

According to 2020 Census program officials, they did not consider the lack of real-time schedule linkages to be a risk because they were conducting weekly integration meetings and coordinating with CEDCAP on their schedules to ensure proper alignment. However, manually resolving incompatible schedules in different software can be time-consuming, expensive, and prone to error. And, as noted above, the Bureau's process for managing schedule dependencies between the two programs has not been effective. Regarding the absence of the scalability and cloud services risks in the 2020 Census risk log, 2020 Census program officials acknowledged that the omission was an oversight and that both should have been recognized by the program as formal risks.

The second problem with not having an integrated risk register is that tracking risks in two different registers can result in redundant efforts and potentially conflicting mitigation efforts. For example, both programs have identified in their separate risk registers several common risks, such as risks related to late changes in requirements, integration of systems, human resources, build-or-buy decisions, and cybersecurity. These interdependent risks, found in both risk registers, introduce the potential for duplicative or inefficient mitigation efforts and the need for additional reconciliation. Thus, we concluded in our draft report that until the Bureau establishes a comprehensive list of risks facing both the CEDCAP and 2020 Census programs, and the programs agree on their respective roles and responsibilities for jointly managing this list, the Bureau is in danger of not fully addressing the risks facing the programs. Accordingly, in our draft report we include a recommendation that the Bureau establish a comprehensive and integrated list of all interdependent risks facing the CEDCAP and 2020 Census programs, and clearly identify roles and responsibilities for managing this list.
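The integrated-register recommendation can be pictured as merging the two registers and flagging interdependent risks that only one program tracks, such as the schedule-linkage, scalability, and cloud examples above. The sketch below is a minimal illustration, with risk titles paraphrased from this statement:

```python
# Risks as tracked by each program, paraphrased from the March 2016
# examples in this statement; the sets are not complete registers.
cedcap_register = {
    "Lack of real-time schedule linkages",
    "Systems cannot scale to decennial volumes",
    "Cloud services integration undefined",
    "Late changes in requirements",
}
census_2020_register = {
    "Late changes in requirements",
}

def flag_one_sided_risks() -> None:
    """Merge the registers and flag risks only one program tracks,
    which a decentralized approach lets fall through the cracks."""
    registers = {"CEDCAP": cedcap_register, "2020 Census": census_2020_register}
    for risk in sorted(cedcap_register | census_2020_register):
        owners = [name for name, reg in registers.items() if risk in reg]
        if len(owners) == 1:
            print(f"One-sided risk (tracked only by {owners[0]}): {risk}")

flag_one_sided_risks()
```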
Lastly, despite their significant interdependencies, a process for managing requirements for the two programs has not been finalized. The Bureau's Office of Innovation and Implementation is responsible for gathering and synthesizing business requirements across the Bureau, including from the 2020 Census program, and delivering them to CEDCAP. Additionally, for the 2020 Census program, the Bureau established the 2020 Census Systems Engineering and Integration program office, which is responsible for delivering 2020 Census business requirements to the Office of Innovation and Implementation. CEDCAP receives the requirements on an incremental basis and builds functionality containing subsets of the requirements in 40-day cycles. However, as of April 2016, the Office of Innovation and Implementation's process for collecting and synthesizing requirements, obtaining commitment to those requirements from stakeholders, and managing changes to the requirements—as recommended by best practices—had not been finalized. According to Bureau officials, they have drafted the process and are working on incorporating feedback from customers. Office officials stated that they plan to finalize this documentation by June 2016. Additionally, as of April 2016, the 2020 Census Systems Engineering and Integration program had not yet finalized its program management plan, which outlines, among other things, how it is to establish the requirements to be delivered to the Office of Innovation and Implementation, which are then to be delivered to CEDCAP. According to program officials, they have been working on a draft of this plan and expect it to be finalized by June 2016. As a result, the Bureau has developed three CEDCAP releases without a fully documented and institutionalized process for collecting those requirements.

In addition, the 2020 Census program identified about 2,500 capability requirements needed for the 2020 Census; however, there are gaps in these requirements. Specifically, we determined that of the 2,500 capability requirements, 86 should have been assigned to a test prior to the 2020 Census but were not. These included 64 requirements related to the redistricting data program, 10 related to data products and dissemination, and 12 related to non-ID response validation. Bureau officials stated that the 74 redistricting data program and data products and dissemination requirements have not yet been assigned to a census test because they have not yet gone through the Bureau's quality control process, which is planned for later this calendar year. Regarding the 12 non-ID response validation requirements, Bureau officials stated that once this area is better understood, a more complete set of requirements will be established, and they will then assign the requirements to particular tests, as appropriate. As of April 2016, the Bureau was in the early stages of conducting research in this area. Thus, it has not tested non-ID response validation in the 2013, 2014, or 2015 Census tests, which were intended to, among other things, help define requirements around critical functions. With less than a year and a half remaining before the 2018 Census end-to-end test begins, the lack of experience and of specific requirements related to non-ID response validation is especially concerning, as incomplete and late definition of requirements proved to be serious issues for the 2010 Census.

Failure to fully define requirements has been a problem for the Bureau in the past. Specifically, leading up to the 2010 Census, we reported in October 2007 that not fully defining requirements had contributed to both the cost increases and the schedule delays experienced by the failed program to deliver handheld computers for field data collection, a failure that ultimately added up to $3 billion to the cost of the census.
Increases in the number of requirements led to the need for additional work and staffing. Moreover, we reported in 2009 and 2010 that the Bureau's late development of an operational control system to manage its paper-based census collection operations resulted in system outages and slow performance during the 2010 Census. The Bureau attributed these issues, in part, to the compressed development and testing schedule. As the 2020 Census program continues to make design decisions and CEDCAP continues to deliver incremental functionality, it is critical to have a fully documented and institutionalized process for managing requirements. Additionally, we concluded in our draft report that until measures are taken to identify when the 74 requirements related to the redistricting data program and data products and dissemination will be tested, and to make developing a better understanding of, and identifying requirements related to, non-ID response validation a high and immediate priority, or to consider alternatives that avoid late definition of such requirements, the Bureau risks repeating the problems it experienced during the 2010 Census. Thus, our draft report includes the following recommendations: finalize documentation of processes for managing requirements for CEDCAP; identify when the 74 requirements related to the redistricting data program and data products and dissemination will be tested; and make developing a better understanding of, and identifying requirements related to, non-ID response validation a high and immediate priority, or consider alternatives to avoid late definition of such requirements.

While the Bureau plans to use IT systems extensively to support the 2020 Census redesign in an effort to realize potentially significant efficiency gains and cost savings, the redesign introduces the following critical information security challenges.

Developing policies and procedures to minimize the threat of phishing—Phishing is a digital form of social engineering that uses authentic-looking, but fake, e-mails, websites, or instant messages to get users to download malware, open malicious attachments, or open links that direct them to a website that requests information or executes malicious code. Phishing attacks could target respondents as well as Census employees and contractors. The 2020 Census will be the first in which respondents are heavily encouraged to respond via the Internet. The Bureau plans to promote the Internet self-response option heavily throughout the nation and expects, based on preliminary research, that approximately 50 percent of U.S. households will use this option. This will likely increase the risk that cyber criminals will use phishing in an attempt to steal personal information. A report developed by a contractor for the Bureau noted that criminals may pose as a census worker, caller, or website to phish for personal information such as Social Security numbers and bank information. Further, phishing attacks directed at Census employees, including approximately 300,000 temporary employees, could have serious effects. The U.S. Computer Emergency Readiness Team (US-CERT) has recently reported on phishing campaigns targeting federal government agencies that are intended to install malware on government computer systems. Such malware could act as an entry point for attackers to spread throughout an organization's entire enterprise, steal sensitive personal information, or disrupt business operations.
To minimize the threat of phishing, US-CERT and the National Institute of Standards and Technology (NIST) recommend several actions for organizations, including communicating with users. Additionally, as we previously reported, in 2015 the White House and the Office of Management and Budget identified anti-phishing as a key area for federal agencies to focus on in enhancing their information security practices.
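Communicating with users can be paired with simple automated screening. As one illustration (and not a description of any Bureau system), the sketch below flags sender domains that closely resemble, but do not match, a legitimate domain, a common lookalike pattern in phishing; the suspicious domains shown are hypothetical.

```python
import difflib

LEGITIMATE_DOMAINS = {"census.gov"}

def looks_like_phishing(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that are near-matches to a legitimate domain but not
    identical, a common lookalike (typosquatting) pattern."""
    if sender_domain in LEGITIMATE_DOMAINS:
        return False
    return any(
        difflib.SequenceMatcher(None, sender_domain, real).ratio() >= threshold
        for real in LEGITIMATE_DOMAINS
    )

# Hypothetical sender domains for illustration.
for domain in ["census.gov", "censvs.gov", "census.g0v", "example.org"]:
    print(domain, "->", "suspicious" if looks_like_phishing(domain) else "ok")
```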
Ensuring that individuals gain only limited and appropriate access to 2020 Census data—For the Decennial Census, the Bureau plans to use a public-facing website and mobile devices to collect personally identifiable information (PII), such as name, address, and date of birth, from the nation's entire population, estimated at over 300 million people. In addition, the Bureau is planning to obtain and store administrative records containing PII from other government agencies to help augment information that enumerators did not collect. The 2020 Census will also be highly promoted and visible throughout the nation, which could increase its appeal to malicious actors. Specifically, cyber criminals may attempt to steal personal information collected during and for the 2020 Decennial Census through techniques such as social engineering, sniffing of unprotected traffic, and malware installed on vulnerable machines. We have reported on the challenges that advances in technology pose to the federal government and the private sector in ensuring the privacy of personal information. For example, in our 2015 High Risk List, we expanded one of our high-risk areas—ensuring the security of federal information systems and cyber critical infrastructure—to include protecting the privacy of PII. Technological advances have allowed both government and private sector entities to collect and process extensive amounts of PII more effectively. However, the number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. Because of these challenges, we have recommended, among other things, that federal agencies improve their response to information security incidents and data breaches involving PII, and consistently develop and implement privacy policies and procedures. Accordingly, it will be important for the Bureau to ensure that only respondents and Bureau officials are able to gain access to this information and that enumerators and other employees have access only to the information needed to perform their jobs.

Adequately protecting mobile devices—The 2020 Census will be the first in which the Census Bureau provides mobile devices to enumerators to collect personally identifiable information from households that did not self-respond to the survey. The Bureau plans to use a contractor to provide approximately 300,000 census-taking-ready mobile devices to enumerators. The contractor will be responsible for, among other things, the provisioning, shipping, storage, and decommissioning of the devices. The enumerators will use the mobile devices to collect data during non-response follow-up. Many threats to mobile devices are similar to those for traditional computing devices; however, threats to and attacks on mobile devices are facilitated by vulnerabilities in their design and configuration, as well as in the ways consumers use them. Common vulnerabilities include a failure to enable password protection and operating systems that are not kept up to date with the latest security patches. In addition, because of their small size and use outside an office setting, mobile devices are easier to misplace or steal, leaving their sensitive information at risk of unauthorized use or theft. In 2012, we reported on key security controls and practices to reduce vulnerabilities in mobile devices, protect proprietary and other confidential business data that could be stolen from mobile devices, and ensure that mobile devices connected to the organization's network do not threaten the security of the network itself. For example, we reported that organizations can require that devices meet government specifications before they are deployed, limit storage on mobile devices, and ensure that all data on a device are cleared before the device is disposed of. Doing so can help protect against inappropriate disclosure of the sensitive information collected on the mobile devices. Accordingly, we recommended, among other things, that the Department of Homeland Security, in collaboration with the Department of Commerce, establish measures related to consumer awareness of mobile security. In September 2013, the Department of Homeland Security addressed this recommendation by developing a public awareness campaign with performance measures related to mobile security.

Ensuring adequate control in a cloud environment—The Bureau has decided to use cloud solutions whenever possible for the 2020 Census; however, as stated previously, it has not yet determined all of the needed cloud capabilities. In September 2014, we reported that cloud computing has both positive and negative information security implications for federal agencies. Potential information security benefits include the use of automation to expedite the implementation of secure configurations on devices, a reduced need to carry data on removable media because of broad network access, and low-cost disaster recovery and data storage. However, the use of cloud computing can also create numerous information security risks for federal agencies, including that cloud service vendors may not be familiar with security requirements that are unique to government agencies, such as continuous monitoring and maintaining an inventory of systems. Thus, we reported that, to reduce these risks, it is important for federal agencies to examine the specific security controls of the provider being evaluated when considering the use of cloud computing. In addition, in April 2016, we reported that agencies should develop service-level agreements with cloud providers that specify, among other things, the security performance requirements—including data reliability, preservation, privacy, and access rights—that the service provider is to meet. Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructure they support, may be lost; information, including sensitive personal information, may be compromised; and the agency's operations could be disrupted.

Adequately considering information security when making decisions about the IT solutions and infrastructure supporting the 2020 Census—Design decisions related to the 2020 Census will have security implications that must be weighed as future design features are decided. As described previously, as of April 2016, the Bureau had yet to finalize many of the roughly 350 design decisions for the 2020 Census, about half of which are at least partially IT-related.
For example, the Bureau has not yet made decisions about key aspects of the IT infrastructure to be used for the 2020 Census, including defining all of the components of the solutions architecture (applications, data, infrastructure, security, monitoring, and service management), deciding whether it will develop a mobile application to enable respondents to submit their survey responses on their mobile devices, and deciding how it plans to use cloud providers. We have previously reported on challenges the Bureau has had in making decisions in a timely manner. Specifically, in April 2014, and again in April 2015, we noted that key decisions about the 2020 Census had yet to be made and that, as momentum builds toward Census Day 2020, the margin for schedule slippage grows increasingly slim. The Chief Information Security Officer echoed these concerns, stating that any schedule slippage can affect the time needed to conduct a comprehensive security assessment. As key design decisions are deferred and the time to make them becomes more compressed, it is important that the Bureau ensure that information security is adequately considered and assessed when making design decisions about the IT solutions and infrastructure to be used for the 2020 Census.

Making certain key IT positions are filled and have appropriate information security knowledge and expertise—As our prior work and leading guidance recognize, having the right knowledge and skills is critical to the success of a program, and mission-critical skills gaps in occupations such as cybersecurity pose a high risk to the nation. Whether within specific federal agencies or across the federal workforce, these skills gaps impede federal agencies from cost-effectively serving the public and achieving results. Because of this, we added strategic human capital management, including cybersecurity human capital, to our High Risk List in 2001, and it remains on that list today. These skills gaps are also a key contributing factor to our high-risk area of ensuring the security of federal information systems. As we reported in February 2015, although steps have been taken to close critical skills gaps in the cybersecurity area, this remains an ongoing problem, and additional efforts are needed to address it government-wide. We also reported in February 2015 that the Bureau continues to have critical skills gaps, such as in cloud computing, security integration and engineering, the enterprise/mission engineering life cycle, requirements development, and Internet data collection. The Bureau has made some progress in addressing its skills gaps and continues to work toward ensuring that key information security skills are in place. However, the Bureau has faced longstanding vacancies in key IT positions, such as the Chief Information Officer (vacant from July 2015 to June 2016) and the CEDCAP Chief Security Engineer (vacant since October 2015). Ensuring that key positions are filled with staff who have the appropriate expertise will be important to ensure that security controls are adequately designed into the systems used to collect and store census data.

Ensuring that contingency and incident response plans are in place that encompass all of the IT systems to be used to support the 2020 Census—Because of the brief time frame for collecting data during the Decennial Census, it is especially important that systems be available for respondents to ensure a high response rate.
Contingency planning and incident response help ensure that, if normal operations are interrupted, network managers are able to detect, mitigate, and recover from a service disruption while preserving access to vital information. Implementing important security controls, including policies, procedures, and techniques for contingency planning and incident response, helps to ensure the confidentiality, integrity, and availability of information and systems, even during disruptions of service. However, we have reported on weaknesses across the federal government in these areas. Specifically, in April 2014 we estimated that, in about 65 percent of cases, federal agencies (including the Department of Commerce) had not completely documented the actions taken in response to incidents detected and reported in fiscal year 2012. We made a number of recommendations to improve agencies' cyber incident response practices, such as developing incident response plans and procedures and testing them.

Adequately training Bureau employees, including its massive temporary workforce, in information security awareness—The Census Bureau plans to hire about 300,000 temporary employees for 2020 Census activities to, among other things, use contractor-furnished mobile devices to collect personal information from households that have not yet responded to the Census. Because uninformed people can be one of the weakest links when securing systems and networks, information security awareness training is intended to inform agency personnel of the information security risks associated with their activities and of their responsibilities in complying with agency policies and procedures designed to reduce these risks. However, ensuring that every one of the approximately 300,000 temporary enumerators is sufficiently trained in information security will be challenging. Providing training to agency personnel, such as this new and temporary staff, will be critical to securing information and systems.

Making certain security assessments are completed in a timely manner and that risks are at an acceptable level—According to guidance from NIST, after testing an information system, authorizing officials determine whether the risks (e.g., unaddressed vulnerabilities) are acceptable and issue an authorization to operate. Each of the systems that the 2020 Census IT architecture will rely on must undergo a security assessment and obtain an authorization to operate before it can be used for the 2020 Census.

Properly configuring and patching systems supporting the 2020 Census—Configuration management controls ensure that only authorized and fully tested software is placed in operation, software and hardware are updated, information systems are monitored, patches are applied to these systems to protect against known vulnerabilities, and emergency changes are documented and approved. We reported in September 2015 that for fiscal year 2014, 22 of the 24 agencies in our review (including the Department of Commerce) had weaknesses in configuration management controls. Moreover, in April 2015, US-CERT issued an alert stating that cyber threat adversaries continue to exploit common, but unpatched, software products from vendors such as Adobe, Microsoft, and Oracle. Without strong configuration and patch management, an attacker may exploit a vulnerability that has not yet been mitigated, gaining unauthorized access to information systems or greater privileges than authorized.
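To make the patch management concept concrete, the following is a minimal sketch, in Python, of the compare-against-baseline logic that configuration management controls rely on. The package names, version numbers, and inventory source are hypothetical illustrations, not the Bureau's or Commerce's actual tooling; in practice, agencies typically use enterprise configuration management and vulnerability scanning products rather than ad hoc scripts.

```python
# Illustrative sketch only: check a hypothetical software inventory against
# a minimum-approved-version baseline. Names and versions are invented.

# Minimum approved versions from a hypothetical configuration baseline.
BASELINE = {
    "openssl": (1, 0, 2),
    "java-runtime": (8, 0, 91),
    "adobe-reader": (15, 16, 0),
}

def parse_version(text):
    """Convert a dotted version string like '1.0.1' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def find_unpatched(installed):
    """Return packages whose installed version is below the approved baseline."""
    findings = []
    for name, version_text in installed.items():
        minimum = BASELINE.get(name)
        if minimum is not None and parse_version(version_text) < minimum:
            findings.append((name, version_text, minimum))
    return findings

# Hypothetical inventory, e.g., gathered by an asset-management agent.
installed_versions = {"openssl": "1.0.1", "java-runtime": "8.0.91"}

for name, found, minimum in find_unpatched(installed_versions):
    print(f"UNPATCHED: {name} {found} is below baseline {'.'.join(map(str, minimum))}")
```

The sketch flags only known packages that fall below the baseline; the harder parts of configuration management in practice, such as maintaining a complete inventory and testing patches before deployment, are outside what a simple comparison can show.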
The Bureau’s acting Chief Information Officer and its Chief Information Security Officer have acknowledged these challenges and described the Bureau’s plans to address them. For example, the Bureau has developed a risk management framework, which is intended to ensure that proper security controls are in place and provide authorizing officials with details on residual risk and progress to address those risks. In addition, the Bureau has also embedded three security engineers in the 2020 Census program to provide assistance and guidance to project teams. Bureau officials also stated that they are in the process of filling—or plan to fill— vacancies in key positions and intend to hire staff with expertise in key areas, such as cloud computing. To minimize the risk of phishing, Bureau officials note that they plan to contract with a company to monitor the Internet for fraudulent sites pretending to be the Census Bureau. Continued focus on these considerable challenges will be important as the Bureau begins to develop and/or acquire systems and implement the 2020 design. We have previously reported on Census Bureau weaknesses that are related to many of these information security challenges. Specifically, we reported in January 2013 that the Bureau had a number of weaknesses in its information security controls due in part to the fact that it had not fully implemented a comprehensive information security program. Thus, we made 13 public recommendations in areas such as security awareness training, incident response, and security assessments. We also made 102 recommendations to address technical weaknesses we identified related to access controls, configuration management, and contingency planning. As of May 2016, the Bureau had made significant progress in addressing these recommendations. Specifically, it had implemented all 13 public recommendations and 88 of 102 technical recommendations. For example, the Bureau developed and implemented a risk management framework with a goal of better management visibility of information security risks; this framework addressed a recommendation to document acceptance of risks for management review. Of the remaining 14 open recommendations, we have determined that 3 require additional actions by the Bureau, and for the other 11 we have work under way to evaluate if they have been fully addressed. These recommendations pertain to access controls and configuration management, and are related to two of the security challenges we previously mentioned—ensuring individuals gain only limited and appropriate access, and properly configuring and patching systems. The Bureau’s progress toward addressing our recommendations is encouraging; however, completing this effort is necessary to ensure that sensitive information is adequately protected and that the challenges we outline in this report are overcome. In conclusion, our ongoing audit work determined that the CEDCAP program has the potential to offer numerous benefits to the Bureau’s survey programs, including the 2020 Census program. While the Bureau has taken steps to implement these projects, considerable work remains between now and when its production systems need to be in place to support the 2020 Census end-to-end system integration test—in less than a year and a half. 
Moreover, although the three selected CEDCAP projects had key project monitoring and controlling practices in place or planned, the gaps we identified in our draft report limit the Bureau's ability to effectively monitor and control these projects. Given the numerous and critical dependencies between the CEDCAP and 2020 Census programs, their parallel implementation tracks, and the 2020 Census' immovable deadline, it is imperative that the interdependencies between these programs be effectively managed. However, this has not always been the case, and additional actions would help align the programs. Additionally, while the large-scale technological changes planned for the 2020 Decennial Census introduce great potential for efficiency and effectiveness gains, they also introduce many information security challenges, including educating the public to offset inevitable phishing scams. Continued focus on these considerable security challenges and remaining open recommendations will be important as the Bureau begins to develop and/or acquire systems and implement the 2020 Census design.

Our draft report, which is currently with Commerce and the Bureau for comment, includes several recommendations that, if implemented, would help address the issues we identified and improve the management of the interdependencies between the CEDCAP and 2020 Census programs. In addition, prior to today's hearing we discussed the preliminary findings from our draft report with Bureau officials, including the Decennial Census Programs' Associate Director, and incorporated their technical comments, as appropriate. According to the officials, they have actions under way to address some of the issues we identified, such as those related to improving risk management for CEDCAP projects. Regarding our finding that the CEDCAP and 2020 Census programs lack an effective process for integrating schedule dependencies, Bureau officials stated that they believe they are in compliance with GAO's schedule guide. However, we maintain that the Bureau is not in compliance with the GAO schedule guide because it has not documented an effective process for managing the dependencies. Regarding our finding that the two programs do not have an integrated list of risks facing both programs, Bureau officials stated that they have an enterprise-wide risk management program, in which the Deputy Director has visibility into risks affecting both programs. While we agree that the Deputy Director has visibility into the CEDCAP and 2020 Census risks, documentation of joint management of key program risks does not exist. Therefore, we maintain our position that it is important that the programs establish a comprehensive list of risks facing both programs and agree on their respective roles and responsibilities for jointly managing the list.

Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.

If you have any questions concerning this statement, please contact Carol C. Harris, Director, Information Technology Acquisition Management Issues, at (202) 512-4456 or [email protected]. GAO staff who made key contributions to this testimony are Shannin G. O'Neill (Assistant Director), Jeanne Sung (Analyst in Charge), Andrew Beggs, Chris Businsky, Juana Collymore, Lee McCracken, and Kate Sharkey.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Census Bureau (which is part of the Department of Commerce) plans to significantly change the methods and technology it uses to count the population with the 2020 Decennial Census. The Bureau's redesign of the census relies on the acquisition and development of many new and modified systems. Several of the key systems are to be provided by an enterprise-wide initiative called CEDCAP, which is a large and complex modernization program intended to deliver a system-of-systems for all the Bureau's survey data collection and processing functions. This statement summarizes preliminary findings from GAO's draft report on, among other things, the Bureau's management of the interdependencies between the CEDCAP and 2020 Census programs, and key information security challenges the Bureau faces in implementing the 2020 Census design. To develop that draft report, GAO reviewed Bureau documentation, such as project plans and schedules, compared it against relevant guidance, and analyzed information security reports and documents.

The 2020 Census program is heavily dependent upon the Census Enterprise Data Collection and Processing (CEDCAP) program to deliver the key systems needed to support the 2020 Census redesign. However, GAO's preliminary findings showed that while the two programs have taken steps to coordinate their schedules, risks, and requirements, they lacked effective processes for managing their interdependencies. Specifically:

Among tens of thousands of schedule activities, the two programs are expected to manually identify activities that are dependent on each other, and rather than establishing one integrated dependency schedule, the programs maintain two separate dependency schedules. This has contributed to misalignment in milestones between the programs.

The programs do not have an integrated list of interdependent program risks, and thus they do not always recognize the same risks that impact both programs.

Among other things, key requirements have not been defined for validating responses from individuals who respond to the census using an address instead of a Bureau-assigned identification number, because of the Bureau's limited knowledge and experience in this area. The lack of knowledge and specific requirements related to this critical function is concerning, given that less than a year and a half remains before the Census end-to-end test begins in August 2017 (which is intended to test all key systems and operations to ensure readiness for the 2020 Census).

Officials have acknowledged these weaknesses and reported that they are taking, or plan to take, steps to address the issues. However, until these interdependencies are managed more effectively, the Bureau will be limited in understanding the work needed by both programs to meet milestones, mitigate major risks, and ensure that requirements are appropriately identified.

While the large-scale technological changes for the 2020 Decennial Census introduce great potential for efficiency and effectiveness gains, they also introduce many information security challenges. For example, the introduction of an option for households to respond using the Internet puts respondents at greater risk of phishing attacks (requests for information from authentic-looking, but fake, e-mails and websites).
In addition, because the Bureau plans to allow its enumerators to use mobile devices to collect information from households that do not self-respond to the survey, it is important that the Bureau ensure that these devices are adequately protected. The Bureau has begun efforts to address many of these challenges; as it begins implementing the 2020 Census design, continued focus on these considerable security challenges will be critical. GAO's draft report includes several recommendations to help the Bureau better manage CEDCAP and 2020 Census program interdependencies related to schedule, risk, and requirements. The draft report is currently with the Department of Commerce and the Bureau for comment.
To encourage the sharing of federal health care resources, the Veterans Administration and Department of Defense Health Resources Sharing and Emergency Operations Act authorizes VA medical centers and DOD military treatment facilities (MTF) to enter into sharing agreements to buy, sell, and barter medical and support services. Local VA and DOD officials have identified benefits that have resulted from such sharing, including increased revenue, enhanced staff proficiency, fuller utilization of staff and equipment, improved beneficiary access, and reduced cost of services. Seven of these sharing agreements are joint venture agreements, which involve the sharing of physical space as well as health care services. These joint ventures range from a single, jointly staffed MTF serving both VA and DOD patients—as is the case with Mike O’Callaghan Federal Hospital at Nellis Air Force Base in Nevada—to more modest sharing in Key West, Florida, where VA and DOD share a building that houses their separate outpatient clinics. In addition to physical space, agreements at these sites usually provide for one agency to refer patients to the other for inpatient and/or outpatient care. As table 1 shows, DOD is most often the host agency, that is, the agency providing the majority of services. In addition to referred patients, joint ventures, like other VA and DOD facilities, share dually eligible patients. Recent changes in VA’s and DOD’s health care programs have increased both the number of dual eligibles and the likelihood that they will obtain services from both systems. The number of veterans, including all military retirees, eligible for VA health care was increased in fiscal year 1999 due to removal of statutory restrictions. In addition, the number of military retirees eligible for DOD health care increased in 2001 when full eligibility was extended to retirees age 65 and over. Furthermore, a February 2002 increase in VA’s copayment for outpatient drugs—from $2 per prescription to $7 per prescription—has given dual eligibles who receive health care from VA more incentive to have their prescriptions filled at a DOD pharmacy. The Institute of Medicine (IOM) raised national awareness of the problem of medication errors with its 2000 study, To Err is Human: Building a Safer Health System. As we reported in 2000, there is general agreement that medication errors are a significant problem, although the actual magnitude of the problem is uncertain. Researchers and patient safety advocates have suggested certain measures to reduce the risk of medication errors, and VA and DOD have incorporated many of these measures as features of their health care systems. Figure 1 illustrates the typical process, including safeguards that VA and DOD use to provide medications to patients. Medication safety experts have identified the following factors that can contribute to reducing medication errors. According to experts from organizations such as the American Society of Health-System Pharmacists (ASHP) and IOM, access to patient medical information is important to both providers and pharmacists in reducing medication errors. A study of adverse drug events conducted by Brigham and Women’s Hospital found that the inaccessibility of patient information—such as information on the patient’s condition, results of laboratory tests, and current medications—was a leading cause of prescribing errors. 
The ASHP guidelines for preventing hospital medication errors state that prescribers should evaluate the patient's total status and review all existing drug therapy before prescribing new or additional medications. They also recommend that pharmacists and others responsible for processing drug orders have routine access to appropriate clinical patient information—including medication and allergy profiles, diagnoses, and laboratory results—to help evaluate the appropriateness and efficacy of medication orders. One way to provide this ready access is a computerized medical record. A computerized medical record can improve health care delivery by providing medical personnel with better data access, faster data retrieval, and more versatility in data display than is available with a paper record. Both VA and DOD are in the process of transitioning from paper-based to electronic systems for recording and accessing patient health information. VA's system, the Computerized Patient Record System (CPRS), captures a wide range of patient information, including progress notes, vital statistics, laboratory results, medications, drug allergies, and radiological and catheterization images. DOD's system, the Composite Health Care System (CHCS), captures similar, but less extensive, patient information. For example, CHCS cannot capture or store progress notes or electronic images. JCAHO standards for hospitals and ambulatory health organizations require organizations to maintain formularies and to consider the potential for medication errors as a criterion for selecting the drugs that will be stocked. Although formulary systems are frequently considered a mechanism for controlling costs, patient safety experts maintain that they can also optimize therapeutic outcomes and facilitate medication safety. According to IOM, a formulary system can help reduce adverse drug events because the drugs selected for the formulary are evaluated by knowledgeable experts and chosen based on their relative therapeutic merits and safety. In addition, formularies limit unneeded variety in drug use—a practice supported by ISMP and the Institute for Healthcare Improvement—and assist in educating prescribers on the safe and appropriate use of formulary drugs. Both VA and DOD have formulary systems. VA's national formulary consists of about 1,200 pharmacy items, including over 1,000 drugs, and each of VA's 21 regional Veterans Integrated Service Networks can augment the national formulary. DOD's Basic Core Formulary consists of about 165 drugs, and an MTF can add other drugs based on the clinical services and scope of care provided by that facility. Both agencies also have approval processes for prescribers to obtain nonformulary drugs for their patients when medically necessary. As part of their ordering systems, some VA and DOD facilities have also developed electronic decision-making support related to their formularies, such as prompts to remind physicians to order specific laboratory tests prior to administering certain drugs or alerts related to the safe use of certain drugs. Computerized provider order entry (CPOE) systems can reduce medication errors by eliminating the legibility problems of handwritten orders and by providing clinical decision-making support, sending alerts and instantaneous reminders directly to providers as orders are being placed.
For instance, as providers enter a medication order, they can be given a potential range of doses for medications ordered, alerted to relevant laboratory results, and prompted to verify which medication is being ordered when the drug sounds or looks like another drug on the formulary. Studies have shown computerized provider ordering reduced medication errors by 55 percent to 86 percent. In light of this evidence, the Leapfrog Group for Patient Safety adopted computerized provider order entry as one of its initial safety standards. ISMP has also emphasized the need to take advantage of electronic ordering technology, calling for the elimination of handwritten prescriptions nationwide by 2003. VA and DOD acknowledge the safety benefits of providers electronically ordering medications, and both CPRS and CHCS (for outpatient prescriptions only at most locations) have this capability. VA established a goal in its 2002 Network Performance Plan for 95 percent use of CPOE (both inpatient and outpatient) by 2002, with 100 percent use planned for 2004. While DOD officials told us that CPOE is encouraged and widely utilized, DOD has no written policy or goals related to its use. Both VA’s and DOD’s electronic ordering systems perform automatic checks for potential adverse reactions due to drug allergies and interactions. VA’s CPRS performs checks for drug allergies and interactions between all medications ordered and dispensed by a VA facility, including those sent from VA’s mail order center. Although medications dispensed for the same patient at another VA facility are generally not included in the check, VA officials told us that they are exploring methods to broaden their drug interaction capability. DOD’s system for drug interaction checking is more comprehensive than VA’s system. CHCS checks for drug allergies and interactions between drugs prescribed or dispensed at the MTF, and DOD’s Pharmacy Data Transaction Service (PDTS) aggregates information from CHCS with other points of service—other MTFs, network pharmacies, and DOD’s mail order pharmacy—to perform a complete drug interaction check. Automatic electronic checks for drug interactions, commonly available in retail drug stores, have been shown to greatly minimize medication errors. For example, one study found that an automated review of prescriptions written for 23,269 elderly patients produced 43,007 alerts warning about potential medication problems—24,266 of which recommended a change in drug or dosage. Professional groups such as ASHP and ISMP have also acknowledged the value of these systems. At the six joint venture sites where inpatient services are provided, all patients referred for inpatient care receive medications from the inpatient facility providing the care. Processes used to provide and record inpatient medications to referred patients are the same as those used for the host agency’s own beneficiaries. Inpatient medications are ordered using the host facility’s formulary guidelines and filled through the inpatient pharmacy. Initial supplies of discharge medications (usually 30 days or less) are also typically provided, although patients are expected to return to their home agency pharmacy for longer-term supplies. In contrast, the process for providing medications to shared outpatients differs across sites. At six of the joint venture sites, each agency maintains a separate outpatient pharmacy. 
As a general rule, each agency expects its beneficiaries to use its pharmacy for outpatient prescriptions, even when providers from the other agency order the prescription. For instance, in Hawaii, both the Tripler Army Medical Center and the VA outpatient clinic next door maintain outpatient pharmacies. VA patients who are referred to Tripler for outpatient specialty care are expected to return to the VA clinic pharmacy to have their prescriptions filled. Even though this is the general rule at most sites, we noted that exceptions occur. For instance, at David Grant Medical Center on Travis Air Force Base, DOD supplies oncology medications to VA patients. Another exception is that all joint venture inpatient facilities provide weekend and after-hours emergency room care to patients of the other agency and, generally, medications are also supplied if needed. In contrast to the general rule, at the DOD facility in El Paso, referred VA patients are not expected to return to their home agency for their initial prescriptions but rather are allowed to obtain an initial supply of drugs from the DOD pharmacy. Subsequent prescriptions for these patients (renewals or refills) must be filled by their VA pharmacy. At the seventh site, Key West, only DOD maintains a pharmacy. It serves both VA and DOD patients. However, VA patients receive only initial, short-term prescriptions (up to 30 days) from this DOD pharmacy and obtain longer-term prescriptions and refills via mail from the VA Medical Center in Miami. VA’s and DOD’s separate, uncoordinated information and formulary systems result in gaps in medication safeguards for shared inpatients and outpatients. Lacking coordinated information systems, providers and pharmacists at joint venture sites often cannot access shared patients’ complete health information, including prescribed medications, nor can providers from one agency use electronic ordering to prescribe drugs that are to be dispensed by the other agency’s pharmacy. Because information systems are uncoordinated, checks for drug allergies and interactions for shared patients are based on incomplete information. In addition, separate formulary systems introduce complications for shared patients because providers must either prescribe from the other agency’s formulary, which may contain drugs unfamiliar to providers, or prescribe a limited supply of a drug, which may later be switched to comply with the formulary of the patient’s home agency. These gaps are illustrated in figure 2. Ready access to pertinent clinical information is an important feature of medication safety; while VA’s and DOD’s patient information systems are capable of serving this function for each agency’s own beneficiaries, gaps exist for shared patients. VA and DOD providers and pharmacists have ready access to health records of their own beneficiaries, largely through CPRS and CHCS, respectively. However, when agencies refer patients for care, the treating agency’s providers and pharmacists have incomplete access to patients’ health and medication information. Although referrals will usually be accompanied by some explanation of patients’ medical conditions, the bulk of their electronic health and medication information, which resides in the health information system of their home agency, will often not be available to providers and pharmacists in the agency where they are referred for care. Access for pharmacists and treating providers to patient information in the referring agency’s information system varies by location. 
For example, at four joint venture sites, pharmacists filling prescriptions for shared patients have no access to the other agency’s patient information system. At another site, pharmacy access is restricted—at Tripler Army Medical Center in Hawaii, access to VA’s CPRS is available in the inpatient pharmacy, but only one pharmacist has access. Providers at a few facilities have broader access. For example, at the David Grant Medical Center at Travis Air Force Base in northern California, CPRS is installed on every network computer that has CHCS, and providers in certain departments have been granted CPRS access. VA and DOD pharmacists and providers we spoke with noted that lack of relevant patient health information could be a problem for shared patients. One example given to us was a VA provider treating a dual-eligible patient for diabetes. Certain drugs cannot be safely prescribed for diabetics without monitoring through laboratory tests. If the patient receives care from a VA physician but has prescriptions filled at a DOD pharmacy, the pharmacist would be unable to access the patient’s medical record to review these laboratory results. Without this access, the pharmacist must call VA to ensure these laboratory values are within normal limits. In addition, pharmacy personnel at Tripler in Hawaii, where a single inpatient pharmacist has CPRS access, told us that additional pharmacists need CPRS access to facilitate after-hours medication needs of VA patients when this pharmacist is unavailable. Computerized provider ordering of medications increases safety by assisting with medication decisions, providing alerts for drug interactions and allergies, and obviating handwriting legibility and transcription problems. However, prescriptions for shared patients are less likely to be ordered electronically by providers. Although both VA and DOD providers have outpatient electronic ordering capabilities when prescriptions are dispensed at their own pharmacies, patients referred from one agency to the other for care are typically expected to return to their home pharmacy to get prescriptions filled. With the exception of DOD providers in Hawaii, none of the joint venture sites have the capability for providers to electronically order medications through their own computer systems for drugs that are to be dispensed by the other agency’s pharmacy, nor do they typically have access to the other agency’s electronic ordering systems to issue medication orders. Consequently, providers either handwrite medication orders for shared patients or give them printed copies that must be retyped into the patients’ home agency’s pharmacy system. Both situations introduce risks unique to shared patients. We also found situations where providers had the capability to avoid handwriting prescriptions but continued to handwrite them. In Key West, for example, where all drugs are dispensed from the DOD pharmacy, VA providers have access to DOD’s electronic ordering system, CHCS; but, for the most part, they handwrite prescriptions. These providers record patient care and medications in VA’s CPRS, and if they were to electronically order medications, it would necessitate entry into a second system. They told us that using CHCS was slow and cumbersome, and ordering the medications using it took too much time. A VA provider in Hawaii told us that, for these same reasons, providers sometimes handwrote prescriptions for dual eligibles to have filled at the DOD pharmacy when only one or two medications were being ordered. 
Finally, although VA patients benefit when providers electronically order medications in VA hospitals, they generally lose this benefit when referred to DOD hospitals. Providers in VA hospitals have electronic ordering capability for inpatient medications, but this capability is not generally available in DOD hospitals. VA patients referred to DOD hospitals, like DOD's own beneficiaries, usually have their prescriptions handwritten by the provider and then manually entered into CHCS by pharmacy personnel. Thus, these patients are subjected to the risks associated with handwritten prescriptions, such as illegible orders and transcription errors. Shared patients also do not get the full benefit of VA's and DOD's automatic checks for drug allergies and interactions. VA and DOD patients who receive all their medications through only one health care system will have comprehensive medication histories stored in either CPRS or CHCS (in conjunction with PDTS). When a medication is ordered, CPRS or CHCS/PDTS will perform automatic checks for drug allergies and interactions. However, if patients are taking medications obtained from both agencies, neither agency's record of patient medications is complete at any joint venture site. Thus, when interaction checks are done, they will be incomplete for shared patients because the checks are restricted to the information available within each system. Likewise, providers may be unaware of drug allergies. For example, when a patient who routinely gets health care at the VA clinic in El Paso is referred to the Army Medical Center for outpatient specialty care, the DOD pharmacy will fill a prescription for up to 30 days of medications. However, when the pharmacy performs its automatic checks, drug allergies may not be detected because information on drug allergies is likely to be in VA's CPRS, where the bulk of the patient's clinical information is stored, not in CHCS/PDTS, where the drug check will occur. In its interim report, the President's Task Force to Improve Health Care Delivery for Our Nation's Veterans stated that instances of adverse drug events might be substantially reduced for shared patients through use of a comprehensive screening tool like PDTS, and it plans further analysis in this area for its final report. Because VA and DOD each has its own formulary system, providers who treat referred patients sometimes prescribe from the referring agency's formulary and sometimes from their own facility's formulary, depending on where the prescription will be filled. Unless the prescribed drug is common to both formularies, each situation limits the medication safety benefits of a formulary system, such as increased provider familiarity with the drugs prescribed and the added safety net provided by clinical decision support. The President's Task Force to Improve Health Care Delivery for Our Nation's Veterans noted that a joint VA/DOD formulary could combine the clinical expertise of both VA and DOD and improve patient safety. Providers who use the other agency's formulary in prescribing for shared patients and find that the drug they would normally prescribe is not listed are disadvantaged in several ways. First, according to formulary system principles endorsed by the American Medical Association, ASHP, and others, one characteristic of a formulary system should be that the pharmacy and therapeutics (P&T) committee educates providers about drugs on the formulary.
A senior official from ISMP told us that provider drug knowledge is also reinforced by a formulary system because formularies limit the number of drugs providers need to be knowledgeable about. Consequently, providers should be less likely to make mistakes in drug selection or dosage when prescribing formulary drugs. Second, when prescribing a drug that is not on their formulary, providers may lose the clinical support capabilities that may be built into their agency’s CPOE system. For example, the medication error prevention committee at Tripler in Hawaii evaluates Tripler’s formulary drugs for safety problems and designs safeguards into CHCS, such as distinctive lettering to alert providers to drug names that look alike or sound alike. However, DOD providers typically try to prescribe for VA outpatients using VA’s formulary. Consequently, this safeguard is lost to the shared patient. Providers usually prescribe from their own facility’s formulary for a referred patient if the prescription is to be filled at their facility’s pharmacy. For instance, at all joint venture sites, referred inpatients receive short-term supplies of discharge medications at the host facility’s pharmacy. If patients need longer-term supplies of medications or refills, they typically are expected to return to their home pharmacy. This situation can also put patients at risk if the original medication is not on the formulary at their home pharmacy. For instance, in Key West, VA physicians write VA patients two different prescriptions: one for their initial supply to be filled at the joint venture’s DOD pharmacy and a second for a longer-term supply that is mailed from the VA Medical Center in Miami. One VA physician told us that when a VA formulary drug he wants to prescribe is not on the DOD formulary, he prescribes an equivalent drug carried by the DOD pharmacy for the short term and orders the VA formulary drug from Miami to use on a long-term basis. Experts agree that such interchanging of drugs in a therapeutic class may sometimes cause problems because differences in individual physiology make some people react differently to a very similar therapeutic agent. Although such interchange is an accepted practice in formulary systems, when physicians are able to avoid switching drugs, they reduce the risk that an adverse reaction will occur. Recognizing these risks for shared patients, joint venture facilities have undertaken efforts intended to address these safety gaps. However, none of these efforts fully solve the problems that exist, nor are they all used at any site. All joint venture sites have taken steps to increase access to patient information. For example, at Tripler in Hawaii, VA and DOD recently added VA’s CPRS to computers in the DOD hospital so that VA physicians monitoring the care of VA inpatients would have electronic access to patients’ VA health records. However, at the time of our visit, most DOD physicians were unaware that the capability to access CPRS existed, and DOD officials at Tripler had no plans to promote its use or to provide training. Similarly, some physicians at all other joint ventures have access to both systems; but, as in Hawaii, this access is generally limited in the number of computers that have this capability and the number of providers who have been authorized to use it. 
For instance, access to both systems is available at some locations in the Mike O’Callaghan Federal Hospital in Nevada, but VA pharmacy officials at the VA outpatient clinic in this joint venture told us that the lack of such access in the clinic presented a major problem. They told us that not having access to such patient information as test results and physician notes made it difficult for them to research questions about patients’ medications. Only two sites have pharmacies with access to the other agency’s patient information system; access is very limited at one of those sites—at Tripler, only one pharmacist has been authorized to use CPRS. Furthermore, medical personnel who had access told us that its use is hindered by their lack of familiarity with the other agency’s system and by the difficulties of accessing separate, dissimilar systems. Recognizing the increased risks associated with handwriting prescriptions rather than using CPOE, two joint venture sites have devised ways to minimize this risk for shared patients. In Hawaii, VA providers have worked out an agreement with the DOD pharmacy that they will provide dual beneficiaries a computer-printed copy of the electronic order, called an “action profile,” which the pharmacy will accept in lieu of a handwritten order. In Hawaii—at the time of our visit—and northern California, a printer for DOD’s CHCS had been installed in the VA pharmacy so that medication orders from DOD providers could be printed out in the VA pharmacy. VA pharmacy personnel then re-enter orders into CPRS to dispense the medications. While these efforts remove the potential for misreading handwritten prescriptions, they fall short of the full benefits of electronic ordering and filling because re-entering information into CPRS introduces the potential for transcription errors. In August 2002, information technology personnel in Hawaii implemented an electronic link that allows outpatient medication orders entered into CHCS for VA patients to be transmitted directly into CPRS, eliminating the need for manual re-entry in the VA pharmacy. Officials involved in the Hawaii project told us that this link is working well and that this technology was developed with the intent of transferring it to other sites. They also told us that the project was developed with the ultimate intent of two-way—or bi-directional—communications, so that with some additional modification a link could be established allowing VA physicians to send CPRS medication orders to CHCS at Tripler for processing and filling. Three joint venture sites have taken steps to compensate for problems associated with drug interaction checks for shared patients. For example, VA physicians in Hawaii told us that when they provide prescriptions for dual eligibles to be filled at DOD’s pharmacy, they also enter them into VA’s CPRS and mark them “hold” so that they will not be dispensed by the VA pharmacy. Thus, checks for interactions with other drugs prescribed by VA can be performed by CPRS, and the patients’ medication information will be updated to reflect the medication orders. In Texas, VA adds information to CPRS about care and medications provided to referred patients by DOD physicians. This information is recorded in a special section of CPRS. When VA physicians subsequently access patients’ records, CPRS alerts them that new information has been added to this section of the record, but the information is not included in automatic drug checks. 
The VA clinic in Anchorage, Alaska, uses a different approach to address the problem of incomplete medication records. Officials there told us they have developed software to supplement information in the CPRS record by capturing and displaying information about drugs obtained from DOD and other non-VA sources, including herbal supplements and over-the-counter drugs. Thus, providers and pharmacists have additional information that might help them prevent adverse drug interactions. However, information collected in this way may not be accurate or complete because it depends on patient recall and is entered manually. In addition, this information is not accessed by CPRS’s automatic drug checks because it is a supplement to, not a part of, the CPRS record. Finally, five joint ventures have instituted practices to address safety problems related to separate formularies. For example, the Mike O’Callaghan Federal Hospital at Nellis Air Force Base in Nevada has a combined P&T committee that includes both VA and DOD representatives who select the medications that will be included on the hospital’s inpatient formulary. In addition, the committee approved nearly 50 VA formulary medications to be stocked in the hospital pharmacy for use by VA inpatients at this facility. All measures taken to improve medication safety, such as entering reminders or alerts into CHCS to safeguard against medication mistakes, also apply to VA drugs stocked in the pharmacy. Other sites have undertaken less comprehensive measures to address problems arising from separate formularies. For instance, pharmacies at two sites stock drugs commonly prescribed for the other agency’s patients, but neither host agency’s P&T committee has representatives from both agencies. At two other sites, representatives from both agencies are on the host agency’s P&T committee. While these efforts are helpful in overcoming difficulties associated with separate formularies, none is a complete solution. As VA and DOD strive to improve efficiency and access to care through greater collaboration and sharing of resources, it is likely that the number of patients who receive care from both systems will increase. Consequently, the safety of shared patients merits continuing concern. While our findings are based on the joint venture sites, they may have relevance wherever patient care is shared between VA and DOD. Some joint ventures have taken steps to address medication safety problems for shared patients, but these steps are partial solutions and gaps remain. For example, facilities have provided only limited access to the other agency’s patient medical information system and have not always provided training in its use. Therefore, providers do not have adequate access to patient medical information for shared patients, and lacking the comprehensive capability afforded by a system like PDTS, they can perform only incomplete checks for drug interactions and allergies. In addition, when shared patients return to their home agency to have prescriptions filled, providers give them handwritten or computer-printed prescriptions, rather than electronically ordering medications, creating risk for legibility or transcription errors. Furthermore, separate P&T committees may be unable to effectively overcome problems that arise from separate formularies. The measures already taken by some joint ventures show that risks that shared patients face can be addressed. 
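The comprehensive screening approach discussed above, of the kind PDTS performs across DOD points of service, can be reduced to a simple idea: pool every source's medication list before checking for interactions. The following is a minimal sketch in Python of that pooling logic; the drug pairs, system names, and medication lists are invented purely for illustration and do not represent the actual logic or data of PDTS, CPRS, or CHCS.

```python
# Illustrative sketch only: a pooled drug interaction check of the kind a
# PDTS-style aggregator performs. Interaction pairs and medication lists
# below are invented, not clinical data.

# A hypothetical table of known interacting drug pairs (order-insensitive).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"metformin", "contrast-dye"}): "lactic acidosis risk",
}

def check_interactions(medication_sources):
    """Pool medications from every source, then flag known interacting pairs.

    A check restricted to a single source would miss pairs that are split
    across systems, which is the gap described above for shared patients.
    """
    pooled = set()
    for source, meds in medication_sources.items():
        pooled.update(meds)
    alerts = []
    for pair, risk in INTERACTIONS.items():
        if pair <= pooled:  # both drugs present somewhere across the sources
            alerts.append((sorted(pair), risk))
    return alerts

# A shared patient whose records are split across two systems: each system's
# own check sees only one drug, but the pooled check flags the pair.
sources = {"VA-CPRS": ["warfarin"], "DOD-CHCS": ["aspirin"]}
for pair, risk in check_interactions(sources):
    print(f"ALERT: {' + '.join(pair)} -> {risk}")
```

The point of the sketch is the gap it illustrates: a check run against either system's list alone finds nothing, while the pooled check flags the interacting pair.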
VA and DOD could develop systemwide rather than local solutions to address the needs of shared patients nationally as well as at the joint venture sites. To better protect shared patients at the joint ventures, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health and that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to (1) develop the capability for VA and DOD providers to access patient medical information relevant to medication decision making, regardless of whether that information resides in VA's or DOD's information system, and provide training to physicians and pharmacists who need to use this access; (2) develop the capability to perform a comprehensive, automatic drug interaction check that uses medication information from all VA and DOD facilities and mail order operations and DOD's network pharmacies, and evaluate the potential for DOD's PDTS to be used for this purpose; (3) require providers to use computerized order entry of medications for shared patients where it is available, and implement system modifications that will enable providers to electronically order medications to be dispensed at the other agency's pharmacies; and (4) establish a joint VA and DOD pharmacy and therapeutics committee, or similar working group, at each joint venture site to determine how best to safely meet the medication needs of VA and DOD shared patients and to overcome obstacles associated with separate formularies.

The Department of Veterans Affairs and the Department of Defense provided written comments on a draft of this report. These comments are discussed below and reprinted in appendix I and appendix II, respectively. VA concurred with all our recommendations, while DOD concurred with two of our recommendations, partially concurred with one, and did not concur with one. Both VA and DOD concurred with our recommendation to develop the capability for VA and DOD providers to access patient medical information in both CPRS and CHCS. In their comments, both agencies discussed longer-term solutions, such as the joint VA-DOD Federal Health Information Exchange (FHIE) initiative. While we support long-term efforts that would lead toward a more seamless sharing of information between VA and DOD, we believe that a number of joint venture sites have demonstrated that interim steps, such as giving providers access to and training on the other agency's system, are both warranted and feasible. Both agencies also concurred with our recommendation regarding the development of comprehensive, automatic drug interaction checks, including the evaluation of PDTS for this purpose. VA stated that this capability would be accomplished with the second phase of the VA-DOD joint plan, called HealthePeople (Federal), which VA expects to be implemented in fiscal year 2005. Although agreeing to evaluate the cost benefit of adopting PDTS, VA said that, based on VA and DOD workload data, a relatively small number of veterans had been treated in both systems in the period from October 2001 through May 2002 (240,716 unique patients, or 29.6 percent of all dual eligibles) and raised the issue of whether the cost of PDTS was justified for so few cases. We believe this group of almost a quarter of a million patients represents a significant opportunity for adverse drug events to occur, especially since, based on the prescription patterns of a typical VA patient, the group received an estimated 4 million prescriptions in this 8-month period.
Furthermore, the number of patients potentially at risk is larger than the dual eligible group. It includes an unknown number of patients who receive care and medications from both agencies under VA-DOD resource sharing agreements. While we agree that cost is an important factor, we believe the large number of prescriptions for these patients justifies an evaluation of PDTS that considers both cost and patient safety. VA concurred and DOD partially concurred with our recommendation on CPOE. VA said it has already planned for its providers to use computerized order entry for all orders, including medications, by fiscal year 2004. It also made reference to the Hawaii pilot project discussed earlier in this report as a way of extending this capability for shared patients but said that a more robust bi-directional capability would be included as a systems requirement in the HealthePeople (Federal) effort. DOD also agreed to require that providers use CPOE for shared patients where available; however, it did not agree with system modifications as the approach for extending this capability. Instead, DOD advocated the joint procurement of a commercial off-the-shelf pharmacy information system. It said that this approach would provide greater economic returns and system interoperability since both agencies are pursuing plans to upgrade or replace their pharmacy information system modules. We agree with this approach as a longer-term solution. However, agency officials told us that neither agency has plans to upgrade or replace its system until fiscal year 2005 at the earliest, leaving shared patients at continued risk for medication errors until the new system is operational. System modifications already accomplished in Hawaii indicate that interim steps toward reducing these risks are possible. VA concurred with our recommendation on establishing a joint P&T committee or similar working group at each joint venture site and said it would pursue this recommendation via the VA/DOD Executive Committee, a working group for VA/DOD collaboration issues. DOD did not concur with establishing a joint P&T committee at each site; however, we recommended the establishment of a joint VA-DOD group, either a P&T committee or a similar working group, that would determine how best to safely meet the medication needs of shared patients at each site. DOD expressed support for the already-established working groups, but, as we have noted, only three joint venture sites have such collaborative groups. We are sending copies of this report to the Secretary of Veterans Affairs, the Secretary of Defense, and other interested parties. Copies will also be made available to others on request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7101. Other contacts and major contributors are listed in appendix III. In addition to those named above, the following staff members made key contributions to this report: Irene J. Barnett, Linda Diggs, Mary W. Reich, Karen Sloan, and Thomas Walke.
Medication errors and adverse drug reactions are a significant concern for the Department of Veterans Affairs (VA) and the Department of Defense (DOD) because their large beneficiary populations receive many prescriptions. Each agency has taken steps to reduce the risk of medication errors, such as making patients' medical records more accessible to providers and performing checks for drug interactions. Although each agency has designed safeguards to protect its own patients, some VA and DOD patients receive medication from both agencies. Shared patients face a higher risk of medication error. Joint (DOD and VA) venture sites with inpatient facilities provide services to shared inpatients in the same manner as they do for their own beneficiaries; that is, medications are ordered using the facility's guidelines and filled through the inpatient pharmacy at that facility. Gaps in safeguards result primarily from VA's and DOD's separate, uncoordinated information and formulary systems. Joint venture sites have tried to address some of these safety gaps. For instance, all sites have made patient information more accessible by providing additional, although incomplete, access to the other agency's patient information system.
The Department of Labor required states to implement WIA's major provisions by July 1, 2000, although six states began implementation a year earlier, in July 1999. The act authorizes three separate funding streams for adults, dislocated workers, and youth. WIA's appropriation for fiscal year 2002 was $950 million for adult, $1.1 billion for youth, and $1.5 billion for dislocated worker programs, for a total of about $3.5 billion (see table 1). WIA encourages collaboration and partnerships in making a wide array of services universally accessible to these three populations and allows states broad discretion in designing their workforce investment systems. WIA requires most federally funded employment and training services to be delivered through a one-stop system overseen by newly created state and local workforce investment boards, although the services themselves may be provided by partner agencies and locally contracted service providers. In fact, WIA encourages client referrals to programs offered by one-stop partners. Once Congress appropriates WIA funds, the amount of money that flows to states and localities depends on a specific formula that takes into account unemployment. Thus, any changes in the annual appropriation or in the elements of the allocation formula will result in year-to-year funding fluctuations. Once the Congress appropriates funds for a given fiscal year, Labor notifies states of their annual allocation—usually in the February to March timeframe. The funds are made available to states and localities at three separate times during the year, depending on the program (see fig. 1). For youth services, all funds for the year are made available on April 1, 3 months before the beginning of the program year on July 1. This once-a-year youth allocation is designed to help states and local areas gear up for summer youth activities. The adult and dislocated worker funding allocations are distributed twice a year from two different years' appropriations—on July 1 (1/4 of the allotment) and on October 1 (3/4 of the allotment)—with the October allocation funded from a new fiscal year's appropriation. States and localities are required to manage their WIA programs, including spending, on a program-year basis, regardless of when funds are made available. In addition, WIA allows states 3 program years to spend their funds, while local areas have 2 program years. Once WIA funds are made available, they flow from Labor to states, from states to local areas, and from local areas to service providers. For dislocated worker funds, the Secretary of Labor retains 20 percent of the funds in a national reserve account to be used for emergency grants, demonstrations, and technical assistance and allocates the remaining funds to the states according to a specified formula. Once states receive their allocation, the governor can reserve up to 25 percent of dislocated worker funds for rapid response activities intended to help workers faced with plant closures and layoffs quickly transition to new employment. In addition to funds set aside for rapid response, WIA allows states to set aside up to 15 percent of the dislocated worker allotment and permits them to combine the dislocated worker set-aside with similar set-asides from their adult and youth allotments.
States use the set-asides to support a variety of statewide activities, such as helping establish one-stop centers, providing incentive grants to local areas, operating management information systems, and disseminating lists of organizations that can provide training. After funds are set aside for rapid response and statewide activities, the remainder—at least 60 percent for dislocated workers and 85 percent for adult and youth—is then allocated to local workforce areas, also according to a specified formula. In addition, local areas may reserve up to 10 percent from each of the three funding streams for local administrative activities (see table 2).

Labor collects quarterly financial status reports from states, detailing expenditures separately for the six funding categories under WIA—two categories at the state level (governor’s set-aside and rapid response) and four at the local level (adult, dislocated worker, youth, and local administration). Appendix I depicts a sample form that states complete and submit to Labor. Because adult and dislocated worker funds for each program year are provided from two separate appropriations, Labor requires states to track financial information separately by the year in which funds are appropriated. As a result, states submit a total of 11 reports each quarter for activities funded by the current program year’s allocation, as shown in table 3. In addition, WIA gives states 3 years within which to spend their grant; consequently, states may be tracking activities that are funded by 3 different program years, thus submitting up to 33 reports each quarter (11 reports multiplied by the 3 program years in which funds are available).

In completing their financial status reports, states are required to follow Labor’s guidance that identifies and defines the data elements to be reported. Labor collects “total federal obligations”—which it defines as the sum of expenditures, accruals, and unliquidated obligations—to determine how much states have already spent and how much is still available for spending. Table 4 shows the definitions of each of these terms. In addition, WIA regulations require expenditures to be reported on an accrual basis. This means states should report all cash outlays and all accruals as expenditures on their reports. As of July 2002, all states we contacted told us that they were reporting expenditures on an accrual basis.

Financial reporting begins at the local service provider level and progresses through the local, state, and national levels. Figure 2 shows how WIA financial reports flow from one level to the next and the data elements that are reported. After reconciling any discrepancies, states aggregate the local reports and, according to Labor officials, are required to submit a financial status report to their regional Labor office 45 days after the quarter’s end. Regional officials told us that 10 days later, after performing edit checks, they certify and forward states’ reports to Labor’s national headquarters. The national office then merges information for the six funding categories into the three funding streams—adults, dislocated workers, and youth—and combines the program and fiscal year data into a single program year. Within 5 days of receiving reports from its regional offices (that is, 60 days after the end of the quarter), Labor is required to present the Congress with a single report. Labor uses states’ financial reports to determine whether there are any unspent funds that may need to be redistributed among states.
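The relationships among the reporting terms above can be captured in a few lines of code. This is a minimal sketch of the definitions as the text states them; the class and field names are our own, and table 4's exact wording may differ.

```python
from dataclasses import dataclass

@dataclass
class QuarterlyReport:
    cash_outlays: float               # amounts actually paid out
    accruals: float                   # goods or services received but not yet paid for
    unliquidated_obligations: float   # committed by contract, not yet accrued or paid

    @property
    def expenditures(self) -> float:
        # WIA regulations require accrual-basis reporting: expenditures
        # are cash outlays plus accruals.
        return self.cash_outlays + self.accruals

    @property
    def total_federal_obligations(self) -> float:
        # Labor's measure of how much of the grant is already spoken for:
        # expenditures plus unliquidated obligations.
        return self.expenditures + self.unliquidated_obligations

# A state with three open program years across the six funding categories
# could file up to 33 such reports each quarter (11 reports x 3 program years).
```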
Local areas have 2 years within which to spend their annual allocations, while states have 3 program years. Thus, program year 2000 funds must be spent by the end of program year 2001 for localities and by the end of program year 2002 for states. If funds are not spent, WIA directs both states and Labor to recapture and, if appropriate, redistribute unspent funds according to specific criteria (see fig. 3). The recapture processes are similar at the state and federal levels.

States have a two-tiered process by which they recapture available funds. First, at the end of the initial program year, states may reclaim funding from local areas with total obligations less than 80 percent of their annual allocation and redistribute these recaptured funds to those local areas that have met the criterion for total obligations. Second, at the close of local areas’ 2-year grant period, states may recapture any unexpended local funds and may reallocate the funds to other local areas that have fully expended their allocation or to statewide activities, but only in the third year the grant is available.

Like local areas, states are also subject to having their funds recaptured. At the federal level, Labor may recapture funds from states with total obligations less than 80 percent of their annual allotment at the end of the first program year. Labor applies the same recapture process at the end of the second program year. At both intervals, Labor may redistribute these funds to other states that have met the requisite total obligation rate. By the end of the 3-year grant period, Labor may recapture any state funds that have not been fully expended. Because states’ WIA grants expire after 3 years, funds recaptured by Labor at the end of the third year may not be redistributed to other states. Rather, Labor must return the funds to the U.S. Treasury.

Our analysis of Labor’s data shows that states are spending their WIA funds within the authorized 3-year timeframe—virtually all funds allocated for program year 1999 were spent within the requisite 3 years, and 90 percent of program year 2000 funds were spent within 2 years. In addition, states have spent just over half of their program year 2001 allocation within the first year the funds were available. By contrast, Labor’s estimate of expenditure rates suggests that states are not spending their funds as quickly, because the estimate is based on all funds states currently have available—from older funds carried in from prior program years to those only recently distributed. The newest funds, which states have 2 more years to spend, comprised two-thirds of all funds states had available for program year 2001. Moreover, many of the remaining funds carried over may have already been obligated. However, states do not all use the same definition of obligations, so what they report to Labor differs. Lacking consistent information on how much states and local areas have committed to spend, Labor relies on expenditure data and overestimates the funds states have available to spend.

Our analysis of Labor’s expenditure data shows that states are spending their WIA funds within the allowed 3-year period. Nationwide, Labor’s data show that states expended essentially all of their program year 1999 funds within the authorized 3-year period that ended with program year 2001.
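The 80 percent total obligation criterion described earlier in this section lends itself to a short sketch. The threshold is taken from the text; the formula for the amount recaptured (the shortfall below 80 percent of the allocation) is one plausible reading, since the text states the criterion but not the amount.

```python
def recapture_amount(annual_allocation: float,
                     total_obligations: float,
                     threshold: float = 0.80) -> float:
    """Funds potentially subject to recapture at the end of the first year.

    Both Labor (for states) and states (for local areas) may reclaim
    funds when total obligations fall below 80 percent of the annual
    allocation. Here we assume the recapturable amount is the shortfall
    below that threshold; recaptured funds may then be redistributed to
    grantees that met the criterion.
    """
    if total_obligations >= threshold * annual_allocation:
        return 0.0
    return threshold * annual_allocation - total_obligations

# Example: a local area that obligated only 70 percent of a $10 million
# allocation could see $1 million (the gap up to 80 percent) recaptured.
print(recapture_amount(10_000_000, 7_000_000))  # 1000000.0
```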
In addition, states have expended 90 percent of program year 2000 funds within the first 2 years the funds were available—55 percent in the first year and another 35 percent in the second year. States have one more year to spend the remaining 10 percent of their program year 2000 funds. States had also expended 56 percent of program year 2001 funds, with 2 years still remaining (see fig. 4).

While nationwide data show that funds are being spent within the required time period, state-by-state expenditure rates vary widely. For example, Vermont spent 92 percent of its program year 2000 allocation in the first year and 8 percent in the next, while Kentucky spent 29 percent in the first year and 63 percent in the next. When program year 2000 expenditure rates were combined for the first and second years that funds were available, all states had spent over 70 percent. Forty-four states had spent 90 percent or more of their program year 2000 funds, with 9 of those 44 states—Delaware, Idaho, Maine, Michigan, Montana, North Dakota, Rhode Island, Utah, and Vermont—achieving a 100-percent expenditure rate. (See fig. 5.) Expenditure rates for first-year spending of program year 2001 funds were similar to those of program year 2000, and state-to-state spending rates also varied widely, as shown in figure 6, ranging from 19 percent for New Mexico to 92 percent for Vermont. For program year 2001, the majority of states spent at least 55 percent of their funds, and 16 states spent at least 70 percent. (See app. II for state-by-state expenditure rates for program years 2000 and 2001.)

Expenditure rates increased for many states from program year 2000 to program year 2001. Thirty-one states spent funds at the same or a faster pace in program year 2001 than they did during the same period in the prior year. However, for 21 states, spending occurred at a slower pace in 2001 than in 2000. Nevertheless, 9 of the 21 states still spent at or above the nationwide rate of 56 percent in program year 2001.

In contrast to our expenditure rate estimate, Labor’s estimated expenditure rate of 65 percent at the end of program year 2001 aggregates data over 3 years and considers all funds states have available. Labor based its calculation on older unexpended funds carried in from prior years as well as the newest funds represented by the program year 2001 allocation, even though that allocation made up the largest share of all available funds. For example, of the total $5 billion available at the beginning of program year 2001, about two-thirds (65 percent) represented the program year 2001 allocation, and about another one-third represented amounts carried in from program years 2000 and 1999 (29 percent and 6 percent, respectively), as shown in figure 7. By basing its calculation of an expenditure rate—65 percent at the end of program year 2001—on the sum of all available funds, Labor did not take into account the 2 years that remain for states to spend the majority of their funds.

Differences in how states report expenditures result in data inaccuracies and reporting inconsistencies. WIA regulations require states to include accruals—amounts owed for goods and services received that have not yet been paid—when reporting expenditures, but a few states reported only cash outlays in program year 2001. As a result, reported expenditures may have been understated.
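The understatement that comes from cash-basis reporting is easy to see in code. This sketch is illustrative only; the dates and dollar amount are invented, and the ledger structure is our own, not Labor's reporting format.

```python
import datetime as dt
from typing import Optional

def record_expenditure(ledger: list, amount: float,
                       service_date: dt.date,
                       invoice_paid: Optional[dt.date],
                       accrual_basis: bool = True) -> None:
    """Recognize a cost on an accrual or a cash basis.

    On an accrual basis the cost is recognized when the good or service
    is received; on a cash basis it is recognized only when the invoice
    is paid, which may fall in a later quarter or program year.
    """
    recognized = service_date if accrual_basis else invoice_paid
    if recognized is not None:
        ledger.append((recognized, amount))

# A training class completed in June of one program year but not invoiced
# until September of the next: cash-basis reporting pushes the cost into
# the new program year and understates the prior year's expenditures.
accrual_ledger, cash_ledger = [], []
record_expenditure(accrual_ledger, 2500.0,
                   dt.date(2001, 6, 15), dt.date(2001, 9, 10), accrual_basis=True)
record_expenditure(cash_ledger, 2500.0,
                   dt.date(2001, 6, 15), dt.date(2001, 9, 10), accrual_basis=False)
print(accrual_ledger)  # recognized June 15 (program year 2000)
print(cash_ledger)     # recognized September 10 (program year 2001)
```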
Some states and local areas may still be using a cash-based accounting system, usually tied to the state’s or local area’s existing accounting system and often used to report expenditures for other programs, such as welfare. State and local workforce officials we spoke with in areas that are reporting cash outlays told us they are modifying their accounting systems and will soon begin reporting accruals. In fact, as of program year 2002, all states we spoke with told us they are beginning to collect and report expenditures on an accrual basis, as required under WIA regulations.

Excluding accruals may understate expenditures primarily in the short term because invoices for goods and services received in one month are often converted into cash outlays in the next month. However, if this conversion takes a long time to occur and if expenditures are uneven from month to month and year to year, the effect of accruals may be longer term, and expenditures for a given year may be understated. For example, a jobseeker may have completed a training class in June of one program year, but the school does not submit an invoice to the local area until September of the next program year. If the local area captures the cost of training as an expenditure only after paying the invoice, it will wait until the new program year to report it and will understate its prior program year expenditures. Eventually, accruals may catch up with expenditures over the life of the grant—2 years for local areas and 3 years for states.

In addition to reporting expenditures each quarter, states also report obligations—funds committed through contracts for goods and services for which a payment has not yet been made. However, not all of the 9 states we contacted reported obligations in the same way, and the differences in reporting resulted in data inconsistencies. Labor’s guidance requires that states report obligations but does not specify whether obligations made at the local level—the point at which services are delivered—should be included. States interpret Labor’s definition of obligations in several ways. Some states we contacted include as obligations the amount of the WIA grant they allocate to their local areas. By contrast, other states included funds that their local areas have committed in contracts for individual training accounts, staff salaries, and one-stop operating costs. Officials in these states told us they tracked locally committed funds because those funds more accurately reflect total spending activity. Of the 9 states we contacted, all collect information on local obligations. However, 4 of them report these data to Labor while the other 5 do not. These differences result in data that are not comparable across states.

Because Labor’s data on obligations do not consistently reflect local commitments, Labor relies on expenditure data to estimate available funds. In doing so, Labor overestimates the amount states have available to spend. For 3 of the 4 states that report local obligations, the amount of funds the state has available is much smaller when local obligations are taken into account along with expenditures. For example, for New York, available funds are cut almost by a third, and in California and Washington, available funds essentially disappear—decreasing from 40 percent to 7 percent and from 33 percent to 2 percent, respectively (see fig. 8).
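The effect of counting local obligations can be shown with a small sketch. The percentages reproduce California's figures cited above; the underlying dollar amounts are invented to match those percentages, since the report does not give them.

```python
def available_share(allocation: float, expenditures: float,
                    local_obligations: float = 0.0) -> float:
    """Share of the allocation still uncommitted.

    Labor's method counts only expenditures; adding local obligations
    shows how much of the remainder is already spoken for.
    """
    return (allocation - expenditures - local_obligations) / allocation

# Hypothetical figures scaled so that 40 percent appears unspent when
# only expenditures are counted, but 7 percent when local obligations
# are also counted, as reported for California.
california = {"allocation": 100.0, "expenditures": 60.0, "local_obligations": 33.0}
print(available_share(california["allocation"], california["expenditures"]))  # 0.40
print(available_share(**california))                                          # 0.07
```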
For Vermont, the fourth state that collects and reports local obligations, obligations and expenditures were very similar, with about 8 percent of program year 2001 funds available.

A key role for Labor under WIA is to monitor state spending; it does so by comparing the expenditure information it receives from states with benchmarks Labor has developed. However, these benchmarks are often not communicated to the states. Labor uses the benchmarks to formulate budget requests and to identify which states need monitoring and additional guidance. While Labor has provided additional financial reporting guidance and technical assistance, some state officials told us that they remain concerned about WIA spending and financial reporting and would like further help in developing strategies to effectively manage expenditures.

Labor has established several national expenditure rates used as benchmarks against which to judge each state’s spending rate. In program year 2000, for example, Labor set its benchmarks at 25 percent of states’ allocations during the first half of the year and 50 percent three-quarters of the way through the year, and it compared state expenditure reports against these targets. However, Labor’s data show that most states—40 in all—did not meet the 50-percent benchmark stipulated for March 31, 2001. The remaining 12 states either met or exceeded this benchmark. In program year 2001, Labor assumed higher expenditures and projected an expenditure rate of 69 percent, which 26 states met or exceeded. Labor uses its projection to formulate the following year’s budget request and bases it on total WIA funds available, which include the current year allocation and prior years’ unexpended balances carried into the current year. (See app. III for states that met, exceeded, or were below benchmarks.)

Labor intended the program year 2000 benchmarks to serve as internal guidelines for targeting oversight efforts and has not always communicated them to states. Some state officials told us that lacking information on benchmarks has created frustration in managing their WIA spending because Labor notified these states that they were underspending their funds but did not specify the goal they had to achieve. Moreover, state and local officials said that it was unclear how the benchmarks take into account states’ 3-year and localities’ 2-year spending windows.

Labor established protocols in April 2001 to address WIA spending issues, requiring the appropriate regional offices to contact states whose expenditures appeared low. States whose expenditure rates fell below program year 2000 benchmarks were subject to immediate regional office examination. In addition to reviewing state spending patterns and determining the magnitude of underspending, regional offices were required to work with state staff to determine specific reasons for underspending, help develop corrective action plans, and submit weekly and monthly progress reports on implementation status to Labor headquarters.

Labor’s regional offices have taken various approaches to monitoring states’ WIA spending. As of July 2002, six of seven regional offices had sent monitoring letters to 26 states. Three states received letters because spending was below the benchmarks, and these states were required to submit a corrective action plan. The other 23 states received letters as part of ongoing regional oversight, regardless of spending level.
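The benchmark comparison described above amounts to a simple screening rule. The 25 percent and 50 percent thresholds come from the text; the checkpoint labels, state names, and rates below are illustrative assumptions.

```python
# Labor's internal program year 2000 benchmarks: 25 percent of the
# allocation spent by the midpoint of the program year and 50 percent
# three-quarters of the way through (March 31 for a July 1 start).
BENCHMARKS = {"mid_year": 0.25, "three_quarters": 0.50}

def states_needing_monitoring(expenditure_rates: dict, checkpoint: str) -> list:
    """Flag states whose expenditure rate falls below the benchmark,
    as a regional office might under the April 2001 protocols."""
    target = BENCHMARKS[checkpoint]
    return sorted(state for state, rate in expenditure_rates.items()
                  if rate < target)

# Hypothetical example: the first state would be flagged for follow-up.
print(states_needing_monitoring({"State A": 0.42, "State B": 0.55},
                                "three_quarters"))  # ['State A']
```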
The seventh region elected to hold meetings and used other modes of direct communication with state officials instead of sending them formal letters. In addition to sending letters, four regions conducted monitoring site visits to states with low expenditure rates.

At the national level, Labor has issued guidance containing financial reporting instructions and definitions as well as a technical assistance guide on financial management. At the regional level, guidance and assistance efforts vary. For example, the Dallas Regional Office issued a memorandum suggesting steps states and local areas could take to address low enrollment and expenditures. Suggestions included modifying policies and procedures to quickly move one-stop clients who are on waiting lists into intensive or training activities and reporting Individual Training Account expenditures on an accrual basis regardless of whether the provider has submitted a bill. The New York Regional Office has developed a quarterly WIA expenditure tracking system and uses the information to conduct extensive briefings, correspondence, and discussions with its states in addition to providing guidance and technical assistance through training sessions.

Despite Labor’s guidance and assistance efforts, some state and local officials cited several concerns about financial reporting. As we noted, states are reporting obligations inconsistently because Labor’s definition of obligations is ambiguous. A recent report by Labor’s Inspector General confirms that the definition is unclear and that Labor provided conflicting instructions to Ohio state officials on how to report obligations. Obligations are especially important because WIA requires that recapture decisions be based on amounts expended and obligated. According to state and local officials, three aspects of Labor’s definition were problematic:

First, Labor’s definition of obligations does not specify whether local obligations to service providers should be included when states report to Labor or whether obligation data should simply reflect state obligations to local boards. For example, Florida counts as obligations any funds it passes through to local areas, whereas Washington includes obligations made at the local level.

Second, even when the issue of reporting local obligations is clarified, what constitutes an obligation is open to interpretation. Officials at a local area in Ohio, for example, said that some local areas report an obligation only when there is a legally binding contract, while others include amounts that have been reserved in anticipation of a contract.

Third, confusion exists over the timeframe used to define obligations. Colorado state officials noted that some local areas report commitments as obligations if the funds are committed no more than 3 months into the future, others consider obligations only within the current program year, and still others count as obligations any future commitments regardless of the length of the contract period. Ohio officials questioned whether obligations should be recorded for only 1 year given that WIA gives local areas 2 years in which to spend their funds.

In addition, officials in several local areas told us that Individual Training Account vouchers posed a particular financial reporting challenge. It is unclear what portion of a training voucher is to be reported as an obligation, given that the vouchers may cover a 2- to 3-year period.
Several state and local officials also cited the need for more information on strategies to better manage WIA spending and told us that they would benefit from sharing these strategies. While they acknowledged that Labor had provided financial reporting guidance, they wanted a mechanism or forum for exchanging ideas, questions, and answers on spending issues. Officials at both the state and local level expressed a need for greater clarity in the definition of obligations, more specific and frequent guidance and technical assistance, and systematic sharing of promising practices to effectively manage WIA spending. Labor officials acknowledged that states are misinterpreting the financial reporting guidance and that the guidance could be further clarified.

To ensure uniform reporting procedures, a few states have developed their own policy guidance. For example, Colorado recently issued a directive on reporting obligations and accrued expenditures. The directive allows the costs of Individual Training Accounts to be reported as obligations when an order is placed or a contract is awarded for the procurement of goods and services. Furthermore, voucher agreements may be obligated for up to 12 months.

State and local officials told us that a variety of factors affect WIA expenditure rates. Delays in reporting expenditures result from lengthy spending approval processes and cumbersome contract procurement procedures as well as from a lack of timely provider billing. In addition, fluctuating funding levels affect states’ and localities’ willingness to make long-term commitments and inhibit their ability to do long-range planning. Some state and local officials we spoke with said that they use strategies to mitigate these factors and better manage spending.

Officials in some states and localities told us that lengthy processes to obtain approval to spend the funds, WIA’s emphasis on contracting for services, and lags in service provider billing all contributed to delays in spending WIA funds. After the state allocates the WIA grant to the local areas, the local areas may have to go through time-consuming internal procedures to obtain approval to spend the funds before they can disburse or obligate the money. Officials in Cleveland told us that the city council has to approve the grant allocation from the state for each funding stream; this process includes approval of the grant’s receipt as well as its expenditure and takes anywhere from several weeks to 8 months. Local area officials in Colorado told us that county commissioners have to approve the release of funds from the state to the local area, a process that takes anywhere from 2 weeks to 3 months, depending on the number of counties comprising a local area.

WIA’s emphasis on contracting for services may also delay spending for states and localities, especially those whose procurement process is lengthy. New York officials told us that contracts must go through a competitive bidding process and many layers of review, including the state’s department of labor, comptroller, and attorney general, resulting in a procurement process lasting an average of 3 months. Illinois state officials attributed slow statewide expenditure rates to the state’s lengthy procurement process, in which it took 8 months to procure a vendor to redesign the state’s case management system. Performance-based contracts, under which contractors are paid as they meet agreed-upon performance goals, also result in financial reporting delays.
Officials in 4 of the states we contacted told us that they rely on these types of contracts in at least some of their local areas. As a result, they record expenditures later in the program year than entities that reimburse contractors whenever costs are incurred. According to Florida state officials, all contracts are performance based, by state law. Contractors are paid at certain intervals during the contract period, depending on when they have met stipulated outcomes such as job retention. However, an outcome such as job retention may not be known until as long as 6 months after the contract terminates. Suffolk County in New York pays its contractors at intervals; for example, 50 percent of the contract is paid when 50 percent of the training has been completed.

Some key service providers often bill late, sometimes months after providing services. Both state and local officials told us that public institutions, particularly community and technical colleges, are primary providers of training, often delivering such services through Individual Training Accounts. The 4- to 6-month lag in school billing in Miami, for example, not only causes delays in reporting expenditures; public schools, which are not accustomed to billing monthly, may also have little financial incentive to expedite billing because they do not rely on WIA funds as a major source of their tuition revenue.

Slower spending of statewide funds compared with local funds also affects expenditure rates. Labor’s data for program year 2001 show that states are spending their statewide funds at less than two-thirds the rate of local funds. For example, the governor’s statewide 15 percent set-aside was 37 percent expended, compared with 70 percent expended for local adult programs (see fig. 9). The difference in expenditure rates is due, in part, to WIA’s requirement that some of the statewide funds be used for end-of-year incentive grants to local areas for exemplary performance on the local performance measures. In addition, Washington, for example, uses statewide funds for long-term projects and for activities such as program evaluations. Likewise, rapid response funds are held at the state level to enable response to mass layoffs or plant closures. Florida state officials told us that, by state law, the state board must retain 30 percent of its rapid response funds until the latter part of the program year. Although these factors affect when expenditures are incurred and reported, other factors may influence states’ decisions on whether to spend their WIA funds.

Three key factors affect the extent to which states spend their WIA funds. First, fluctuations in funding levels due to funding formulas or budget decisions affect states’ and localities’ willingness to make long-term commitments and their ability to plan comprehensive workforce systems. Second, WIA’s emphasis on referrals to other one-stop partners’ programs may result in non-WIA funds being spent first. Third, implementation issues, particularly during the early stages of the program, may have resulted in lower expenditures while one-stop centers were still being established.

Year-to-year fluctuations in funding, whether due to the allocation formulas or appropriation decisions, make localities reluctant to commit funds for long-term training and education, affecting overall WIA spending. How much states and localities receive can vary dramatically from year to year as a result of WIA’s funding formula allocations for the adult, youth, and dislocated worker programs.
The dislocated worker funding formula, which distributes a third of its funds based upon the amount of “excess unemployment” (unemployment exceeding 4.5 percent), is especially volatile. In addition, funds appropriated for WIA programs vary according to annual budget decisions. For program year 2001, for example, $177.5 million was rescinded from the dislocated worker program. State and local area officials told us that they were uncertain whether the rescission would be restored and that the uncertainty contributed to their sense of funding instability. Local area funding levels can also fluctuate when local areas receive an infusion of unanticipated, unspent statewide funds, as was the case in Seattle and Tacoma. Washington’s governor held back some rapid response funds in anticipation of aluminum plant closings and mass layoffs stemming from the energy shortage along the West Coast. However, when the plant closings did not materialize, the state no longer needed the funds for rapid response activities and allocated them to these two cities midway through the program year, with the expectation that the funds would be spent by the end of the program year.

Year-to-year fluctuations in funding also hinder states’ and localities’ ability to plan comprehensive workforce investment systems. For example, in New York, funds for dislocated workers decreased by about 40 percent from program year 1999 to program year 2000, a fluctuation that state officials said would inhibit its local areas from committing funds beyond the current program year because future funding levels are uncertain. Similarly, state officials in Ohio told us that their local areas have adopted a cautious approach to current year spending and plan to carry over unspent funds because of funding uncertainty.

WIA’s emphasis on referrals to other sources of assistance makes WIA a funding source of last resort. As part of the core services under WIA, adults and dislocated workers can get help in establishing financial aid eligibility for training and education programs that are available in the community but are not funded under WIA. In addition, to qualify for training services under the adult and dislocated worker programs, individuals must be unable to obtain other grant assistance, such as Pell Grants, or must require assistance beyond that provided by other grant aid programs. Sometimes, states make it a priority for local areas to spend other grant funds first. For example, in Ohio, WIA spending was delayed because of the large amount of funds to be spent from the Temporary Assistance for Needy Families (TANF) grant.

Start-up issues may have also affected expenditures in the initial stages of WIA’s implementation, especially during program years 1999 and 2000. Expenditures during this period may have been lower because many one-stop centers were not fully up and running while states and localities were developing or substantially retooling existing employment and training systems. For example, while Texas got a head start in establishing one-stops under WIA because it was an early implementer, state workforce officials struggled with other issues such as implementing individual training accounts and developing data collection systems for WIA’s performance measures. In addition, some states and local areas initially took a “work-first” approach, emphasizing job placement services that were less expensive than long-term training and education services, especially given the positive economic and employment conditions at the time of WIA’s enactment.
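The "excess unemployment" factor described at the start of this section can be sketched as follows. Only the 4.5 percent threshold and the one-third weighting come from the text; the actual statutory formula has additional factors, so this is a simplified illustration, and all names and inputs are our own.

```python
def excess_unemployment(unemployed: float, labor_force: float,
                        threshold: float = 0.045) -> float:
    """Number of unemployed persons above the 4.5 percent threshold.

    Because small shifts in unemployment move a state across the
    threshold, this factor makes year-to-year allocations volatile.
    """
    return max(0.0, unemployed - threshold * labor_force)

def excess_unemployment_shares(states: dict) -> dict:
    """Each state's share of the one-third of dislocated worker funds
    distributed on excess unemployment. `states` maps a state name to
    a (unemployed, labor_force) tuple; figures are hypothetical."""
    excess = {s: excess_unemployment(u, lf) for s, (u, lf) in states.items()}
    total = sum(excess.values()) or 1.0  # guard against division by zero
    return {s: e / total for s, e in excess.items()}

# Example: a state at 4.4 percent unemployment gets no share of this
# third of the funds, however large its labor force.
print(excess_unemployment_shares({"A": (55_000, 1_000_000),    # 5.5 percent
                                  "B": (44_000, 1_000_000)}))  # 4.4 percent
```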
Workforce officials told us that most of these issues have been resolved since the transition from the Job Training Partnership Act (JTPA).

To manage spending more effectively, some states and local areas have developed strategies to mitigate factors affecting spending levels or delays in reporting expenditures. Most states we contacted have a process in place to recapture funds from local areas that have not met their target spending rates and reallocate them to those areas that have done so, although only a few had used it or planned to use it for program year 2000 funds, in part because they were transitioning from JTPA. Florida, however, actively monitors expenditures and requires its local areas to meet a minimum 25 percent expenditure rate after 6 months, 50 percent after 12 months, 75 percent after 18 months, and 100 percent at the end of 24 months, when local grants expire. To address lengthy contracting processes, Chicago coordinates the timing of the procurement process with the availability of funds. Florida has addressed delayed school billing by mandating expedited billing in the contract, and Vermont pays tuition expenses at the time of participant registration rather than at course completion. To facilitate the spending of statewide funds, Texas’ state WIA plan identifies statewide initiatives at the beginning of the program year so that statewide funds can be allocated more expeditiously.

In past reports, we have found that states and local areas have stepped up to the challenge of fundamentally reconfiguring their workforce investment systems to serve the nation’s jobseekers and employers. Though spending was initially sluggish as state and local boards ramped up their workforce systems, the pace of spending picked up as the second full year of implementation under WIA came to a close. Our analysis of Labor’s data shows that states are rapidly spending their funds—in fact, nationwide, states have spent 90 percent within 2 years, much of it within the first year the funds were available. This pace of spending has occurred even though the law allows states 3 years to spend the funds. But expenditures by themselves do not provide a complete picture of spending activity. Obligations—funds that have been committed on behalf of WIA customers—must also be considered to accurately gauge how much is truly available for spending. Moreover, the law requires Labor to use obligations in its recapture decisions. Taken together, expenditures and obligations are important tools for effective grant management and prudent oversight of the program.

Labor has begun taking an active role in monitoring program spending. But state officials have told us that it is not enough; they need clearer and more consistent guidance from Labor on how to manage and report their WIA spending and how to collect and report obligations, particularly commitments made at the local level. Failing this, states will continue struggling to understand what information is needed, and Labor’s data will continue to be incomplete and inaccurate. Perhaps most problematic, though, is that, lacking consistent, reliable data on obligations, Labor uses only expenditure data to gauge budgetary need. In so doing, Labor does not take into account longer-term commitments made to customers and service providers and, as a result, overestimates available funds.
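Florida's milestone schedule described above is effectively a lookup table, and a short sketch shows how a state might use it to flag lagging local areas. The milestones come from the text; the function name and the example rates are hypothetical.

```python
# Florida's local-area spending milestones: month of the 2-year grant
# mapped to the minimum cumulative expenditure rate required.
FLORIDA_MILESTONES = [(6, 0.25), (12, 0.50), (18, 0.75), (24, 1.00)]

def behind_schedule(months_elapsed: int, expenditure_rate: float) -> bool:
    """True if a local area has missed the latest milestone it has reached."""
    due = [rate for month, rate in FLORIDA_MILESTONES if months_elapsed >= month]
    return bool(due) and expenditure_rate < max(due)

print(behind_schedule(12, 0.47))  # True: below the 50 percent milestone
print(behind_schedule(12, 0.55))  # False: ahead of schedule
```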
Budget decisions based on underestimated spending levels contribute to funding instability in the system and impair the ability of state and local officials to plan workforce systems that provide the nation’s jobseekers and employers with critically needed services. To build their workforce investment systems, states must carefully plan and use their limited resources in a way that best meets the growing demand for employment and training services in the current uncertain economic environment. State officials told us that they seek more guidance and assistance in managing their WIA funds wisely, and some states have implemented strategies to do so. But states will not be able to effectively manage their spending and sustain service levels without knowing what spending goals they must achieve and without a forum for sharing promising practices to help them succeed.

To enhance Labor’s ability to manage its WIA grants and to improve the accuracy and consistency of financial reporting, we are making several recommendations to Labor. Through collaboration with states, Labor should clarify the definition of unliquidated obligations to include funds committed at the point of service delivery, in addition to funds obligated at the state level for statewide WIA activities, and not funds that states merely allocate to their local areas; specify what constitutes an obligation, to address state and local area concerns regarding contracts; and specify the timeframe for recording an obligation, particularly when it covers time periods longer than a program year.

To provide a more complete picture of spending activity and to obtain accurate information for its recapture decisions, Labor should require states to collect and report information on obligations at the point of service delivery and include such obligations in determining states’ available funds. To help states and local areas manage their spending more judiciously, Labor should proactively provide states and local areas with guidance and technical assistance focused on reporting financial information, communicate the spending benchmarks that states should meet, and systematically share promising practices and effective spending management strategies.

We provided a draft of this report to officials at Labor for their review and comment. Labor’s comments are in appendix IV. In its comments, Labor noted that the report contained a number of findings that will be very helpful during WIA’s reauthorization. In general, Labor agreed with our findings and recommendations related to providing clearer definitions, guidance, and technical assistance to states to help them manage their WIA spending. However, Labor disagreed with our findings and recommendations related to the importance of considering obligations in addition to expenditures in assessing WIA’s financial position. In response to our finding that states are spending their WIA funds faster than the authorized 3-year period requires, Labor said that states were exceeding the law’s minimum spending requirements but that it must look beyond minimum expectations when investing limited resources. We agree with this point. In fact, we found an expenditure rate of 90 percent of program year 2000 funds within 2 years, indicating that states are going well beyond minimum expectations. Labor also acknowledged that its spending estimate included all funds available at the start of the program year, without which, in Labor’s view, an analysis of expenditure rates would be misleading.
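The difference between Labor's aggregate view and the year-by-year view can be reproduced in a few lines of arithmetic. The shares and rates below come from the figures reported earlier (roughly $5 billion available at the start of program year 2001, of which 65 percent was the new allocation); the exact dollar amounts are back-calculated for illustration and should not be read as Labor's actual ledger.

```python
# Funds available at the start of program year 2001, in billions of dollars.
available = {"py2001_allocation": 3.25,   # about 65 percent of the total
             "py2000_carry_in": 1.45,     # about 29 percent
             "py1999_carry_in": 0.30}     # about 6 percent

# Labor's aggregate method: total expenditures over all available funds,
# old carryover and brand-new allocation alike. A hypothetical $3.25
# billion spent during the year reproduces Labor's 65 percent rate.
spent_during_py2001 = 3.25
labor_rate = spent_during_py2001 / sum(available.values())
print(round(labor_rate, 2))  # 0.65

# Year-by-year view: the 2001 allocation judged against itself, with
# 2 years still remaining to spend it. The report found 56 percent.
py2001_spent = 1.82
cohort_rate = py2001_spent / available["py2001_allocation"]
print(round(cohort_rate, 2))  # 0.56
```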
We do not contest Labor’s methodology, but we think it is important to note that most of the funds available to states were allocated within the past year, and states have not had long to spend them. We continue to assert that a better way to look at expenditure rates is not in the aggregate but on a year-by-year basis.

Regarding our conclusion that Labor’s data do not accurately reflect state spending because they exclude obligations, Labor commented that, while it collects information on obligations because of statutory requirements, obligations are unimportant in formulating the budget because they represent future commitments to provide services, not actual service delivery. We continue to believe that obligations play a significant role in light of WIA’s greater emphasis on contracting for services, and we are recommending that Labor establish a clearer definition of obligations that states can follow so that they can report more meaningful data to Labor.

While agreeing with our recommendation to clarify its definition of obligations, Labor took exception to the recommendation to collect and report obligations made at the point of service delivery. Labor was concerned that a new reporting requirement would be extremely burdensome and costly to implement nationwide, in part because it did not believe that service providers always collected this information. We believe that assessing both obligations and expenditures is an important tool for sound financial management at any level—state, local area, or service provider—and a number of states are already collecting local obligations. We are pleased to note that Labor said it plans to work with states on this recommendation during WIA reauthorization.

Labor also concurred with our recommendations to provide additional financial reporting guidance and technical assistance as well as to share promising practices for effectively managing spending. Labor agreed that it would be a priority for the coming year to ensure that all states are aware of requirements for the accounting of WIA funds. Regarding our recommendation that Labor communicate spending benchmarks that states should meet, Labor disagreed with our characterization of the expenditure rates as benchmarks, saying instead that they were projections of spending used to formulate a budget. Labor also commented that the expenditure rates used to monitor spending were based on actual financial reports submitted by states, not on Labor’s expectations. Labor has used these expenditure rates as benchmarks to identify states that were underspending their WIA funds and to prioritize oversight efforts. We agree that using benchmarks to prioritize monitoring helps manage limited resources; however, if spending targets are established, they should be disclosed.

Finally, Labor was concerned about the unprecedented level of unspent balances carried over from prior years, citing these excess funds as justification for the dislocated worker rescission and for seeking additional budget reductions. While unspent balances under WIA may be larger than those experienced under JTPA, it may not be reasonable to expect comparable spending levels between the two programs. WIA’s requirements represent a significant shift from prior workforce programs, including its emphasis on contracting for services, streamlining services through one-stop centers, and establishing training vouchers on behalf of customers.
In addition, we contend that these unspent balances may have already been committed and may be unavailable for spending. We agree that the nation will face many challenges in financing its priorities in the coming years. However, in order to make funding choices, decisionmakers will need comprehensive information that considers expenditures, obligations, and how long the funds have been available for states to spend. We reiterate that additional clarification and guidance from Labor, as well as effective management strategies, would help states judiciously manage their WIA funds.

We will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix V. Kim Reniero, Rebecca Woiwode, Bill Keller, and Elizabeth Kaufman made significant contributions to this report. In addition, Jessica Botsford and Richard Burkard provided legal support, and Patrick DiBattista provided writing assistance.

Workforce Investment Act: Interim Report on Status of Spending and States’ Available Funds. GAO-02-1074. Washington, D.C.: September 5, 2002.

Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002.

Workforce Investment Act: Coordination of TANF Services through One-Stops Has Increased Despite Challenges. GAO-02-739T. Washington, D.C.: May 16, 2002.

Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413. Washington, D.C.: April 5, 2002.

Workforce Investment Act: Coordination between TANF Programs and One-Stop Centers Is Increasing, but Challenges Remain. GAO-02-500T. Washington, D.C.: March 12, 2002.

Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.

Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.

Workforce Investment Act: New Requirements Create Need for More Guidance. GAO-02-94T. Washington, D.C.: October 4, 2001.

Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001.
The administration has twice proposed reducing the Workforce Investment Act’s (WIA) budget, citing large amounts of states’ unspent funds carried over from the prior year. However, in light of current economic conditions, state and local workforce officials have expressed a need for more funding, not less. GAO was asked to assess whether the Department of Labor’s spending information is a true reflection of states’ available funds. GAO examined the spending rate for states, what Labor does to determine how states are managing their spending, and what factors affect states’ WIA expenditure rates.

States are spending their WIA funds much faster than required under the law, according to GAO’s analysis of Labor’s data. By the end of program year 2001, states had spent virtually all funds allocated in 1999 as well as 90 percent of 2000 funds and 56 percent of 2001 funds. By contrast, Labor’s estimate suggests a slower pace of spending because it is based on all available funds, including those only recently distributed. Even though 44 percent of program year 2001 funds are being carried over into program year 2002, many of these funds may have already been committed at the point of service delivery. Furthermore, because of reporting inconsistencies, Labor’s data do not accurately reflect funds that have been obligated—long-term commitments made by states and local areas on behalf of WIA customers. For a truer picture of available funding, both expenditures and obligations must be considered. But because Labor lacks consistent data on obligations, it focuses only on expenditures to gauge budgetary need and overestimates the funds states have available to spend.

Labor compares state expenditures against its benchmarks to determine how states manage their spending, to target guidance and assistance efforts, and to formulate the next year’s budget request. But Labor does not often communicate these benchmarks to states. Despite active monitoring and additional guidance, state and local officials remain confused by some of Labor’s financial reporting requirements. They seek more definitive guidance and the opportunity to share promising strategies to help them better manage spending. Financial reporting delays result from lengthy spending approval and contract procurement procedures lasting as long as 8 months and from untimely service provider billing. Also, yearly funding fluctuations affect states’ and local areas’ willingness to commit resources for the long term and inhibit workforce system planning. Some states and localities have implemented strategies to overcome these factors and better manage their WIA spending.
CDC partners with the National Institutes of Health to publish Biosafety in Microbiological and Biomedical Laboratories, which provides guidance on biosafety principles and practices for protecting laboratory personnel, the public, and the environment from exposure to biological agents at each biosafety level. BSL-3 laboratories work with indigenous or exotic agents with known potential for aerosol transmission or agents that may cause serious and potentially lethal infections. BSL-4 laboratories work with exotic agents that pose a high individual risk of life-threatening disease by aerosol transmission and for which treatment may not be available.

CDC and APHIS were delegated authority by their respective department Secretaries to regulate the use, possession, and transfer of select agents. Among other oversight activities, CDC and APHIS inspect entities when issuing a new certificate of registration or renewing an existing registration. CDC and APHIS may also conduct interim inspections, such as annual inspections, to assess compliance with select agent regulations. High-containment laboratories may also conduct work with biological agents that have not been designated as select agents and are therefore not registered with the select agent program.

Many federal departments and agencies own and operate high-containment laboratories in the United States and abroad. For example, DOD conducts and supports research on detection, identification, and characterization of biological threats and the development of medical countermeasures against those threats at its high-containment laboratories in the United States and overseas. As part of its bioterrorism preparedness and response program, and in addition to its responsibilities for overseeing other entities’ laboratories under the select agent regulations, CDC also conducts research on potentially high-risk biological agents at its own high-containment laboratories.

DOD and CDC had existing policies and procedures that addressed biosafety and biosecurity within their high-containment laboratories at the time the safety lapses occurred in 2014 and 2015. However, as a result of these lapses—which illustrated multiple breakdowns in compliance with established policies and procedures and inadequate oversight—both DOD and CDC have identified weaknesses in the management of their high-containment laboratories and have begun to take steps to review and revise policies and procedures and to improve monitoring and evaluation activities.

DOD Steps to Address Weaknesses in Laboratory Management

Our ongoing work shows that DOD has begun to take some steps to address weaknesses in the management of its high-containment laboratories but had not yet implemented them before the May 2015 anthrax safety lapse. After an internal reorganization in 2012, DOD began revising its policies and procedures for safeguarding select agents, including security standards for these agents, to streamline policies and improve monitoring and evaluation activities. DOD officials told us that the changes will include new requirements for all service laboratories (within the Air Force, Army, and Navy) registered with the select agent program to submit all inspection reports, such as those from CDC’s select agent office, to DOD senior management regardless of inspection findings. Officials stated that, prior to this new requirement, the laboratories were required to report only what they determined to be significant findings to DOD senior management, which officials stated was no longer acceptable.
DOD expects to finalize the new policy by September 2015; the Air Force, Army, and Navy will have 6 months to become compliant with the updated policy once it is finalized. In addition, DOD officials told us that they have identified further changes that they plan to make to this policy as a result of the May 2015 anthrax safety lapse, which they will make after the current changes are finalized.

DOD plans to collect inspection reports from its select agent-registered laboratories; however, it does not plan to collect and monitor the results of inspections conducted at high-containment laboratories that are not registered with the select agent program but nonetheless conduct research on potentially high-risk biological agents. According to officials, DOD does not conduct department-level inspections of its high-containment laboratories, including those that do not conduct research with select agents and are not registered with the select agent program. Instead, DOD delegates responsibility for inspections to the services, where management responsibility for conducting or monitoring the results of laboratory inspections varies and may not lie with senior-level offices, depending on the service. For example, DOD officials stated that high-containment Air Force laboratories are inspected by an office one level higher than the office in which the laboratory is located. Air Force officials told us that inspectors general at various levels of the service inspect Air Force laboratories. However, in our initial conversations, officials we spoke with did not tell us whether senior Air Force offices monitor the results of laboratory inspections. Our ongoing work will examine service-level responsibilities for conducting and monitoring the results of inspections and the extent to which DOD, CDC’s and APHIS’s select agent offices, the services, and the laboratories communicate and coordinate to address significant findings and resolve deficiencies identified during inspections.

DOD has also begun to address weaknesses in its incident reporting requirements. DOD requires its laboratories to report potential exposures to and possible theft, loss, or misuse of select agents to CDC’s or APHIS’s select agent office, but, according to officials, DOD does not currently track these incidents or the laboratories’ responses to them at the department level. DOD officials told us that the May 2015 anthrax safety lapse is the first incident that DOD has tracked at the department level; the updated biosecurity policy will include requirements for tracking exposures and other biosafety and biosecurity incidents. Our ongoing work will include an examination of the nature of DOD’s tracking and what the department might require from the laboratories or the services as a result of this tracking, such as identifying corrective actions or requiring another type of response.

CDC Steps to Address Weaknesses in Laboratory Management

Our ongoing work shows that CDC has begun to take a number of steps as a result of the recent safety lapses but has not yet completed implementing some agency recommendations intended to address weaknesses in its laboratory management. In October 2014, an internal workgroup established by CDC issued a report on its review of the 2014 safety lapses, which included recommendations to improve agency management of its laboratories and improve biosafety.
Among its findings, the workgroup discovered considerable variation across CDC in the level of understanding, implementation, and enforcement of laboratory safety policies and quality systems. The workgroup’s recommendations addressed weaknesses in six functional areas, three of which are of particular relevance to our ongoing work: (1) policy, authority, and enforcement; (2) training and education; and (3) communications and staff feedback.

Policy, authority, and enforcement. The workgroup noted that CDC lacked overarching biosafety policies, which limits accountability and enforcement. The workgroup also noted that CDC needed clear policies and effective training for leaders and managers to help them implement accountability measures, assure competency, and enforce biosafety adherence throughout agency laboratories. To address these gaps, the workgroup recommended that CDC (1) develop agency-wide policies to communicate biosafety requirements clearly and consistently to all of its laboratories and (2) enforce existing laboratory safety policies by clarifying the positive and negative consequences of adhering or not adhering to them.

Training and education. The workgroup noted that CDC’s training systems, competency and proficiency testing, and time-in-laboratory requirements varied greatly across the agency’s laboratories. The workgroup recommended a comprehensive review and unification of training and education best practices across all CDC laboratories to improve laboratory science and safety.

Communications and staff feedback. The workgroup noted CDC’s need for comprehensive communication improvements to provide a transparent flow of information across the laboratory community regarding laboratory science and safety. The workgroup recommended that CDC include clearer communication flow diagrams, point-of-decision signs, and improved notification systems to distribute information to neighboring laboratories when an event such as a potential exposure occurs.

In addition, in January 2015, an external advisory group completed its review of laboratory safety at CDC and identified recommendations that reinforced the internal workgroup’s findings and recommendations. For example, this advisory group found that CDC lacked a clearly articulated safety mission, vision, or direction and recommended the creation of a biomedical scientist position in the CDC Director’s office.

As we conduct our ongoing review of federal management of high-containment laboratories, we are assessing CDC’s progress in implementing the recommendations from its internal and external workgroups. Our preliminary observations show that CDC has taken some steps to implement workgroup recommendations and address weaknesses in laboratory oversight but has not addressed some recommendations or fully implemented other activities. For example, CDC reported that, in response to the recommendation to develop overarching biosafety policies, it is developing policies for specimen transport and laboratory training. In addition, CDC developed a new procedure for scientists leaving the agency to account for any biological specimens they may have been researching, which the agency rolled out in February 2015. This procedure was among those the workgroup recommended be included in overarching agency policies.
However, as of July 2015, CDC has not developed other agency-wide policies that include comprehensive requirements for laboratory biosafety, such as policies that outline requirements for appropriate laboratory documentation and for laboratories to maintain site-specific operational and emergency protocols, to fully address the workgroup recommendation. To address the recommendation made by the external advisory group to create a senior-level biomedical scientist position, CDC created a new Laboratory Science and Safety Office within the office of the CDC Director and established the position of Associate Director for Laboratory Science and Safety to lead the new office. The primary responsibility of the associate director is to establish additional agency-level policies for laboratory safety and communicate CDC's safety efforts to agency staff. As of July 2015, CDC had not yet filled this position with a permanent staff member. In addition, CDC is taking other steps intended to improve the management of high-containment laboratories but has not yet completed these activities. For example, in its 2013 policy for sample and specimen management, CDC included a directive for the agency to implement an electronic inventory management system. According to officials, CDC rolled out its electronic specimen management system for inventorying biological agents to all of its infectious disease laboratories on March 30, 2015. However, CDC has not made the new system available to all agency laboratories; it expects to do so within the next 2 years. Since 2007, we have reported on several issues associated with high-containment laboratories and the risks posed by past biosafety incidents and recommended improvements for increased federal oversight. Our prior work included recommendations that address (1) the need for government-wide strategic planning for requirements for high-containment laboratories, including assessment of their risks; (2) the need for national standards for designing, constructing, commissioning, operating, and maintaining such laboratories; and (3) the need for federal oversight of biosafety and biosecurity at high-containment laboratories. HHS and other agencies to which the recommendations were directed have conducted some activities to respond but have not fully implemented most of the recommendations. For example, in our 2007 and 2009 reports, we found that the number of BSL-3 and BSL-4 laboratories in the United States had increased across federal, state, academic, and private sectors since the 2001 anthrax attacks, but no federal agency was responsible for tracking this expansion. In addition, in our 2009 report we identified potential biosafety and biosecurity risks associated with an increasing number of these laboratories. We recommended that the National Security Advisor, in consultation with HHS, the Department of Homeland Security, DOD, USDA, and other appropriate federal departments, identify a single entity charged with periodic government-wide strategic evaluation of high-containment laboratories to (1) determine, among other things, the needed number, location, and mission of high-containment laboratories to meet national biodefense goals, as well as the type of federal oversight needed for these laboratories, and (2) develop national standards for the design, construction, commissioning, and operation of high-containment laboratories, including provisions for long-term maintenance, in consultation with the scientific community.
We also recommended that HHS and USDA develop a clear definition of what constitutes exposure to select agents. The administration, HHS, and USDA have addressed some of our recommendations. For example, in 2013, the administration's Office of Science and Technology Policy reported that it had begun to support periodic, government-wide assessments of national biodefense research and development needs and has taken some steps to examine the need for national standards for designing, constructing, commissioning, maintaining, and operating high-containment laboratories. CDC and USDA have developed scenarios to more clearly define what exposures to select agents they consider to be reportable. In our 2013 report and 2014 testimony, we found that no comprehensive assessment of the nation's need for high-containment laboratories, including research priorities and capacity, had yet been conducted. We also found that no national standards for designing, constructing, commissioning, and operating high-containment laboratories, including provisions for long-term maintenance, had yet been developed. In addition, no single federal entity has been assigned responsibility for oversight of high-containment laboratories. In summary, the safety lapses of 2014 and 2015 continue to raise questions about the adequacy of (1) federal biosafety and biosecurity policies and procedures and (2) department and agency monitoring and evaluation activities, including appropriate levels of senior management involvement. Preliminary observations on DOD's and CDC's steps to address weaknesses in managing potentially high-risk biological agents in high-containment laboratories—as well as findings and recommendations from our previous work on high-containment laboratories—continue to highlight the need to consider how best the federal government as a whole and individual departments and agencies can strengthen laboratory oversight to help ensure the safety of laboratory personnel; prevent the loss, theft, or misuse of high-risk biological agents; and help recognize when individual safety lapses that appear to be isolated incidents point to systemic weaknesses, in order to help prevent safety lapses from continuing to happen. Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this statement, please contact Marcia Crosse, Director, Health Care, at (202) 512-7114 or [email protected]; John Neumann, Director, Natural Resources and Environment, at (202) 512-3841 or [email protected]; or Timothy M. Persons, Chief Scientist, at (202) 512-6412 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Mary Denigan-Macauley, Assistant Director; Karen Doran, Assistant Director; Sushil Sharma, Assistant Director; Cheryl Arvidson; Nick Bartine; Colleen Corcoran; Shana R. Deitch; Melissa Duong; Terrance Horner, Jr.; Dan Royer; Elaine Vaurio; and Jennifer Whitworth.

Appendix I: Timeline of Recent Centers for Disease Control and Prevention (CDC) Safety Lapses and Related Assessments

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recent safety lapses at high-containment laboratories raise questions about how federal departments and agencies manage high-risk biological agents. DOD and CDC both conduct research on high-risk biological agents at their respective laboratories. Biosafety and biosecurity practices in these laboratories are intended to reduce exposure to, and prevent loss, theft, or misuse of, biological agents. CDC regulates the possession, use, and transfer of certain biological agents that pose potentially severe threats to public health under the select agent program. This statement summarizes (1) preliminary observations from ongoing GAO work on federal laboratories' biosafety and biosecurity policies and practices and (2) GAO's past work on oversight of high-containment laboratories. To conduct ongoing and past work, GAO reviewed documentation and interviewed federal agency officials, including those from DOD and CDC, about policies and procedures for high-containment laboratories; efforts to monitor compliance and evaluate effectiveness of biosafety and biosecurity policies and practices; and the status of federal oversight activities. Recent safety lapses—including shipments of live anthrax bacteria from the Department of Defense (DOD) to U.S. and international laboratories and potential exposures of Centers for Disease Control and Prevention (CDC) laboratory personnel to live anthrax bacteria—have illustrated multiple breakdowns in compliance with established policies and inadequate oversight of high-containment laboratories. In these laboratories, researchers work with potentially high-risk biological agents that may result in serious or lethal infection in humans. Preliminary observations from GAO's ongoing work show that DOD and CDC have begun to address weaknesses in the management of their high-containment laboratories, but their activities have not yet been fully implemented. GAO's ongoing work will include further examination of the status of DOD's and CDC's activities to improve management of high-containment laboratories. DOD began taking steps to address weaknesses in its management of high-containment laboratories in 2012 by reviewing and revising biosecurity policies and procedures. According to officials, the revised biosecurity policies will require all DOD laboratories that conduct research with certain high-risk biological agents to submit all inspection reports to senior DOD management, which was not previously required. DOD plans to finalize these policies by September 2015. DOD also plans to make further changes to these policies as a result of its assessment of the May 2015 anthrax incident, after the first set of revisions is finalized. DOD has also begun to track biosafety and biosecurity incidents at the senior department level, such as potential exposures to or misuse of biological agents, which it had not done prior to the May 2015 anthrax incident. DOD officials said the May 2015 incident is the first incident that DOD has tracked at the senior department level. CDC also began taking steps to address weaknesses identified in internal and external working group assessments of the June 2014 anthrax incident and other safety incidents but has not yet completed implementing some recommendations intended to improve its laboratory oversight. For example, an internal workgroup recommended that CDC develop agency-wide policies to provide clear and consistent requirements for biosafety for all agency laboratories. 
In response, CDC developed a specimen transport policy but has not developed other agency-wide policies, such as requirements for laboratory documentation and emergency protocols. Since 2007, GAO has reported on issues associated with high-containment laboratories and recommended improvements for federal oversight. GAO's prior work recommended the establishment of a single federal entity to (1) conduct government-wide strategic planning for requirements for high-containment laboratories, including assessment of their risks, and (2) develop national standards for designing, constructing, commissioning, operating, and maintaining such laboratories. Federal departments to which GAO's recommendations were addressed agreed with them and have conducted some activities to respond but have not implemented the recommendation to establish a single federal entity with responsibility for oversight of high-containment laboratories. GAO has previously made recommendations to agencies to enhance biosafety and biosecurity. Because this work is preliminary, GAO is making no new recommendations at this time. GAO shared preliminary observations from this statement with DOD and CDC and incorporated comments as appropriate.
Russia possesses the world's largest declared chemical weapons stockpile, which is stored at seven sites across the country (see fig. 1). When declared in 1998, the Russian stockpile included 32,500 metric tons of nerve agents and 7,500 metric tons of blister agents. As of March 2006, Russia had destroyed about 1,158 metric tons of blister agents, about 3 percent of its stockpile. Under the Chemical Weapons Convention (CWC), Russia must destroy all of its chemical weapons by the extended deadline of 2012. The CWC is a multilateral arms control treaty that bans the development, production, stockpiling, transfer, and use of chemical weapons and requires the destruction of existing chemical weapons stocks. Until destroyed, chemical weapons remain a proliferation threat. In 1992, the United States agreed to assist the Russian government in eliminating its chemical weapons stockpile. The United States has committed to fund the design, construction, equipment acquisition and installation, systems integration, training, and start-up of the Shchuch'ye facility. When completed, the facility will house about 100 buildings and structures, including the destruction buildings, where chemical munitions are destroyed; the administration building, where the destruction process is controlled; and support buildings such as the boiler house, which provides heat to the entire facility. As originally planned, the facility's construction was expected to begin in March 2001 and to be completed in 2005. However, a 2-year congressional freeze on funding postponed the start of construction until March 2003. DOD's Defense Threat Reduction Agency (DTRA) manages the implementation of the Cooperative Threat Reduction (CTR) program. To construct the Shchuch'ye facility, DTRA, through the U.S. Army Corps of Engineers, the contract manager for the project, has contracted with Parsons, which in turn subcontracts the design and construction work to Russian contractors. Contracts are executed, managed, and reviewed in accordance with DOD regulations and the Federal Acquisition Regulation (FAR). Subcontractors submit bids in response to Requests for Proposal (RFP) issued by Parsons. Parsons then awards the subcontract on the basis of safety records, past performance, quality of work, price, and other factors. After awarding these contracts, Parsons works with the subcontractors to conduct technical evaluations of the schedule and cost of the work. CTR assistance will finance the construction of all buildings and structures on site, except for one. The Russian Federation has agreed to fund the construction of a second destruction building (Building 101A) nearly identical to Building 101, the U.S.-funded destruction building. Russia is also funding the construction of utilities (gas, electricity, water) needed to operate the facility and to support the local community. Since 1992, Congress has passed 27 laws addressing the CTR program. The legislation includes various DOD requirements for CTR funding, conditions on CTR expenditures, and mandates to report on the implementation of the CTR program. Some legislative provisions apply to the entire CTR program; others are directed at the Shchuch'ye project, including a requirement for a presidential certification that the project is in the U.S. national security interest. The President's certification authority and the waiver of a prior prohibition on funding chemical weapons destruction in Russia expire on December 31, 2006.
In addition, Congress has conditioned funding for the Shchuch'ye facility on the Secretary of Defense's certification that, among other conditions, Russia has allocated at least $25 million to eliminating its chemical weapons and has developed a practical plan for destroying its chemical weapons stockpile. Since our last visit to the Shchuch'ye site in 2003, we found that Parsons and DOD had made progress in constructing the facility. Several support buildings, such as the fire station, worker housing, and warehouse, had been completed, and many of the other structures, including the administration/cafeteria building, the processing building, and storage buildings, were well under construction. However, key buildings had fallen behind schedule, affecting the facility's overall cost and schedule. Uncertain progress of Russian construction at the facility and on its infrastructure, an unpredictable Russian operating environment, and assorted technical issues could continue to affect the project's cost and schedule. Furthermore, the failure of Parsons to develop and implement a usable earned value management (EVM) system has limited DOD's efforts to oversee project schedule and cost. During our visit to the Shchuch'ye site in November 2005, we observed substantial construction progress compared with our visit in November 2003. In 2003, the site consisted mainly of concrete foundations for the destruction buildings, with only the specialist camp and warehouse under construction. By 2005, however, the support structures of many buildings had been built, and several buildings were at or near completion, including the specialist camp, warehouse, gas rescue station, and fire station. (Fig. 2 shows the completed fire station.) Also under construction were the boiler house and the administration/cafeteria building, seen in figure 3. The concrete outer shells of Building 101 and the administration/control building had been completed. While Building 101 was still open to the elements and contained no inner walls, Russian subcontractors were installing outlets and control panels inside the drywall of the administration building. (See fig. 4 for a comparison of the construction work completed on Building 101 in November 2003 and November 2005.) We also observed piping and wiring being installed above ground for site-wide electrical, heat, and water utilities. Despite such progress, the chemical weapons destruction facility (CWDF) project has not met scheduled milestones, primarily because of a delay in awarding the contract for the completion of the CTR-funded destruction building (Building 101), pictured in figure 4. In January 2005, DOD estimated that the CWDF would cost $1.039 billion and be transferred to the Russian Federation by July 2009. However, in March 2006, DOD officials stated that they were unable to estimate when the entire facility will be completed and at what cost until they award a contract for the completion of Building 101. As of February 2006, DOD estimated that the construction of the entire CWDF was about 40 percent complete, compared with the more than 52 percent scheduled for completion at that time. As indicated in figure 5, the construction of certain key structures is behind schedule, including the destruction building (Building 101), the control building (administration building), the boiler house, and the water circulation building. Building 101 is on the "critical path"; that is, delays in finishing the building will prolong construction on other parts of the Shchuch'ye facility.
Although the exterior shell of Building 101 is on schedule, the award of the construction contract for the remainder of Building 101 is behind schedule. Parsons had planned to award the subcontract for the balance of the building in June 2005, but it may not be awarded until summer 2006. Since October 2005, Parsons has incurred costs for personnel salaries, rent, and transportation of more than $3 million per month, which will continue until the subcontract is awarded. Where possible, Parsons has reduced or delayed recruitment of personnel planned for management of Building 101. Construction activity is still ongoing at other buildings throughout the site. The delay in awarding the contract for the remainder of Building 101 has affected the overall schedule for completing the facility's construction. As part of its program management, DOD estimates dates for key project milestones at Shchuch'ye. These include a milestone schedule with objective (ideal) completion dates, threshold (latest acceptable) dates, and estimated completion dates for key activities. As of May 2006, however, DOD does not expect to meet key milestone dates for the CWDF. According to this schedule (as shown in fig. 6), construction of the facility will be delayed by about 1 year, testing using simulated nerve agent will begin some 15 months later than planned, and live agent demonstration will be delayed by about 8 months. While DOD estimates that it will turn over the Shchuch'ye facility to the Russian government in December 2009, such an estimate appears optimistic given the construction and other unknown delays that DOD may encounter in testing the facility with simulated and live nerve agent. DOD officials stated that these milestones may slip even further. The delays in constructing key buildings at the CWDF result from problems Parsons and DOD have had with Russian subcontractors, including the bankruptcy of one major subcontractor, problems in soliciting adequate bids, and difficulty maintaining a competitive-bidding process. First, the 2005 bankruptcy of the Russian construction subcontractor Magnitostroy delayed construction of key buildings. This company was cited during the initial source selection process in 2000 to 2001 for its technical abilities, logistical capability, competitive pricing, and financial responsibility and was the first construction subcontractor to work on the Shchuch'ye project. According to DOD and Parsons officials, Magnitostroy enjoyed the strong support of the Russian government. However, it was discovered in 2005 that a senior executive had embezzled millions of dollars from the company in 2003. As a result, the company was unable to afford sufficient labor to complete its work at the site, according to DOD and Parsons officials. The most serious delay involved the construction of the administration building—the command building that will control the destruction process. Although scheduled to be complete at the time of our visit in November 2005, construction of the administration building was only about 36 percent complete. By January 2006, Parsons had assumed direct responsibility for the construction of the building and had divided most of the remaining work among Magnitostroy's subcontractors. Similarly, at that time, two other Magnitostroy buildings were behind schedule, requiring Parsons to extend their completion dates. Given these delays, Parsons has not provided Magnitostroy with RFPs on any new construction packages.
Second, DOD and Parsons officials stated that Russian subcontractors had not provided detailed cost and scheduling information in their bids. Although Parsons cited incomplete bids as the cause of the delay, DOD criticized Parsons for a "lack of urgency" in resolving the Building 101 bid issue. Parsons had particular difficulty soliciting adequate bids on the construction package for the work remaining on Building 101. This construction package will complete the building's physical structure and install the equipment and processing systems needed to destroy the chemical munitions. According to DOD and Parsons officials, it is the largest, most complex construction package of the CWDF project. After Magnitostroy's bankruptcy, two other contractors, Spetzstroy and Stroytransgaz, bid on the remaining Building 101 construction package. According to DOD officials, their bids arrived after the June 2005 deadline and did not include adequate cost and schedule data. Despite a deadline extension, neither subcontractor submitted a complete bid until the end of December 2005. At that time, only Spetzstroy submitted a responsive bid. Its bid price, however, was $239 million over DOD's budget. Third, the small pool of approved Russian subcontractors has made it difficult to maintain a competitive-bidding process. According to DOD, the subcontractors for the CWDF are selected through a series of joint selection committees. The Russian government develops a list of approved companies that Parsons and a joint commission comprising DOD and Russian government officials examine. In the initial round of subcontractor selections in 2000 to 2001, Magnitostroy was the first CWDF subcontractor chosen. A second round of selections in 2003 added four more subcontractors: Promstroy, Spetzstroy, Stroyprogress, and Stroytransgaz. According to DOD officials, before Magnitostroy's 2005 bankruptcy, Magnitostroy, Stroytransgaz, and Spetzstroy were the only subcontractors that were capable of completing larger construction efforts. The small number of Russian contractors discouraged effective competition and limited the number of construction packages that could be awarded. In March 2005, DOD requested that the Russian Federation expand the subcontractor pool to ensure completion of the Shchuch'ye facility on time and within budget. The Russian government added one small specialty subcontractor, Vneshstrojimport, but did not restart the selection process to find a replacement for Magnitostroy. In December 2005, Stroytransgaz withdrew from competition, and the sole remaining contractor, Spetzstroy, submitted a bid for $310 million to complete Building 101. However, DOD had budgeted only $71 million for the construction package. To reconcile the cost difference, DOD paid for an independent cost analysis that validated the original Parsons estimate of $56 million. Parsons and DOD also sought the assistance of the Russian government to negotiate with Spetzstroy to lower its bid. When negotiations failed to produce a compromise, Parsons canceled the RFP for the balance of Building 101 on March 2, 2006. In March 2006, DOD resubmitted a request for more subcontractors and provided the Russian government with a list of five potential companies, three of which were added to the pool. In April 2006, Parsons issued a new RFP for the remainder of Building 101.
According to DOD officials, Parsons has conducted and will continue to conduct weekly meetings with the bidders and make personnel available for questions and clarifications regarding the RFP. The cost and schedule of the Shchuch'ye facility are subject to continuous risks. The Russian Federation's uncertain progress in completing work on Building 101A and required utilities could delay the final system testing for the CWDF. The Russian government's failure to complete promised social infrastructure could generate local opposition to the CWDF. DOD and Parsons must also operate in an unpredictable Russian environment with changing legal and technical requirements that could directly affect schedule and cost. Russian Federation progress in completing Building 101A, as well as the industrial and social infrastructure surrounding the CWDF, remains uncertain. According to DOD officials, the Russian government's method of construction scheduling contains few itemized tasks, making it difficult to accurately gauge construction progress and uncover issues that could cause delays. Although DOD and Parsons monitor Russian Federation construction progress through monthly progress reports and project site visits, the Russian government has not always followed jointly agreed upon schedules. DOD and Parsons officials remain concerned that systemization timelines could be affected if both destruction buildings are not completed at the same time. Furthermore, Russian progress in constructing utilities for the CWDF and the local community has produced mixed results. For instance, we observed that the Russian government has installed only one of three power lines needed to operate the CWDF. According to Parsons and DOD officials, although the Russian government completed the new water line to the CWDF and the town of Shchuch'ye in 2004, the more water the CWDF uses, the less the town has available. This may lead to competition for water once the facility begins consuming substantially more water during systems testing and operation. Furthermore, when the Russian government constructed a new gas line to the CWDF and through the town of Shchuch'ye, it did not connect the line to local homes as promised. A local Shchuch'ye official stated that most local residents cannot afford to pay for connection to the main gas line and expressed concerns that the Russian government will not fulfill its obligations to the local population. To allay public concerns that may affect the CWDF, DOD uses public outreach offices to conduct opinion polls and educate the local populace on the CWDF. DOD and Parsons must contend with an unpredictable Russian business environment that can affect cost and schedule through unexpected changes in Russian legal, technical, and administrative requirements. New regulatory requirements have affected the CWDF, in one case stopping work on a building until it could be redesigned to comply with new Russian electrical codes. In November 2005, a new Russian regulatory agency—the Federal Service for Ecological, Technological and Nuclear Oversight (Rostekhnadzor)—performed a surprise audit at the Shchuch'ye CWDF. The agency cited Parsons with noncompliance in several areas, including environmental and industrial safety reviews, permits, licenses, and certifications. While Parsons and DOD officials were not aware of these requirements, they agreed to implement corrective actions.
As of March 2006, Parsons had resolved 82 percent of the Rostekhnadzor audit findings and was working to mitigate the remainder. DOD continues to negotiate with Rostekhnadzor to meet the requirements of Russian law and is working with the Russian government to identify feasible solutions. Additionally, Parsons has contracted with consultants that specialize in helping companies conform to Russian fire, ecological, and industrial safety regulations at the local and national levels. Furthermore, DOD and Parsons must review new technical requirements raised by Russian government officials. According to DOD officials, some new requirement requests are justified because they relate to the operation of the CWDF, while most others are attempts to transfer cost and risk from Russia to the United States. For example, as a result of code and space deficiencies, DOD accepted the Russian requirement for an additional laboratory building on site, construction of which will increase the project's cost by an additional $12 million. However, DOD officials have resisted approving Russian requests that they believe are unnecessary or that fall within Russian responsibilities at the site. DOD refused to allow the Russian government to incorporate a new machine into the destruction process, which would have required significant redesign and testing of the process and would have led to schedule delays and increased project costs. Russian practices regarding long-term visas and value-added tax (VAT) exemptions for equipment have also affected cost and schedule. The Russian government provides most DOD and Parsons personnel with only 6-month visas, requiring workers to temporarily leave the country while their visas are reissued. One DOD official estimated that transportation costs associated with this practice totaled approximately $3 million as of November 2005. However, DOD officials have noticed improvement in how quickly the Russian Federation processes visas. In addition, when the Russian government reorganized in early 2004, the office in charge of Russian customs was dissolved, leaving no agency able to approve the VAT exemptions for more than 6 months. During that time, all equipment shipped from the United States was halted, causing a 3-month slip in the CWDF construction schedule. In late 2004, the Russian Federation established a new VAT office, and equipment delivery resumed. Since that time, DOD has encountered no VAT-related delays. Issues associated with the testing of the CWDF's utilities and automated destruction system (systemization) could further delay the schedule and increase costs. DOD officials identified systemization of the CWDF as the next major challenge after resolving the bid issue for Building 101. Systemization consists of a series of tests to ensure the safety, function, and interoperability of the CWDF's internal systems—i.e., water, gas, electric, heat, and the chemical munitions destruction process. Such testing could be delayed if either destruction building (101 or 101A) or essential utilities are not completed on time. The automated destruction process is complex, involving the drilling, draining, and decontamination of various sizes and types of munitions and the neutralization of the nerve agent they contain. Ensuring that this system works and interfaces properly with the rest of the facility will require the testing and calibrating of roughly 1,000 different processes, according to a DOD official. DOD officials noted that U.S.
experience with destroying chemical weapons has shown that systemization often encounters difficulties and delays and has the potential to increase costs. Furthermore, DOD and Parsons must compete the systemization contract between two Russian subcontractors, Redkino and Giprosintez, selected by the Russian government. Given previous difficulties working with subcontractors, Parsons may experience delays in obtaining adequate and reasonably priced bids. DOD is attempting to mitigate systemization risk by exploring options to test the CWDF's systems using Russian rather than U.S. methods. Although the Shchuch'ye facility is a Russian design, it is currently planned to undergo testing procedures similar to those DOD uses in the United States. According to DOD officials, Russian systemization methods are less involved than U.S. processes, which must adhere to stringent environmental and operating regulations and can take 16 to 18 months to complete. The Russian government, however, systemized its CWDF at Kambarka within 6 to 9 months. While DOD officials caution that each CWDF is unique, given the types of munitions to be destroyed, they have begun exploring whether Russian methods may allow for streamlining and compression of the systemization schedule at Shchuch'ye while still maintaining acceptable safety levels. Parsons and its subcontractors are also testing the automated destruction system equipment before it is installed in Building 101. DOD policy and guidance require the use of EVM to measure program performance. EVM uses contractor-reported data to provide program managers and others with timely information on a contractor's ability to perform work within estimated cost and schedule. It does so by examining variances, reported in contractor performance reports, between the actual cost and time of performing work tasks and the budgeted or estimated cost and time. In September 2004, DOD modified its contract with Parsons, allocating about $6.7 million and requiring the company to apply EVM to the Shchuch'ye project. Parsons was expected to have a validated EVM system by March 2005. As of April 2006, Parsons had not developed an EVM system that provided useful and accurate data to CWDF program managers. In addition, our analysis found that the project's EVM data are unreliable and inaccurate. According to DOD officials, these problems stem in part from Parsons' outdated accounting system. EVM guidance states that surveillance of an EVM system should occur over the life of the contract. DOD has not yet conducted an integrated baseline review (IBR) for the Shchuch'ye project and does not plan to do so until after Parsons awards the subcontract to complete Building 101, possibly in June 2006. In December 2005, a Parsons self-evaluation stated that the EVM system for the CWDF was "fully implemented." In contrast, DOD characterized Parsons' EVM implementation as a "management failure," citing a lack of experienced and qualified Parsons staff. DOD withheld approximately $162,000 of Parsons' award fee due to concerns over the EVM system. In March 2006, DOD officials stated that at that point in implementation, EVM was not yet a usable tool in managing the Shchuch'ye project. DOD officials stated that Parsons needs to demonstrate that it incorporates EVM into project management rather than simply fulfilling a contractual requirement. DOD expects Parsons to use EVM to estimate cost and schedule impacts and their causes and, most importantly, to help eliminate or mitigate identified risks.
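To illustrate the variance arithmetic at the core of EVM: the sketch below shows how cost and schedule variances are computed from the three standard EVM quantities. This is a minimal illustration of standard EVM arithmetic using hypothetical figures; it is not drawn from Parsons' or DTRA's actual system.

```python
# Minimal sketch of standard EVM variance arithmetic (hypothetical figures).
# BCWS: budgeted cost of work scheduled; BCWP: budgeted cost of work performed
# (earned value); ACWP: actual cost of work performed.

def cost_variance(bcwp: float, acwp: float) -> float:
    """Negative means the work performed cost more than budgeted (an overrun)."""
    return bcwp - acwp

def schedule_variance(bcwp: float, bcws: float) -> float:
    """Negative means less work was performed than scheduled (behind schedule)."""
    return bcwp - bcws

# Hypothetical work package, in dollars:
bcws, bcwp, acwp = 500_000, 400_000, 450_000

print(f"Cost variance:     ${cost_variance(bcwp, acwp):,.0f}")      # $-50,000
print(f"Schedule variance: ${schedule_variance(bcwp, bcws):,.0f}")  # $-100,000
```

A program manager watching these variances month to month can spot overruns and slippage early, which is why unreliable inputs, as discussed below, undermine the entire approach.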
Parsons' EVM staff stated that they underestimated the effort needed to incorporate EVM data into the system, train staff, and develop EVM procedures. Parsons officials were also surprised by the number of man-hours required to accomplish these tasks, citing a high level of staff turnover as contributing to the problem. According to the officials, working in a remote and isolated area caused many of the non-Russian employees to leave the program rather than extend beyond their initial tour of duty. Based on our review of Parsons' monthly EVM data for September 2005 through January 2006, we found that the data are inaccurate and unreliable and that Parsons is exercising poor quality control over its EVM data. Specifically, we discovered numerous instances where data were not added properly for scheduled work; Parsons' EVM reports, therefore, did not accurately capture data needed by project management to make informed decisions about the Shchuch'ye facility. For example, we found that from September 2005 through January 2006, Parsons' EVM reports contained addition errors that did not capture almost $29 million in actual costs for the CWDF project (an illustrative version of this kind of arithmetic check appears at the end of this passage). Such cost omissions and other errors may cause DOD and Parsons project officials to overestimate the amount of project funding available. Moreover, we found several instances where the accounting data were not allocated to the correct cost accounts, causing large cost overruns and underruns. This problem occurred because the accounting data were placed in the wrong account or Parsons' accounting system was unable to track costs at all levels of detail within EVM. A Parsons official stated that the company was taking measures to identify these inaccuracies and allocate the accounting data to the proper cost accounts. These problems, however, have led to numerous accounting errors in the EVM reports. Such mistakes underestimate the true cost of the CWDF project by ignoring cost variances that have already occurred. Cost variances compare the earned value of the completed work with the actual cost of the work performed. Until Parsons fixes its accounting system, manual adjustments will have to be made monthly to ensure that costs are properly aligned with the correct budget. Such continuous adjustments mean that the system consistently reflects an inaccurate status of the project for Parsons and DOD managers. (For specific examples of our findings regarding Parsons' EVM data, see app. II.) EVM guidance states that surveillance of an EVM system should occur over the life of the contract to guarantee the validity of the performance data provided to the U.S. government. Initial surveillance activities involve performing an IBR of a project within 6 months of awarding a contract and as needed throughout the life of a project. DOD and Parsons have not yet conducted an IBR for the Shchuch'ye project. Program managers are expected to use EVM reports that have been validated by an IBR. Without verifying the baseline, monthly EVM reporting, which tracks project work against a set budget and schedule, is neither meaningful nor valid. Parsons and DOD officials explained that while an IBR has been discussed, one will not be conducted until Parsons awards a contract for completing Building 101. DOD officials estimate that the award process for this contract may not be completed until summer 2006, approximately a year later than planned.
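The addition errors described above are the kind that a simple arithmetic consistency check can surface. The sketch below illustrates the idea with invented account names and figures; it is not Parsons' report format, only a hedged example of summing account-level actual costs and comparing the result with a report's stated total.

```python
# Illustrative consistency check on monthly EVM report data: account-level
# actual costs should sum to the report's stated total. All account names and
# dollar figures here are hypothetical.

reported_accounts = {
    "site preparation": 1_200_000,
    "building shell":   3_400_000,
    "utilities":          800_000,
}
reported_total = 5_300_000  # total actual cost as printed in the report

computed_total = sum(reported_accounts.values())  # 5,400,000
understated = computed_total - reported_total

if understated > 0:
    # Addition errors of this kind left almost $29 million in actual costs
    # uncaptured in Parsons' reports for September 2005 through January 2006.
    print(f"Report total understates summed actual costs by ${understated:,}")
```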
According to Parsons, as of January 2006, about $66 million of scheduled work had not been completed as planned, due to the delay in awarding the subcontract for the balance of Building 101. DOD officials stated that while they recognize the importance of conducting surveillance over an EVM system, they currently are focused on the immediate need of establishing a usable EVM system on which to perform surveillance. Furthermore, DOD requires all EVM systems to undergo a compliance audit or "validation" conducted by the Defense Contract Management Agency (DCMA) with assistance from the Defense Contract Audit Agency (DCAA). DCAA found that Parsons' accounting process was inadequate. A DCAA official on the validation team stated that Parsons is relying on an outdated accounting system that has difficulty capturing actual costs for the Shchuch'ye project and placing them into appropriate cost categories. The DCAA official stated that Parsons management should have discovered such accounting errors before the EVM report was released to DOD. DCAA therefore questioned whether Parsons can generate correct accounting data and recommended that Parsons update its accounting system. As of April 2006, DCMA and DCAA had not yet validated Parsons' EVM system. (For more information regarding DCMA's and DCAA's assessments of Parsons' EVM system, see app. II.) Since our report in March 2004, the Russian government has approved a plan to destroy its chemical weapons stockpile and has begun financing significantly more of its own destruction activities. However, as of April 2006, the Russian government's progress in destroying its chemical weapons stockpile has been limited, and its destruction plan may be overly ambitious and overly reliant on international assistance. We reported in early 2004 that Russia's lack of a credible chemical weapons destruction plan had hindered destruction activities. However, in October 2005, the Russian government approved a plan for destroying its entire chemical weapons stockpile by the CWC-established deadline of 2012. The October 2005 plan calls for using seven destruction facilities to eliminate the entire chemical weapons stockpile. Destruction of the chemical weapons stockpile at Gorniy was completed in December 2005. As of March 2006, only the facility at Kambarka is operational. The plan outlines the construction of the remaining five sites, including Shchuch'ye, where nerve agent is to be eliminated. According to the Russian plan, the blister agents stored at Gorniy and Kambarka were to be destroyed first. In December 2005, the Russian government completed its destruction efforts at Gorniy and began destroying chemical weapons at Kambarka. In accordance with the plan, destruction will next be focused on nerve agents. The storage sites near Leonidovka, Maradykovskiy, and Pochep house large nerve-agent munitions, while those near Shchuch'ye and Kizner store smaller munitions. Table 1 depicts the schedule for Russian chemical weapons destruction facilities. While the Russian plan indicates that the CWDF at Shchuch'ye will be operational by 2008, DOD estimates that the facility may not be operational until 2009. Furthermore, the Russian government's priority is to destroy nerve agents contained in large munitions, because destroying the larger-sized munitions first would allow Russia to meet its CWC destruction deadlines faster. Accordingly, the destruction of smaller munitions at Shchuch'ye may become less of a priority for the Russian government.
However, the Russian government's destruction plan to eliminate all chemical weapons by 2012 may be unrealistic. It depends on the construction of seven facilities, but only two have been built, two are under construction, and three have not been started. Although the CWDF at Maradykovskiy may be operational in mid-2006, the Shchuch'ye facility is still under construction, and only minimal work has begun at the three remaining sites of Kizner, Leonidovka, and Pochep. According to its CWC destruction schedule, Russia must eliminate 20 percent of its chemical weapons stockpile by April 2007. As of March 2006, it had eliminated about 3 percent at Gorniy and Kambarka. Between April 2007 and April 2012, Russia must eliminate the remainder of its chemical weapons stockpile (about 80 percent) at five destruction facilities that have yet to be completed. It will be extremely difficult for the Russian government to complete and operate the last three facilities by its proposed schedule and meet its CWC commitment to destroy all stockpiles at these sites by the extended deadline of April 2012; a rough arithmetic check below illustrates the scale of the task. Similarly, in April 2006, DOD announced that the United States will not be able to meet the CWC extended destruction deadline of 2012. DOD estimates indicate that about 66 percent of the U.S. declared chemical weapons stockpile will be destroyed by April 2012. As of March 2006, the United States had destroyed about 36 percent of its declared stockpile. In the United States, DOD had five operating chemical weapons destruction facilities as of March 2006, and two additional facilities were being designed. According to the Russian destruction plan, the estimated cost for eliminating the entire Russian chemical weapons stockpile is more than 160 billion rubles—about $5.6 billion. Over the past 6 years, Russia has substantially increased its annual funding for its chemical weapons destruction efforts. In 2000, the Russian government spent about $16 million for chemical weapons destruction. By 2005, it had spent almost $400 million. For 2006, the Russian government plans to spend more than $640 million. For chemical weapons elimination at Shchuch'ye, the Russian government has budgeted about $144 million since fiscal year 2000. Russian funding at the site supports construction of one of the two destruction buildings (Building 101A), as well as the industrial and social infrastructure (utilities, roads, schools, etc.) needed to support the facility's operations. The Russian government will need continued international assistance to complete destruction of its chemical weapons stockpile. The United States, Canada, Germany, Italy, the United Kingdom, and other donors have committed almost $2 billion in assistance, with the United States committing the largest amount, about $1.039 billion. The Russian government estimates it will need about $5.6 billion to eliminate its entire stockpile. All U.S. assistance for destroying Russian chemical weapons is being provided to the CWDF at Shchuch'ye. As of March 2006, other international donors, such as Canada and the United Kingdom, are also providing significant assistance to Shchuch'ye to help fund the Russian destruction building (Building 101A) and the infrastructure needed to support the facility's operation. Although Italy is providing some funding for Shchuch'ye infrastructure, most of its contributions are committed to the construction of the CWDF at Pochep.
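To make the milestone arithmetic above concrete, the rough check below uses only figures reported in this statement: a declared stockpile of about 40,000 metric tons (32,500 of nerve agents plus 7,500 of blister agents), about 1,158 metric tons destroyed as of March 2006, and CWC milestones of 20 percent by April 2007 and 100 percent by April 2012. It is a back-of-the-envelope illustration only, not a DOD or Russian government projection.

```python
# Rough check of Russia's CWC destruction milestones, using figures reported
# in this statement. Illustrative arithmetic only, not an official projection.

stockpile_mt = 32_500 + 7_500   # declared nerve + blister agents, metric tons
destroyed_mt = 1_158            # destroyed as of March 2006 (~3 percent)

# The 20 percent milestone falls due in April 2007, about 13 months later:
milestone_mt = 0.20 * stockpile_mt
shortfall_mt = milestone_mt - destroyed_mt
print(f"To reach 20% by April 2007: {shortfall_mt:,.0f} metric tons in "
      f"~13 months (~{shortfall_mt / 13:,.0f} MT per month)")

# The remaining ~80 percent falls due between April 2007 and April 2012,
# a span of about 60 months:
remainder_mt = 0.80 * stockpile_mt
print(f"April 2007 to April 2012: {remainder_mt:,.0f} metric tons "
      f"(~{remainder_mt / 60:,.0f} MT per month) at facilities not yet built")
```

Sustaining destruction rates on the order of 500 metric tons per month, far above the cumulative pace achieved through March 2006, underscores why the schedule appears difficult to meet.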
Russia has been relying on German assistance to destroy its stockpile of blister agents at the Gorniy and Kambarka destruction facilities. Table 2 describes the commitments and types of assistance provided by international donors. To facilitate additional international contributions, the Russian government has provided potential donors a list of CWDF projects requiring assistance. Primarily, assistance is needed for the construction of the destruction facilities at Kizner, Leonidovka, and Pochep, as well as related infrastructure support. The Russian government is also seeking international funding to support operations at the Kambarka and Maradykovskiy facilities. Until destroyed, Russia's stockpile of chemical weapons—especially nerve agents contained in small munitions, such as those stored at Shchuch'ye—remains a proliferation threat, vulnerable to diversion and theft. Since 1992, the United States has been providing CTR assistance for the CWDF at Shchuch'ye to help reduce the threats posed by these weapons. Originally designed as a pilot facility to "jump-start" Russian chemical weapons destruction efforts, Shchuch'ye may no longer be a priority for the Russian government. Delays in implementing the Shchuch'ye project over the past 14 years led the Russian government to begin destruction efforts at other sites. Disagreements between the United States and Russia over the types of munitions to destroy and how to destroy them, negotiations to resolve outstanding issues, restrictions on U.S. funding, and difficulties with Russian subcontractors, among other factors, have delayed the Shchuch'ye facility's completion and increased its costs. Although progress has been made on the physical construction of the facility over the past 3 years, DOD continues to encounter numerous challenges that affect the completion of the Shchuch'ye CWDF. Furthermore, DOD currently cannot reliably estimate when the Shchuch'ye facility will be completed and at what cost. Parsons' EVM system, implemented to help manage the schedule and cost of the Shchuch'ye project, contains unreliable and inaccurate data; thus, DOD cannot use it as a management tool. Even with significant international assistance at Shchuch'ye and other destruction facility sites, the Russian government will likely fail to destroy its entire chemical weapons stockpile by the CWC extended deadline of 2012. Unreliable EVM data limit DOD's efforts to accurately measure progress on the Shchuch'ye project and estimate its final completion date and cost. As such, we recommend that the Secretary of Defense direct DTRA, in conjunction with the U.S. Army Corps of Engineers, to take the following three actions: ensure that Parsons' EVM system contains valid, reliable data and that the system reflects actual cost and schedule conditions; withhold a portion of Parsons' award fee until the EVM system produces reliable data; and require Parsons to perform an IBR after awarding the contract for completing Building 101. DOD provided comments on a draft of this report, which are reproduced in appendix III. DOD concurred with our recommendations that DTRA, in conjunction with the U.S. Army Corps of Engineers, ensure that Parsons' EVM system contains valid, reliable data and reflects actual cost and schedule conditions and require that Parsons perform an IBR after awarding the contract for completing Building 101. DOD partially concurred with our recommendation that a portion of Parsons' award fee be withheld until the EVM system produces reliable data.
DOD stated that it had withheld a portion of Parsons' award fee in a previous period. DOD further noted that an award fee must be based on the merits of the contractor's performance and that, until the performance period is completed, it cannot prejudge Parsons' performance and predetermine the withholding of award fees based on our recommendation. DOD also provided technical comments, which we have incorporated where appropriate. The Department of State was provided a draft of this report but did not provide comments. We are providing copies of this report to the Secretaries of Defense and State and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To assess the progress of the Shchuch'ye facility, we collected and analyzed Department of Defense (DOD) and Parsons Global Services, Inc. (Parsons) contractor documents and met with relevant officials. Specifically, we met with officials from the Cooperative Threat Reduction (CTR) Policy Office, the office of the Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense Programs, the Defense Threat Reduction Agency (DTRA), and the U.S. Army Corps of Engineers. Within DTRA, we obtained information from the Director of the Cooperative Threat Reduction Directorate, as well as the program and project managers for chemical weapons elimination. We also met with officials from the Threat Reduction Support Center in Springfield, Virginia. In addition, we met with officials from the DTRA office and the Chemical Weapons Destruction Support Office in Moscow. We traveled to the Russian Federation to observe construction of the CTR-funded chemical weapons destruction facility at Shchuch'ye. At Shchuch'ye and Chelyabinsk, we met with personnel from Parsons and the U.S. Army Corps of Engineers. In Moscow, we met with Russian government officials at the Federal Agency for Industry, the Ministry of Foreign Affairs, the Duma, and the Accounts Chamber of the Russian Federation. We also analyzed the reliability of the earned value management (EVM) data for the Shchuch'ye project. Specifically, we examined Parsons' EVM reports for a 5-month period, September 2005 to January 2006, to assess the Shchuch'ye destruction facility's cost and schedule. We checked the EVM data for mathematical errors or inconsistencies that would render the data unreliable. We interviewed officials from the Defense Contract Management Agency (DCMA), the Defense Contract Audit Agency (DCAA), and Parsons to better understand the anomalies in Parsons' EVM data and determine what outside surveillance was being done to ensure the validity of the EVM data. We also used a data collection instrument to obtain detailed information from DOD on the Shchuch'ye project, including the contract, program management activities, independent cost estimates, risk analysis, and award fees.
To obtain information on Russian elimination efforts and international donor assistance for Russian chemical weapons destruction, we met with U.S., Russian, and international donor officials and obtained copies of pertinent documents, including the Russian chemical weapons destruction plan. We obtained information from officials in the Bureau of European and Eurasian Affairs and the Bureau of International Security and Nonproliferation at the Department of State. At DOD, we met with officials and acquired documents from the Office of the Secretary of Defense for Cooperative Threat Reduction Policy. In Moscow, we obtained information from Russian government officials at the Accounts Chamber, the Federal Agency for Industry, the Ministry of Foreign Affairs, and the Duma. At Shchuch'ye, we spoke with a local government official involved with public outreach efforts. We obtained data from the U.S., Russian, British, Canadian, and German governments, as well as the G-8 Global Partnership, on the assistance committed and provided for Russian chemical weapons destruction efforts. To assess the reliability of these data, we corroborated other nations' data wherever possible, comparing and cross-checking documents and information. We interviewed officials from the United States, Canada, Germany, the United Kingdom, and the Russian Federation. We determined that data on funding and assistance provided for Russian chemical weapons destruction were sufficiently reliable for the purposes of this report. We also determined that data on the status of Russian and U.S. chemical weapons elimination were sufficiently reliable for the purposes of this report. The information on Russian law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We performed our work from June 2005 through May 2006 in accordance with generally accepted government auditing standards. Measuring and reporting progress against cost and schedule commitments is vital to effective program management. To measure program performance, DOD requires the use of EVM, a concept the department has employed since the 1960s. Through EVM, program offices can determine a contractor's ability to perform work within cost and schedule estimates by examining variances between the actual and estimated costs and time to perform work tasks. EVM offers many benefits when done properly and serves as a means to measure performance and identify deviations from planned activities, allowing program managers to mitigate risks. Based on our analysis of Parsons' EVM data and the findings of DCMA and DCAA, the data are inaccurate and unreliable. Without reliable schedule and cost estimates, DTRA has limited means to accurately assess when the Shchuch'ye facility will be completed and at what cost. In reviewing Parsons' monthly EVM data for September 2005 through January 2006, we discovered numerous instances in which data for scheduled work did not add properly. Further, Parsons' EVM reports are not capturing all of the data needed by project management to make informed decisions about the Shchuch'ye facility. Such errors may cause DOD and Parsons project officials to overestimate the amount of funding available to cover future risks, such as the systemization of the Shchuch'ye facility. Moreover, we found several instances where the accounting data were not allocated to the correct cost accounts, causing large cost overruns and underruns.
In these cases, the accounting data were placed in the wrong account, or Parsons' accounting system was unable to track costs at the level of detail EVM requires. Parsons officials stated that measures are being taken to identify these inaccuracies and allocate the accounting data to the proper cost accounts. These problems, however, have led to numerous accounting errors in Parsons' EVM reports.

Furthermore, in reviewing Parsons' EVM reporting data, we found several errors that a Parsons official attributed to the company's accounting system. For instance, current EVM period data are not accurate due to historical data corruption, numerous mistakes in accounting accruals, and manual budget adjustments. Such mistakes understate the true cost of the CWDF project by ignoring cost variances that have already occurred. For example, the Moscow project management task was budgeted at a cost of $100,000. According to the January 2006 EVM report, the work has been completed, but the actual cost was $2.6 million, resulting in an overrun of approximately $2.5 million. The EVM report, however, fails to capture this $2.5 million overrun (the sketch following this discussion works through the variance). Such data are misleading and skew the picture of the project's overall performance. As indicated in table 3, this is just one example of accounting system errors. In the case of the Moscow project management task, Parsons officials explained that this error occurred because the budget for this account was misaligned and, therefore, caused a false cost variance. Parsons officials stated they would be issuing an internal change order to correct this mistake. Until Parsons' management updates the company's accounting system, these types of manual adjustments will have to be made through monthly change orders to ensure that costs are properly aligned with the correct budget. Such continuous adjustments do not allow the EVM system to provide timely and accurate information to Parsons and DOD managers.

In addition, DOD guidance and best practices require program managers to conduct an integrated baseline review (IBR) as needed to ensure that the baseline for tracking cost, technical information, and schedule status reflects (1) all tasks in the statement of work, (2) adequate resources in terms of staff and materials to complete the tasks, and (3) integration of the tasks into a well-defined schedule. Program managers are required to use EVM reports that have been validated by an IBR. Without verifying the baseline, monthly EVM reporting, which tracks project work against a set budget and schedule, is insufficient and invalid. Parsons and DOD officials explained that while an IBR has been discussed, one will not be conducted until the contract for completing Building 101 has been awarded. DOD officials estimate that the contract-award process may not be completed until June 2006, resulting in a 1-year delay. Such a delay not only prevents Parsons from holding an IBR, but it also jeopardizes DOD's ability to accurately estimate the cost and schedule to complete the CWDF program. Until the costs have been negotiated for building the remainder of Building 101, it is unclear whether the CWDF at Shchuch'ye will be completed on time and within budget. DTRA officials explained that if the costs for this effort exceed the original estimate, they will have to cover the shortfall using management reserve funds. Using management reserve funds for construction leaves less contingency funding available to complete and test the Shchuch'ye facility.
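The Moscow project management task provides a concrete check of the arithmetic an EVM report should capture. The sketch below is a hypothetical reconstruction using only the budget and actual-cost figures cited above; it is not Parsons' EVM system, but it shows the variance the January 2006 report should have recorded.

    # Hypothetical reconstruction of the Moscow project management task
    # from the figures cited above; not Parsons' actual EVM system.
    budget = 100_000         # budget at completion for the task
    earned_value = 100_000   # the work is complete, so BCWP equals the budget
    actual_cost = 2_600_000  # ACWP reported in the January 2006 EVM report

    cost_variance = earned_value - actual_cost  # negative means an overrun
    cpi = earned_value / actual_cost            # cost performance index
    print(f"Cost variance: {cost_variance:,}")  # -2,500,000, the missed overrun
    print(f"CPI: {cpi:.2f}")  # about 0.04: roughly 4 cents of work per dollar spent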
Until December 2004, DTRA was using EVM data from a simplified Parsons EVM process. In September 2004, DTRA directed Parsons to implement a complete EVM system that was capable of being validated by DCMA. Although Parsons' EVM validation was originally scheduled for March 2005, Parsons was unable to meet this deadline and requested a series of extensions. In September 2005, DCMA officials made a program assistance visit to the Shchuch'ye site and then returned in mid-November 2005 to conduct the formal validation review, 8 months later than planned.

DOD requires all EVM systems to go through a compliance audit, or "validation," conducted by DCMA, with assistance from DCAA. The evaluation team looks for proof that the system meets the 32 criteria for a good EVM system, as well as 2 to 3 months of reliable EVM data. While the DCMA official who led the validation team saw much improvement in Parsons' EVM system from September to November 2005, he stipulated that an EVM compliance audit tests only whether the contractor has a good, capable EVM system and knows how to use it. A compliance audit does not identify whether the system is used properly, the data are reliable, or the products of the system are read and acted upon by management. The DCMA official stated that continual surveillance of Parsons' EVM system would be necessary to ensure these actions were occurring. According to the official, DCMA does not expect to perform surveillance for the Shchuch'ye project.

DCAA also participated in Parsons' EVM validation and produced a corrective action report stating that Parsons' EVM accounting process was inadequate. Specifically, Parsons did not provide adequate documentation that direct costs of almost $300,000 were based on accurate and reliable accounting data. The source of the accounting data used by Parsons may be unreliable, causing actual costs for September 2005 to be significantly understated. For September 2005, Parsons subtracted almost $1 million without providing sufficient data to show that the adjustment was reasonable and allowable. A DCAA official stated that these findings are the result of Parsons' reliance on an outdated accounting system that has difficulty capturing actual costs for the Shchuch'ye project in a proper cost ledger. The official noted that the software Parsons uses to query the accounting system and pull data into the EVM reports also caused errors. DCAA was also concerned with Parsons' ability to apply effective EVM data quality control. According to DCAA officials, Parsons' management should have discovered such accounting errors before the EVM report was released to DOD. DCAA therefore questioned whether Parsons can generate correct accounting data and recommended that Parsons update its accounting system.

In addition to the individual named above, Muriel Forster (Assistant Director), Jerome Brown, Lynn Cothern, Jennifer Echard, David Hancock, Beth Hoffman León, and Karen Richey contributed to this report. Joanna Chan, Martin DeAlteriis, Mark Dowling, Jennifer Mills, and Jena Sinkfield also provided assistance.
Until destroyed, Russia's stockpile of chemical weapons remains a proliferation threat, vulnerable to theft and diversion. Since 1992, Congress has authorized the Department of Defense (DOD) to provide more than $1 billion for the Cooperative Threat Reduction (CTR) program to help the Russian Federation construct a chemical weapons destruction facility (CWDF) at Shchuch'ye to eliminate about 14 percent of its stockpile. Over the past several years, DOD has faced numerous challenges that have increased the estimated cost of the facility from about $750 million to more than $1 billion and delayed the facility's operation from 2006 until 2009. DOD has attributed the cost increases and schedule delays to a variety of factors. In this report, we (1) assess the facility's progress, schedule, and cost and (2) review the status of Russia's efforts to destroy all of its chemical weapons.

Although DOD has made visible progress over the past 2 years in constructing the chemical weapons destruction facility at Shchuch'ye, it continues to face numerous challenges that threaten the project's schedule and cost. Primarily, key buildings on the site have fallen behind schedule due to difficulties working with Russian subcontractors. Such delays have been costing DOD more than $3 million per month since October 2005 and will continue until the award of a crucial subcontract, possibly in June 2006. Uncertain progress of Russian construction on the site, unpredictable Russian regulatory requirements, and various technical issues, such as testing the facility, could cause further schedule delays and increase costs. Also, DOD lacks a reliable earned value management (EVM) system to record, predict, and monitor the project's progress. DOD allocated $6.7 million to the project's contractor in September 2004 to establish an EVM system and expected to have a validated EVM system in place by March 2005. DOD cannot use the current EVM system to assess the final schedule and cost for completing the Shchuch'ye facility because it contains flawed and unreliable data. In addition, the contractor has not yet conducted an integrated baseline review (IBR) of the Shchuch'ye project.

Furthermore, it remains uncertain whether the Russian government can destroy its entire chemical weapons stockpile by the Chemical Weapons Convention (CWC) extended deadline of 2012. As of March 2006, Russia had destroyed about 3 percent of its 40,000 metric tons of chemical weapons at two completed destruction facilities. To eliminate the remainder of its chemical weapons over the next 6 years, the Russian government must construct and operate five additional destruction facilities, including Shchuch'ye. The Russian government has indicated that it will need continued international assistance to destroy the remaining stockpile.
The Department of Health and Human Services (HHS) is responsible for the administration and oversight of federal funding to states for services to foster children under title IV-E of the Social Security Act. The states are responsible for administering foster care programs, which are supported in part with federal funds. These funds reimburse the states for a portion of the cost of maintaining foster children whose parents meet federal eligibility criteria for the funds. The criteria are based in part on the income level of the parents. Federal expenditures for the administration and maintenance of foster care cases eligible for title IV-E were $3.2 billion in 1997. When foster children are not eligible for title IV-E funding, they may be eligible for child-only benefits under the Temporary Assistance for Needy Families (TANF) program, which are partially funded by the federal government. Otherwise, states and counties must bear the full cost of caring for foster children.

Within the foster care system, children can be placed in any of a number of temporary settings, including kinship care, family foster care, private for-profit or nonprofit child care facilities, or public child care institutions. In the kinship care setting, foster children are placed with their relatives. While the definition of "relatives" varies somewhat by state, relatives are typically adults who are related to a foster child by blood or marriage. They may also be family friends, neighbors, or other adults with whom the child is familiar. In this report, kinship care refers to the formal placement of children in the foster care system with their relatives. It does not include informal arrangements for relatives to care for children who are outside the child welfare system and the purview of the courts.

Since at least the 1980s, some portion of foster children in this country have been placed with relatives. Some studies contend that the increase in the number of foster children being placed with relatives may have been, at least initially, the result of a shortage of traditional foster homes. Others suggest that kinship care increased as a result of the Adoption Assistance and Child Welfare Act of 1980. This act required states to place children in the "least restrictive (most family like) setting available," a requirement that many states have interpreted as implying a preference for placing foster children with their relatives. The increase in kinship care may also stem in part from litigation (Matter of Eugene F. v. Gross, Sup. Ct., NY County, Index No. 1125/86) that resulted in New York City's bringing certain children being cared for by relatives into the formal foster care system and making them eligible for publicly funded services. Regardless of the historical impetus behind the growth in kinship care, section 505 of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 amended federal law to require that the states consider giving priority to relatives when deciding with whom to place children while they are in the foster care system.

Kinship care cases are eligible for federal title IV-E funding if, in addition to other criteria, the caregivers meet state licensing requirements for foster homes and the child's parents meet the income eligibility criteria. In 1996, in about 60 percent of the kinship care cases in California and about 50 percent of such cases in Illinois, the caregiver received title IV-E funding.
In the remaining kinship care cases in these states, the caregiver may have received an Aid to Families with Dependent Children (AFDC) grant, which may have been a child-only grant.

Thirty-nine states reported in a 1996 survey conducted by the Child Welfare League of America (CWLA) that in 1995 they had a total of about 107,000 foster children in kinship care, or about one-quarter of all foster children in the United States. In 1995, the proportion of all foster children in each state who were in kinship care ranged from 0.4 to 52 percent. As time passes, states appear to be relying more on kinship care. CWLA has reported that between 1990 and 1995, the number of children in foster care increased by 21 percent (from 400,398 in 1990 to 483,629 in 1995), while the number of kinship care children increased by 29 percent.

In 1995, the foster care population in California was 87,010, or about 27 percent larger than it had been in 1990, while the kinship care population was about 36 percent larger. According to our survey, as of September 15, 1997, 51 percent of the 74,133 foster children in California who had been in the system since at least March 1, 1997, were in kinship care.

In 1997, the foster care population in Illinois was 50,721, or about 159 percent larger than it had been in 1990, while the kinship care population was about 250 percent larger. Until July 1995, children whose parents were absent and who were living safely with a relative were considered "neglected" under Illinois state law, and the state generally assumed custody of such children. In these cases, the relative's home was frequently converted into a kinship care placement within the foster care system. This may have accounted for the growth of the kinship care population in Illinois up to that time. Illinois amended the definition of "neglected child," effective July 1, 1995, and as a result, such children are no longer considered neglected and the state no longer assumes custody. According to our survey, as of September 15, 1997, 55 percent of the 48,745 foster children in Illinois who had been in the system since at least March 1, 1997, were in kinship care.

Federal foster care statutes and regulations, which emphasize the importance of both reunifying families and achieving permanency for children in a timely manner, apply to all foster care cases, whether a child is in kinship care or another foster care setting. Outcomes in foster care cases include (1) family reunification, (2) adoption, (3) legal guardianship, and (4) independent living or aging out of the foster care system, usually at age 18. In emphasizing the goal of family reunification, for example, federal law requires that the states make "reasonable efforts" to reunify foster children with their parents. The law requires that the states develop case plans that, among other things, describe the services that are to be provided to help parents, children, and foster parents facilitate the children's return to their own safe home or their permanent placement elsewhere. The states are required to review foster care cases at least every 6 months and must hold permanency planning hearings at least every 12 months, during which a judge or a hearing officer determines whether a state should continue to pursue the current goal or begin to pursue some other permanency goal.
When foster children cannot be safely returned to their parents in a timely manner, the Adoption and Safe Families Act of 1997 (enacted after the period covered by our survey) includes a provision requiring the states to begin the process to file a petition to terminate parental rights if a child has been in foster care for 15 of the most recent 22 months, unless (1) required reasonable efforts and services to reunify the family have not been made in accordance with the case plan, (2) a "compelling reason" is documented in the case plan indicating why it would not be in the best interest of the child to terminate parental rights at that time, or (3) at the option of the state, the child is being cared for by a relative. At the same time that the states are required to initiate termination procedures, they must also identify and recruit qualified families for adoption. Thus, if none of the exceptions apply, the law attempts to achieve permanency through adoption.

Most research on the quality of kinship care has used the demographic characteristics of the caregivers as indirect indicators of the quality of foster care they provide. Although the studies' results have varied somewhat, many studies have found that kinship caregivers tend to be older, have less formal education and lower incomes, are less often married, and are less healthy than other foster caregivers. On the basis of these characteristics, child welfare researchers and practitioners have inferred that the quality of kinship care may be lower than the quality of care in other foster care settings.

Our analysis of the caseworkers' responses to our survey of open foster care cases in California and Illinois showed that, overall, the quality of both kinship care and other foster care was good and that in most respects the experiences of children in kinship care and in other foster care settings were comparable. In both states, most caregivers in kinship as well as other foster care settings received high scores from their caseworkers when it came to performing parenting tasks. We also found that, in general, children in kinship care in these states experienced significantly more continuity in their lives (that is, continued contact with family, friends, and the neighborhood they lived in before entering foster care) than other foster children. However, we also found that while the caseworkers in most kinship care as well as other foster care cases believed that the caregivers were likely to enforce court-ordered restrictions on parental visits, the proportion of cases in which this view was held was smaller for kinship care cases than for other foster care cases. Moreover, requirements such as standards or approval criteria for becoming a caregiver and training for caregivers were less stringent for kinship care in California and Illinois than for other foster care.

In both California and Illinois, most kinship and other foster caregivers received comparably high scores from their caseworker in performing nearly all the parenting tasks we asked about in our survey. These tasks covered three areas: (1) providing day-to-day care, such as providing supervision and emotional support to a child, setting and enforcing limits on the child's behavior, and making sure the child attends school; (2) ensuring that the child is up-to-date on routine medical examinations; and (3) interacting with medical, mental health, and educational professionals. We found no research that directly measured foster parents' ability to perform such tasks.
For nearly all the parenting tasks we asked about, the caseworkers in more than 90 percent of kinship care and other foster care cases in the two states responded that the caregivers performed those tasks either adequately or very adequately. A smaller percentage of the children in kinship care in Illinois (about 80 percent), however, were up-to-date on their routine vision and dental examinations, compared with 90 percent of other foster children. State officials in Illinois speculated that this was because kinship caregivers are more likely than other foster caregivers to seek vision and dental care for their foster children only as often as they do for themselves, which is less frequently than state standards and guidelines call for. Those officials believed that other foster caregivers are more likely to follow state standards and guidelines when it comes to their foster children.

In both California and Illinois, responses to our survey questions indicated that there was significantly more continuity in the lives of children in kinship care than in other foster care settings. While many mental health professionals agree that continuity in relationships is good for children in general, there is less agreement about the merits of continuity in the lives of abused or neglected children. Experts do agree that contact with siblings, and especially living with siblings, is beneficial for a child and that parental visits with foster children are needed to achieve reunification when this is an appropriate goal. Experts also report that a child's familiarity with the caregiver lessens the trauma of separation from the family, at least in the short run. Advocates of kinship care further assert that placing a foster child with relatives or friends may help maintain continuity in the child's life by maintaining ties with the child's community, school, and church. Many believe, however, that parents who neglect or abuse their children learn this behavior from members of a dysfunctional immediate or extended family. Consequently, living with relatives and continued contact with the community may not be in the best interest of the child, because the child continues to live in the environment that may have led to the abuse or neglect.

Our survey asked for information about three types of continuity in foster children's lives: (1) their previous familiarity with the person who became their foster parent; (2) their contact while in foster care with their parents, other relatives, and friends; and (3) their involvement, while in foster care, with the community they lived in before they entered the system. Our analysis showed that there was significantly more continuity in the lives of children in kinship care than in other foster care settings with respect to nearly all the indicators we used to measure these three categories of continuity. In general, our findings were consistent with the results of other research about the relationship of kinship care and continuity in foster children's lives.

In measuring children's familiarity with the persons who became their foster parents, the results of our survey in both California and Illinois indicated that a significantly larger proportion of children in kinship care than in other foster care knew their caregivers before entering the system. In addition, a significantly larger proportion of kinship care children had resided with their caregivers previously. (See fig. 1.)
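Our report does not spell out the statistical test behind comparisons this section describes as significant. The sketch below shows one standard way to test a difference in proportions between two groups, a two-proportion z-test; the case counts are hypothetical and do not reproduce our actual analysis.

    import math

    # One standard test of whether a proportion differs between two groups:
    # the two-proportion z-test. The counts below are hypothetical.
    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # e.g., 180 of 300 kinship care cases vs. 120 of 300 other foster care cases
    z, p = two_proportion_z(180, 300, 120, 300)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p (< 0.05) indicates significance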
In measuring the extent to which foster children were in contact with their parents, other relatives, and friends in California and Illinois, the caseworkers in significantly more kinship care cases than other foster care cases reported that the children were in contact with family and friends. For example, the caseworkers' responses to our survey showed that a larger proportion of mothers with children in kinship care visited their children more often than specified in their case plans (24 percent in California, 39 percent in Illinois) than did mothers with children in other foster care settings (6 percent in California, 11 percent in Illinois). To put this into perspective, however, in both kinship care and other foster care settings, less than 50 percent of mothers visited their children as often as specified in their case plans. Other research has also shown that parents of children in kinship care are more likely to visit their children at least once a year, and visit them more often per year, than parents of other foster children.

In both California and Illinois, the caseworkers in a significantly larger proportion of kinship care cases than other foster care cases noted that one or more of a child's siblings were living in the same foster home. According to our survey, children in kinship care also had more contact with their friends and with relatives other than parents, foster parents, or siblings. (See fig. 2.) Other studies reported similar findings. For example, surveys of foster children in Baltimore County, Maryland, in 1993 and in California from 1988 through 1991 have shown that children in kinship care were more likely to live with siblings than were other foster children.

Finally, in measuring children's contact with the communities they lived in before they entered the system, the caseworkers in significantly more kinship care cases than other foster care cases in California and Illinois indicated that the children had contact with their established community. More specifically, in both California and Illinois a larger proportion of children in kinship care than in other foster care settings lived in the same neighborhood they had lived in before entering foster care. (See fig. 3.) This is consistent with other studies of foster children in Illinois. Furthermore, according to our survey, a larger proportion of children in kinship care in each state were attending the school they would have attended had they not entered the system.

The number of times caregivers changed during a foster care episode has also been used as an indication of continuity in a child's life. Previous research in California has shown that foster caregivers changed fewer times per foster care episode in kinship care than in other foster care cases; the lives of children in kinship care tended to be more stable while they were in foster care.

Our survey suggests that the safety of a somewhat larger proportion of children in kinship care than in other foster care in California and Illinois may be at risk because their caregivers may be unwilling to enforce court-ordered restrictions on parental visits. Specifically, in 72 percent of the California kinship care cases and 68 percent of the Illinois kinship care cases in which the parents' visits with their children were restricted, the caseworkers believed that the caregivers were likely to take the necessary action to enforce the restrictions.
In contrast, the caseworkers in 92 percent of other foster care cases in California and 80 percent of other foster care cases in Illinois believed that the caregivers were likely to enforce parental visitation restrictions. (See fig. 4.) As noted earlier, parental visits provide stability for children while they are in foster care. In some cases, however, the court may restrict visits by the parents because it believes the child might be harmed by these visits. In more than 85 percent of our survey cases, the court had restricted visits by the parents.

Certain elements of California's and Illinois's quality assurance systems are less rigorous for kinship care than for other foster care settings. Both California and Illinois have less stringent requirements for becoming a caregiver and provide less training and support to kinship caregivers. States sometimes treat kinship caregivers differently because of the family bond that is assumed to be present between children and their relatives. They believe this bond mitigates the need for more intrusive state oversight in these cases. While some experts in child welfare believe that this exception for kinship caregivers is reasonable, others believe that while a state has custody of a child, all caregivers should be held to the same standards.

To become foster caregivers in California or Illinois, a child's relatives must meet certain criteria specifically designed for kinship care that are less stringent than the licensing requirements that apply to other foster caregivers. For example, since Illinois does not require kinship caregivers to be licensed, they do not have to meet licensing requirements regarding the number of bedrooms or the square footage in the home. Furthermore, they are exempt from some specific requirements designed to ensure a foster child's safety in the home. Even though kinship caregivers in California are not required to meet the same requirements as other caregivers, if a foster child is eligible for title IV-E funds, the kinship caregiver receives the same maintenance payment that a licensed caregiver would. Unlike in California, kinship caregivers in Illinois can receive the same maintenance payment as other caregivers only if they choose to meet the licensing requirements of other foster caregivers and thereby become licensed. Otherwise, relatives must meet less stringent requirements to provide foster care, which results in a lower maintenance payment. State child welfare officials in Illinois indicated that about 50 percent of the kinship caregivers in the state are licensed to provide foster care.

Both California and Illinois require caseworkers to periodically visit all foster children. Caseworkers are required to visit foster children in order to, among other things, monitor the quality of the care they are receiving and determine whether the children or caregivers have any unmet service needs. Generally, in California, caseworkers are required to visit foster children at least once a month. When the goal is something other than family reunification, caseworkers are required to visit at least once every 6 months, because in these cases the children are considered to be in a more stable setting. Illinois requires caseworkers to visit foster children at least once a month, regardless of the permanency goal.
According to our survey, caseworkers in California and Illinois visited both foster children in kinship care and those in other settings more often on average than formally required, but they visited children in kinship care less often on average than children in other foster care settings. Eighty-five percent of our cases in California had goals other than family reunification and so were required to be visited only once every 6 months. In California, caseworkers visited kinship care children an average of 3.8 times in 6 months, compared with an average of 5.3 visits to other foster children. Similarly, in Illinois caseworkers visited kinship care children an average of 8 times in 6 months, compared with an average of 11.3 visits to other foster children. Our survey results were consistent with other research that has also found that caseworkers tend to visit children in kinship care less frequently than other foster children.

California and Illinois provide fewer kinship caregivers with training than other foster caregivers. To help ensure good quality foster care, both states require licensed foster caregivers to receive training in topics such as the child welfare system and its procedures and caring for children who have been abused or neglected. Since kinship caregivers are not required to be licensed in either California or Illinois, a smaller proportion of kinship caregivers than other foster caregivers in these states receive such training. Because of funding constraints, California has historically precluded kinship caregivers from receiving such training unless they pay for it themselves. Nonetheless, California state officials believe that kinship caregivers should receive training that is specifically designed for them. The Child Welfare Research Center (CWRC) has found that both kinship caregivers and other foster caregivers in California would like more training on subjects such as foster parent licensing, prenatal drug exposure, and how to interact more effectively with social service agencies. CWRC has also found that kinship caregivers in California want more information about court proceedings related to foster care and how to navigate the child welfare system in order to receive needed services.

Some states provide fewer kinship caregivers with support services than other foster caregivers. Services such as respite care, housing support, counseling, transportation, child care, legal services, and access to support groups are designed to help foster caregivers successfully perform their role. Research conducted in California found that a smaller proportion of kinship caregivers received such services than other foster caregivers. This research also found that kinship caregivers in California, reacting to the emotional demands of caring for an abused or neglected relative, also wanted to know more about community resources and mental health services that were available to them.

Previous research on children who have left the foster care system has shown that children who had been in kinship care were less likely to be adopted and stayed longer in foster care than other foster children. However, we found no consistent pattern between California and Illinois. In California, we found a pattern similar to the research regarding permanency goals among foster care cases in which a child is still in the system. Specifically, kinship care cases in California less often had the goal of adoption or guardianship (and more often had the goal of long-term foster care) than did other foster care cases.
In California, there was no difference between kinship care and other foster care in the length of time children spent in foster care. However, in Illinois, in foster care cases in which a child was still in the system, a larger proportion of kinship care cases than other foster care cases had the goal of adoption or guardianship, and kinship care cases had been in the system a shorter, not longer, period of time. Because outcomes for kinship care cases differed in these two states, it is likely that state foster care policies and practices, rather than the type of foster care setting in which children were placed, had the greatest influence on a foster child's permanency goal and length of time in care. It should also be noted that, in both states, we found that most children, regardless of foster care setting, had been in the system much longer than they should have been if the Adoption and Safe Families Act had been in effect at the time of our survey.

Several research studies have looked at foster care outcomes and length of stay. Many of these examined the experiences of a group of children who entered the system in the same year. Most have shown that children in kinship care were less likely than other foster children to be adopted. Most have also shown that children in kinship care spent more time than other foster children in the foster care system.

In California, our analysis of the survey data indicated that kinship care cases in the foster care system as of September 15, 1997, were more likely to have the goal of long-term foster care than other foster care cases in the system at that time. Where reunification was no longer considered feasible, our survey showed that 67 percent of the cases in kinship care had a goal of long-term foster care, compared with 53 percent of cases in other foster care settings. (See fig. 5.) The large number of children in kinship care with the goal of long-term foster care is not surprising given that, according to California officials, the state had only recently begun to offer adoption and guardianship options specifically designed for a foster child's relatives. Survey responses confirmed this belief. In 74 percent of kinship care cases with a goal of long-term foster care, the caseworkers responded that the primary reason why the children did not have adoption as the goal was that they were being cared for by relatives who did not want to adopt and that moving the children to another home would be detrimental to them.

State officials in California pointed out several disincentives for adoption and guardianship in kinship care cases. Certain benefits for foster children in California, such as special priority for assistance in schools and financial assistance for college, are no longer available once they have been adopted. Similarly, title IV-E maintenance payments are not authorized for children who leave the foster care system because of legal guardianship. Guardians who are related to a child could receive a TANF child-only grant on behalf of the child instead of title IV-E payments, but this grant is much lower than the title IV-E maintenance payments. In addition, to qualify for a TANF child-only grant, the guardian would have to provide proof that the child attends school and receives medical examinations. According to our survey, more than half of the open kinship care cases in California with the goal of guardianship had a guardian appointed but remained in the foster care system.
This may be because guardians can receive the foster care maintenance payment, which is higher than a TANF child-only grant, if the case remains in the foster care system.

While our survey found that, of all foster children in California, 11.3 percent of children in kinship care and 19.1 percent of other foster children had adoption as the goal, in fact only 2 percent of the children in foster care were adopted in 1997. Therefore, the state foster care agency has set the goal of adoption for many more foster children than are likely to be adopted, given recent experience.

According to our survey in California, as of September 15, 1997, children in kinship care had been in the system about as long as those in other foster care settings. A multivariate analysis of cases in California confirmed that the type of foster care setting was not associated with the time foster children had spent in the system. Both children in kinship care and those in other foster care settings as of September 15, 1997, had already spent more than 60 months on average in foster care. This is 45 months longer than the time now allowed under the Adoption and Safe Families Act before the states are required to file a petition to terminate parental rights. Furthermore, we estimate that of the 37,881 children in kinship care in California as of September 15, 1997, who had been in the system since at least March 1, 1997, nearly 82 percent, or 31,025, had been in the system for 17 months or more. Under federal law, however, children in kinship care may be excluded from the requirement to terminate parental rights once a child has been in foster care for 15 of the past 22 months.

In contrast to our findings in California, data from our survey in Illinois indicated that children in kinship care as of September 15, 1997, were more likely to have the goal of adoption or guardianship than other foster children in the system at that time. Specifically, 66 percent of kinship care cases had the goal of adoption or guardianship, compared with 47 percent of cases in other foster care settings. (See fig. 6.) According to state officials, Illinois has found that kinship caregivers, contrary to popular belief, are willing to adopt, and Illinois is actively pursuing adoption in these cases. While our survey found that in Illinois 41.3 percent of children in kinship care and 37.9 percent of other foster children had adoption as a goal, in fact only 4 percent of all foster children were estimated to have been adopted in 1997. Therefore, as in California, the state foster care agency has set the goal of adoption for many more children than are likely to be adopted, given recent experience.

Our survey in Illinois indicated that foster children in kinship care as of September 15, 1997, had spent 43 months, on average, in the system. Other foster children had been in care for 53 months, on average, as of that date. A multivariate analysis of cases in Illinois also indicated that the type of foster care setting was associated with the time children had already spent in the system. Children in kinship care had been in the system about 10 fewer months, on average, than other foster children.
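Our report does not specify the model behind these multivariate analyses. A common approach is to regress time in care on an indicator for the foster care setting while controlling for case characteristics; the sketch below illustrates the idea with entirely synthetic data, so the model and variables are assumptions, not our actual specification.

    import numpy as np

    # Synthetic illustration of a multivariate analysis of time in care:
    # months in care regressed on a kinship-care indicator plus a control.
    rng = np.random.default_rng(0)
    n = 500
    kinship = rng.integers(0, 2, n)        # 1 = kinship care, 0 = other setting
    age_at_entry = rng.uniform(0, 15, n)   # hypothetical control variable
    months = 50 - 10 * kinship + 1.5 * age_at_entry + rng.normal(0, 12, n)

    # Ordinary least squares via the normal equations (numpy's lstsq).
    X = np.column_stack([np.ones(n), kinship, age_at_entry])
    coef, *_ = np.linalg.lstsq(X, months, rcond=None)
    print(f"Estimated kinship-care effect: {coef[1]:.1f} months")
    # A coefficient near -10 would mirror the Illinois finding that kinship
    # cases had spent roughly 10 fewer months in the system; a coefficient
    # near zero would mirror the California finding of no association.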
Although children in other foster care settings in Illinois had spent more months in the system as of September 15, 1997, than children in kinship care, foster children in general had spent much more time, on average, in the system as of that date than the 15 months allowed with the enactment of the Adoption and Safe Families Act before states are required to file a petition to terminate parental rights. Furthermore, we estimated that of the 26,712 children in kinship care in Illinois as of September 15, 1997, who had been in the system since at least March 1, 1997, 87 percent, or 23,213, had been in the system for 17 months or more. As we noted earlier, however, the law allows the states to exclude children in kinship care from the federal requirement to terminate parental rights in cases in which they have been in care 15 of the past 22 months.

Since the fall of 1997, both California and Illinois have been instituting new programs and practices that are designed to (1) increase the likelihood that permanent living arrangements will be found for children in kinship care, as well as other foster care settings, who cannot return to their parents and (2) continue to ensure that kinship care is of good quality. They are pursuing efforts to choose the best kinship caregivers by identifying and locating a larger pool of relatives to draw from when deciding with whom to place foster children. To help ensure that children who cannot return to their parents do not remain in the foster care system indefinitely, California and Illinois recently enacted laws and are developing programs that encourage kinship caregivers and other relatives of foster children to provide permanent homes for them when necessary. Both states also support adoption and subsidized guardianship for children in kinship care as pathways out of the foster care system.

Both California and Illinois have stepped up their efforts to identify as many of a foster child's relatives as possible before deciding with whom to place that child. By expanding the pool of potential foster caregivers, the states hope to help ensure that a foster child is placed with a relative who is capable of providing good quality foster care in the short term and who is willing to provide a long-term home if reunification with the parents is not feasible. Illinois requires that the "diligent" search for the parents conducted when a child enters foster care include a search for other relatives as well. The state is contracting with a firm that specializes in identifying and locating relatives and will conduct such searches routinely in foster care cases statewide. Since January 1, 1998, courts in California have had the authority to order the parents of foster children to disclose the names and residences of all the children's maternal and paternal relatives. According to California officials, parents before then typically provided the names of only one or two relatives, usually the ones with whom they preferred their child to be placed. In addition, before a foster child is placed with a relative, California now applies an expanded assessment requiring that (1) a detailed background check be conducted; (2) the relative's capacity to help implement the case plan, including family reunification efforts, be considered; and (3) the relative's ability and willingness to provide a permanent home for the child also be considered.
Recent legislation in California has also created the Kinship Support Services Program, one of whose objectives is to help ensure the good quality of kinship care. Services this program provides include case management; social services referral and intervention aimed at maintaining the kinship family unit (for example, housing, homemaker services, respite care, legal services, and day care); transportation for medical care and educational and recreational activities; individual and group counseling in parent-child relationships and group conflict; counseling and referral services aimed at promoting permanency, including kinship adoption and guardianship; and tutoring and mentoring for the children.

Both California and Illinois are attempting to help ensure that children in kinship care spend as little time in the foster care system as possible. Anticipating federal and state legislation requiring the states to move more quickly to secure permanent homes for foster children, including those in kinship care, in 1998 the Illinois Department of Children and Family Services instituted new policies and programs related to kinship care to meet this requirement. In California, the move to encourage relatives to provide permanent homes for foster children began with the Governor's Adoption Initiative of 1996, which is a 5-year plan to "identify and implement strategies to maximize adoption opportunities for children in long-term foster care." In 1996, the state held a policy summit on kinship care that found that current "permanency options present significant cultural and financial barriers to kin to achieve permanency." Following is an overview of the activities these states are undertaking to take better advantage of opportunities for permanently placing foster children with their relatives.

On January 1, 1998, California instituted a kinship adoption program to remove barriers to adoption by current kinship caregivers and other relatives of foster children. In a kinship adoption, caregivers and relatives are permitted to enter into a kinship adoption agreement, a provision that is not typical in traditional adoptions. This agreement can address visitation rights for parents and other family members, as well as how information about a child is to be shared. The law authorizing the program sets out procedures for the agreement's enforcement, modification, and termination. Under the terms of kinship adoption, parents may voluntarily relinquish their parental rights and designate the relative who will adopt the child, a provision that is also unique to kinship adoption.

Concurrent planning provides for planning both for the ultimate return of foster children to their parents and for another permanency outcome should family reunification prove infeasible. This process is intended to shorten the length of time it takes to secure another permanent home for children once the court decides that they cannot return to their parents. Illinois has recently begun concurrent planning; it is particularly useful when parents have previously been unwilling or unable to provide a safe home for their children or when repeated clinical interventions have failed. One description of a successful program of this kind is as follows:
"A successful concurrent planning program is one in which the number of children who enter long-term foster care is significantly reduced (ideally, eliminated), the time the typical child spends in the system is reduced, virtually all young children who do not reunify are adopted rather than placed with legal guardians, the number of children replaced is reduced significantly, the proportion of relinquishments increases, and social workers' comfort with the quality of adoptive families increases."

HHS has granted both Illinois and California a 5-year waiver of the restriction the Social Security Act places on providing title IV-E maintenance payments to legal guardians. This waiver enables the states to subsidize guardianships using title IV-E funds, thus eliminating the financial disincentive for kinship caregivers to become their foster child's legal guardian. In its first year, the waiver for California applies only to children 13 years of age or older. In each subsequent year, the minimum eligibility age increases by 1 year. When the waiver period ends in 5 years, all children who were covered by the waiver will have reached the age of 18, so they will no longer require title IV-E foster care payments. Thus, California will not be responsible for any further subsidized guardianship payments for these children once the waiver period has ended. California recently notified HHS that it would like to delay the implementation of this waiver until it has fully analyzed recently passed state legislation that also provides for subsidized guardianship.

Illinois has received a title IV-E waiver from HHS enabling it to use title IV-E funds for subsidies to kinship caregivers who agree to assume legal guardianship of their foster children. Unlike California's, Illinois's subsidy is available for children of any age. Thus, when this 5-year waiver expires, Illinois will fund the subsidies for children in this program from state revenues until they reach the age of 18. Although there are no age limits under Illinois's waiver, to be eligible a child must have been in foster care for 1 year and must have lived with the potential guardian for at least 1 year before that guardian can apply for payments under this waiver.

California's Kinship Support Services Program, described earlier, also provides an incentive for kinship caregivers to adopt or assume legal guardianship of their foster children by continuing to make the program's support services available to them after their foster children leave the system. Thus, these services are available to relatives whether or not the child in their care is under the jurisdiction of the juvenile court or in the child welfare system.

In 1998, California enacted legislation requiring that a plan be developed for a Kinship Care Program that will be separate and distinct from the existing foster care program and will provide services uniquely suited to the needs of children being cared for by their relatives. The Department of Social Services is currently developing a plan for a separate kinship care program.

California also enacted legislation in 1998 that set up the Kinship Guardianship Assistance Payment program, known as Kin-GAP. According to California officials, the Kin-GAP program allows children in kinship foster care to leave the foster care system by having their kinship caregivers become their legal guardians. This program allows children who have been assessed as being in a long-term stable home to exit the foster care system.
Until they reach the age of 18, children in this program have medical coverage, and maintenance payments are made on their behalf. The law limits this payment to no more than 85 percent of the title IV-E foster care maintenance payment. By July 1, 1999, the Department of Social Services must determine what the dollar amount of the payment will be.

To reaffirm the priority it places on securing permanent homes for foster children, Illinois has established new permanency goals. It has eliminated "long-term relative care" as a permanency goal. Illinois officials noted that caseworkers will thus be forced to more actively seek permanent homes for children in kinship care and thereby prevent them from remaining indefinitely in the foster care system simply because they are being cared for by relatives. New permanency goals include "return home within 5 months," "return home within a year," "substitute care pending termination of parental rights," "adoption," "guardianship," "substitute care pending independence," and "substitute care due to the child's disabilities or mental illness."

Despite a number of concerns expressed by some child welfare experts about the quality and outcomes of kinship care (the setting in which about one-quarter of the nation's foster children are placed), the results of our survey of foster care cases in California and Illinois revealed a generally positive picture, though not without some cautionary notes. Parenting-skill assessments by caseworkers in kinship care cases were comparable to those in other foster care cases. The same was not true for other dimensions of quality. Information from our survey suggests some areas where improvements in kinship care may be needed. Specifically, there may be cause for concern about health and safety, especially with regard to observance of the need for routine dental and eye exams, and about potentially unsafe visits by abusing parents. While California and Illinois apply less stringent standards or approval criteria to kinship caregivers, both states are taking steps to better ensure good quality kinship care. They are raising standards for kinship caregivers and widening the pool of potential kinship caregivers to increase the chances of locating relatives capable of providing good quality care.

Since the ultimate goal for foster children is a safe and permanent home, the permanency plan in foster care cases is of paramount concern. Previous research shows that children in kinship care cases stay longer in the system and are less likely to be adopted. In our survey, children in kinship care in California stayed in the system as long as children in other foster care settings and less often had a goal of adoption or guardianship. In contrast, children in kinship care in Illinois stayed in the system a shorter period of time and more often had a goal of adoption or guardianship than children in other foster care settings. Differences in permanency goals and time in foster care, therefore, may depend more on state policies and practices than on foster care setting. Moreover, both states have taken initiatives either to make homes with relatives a viable permanency option or to facilitate permanency planning.

We provided a draft of this report to HHS and state child welfare officials in California and Illinois for their review.
HHS generally agreed with the report and also described a number of activities of its Administration for Children and Families that it believes will help inform both policy and the child welfare field. HHS also provided technical comments, which we incorporated where appropriate. HHS's response is in appendix VI. California did not provide official comments. However, California child welfare officials provided oral comments, limited to technical issues related to information about their programs. We incorporated their comments where appropriate.

Illinois generally agreed with our report. However, state officials believed that the standards applied to other foster care cases with respect to (1) the frequency of caseworkers' visits, (2) the criteria for becoming a caregiver, and (3) caregivers' willingness to enforce parental visitation restrictions should not be applied to kinship care cases. We believe that it is valid to apply the same standards in both kinship and other foster care cases as far as the number of caseworker visits and a caregiver's willingness to enforce restrictions on parental visits are concerned. Regarding the number of caseworker visits, we applied the standards that California and Illinois have already set, which in both states are the same for kinship and other foster care cases. Protecting a child's safety should be the overriding concern of both kinship and other foster caregivers. Therefore, when a restriction is placed on parental visits in the interest of a child's safety, it seems reasonable to expect kinship caregivers to be as willing as other foster caregivers to enforce that restriction. Although we report that the states apply less stringent requirements for becoming a kinship caregiver, we have taken no position on whether the criteria for kinship and other foster caregivers should be equal. We have modified the report to clarify this.

We will send copies of this report to the Secretary of HHS and program officials in California and Illinois. We will also send copies to child welfare program directors in all other states and make copies available to others upon request. Major contributors to this report are listed in appendix VII. If you or your staff have any questions, please contact me at (202) 512-7215 or Clarita A. Mrena, Assistant Director, at (415) 904-2245 or Ann T. Walker, Evaluator-in-Charge, at (415) 904-2169.

This appendix contains a detailed description of our review of existing research, our interviews with child welfare experts, and our survey of open foster care cases in California and Illinois. We conducted this review from April 1997 to December 1998 in accordance with generally accepted government auditing standards.

In order to determine what research had been done on kinship care, we conducted a literature search to identify journal articles, reports, dissertations, and theses written between the beginning of 1990 and the fall of 1998 that addressed at least one of the following two research questions: (1) Does the foster care setting affect the quality of care a child receives? and (2) Does the foster care setting affect time in the system and permanency for the child?
We began our search by reviewing the bibliographies of three major publications addressing the subject of kinship care: (1) Child Welfare League of America, Selected References on Kinship Care 1962-1994; (2) the Transamerica Systems, Inc., 1997 draft "Study of Outcomes for Children Placed in Foster Care with Relatives"; and (3) Child Welfare League of America, Kinship Care: A Natural Bridge, issued in 1994. We also conducted a computerized search for articles written about kinship care after 1994, the latest year covered in two of these bibliographies. To ensure that we omitted no major articles on kinship care, we sent copies of the three bibliographies and the results of the computerized search to child welfare experts both inside and outside GAO for their review. These experts suggested several additional articles. To identify recently published articles while drafting the report, we updated our computerized search and sent our bibliography to two additional experts outside GAO for their review. As a result of this process, we identified more than 150 documents for preliminary review.

We reviewed these documents to determine whether they met our criteria for inclusion in our study and whether they reported any findings related to our research questions. We excluded a number of documents identified in our preliminary review from our final compilation of the research, most often because they (1) did not contain any research results, (2) did not describe original research but instead summarized others' research, (3) did not differentiate between kinship and other foster care settings, (4) did not differentiate between children in the child welfare system and children being cared for by relatives outside the child welfare system, (5) did not include new data that had not already been summarized in another document written in whole or part by the same authors, or (6) did not address either of our two research questions.

Tables I.1 and I.2 list the subquestions we used in the literature search and the tables in appendix III that show the research results for each subquestion. The subquestions were as follows:

Does the foster child live with siblings who are in foster care?
Does the foster child maintain contact with siblings?
Does the foster child maintain contact with parents?
Does the foster child remain in the same community or neighborhood he or she lived in before entering foster care?
Does the foster child feel that he or she is part of the foster family?
What is the foster caregiver's age?
What is the foster caregiver's marital status?
What is the foster caregiver's education?
What is the foster caregiver's health?
What is the foster caregiver's income?
What training or preparation did the foster caregiver receive?
What required health services does the foster child receive?
How often does the caseworker visit the foster child?
To what extent does the foster caregiver receive services?
How long did the foster child stay in foster care?
How many placements in foster care has the foster child had?
How long was the foster child in care before adoption, the goal changed to adoption, the child was placed with an adoptive family, or the child was freed for adoption?
How long was the foster child in care before reunification with his or her parents?
What permanency goals are pursued?
To obtain a broader perspective on the issues surrounding kinship care, we interviewed researchers, public policy advisers, physicians, attorneys, family court judges, social workers, adoption caseworkers, and representatives of organizations that have an interest in foster care or child welfare in general. We asked for their opinions about the strengths and weaknesses of kinship care, the quality of kinship care, additional safeguards needed in the system, if any, and the effect of kinship care on foster care outcomes. We also interviewed state program officials to obtain information about kinship care in their state and their opinions about kinship care in general. We surveyed open foster care cases in California and Illinois to obtain information about the quality of care that children in kinship care receive relative to that of foster children in other foster care settings, as well as information about the effect of kinship care on permanency goals and the time children spend in foster care. Each state selected a simple random sample of open foster care cases for our survey, from all cases that were in its foster care system on June 1, 1997, and had been there continuously since at least March 1, 1997. Each sample was intended to represent the entire population of open foster care cases in the state during that time. The samples allowed us to make statements about the experiences of the foster children who made up the foster care population during that time. Because these samples were not drawn from a population of all children who entered the foster care system in a state, however, they do not represent the experiences of all foster children who entered the system. Foster children who spend a relatively short time in the system may be underrepresented in our samples, while children who spend more time in foster care may be overrepresented. Furthermore, while the survey results based on these samples can be generalized to the population of open foster care cases during the specified time in each state, they do not represent the foster care population nationally or in any other state. The foster care cases in California and Illinois combined account for about one-quarter of the entire foster care population nationwide and about half of all kinship care cases. After our samples were drawn, we learned that 22 of the sampled cases from California and 2 from Illinois had not been in foster care continuously from March 1, 1997, through June 1, 1997, and we excluded them from our study. We excluded an additional 57 cases in the California sample and 17 in the Illinois sample because information provided in the questionnaire indicated that they had not been in the foster care system continuously from June 1, 1997, through September 15, 1997—the date in the questionnaire for which caseworkers were asked to provide information about their cases. We assumed that, if all the questionnaires for the cases in each of the initial samples had been returned to us, additional cases would have fallen into these two categories. We used the proportions of each of these types of cases among respondents to estimate how many nonrespondents would have fallen into these two categories. Thus, we reduced our initial samples by 25 cases in California and 6 cases in Illinois. We also adjusted each state’s initial population size by the same proportions. The initial and adjusted population and sample sizes and survey response rates are shown by state in table I.3. 
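The proportional adjustment just described can be sketched briefly. The following is a minimal illustration of the logic, not the study's actual computation; the counts used are hypothetical placeholders (the report gives only the resulting reductions of 25 and 6 cases).

```python
# Minimal sketch of the nonresponse adjustment described above.
# All counts here are hypothetical; only the logic mirrors the text.

def adjusted_sample_size(initial_sample, respondents, ineligible_respondents):
    """Shrink the sample by the known ineligible cases plus the number of
    nonrespondents estimated to be ineligible, assuming ineligibility occurs
    among nonrespondents at the same rate as among respondents."""
    nonrespondents = initial_sample - respondents
    ineligibility_rate = ineligible_respondents / respondents
    estimated_ineligible_nonrespondents = ineligibility_rate * nonrespondents
    return initial_sample - ineligible_respondents - estimated_ineligible_nonrespondents

# Hypothetical example: 400 sampled cases, 300 questionnaires returned,
# 30 of the returned cases found ineligible.
print(round(adjusted_sample_size(400, 300, 30)))  # 360
```

The same proportions would then be applied to the initial population size, as the text describes.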
The adjusted populations are our best estimates of the number of foster care cases that were in the system continuously from March 1, 1997, through September 15, 1997. We designed a mail questionnaire that asked caseworkers for information, as of September 15, 1997, about the individual foster care cases they were assigned to. We chose this date because it fell just before the date the questionnaires were scheduled to be mailed out, so when caseworkers received the questionnaire they were likely to still recall the facts in a case as of September 15, 1997. Our survey objectives were to collect (1) data not in other research, (2) data more directly related to and thus a better indication of the quality of foster care than the information in other research, and (3) some of the same data as in other research because the foster care population we surveyed and the time covered by our survey were not the same as those in other research. Examples of information our questionnaire collected that we did not find in existing research include foster children’s knowledge of their foster caregivers before entering foster care; foster caregivers’ history of child abuse or neglect, domestic violence, or drug abuse; foster caregivers’ parenting skills; health services foster children received; and the likelihood that foster caregivers would enforce restrictions on parental visits and thus protect children from abusing parents. We pretested the questionnaire with a number of foster care caseworkers in California and Illinois and revised it on the basis of the pretest results. We mailed a questionnaire for each case in our samples to the manager in the office handling that case, who was instructed to give it to the caseworker assigned to that case. The caseworker was asked to respond to the questionnaire with regard to that case. We conducted multiple follow-ups with office managers and caseworkers, by both mail and telephone, encouraging them to respond. In addition to using a mail questionnaire to collect information about foster care cases in our samples, we received an automated file from each state that contained administrative data on each sampled case from that state. The states rely on these data in managing their foster care programs. We did not evaluate the validity of these databases. Our estimates of the number of foster care cases in each state that would be subject to the requirement in the Adoption and Safe Families Act of 1997 to file a petition to terminate parental rights were based on the number of cases in our samples in which a child had been in foster care for at least 17 months as of September 15, 1997. We used 17 months, rather than 15 months as specified in the law, because the clock for determining whether a case is subject to the termination of parental rights requirement begins running on the date the child was adjudicated abused or neglected or 60 days after the date the child was actually removed from the parents’ custody, whichever came first. Because the clock therefore starts no more than 60 days (about 2 months) after removal, a child who had been in care for at least 17 months had necessarily been on the clock for at least the 15 months the law specifies. Since we did not know the adjudication date of the cases in our surveys, we used 17 months as a conservative estimate of the time the case would be subject to the requirement. Most of the conclusions we drew from this survey were based on a comparison within each state of survey responses for cases in kinship care and cases in other foster care settings. In each state, we placed each case in one of these two groups, depending on the caseworker’s response to a question about the type of foster care setting in that case.
We placed cases in the kinship care category only when the caseworkers responded that the foster children were in settings that “your state classifies as kinship or relative care.” We placed all other cases in the “other foster care setting” category. About half the cases fell into the kinship care group in each state. The “other foster care setting” category contained cases in settings such as substitute care, specialized care, institutional care, group homes, and traditional foster family homes. The results of these analyses are contained in appendix V. We examined the relationship between type of setting and other variables in the questionnaire by generating crosstabular tables and statistically testing to determine whether any differences between two variables in a table were significant at the .05 level. We calculated most of the percentage estimates we reported in the body of this report and in appendix V using as the base the number of cases for which there was a response to a variable other than “don’t know.” For analyses that involved a child’s date of entry into foster care, we used the date that was recorded in the state’s administrative data file. Thus, our calculation of the average length of time our cross-section of foster children in each state spent in foster care up until September 15, 1997, was based on administrative rather than survey data. In addition to using crosstabulations to identify the relationship, if any, between two variables, we performed multivariate analyses. These analyses tested for associations, at the .05 significance level, between foster care setting—that is, kinship care versus other foster care setting—and permanency goal, as well as the time children spent in foster care, while taking into account other variables—namely, a foster child’s age at entry into foster care, gender, and race and the parents’ history of drug or alcohol abuse—that might also influence the permanency goal or time in the system. For our multivariate analyses of the relationship between foster care setting and permanency goal, we constructed a permanency goal variable by ranking long-term foster care, guardianship, and adoption according to the extent to which each goal allowed children and their families to be independent of the foster care system. Long-term foster care was considered least independent and assigned a value of “0,” guardianship more independent and assigned a value of “1,” and adoption most independent and assigned a value of “2.” We used linear regression—specifically the ordinary least squares method—to examine the relationship between foster care setting and permanency goal in foster care cases in each state, while taking into account the influence other variables may have had on a permanency goal. We found that there was no significant relationship between a child’s race or gender and his or her permanency goal in either state. Therefore, we excluded race and gender from the additional multivariate analyses we conducted. A regression analysis for cases in California indicated that foster care setting and a child’s age at entry into foster care were both related to permanency goal. Specifically, children in kinship care in California were more likely to have long-term foster care as the goal, and children in other settings were more likely to have guardianship or adoption as the goal. 
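Analyses of this kind can be illustrated with a brief sketch. The following Python fragment is not the study's actual code; the variable names and toy data are hypothetical, and it simply shows one common way to set up a crosstabular test at the .05 level and an ordinary least squares regression with the permanency goal coded 0, 1, and 2 as described above (it uses the pandas, scipy, and statsmodels packages).

```python
# Illustrative sketch of the two kinds of tests described above;
# the data below are toy values, not survey results.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

cases = pd.DataFrame({
    # goal: 0 = long-term foster care, 1 = guardianship, 2 = adoption
    "goal":         [0, 2, 1, 2, 0, 1, 2, 0, 1, 2],
    "kinship":      [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],   # 1 = kinship care
    "age_at_entry": [2, 6, 1, 9, 4, 3, 7, 5, 2, 8],
    "parent_substance_abuse": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],
})

# Crosstabular test: is foster care setting associated with goal?
table = pd.crosstab(cases["kinship"], cases["goal"])
chi2, p, dof, expected = chi2_contingency(table)
print("chi-square p-value:", round(p, 3))  # compared against .05

# Ordinary least squares: goal regressed on setting and covariates.
model = smf.ols("goal ~ kinship + age_at_entry + parent_substance_abuse",
                data=cases).fit()
print(model.params)  # coefficient signs indicate direction of association
```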
Our analyses also indicated that children who entered foster care in California at an early age were more likely than those who entered at a later age to have guardianship or adoption as the goal. A regression analysis for cases in Illinois indicated that foster care setting, child’s age at entry into foster care, and having a parent with a history of drug or alcohol abuse were all related to permanency goal. Specifically, in Illinois, children in kinship care and children who had entered foster care at an early age were more likely to have guardianship or adoption as the goal than children in other foster care settings. We also found that children who had a parent with a history of drug or alcohol abuse were more likely to have the goal of guardianship or adoption than children who had parents with no history of drug or alcohol abuse. See table I.4 for a summary of the results of our regression analyses related to permanency goals, including the variation explained (r² contributed) by each factor and the total variation explained (r²). We also performed a regression analysis to determine the relationship, if any, between foster care setting and time in foster care, taking into account the influence of permanency goal, a child’s age at entry into foster care, race, gender, and parents’ history of drug or alcohol abuse. We found that there was no significant relationship between a child’s race, gender, or having a parent with a history of drug or alcohol abuse and time in foster care in either state. Therefore, we excluded these variables from the additional multivariate analyses we conducted regarding time in foster care. Our regression analysis for cases in California indicated that there was no relationship between foster care setting and time in foster care. The goal of adoption and a child’s age at entry into foster care, however, were both related to time in the system. Specifically, adoption as the goal explained more than 12 percent of the variation in the length of time children spent in foster care. Children with adoption as the goal spent 47 fewer months, on average, in foster care than children with some other goal. A child’s age at entry explained almost 6 percent of the variation in the length of time spent in foster care. For each additional year of age, children spent an average of 2.4 fewer months in foster care. Among foster care cases in Illinois, we found that both foster care setting and the goal of adoption were related to the length of time children spent in foster care. Specifically, kinship care and adoption explained 3 percent and 1.4 percent of the variation in the amount of time children spent in foster care, respectively. Children in kinship care spent about 9 fewer months in foster care, on average, than children in other foster care settings. Similarly, children with the goal of adoption spent about 10 fewer months in the system, on average, than children with some other goal. See table I.5 for a summary of the results of our regression analyses related to the length of time in foster care, including the total variation explained (r²). Because the estimates we reported from our survey were based on samples of foster care cases, a margin of error or imprecision surrounds them. This imprecision is usually expressed as a sampling error at a given confidence level. We calculated sampling errors for estimates based on our survey at the 95-percent confidence level. The sampling errors for percentage estimates we cited in this report varied but did not exceed plus or minus 15 percentage points.
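As an illustration, a 95-percent sampling error for a percentage estimate from a simple random sample can be computed as in the minimal sketch below. The population and sample sizes shown are hypothetical, and the use of a finite population correction is an assumption about the exact method applied.

```python
# Hedged sketch of a 95-percent sampling error (margin of error) for a
# proportion from a simple random sample. Sizes here are hypothetical.
import math

def margin_of_error(p, n, N, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from n sampled cases drawn from a population of N."""
    fpc = math.sqrt((N - n) / (N - 1))      # finite population correction
    se = math.sqrt(p * (1 - p) / n) * fpc   # standard error of p
    return z * se

# e.g., a 50 percent estimate from 200 respondents out of 5,000 cases
print(round(100 * margin_of_error(0.5, 200, 5000), 1),
      "percentage points")  # about 6.8
```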
This means that if we drew 100 independent samples from each of our populations—samples with the same specifications as those we used in this study—in 95 of these samples the actual value in the population would fall within no more than plus or minus 15 percentage points of our estimate. The sampling error for our estimates of the average number of visits by caseworkers in each state never exceeded plus or minus 1.3 visits. Sampling errors for our estimates of the average length of time foster children in each state spent in the system did not exceed plus or minus 8.7 months. Sampling errors for our estimates of the number of foster care children in each state who spent 17 months or more in the system did not exceed plus or minus 2,650 children. Finally, in appendix V, the sampling error for estimates in each state of the (1) average number of a foster child’s siblings never exceeded plus or minus 0.5 siblings, (2) average age at which a child entered foster care never exceeded plus or minus 0.84 years, and (3) average age of children in foster care never exceeded plus or minus 0.92 years. Because of the relatively small number of responses in some of the tables in appendix V, and the resulting imprecision of any population estimates that would be based on those responses, tables in appendix V with fewer than 41 cases present only the number of sample cases for which each response was given. We made no population estimates concerning those responses. This appendix contains studies we identified that compare kinship care and other foster care. A brief description of study design and methodology follows each item. Appendix I describes how we identified research in this area and our criteria for including a study in this bibliography. Appendix III contains the results of analyses from the studies listed here. Benedict, Mary I., and R.B. White. “Factors Associated with Foster Care Length of Stay.” Child Welfare, Vol. 70, No. 1 (1991), pp. 45-58. This article contains the results of a longitudinal study of children in three urban and suburban jurisdictions in Maryland who entered foster care for the first time between January 1, 1980, and December 31, 1983. Data were obtained from the case records of a random sample of 689 of these children and covered a period that began the month a child entered foster care and ended in June 1986. A number of factors, such as the parents’ ability to care for and raise children and foster care placement with relatives, were examined to identify any relationship between them and the amount of time children spent in foster care. Berrick, J.D., R.P. Barth, and B. Needell. “A Comparison of Kinship Foster Homes and Foster Family Homes: Implications for Kinship Foster Care as Family Preservation.” Children and Youth Services Review, Vol. 16, Nos. 1-2 (1994), pp. 33-63. The researchers described the characteristics of a two-stage, random sample of the 88,000 children in foster care in California between January 1988 and the date when the article was written in 1991. A screening questionnaire was mailed to the foster parents of each of the 4,234 children in the initial sample. This sample was split evenly between traditional and relative foster care placements. For the screening questionnaire, foster parents responded in 1,178 (28 percent) of the cases sampled. In 600 of these cases (246 relative foster care placements and 354 traditional foster care placements), the foster parents completed a second questionnaire by either telephone or mail. 
If they cared for more than one foster child, they were asked to answer the questions for one child older than 2 who had resided in their home for at least 6 months. They provided information about the child’s physical and mental health, the types of services the child received, and their own perceptions of the child welfare agency and caseworkers. Although the gender, age, and ethnicity of children in the ultimate sample were similar to those of children in the total population, the researchers acknowledged that there was no way to determine the representativeness of the sample of providers. Berrick, J.D., and others. Assessment, Support, and Training for Kinship Care and Foster Care: An Empirically-Based Curriculum. Berkeley, Calif.: University of California at Berkeley, Child Welfare Research Center, 1998. A chapter in this curriculum reported the results of a study in which a sample of 161 kin and 96 nonkin caregivers living in the San Francisco Bay Area were interviewed in their homes. The study compared the two groups of caregivers on demographics, the quality of the relationship between caregiver and child, home safety, neighborhood safety, and other factors related to the quality of care the children received. Courtney, M.E. “Factors Associated with the Reunification of Foster Children with Their Families.” Social Service Review, March 1994, pp. 81-108. This study examined the relationship between factors such as a child’s age, type of foster care placement (kinship or nonkinship), reason for removal, and the probability that the child would return to his or her parents. The results were based on statewide administrative data on a random sample of 8,748 of the approximately 88,000 children who entered the foster care system in California for the first time between January 1988 and May 1991. The author cited as study limitations the short time period covered by the data, the limited amount of data recorded for each case, and the quality of items recorded in the database. Gebel, Timothy J. “Kinship Care and Non-Relative Family Foster Care: A Comparison of Caregiver Attributes and Attitudes.” Child Welfare, Vol. 75, No. 1 (1996), pp. 5-18. This study compared the demographics, attitudes, and perceptions of relative and nonrelative foster parents in one urban county in a southeastern state in 1993. The results were based on responses to a questionnaire mailed to the foster parents in random samples of 140 of the 450 relative foster care cases and 140 of the approximately 300 nonrelative foster care cases in that county at that time. Foster parents were asked about their attitudes toward the use of corporal punishment and their perceptions regarding children in their care, the behavior of these children, and the support they received from child welfare agencies. Foster parents in 111 of the traditional placements and 82 of the placements with relatives responded to the survey. Iglehart, Alfreda P. “Kinship Foster Care: Placement Service and Outcome Issues.” Children and Youth Services Review, Vol. 16, Nos. 1-2 (1994), pp. 107-22. This article described the results of a study that compared selected characteristics of adolescents in kinship care to those of adolescents not in kinship foster care. Between February and July 1988, caseworkers in Los Angeles County extracted this information from the case files of all 1,642 children aged 16 or older who were in foster care during that period. 
Data for about 990 adolescents—352 in kinship care and 638 in traditional foster care—were analyzed for this study. Among the characteristics compared were gender, race and ethnicity, reason for removal, total number of placements, length of time in current placement, and degree of agency case monitoring. Le Prohn, Nicole S. “Relative Foster Parents: Role Perceptions, Motivation and Agency Satisfaction.” Ph.D. dissertation, University of Washington, Seattle, Washington, 1993. This researcher examined the relationship between relative and nonrelative placement with respect to what foster parents believed their role to be, what motivated them to become foster parents, and how satisfied they were with the foster care agency. Associations between foster placement type and the children’s behavior and amount of contact with their parents were also examined. The foster families selected for the study were families in the Casey Family Program, a long-term foster care program with offices in 13 states for children who are unable to be reunited with their birth parents and are unlikely to be adopted. Results were based on a random sample of about 175 nonrelative foster homes selected from all nonrelative foster homes in the Casey program in 1992. That group was compared with the entire population of about 130 relative foster homes in the Casey program during 1992. Data were collected from foster parents using a mail questionnaire and a telephone interview. Eighty-two relative foster homes and 98 nonrelative homes were included in the analysis. Le Prohn, Nicole S., and Peter J. Pecora. Research Report Series: The Casey Foster Parent Study Research Summary. Seattle, Wash.: Casey Family Program, 1994. Same description as for Le Prohn dissertation above. Magruder, Joseph. “Characteristics of Relative and Non-Relative Adoptions by California Public Adoption Agencies.” Children and Youth Services Review, Vol. 16, Nos. 1-2 (1994), pp. 123-31. The author compared adoptions in California by relatives and nonrelatives with respect to children’s gender, ethnicity, and time in placement before adoption and the characteristics of the adoptive parents and their households. Study results were based on the 3,214 public adoptions that took place during that state’s fiscal year 1992, for which data were available. Needell, B. “Placement Stability and Permanence for Children Entering Foster Care as Infants.” Ph.D. dissertation, University of California at Berkeley, Berkeley, California, 1996. A number of samples were drawn for this study from a longitudinal database containing all cases in the California Foster Care Information System from 1988 through 1994. The primary sample consisted of all 43,066 children in California who entered foster care before their first birthday between 1988 and 1994. Analysis examined the types of placement, length of stay, reasons for infants’ reentry into foster care after reunification, and factors that may have led to an infant’s adoption or reunification. Needell, B., and others. Performance Indicators for Child Welfare Services in California: 1994. Berkeley, Calif.: University of California at Berkeley, School of Social Welfare, Child Welfare Research Center, 1995. The results of this study were based on a longitudinal database of 233,000 cases in the California Foster Care Information System. These children were in foster care during 1988 or had entered care before the beginning of 1995.
The percentage of children in different types of placements who exited the system by reunification, adoption, guardianship, and emancipation was reported, as well as the median length of the children’s first stay in foster care by foster care placement type. The authors also examined the effect of ethnicity, age at time of entry, and reasons for removal from the home on the relationships between placement type and foster care outcome and between placement type and length of stay. Needell, B., and others. Performance Indicators for Child Welfare Services in California: 1996. Berkeley, Calif.: University of California at Berkeley, School of Social Welfare, Child Welfare Research Center, 1997. In this study, the longitudinal database used in the 1995 Needell and others study cited above was expanded to 300,000 children who were in foster care during 1988 or had entered care before 1997. The analyses were similar to those in the 1995 study. Poindexter, Garthia M. “Services Utilization by Foster Parents and Relatives.” Master of Social Work thesis, California State University, Long Beach, California, 1996. The author reported on the use of social services by relative and nonrelative foster parents in Los Angeles County based on 40 foster care cases selected at random from the population of children who entered foster care in that county during 1994. Of the 40 cases, 22 were relative foster care placements and 18 were nonrelative foster care placements. Scannapieco, Maria, Rebecca L. Hegar, and Catherine McAlpine. “Kinship Care and Foster Care: A Comparison of Characteristics and Outcomes.” Families in Society, Vol. 78, No. 5 (1997), pp. 480-88. From case file information for a cross-section of children in foster care in Baltimore County on March 23, 1993, the researchers attempted to determine whether there were differences between kinship and other foster care placements in terms of permanency planning goals. Of the 106 children sampled, 47 were in kinship care and 59 were in other types of placements. Testa, Mark F. Home of Relative (HMR) Program in Illinois Interim Report. Chicago, Ill.: University of Chicago, School of Social Service Administration, 1993. The author used a database that included information about all children in foster care in Illinois between fiscal years 1965 and 1992 to establish trends in kinship care placements in Illinois and to describe various characteristics of foster children and their foster care outcomes. Testa, Mark F. “Kinship Care in Illinois.” In J.D. Berrick, R.P. Barth, and N. Gilbert (eds.), Child Welfare Research Review, Vol. 2. New York: Columbia University Press, 1997. Pp. 101-29. Focusing on reunification and discharge rates among children in foster care in Illinois between fiscal years 1976 and 1992, the researcher examined the effect of selected factors such as age, race, and type of foster care placement on the likelihood of reunification or discharge. Testa, Mark F. “Professional Foster Care: A Future Worth Pursuing?” Child Welfare: Special Edition on Family Foster Care in the 21st Century. Forthcoming. This study examined the relationship between children’s placement type and whether or not they (1) remained close to their community of origin, (2) were placed with other siblings in the same household, and (3) achieved permanency or stayed in the same foster care setting.
The researcher used administrative data from Cook County, Illinois, for three different foster care recruitment programs and two random samples, one of 995 kinship care and one of 852 traditional foster care placements. The samples included only placements between December 1, 1994, and September 30, 1996. Administrative data through September 30, 1997, were used to determine whether or not the children stayed in one foster care setting or left the foster care system. Thornton, Jesse L. “Permanency Planning for Children in Kinship Foster Homes.” Child Welfare, Vol. 70, No. 5 (1991), pp. 593-601. Three surveys were conducted in this study. Semi-structured interviews were administered to a random sample of 20 kinship caregivers in New York City to determine their attitudes toward adoption. Eighty-six foster care caseworkers in New York City completed questionnaires that asked for their perceptions about kinship caregivers’ willingness to adopt. Finally, to compare permanency goals for children in kinship care to those for children in traditional care, the records from 95 active kinship foster care cases in April 1985 were examined along with statistics from an administrative database. U.S. General Accounting Office. Foster Care: Children’s Experiences Linked to Various Factors; Better Data Needed, GAO/HRD-91-64. Washington, D.C.: Sept. 11, 1991. Data on children who entered or left foster care in 1986 in Georgia, Illinois, New York, Oregon, South Carolina, and Texas and Los Angeles County and New York City were analyzed for the relationship of age, ethnicity, gender, location, reason for entry, and foster care placement type to length of stay. For Georgia, Oregon, South Carolina, and Texas, computerized data files of the case records for all children entering or leaving foster care during 1986 were used. For New York, Illinois, Los Angeles County, and New York City, random samples of children who had been discharged from foster care during 1986 were used; the New York and Illinois samples each contained 1,488 children, the sample for Los Angeles County contained 209 children, and the sample for New York City contained 130 children. U.S. General Accounting Office. Foster Care: Health Needs of Many Young Children Are Unknown and Unmet, GAO/HEHS-95-114. Washington, D.C.: May 26, 1995. A random sample of 137 case records of foster children who had been in either kinship or traditional care exclusively was selected from the case records of all foster children younger than 3 years old in Los Angeles County and New York City during 1991 to examine the relationship between placement type and the receipt of health services by foster children in this age group. U.S. General Accounting Office. Foster Care: Services to Prevent Out-of-Home Placements Are Limited by Funding Barriers, GAO/HRD-93-76. Washington, D.C.: June 29, 1993. In this study of the statutory and fiscal barriers the states faced in delivering child welfare services, the researchers used caseload data for the last day of either calendar or fiscal year 1992 in California, Michigan, and New York to describe trends in foster care and child welfare services. Wulczyn, F.H., and R.M. George. “Foster Care in New York and Illinois: The Challenge of Rapid Change.” Social Service Review, June 1992, pp. 278-94. Aggregated administrative data on all children in New York’s child welfare system and similar data from Illinois were used to compare child welfare trends in these two states from 1983 through 1989. 
Shifts in total caseload size, average age of children entering foster care, and the number of relative foster care placements were examined. The researchers also determined the proportion of children admitted to foster care during 1988 in each state who were (1) discharged within 12 months, (2) discharged between 12 and 24 months, and (3) still in the system after 24 months. They compared the proportions in kinship care placements with those in nonkinship care placements.

This appendix contains the results of analyses from the studies we identified that compared kinship care and other foster care. These results are presented in tables organized by research question. Sources are noted after each table. In some instances, the results in the tables were based on data from entire populations of foster children. When they were based on data from samples of foster children, if the researcher reported that a difference between kinship and other foster care was statistically significant, the significance level is noted in parentheses in the table. Appendix II contains a description of the design and methodology of the studies in this appendix.

Table III.1: Did the Foster Child Remain in the Same Community or Neighborhood He or She Lived in Before Entering Foster Care? [The data for this table and for the related tables on foster children's contact with their parents and placement and contact with their siblings are not reproduced here; only the row labels and significance levels remain.]

Table III.6: How Many Placements in Foster Care Did the Foster Child Have?
[The data for table III.6 and for the subsequent tables on foster children's integration into the foster family; foster caregivers' age, marital status, education, health, income, and home safety practices; caregiver training and support services; and caseworker visits are not reproduced here; only the row labels and significance levels remain.]

Table III.17: What Required Health Services Did the Foster Child Receive?
[The data for table III.17 and for the subsequent tables on length of stay in foster care are not reproduced here; two of the tables' explanatory footnotes follow.] Footnote g: For example, in Georgia a child in kinship care is almost three times as likely as a child in other foster care settings to remain in care for 1 year or longer. Source: U.S. General Accounting Office, Foster Care: Children’s Experiences Linked to Various Factors; Better Data Needed, GAO/HRD-91-64 (Washington, D.C.: Sept. 11, 1991). Footnote i: For example, children who entered kinship care in fiscal years 1979 to 1980 were 25 percent less likely to be discharged than children who entered other foster care settings before fiscal year 1977, and children who entered other foster care settings in fiscal years 1979 to 1980 were 14 percent more likely to be discharged than children who entered other foster care settings before fiscal year 1977. Discharge includes return to parental custody, placement in private guardianship, adoption, or staying in the child welfare system until age 18. Source: Mark F. Testa, “Kinship Care in Illinois,” in J.D. Berrick, R.P. Barth, and N. Gilbert (eds.), Child Welfare Research Review, Vol. 2 (New York: Columbia University Press, 1997), pp. 101-29. [The data for the tables on reunification, adoption, exit, and emancipation within 4 years and on other permanency outcomes, including entry into subsidized guardianship, are not reproduced here.]

This appendix displays the frequency distributions of responses to questions in our survey of foster care cases in California and Illinois. Means and medians are provided for some items. In addition, selected information from the states’ administrative records is provided about these cases. Appendix I includes a detailed description of our survey methodology, and the questionnaire for this survey is in appendix IV. The percentage given for each response category constitutes our estimate of the proportion of all foster care cases in each state’s system as of September 15, 1997, that had been in the system since at least March 1, 1997. Because of the relatively small number of responses in some of the tables in this appendix and the resulting imprecision of any population estimates that might be based on those responses, tables with fewer than 41 cases present only the number of sample cases for which each response was given. No population estimates are given for those responses.
The sampling errors for the percentage estimates vary. No sampling error for any of the percentage estimates exceeds plus or minus 15 percentage points. Table V.1 provides a more specific breakdown of sampling errors for the percentage estimates by number of cases for which there was a response. The sampling error for our estimates of the average number of caseworker visits in each state never exceeds plus or minus 1.3 visits. The sampling error for estimates in each state of the average number of a foster child’s siblings never exceeds plus or minus 0.5 siblings. The sampling error for estimates of the (1) average length of time all foster children in each state had spent in the system up until September 15, 1997, never exceeds plus or minus 8.7 months, (2) average age at which children entered foster care never exceeds plus or minus 0.84 years, and (3) average age of children in foster care never exceeds plus or minus 0.92 years. In tables V.9 through V.12, we provide the results of these three calculations for subpopulations of all foster children. Because some of these calculations are based on a relatively small sample of cases in each subpopulation, they do not constitute very precise estimates of the actual averages in the entire subpopulation in each state. These calculations refer to only the cases in our sample.

[The appendix V frequency tables are not reproduced here; in each table, responses were tested by comparing each category, or combined categories, against the rest combined. Two footnotes from these tables follow.] While the question allowed answers about each of a pair of caregivers, the table shows the answers for only the younger one. While the question allowed answers about each of a pair of caregivers, the table shows the answers for the one in better health.
(continued) Table V.5: Caregiver’s Performance of Parenting Tasks n=88 No, first 2 n=114 No, first 2 categories combined versus the rest combined categories combined versus the rest combined (continued) Yes, first 2 categories combined versus the rest combined n=114 No, first 2 categories combined versus the rest combined n=88 No, first 2 n=115 No, first 2 categories combined versus the rest combined categories combined versus the rest combined (continued) n=88 No, first 2 n=115 No, first 2 categories combined versus the rest combined categories combined versus the rest combined n=88 No, first 2 n=114 No, first 2 categories combined versus the rest combined categories combined versus the rest combined n=87 No, first 2 n=115 No, first 2 categories combined versus the rest combined categories combined versus the rest combined n=82 No, first 2 n=98 No, first 2 categories combined versus the rest combined categories combined versus the rest combined (continued) n=83 No, first 2 n=112 No, first 2 categories combined versus the rest combined categories combined versus the rest combined n=86 No, first 2 n=113 No, first 2 categories combined versus the rest combined categories combined versus the rest combined (continued) Yes, first 2 categories combined versus the rest combined n=87 No, first 2 n=114 No, first 2 categories combined versus the rest combined categories combined versus the rest combined n=87 No, first 2 n=111 No, first 2 categories combined versus the rest combined categories combined versus the rest combined (continued) Yes, first 2 categories combined versus the rest combined (continued) Yes, first category versus the rest combined category versus the rest combined n=50 No, first 3 n=82 No, first 3 categories versus the rest combined categories versus the rest combined n=50 No, first 3 n=82 No, first 3 categories versus the rest combined categories versus the rest combined (continued) Yes, first 3 categories versus the rest combined n=131 No, first 3 categories versus the rest combined category is too small to perform the test category is too small to perform the test category is too small to perform the test category is too small to perform the test (continued) Yes, first 2 categories combined versus rest combined n=99 No, first 2 category is too small to perform the test categories combined versus rest combined n=98 No, first 3 category is too small to perform the test categories combined versus rest combined (continued) category is too small to perform the test category is too small to perform the test (continued) While the question allowed answers about each of a pair of caregivers, the table shows the answers for only the one whom the child knew best. While the questions allow answers about each parent, the table shows answers for the parent whose visitation restrictions are least likely to be enforced. 
[The data for tables V.9 through V.12 (cases with the goal of reunification, adoption, guardianship, and long-term foster care, respectively) are not reproduced here; no significance test was performed where a response category was too small.]

Related GAO products include the following:
Juvenile Courts: Reforms Aimed to Better Serve Maltreated Children (GAO/HEHS-99-13, Jan. 11, 1999).
Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers (GAO/HEHS-98-182, Sept. 30, 1998).
Child Protective Services: Complex Challenges Require New Strategies (GAO/HEHS-97-115, July 21, 1997).
Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise (GAO/HEHS-97-73, May 7, 1997).
Child Welfare: Complex Needs Strain Capacity to Provide Services (GAO/HEHS-95-208, Sept. 26, 1995).
Foster Care: Health Needs of Many Young Children Are Unknown and Unmet (GAO/HEHS-95-114, May 26, 1995).
Foster Care: Parental Drug Abuse Has Alarming Impact on Young Children (GAO/HEHS-94-89, Apr. 4, 1994).
Pursuant to a congressional request, GAO reviewed how well kinship care is serving foster children, focusing on the: (1) quality of care that children in kinship care receive compared with that received by other foster children, as measured by a caseworker's assessment of a caregiver's parenting skills, the extent to which a foster child is able to maintain contact with familiar people and surroundings, and a caregiver's willingness to enforce court-ordered restrictions on parental visits; (2) frequency with which state child welfare agencies pursue various permanent living arrangements and the time children in kinship care have spent in the system compared with other foster children; and (3) recent state initiatives intended to help ensure that children in kinship care receive good quality foster care and are placed in permanent homes in a timely manner. GAO noted that: (1) GAO's survey of open foster care cases in California and Illinois showed that in most respects the quality of both kinship and other foster care was good and that the experiences of children in kinship care and children in other foster care settings were comparable; (2) GAO found that caregivers both in kinship care and in other foster care settings demonstrated good parenting skills overall; (3) GAO also confirmed the belief that there is more continuity in the lives of children in kinship care before and after they enter foster care than there is in other foster children's lives; (4) in cases in which the courts have restricted parental visits with foster children to ensure the children's safety, the proportion of cases in which the caseworker believed that the caregiver was likely to enforce the restrictions was somewhat smaller among kinship care cases than among other foster care cases; (5) some of the standards that California and Illinois use to ensure good quality foster care and the level of support each state provides to foster caregivers are lower for kinship care than other types of foster care; (6) previous research on children who have left foster care has shown that children who had been in kinship care were less likely to be adopted and stayed longer in foster care than other foster children; (7) between California and Illinois, GAO's survey showed no consistent findings regarding the relationship between kinship care and permanency goals or the time foster children had spent in the system; (8) in Illinois, kinship care cases were more likely to have a permanency goal of adoption or guardianship than other foster care cases; (9) Illinois has found that kinship caregivers are willing to adopt, and Illinois is actively pursuing adoption in kinship care cases; (10) in California, kinship care cases were less likely than other foster care cases to have adoption or guardianship as a goal; (11) according to California officials, this may be because, at the time of GAO's survey, the state had only recently begun to offer adoption and guardianship options specifically designed for a foster child's relatives; (12) in California, there was no significant difference between the average length of time that children in kinship care and children in other settings had spent in the system; (13) in Illinois, children in kinship care had spent significantly less time in the system than other foster children; and (14) both California and Illinois are now taking steps to better ensure the good quality of kinship care and to encourage kinship caregivers to provide permanent homes for foster children who cannot return to 
their parents.
Agricultural inspections at U.S. ports of entry had been the responsibility of the U.S. Department of Agriculture (USDA) since 1913. Following the events of September 11, 2001, the Congress passed the Homeland Security Act of 2002, which combined the inspection activities of the Department of the Treasury’s Customs Service, the Department of Justice’s Immigration and Naturalization Service, and USDA’s Animal and Plant Health Inspection Service (APHIS) into Customs and Border Protection (CBP) within the newly created Department of Homeland Security (DHS). Among other things, the act (1) transferred to DHS APHIS’s responsibility for inspecting passenger declarations and cargo manifests; inspecting international air passengers, baggage, cargo, and conveyances; and holding suspect articles in quarantine to prevent the introduction of plant or animal diseases; and (2) authorized USDA to transfer up to 3,200 agricultural quarantine inspection (AQI) personnel to DHS. The Secretaries of DHS and USDA signed a memorandum of agreement in February 2003, agreeing to work cooperatively to implement the relevant provisions of the Homeland Security Act of 2002 and to ensure necessary support for and coordination of the AQI program functions. The agreement detailed how the AQI program was to be divided, with some functions transferred to DHS and others retained by USDA. Agricultural import and entry inspection functions transferred to DHS included (1) reviewing passenger declarations and cargo manifests and targeting high-risk agricultural passenger and cargo shipments for inspection; (2) inspecting international passengers, luggage, cargo, mail, and means of conveyance; and (3) holding suspect cargo and articles for evaluation of plant and animal health risk in accordance with USDA regulations, policies, and guidelines. Functions remaining in USDA included (1) providing risk-analysis guidance, including in consultation with DHS, and setting inspection protocols; (2) applying remedial measures other than destruction and re-exportation, such as fumigation, to commodities, conveyances, and passengers; and (3) providing pest identification services at plant inspection stations and other facilities. The parties agreed to cooperate in the financial management functions, including development of annual plans and budgets, AQI user fees, and funds control and financial reporting procedures. To carry out its new inspection responsibilities, CBP established a “One Face at the Border” initiative, which unified the customs, immigration, and agricultural inspection processes by cross-training CBP officers and agriculture specialists to (1) prevent terrorists, terrorist weapons, and contraband from entering the United States; (2) identify people seeking to enter the United States illegally and deny them entry; and (3) protect U.S. agricultural and economic interests from harmful pests and diseases. Unlike the Customs Service and the Immigration and Naturalization Service, which were moved to DHS in their entirety, APHIS continues to exist within USDA and retains responsibility for conducting veterinary inspections of live imported animals; establishing policy for inspections and quarantines; providing risk analysis; developing and supervising training on agriculture for CBP agriculture specialists; conducting specialized inspections of plant or pest material; identifying agricultural pests; and collecting AQI user fees.
CBP and APHIS agreed to support their respective AQI duties by sharing funds from USDA-collected AQI user fees levied on international air passengers, commercial aircraft, ships, trucks, and railroad cars. CBP agriculture specialists are assigned to 161 of the 317 ports of entry that CBP staffs. As shown in figure 1, these ports collectively handle thousands of sea containers and aircraft and over a million passengers each day. Each port of entry can comprise one or more facilities—airports, seaports, or land border crossings—where CBP officers and agriculture specialists process arriving passengers and cargo. For example, the port of Buffalo, New York, has an airport and land border inspection facilities, whereas the Port of Atlanta has only the Atlanta Hartsfield/Jackson International Airport. Individual port directors are responsible for overseeing port operations and assigning agriculture specialists to specific port facilities. The ports are organized into 20 district field offices, each with a director of field operations who is responsible for the operation of multiple ports in a given geographic area and serves as a liaison between CBP headquarters and port management. Day-to-day operations for agriculture specialists may include inspecting pedestrians, passengers, cargo, and vehicles for pests and contraband. Such inspections generally follow a two-stage process—primary and secondary inspections. Figure 2 shows the passenger inspection process at an international airport, as an example. A primary inspection could include questioning passengers about their origin and destination, reviewing their written declarations, and screening their baggage with detector dogs to determine whether to refer the passengers for a secondary inspection. A secondary inspection involves a more detailed questioning of the passenger and an examination of their baggage by X-ray and, if necessary, by hand search. Procedures for inspecting commercial shipments vary according to factors such as the type of agricultural product, risk level associated with the product, and country of origin. To reduce the risk of foreign pests and disease entering the United States, agriculture specialists review cargo documents to select shipments for more detailed physical inspection. The Food, Agriculture, Conservation, and Trade Act of 1990, as amended (FACT Act), authorizes APHIS to set and collect user fees for AQI services provided in connection with the arrival of international air passengers and conveyances (e.g., commercial aircraft and trucks) at a port in the customs territory of the United States. The six AQI user fees are assessed on international air passengers, commercial aircraft, commercial vessels, commercial trucks, commercial truck decals, and commercial railroad cars. These user fees are paid directly by shipping companies or indirectly by air passengers through taxes on tickets. The international passenger and commercial aircraft fees are calculated and remitted quarterly by the individual airline companies to USDA, while rail car fees are remitted monthly. CBP collects the commercial vessel, truck, and truck decal fees at the time of inspection. International air passengers and commercial conveyances entering the United States from Canada are exempt from the user fees. 
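The fee categories and collection arrangements just described lend themselves to a simple tabular encoding. The sketch below is illustrative only: dollar amounts are omitted because the fees have been revised over time, and the party remitting the railroad-car fee is an inference from the text rather than a stated fact.

```python
# Illustrative encoding of the six AQI user-fee categories and their
# collection arrangements as described above. Amounts are omitted, and
# the "remitted_by" entry for rail cars is an inference.
AQI_USER_FEES = {
    "international_air_passenger": {"schedule": "quarterly",     "remitted_by": "airlines",  "collected_by": "USDA"},
    "commercial_aircraft":         {"schedule": "quarterly",     "remitted_by": "airlines",  "collected_by": "USDA"},
    "commercial_railroad_car":     {"schedule": "monthly",       "remitted_by": "railroads", "collected_by": "USDA"},
    "commercial_vessel":           {"schedule": "at inspection", "remitted_by": "operator",  "collected_by": "CBP"},
    "commercial_truck":            {"schedule": "at inspection", "remitted_by": "operator",  "collected_by": "CBP"},
    "commercial_truck_decal":      {"schedule": "at inspection", "remitted_by": "operator",  "collected_by": "CBP"},
}

def fee_due(category: str, arriving_from: str) -> bool:
    """A fee applies to the six categories above, except that air
    passengers and commercial conveyances from Canada are exempt."""
    return category in AQI_USER_FEES and arriving_from != "Canada"

print(fee_due("commercial_truck", "Mexico"))     # True
print(fee_due("commercial_aircraft", "Canada"))  # False (exempt)
```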
The FACT Act authorizes user fees for (1) providing AQI services for the conveyances, cargo, and passengers listed above; (2) providing preclearance or preinspection at a site outside the customs territory of the United States to international airline passengers, commercial vessels, commercial trucks, commercial railroad cars, and commercial aircraft; and (3) administering the AQI user-fee programs. AQI user fees have been revised several times since the FACT Act was passed in 1990. In November 1999, APHIS published a final rule in the Federal Register that amended the user-fee regulations by adjusting the fees charged for certain AQI services for part of fiscal year 2000 and for fiscal years 2001 and 2002. The user-fee adjustments were intended to ensure that APHIS covered the anticipated actual cost of providing AQI services. Subsequent rulemaking, culminating in a final rule published in the Federal Register on January 24, 2003, extended the adjusted fees indefinitely, beyond fiscal year 2002, until the fees are revised again. On December 9, 2004, APHIS published an interim rule to raise user fees, effective January 1, 2005.

Since the transfer of agricultural quarantine inspections to CBP, the agency has increased training on agricultural issues for CBP officers and developed a national standard for in-port training. In addition, CBP and APHIS have enhanced the ability of agriculture specialists to better target inspections at the ports. The two agencies also established a joint program to review the agricultural inspection program on a port-by-port basis, and CBP created new agriculture liaison positions at the field office level to advise field office directors on agricultural issues.

CBP has undertaken several training initiatives for CBP officers, whose primary duty is customs and immigration inspection. Under CBP, newly hired CBP officers receive 16 hours of training on agricultural issues at the Federal Law Enforcement Training Center (FLETC) in Glynco, Georgia. Under APHIS, agriculture courses for Customs and Immigration officers had been limited to 4-hour and 2-hour classroom overviews of agricultural issues, respectively. The revamped training provides newly hired CBP officers with basic agricultural information so that they know when to either prohibit entry or refer potential agricultural threats to CBP agriculture specialists. In addition to a more comprehensive course, the curriculum provides for additional testing of AQI knowledge. For example, classroom simulations include agricultural items, and CBP officers' written proficiency tests now include questions on agricultural inspections. In addition, CBP and APHIS have undertaken an initiative to expand agriculture training for all CBP officers at their respective ports of entry. The purpose of these modules—designed for Customs and Immigration officers—is to provide officers with the ability to make informed decisions on agricultural items at high-volume border traffic areas and to facilitate the clearance of travelers and cargo at ports without agriculture specialists, such as some ports of entry along the Canadian border.
According to agency officials, the agencies have now expanded training to 16 hours of lecture and 8 hours of on-the-job training, including environment-specific modules for six inspection environments: northern border, southern border, international mail/expedited courier, maritime, airport processing, and preclearance (i.e., inspections of passengers and cargo prior to arrival in the United States).

Additionally, CBP and APHIS have formalized the in-port training program and have developed a national standard for agriculture specialists. Under APHIS, depending on the port to which they were assigned, newly hired agriculture specialists spent anywhere from 1 week to 1 year shadowing senior agriculture specialists. After the transfer, CBP formalized this process to ensure that all agriculture specialists were receiving the necessary on-the-job training. The formalized process includes a checklist of activities for agriculture specialists to master and is structured in two modules: an 8-week module on passenger inspection procedures and a 10-week module on cargo inspection procedures. Based on our survey of agriculture specialists, we estimate that 75 percent of specialists hired by CBP believe that they received sufficient training (on the job and at the Professional Development Center) to enable them to perform their agriculture inspection duties. An estimated 13 percent of specialists believe that they probably or definitely did not receive adequate training, and another 13 percent either were uncertain or did not answer the question. (See app. II, survey question 12.)

CBP and APHIS have also taken steps to better identify and target shipments and passengers that present a potentially high risk to U.S. agriculture. Under CBP, some agriculture specialists receive training on and access to computer applications such as CBP's Automated Targeting System (ATS), a computer system that, among other things, is meant to (1) identify high-risk inbound and outbound passengers and cargo for terrorist links and for smuggling of weapons of mass destruction, drugs, currency, and other contraband; (2) focus limited inspection resources on higher-risk passengers and cargo; (3) facilitate expedited clearance or entry for low-risk passengers and cargo; and (4) enable users to create ad hoc queries to filter data to meet specific research needs. ATS helps agriculture specialists select which cargo shipments to inspect based on detailed information contained in the cargo manifests and other documents that shipping companies are required to submit before a ship arrives in port. CBP and APHIS headquarters personnel also use ATS data to identify companies that have violated U.S. quarantine laws. For example, according to a senior APHIS official, the two agencies recently used ATS to help identify companies that have smuggled poultry products in seafood containers from Asia. The United States currently bans uncooked poultry products from Asian countries because of concerns over avian influenza.

CBP and APHIS are working together to further refine ATS's effectiveness in identifying and targeting shipments of agricultural products. Specifically, APHIS assigned a permanent liaison to the CBP National Targeting Center in April 2005 to help develop a rule set (a computerized set of criteria) that will automate the process of identifying companies or individuals that pose a significant agroterrorism risk to U.S. agriculture.
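This report does not describe the rule set's internal design. Conceptually, however, such a rule set scores each shipment against weighted criteria and flags shipments whose scores reach a threshold. The following minimal sketch illustrates the idea; all field names, rules, weights, and the threshold are hypothetical and should not be read as ATS's actual logic.

```python
# Hypothetical sketch of a manifest-screening rule set: each rule pairs a
# predicate over a manifest record with a weight, and shipments whose total
# score meets a threshold are flagged for inspection. All field names, rules,
# weights, and the threshold are illustrative, not ATS's actual design.
HIGH_RISK_ORIGINS = {"Country A", "Country B"}   # placeholder origin list
PRIOR_VIOLATORS = {"Example Shipping Co."}       # companies with past quarantine violations

RULES = [
    (lambda m: m["commodity"] == "poultry" and m["origin"] in HIGH_RISK_ORIGINS, 40),
    (lambda m: m["description"].lower() in {"general merchandise", "foodstuffs"}, 20),  # vague description
    (lambda m: m["shipper"] in PRIOR_VIOLATORS, 50),
]
THRESHOLD = 50  # flag shipments scoring at or above this value

def flag_for_inspection(manifest: dict) -> bool:
    """Sum the weights of all matching rules and compare against the threshold."""
    score = sum(weight for rule, weight in RULES if rule(manifest))
    return score >= THRESHOLD

# Example: poultry from a high-risk origin, shipped by a prior violator under a
# vague description, scores 40 + 20 + 50 = 110 and is flagged.
manifest = {"commodity": "poultry", "origin": "Country A",
            "shipper": "Example Shipping Co.", "description": "Foodstuffs"}
print(flag_for_inspection(manifest))  # True
```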
According to the APHIS liaison, the rule set will eventually be applicable to nonagroterrorism events, such as smuggling and shipments that are not compliant with U.S. quarantine regulations. CBP officials told us that the agency has set a September 2006 release date for the first version of the rule set. CBP officials also told us that the agency is testing an interim rule set for high-risk commodities regulated by USDA's Food Safety and Inspection Service (FSIS) that they expect to release in July 2006.

In addition to ATS, agriculture specialists now also have access to the Treasury Enforcement Communications System (TECS)—a computerized information system for identifying individuals and businesses suspected of violating federal law. TECS serves as a communications system between Treasury law enforcement offices and other federal, state, and local law enforcement agencies. ATS and TECS complement other targeting tools that agriculture specialists already used under APHIS. Specifically, agriculture specialists continue to use CBP's Automated Commercial System to review the manifests of incoming shipments. At select ports, agriculture specialists also continue to use APHIS's Agricultural Quarantine Inspection Monitoring (AQIM) system to estimate the amount of quarantine items or pests entering the country. CBP agriculture specialists submit AQIM data to APHIS, where they are used to estimate the extent to which agricultural pests and diseases approach the United States through various pathways (e.g., international air passengers).

In fiscal year 2005, CBP and APHIS established a formal assessment process to ensure that ports of entry continue to carry out agricultural inspections in accordance with APHIS's regulations, policies, and procedures. According to an APHIS official, the new formal assessment process is a means for APHIS to gather some of the information necessary to formulate agricultural inspection policy. The assessments, called Joint Agency Quality Assurance Reviews, entail a visit to ports by APHIS and CBP officials, who complete a questionnaire based on interviews with the port director and other CBP personnel and on direct observation of port operations by the review team. The reviews cover topics such as (1) coordination with other federal agencies, (2) training for agriculture specialists, (3) agriculture specialists' access to regulatory manuals, and (4) processes for handling violations at the port, inspecting passenger baggage and vehicles, and intercepting, seizing, and disposing of confiscated materials. The review teams report on best practices and deficiencies at each port and make recommendations for corrective actions. For example, a review of two ports found that both were significantly understaffed and that CBP agriculture specialists at one of the ports were conducting superficial inspections of commodities that should have been inspected more intensively. At the same ports, the review identified best practices in the placement of personnel from CBP, APHIS, and FDA in the same facility and in the targeting of tile imports from Italy and Turkey for possible agroterrorism risks. As of February 2006, the joint review team had conducted reviews of nine ports, and the agencies plan to complete seven additional reviews in fiscal year 2006, according to a senior APHIS official.
In May 2005, CBP required each director in its 20 field offices to identify and appoint an agriculture liaison, with background and experience as an agriculture specialist, to provide CBP field office directors with agriculture-related input for operational decisions and to provide agriculture specialists with senior-level leadership. CBP officials told us that all district field offices had established the liaison position as of January 2006. The CBP agriculture liaison's duties include, among other things, advising the director of the field office on agricultural functions; providing oversight for data management, statistical analysis, and risk management; and providing oversight and coordination for agriculture inspection alerts. Since the creation of the position, agriculture liaisons have begun to facilitate the dissemination of urgent alerts from APHIS to CBP. For example, following a large increase in the discovery of plant pests at a port in November 2005, the designated agriculture liaison sent notice to APHIS, which then issued alerts to other ports. Subsequent communications between APHIS and CBP identified the agriculture liaison at the initial port as a contact for providing technical advice on inspecting and identifying this type of plant pest.

Several management and coordination problems exist that may leave U.S. agriculture vulnerable to foreign pests and diseases. CBP has not developed sufficient performance measures to manage and evaluate the AQI program. CBP also has not developed a staffing model to determine how to allocate newly hired agriculture specialists or used available data to evaluate the effectiveness of the AQI program. In addition, information sharing and coordination between CBP and APHIS have been problematic. Finally, the agriculture canine program has deteriorated.

The Government Performance and Results Act of 1993 requires federal agencies to develop and implement appropriate measures to assess program performance. Yet, 3 years after the transfer, CBP had not developed and implemented its own performance measures for the AQI program, despite changes in the program's mission. Instead, according to senior CBP officials, CBP carried over two measures that APHIS used to assess the AQI program before the transfer: the percentages of (1) international air passengers and (2) border vehicle passengers that comply with AQI regulations. However, these measures address only two pathways, neglecting the commercial aircraft, vessel, and truck cargo pathways. CBP's current performance measures also do not provide information about changes in inspection and interception rates, which could prove more useful in assessing the efficiency and effectiveness of agriculture inspections in different regions of the country or at individual ports of entry. Nor do they address the AQI program's expanded mission—to prevent agroterrorism while facilitating the flow of legitimate trade and travel. CBP officials told us that the agency recognizes that the current performance measures are not satisfactory and is planning new performance measures for the fiscal year 2007 performance cycle. However, such measures had not yet been developed at the time of our review.

To accomplish the split in AQI responsibilities in March 2003, APHIS transferred a total of 1,871 agriculture specialist positions, including 317 vacancies, and distributed these positions across CBP's 20 district field offices.
According to senior officials involved with the transfer, APHIS's staffing determinations were made under tight time frames and required much guesswork. As a result, from the beginning, CBP lacked adequate numbers of agriculture specialists and had little assurance that the appropriate numbers of specialists were staffed at the ports of entry. Since then, CBP has hired more than 630 specialists, but the agency has not yet developed or used a risk-based staffing model for determining where to assign its agriculture specialists. Our guidelines for internal control in the federal government state that agencies should have adequate mechanisms in place to identify and analyze risks and determine what actions should be taken to mitigate them. One such risk involves the changing nature of international travel and agricultural imports, including changes in the (1) volume of passengers and cargo, (2) types of agricultural products, (3) countries of origin, and (4) ports of entry where passengers and cargo arrive in the United States. One action to mitigate this risk is the development and implementation of a staffing model to help determine appropriate staffing levels under these changing operating conditions.

APHIS developed a staffing model, prior to the transfer of AQI functions to CBP, to calculate the number of agriculture specialists necessary to staff the various ports according to workload. However, according to APHIS officials, the model was no longer useful because it had not considered the split of inspectors between the two agencies. Although APHIS updated the model in June 2004 at CBP's request, CBP still did not use this or any other staffing model when assigning the newly hired specialists to the ports. According to CBP officials, the agency did not use APHIS's model because it did not consider some key variables, such as the use of overtime by staff. CBP officials also told us the agency is planning to develop its own staffing model, but they were unable to provide us with planned milestones or a timeline for completion. Until such a risk-based model is developed and implemented, CBP does not know whether it has an appropriate number of agriculture specialists at each port.

An area of potential vulnerability that should be considered in staffing the ports is the experience level of agriculture specialists. More than one-third of CBP agriculture specialists have been hired since the transfer, most within the last year, and some ports have seen considerable turnover. For example, San Francisco lost 19 specialists after 2003 but gained only 14 new hires or transfers, leaving 24 vacancies as of the end of fiscal year 2005. APHIS officials expressed concern about the turnover of staff at some ports because many of the newly hired CBP agriculture specialists "will need time to get up to speed and do not possess the institutional knowledge related to agricultural issues that the more seasoned specialists had." One official added that the experience level of specialists is of particular concern at ports of entry staffed by only one or two agriculture specialists.

According to APHIS, data in its Work Accomplishment Data System (WADS) can help program managers evaluate the performance of the AQI program by indicating changes in a key measure—the frequency with which prohibited agricultural materials and reportable pests are found (intercepted) during inspection activities.
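The two rates used in the analysis that follows are simple ratios of these monthly WADS counts. The sketch below, using entirely hypothetical numbers for a single district, illustrates the before-and-after comparison presented in tables 1 and 2.

```python
# A minimal sketch of the rate comparison discussed below, using hypothetical
# monthly WADS-style counts for one district field office.
# Inspection rate = inspections / arrivals;
# interception rate = interceptions / inspections.
from statistics import mean

# (arrivals, inspections, interceptions) per month; illustrative numbers only
before_transfer = [(100_000, 8_000, 400), (120_000, 9_000, 420), (110_000, 8_500, 390)]
after_transfer = [(115_000, 9_500, 310), (125_000, 10_200, 300), (118_000, 9_800, 280)]

def average_rates(months):
    inspection_rate = mean(insp / arr for arr, insp, _ in months)
    interception_rate = mean(intc / insp for _, insp, intc in months)
    return inspection_rate, interception_rate

for label, months in (("before transfer", before_transfer), ("after transfer", after_transfer)):
    insp_rate, intc_rate = average_rates(months)
    print(f"{label}: inspection rate {insp_rate:.3f}, interception rate {intc_rate:.3f}")
# In this made-up example the district inspects a larger share of arrivals
# after the transfer but intercepts less per inspection, which is the pattern
# the analysis below identifies in the Tampa, El Paso, and Tucson districts.
```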
CBP agriculture specialists routinely record data in WADS for each port of entry, including monthly counts of (1) arrivals of passengers and cargo in the United States via airplane, ship, or vehicle; (2) agricultural inspections of arriving passengers and cargo; and (3) inspection outcomes (i.e., seizures or detections of prohibited (quarantined) agricultural materials and reportable pests). However, CBP has not used these data to evaluate the effectiveness of the AQI program. Our analysis of the data shows that average inspection and interception rates have changed significantly in some geographical regions of the United States, with rates increasing in some regions and decreasing in others (see tables 1 and 2).

Table 1 compares average inspection rates—the number of passenger and cargo inspections relative to the total number of arrivals in each CBP district field office—for the 42 months before and the 31 months after the transfer. Average inspection rates declined significantly in five district field offices (Baltimore, Boston, Miami, San Francisco, and "Preclearance" in Canada, the Caribbean, and Ireland), whereas rates increased significantly in seven other districts (Buffalo, El Paso, Laredo, San Diego, Seattle, Tampa, and Tucson). Similarly, table 2 compares average interception rates—the number of pest interceptions relative to the total number of passenger and cargo inspections in each CBP district field office—for the two periods. The average rate of interceptions decreased significantly at ports in six district field offices—El Paso, New Orleans, New York, San Juan, Tampa, and Tucson—while average interception rates increased significantly at ports in the Baltimore, Boston, Detroit, Portland, and Seattle districts.

Decreases in interception rates, in particular, may indicate that some CBP districts have been less effective at preventing the entry of prohibited materials since the transfer from APHIS to CBP. Of particular note are three districts that have experienced a significant increase in their inspection rates and a significant decrease in their interception rates since the transfer. Specifically, since the transfer, ports in the Tampa, El Paso, and Tucson districts appear to be more efficient at inspecting (e.g., inspecting a greater proportion of arriving passengers or cargo) but less effective at intercepting (e.g., intercepting fewer prohibited agricultural items per inspection). Also of concern are three districts—San Juan, New Orleans, and New York—that have been inspecting at about the same rate, but intercepting less, since the transfer. When we showed the results of our analysis to senior CBP officials, they were unable to explain these changes or to determine whether the current rates were appropriate relative to the risks, staffing levels, and staffing expertise associated with individual districts or ports of entry. These officials also noted that CBP has had problems interpreting APHIS data reports because, in some district offices, CBP lacks staff with expertise in agriculture and in APHIS's data systems. CBP is working on a plan to collect and analyze agriculture-related data in the system it currently uses for customs inspections, but the agency has yet to complete or implement the plan.

CBP and APHIS have an interagency agreement for sharing changes to APHIS's policy manuals and agriculture inspection alerts, which affect CBP's agricultural mission.
APHIS maintains several manuals on its Web site that are periodically updated for agriculture specialists' use. These manuals include directives about current inspection procedures as well as policies about which agricultural items from a particular country are currently permitted to enter the United States. When APHIS updates a manual, the agency sends advance notice to CBP headquarters personnel, but agriculture specialists in the ports frequently do not receive these notices. Before the transfer of agriculture specialists to CBP, APHIS e-mailed policy manual updates directly to specialists, according to a senior APHIS official. Since the transfer, however, CBP has not developed a list of all agriculture specialists' e-mail addresses. As a result, APHIS uses an "ad hoc e-mail list" to notify CBP agriculture specialists of policy manual updates. When an agriculture specialist or supervisor sends an e-mail to the APHIS official who maintains the contact list, that person's e-mail address is manually added to the list. The official also noted that he has added e-mail addresses sent in by former APHIS personnel who noticed that they were no longer receiving manual update notifications, as they had prior to the transfer to CBP. However, the official also stated that his list is not an official mailing list and does not represent all of the ports. CBP also could not tell us whether the list was accurate or complete.

Several agriculture specialists we spoke with indicated that they (1) frequently did not receive any notification from APHIS or CBP when manuals were updated, (2) received updates sporadically, or (3) were unsure whether they received all of the relevant updates. Moreover, based on our survey of agriculture specialists, we estimate that 20 percent of agriculture specialists do not regularly receive notices that the manuals have been updated. According to our survey, 50 percent of agriculture specialists always have access to the online manuals. However, according to specialists we spoke with, it is difficult to keep up with changes to the manuals without being notified which policies or procedures APHIS has updated. One inspector expressed dismay that specialists at the port to which he had recently transferred were unaware of new regulations for conducting inspections to safeguard against avian influenza. Agriculture specialists at a different port told us that they continue to refer to hard copies of APHIS's manuals, which APHIS has not updated since it stopped producing hard copies in 2003.

In addition, although CBP and APHIS have established a process to transmit inspection alerts down the CBP chain of command to agriculture specialists, many frontline specialists we surveyed or interviewed at the ports were not always receiving relevant agriculture alerts in a timely manner. They identified the time required for dissemination of agriculture alerts down the CBP chain of command as an issue of concern. Specifically, based on our survey, we estimate that only 21 percent of specialists always received these alerts in a timely manner. The level of information sharing also appears to be uneven between ports and between pathways at a port. For example, an agriculture specialist at one port told us that he received information directly from APHIS on pest movements and outbreaks.
An agriculture supervisor at a second port noted that information sharing had improved after port officials established a plant pest risk committee comprising local officials from APHIS, CBP, and other agencies. However, an agriculture specialist at a third port we visited told us that specialists there did not receive any information on pests from APHIS, while a second specialist at the same port expressed concern that alerts on disease outbreaks such as avian influenza arrive many days after the outbreaks are first reported.

With regard to coordination between CBP and APHIS, we found that APHIS officers responsible for tracing the pathways of prohibited agricultural items into the United States have experienced difficulty or delays in gaining access to some ports of entry. After the transfer, APHIS and CBP agreed to restrict APHIS officials' access to ports of entry to ensure a clear separation of responsibilities between the two agencies. Under the memorandum of agreement, CBP may grant or refuse APHIS personnel access to ports, but APHIS officials noted that the difficulties and delays in getting information from the ports have made some of APHIS's Smuggling Interdiction and Trade Compliance (SITC) activities difficult, if not impossible. Per the agreement, APHIS personnel—including SITC inspectors—are to make advance arrangements with local CBP port directors for access to agriculture inspection areas. CBP agreed to provide APHIS with a written response to any request for access to ports of entry but did not specify a time frame for this response.

Prior to the transfer, APHIS SITC inspectors regularly worked with APHIS agriculture inspectors to (1) trace the movement of prohibited agricultural items found in U.S. markets back to ports of entry (traceback), (2) identify parties responsible for importing prohibited items, and (3) determine which weaknesses in inspection procedures allowed the items to enter the United States. Currently, SITC inspectors are still responsible for tasks such as surveying local markets for prohibited agricultural products and gathering information to identify and intervene in the movement of smuggled agricultural commodities that could potentially harm U.S. agriculture. According to SITC officials, their inspectors' ability to gather timely information at ports of entry is extremely important to SITC's mission. They added that delays in special operations or port visits following the discovery of prohibited items make it much harder to trace the pathway of such items into the United States. Although SITC officials noted that their inspectors have received access to some ports to perform their duties, they added that CBP has delayed or denied access to SITC inspectors at other ports in both the eastern and western United States. The SITC officials stated that there have been incidents in which CBP did not respond to requests for access until months after APHIS made them. For example, in 2005, SITC requested permission to conduct two special operations at U.S. international airports to help determine whether passengers or cargo from certain countries posed a risk of importing or smuggling poultry products that could be infected with avian influenza. In justifying the operations, SITC wrote that "many illegal and possibly smuggled avian products have been seized" in several states surrounding the airports. In one case, CBP took 3 months to approve the request; however, SITC had already canceled the operation 2 months earlier because of CBP's lack of response.
CBP approved another special operation several months after SITC's request but later canceled it because SITC uniforms did not match CBP specialists' uniforms, according to senior SITC officials. They added that CBP's other reasons for delaying or canceling joint operations and visits included (1) inadequate numbers of CBP specialists to participate in operations, (2) scheduling conflicts involving CBP port management, and (3) concerns about SITC inspectors' lack of security clearances.

Agriculture canines are a key tool for targeting passengers and cargo for inspection, but we found that the program has deteriorated since the transfer. The number and proficiency of canine teams have decreased substantially over the last several years. Specifically, APHIS had approximately 140 canine teams nationwide at the time of the transfer, but CBP currently has approximately 80 such teams, about 20 percent of which are newly hired, according to agency officials. They added that, although CBP has authorized the hiring of 15 more agriculture canine teams, the positions remained vacant as of the end of 2005. According to APHIS, CBP has not been able to fill available APHIS agriculture canine training classes. After consulting with CBP, APHIS scheduled 7 agriculture canine specialist training classes in fiscal year 2005 but canceled 2 because CBP did not provide students. Similarly, in fiscal year 2006, APHIS scheduled 8 classes but, as of April, had canceled 3 for lack of students to train.

In 2005, 60 percent of the 43 agriculture canine teams tested failed the USDA proficiency test, and APHIS officials told us proficiency has declined since the transfer. These proficiency tests, administered by APHIS, require the canine to respond correctly in a controlled, simulated work environment and are meant to ensure that canines are working effectively to catch prohibited agricultural material. Potential reasons for the deterioration in proficiency scores include CBP's not following policy and procedures for the canine program and changes in the program's management structure. The policy manual for the canine program states that canines should (1) receive about 4 hours of training per week and (2) have minimal down time in order to maintain their effectiveness. In general, canine specialists we interviewed expressed concern that the proficiency of their canines was deteriorating for lack of working time—that is, the dogs were sidelined while the specialists were assigned to other duties. Furthermore, based on the results of our survey, we estimate that 46 percent of canine specialists were directed to perform duties outside their primary canine duties several times a week or every day. Additionally, an estimated 65 percent of canine specialists sometimes or never had funding for training supplies.

Another major change to the canine program following the transfer was CBP's elimination of all former APHIS canine management positions. In some cases, agriculture canine teams now report to supervisory agriculture specialists, who may not have any canine experience. Formerly, canine teams reported to both in-port management and regional canine program coordinators, who were experienced canine managers. The program coordinators monitored the canine teams' proficiency and ensured that teams maintained acceptable performance levels. According to CBP, the agency is considering developing a new management structure to improve the effectiveness of its canine program.
However, little progress has been made to date.

The law authorizes user fees to cover the costs of the AQI program, but in the 3 years since the transfer, user fees have not been sufficient to cover those costs. CBP believes that unless the current user-fee rates are increased, the program will continue to face annual shortfalls, to the detriment of the AQI program. In addition, CBP underwent a financial management system conversion for fiscal year 2005 and was unable to provide APHIS with the actual cost information needed to evaluate the extent to which individual user fees cover program costs. Furthermore, APHIS did not always make regular transfers of funds to CBP as it had agreed to do, causing CBP to use other funding sources or to reduce spending.

The Secretary of Agriculture has the discretion to prescribe user fees that cover the costs of the AQI program, but program costs have exceeded user-fee collections since the transfer of AQI inspection activities to CBP. Following the events of September 11, 2001, a sharp drop in the number of international airline passengers entering the United States caused a drop in AQI revenue (approximately 80 percent of total AQI user-fee collections come from fees on international airline passengers). Despite the drop in revenue, APHIS had to increase AQI inspection activities because of post-September 11 concerns about the threat of bioterrorism. According to USDA, agriculture specialists began inspecting a greater volume of cargo entering the United States and a greater variety of types of cargo than they had in prior years. Such operations are personnel-intensive and, therefore, costly. Consequently, when the transfer occurred in fiscal year 2003, AQI program costs exceeded revenues by almost $50 million. The shortfall increased to almost $100 million in the first full fiscal year after the transfer. Table 3 provides AQI user-fee collections and program costs for fiscal years 2001 through 2005.

For fiscal years 2004 and 2005, the 2 full fiscal years since the transfer, total AQI costs exceeded user-fee collections by more than $125 million. Consequently, in fiscal years 2004 and 2005, APHIS used AQI user-fee collections from previous years, and CBP used another available appropriation, to cover AQI costs. In October 2004, APHIS's Associate Deputy Administrator of Plant Protection and Quarantine wrote to the Executive Director of CBP's Office of Budget, noting, "We are in dire need of generating increased revenue for the AQI program; without an increase, the AQI account could run out of money on or about July 19, 2005." The letter also discussed a three-phase approach to ensuring fiscal solvency for the AQI program. The first phase consisted of establishing increased interim user-fee rates to cover the costs of pay raises and inflation. The second phase involved removing the exemption from paying AQI user fees granted to passengers, cargo, and commercial vehicles at ports of entry along the U.S.-Canada border. The third phase included identifying all current and future needs of the AQI program, not just pay raises and inflation, to ensure that user fees fully cover AQI program costs. APHIS estimated that it would take up to 2 years to complete the entire Federal Register process and make new phase-three fees effective. On December 9, 2004, APHIS proceeded with the first phase by publishing an interim rule to raise user fees, effective January 1, 2005, through 2010.
However, because of the method APHIS used to estimate AQI program costs, this phase-one increase in user-fee revenues is not likely to be enough to cover program costs through fiscal year 2010. Specifically, APHIS set the fiscal year 2005 user fees using estimated fiscal year 2004 program costs—$327 million—plus 1.5 percent of those costs (about $4.9 million) for pay raises and inflation. However, APHIS's base calculation used CBP's estimated share of fiscal year 2004 user-fee funds—totaling $194 million—rather than CBP's actual reported costs for fiscal year 2004—totaling $222.5 million. Thus, the $28.5 million difference between CBP's actual and estimated costs was not included in the base calculation, resulting in less revenue for the program. CBP subsequently acknowledged that APHIS's decision not to include CBP's actual fiscal year 2004 costs in the user-fee increase "has put CBP in the position where incoming APHIS user-fee revenues fall short of the expected cost of operating the program." CBP finance officials also told us that because the cost of performing AQI activities was approximately $222 million in each of the previous 2 years, the projected $211 million to be transferred to CBP for fiscal year 2006 is unlikely to be sufficient to cover program costs for fiscal year 2006 and beyond.

Despite the shortfall between user-fee collections and program costs, APHIS has not completed the second or third phase of its proposal. As of May 2006, the Secretary of Agriculture had not decided whether to proceed with the proposal to lift the Canadian exemption. CBP officials told us that unless the Canadian exemption is lifted, the agency cannot hire the more than 200 additional agriculture specialists that it has determined are needed to perform additional inspections on the northern border. APHIS officials told us that because lifting the Canadian exemption will affect the estimates of future revenue used in calculating new user-fee rates, APHIS and CBP have not begun the third phase of revising user fees, which APHIS estimates will take approximately 2 years.

CBP is required by the interagency agreement to establish a process in its financial management system to report expenditures by each AQI fee type, such as those paid by international passengers and commercial aircraft. APHIS uses this information to set user-fee rates and to audit user-fee collections. Although CBP provided detailed cost information by activity and user-fee type to APHIS for fiscal year 2004, CBP provided only estimated cost information for fiscal year 2005 because of a weakness in the design of the agency's new financial management system. In November 2005, CBP conducted an internal review and determined that its reported costs of almost $208 million did not include about $15 million in additional salary costs for CBP agriculture supervisors. CBP officials told us that these costs were not included largely because the agency adopted a new financial management system in fiscal year 2005 that allowed agriculture supervisors to record their time spent on AQI activities in a joint account combining customs, immigration, and agricultural quarantine inspection activities. Thus, the costs related only to agricultural activities could not be segregated. A senior CBP finance official told us that CBP's Office of Finance could have provided rough estimates of costs by activity to APHIS but chose not to do so because the office did not want to combine actual and estimated costs in the same document.
Instead, CBP provided estimates of cost by user-fee type in January 2006. CBP did provide APHIS with the required accounting of obligations incurred by program office (e.g., Office of Training and Development, Applied Technology Division, Office of Asset Management, and Office of Chief Counsel) and budget code (e.g., salary, overtime, and office supplies) for fiscal year 2005. However, a senior APHIS budget official told us that this cost information was not helpful for reviewing the user-fee rates, because APHIS needed a breakdown of actual costs by user-fee type and could not determine whether the reported costs were accurate. Until CBP's financial management system can provide actual costs by activity and AQI user-fee type, APHIS will not be able to accurately determine the extent to which the user fees need to be revised. In addition, without such information, APHIS does not know whether inspections of international airline passengers and commercial aircraft, vessels, trucks, and railroad cars are being funded by revenue from the appropriate user fee.

Although many of the AQI functions were transferred to CBP when the Department of Homeland Security was formed, APHIS continues to collect most user fees and transfers a portion of the collections to CBP on a periodic basis. For fiscal years 2004 and 2005, these transfers were often delayed, and their amounts were sometimes less than expected, which adversely affected CBP agricultural inspection activities. In February 2004, USDA and DHS agreed that APHIS would transfer one-fourth of the annual amount of estimated user-fee collections to CBP at the beginning of each quarter or, if the balance in the account was not sufficient to transfer the full quarterly amount in advance, could make monthly transfers. APHIS officials told us, however, that the agency chose to transfer funds to CBP every other month because the AQI account would not always have had sufficient funds to make quarterly transfers, and monthly transfers would have been administratively burdensome. Nevertheless, as table 4 shows, CBP frequently did not receive the transfers at the time specified or in the agreed-upon amount in fiscal years 2004 and 2005. Consequently, according to CBP officials, the agency's finance office had to use funding sources that it had planned to use for other purposes. In addition, CBP officials told us some ports had to reduce spending on supplies needed for inspection activities or delay hiring personnel or purchasing equipment. Then, for the last transfer of the fiscal year, APHIS did not notify CBP until August 2005 that the transfer would total $43.9 million, about $11 million more than expected (see table 4). As a result, CBP's budget plans had to be revised late in the year to accommodate this additional funding.

In addition, technical difficulties in the fund transfer process also delayed the transfer of funds to CBP, and at one point during fiscal year 2004, CBP went more than 6 months without available funding from user fees. In this instance, APHIS transferred $88.5 million from October 2003 to February 2004 into a DHS Treasury account that had been used for fiscal year 2003 transfers. However, APHIS officials told us that the Office of Management and Budget had established a new Treasury account for CBP, and CBP officials had not advised APHIS of the change.
Ultimately, APHIS withdrew the funds from the original account and transferred them as part of the April 2004 transfer, which totaled $118 million, but it took longer than 5 months to resolve the issue. Similarly, two other fund transfers were delayed in fiscal year 2005 because APHIS did not comply with a Treasury rule requiring that agencies cite the relevant statutory authority when submitting a request to transfer funds to another agency. In one instance, APHIS ultimately transferred $65.6 million to CBP in February 2005, rather than transferring one payment of $32.8 million in January 2005 and another payment of $32.8 million in February 2005.

In October 2005, APHIS and CBP revised their agreement outlining the process the agencies would follow for transferring user fees and for financial reporting on the use of those funds. Under the revised agreement, APHIS, beginning in November 2005, is to make 6 bimonthly transfers to CBP in fiscal year 2006 totaling $211.1 million. Figure 3 illustrates the process APHIS uses to collect user fees and transfer funds to CBP for fiscal year 2006. As shown in figure 3, APHIS was to transfer $35,186,667 to CBP in November 2005. However, contrary to the new agreement, APHIS transferred $35,166,667—$20,000 less than CBP expected—on November 30, 2005. When we asked about the discrepancy, CBP officials agreed to investigate it and found that staff were working to correct the problem. APHIS officials told us that their budget office had used a rounded annual amount of $211 million to distribute the payments; one-sixth of the rounded $211 million is about $35,166,667, which is $20,000 less than the $35,186,667 called for in the distribution schedule. APHIS officials told us that the budget office did not have a copy of the current distribution schedule from the revised agreement and did not know the exact amount of the required payment. They also stated that the budget office now has the agreement and will make the proper bimonthly transfers going forward. According to APHIS officials, the January 2006 transfer included an additional $20,000 to address the discrepancy we identified with the November transfer. APHIS and CBP believe that the revised agreement, which also provides for quarterly face-to-face meetings between the agencies, should improve communication, ensure transparency in the transfer process, and prevent future problems in the transfer of funds.

The global marketplace and increased imports of agricultural products and arrivals of international travelers in the United States have increased the number of pathways for the movement and introduction of foreign, invasive agricultural pests and diseases, such as avian influenza and foot-and-mouth disease. Maintaining the effectiveness of federal programs to prevent the accidental or deliberate introduction of potentially destructive organisms is critical, given the importance of agriculture to the U.S. economy. Accordingly, effective management of AQI programs is necessary to ensure that agriculture issues receive appropriate attention in the context of CBP's overall missions of detecting and preventing terrorists and terrorist weapons from entering the United States and facilitating the orderly and efficient flow of legitimate trade and travel.
Although the transfer of agricultural quarantine inspections from USDA's APHIS to DHS's CBP has resulted in some improvements as a result of the integration of agriculture issues into CBP's overall antiterrorism mission, significant coordination and management issues remain that leave U.S. agriculture vulnerable to the threat of foreign pests and disease. Because the Homeland Security Act of 2002 divided AQI responsibilities between USDA and DHS, the two departments must work more closely to address key coordination weaknesses, including enhancing communication between APHIS's AQI policy experts and CBP's agriculture specialists in the field so that critical inspection information reaches these frontline inspectors; reviewing policies and procedures for the agriculture canine program to improve the effectiveness of this key inspection tool; and revising AQI user fees. Furthermore, both departments must address key management weaknesses in their respective areas of responsibility. Specifically, in light of the AQI program's expanded mission, DHS needs to develop and adopt meaningful performance measures to assess the AQI program's effectiveness at intercepting prohibited agricultural materials; implement a national risk-based staffing model to ensure that adequate numbers of agriculture specialists are assigned to the areas of greatest vulnerability; and review its financial management systems to ensure financial accountability for funds allocated to the AQI program. It is also important that user fees be adjusted to meet the program's costs, as authorized (but not required) by law. Without decisive action, APHIS and CBP could be forced to cut back on agriculture inspections if costs continue to exceed program revenues. Such cutbacks could increase the potential for animal and plant pests and diseases to enter the United States and could disrupt trade if agriculture specialists were not available to inspect and clear passengers and cargo on a timely basis. By overcoming these challenges, the United States would be in a better position to protect agriculture from the economic harm posed by foreign pests and disease.

To ensure the effectiveness of CBP and APHIS agricultural quarantine inspection programs designed to protect U.S. agriculture from the accidental or deliberate introduction of foreign pests and disease, we are making the following seven recommendations.

We recommend that the Secretaries of Homeland Security and Agriculture work together to

- adopt meaningful performance measures for assessing the AQI program's effectiveness at intercepting foreign pests and disease on agricultural materials entering the country by all pathways—including commercial aircraft, vessels, and truck cargo—and posing a risk to U.S. agriculture;

- establish a process to identify and assess the major risks posed by foreign pests and disease and develop and implement a national staffing model to ensure that agriculture staffing levels at each port are sufficient to meet those risks;

- ensure that urgent agriculture alerts and other information essential to safeguarding U.S. agriculture are more effectively shared between the departments and transmitted to DHS agriculture specialists in the ports;

- improve the effectiveness of the agriculture canine program by reviewing policies and procedures regarding the training and staffing of agriculture canines and ensure that these policies and procedures are followed in the ports; and

- revise the user fees to ensure that they cover the AQI program's costs.
We recommend that the Secretary of Homeland Security undertake a full review of its financial management systems, policies, and procedures for the AQI program to ensure financial accountability for funds allocated for agricultural quarantine inspections. We recommend that the Secretary of Agriculture take steps to assess and remove barriers to the timely and accurate transfer of AQI user fees to DHS. We provided USDA and DHS with a draft of this report for their review and comment. We received written comments on the report and its recommendations from both departments. USDA commented that the report accurately captures some of the key operational challenges facing the two departments as they work to protect U.S. agriculture from unintentional and deliberate introduction of foreign agricultural pests and diseases. USDA generally agreed with the report’s recommendations, adding that APHIS has already made some improvements to address our recommendations. For example, the department reported that APHIS has made improvements in the transfer of funds to CBP as a result of revisions to the interagency agreement with CBP. We had noted these changes in the report. In addition, USDA offered to work with DHS on our recommendations that DHS (1) adopt meaningful performance measures to assess AQI program’s effectiveness and (2) establish a process to identify and assess the major risks posed by foreign pests and disease and develop and implement a national staffing model to address those risks. We modified the recommendations to involve USDA accordingly. USDA’s written comments and our detailed response appear in appendix III. USDA also provided technical comments that we incorporated, as appropriate, throughout the report. DHS commented that the report was balanced and accurate and agreed with its overall substance and findings. DHS generally agreed with our recommendations and indicated that CBP has begun the process of implementing, or has implemented parts of, our recommendations. For example, as we note in the report, CBP has begun the process of creating new performance measures for assessing the AQI program’s effectiveness. DHS stated that the new measures are scheduled to be in place by the beginning of fiscal year 2007. Also, DHS commented that CBP has developed a prototype staffing model methodology that it intends to develop into a final model to monitor and track the evolving staffing needs and priorities of the agency. With regard to our recommendation that DHS review its financial management systems to ensure accountability for AQI funds, DHS stated that it believes actions taken over the course of our review have addressed our concerns. We continue to believe that DHS needs to monitor outcomes of these recent changes during the coming fiscal year to ensure that they provide necessary accountability for the use of AQI funds. DHS’s written comments and our detailed response appear in appendix IV. DHS also provided technical comments that we incorporated, as appropriate, throughout the report. We are sending copies of this report to the Secretaries of Homeland Security and Agriculture and interested congressional committees. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

To assess the extent to which the U.S. Department of Agriculture (USDA) and the Department of Homeland Security (DHS) have changed the agricultural quarantine inspection (AQI) program since the transfer of responsibilities from USDA to DHS, we reviewed the Memorandum of Agreement between the United States Department of Homeland Security and the United States Department of Agriculture, dated February 28, 2003, and the associated appendixes governing how USDA and DHS are to coordinate inspection responsibilities. We also reviewed agency documentation, including training materials for newly hired Customs and Border Protection (CBP) officers, information on databases used by CBP agriculture specialists to target agriculture inspections, joint-agency reports on port compliance with agricultural inspection policy, and information related to CBP's establishment and use of new agriculture liaison positions. In addition, we interviewed key program officials at USDA's Animal and Plant Health Inspection Service (APHIS) and CBP to discuss changes to the AQI program, including officials responsible for training, implementing inspection targeting initiatives, conducting port reviews, and overseeing communication of agricultural issues within CBP.

To assess how the departments have managed and coordinated their agriculture inspection responsibilities, we reviewed the interagency memorandum of agreement between DHS and USDA and its associated appendixes. We also reviewed agency documentation, including DHS's Performance and Accountability Reports, APHIS's model for staffing agriculture specialists at ports, data from APHIS's Work Accomplishment Data System for fiscal years 2000 through 2005, agency e-mails communicating agriculture alerts and policy information, proposals for joint-agency special operations at ports, and agency policy governing agriculture inspection training and the agriculture canine program. We performed a reliability assessment of the data we analyzed for fiscal years 2000 through 2005 and determined that the data were sufficiently reliable for the purposes of this report. We also visited all three training centers for port of entry staff that conduct agricultural training—the USDA Professional Development Center in Frederick, Maryland; the USDA National Detector Dog Training Center in Orlando, Florida; and the Federal Law Enforcement Training Center (FLETC) in Glynco, Georgia—to observe training and interview current students, instructors, and staff. In addition, we interviewed key program officials at CBP and APHIS with knowledge of AQI management issues, such as performance measures, staffing, interagency coordination, training, and the agriculture canine program.

Furthermore, to ascertain agriculture specialists' assessments of the AQI program since the transfer of inspection responsibilities from USDA to DHS, we drew a stratified random probability sample of 831 agriculture specialists from the approximately 1,800 specialists (current as of Oct. 14, 2005) in DHS's Customs and Border Protection. All canine specialists were placed in one stratum; other strata were defined by the number of specialists at the respective ports. We conducted a Web-based survey of all specialists in the sample.
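The following sketch illustrates the stratified design and the weighting described in the next paragraph. The individual stratum counts are hypothetical; only the approximate totals (about 1,800 specialists in the population and 831 in the sample) come from this report.

```python
# Sketch of the stratified sample design: canine specialists form one stratum,
# and the remaining specialists are grouped by port size. Stratum counts below
# are hypothetical; only the totals (about 1,800 in the population, 831
# sampled) come from this report.
import random

population_sizes = {"canine": 80, "small ports": 420, "medium ports": 600, "large ports": 700}
sample_sizes = {"canine": 80, "small ports": 180, "medium ports": 260, "large ports": 311}

sample = {}
weights = {}
for stratum, n_pop in population_sizes.items():
    n_samp = sample_sizes[stratum]
    sample[stratum] = random.sample(range(n_pop), n_samp)  # simple random draw within the stratum
    weights[stratum] = n_pop / n_samp  # each respondent stands in for this many specialists

print(weights)
# Weighted totals reproduce the population: the sum over strata of
# (weight x sample size) equals 1,800.
```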
Each sampled specialist was subsequently weighted in the analysis to account statistically for all specialists in the population. We received a response rate of 76 percent. We chose to sample agriculture specialists who had recently been hired by CBP, as well as former APHIS employees who had been transferred to CBP, including agriculture supervisors, to get their various perspectives on the AQI program. The survey contained 31 questions that asked for opinions and assessments of (1) agriculture inspection training, (2) agriculture inspection duties, (3) communication and information sharing within CBP and between other agencies, and (4) changes in the number of agriculture inspections and interceptions since the transfer. In addition, the survey included questions specifically for canine handlers, agriculture supervisors, and former APHIS employees. In developing the questionnaire, we met with CBP and APHIS officials to gain a thorough understanding of the AQI program. We also shared a draft copy of the questionnaire with CBP officials, who provided us with comments, including technical corrections. We then pretested the questionnaire with CBP agriculture specialists at ports of entry in Georgia, Maryland, Texas, and Washington state. During these pretests, we asked the officials to complete the Web-based survey as we observed the process. After completing the survey, we interviewed the respondents to ensure that (1) questions were clear and unambiguous, (2) the terms we used were precise, (3) the questionnaire did not place an undue burden on CBP agriculture specialists completing it, and (4) the questionnaire was independent and unbiased. On the basis of the feedback from the pretests, we modified the questions, as appropriate. The questionnaire was posted on GAO’s survey Web site. When the survey was activated, the officials who had been selected to participate were informed of its availability with an e-mail message that contained a unique user name and password. This allowed respondents to log on and fill out a questionnaire but did not allow respondents access to the questionnaires of others. The survey was available from November 17, 2005, until January 9, 2006. Results of the survey to CBP agriculture specialists are summarized in appendix II. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as 95 percent confidence intervals (e.g., plus or minus 7 percentage points). These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report from our survey of agriculture specialists will include the true values in the study population. All percentage estimates from the survey of agriculture specialists have margins of error (that is, confidence interval widths) of plus or minus 10 percentage points or less, unless otherwise noted. All numerical estimates other than percentages (e.g., means) have margins of error not exceeding plus or minus 15 percent of the value of those estimates, unless otherwise noted. 
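For readers who want the mechanics, the sketch below computes a 95 percent confidence interval for an estimated proportion using the standard normal approximation. It deliberately ignores the stratification, weighting, and finite-population corrections that a full survey analysis would apply, so it only approximates the intervals reported here.

```python
# Normal-approximation 95 percent confidence interval for an estimated
# proportion. This simplified version omits the stratification, weighting, and
# finite-population corrections that a full survey analysis would apply.
import math

def ci_95(p_hat: float, n: int) -> tuple[float, float]:
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # half-width of the interval
    return p_hat - margin, p_hat + margin

low, high = ci_95(0.75, 628)  # e.g., an estimate of 75 percent from 628 respondents
print(f"95% CI: {low:.3f} to {high:.3f}")  # about plus or minus 3.4 percentage points
```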
To determine how funding for agriculture inspections has been managed since the transfer from USDA to DHS, we reviewed the interagency memorandum of agreement between DHS and USDA—specifically the appendix, Article 5: Transfer of Funds, originally signed on February 9, 2004, and revised on October 5, 2005. Further, we compared the amount of revenue generated from the user fees with program costs reported by CBP and APHIS on agriculture inspections in fiscal years 2001 through 2005. We also reviewed relevant agency documentation, including proposals for increasing user-fee collections, budget classification handbooks, cost analysis worksheets, and user-fee collection and expense analyses. In addition, we reviewed how funds were transferred between APHIS and CBP and the impact of these transfers on their operations. Lastly, we interviewed senior CBP and APHIS financial management officials concerning AQI user-fee collections, cost management, and the transfer of funds from APHIS to CBP. We conducted our review from April 2005 through March 2006 in accordance with generally accepted government auditing standards.

This appendix provides the results from our Web-based survey of CBP agriculture specialists. (App. I contains details of our survey methodology.) We selected a statistical sample of 831 specialists. Within this population, we asked questions of, and analyzed data for, three groups: (1) former APHIS inspectors—also referred to as plant protection and quarantine (PPQ) officers in the survey; (2) newly hired CBP agriculture specialists; and (3) canine agriculture specialists. The survey contained 31 questions about the experiences and opinions of the specialists. We omitted questions 3 and 23, which were used to help respondents navigate the survey. We received 628 completed surveys—an overall response rate of 76 percent. We indicate the number of respondents below each question because not every respondent answered every question. We also rounded the responses to the nearest whole percent, and, therefore, totals may not add to 100 percent.

Part 1: Demographic Information

1. What is your job title at CBP?

2. For which of the following pathways did you conduct agricultural inspections during the past year? (Please check all that apply.) n=624

Part 2: Information from Former USDA PPQ Officers

4. When did you begin working as a USDA PPQ Officer (not as an agriculture technician or aide)?

5. During your first year working as a USDA PPQ Officer, about how many weeks did you spend in on-the-job (in port) training? (Please include such things as shadowing, observation, and coaching. Do not include time spent at the Professional Development Center. If you spent less than one week, please enter 1.)

6. Do you believe you received sufficient training (on-the-job and at the Professional Development Center) to enable you to perform your agriculture inspection duties? n=448

7. Are you, personally, doing more, about the same number, or fewer agriculture inspections compared to what you were doing before being transferred to CBP?

8. Are you, personally, doing more, about the same number, or fewer agriculture interceptions compared to what you were doing before being transferred to CBP? n=448

9. Which of the following inspection activities did you regularly perform as a PPQ Officer prior to being transferred to CBP, and which do you regularly perform now? (Please check all that apply.)
Part 3: Information from CBP Agriculture Specialists Hired After March 1, 2003

10. When did you begin working as a CBP Agriculture Specialist? n=173

11. During your first year working as a CBP Agriculture Specialist, about how many weeks did you spend in on-the-job (in port) training? (Please include such things as shadowing, observation, and coaching. Do not include time spent at the Professional Development Center. If you spent less than one week, please enter 1.) Responses reported as mean number of weeks, separately for specialists who began in 2003 or 2004 and those who began in 2005.

12. Do you believe you received sufficient training (on-the-job and at the Professional Development Center) to enable you to perform your agriculture inspection duties? n=174

Part 4: Your Work at CBP

13. During the past 6 months, about what percentage of your time did you spend on agriculture and nonagriculture CBP duties? (Please enter percentages in boxes. If none, enter 0. Percentage total should be 100.) Categories: agriculture inspections and associated activities; customs and immigration inspections and associated activities; work not related to inspections (e.g., administrative work, training).

14. For each pay period, do you provide the number of hours you worked on agriculture inspection and the number of hours you worked on customs and immigration inspection to your supervisor or timekeeper? n=626

15. Are the following supplies readily available to you? (Please check one in each row. If you do not use a supply, please check 'Do not Use.')

16. Do you have easy access to USDA regulatory manuals during inspections? (Please check one in each row.) n=626

17. Do you have enough time to look for pests in agriculture materials intercepted from passengers?

18. How easy or difficult is it for you to get samples to a pest identifier? n=626

19. When you send a sample to a pest identifier, about how long does it usually take to get the results?

20. How are the following types of information delivered to you? If you do not receive a type of information on a regular basis, please indicate that. (Please check all that apply.)

21. Is the information delivered to you in a timely manner? (Please check one in each row.)

22. During the past year, about how many hours per month did you spend compiling and entering data into the databases listed below? (Please check one in each row.) n=626

Part 5: Questions for Canine Handlers

24. Are the following resources readily available to you? (Please check one in each row.)

25. During the past year, have there been any instances when you thought it would be helpful to contact the National Detector Dog Training Center, but you were told by CBP management not to contact them? n=62

26. Does CBP management allow you enough time each month to schedule training with your dog?

27. During the past year, about how frequently have you been directed to perform duties outside your primary mission as a Canine Handler? n=62

Part 6: Your Views and Opinions about Working at CBP

28. Based on your own experiences, how would you describe the work-related communication between Agriculture Specialists and the others listed below? (Please check one in each row.)

1. We agree with USDA's suggestion regarding two of our recommendations.
We now recommend that the Secretaries of Agriculture and Homeland Security work together to (1) adopt meaningful performance measures for assessing the AQI program's effectiveness at intercepting foreign pests and disease and (2) establish a process to identify and assess the major risks posed by foreign pests and disease and develop and implement a national staffing model to meet those risks.

2. USDA noted that revisions to APHIS's agreement with CBP should address the concerns we raised in the report regarding the timely and accurate transfer of AQI funds to CBP. USDA states that APHIS made the first three transfers of fiscal year 2006 on time. We discuss these positive steps in our report and note a problem with one of APHIS's transfers. As USDA carries out its three-phase approach to revising user fees, and DHS works to advance proposed consolidation of customs, immigration, and agriculture user fees (see app. IV), we believe that USDA must ensure that it follows the revised agreement to ensure timely and accurate transfer of AQI user fees to DHS.

1. We continue to believe that the title of the report reflects our conclusion that U.S. agriculture is vulnerable to the unintentional or deliberate introduction of foreign pests and diseases as a result of the management and coordination issues we raise in the report. Until DHS adopts and tracks meaningful performance measures for the AQI program, it cannot know how effectively the program is performing its mission. Further, until DHS implements a national risk-based staffing model for agriculture specialists, it does not know whether adequate numbers of agriculture specialists are staffed to the ports of entry most vulnerable to the introduction of foreign pests and disease.

2. We acknowledge, in the report, the steps that CBP has taken to improve communication and information sharing between headquarters and field offices. However, given the problems with information sharing that we identified in our survey of agriculture specialists, we continue to believe that additional actions are warranted to ensure that urgent agriculture alerts and other information are transmitted through the CBP chain of command to the agriculture specialists.

3. We acknowledge the operational challenges facing CBP as a result of having to manage three different sets of user fees (i.e., agriculture, customs, and immigration) to support inspection functions at U.S. ports of entry. For example, we identified in the report some of the timekeeping issues surrounding the need to appropriately separate time spent on agriculture, customs, and immigration functions. We understand that CBP concluded that to adequately address these challenges, congressional action may be required to consolidate the different user fees and their associated spending, fee-setting, and cost recovery authorities and exemptions.

In addition to the contact named above, Maria Cristina Gobin (Assistant Director), Terrance N. Horner Jr., Jeff Isaacs, Lynn Musser, Omari Norman, Minette Richardson, Steve Rossman, Sidney Schwartz, Robyn Trotter, and Diana Zinkl made key contributions to this report. Other contributors included Nancy Crothers, Casey Keplinger, and Kim Raheb.
U.S. agriculture generates over $1 trillion in annual economic activity, but concerns exist about the sector's vulnerability to a natural or deliberate introduction of foreign livestock, poultry, and crop pests and disease. Under the Agricultural Quarantine Inspection (AQI) program, international passengers and cargo are inspected at U.S. ports of entry to seize prohibited material and intercept foreign agricultural pests. The Homeland Security Act of 2002 transferred AQI inspections from the U.S. Department of Agriculture (USDA) to the Department of Homeland Security (DHS) and left certain other AQI responsibilities at USDA. GAO examined (1) the extent to which USDA and DHS have changed the inspection program since the transfer, (2) how the agencies have managed and coordinated their responsibilities, and (3) how funding for agricultural inspections has been managed since the transfer.

After the terrorist attacks of September 11, 2001, federal agencies' roles and responsibilities were modified to help protect agriculture. In March 2003, more than 1,800 agriculture specialists within USDA's Animal and Plant Health Inspection Service (APHIS) became DHS Customs and Border Protection (CBP) employees, while USDA retained responsibility for AQI activities such as setting inspection policy, providing training, and collecting user fees. Since the transfer, the agencies have expanded training on agriculture issues for CBP officers and agriculture specialists. CBP and APHIS also have taken steps to enable agriculture specialists to better target shipments and passengers for inspections and established a process to assess how CBP agriculture specialists are implementing AQI policy. Finally, CBP created a new agriculture liaison position in each of its district field offices to advise regional directors on agricultural issues.

While these are positive steps, the agencies face management and coordination problems that increase the vulnerability of U.S. agriculture to foreign pests and disease. CBP has not developed sufficient performance measures that take into account the agency's expanded mission or consider all pathways by which prohibited agricultural items or foreign pests may enter the country. Specifically, although CBP's measures focus on two pathways that pose a risk to U.S. agriculture, they do not consider other key pathways such as commercial aircraft, vessels, and truck cargo. Also, although CBP has hired more than 630 specialists since the transfer, it has not yet developed or used a risk-based staffing model to ensure that adequate numbers of agriculture specialists are staffed to areas of greatest vulnerability. CBP also has not used available inspection and interception data to evaluate the performance of the AQI program. CBP and APHIS also continue to experience difficulty in sharing information such as key policy changes and urgent inspection alerts, and CBP has allowed the number and proficiency of agriculture canine units to decline.

Although APHIS is legally authorized (though not required) to charge AQI user fees to cover program costs, we found that the agencies have not taken the necessary steps to ensure that user fees cover AQI costs. Consequently, the agencies had to use other authorized funding sources to pay for the program. Also, because of weaknesses in the design of CBP's new financial management system, CBP was unable to provide APHIS with information on the actual costs of the AQI program by user-fee type—for example, fees paid by international air passengers.
APHIS uses this information to set future user-fee rates. Finally, in fiscal years 2004 and 2005, APHIS did not transfer AQI funds to CBP as agreed to by both agencies, causing some ports of entry to reduce spending on inspection activities in fiscal year 2005.
Best management practices refer to the processes, practices, and systems identified in public and private organizations that performed exceptionally well and are widely recognized as improving an organization's performance and efficiency in specific areas. Successfully identifying and applying best practices can reduce business expenses and improve organizational efficiency. Best practices we have identified in our work resulting in recommendations to the defense community include: (1) relying on established commercial networks to manage, store, and directly deliver defense electronic items more efficiently; (2) using private sector food distributors to supply food to the military community faster and cheaper; and (3) adopting the use of supplier parks to reduce maintenance and repair inventories.

Most of the Defense Management and NASA's (DMN) best practices reports have focused on using best management practices to improve a specific Department of Defense (DOD) process. DMN has also reported on management concepts that are important in successfully implementing best management practices throughout an organization, such as reporting on techniques companies use to achieve and manage change. See appendix I for a list of the reports related to the use of best management practices and additional information on each report's findings.

DMN initially chose to look at applying best management practices in the area of supply management, because DOD's supply system has been an area with long-standing problems in which proposed solutions seldom corrected the conditions identified. Also, DOD's supply management is a large budget item, so the potential for large dollar savings was present. DMN believed that comparing DOD's supply management practices to those that had a proven track record in the private sector would provide a picture of what improvements were possible and indicate proven strategies. A GAO consultants' panel, consisting of retired DOD officials and logistics business and academic experts, agreed that looking at private sector practices would help us find ways to improve DOD operations, because many private sector companies had made fundamental improvements in logistics management.

DMN's best practices work can result in radical changes in certain DOD processes, as well as substantial dollar savings. Since 1992, as a direct result of our recommendations, the Defense Logistics Agency (DLA) has taken steps to have private sector vendors supply pharmaceutical products, medical supplies, food, and clothing and textiles directly to military facilities in lieu of the traditional military supply system. As a result, by 1997, DLA expects a 53-percent reduction in its 1992 inventory level of these items. With fewer days' worth of supplies on hand, DLA depot overhead costs will also decline. Other examples of results of best management practices reviews are shown in figure 1.

Why Use the Best Management Practices Approach in Evaluations?

Deciding whether to use a best practices approach involves considering a number of factors. Our experience shows that the following questions can serve as a guide in making the decision. Have GAO and others reported on the acknowledged problem areas before, and to what extent have there been attempts to make the process work as designed? In our case, GAO had reported on DOD's inventory problems for over 30 years, and DOD had generally agreed with our observations and had often taken steps to improve the process.
However, improvements were incremental at best and failed to achieve significant gains in effectiveness or dollar savings. Is there a process with similar requirements that can be compared to the one being examined but is implemented in a way that provides significantly better results? For example, military and private hospitals both depend on timely and accurate delivery of supplies. Do the areas being considered have an established counterpart in the private or public sector that will provide evidence of the benefits of a new process? For example, we compared the way DOD procures, stores, and delivers food to base dining halls to the way institutional food users in the private and public sector obtain food. Other areas looked at, such as medical, clothing, and spare parts inventories, also allowed us to make comparisons with processes with similar objectives in the private and/or public sector.

A best practices review can be applied to a variety of processes, such as payroll, travel administration, employee training, accounting and budgeting systems, procurement, transportation, maintenance services, repair services, and distribution. You may consider looking at an area where the agency has already begun to implement some best management practices, but with limited success. Additional work in the area may provide a crucial boost to an agency's efforts. Looking at current industry trends in contracting out business functions (also referred to as "outsourcing") can also suggest areas that could benefit from a best practices review. For example, private sector companies are beginning to outsource logistics functions, primarily transportation and distribution, and data processing functions.

When Is a Best Practices Approach Appropriate?

The decision to use a best practices review should be made in a larger context that considers the strategic objectives of the organization and then looks at the processes and operating units that contribute to those objectives. Ask questions like (1) What drives the costs in a particular process? and (2) Is the process effective in achieving its goals? An initial step is to determine all the variables that contribute to the expenditures associated with the area. Another early step is to start with the areas that the customers think are of major importance to the organization being reviewed.

Identifying the scope of the process you plan to review is not always easy. It is not always clear where you begin and where you stop when you decide to benchmark a process. It is important that the entire process be considered, rather than just part of the process. For example, in reviewing DOD's food supply, we examined the entire food supply system, including buying, storing, and distributing food, rather than just a part of the system such as distribution, because these parts are interconnected and changes in one part will impact the others. If you fail to capture the entire process, then you may push costs into another section of the process or create an improvement that is inhibited by trying to marry old ways with new ways that are in conflict with each other. However, you cannot look at everything. At least initially, select a process which is about ready to accept change.

Under a best practices review, you are forced to consider new approaches. Specifically, you will compare how an organization performs functions with one doing them differently—such as a function in a unique government agency with a company performing the same or similar function in the private sector. The different approach may turn out to be a much better way of performing a function.
Implementing this better way to perform a process throughout the organization is what allows an agency to make meaningful changes. In identifying best practices among organizations, the "benchmarking" technique is frequently used. In benchmarking with others, an organization (1) determines how leading organizations perform specific processes, (2) compares their methods to its own, and (3) uses the information to improve upon or completely change its processes. Benchmarking is typically an internal process, performed by personnel within an organization who already have a thorough knowledge of the process under review. Our approach is similar. However, GAO's role is to look at the process from the outside, much like a consultant, and determine if that process can be improved upon or totally changed. The best practices evaluation will look not only at quantitative data, such as costs, but also at how other processes and aspects such as organizational culture might be affected by change.

How Do You Perform a Best Management Practices Review?

In our work, we have found several elements that any best practices review should include. These elements are listed below and then discussed separately in detail:

(1) Gaining an understanding of and documenting the government process you want to improve.
(2) Researching industry trends and literature, and speaking with consultants, academics, and interest group officials on the subject matter.
(3) Selecting appropriate organizations for your review.
(4) Collecting data from these selected organizations.
(5) Identifying barriers to change.
(6) Comparing and contrasting processes to develop recommendations.

The first step in beginning a best practices review is to thoroughly understand the government process you are reviewing before you go out to speak with officials in various organizations. This will help not only to fully understand the process but to recognize opportunities for improvement. Understanding the process will ease your analysis by defining a baseline for comparison and providing more focus to your questions when you make inquiries on the best practices identified in other organizations. Further, a good depth of understanding is essential to selecting appropriate comparison companies. Discussing the process in detail with agency officials and flowcharting the process will facilitate data gathering from the comparison organizations and the comparative analysis.

Preliminary planning and research are key elements in preparing a best practices review; both must be done before selecting the organizations for comparison. Performing a literature search, researching industry trends, and speaking with consultants, academics, and industry/trade group officials will provide valuable background information on the process under review. It will also provide you with the names of leading edge companies and public sector organizations. The people you speak with before selecting the organizations for comparison can give you useful information on the best practice you are reviewing, as well as the names of leading edge organizations. They may also be able to provide you with a contact into an organization. You will find the names of consultants, academics, and industry/trade groups during your literature search. Other resources for finding these names range from telephone book listings of industry groups to faculty rosters for schools that specialize in the area you are evaluating.
Obtaining company annual reports or other background information on the organization before your visit will help you to prepare for your meetings with officials. Most of the leading edge organizations receive calls from many others seeking to learn about their practices. Therefore, they will only provide you with a limited amount of time. Having a thorough background on the issue, including the government's process, will allow for an effective use of both parties' time.

After you have reviewed the literature and after all of your discussions with consultants, academics, and industry/trade group officials, you will have compiled a list of many organizations cited as "best" in their respective industry for the process you are reviewing. The next decision is determining how many organizations to visit. In our best practices reports, we visited an average of nine companies per job. Visiting too many companies can cause "analysis paralysis," according to benchmarking experts. These experts say to keep the number of companies to a manageable number, which can be as low as five. Officials from each organization that you speak with will also be able to tell you which companies are the best in a given area. You may want to add a company to your list if it is one that you keep hearing about. Getting the names of other leading edge organizations from these officials will also help to confirm that you selected the right companies to visit and provide additional leads on others. Depending on the process under review, you may want to select companies that are geographically dispersed. We used this criterion for the selection of companies in the DOD food inventory report. You will need to determine the criteria that best meet your needs. You may not always be able to gain access to the very best organizations. In these cases, what is important is to find companies that are considered by experts to be among the best at the process you are reviewing. Such companies may be able to give you more time than the very best, which may be flooded with requests to study them.

After you have researched and begun planning your review, you should develop a list of questions to use as a guide for discussions with the consultants, academics, and industry/trade group officials. You may need to refine the questions after these discussions and prior to your first interview with private or public sector officials. You may also need to refine the questions again after your first interview with these officials. A standard list of questions will ensure that you are obtaining comparable information among the organizations you speak with. As with the process of the agency you are evaluating, you will need a thorough understanding of the process in the private sector before you can compare and contrast the two and make effective recommendations. The list of questions will help you obtain the information needed from all sources in order to make a detailed analysis. Your analysis will involve looking for common practices and characteristics among the organizations you have identified as having best practices in the selected function you are reviewing.

A major challenge to ensuring that your final recommendations will be implemented and effective lies in identifying the barriers to change, whether real or perceived.
Your discussions with agency officials and your background research should provide information on such potential sources of barriers as regulatory requirements, organizational culture, and the impact of the change on the agency and its services. Government agencies often must operate under many statutory requirements that do not exist in the private sector. While such regulations do not always prevent the use of best management practices, they may make change difficult. For example, DOD officials were concerned that using private sector distributors to deliver food to base dining halls would eliminate the participation of small businesses. This concern was alleviated when we demonstrated that most private sector food distributors were already small businesses.

Organizational culture may be a major obstacle. In our work, we were faced with the fact that DOD has been doing business the same way for over 50 years. Such an entrenched system could make changes difficult to implement. As a way to encourage and support new ways of operating, we did a review on how leading edge companies were able to change their organizational culture in the face of radically new operations. The report provided an impetus for DOD to think differently. However, this work also showed that immediate and comprehensive change is unlikely in any organization: it can take 5 to 10 years or longer to change an organization's culture.

A paramount consideration should be the effect of recommendations on the agency's future ability to provide its service. For example, if your review leads to recommending that a function be privatized, you will need to consider the impact of taking the function away from the government. You will need to raise—and answer—such questions as what would happen if a strike should occur at the company that takes on the function, a natural disaster destroys the company building, or the company goes out of business. However, the private sector will likely be able to provide information on such contingencies, since the same events would have an equal impact on private and public sector operations.

The final step in the best practices review is to compare and contrast the agency's process to the processes of the organizations you benchmarked, and to decide whether the agency would benefit from implementing new processes. If the answer is "Yes," remember that flexibility is a key theme, as it may not be possible for the agency to do things exactly as they are done in the other organizations. A successful recommendation strategy in our work that encourages the idea of change is to give the agency a "basket of ideas" from which to choose and adapt to its unique operations. Demonstrating possible savings and recommending key steps for change will help to promote that change. Photographs of the consequences of the government's process versus the private/public sector's process are a convincing tool to illustrate the effectiveness of a recommended change.

In addition, we have tried to help DOD beyond issuance of the report. Specifically, we have tried to use the knowledge gained during the review to help in facilitating the change. For example, we have met formally and informally with key officials to discuss how the change can be implemented. We also made presentations to groups affected by the change.
In work such as this, "follow through" means staying in touch, educating, and influencing with whatever assistance can be provided. At the same time, we maintain our ability to critique the results in a constructive way. Perhaps the most convincing argument for implementing recommendations for radical change lies in the environment of tight budgets. At DOD, such constraints have forced DOD officials to look toward new ways to do business and, in turn, save money. Consequently, most officials have been receptive to many of our streamlining recommendations.

Much of what we have learned about doing best practices reviews goes into any evaluation-related work. However, we have some specific practices that were so useful to us that we created an ongoing list of helpful tips. These should help in planning the review and in establishing productive relationships with the selected organizations.

We used two different approaches to arranging a meeting with the desired officials of the target organization. First, if you have a contact's name, you can call the person directly and request an interview. You might either call first or send a letter followed up with a call. Second, if you were not able to obtain a name through the literature or through your discussions with the consultants, academics, and industry/trade officials, you can contact the office of the president of the company either by phone or by letter. This office will be able to direct you to the appropriate official(s). With either approach, your letter or your phone call should state your purpose very clearly and assure the officials that the information will only be used for benchmarking.

Send a copy of the questions to the organization's officials before your visit. This will allow them the opportunity to prepare for the meeting, gather requested information, and invite pertinent personnel. If the list of questions is long, you may want to consider sending a shorter version. After you have set up a meeting time, date, and place, it is best to mail (or fax) a letter of confirmation. Your questions can be sent with this letter. It is also a good idea to reconfirm the meeting a few days prior to your scheduled time. After the meeting, follow up with a thank you letter.

On average, plan to spend between 1/2 day and 1-1/2 days with the company. However, the amount of time a company will give you will vary. DMN's experiences have run the gamut from a 1-hour phone interview to a 2-week detailed look at a company's operations. If you plan to use the organization's name in the report, ask for permission. Inform all interviewees that you will be providing them with a draft or relevant portions of the report for their review. This will help ensure that you correctly interpreted the information obtained from interviews. It also allows the company the opportunity to ensure that it did not give you any proprietary information during the interview.

What Else Do You Need to Know?

Plan for your review (planning, data collection, and analysis) to take an average of 12 months. As pointed out above, these reviews take a lot of up-front work, and getting into leading-edge companies can take a long time. Nonetheless, we have found that the results of these reviews have justified the time spent. Throughout the review, pay attention to establishing good working relationships with these organizations. As in any evaluation, this provides a sound foundation for future contacts.
GAO reviewed best management practices to make government operations more efficient and less costly, focusing on those approaches adopted by the Department of Defense (DOD) that other federal agencies could use to improve their operations. GAO found that: (1) best management practices refer to the processes, practices, and systems identified in public and private organizations that performed exceptionally well and are widely recognized as improving an organization's performance and efficiency in specific areas; (2) successfully identifying and applying best practices can reduce business expenses and improve organizational efficiency; (3) best practices GAO has identified in its work resulting in recommendations to the defense community include: (a) relying on established commercial networks to manage, store, and directly deliver defense electronic items more efficiently; (b) using private sector food distributors to supply food to the military community faster and cheaper; and (c) adopting the use of supplier parks to reduce maintenance and repair inventories; (4) deciding to use a best practices approach involves considering a number of factors, and several questions can serve as a guide in making the decision, including: (a) Have the acknowledged problem areas been reported on before, and to what extent have there been attempts to make the process work as designed? (b) Is there a process with similar requirements that can be compared to the one being examined but is implemented in a way that provides significantly better results? and (c) Do the areas being considered have an established counterpart in the private or public sector that will provide evidence of the benefits of a new process? (5) a best practices review can be applied to a variety of processes such as payroll, travel administration, employee training, accounting and budget systems, procurement, transportation, maintenance services, repair services, and distribution; (6) the decision to use a best practices review should be made in a larger context that considers the strategic objectives of the organization and then looks at the processes and operating units that contribute to those objectives, asking questions like: what drives the costs in a particular process and is the process effective in achieving its goals; (7) it is important that the entire process be considered rather than just part of the process; (8) failing to capture the entire process may push costs into another section of the process or create an improvement that is inhibited by trying to marry old ways with new ways that are in conflict with each other; and (9) not everything can be looked at so, at least initially, a process which is about ready to accept change should be selected.
A forced transfer occurs when a plan participant has separated from an employer but still has vested savings in the employer's 401(k) plan and the plan sponsor decides not to allow the savings to remain in the plan. Prior to the Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA), plans could, in the absence of participant instructions, distribute balances of not more than $5,000 by paying them directly to the participant, referred to as a cash-out. EGTRRA sought to protect forced-out participants' retirement savings by requiring that, in the absence of participant instructions, active plans transfer balances of $1,000 or more to forced-transfer IRAs, thus permitting the plan to distribute them while preserving their tax-preferred status. Expanding upon the statute, regulations later provided that in the absence of participant instructions, plans could opt to also transfer balances of $1,000 or less into forced-transfer IRAs. Active plans may not distribute accounts with contributions of more than $5,000 without the consent of the participant.

EGTRRA also required DOL to prescribe regulations providing safe harbors under which the designation of a provider and the investment of funds for a forced-transfer IRA are deemed to satisfy fiduciary duties under ERISA. These regulations, issued in 2004, established a 'safe harbor' for plan fiduciaries transferring forced-out participants' accounts, which includes conditions pertaining to the plan fiduciaries' selection of the IRA provider and the investments of the transferred funds. We identified five main components in the regulations: (1) preservation of principal, (2) maintenance of the dollar value of the investment, (3) use of an investment product of a state or federally regulated financial institution, (4) fees and expenses, and (5) a participant's right to enforce the terms of the IRA. Plan sponsors forcing participants out of plans by transferring their accounts into forced-transfer IRAs legally satisfy their fiduciary standard of care to participants if they comply with DOL's safe harbor regulations. Once an account is transferred into a forced-transfer IRA, it is subject to the rules generally governing IRAs.

A forced-transfer IRA is a type of IRA that can be opened by a plan on behalf of a participant, without the specific consent or cooperation of that participant. In these instances, a plan signs a contract with an IRA provider, which may or may not be the plan's record keeper, to establish and maintain the account. While the use of forced-transfer IRAs for accounts under $1,000 is not required, plan sponsors may elect to use forced-transfer IRAs rather than cash-outs when forcing out such accounts in the absence of distribution instructions from the participant, as shown in figure 1 and sketched in code below.

Use of forced-transfer IRAs is common among 401(k) plans. One annual industry survey shows that about half of active 401(k) plans force out separated participants with balances of $1,000 to $5,000. Data provided by the Social Security Administration (SSA) highlight the amount of retirement savings that could be eligible for forced-transfer IRAs. From 2004 through 2013, separated employees left more than 16 million accounts of $5,000 or less in workplace plans, with an aggregate value of $8.5 billion. A portion of those accounts constitutes billions in retirement savings that could be transferred later to IRAs. Even if plans do not force out participants' accounts immediately upon separation, they may do so later in the year.
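As a rough illustration, the distribution rules described above can be expressed as a simple decision function. This is a sketch of our reading of the rules, not statutory text; it ignores the rollover-exclusion wrinkle discussed later and treats a balance of exactly $1,000 as triggering the forced-transfer IRA requirement:

```python
def forced_transfer_disposition(vested_balance, has_instructions):
    """Sketch of the distribution rules described above for an active plan
    that forces out separated participants. Simplified: ignores the
    rollover-exclusion rule discussed later, and treats a balance of
    exactly $1,000 as requiring a forced-transfer IRA."""
    if has_instructions:
        return "follow the participant's distribution instructions"
    if vested_balance > 5000:
        return "may not distribute without participant consent"
    if vested_balance >= 1000:
        return "transfer to a forced-transfer IRA"
    # Under $1,000 the plan may cash out the balance or, at its option,
    # transfer it to a forced-transfer IRA.
    return "cash out, or optionally transfer to a forced-transfer IRA"

print(forced_transfer_disposition(3200, has_instructions=False))
# -> transfer to a forced-transfer IRA
```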
Plans may, for instance, sweep out small accounts of separated participants once a year or amend their plans years after participants separate and then force them out. Multiple federal agencies have a role in overseeing forced transfers and investments, inside and outside the Employee Retirement Income Security Act of 1974 (ERISA) plan environment, as discussed in table 1. (The SSA figures above are based on SSA analysis of Form 8955-SSA data, which are collected by IRS and then transmitted to SSA. SSA data include benefits left behind by separating participants in all defined contribution plans, including 401(k) plans, as well as in defined benefit plans, which are not subject to forced transfers under 26 U.S.C. § 401(a)(31)(B). GAO assessed the reliability of the data and found that they met our standards for our purposes.)

Some forced-transfer IRAs are not the short-term investment vehicles for which their default investments are better suited, but providers do not have the flexibility under current DOL safe harbor regulations to use investment vehicles that are better suited to a longer-term investment horizon. Rather, the safe harbor requires that the investment "seek to maintain, over the term of the investment," the dollar value equal to the amount rolled over. To achieve this, DOL narrowly wrote the investment guidance portion of the forced-transfer IRA safe harbor regulations to effectively limit providers to holding the funds in money market funds, certificates of deposit, or assets with similarly low investment risk typically deemed appropriate for holding money for a short term. While such conservative investments generally ensure that the money is liquid (that is, available to the owner upon demand for cash-out, transfers to another account, or reinvestment elsewhere), they can result in a low return and potentially minimal growth over time.

Most forced-transfer IRA balances in accounts we analyzed will decrease if not transferred out of forced-transfer IRAs and reinvested, because the fees charged to the forced-transfer IRAs often outpace the low returns earned by the conservative investments prescribed by DOL's safe harbor regulations. In recent years, the typical forced-transfer IRA investment, such as a money market account, has earned almost no return. For example, taxable money market funds averaged 1.45 percent for the 10 years ending July 31, 2014. We collected forced-transfer IRA account information from 10 forced-transfer IRA providers, including information about the fees they charged, the default investments used, and the returns obtained (prior to these fees). Among those 10, there were 19 different combinations of fees and returns, as some providers offered more than one combination for their forced-transfer IRA contracts. The typical investment return for the 19 different forced-transfer IRA combinations ranged from 0.01 percent to 2.05 percent. A low return coupled with administrative fees, ranging from $0 to $100 or more to open the account and $0 to $115 annually, can steadily decrease a comparatively small stagnant balance.

Using the forced-transfer IRA fee and investment return combinations, we projected the effects on a $1,000 balance over time. While projections for different fees and returns show balances decreasing at different rates, generally the dynamic was the same: small accounts with low returns and annual fees decline in value, often rapidly. In particular, we found that 13 of the 19 balances decreased to $0 within 30 years. (See appendix III for all projected outcomes.)
For example, the fees and investment returns of one provider we interviewed would reduce an unclaimed $1,000 balance to $0 in 9 years. Even if an account holder claimed their forced-transfer IRA after a few years, the balance would already have decreased significantly. Among the 19 combinations we analyzed, our analysis showed an average decrease in a $1,000 account balance of about 25 percent over just 5 years. The rate of investment return needed to ensure a forced-transfer IRA balance does not lose real value varies depending on the rate of inflation and the fees charged. For example, given the median fees for the 19 forced-transfer IRAs we analyzed, the investment return on $1,000 would have to be more than 7.3 percent to keep pace with both the rate of inflation and the fees charged. This is consistent with information we obtained from forced-transfer IRA providers, five of which told us that their accounts are reduced to zero or do not keep pace with inflation and fees.

(Target date funds are designed to be long-term investments for individuals with particular retirement dates in mind. For more information on target date funds, see GAO, Defined Contribution Plans: Key Information on Target Date Funds as Default Investments Should Be Provided to Plan Sponsors and Participants, GAO-11-118 (Washington, D.C.: Jan. 31, 2011).)

In recent years, assets in target date fund default investments have produced a higher return than typical forced-transfer IRA investments, which have seen minimal returns. For example, as shown in figure 2, under reasonable return assumptions, if a forced-out participant's $1,000 forced-transfer IRA balance had been invested in a target date fund, the balance could grow to about $2,700 over 30 years (173 percent growth), while the balance would decline to $0 if it had been invested in a money market account. We also projected the remaining balance assuming a 15-year average return for money market funds, which is 1.89 percent, and found no material difference in the result. Using that return, the balance still decreased significantly over time, leaving a $67 balance after 30 years. (The projection arithmetic is sketched below.)

According to DOL officials, the agency has the flexibility under current law to expand the safe harbor investment options for forced-transfer IRAs, but currently its regulations do not permit those accounts to be invested in the same funds allowed for participants automatically enrolled in 401(k) plans. DOL's goal of preserving principal is important and consistent with statute, but without more aggressive investment options, forced-transfer IRA balances can continue to lose value over time, causing some former participants to lose the savings they had accumulated in their 401(k) plans. However, allowing forced-transfer IRAs to be invested for growth, such as through a target date fund, may be more effective in preserving principal.

Currently the default destination for forced transfers of more than $1,000 from active plans is an IRA. EGTRRA sought to protect forced-out participants by providing that, in the absence of any participant instructions, active plans that choose to force out participants with accounts of $1,000 or more must transfer the accounts to an individual retirement plan, which is defined as an IRA or individual retirement annuity. Directing these larger balances to IRAs in lieu of cashing them out preserves their savings' tax-preferred status in a retirement account. Current law does not permit DOL and IRS to adopt alternative destinations.
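The projections described above follow from simple year-by-year arithmetic. The sketch below uses hypothetical fee and return combinations chosen only to reproduce the three patterns discussed (decline to $0, slow erosion, and growth); they are not the actual provider terms GAO collected or the exact assumptions behind GAO's figures:

```python
def project_balance(start, annual_return, annual_fee, years):
    """Project an account balance assuming returns are credited and a flat
    fee is deducted once per year; the balance cannot fall below zero."""
    balance = start
    for _ in range(years):
        balance = max(0.0, balance * (1 + annual_return) - annual_fee)
    return balance

# Hypothetical scenarios (return, flat annual fee); the target date case
# assumes a 3.4 percent return net of asset-based fees and no flat fee.
scenarios = [
    ("money market, near-zero return, $65 fee", 0.0001, 65.0),
    ("money market, 1.89% return, $35 fee", 0.0189, 35.0),
    ("target date fund, 3.4% net return", 0.034, 0.0),
]

for label, ret, fee in scenarios:
    final = project_balance(1000.0, ret, fee, years=30)
    print(f"{label}: $1,000 -> ${final:,.0f} after 30 years")
```

Under these assumptions, the first balance reaches $0 in about 16 years, the second erodes to roughly a third of its starting value, and the third grows to about $2,700, in line with the 173 percent growth figure cited above.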
The specific investment products held in IRAs and 401(k) plans, as well as the various financial professionals that service them, are subject to oversight from applicable securities, banking, or insurance regulators, which can include both federal and state regulators. Thus, when an individual's 401(k) plan account is transferred to a forced-transfer IRA, it is no longer under the regulatory purview of ERISA and is essentially without DOL oversight. In addition, by transferring forced-out participants' funds in accordance with the safe harbor regulations, a plan satisfies its fiduciary duty of care under ERISA, and the transfer constitutes a final break between the plan and the transferred account. For example, the plan is not required to monitor the forced-transfer IRA to ensure that the provider abides by the terms of the agreement.

Current law permits terminating plans to distribute forced transfers in multiple ways, with the forced-transfer IRA being only one option, as shown in table 2. In addition to the forced-transfer IRA, terminating plans may also purchase an annuity with the forced-transfer balances or escheat (transfer) the balances to the state. Further, the Pension Protection Act of 2006 (PPA) created a forthcoming alternative for terminating 401(k) plans, which will be to transfer balances to the Pension Benefit Guaranty Corporation (PBGC) when there are no instructions from the participants. Moreover, we found that some providers will not provide forced-transfer IRAs for terminating plans, generally doing so only as an accommodation to ongoing plan clients and because these plans do not have alternatives to the IRA. As a result, a smaller number of participants forced out of terminating plans will end up with savings in a forced-transfer IRA.

(See 26 U.S.C. § 4975 and 29 U.S.C. § 1108. Terminating plans force out both current and separated participants' balances to dispose of all plan assets, as required to complete a plan termination. States generally have jurisdiction over unclaimed property, but they cannot require escheatment of inactive accounts still in a 401(k) plan because the Employee Retirement Income Security Act of 1974 (ERISA)—the law governing employer-based retirement plans—preempts states' claims to what would otherwise be considered abandoned property. 29 U.S.C. § 1144. Therefore, this escheatment is the plan's choice for the disposition of the account. U.S. Dept. of Labor, Field Assistance Bulletin 2014-01 (Aug. 14, 2014), available at http://www.dol.gov/ebsa/regs/fab2014-1.html.)

We also found that forced-transfer IRAs can become long-term investments for the many account holders who do not claim their accounts, even though the emphasis placed by the safe harbor regulations on liquidity and minimizing risk is more often associated with short-term investment vehicles. One of the larger providers we interviewed said that during the first year after an account is opened, about 30 percent of account holders will do nothing with the account. According to a forced-transfer IRA service provider, an estimated half of the accounts they opened are for missing participants. Many unclaimed accounts may remain so indefinitely. For example, one provider we interviewed reported that nearly 70 percent of the accounts it has opened within the last 5 years remain open and unclaimed. Additionally, an individual could end up with multiple forced-transfer IRAs over time—each incurring its own administrative fees.
Two providers we interviewed explained that they do not consolidate forced-transfer IRAs opened for the same individual, meaning that accounts even with the same provider could incur redundant fees. Although there may be alternatives to the forced-transfer IRA today that were not considered in 2001 when the law was passed, without authority to do so, DOL and IRS cannot identify or facilitate alternative destinations for these accounts. Providing an alternative destination for forced transfers would help to preserve participants' accounts and increase the possibility for growth. Absent such an alternative, as we have shown, former plan participants' savings will continue to be placed in investments unlikely to be preserved or grow over the long term.

Current law allows plans that are determining whether they can force out a participant to exclude rollover amounts, and any investment returns earned on them while in the plan, from the vested balance that determines whether the participant may be forced out. Specifically, separated participants with 401(k) accounts of any size can be forced from a plan if the vested balance (in the absence of rollover amounts and their earnings) is $5,000 or less, as demonstrated in figure 3. A rollover of more than $5,000 would not have been subject to forced transfer if it had remained in the participant's last plan, but may become subject to it once transferred to the new plan.

401(k) plan participants often lose track of their accounts over time. In the United States, the responsibility is on the individual to manage their retirement savings, including keeping track of 401(k) plan accounts. The considerable mobility of U.S. workers increases the likelihood that many will participate in multiple 401(k) plans. Over the last 10 years, 25 million participants in workplace plans separated from an employer and left at least one account behind, and millions left two or more behind. When individuals hold multiple jobs, they may participate in many 401(k) plans or other types of employer-sponsored plans and, upon changing jobs, face recurring decisions about what to do with their plan savings. Figure 6 illustrates how a participant can accumulate multiple retirement accounts over a career.

There are many reasons participants have multiple accounts for retirement savings. Currently, there is no standard way for participants to consolidate their accounts within the 401(k) plan environment. For example, employers do not always permit rollovers into their 401(k) plans. As we previously reported, there are barriers to plan-to-plan rollovers that DOL and IRS need to address to facilitate such rollovers when participants may wish to choose that option. Absent plan-to-plan rollovers, participants frequently roll over their accounts into IRAs or leave their 401(k) savings with their former employers, both of which increase the number of accounts for the participants if they then go on to enroll in their new employers' plans. Plan-to-plan rollovers help reduce the number of lost accounts because the accounts stay with the participants. This option is, however, irrelevant if the new employer does not offer a plan. Industry representatives we interviewed said automatic enrollment also contributes to participants having multiple accounts.
Although automatic enrollment facilitates retirement saving, individuals may be less apt to pay attention to an account that they did not make the decision to enroll in. Industry professionals told us that individuals with a collection of many small accounts may forget about them because the small balances provide less incentive to pay attention. In addition, automatic enrollment is likely to exacerbate the accumulation of multiple, small accounts. As more participants are brought into the system, there could be an increase in forgotten accounts, because many of those participants are unengaged from the start, in spite of DOL notification requirements. However, as GAO has previously reported, automatic enrollment can significantly increase participation in 401(k) plans.

When participants leave their savings in a plan after leaving a job, the onus is on them to update former employers with address and name changes and to respond to their former plan sponsor's communications. Plans and record keepers have no automatic way to keep this information up to date for participants, nor do they have ways to ensure that separated participants will respond to their communications. For example, one industry professional noted that if former participants' e-mail contacts are work e-mails, they will lose contact with their plans when they change jobs and do not provide alternate e-mail addresses. When a plan loses track of a participant, a number of challenges arise because the plan must spend time and incur costs searching for the participant. In addition, there are no standard practices among plans and providers for the frequency or method of conducting searches for missing or nonresponsive participants. While there is agency guidance on searching for missing participants in terminating plans prior to forcing them out, in hearings before the ERISA Advisory Council and in our interviews, providers and other industry professionals reported that the guidance on searches is unclear and insufficient. For instance, it is unclear how to satisfy disclosure requirements when the participant's address on file is known to be incorrect. One provider told us plans are obligated to make a "good faith effort" to locate participants, but they do not always know what a good faith effort entails. This leaves plans unsure of what steps they must take to satisfy applicable search requirements.

Employer actions, such as terminations, mergers, and bankruptcies, can also make it difficult for participants to keep track of their accounts. Participants and beneficiaries can lose track of former employers' plans when the employers change location or name, merge with another company, spin off a division of the company, or go out of business. DOL officials said that one of the most challenging problems facing participants and their advocates is tracking down lost plans. For example, company records for legacy plans, old plans that no longer have operating units, may be scattered, making employee and participant data difficult to locate, and the former plan's administrative staff may no longer be available to respond to questions regarding participant records.

The current regulatory environment also presents challenges to participants. Participants separating from their employer are to receive information about their accounts via multiple disclosures.
Depending on the actions participants take regarding their accounts upon separation, their former employers will provide them and regulatory agencies with relevant required disclosures and reports. (See appendix VI for a list of selected disclosures required when participants separate from employment or when plans undergo certain types of corporate restructuring.) As participants change jobs over time and accumulate multiple accounts, those who remain engaged with their accounts will acquire a large volume of documentation. For example, the hypothetical worker from figure 6 who had separated from three jobs would receive at least nine different notices from the three plan sponsors. In the instances where the worker's 401(k) plan savings were transferred to another account, the worker would have been provided information about the new account. Over time, the worker would continue to receive annual—or sometimes quarterly—statements from each account, as well as various other notices and disclosures depending on any changes in the structure or terms of the plans or accounts. Over 10 years, if that worker had no further job changes or changes to the existing accounts, routine account statements alone would result in at least 40 separate documents (10 annual statements from each of the four accounts). If even one of the accounts issued quarterly statements, the number of documents would increase to 70 (40 quarterly statements from that account plus 30 annual statements from the other three), and that count does not include any disclosures the worker might receive about the underlying investments of the accounts or information regarding changes to a plan or IRA.

Participants may also have difficulty understanding the complex notices or account statements they receive. As we have previously reported, the quantity of information participants receive may diminish the positive effects such important information could have for them. Our previous work found that participant disclosures do not always communicate effectively, that participants often find the content overwhelming and confusing, and that many participants rarely read the disclosures they receive.

In addition, although 401(k) plans are required to report annually on plan design, finances, and other topics to DOL, IRS, and PBGC via the Form 5500 Series and a number of other forms required by the IRS, the information reported may not always result in a clear record or trail of employer or plan changes. For instance, DOL officials told us that many small plans fail to file an updated or final Form 5500, which would include a valuable piece of information, the employer identification number, which can be used to track a new plan resulting from a merger. In the event of a plan termination, the plan administrator may file Form 5310, a request for a determination letter on a plan's qualified status, with the IRS, but must provide participants a "Notice to Interested Parties" notifying them of their right to comment on the plan termination. We recently reported that this notice confuses participants and that it is difficult for the average pension plan participant to exercise the right to comment. In certain instances of a company spinoff, only the plan that was in existence before the spinoff is required to file a Form 5310-A, making it difficult to trace the new plan back to the original plan. In addition, participants may receive a Notice of Plan Termination that includes information on their account balance, their distribution options, and how to make an election and provide instructions to the plan.
Federal agency officials told us that inactive participants can fail to find their accounts with former employers and do not always know where to go to seek assistance or find information about their accounts. (See table 4 for descriptions of the roles of federal agencies and other entities in helping participants find their plan accounts.) Even with the information participants received while still active in plans or at separation, they then have to figure out which agencies and private-sector entities to contact to find their accounts. Former employers and record keepers have information participants may need, but participants will need to have stayed in contact with their former employers to get that information. Federal agencies may also have some of the information participants need. However, the information agencies provide is not designed to help participants keep track of multiple accounts or to find lost accounts. Consequently, participants searching for accounts from former employers may have incomplete information. Moreover, even if participants kept the notices and statements sent to them, the information they need may be out of date and located in multiple documents. As a result of such information challenges, even those who obtain assistance from benefits advisors with government or non-profit programs may be unable to locate all of their retirement savings.

The Social Security Administration (SSA) provides information that can help participants locate retirement savings left with a former employer. The Potential Private Retirement Benefit Information notice (Notice) includes information that could be beneficial to individuals looking for missing accounts, including the name of the plan where a participant may have savings, the plan administrator's name and address, the participant's savings balance, and the year the plan reported savings left behind (see fig. 7). SSA sends the Notice when an individual files for Social Security benefits, which can occur as early as age 62, unless the Notice is requested earlier. Individuals appear to be generally unaware that this personal financial information exists or that they may request it from SSA, since few individuals request the form prior to retirement: SSA officials said that they received only about 760 requests for the form in 2013, though according to data provided by SSA, the agency has records of potential benefits for over 33 million people. Agency officials told us they were not aware of any advertising, or any effort on the agency's website (www.ssa.gov), to promote the availability of the Notices or inform people of their ability to request them. Officials also said that approximately 70,000 Notices are generated for new Social Security beneficiaries every month.

Individuals may receive multiple Notices at retirement if they have left savings in more than one employer plan over their career. Although the same information is reported on each form and SSA houses the data for years, the data are not compiled for ease of use by the recipient or for efficiency and cost savings by the agency. Agency officials explained that in the past, people worked for one company for most of their lives and were more likely to have had a traditional defined benefit pension plan; consequently, the format of the Notice only allows for data from one employer.
Because the Notice is not currently formatted to display consolidated data on potential benefits from multiple employers, information on benefits from each employer must be sent separately to the participant. Given that many individuals will change jobs often throughout their working lives, they can therefore expect to receive several different Notices, adding to the number of disclosures, communications, and notices they are expected to review, understand, and consider in managing their retirement savings. Combining multiple Notices could simplify the process of managing multiple forms for people with more than one account and reduce costs for SSA.

SSA also sends individuals a more widely known document, the Social Security Statement (Statement), which estimates Social Security retirement benefits at different claiming ages and displays a worker's earnings history. SSA suspended the mailing of paper copies of the Statement in 2011 but in 2014 resumed mailing the Statements to individuals every 5 years, starting at age 25. This information is also available online to anyone who establishes an account. Similar to the Notice, the Statement contains important information about an individual's potential retirement income. Together, these documents give individuals a more complete understanding of their income in retirement. SSA also has earnings recorded by employer, which are available upon request but are not mailed to individuals or available online. The earnings record for each year could provide clues as to when certain periods of employment occurred, which industry professionals suggest is key information for individuals conducting a search for lost 401(k) plan accounts.

Given the multiple Notices individuals can receive from SSA, in addition to the Statement, finding a way to reduce duplication could help individuals keep track of their accounts and locate missing ones. SSA has a process in place to review and revise the Statement, and the agency already stores all of the data published in the Notice. As suggested in figure 8, providing the Notice at the same time as, and together with, the Statement is one way to give individuals a consolidated, timely resource and reduce the volume of paperwork they need to keep track of over time.

As noted earlier, when plans lose track of participants due to outdated mailing addresses, participants fail to receive critical plan communications about their accounts and about any changes to the plan name or administrator, information that will be vital when they want to communicate with the plan and claim their benefits. An industry professional we interviewed suggested that participants receive some type of reminder to notify plans of address changes. If participants are reminded of their inactive 401(k) plan accounts via the Notice and prompted to inform plans of updated address information, the accounts may be less likely to become lost. Making the information available online or mailing it every 5 years can remind participants of the multiple accounts they have. Providing combined information on inactive accounts from multiple employers can also give individuals the information they need to keep track of their multiple accounts and the opportunity to correct inaccurate account information. In addition, having a reminder of the accounts they left behind may increase the likelihood that participants pay attention to other plan communications.
To manage inactive workplace retirement accounts, officials in the countries in our study told us that the United Kingdom (U.K.), Switzerland, and Australia use forced transfers, and that Australia, Denmark, Belgium, and the Netherlands use tracking tools. Like the forced-transfer IRAs of the United States, the forced transfers we were told about in these countries move account balances without participant consent. In the countries we studied with forced transfers, however, those accounts follow participants as they change jobs, are efficiently managed in a single fund, or are held free from fees at a government agency. Each of these approaches helps to preserve the real value of the account for the participant and generally ensures workplace plans will not be left with the expenses of administering small, inactive retirement accounts.

Although the models employed vary by country, the three countries with tracking tools we studied allow participants online access to consolidated information on their workplace retirement accounts, referred to as "pension registries" in this report. Approaches include both databases and "service bus" interfaces connecting providers to participants in real time. Roles for government in these countries range from holding the data and analyzing it for tax and social policy purposes to collaborating with an industry-created pension registry, allowing for information on national pension benefits to be provided in the registry.

According to officials we interviewed in the three countries that use forced transfers, they have legislation that (1) consolidates transferred accounts, either in a participant's new plan or with other forcibly transferred accounts, and (2) enables these accounts to grow, either at a rate comparable to participants' current retirement accounts or at least in pace with inflation (see table 5).

Switzerland—According to Swiss officials, forced transfers in Switzerland are consolidated in a single fund, the Substitute Plan, administered by a non-profit foundation and invested until claimed by participants (see fig. 9). The Substitute Plan serves as a back-up in those instances when participants fail to roll their money over to their new plan, as they are required to do by law, according to Swiss Federal Social Insurance Office officials. Plans report information on inactive accounts to the Guarantee Fund, the Swiss organization insuring insolvent workplace plans. After 2 years of inactivity, those accounts must be transferred to the Substitute Plan. Officials said the Guarantee Fund is responsible for returning the retirement savings of participants to them in retirement. According to officials at the Swiss Federal Social Insurance Office, the Substitute Plan held about $5.5 to $6.6 billion in 2014. They said its investments have outperformed those in workplace plans in recent years, and, compared to most workplace plans, its administrative costs are low, in part because the board manages the investments itself. According to a Substitute Plan board member, the board receives investment guidance from financial experts who counsel its investment committee.

The United Kingdom—Officials said that the United Kingdom's "pot-follows-member" law is designed to make workplace retirement accounts move with participants throughout their careers, or at least until the balances are large enough that they could be used to buy an annuity.
Transfers of participants' savings from a former employer are initiated by the new employer when it is notified, likely through information technology, of an eligible account. Although every employer in the United Kingdom must automatically enroll workers between age 22 and retirement age who earn more than about $17,000 a year, U.K. officials are still considering how they will implement the law when no new plan exists to transfer money to. They said one benefit of putting the responsibility on the new plan is that the trigger for the transfer does not occur until a new plan exists. Once implemented, the pot-follows-member law will automate the plan-to-plan rollover process for participants, keep transferred assets under participant direction, and generally ensure plans do not manage small, inactive accounts. Figure 10 depicts three ways workplace retirement accounts can follow job-changing participants.

Australia—Rather than invest forcibly transferred accounts for long-term growth, officials told us the Australian government preserves value while taking proactive steps to reconnect participants with their accounts. Accounts inactive for 1 year are transferred to the Australian Tax Office (ATO), which holds them in a no-fee environment and pays returns equal to inflation when they are claimed. When individuals go online to submit their taxes or change their address, they are provided a link to view any account they have that is held by the ATO, and they can consolidate it with another account online. Unlike the United Kingdom and Switzerland, the Australian approach requires participants to take action to invest their transferred accounts for long-term growth, although it provides a tool to help them do so.

European Commission officials we talked to said that the most basic form of centralized information on plans should help participants find providers, allow them to view which plans serve which employers, and provide relevant plan contact information. All of the approaches we reviewed in the selected countries went further. The Netherlands, Australia, and Denmark provide consolidated, online information—called pension registries—to participants on all of their workplace retirement accounts, and Belgium is scheduled to do the same by 2016. The pension registry designs in the four countries we studied with such registries share some common elements that make them useful for participants:

Active and inactive accounts: All include data on both active and inactive accounts, including account balances and plan or insurer contact information.

Website accessibility: All include the identity authentication necessary to securely allow online access to individual participants.

Workplace account information: All include information on workplace retirement accounts.

While various benefits to participants were cited as the impetus for creating registries in each of these countries, pension registries can be used by plans as well. A representative of one workplace plan in Belgium said the plan uses pension registry data to find missing participants, make payments to participants as planned, and eliminate liabilities for those payments. The representative also added that when the pension registry goes live to participants, the plan may spend less time answering questions from participants who do not have a clear understanding of their rights to benefits.
Instead, the plan will refer participants to the pension registry for answers. Plans in Australia also use the pension registry to identify inactive accounts their participants have in other plans and to talk to participants about consolidating their accounts. Table 6 shows various attributes of the pension registries in the countries we included in our study. Further details on these pension registries can be found in appendix VII.

Denmark—Denmark's pension registry incorporates personal retirement accounts similar to IRAs in the United States and facilitates retirement planning by allowing participants to see how their financial security in retirement varies with factors like retirement age and spend-down option, according to documentation provided by Danish officials. Although the Danish pension registry is a private non-profit organization financed by participating pension providers, it also works with the government to provide data on public retirement benefits (see fig. 11).

The Netherlands—Participants have had access to the pension registry since 2011 using a national digital ID, following the enactment of legislation in 2006 and 2008. The Dutch Social Insurance Bank worked for years with the largest pension plans to develop the registry, though the pension industry in general—including insurance companies and smaller pension funds—provided input into the registry, according to industry representatives we interviewed. The pension industry's web portal collects and consolidates pension information from funds and insurers when a participant logs in to the system. Participants can also access national, or government, pension information. The pension registry does not store the information in a central location because of security concerns over participants' private information, according to representatives of the pension registry we interviewed. The government plans to expand the pension registry into a pension dashboard that will project retirement benefits under various life events and allow participants to view their entire financial situation to facilitate retirement planning. The aim of the expansion is to increase financial literacy and also to affect behavior.

Australia—Participants can access the pension registry, SuperSeeker, online using a unique electronic government ID. Participants can also use a phone service or smartphone application to get the information, according to the SuperSeeker website. With SuperSeeker, participants can view all of their workplace retirement accounts on the web, including active and inactive accounts and any lost accounts held by the ATO. SuperSeeker can also be used to locate lost accounts not held by the ATO. The content of the registry is generated by plans, as they are required to report the details of all lost accounts, in addition to active accounts, to the ATO twice each year.

Belgium—Officials told us individual participant access is planned for 2016 using a national digital signature. They said the law creating the workplace pension registry was passed in 2006. Pension providers are required by law to submit workplace pension information to the registry. An electronic database of national pensions (similar to Social Security in the United States) already existed for private-sector workers, and in 2011 the government included public-sector workers in the database to create a unified national and workplace pension registry.
Starting in 2016, all participants will be able to securely access the integrated data on both national and workplace retirement plans, according to Belgian government officials.

European Commission officials told us Denmark has the most advanced pension registry and, as such, is a model for an international registry accessible by participants across the European Union. With a population of over 500 million in 28 member states, the European Union is more similar to the United States in terms of population and geographic size than the individual countries we included in our study; thus, the challenges the European Union faces in setting up a pension registry may be particularly relevant for the United States. By creating a pan-European pension registry, European Commission officials said, they aim to ensure that workers moving across borders do not lose portions of the retirement entitlements accrued in different jobs and countries. According to European Union data, European workers are increasingly mobile in the labor market, with the number of economically active European Union citizens working across borders having increased from 5 million in 2005 to 8 million in 2013. Their accumulated retirement benefits are scattered over several countries, making it difficult for participants to keep track of them and for providers to locate missing participants.

Because some countries in Europe already have pension registries, European Commission officials said a European registry may involve linking existing registries together. To this end, the European Commission has hired a consortium of six experienced pension providers from the Netherlands, Denmark, and Finland to study possible approaches and come up with a pilot project on cross-border pension tracking. This Track and Trace Your Pension in Europe project presented to the European Commission its initial findings in support of a European Tracking Service. The project found no uniform approach to pension tracking in the European Union. Although 16 countries report having a national pension tracking service, according to project documentation, these services vary substantially in terms of functionality, coverage, service level, and complexity.

The European Commission expects to face challenges implementing a pension registry in the European Union because tax laws and languages vary from country to country. European Commission officials said they face several challenges: the standardization required for all plans in all European Union countries to interface with the same system, the data security required for all those involved to trust the system, and questions about how the system would be financed. For example, according to the project's initial findings, few countries have a standardized format for pension communication, though most report having legal requirements for providers to inform participants on a regular basis. These differences across countries reflect the different levels of maturity of the pension systems across Europe. European Commission officials noted it is likely to take many years to standardize data. However, as representatives of the Dutch Association of Insurers pointed out, unless the trend of increasingly frequent job changes reverses, a pension registry will only become more important in the future.

Currently, there is no national pension registry in the United States.
No single agency or group of agencies has responsibility for developing a pension registry for participants looking for their accounts, and no coalition of financial firms in the retirement industry has acted on its own in lieu of government involvement. While projects to provide access to current, consolidated information on workplace retirement accounts are complete or in the final stages in other countries, the United States has not undertaken a coordinated effort to determine how to provide the same to Americans. The current piecemeal approach involving the Department of Labor (DOL), the Pension Benefit Guaranty Corporation (PBGC), the Department of Health and Human Services (HHS), and the Social Security Administration (SSA) is largely reactive. Participants often turn to DOL, HHS, or PBGC for assistance once their accounts are lost, and SSA generally only provides information once participants have reached retirement age.

The current approach also requires a high level of participant engagement with complex financial information, which the current state of financial literacy in the United States and the findings of behavioral economics suggest should not be expected. As discussed earlier, the task of tracking down retirement savings is a substantial challenge for participants, who may lack information that would allow other entities to help them find accounts. Similarly, the inactive account information SSA provides pertains to "potential" benefits, leaving the participant to determine whether or not the benefits exist.

The pension registries in the countries we reviewed are comparatively proactive and rely less on participant engagement. Their approach of providing access to current, consolidated information on all workplace retirement accounts may help prevent an account from being lost and remove the need for a participant to work with government to find it. That approach also relies less on participants because they need not keep records or update their addresses with plans to ensure they can later receive their benefits. For example, Australia was able to develop a registry that allows a participant to consolidate benefits online on a single website, without engaging directly with either plan or calling a government-funded assistance program.

Congress is aware of the problem participants have tracking accounts, and the Pension Protection Act of 2006 expanded PBGC's Missing Participant Program. Two industry associations have also suggested that a central database be created that participants can check to determine whether they have a lost account in any ongoing plan, in addition to any from terminated plans. Some of the groundwork for consolidating various pieces of information is already in place in the United States through large service providers that manage data on thousands of plans and millions of account holders. For example, information on retirement savings in many IRAs, workplace plans, and Social Security is already accessed online. While the U.S. retirement system is different from those in the countries with pension registries we studied, and the appropriate scope, oversight, and financing method for a U.S. pension registry have not been determined, the variety of examples in place in the countries we reviewed provides ideas to consider. Currently, DOL and other federal agencies do not have any ongoing efforts to develop such a registry. Until there is a concerted effort to determine the potential for a U.S.
pension registry, it may be premature to say whether U.S. workers can benefit from the same information as participants in some other countries.

The United States has a highly mobile labor force and an economy marked by frequent business formations and failures. The accessibility and portability of the U.S. account-based 401(k) plan system presumably allow participants to retain and manage their retirement assets throughout their careers, even with multiple job changes. However, there are significant limitations with regard to portability and information access as a result of the current structure and rules of the 401(k) plan system. Given the expected continued growth of this system and the expansion of automatic enrollment, the number of inactive participants and inactive accounts—and the associated challenges—will almost certainly grow, exacerbating inefficiency and eroding the retirement security of U.S. workers.

Under current law, there is no mechanism in place that would allow plans or regulators to develop or consider additional default destinations when employees are forced out of 401(k) plans. Although other countries' approaches pose implementation challenges within the United States, there may be ways that DOL and Treasury, if given the authority to do so, could revise the current forced-transfer model to help achieve better financial outcomes for participants while still providing plans with administrative relief.

Another way to protect participants' 401(k) plan savings is to ensure that all accounts with balances over $5,000 may remain in the plan environment, even when portions of those balances are from rollovers. Current law addresses the needs of plans and participants by alleviating the burden on plans of maintaining small, inactive accounts, while protecting participants with large balances from forced transfer. Changing the law so that active plans can no longer force out accounts with balances over $5,000 by disregarding rollovers would extend current protections to all accounts of that size, while continuing to provide plans relief from maintaining small, inactive accounts.

Regardless of the size of the balance that is transferred into a forced-transfer IRA, one way to partially mitigate the problems with these accounts is to broaden the investment options for these accounts beyond the limited conservative menu currently available. DOL can take steps to expand the menu of investment options available under its safe harbor regulations to include alternatives similar to those available to automatically enrolled 401(k) plan participants. This would enable forced-transfer IRAs to be better protected from erosion by fees and inflation and provide better long-term outcomes for participants.

Workforce mobility and frequent changes in corporate structure can result in forgotten accounts, missing participants, and, ultimately, lost retirement savings. Participants often have difficulty locating accounts in plans with former employers, especially those employers that have undergone some type of corporate restructuring. SSA holds critical information on accounts left in former employers' plans, but individuals rarely see that information before retirement and may be unaware that the information exists. As time passes, the information can become outdated and, therefore, less useful to participants trying to locate their retirement savings.
Making this information easier to access and available sooner—such as by using the online system for Social Security earnings and benefits statements—can provide participants with a timelier reminder of accounts left in former employers' plans and provide them better opportunities for keeping track of accounts and improving their retirement security.

The lack of a simple way for participants to access information about their retirement accounts is a central problem of our current workplace retirement system. We found that other countries with robust private account-based retirement systems have been grappling with this challenge and have determined that pension registries can provide a meaningful long-term solution. Creating an accurate, easy-to-access, and easy-to-use pension registry in the United States would need to take into account important design challenges, including the scope of the data to be included, the entity that would oversee the registry, and how it would be financed. Designing a registry would also require serious discussions among the key stakeholders, including industry professionals, plan sponsor representatives, consumer representatives, and federal government stakeholders, on what such a system should look like in the American context. However, the creation of a viable, effective registry in the United States could provide vital information regarding retirement security in a single location to millions of American workers.

To better protect the retirement savings of individuals who change jobs, while retaining policies that provide 401(k) plans relief from maintaining small, inactive accounts, Congress should consider amending current law to:

1. Permit the Secretary of Labor and the Secretary of the Treasury to identify and designate alternative default destinations for forced transfers greater than $1,000, should they deem them more advantageous for participants.

2. Repeal the provision that allows plans to disregard amounts attributable to rollovers when determining if a participant's plan balance is small enough to forcibly transfer it.

To ensure that 401(k) plan participants have timely and adequate information to keep track of all their workplace retirement accounts, we recommend that the Social Security Administration's Acting Commissioner make information on potential vested plan benefits more accessible to individuals before retirement. For example, the agency could consolidate information on potential vested benefits, currently sent in the Potential Private Retirement Benefit Information notice, with the information provided in the Social Security earnings and benefits statement.

To prevent forced-transfer IRA balances from decreasing due to the low returns of the investment options currently permitted under the Department of Labor's safe harbor regulation, we recommend that the Secretary of Labor expand the investment alternatives available. For example, the forced-transfer IRA safe harbor regulations could be revised to include the investment options currently permitted under the qualified default investment alternatives regulation applicable to automatic enrollment, and to permit forced-transfer IRA providers to change the investments for IRAs already established.

To ensure that individuals have access to consolidated online information about their multiple 401(k) plan accounts, we recommend that the Secretary of Labor convene a taskforce to consider establishing a national pension registry.
The taskforce could include industry professionals, plan sponsor representatives, consumer representatives, and relevant federal government stakeholders, such as representatives from SSA, PBGC, and IRS, who could identify areas to be addressed through the regulatory process, as well as those that may require legislative action.

We provided a draft of this report to the Department of Labor, the Social Security Administration, the Department of the Treasury, the Internal Revenue Service, the Pension Benefit Guaranty Corporation, the Securities and Exchange Commission, and the Consumer Financial Protection Bureau. DOL, SSA, Treasury and IRS, PBGC, and SEC provided technical comments, which we have incorporated where appropriate. DOL and SSA also provided formal comments, which are reproduced in appendices VIII and IX, respectively. CFPB did not have any comments.

DOL agreed to evaluate the possibility of convening a taskforce to consider the establishment of a national pension registry. We appreciate that DOL shares our concerns and agrees that there is need for a comprehensive solution to problems related to missing and unresponsive participants. DOL stated, however, that it does not have the authority to establish or fund a registry. Specifically, DOL noted that it does not have authority to require reporting of the information needed for a registry or to arrange for the consolidation of retirement account information from multiple agencies. We reached the same conclusion, and for that reason we recommended a taskforce as a starting point for the development of a national pension registry. In fact, our recommendation noted that one role for the taskforce would be identifying areas that could be addressed through the regulatory process and those requiring legislative action. DOL also noted that an expansion of PBGC's missing participant program to include defined contribution plans could address some of these issues. It is our view that there may be a number of policies or programs that could address these problems, and we agree that an expansion of PBGC's program could ultimately be part of a comprehensive solution. Should the taskforce determine that the most appropriate option or options require additional authority for DOL or other agencies, such options should be given careful congressional consideration.

DOL disagreed with our recommendation to expand the investment alternatives available under the safe harbor for plan sponsors using forced transfers. While DOL characterized our recommendation as calling for the safe harbor to include qualified default investment alternatives, our recommendation is to "expand the investment options available," and we noted that qualified default investment alternatives could be one option. DOL stated that the limited investments under the safe harbor are appropriate because Congress's intent for the safe harbor was to preserve principal transferred out of plans. In particular, DOL noted that, given the small balances and the inability of absent participants to monitor investments, the current conservative investment options are a more appropriate way to preserve principal. However, as we show in the report on pages 9-13, the current forced-transfer IRA investment options, like money market funds, can protect principal from investment risk, but not from the risk that fees (no matter how reasonable) and inflation will shrink account balances when returns on these small-balance accounts do not keep pace with fees.
Consequently, as our analysis shows and as several forced-transfer IRA providers told us, the reality has been that many forced-transfer IRAs have experienced very large and even complete declines in principal.

Regarding our analysis, DOL stated that the performance information we used to illustrate the effects of low returns on forced-transfer IRAs on pages 9-13 and 20-21 covers too short a period and does not reflect the periodically higher returns earned by money market funds in the more distant past. Our projection in figure 2 (p. 13), showing the effect of returns from a money market investment versus a target date fund investment on a small balance over 30 years, used 10-year mean returns for these investments. Given that the safe harbor for these accounts was issued 10 years ago, in 2004, we believe a 10-year average is more appropriate and accurately reflects the returns earned. Moreover, using a longer time period does not materially change our conclusions: a similar calculation using a 15-year mean return shows that these forced-transfer IRA account balances would still not be preserved. (See notes under fig. 2, p. 13, and fig. 5, pp. 20-21.) In any case, our recommendation did not aim to eliminate money market funds from investments covered by the safe harbor but to expand the investment alternatives available so that plans and providers that want to operate under the safe harbor have the opportunity to choose the most suitable investment. We stand by our recommendation and encourage DOL to expand the safe harbor to include investment alternatives more likely to preserve principal and even increase it over time. Qualified default investment alternatives could be one option, although certainly not the only one, that could be considered.

SSA disagreed with our recommendation to make information on potential private retirement benefits more accessible to individuals before retirement. SSA was concerned that our recommendation would place the agency in the position of having to respond to public queries about ERISA. SSA noted that the agency has no firsthand legal or operational knowledge of pension plans or the private pension system and should not be in a position of responding to questions of that nature or about ERISA, which it considered to be outside the scope of SSA's mission. We agree with SSA's view about providing information or advice about private pension plans generally. However, as SSA noted, the Notice of Potential Private Retirement Benefit Information (referred to as the "ERISA notice" in SSA's letter) already directs recipients to contact DOL with any questions. We would expect that any changes made to make information on potential vested plan benefits more accessible to individuals before retirement would continue to direct recipients to contact DOL with questions about ERISA policy.

SSA stated that it will seek legal guidance to determine if it is permissible to include in its benefit Statement a general statement encouraging potential beneficiaries to pursue any external pension benefits. As noted in our report on pages 30-31, individuals may be unaware of the availability of information on potential retirement benefits; therefore, we support SSA's initiative to include language in the Statement encouraging potential beneficiaries to pursue external pension benefits. SSA also stated that there is no interface between potential private retirement information and Social Security benefits.
However, as noted in our report on page 31, SSA already stores the potential vested benefits data and provides the information in the Statement. Consolidating the two types of information and making them available every 5 years could provide participants with timely and adequate information to keep track of all of their workplace retirement accounts and could possibly lead to administrative efficiencies. Therefore, it may be appropriate for SSA to explore its concern about its legal authority to expend appropriated funds to disclose, more frequently and in a more consolidated manner, information that it already provides to the relevant beneficiary. We continue to believe that this recommendation could enhance the retirement security of millions of Americans, who would benefit from assistance in keeping track of their multiple accounts from multiple employers and from becoming more knowledgeable about funds they may be due in retirement. Should SSA determine that it has the authority to implement this recommendation, we would strongly urge the agency to act. However, should SSA decide that it does not have the authority to move ahead on this recommendation, we would urge the agency to seek the necessary statutory authority.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, Acting Commissioner of the Social Security Administration, Secretary of the Treasury, Commissioner of Internal Revenue, Acting Director of the Pension Benefit Guaranty Corporation, Chair of the Securities and Exchange Commission, Director of the Consumer Financial Protection Bureau, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

This report examines (1) what happens to forced-transfer individual retirement accounts (IRA) over time; (2) the challenges 401(k) plan participants face keeping track of their retirement savings and what, if anything, is being done to help them; and (3) how other countries address the challenges of inactive accounts.

To understand what happens to forced-transfer IRAs over time, as well as the challenges 401(k) plan participants face keeping track of multiple 401(k) plan accounts, we reviewed relevant data from government, research, and industry sources. Because we found no comprehensive data on the number of IRA accounts opened as a result of forced transfers or other data relevant to their use and management, we collected data from a non-generalizable group of 10 providers of forced-transfer IRAs about their practices and outcomes, including three of the largest IRA providers. There is no comprehensive list of all forced-transfer IRA providers. For this reason, we built a list of forced-transfer IRA providers through interviews with industry professionals, a review of IRA market data, and online searches. Our objective was to create a group that would cover a large share of assets in the forced-transfer IRA market and represent both small and large forced-transfer IRA providers in terms of company size.
We reached out to the largest IRA providers by assets under management, as well as all small forced-transfer IRA providers on our list. We obtained forced-transfer IRA account data from 10 forced-transfer IRA providers that represent this mix of characteristics. We also interviewed plan sponsor groups, 401(k) plan industry groups, research entities, consumer groups, and six federal agencies (Consumer Financial Protection Bureau, Department of Labor, Department of the Treasury, Pension Benefit Guaranty Corporation, Securities and Exchange Commission, and Social Security Administration) about plans' use of forced-transfer IRAs and what challenges individuals and plans face related to inactive accounts and multiple accounts in the United States. We also reviewed research and industry literature, relevant laws and regulations, 2013 ERISA Advisory Council testimony on missing participants, industry whitepapers on a proposed default roll-in system, and submissions to the 2013 Pension Benefit Guaranty Corporation request for information related to a tracking system for distributions from terminating plans.

To understand what happens to forced-transfer IRA accounts over time, we constructed projections of what would happen to an account balance year to year, given certain assumptions. We drew those assumptions from the actual forced-transfer IRA account terms provided by the providers we interviewed and on which we collected data. We used the account opening fee, annual fee, search fees, and rate of investment return to project how long it would take for a $1,000 balance to decrease to zero; the year-by-year calculation is sketched in code below. While the range of average balances transferred into forced-transfer IRAs reported by providers we interviewed was $1,850 to $3,900, we used a $1,000 balance for our projection to make it easier to observe the difference in values over time shown in the projection. Appendix III shows the projected outcome for a $1,000 balance given the fee and return information reported by the forced-transfer IRA providers we contacted.

To determine how forced-transfer IRAs are used, as described in appendix II, we projected the balance of a typical low-wage worker at the end of a typical tenure given certain assumptions about savings, investment returns, and employer matching and vesting policies. Specifically, we wanted to see if the projected balance would fall below the $5,000 cap used to determine eligibility for forced transfers. The projections assume the annual mean wage in 2013 for the service sector occupation with the most workers, specifically "food preparation and serving related occupations," including an annual raise equal to the average annual increase in wage over 15 years (1999-2013). For these assumptions, we referred to the U.S. Bureau of Labor Statistics' (BLS) Occupational Employment Statistics, National Employment and Wage Estimates. The Occupational Employment Statistics survey covers all full-time and part-time wage and salary workers in nonfarm industries. The survey does not cover the self-employed, owners and partners in unincorporated firms, household workers, or unpaid family workers. We assumed the 2014 median tenure for employed workers in the food preparation- and serving-related occupations, according to BLS data. We also assumed that employer contributions, when there were any, were made concurrently with employee contributions, rather than on a separate periodic or annual basis.
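As an illustration, the sketch below applies that year-by-year balance calculation in Python. The fee-timing assumption (fees deducted at the end of each year, after returns are credited) is ours, chosen because it reproduces the $2,708 outcome for the median account terms discussed with appendix III; the 0.5 percent rate in the second example is a hypothetical low return, and search and transaction fees are omitted.

    # Minimal sketch of the year-by-year balance projection described above.
    # Assumes fees are deducted at year-end, after returns are credited;
    # search and transaction fees are omitted.

    def project_balance(start, setup_fee, annual_fee, annual_return, years):
        balance = start - setup_fee
        for _ in range(years):
            balance = balance * (1 + annual_return) - annual_fee
            if balance <= 0:  # fees can exhaust a low-return account
                return 0.0
        return balance

    # Median forced-transfer IRA fees with the 6.30 percent average target
    # date fund return grow a $1,000 balance to about $2,708 over 30 years.
    print(round(project_balance(1_000, 6.75, 42.00, 0.063, 30)))  # 2708

    # With a hypothetical 0.5 percent return, the same fees exhaust the
    # balance in roughly 26 years, so the projection returns 0.
    print(round(project_balance(1_000, 6.75, 42.00, 0.005, 30)))  # 0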
The projected savings of this low-wage worker also reflect a 6.3 percent investment return, which is the geometric mean of 10-year returns for all target date funds, according to our analysis of data from Morningstar.com. Target date funds are the most common default investment for individuals automatically enrolled into 401(k) plans, according to the Plan Sponsor Council of America's (PSCA) 55th Annual Survey of Profit Sharing and 401(k) Plans, which reflects the 2011 plan experience. We used optimistic, moderate, and pessimistic assumptions to project vested balances (see appendix IV for additional details on our assumptions).

To estimate the number and the value of accounts that could potentially be—but were not already—transferred to forced-transfer IRAs, we collected Social Security Administration (SSA) data from the form 8955-SSA. Data on the form 8955-SSA include deferred vested benefits in all defined contribution plans, including 401(k) plans, as well as in defined benefit plans, which are not subject to forced transfers. We assessed the reliability of the data and found that they met our standards, given our use of the data. We previously reported that data from the form 8955-SSA on potential private sector pension benefits retirees may be owed by former employers are not always updated or verified over time and may not reflect later distributions from plans, such as rollovers to a new plan or cash-outs.

We also asked PLANSPONSOR.com to include questions about plan sponsors' use of forced transfers in its newsletter, which is distributed to online subscribers. Respondents to the query included 14 plan sponsors and 4 third-party administrators/record keepers. To assess the reliability of the data we analyzed, we reviewed IRA market data and interviewed IRA providers familiar with forced-transfer IRAs. We determined that these data were sufficiently reliable for the purposes of this report.

To better understand forced-transfer IRAs, as well as the challenges people face in keeping track of multiple 401(k) plan accounts, we also interviewed plan sponsor groups, 401(k) plan industry groups, research entities, consumer groups, and six federal agencies (Department of Labor, Department of the Treasury, Social Security Administration, Pension Benefit Guaranty Corporation (PBGC), Securities and Exchange Commission, and Consumer Financial Protection Bureau) about plans' use of forced-transfer IRAs and what challenges individuals and plans face related to multiple accounts and inactive accounts in the United States. We also reviewed research and industry literature, relevant laws and regulations, 2013 Employee Retirement Income Security Act Advisory Council testimony on missing participants, industry whitepapers on a proposed default roll-in system, and submissions to the 2013 PBGC request for information related to a tracking system for distributions from terminating plans.

To examine how other countries are addressing challenges of inactive accounts, we selected six countries to study. We considered countries with extensive workplace retirement systems, whose populations might face challenges similar to those of U.S. participants. To make our selections, we reviewed publicly available research and interviewed researchers, consumer groups, industry groups, and government agencies.
We considered the extent to which countries appeared to have implemented innovative policies to help individuals keep track of their retirement savings accounts over their careers, reduce the number of forgotten or lost accounts, make such accounts easier to find, and improve outcomes for those with lost accounts. We also considered how recently legislated or implemented solutions were adopted, given the increasingly powerful role information technology can play in connecting individuals with information. On the basis of this initial review, we selected six countries—Australia, Belgium, Denmark, the Netherlands, Switzerland, and the United Kingdom—that could potentially provide lessons for the United States. We interviewed government officials and industry representatives from all the selected countries. We did not conduct independent legal analyses to verify information provided about the laws or regulations in the countries selected for this study. Instead, we relied on appropriate secondary sources, interviews, and other sources to support our work. We submitted key report excerpts to agency officials in each country for their review and verification, and we incorporated their technical corrections as necessary.

We conducted this performance audit from May 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Active plans force out separated participants primarily to reduce plan costs, administrative burden, and liability. In response to a poll conducted by PLANSPONSOR.com through its newsletter, some respondents (made up of both plans and third-party administrators) indicated that plans chose to use forced transfers for balances of $5,000 or less because they wanted to reduce the costs of having additional participants with small accounts; specifically, plans pay fees based on the total number of participants or on the average plan balance. The administrative burden of maintaining inactive accounts is another incentive for plans to force out participants, absent participant instruction, into forced-transfer IRAs. Other respondents to PLANSPONSOR.com's query reported that they used forced transfers because they wanted to reduce the complexity and administrative responsibilities associated with locating separated, non-cashed-out participants. Also, small plans may wish to avoid the additional disclosure requirements and expenses associated with separated employees. In addition, active plans may opt to force out separated participants with eligible balances to reduce the plans' legal liability related to those individuals. Lastly, plans also use forced-transfer IRAs to help reduce their ongoing responsibility with regard to uncashed checks.

When transferring an account into a forced-transfer IRA, a plan must first notify the participant of its intention to forcibly transfer the account and explain that, if the participant does not instruct otherwise, the account will be transferred into a forced-transfer IRA. An example of such a notice is shown in appendix V. See figure 12 for an example of how the forced-transfer IRA process works for active 401(k) plans.
Although a plan must try to locate a missing participant before seeking to force the participant out of the plan, actually forcing such a participant out of the plan and initiating the forced transfer to an IRA does not require any additional last-ditch efforts to locate the participant. For example, a plan might get back some returned plan communications mailed to a separated participant and search unsuccessfully for an updated address. Later, when that plan sweeps out the balances of separated participants, the plan is not required to search again for the missing participant. Instead, efforts to locate the missing participant are at the discretion of the provider and generally at the expense of the individual participant's balance.

One industry survey shows that about half of active 401(k) plans force out separated participants with balances of $1,000 to $5,000. We collected forced-transfer IRA account data from 10 IRA providers that have, together, opened more than 1.8 million forced-transfer IRA accounts totaling $3.4 billion in retirement savings, as of 2013. One of the largest forced-transfer IRA providers has projected that more than 600,000 new forced-transfer IRAs could be created each year, given attrition rates, the percentage of vested balances under $5,000, and the rate of non-responsiveness among separating participants faced with making a distribution choice. Based on that estimate, and assuming an average account balance of $2,500 (half of the $5,000 cap), a total of $1.5 billion (600,000 accounts times $2,500) would be transferred into these accounts each year.

Data provided by SSA are consistent with those estimates. From 2004 to 2013, separated participants left more than 16 million accounts of $5,000 or less in workplace retirement plans, with an aggregate value of $8.5 billion. Those data reflect both defined contribution and defined benefit plans, but even if only a portion of the accounts are in defined contribution plans, the data suggest that there are millions of accounts and billions in savings that could be transferred to IRAs if those plans choose to retroactively force out eligible accounts. Although the plans reflected in SSA's data had not yet forcibly transferred these small accounts, the defined contribution plans may still do so. For example, a plan may choose to sweep out eligible accounts once a year or on some other periodic basis.

Plans' use of forced-transfer IRAs is also increasing. Some forced-transfer IRA providers have seen the number of new forced-transfer IRAs increase each year. In addition, the largest of the specialty providers we interviewed said that the number of new forced-transfer IRAs it administered increased nearly 300 percent over 5 years, from 26,011 new accounts in 2008 to 103,229 in 2012. They expect that upward trend to continue. Industry professionals also said that the wider use of automatic enrollment has the potential to result in greater use of forced-transfer IRAs, as participants who are relatively unengaged, and thus less likely to make a choice about where to transfer their savings, are forced out by plans when they separate. Finally, some plans do not yet force out participants because their plan documents do not include the provisions, required before a plan may use forced transfers, for forcing participants out and transferring their eligible 401(k) plan accounts into forced-transfer IRAs. The PSCA survey of 401(k) plans stated that about 40 percent of plans do not currently force out participants with balances of $1,000 to $5,000.
If these plans begin to use forced transfers, they can force out participants with eligible balances going forward and also go back and sweep out participants who left small accounts years ago. Thus, anyone with a small balance left in a past employer's 401(k) plan could find themselves notified of the plan's intention to force them out—no matter how long ago their separation—and of a potential transfer to a forced-transfer IRA.

Several forced-transfer IRA providers we spoke with said that forced transfers from terminating plans represent a small part of their forced-transfer IRA business. For example, one provider said that terminated plan transfers constitute 9 percent of the provider's forced-transfer IRAs. Some providers do not offer forced-transfer IRAs to terminating plans. One of the largest providers told us that it offers forced-transfer IRAs as an accommodation to ongoing plan sponsor clients, but typically does not offer them if the plan is terminating. In some cases, forced-transfer IRA providers will provide the accounts for terminating plans that cannot secure a contract with a larger service provider. Although terminating plans make limited use of forced-transfer IRAs, they are subject to prescriptive requirements for notifying participants prior to the forced transfer of balances, which include using certified mail, reviewing other employee benefit plan records for more up-to-date information, and contacting the designated beneficiary to get more accurate contact information. As a result, participants in terminating plans are more likely to have an opportunity to select a distribution destination other than a forced-transfer IRA or individual retirement annuity.

Through interviews, data requests, and web research, we collected forced-transfer IRA terms for 10 forced-transfer IRA providers. Almost all providers offer varying account terms for different forced-transfer IRAs. In all, there were 19 different account terms from the 10 providers for which we collected data. We collected data on account opening fees, initial address search fees, ongoing account fees, ongoing address search fees, and investment returns. Some forced-transfer IRA providers also charge fees for certain transactions, including distributions and account closings, but we did not incorporate them into our projections, which show the effect of the account terms if the account holder takes no action and the balance remains in the forced-transfer IRA. While not all forced-transfer IRA terms result in the balance decreasing to $0 over 30 years, the growth of those account balances is less than would have resulted had the funds been invested in a typical target date fund. In contrast to the projected outcomes shown in table 7, the projected balance of $1,000 in an account with a $6.75 set-up fee, a $42 annual fee (the median fees among the combinations we reviewed), and a 6.30 percent return—the average target date fund return—would be $2,708 after 30 years, or growth of 173 percent.
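The arithmetic behind these projections can be sketched in a few lines. The code below is our illustration, not the report's actual model; the function and parameter names are hypothetical, and results can differ slightly depending on when fees are assessed within each year.

```python
def project_balance(start_balance, years, annual_return, setup_fee, annual_fee):
    """Compound a balance under a one-time set-up fee, a flat annual fee,
    and a constant annual investment return."""
    balance = start_balance - setup_fee       # set-up fee charged at opening
    for _ in range(years):
        balance = balance * (1 + annual_return) - annual_fee
        if balance <= 0:                      # fees can exhaust a small balance
            return 0.0
    return balance

# Median forced-transfer IRA fees paired with the average target date return:
print(round(project_balance(1000, 30, 0.0630, 6.75, 42.00)))  # about 2,709
# The same fees with a low, principal-preserving return (an illustrative
# 0.25 percent) draw the balance down to nothing well before 30 years:
print(round(project_balance(1000, 30, 0.0025, 6.75, 42.00)))  # 0
```

Applying the median fee terms with the 6.30 percent target date return reproduces the report's roughly $2,708 figure within a dollar, while the same fee structure paired with a low return shows how a small balance can be exhausted entirely.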
Appendix IV: Projected Vested Retirement Savings of a Low-Wage 401(k) Plan Participant Given Pessimistic, Moderate, and Optimistic Assumptions

[Table comparing projected vested savings under immediate vesting, 3-year cliff vesting, 5-year graduated vesting, and 6-year graduated vesting; one projected value shown in the source is $3,185.]

Under 3-year cliff vesting, the employer's contribution and investment returns thereon are 100 percent vested after 3 years of service. Under graduated vesting, the employer's contribution and investment returns thereon are partially vested after each year of service, depending on how long the graduated vesting period is. In our projections, we used, for a 5-year graduated vesting schedule, 0, 25, 50, 75, and 100 percent at the ends of years 1 through 5; and, for a 6-year graduated vesting schedule, 0, 20, 40, 60, 80, and 100 percent at the ends of years 1 through 6.
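To make the schedules concrete, the sketch below encodes them as functions from completed years of service to the vested share of employer contributions. This is our illustration under the schedule readings above; the names are hypothetical, and a participant's own contributions are always 100 percent vested regardless of schedule.

```python
# Vested share of employer contributions by completed years of service,
# per the schedules used in the appendix IV projections (our reading).
SCHEDULES = {
    "immediate":        lambda yrs: 1.0,
    "3-year cliff":     lambda yrs: 1.0 if yrs >= 3 else 0.0,
    "5-year graduated": lambda yrs: min(max(yrs - 1, 0) * 0.25, 1.0),
    "6-year graduated": lambda yrs: min(max(yrs - 1, 0) * 0.20, 1.0),
}

def vested_employer_balance(employer_balance, years_of_service, schedule):
    """Portion of the employer-funded balance a separating participant keeps."""
    return employer_balance * SCHEDULES[schedule](years_of_service)

# A participant separating after 3 years with $2,000 of employer money keeps:
for name in SCHEDULES:
    print(f"{name}: ${vested_employer_balance(2000, 3, name):,.0f}")
# immediate: $2,000; 3-year cliff: $2,000; 5-year graduated: $1,000;
# 6-year graduated: $800
```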
Appendix VI: Selected Reporting and Disclosure Requirements at Participant Separation and Certain Plan Events

The source table lists the required forms and notices, the receiving entity for each, and a summary description of each form's contents. At participant separation, they include:

- Form 1099-R, Distributions From Pensions, Annuities, Retirement or Profit-Sharing Plans, IRAs, Insurance Contracts, etc. (26 U.S.C. § 6047(d); 26 C.F.R. § 31.3405(c)-1).
- 402(f) Special Tax Notice (or Rollover Notice) (26 U.S.C. § 402(f)(1)), provided to the participant. Explains the tax implications of the different distribution options, including an explanation of the rollover rules, the special tax treatment for cash-outs (also called lump-sum distributions), and the mandatory withholding of 20 percent of distributions (including those that result in an indirect rollover). Also notifies the participant that, absent any participant instructions, a distribution will be paid to an individual retirement plan.
- Notice of Right to Defer Distribution (or Participant Consent Notice) (26 C.F.R. § 1.411(a)-11(c) (2012)), provided to the participant and required only for balances over $5,000. Notifies the participant of the right to defer receipt of an immediately distributable benefit. To obtain the participant's consent to a distribution in excess of $5,000 prior to the plan's normal retirement age (NRA), the participant must be given a description of the plan's distribution options and be informed of the right to defer distribution and of the consequences of failing to defer.
- Form 8955-SSA, Annual Registration Statement Identifying Separated Participants with Deferred Vested Benefits (26 U.S.C. § 6057(a)), together with an individual statement that includes the same information as the 8955-SSA: plan name, name and address of the plan administrator, name of the participant, and the nature, amount, and form of the deferred vested benefit.
- A form that reports contributions, rollovers, transfers, and recharacterizations, as well as the fair market value of the account and whether a required minimum distribution is required.

At plan termination, they also include:

- An optional request to the IRS for a determination on the plan's qualification status at the time of the plan's termination, together with a notice to employees, participants, and beneficiaries that an application for determination is being submitted to the IRS and of their right to comment on the plan. The notice must be posted or sent (electronic media permissible) before the application is submitted to the IRS—between 10 and 24 days of the application date.
- A notice to participants and beneficiaries of the plan's termination and of the distribution options and procedures to make an election. In addition, the notice must provide information about the account balance; explain, if known, what fees, if any, will be paid from the participant's or beneficiary's retirement plan; and provide the name, address, and telephone number of the individual retirement plan provider, if known, and of the plan administrator or other fiduciary from whom information about the termination may be obtained. See 29 C.F.R. § 2550.404a-3. The notice is given during the winding-up process of the plan termination, and participants and beneficiaries have 30 days from receipt of the notice to elect a form of distribution.
- Final Form 5500, Annual Return/Report of Employee Benefit Plan (including any applicable schedules) (26 U.S.C. § 6058(a); 29 U.S.C. § 1024), filed with the Department of Labor (DOL), IRS, and Pension Benefit Guaranty Corporation (PBGC). Indicates that all assets under the plan have been distributed when the "final return/report" box on the Form 5500 is checked, and reports that all assets were distributed and the current value of assets at the date of distribution via Schedule H (for plans with 100 or more participants) or Schedule I (for plans with fewer than 100 participants).

At plan mergers, consolidations, spinoffs, or transfers of assets or liabilities from one plan to another, they also include:

- A form giving notice of certain plan mergers, consolidations, spinoffs, or transfers of assets or liabilities from one plan to another. Each plan with a separate EIN and plan number involved in a merger or transfer of assets or liabilities must file; for spinoffs, only the plan in existence before the spinoff must file. The form must be filed at least 30 days prior to the merger, consolidation, spinoff, or transfer.
- Final Form 5500, as above. Schedules H and I also provide the net value of all assets transferred to and from the plan, including those resulting from mergers and spinoffs, and include the new plan sponsor's name and address.
The Internal Revenue Code (IRC) requires that participants with an eligible rollover distribution have the option to roll their distributions into an IRA or another employer's tax-qualified plan in the form of a direct rollover. 26 U.S.C. § 401(a)(31)(A). The notice must also state that "fees and expenses (including administrative or investment-related fees) outside the plan may be different from fees and expenses that apply to the participant's account and contact information for obtaining information on such fees." 73 Fed. Reg. The Economic Growth and Tax Relief Reconciliation Act of 2001 added the notice provision and required that the plan administrator notify the distributee in writing (either separately or as part of the § 402(f) notice). Pub. L. No. 107-16, § 657, 115 Stat. 38, 135 (codified at 26 U.S.C. §§ 401(a)(31)(B) and 402(f)(1)(A)). In addition, to meet the conditions of DOL's safe harbor and, therefore, be deemed to have satisfied their fiduciary duties with regard to mandatory distributions, plans must provide participants with a summary plan description, or a summary of material modifications, meeting certain requirements. Specifically, it must describe the plan's forced-transfer IRA provisions (including an explanation that the forced-transfer IRA will be invested in an investment product designed to preserve principal and provide a reasonable rate of return and liquidity), a description of how fees and expenses attendant to the individual retirement plan will be allocated (i.e., the extent to which expenses will be borne by the account holder alone or shared with the distributing plan or plan sponsor), and the name, address, and phone number of a plan contact (to the extent not otherwise provided). 29 C.F.R. § 2550.404a-2.

Australia

At a glance: Since 2005, individuals in Australia have been able to select a plan (super) of their choosing to which the employers they have throughout their career will contribute. However, workers not actively choosing a plan may accumulate accounts in multiple plans selected by their employers. According to Australian Treasury officials, many Australians lose track of their accounts, especially when they change jobs, names, and addresses. Small balances may be eaten away by fees, necessitating forced transfers to preserve their value.

Pension registry: The Australian Tax Office (ATO) has established an online tool called SuperSeeker that individuals can use to find lost retirement accounts, via the governmental portal myGov. A smartphone application is also available for accessing the information. Information provided to participants can be used for retirement planning purposes, including consolidation, in order to improve participant retirement security. However, SuperSeeker does not perform analytical tasks, such as showing retirement outcomes under various scenarios, according to government officials we interviewed. Participants who find lost accounts upon searching for them are able to consolidate them online in a plan of their choice, generally within 3 working days. SuperSeeker now allows paperless "point and click" consolidation. According to ATO, nearly 155,000 accounts were consolidated in 2013-14, with a total value of about AUD 765 million. In addition, the number of lost accounts went down by 30 percent between June 2013 and June 2014. The pension registry is primarily financed through a tax on the superannuation sector and, in some cases, such as funding the letter campaign to raise awareness, from general revenue.
The tax has fluctuated between AUD 2.4 million in 2002 and AUD 7.3 million in 2011, according to the ATO.

Belgium

At a glance: Officials told us participants changing jobs can leave their pension account behind or roll it over (1) to the plan of the new employer, (2) to a "welcome structure" for outgoing workers, often taking the form of group insurance, or (3) to a qualified individual insurance contract. Sectoral industry plans, negotiated in collective bargaining agreements, allow participants who change jobs but stay in the same industry to keep one retirement account, according to officials. With defined benefit plans, vested benefits in dormant accounts are frozen (i.e., not subject to further cost-of-living or wage increases), officials told us, whereas with defined contribution plans, separated participants' dormant accounts receive the same return as active accounts, but the minimum return obligation that the plan sponsor must meet, currently set at 3.25 percent of account balances per year, is frozen.

Pension registry: The pension registry in Belgium has two components, according to officials: a database of national pensions (similar to Social Security) and one covering workplace retirement accounts, which includes both active and inactive accounts. The pension registry does not have information on personal retirement savings. Since the enactment of legislation in 2006, the Belgian government has been collecting data on workplace accounts for private sector participants and the self-employed, according to officials. The pension registry extracts some information from existing databases (such as the registry of individuals and employers) and data from service providers, officials said. The registry stores some of the information in its database. From 2016, the new pension registry will take over the provision of information on inactive accounts, as indicated in the law adopted in May 2014, according to Belgian officials, and workers with inactive accounts will no longer receive statements from plan sponsors or pension institutions but will be able to consult the registry online. Officials also told us the registry will help the government gather up-to-date information on retirement plans. The government finances the pension registry from general revenue, officials said. Once fully functional, the annual cost of running the registry will be around 3.5 million euro, according to Belgian pension registry officials we interviewed.

Denmark

At a glance: Multi-employer industry plans (one plan covers all workers in an industry) allow participants who change jobs but stay in the same industry to use just one retirement account. When individuals change industries, the account remains inactive in the plan unless the participant takes action to roll it into the new industry plan, according to Danish officials. Plans sign agreements requiring them to accept transfers requested by participants.

Pension registry: The Danish Insurance Association's pension registry, called PensionsInfo, collects and consolidates retirement savings information from plans and insurers when a participant logs in, according to materials provided by Danish Insurance Association representatives we interviewed. It stores only the government-issued identification numbers of participants in each plan. Individuals can view contact information for each plan or insurer, which they can use to consolidate accounts.
It is voluntary for providers to allow access to their records, but virtually all do, including government authorities, which provide information on national (Social-Security-like) retirement benefits. PensionsInfo provides current account balances and projected future distribution amounts for public, workplace, and private retirement benefits under various scenarios, for example, comparing lump-sum withdrawals, phased withdrawals, and whole life annuities at different retirement ages. Inactive accounts can be flagged for participants in a pop-up window recommending consolidation, according to officials at the Danish Insurance Association. Participants can print or save the consolidated account information for their records, share it with a personal financial advisor, or use it in conjunction with retirement planning software designed to work with it. Insurers and plans voluntarily pay for the pension registry. The fee paid by each is calculated on the basis of the number of participants requesting to view data from them. Recent data from the European Actuarial Consultative Group indicate that the number of unique visitors to the registry increased from 512,218 in 2010 to 742,426 in 2011, reaching 1,132,488 in 2012. The annual cost of maintaining the pension registry is estimated at 1.5 million euro.

Netherlands

At a glance: Multi-employer industry plans, in which a single plan generally covers employees in one industry, allow participants who change jobs but stay in the same industry to keep one pension account. According to an Organization for Economic Co-operation and Development working paper, about three-quarters of workers belong to industry-wide multi-employer plans.

Pension registry: The Netherlands launched its online pension registry in January 2011. The decision to establish the registry was part of the 2006 Pension Act. Participants can see up-to-date pension information on active accounts and also on inactive accounts associated with previous employers, including account balances, account numbers, and plan contact information. Active account information has to be mailed to participants on an annual basis and inactive account information every 5 years, according to Dutch officials. Recent data from the European Actuarial Consultative Group show that the number of unique visitors to the registry was 1,500,000 in 2011 and 1,100,000 in 2012. Pension providers, not the government, finance the pension registry at an annual cost of 2.3 million euro, or 0.49 euro per active participant. Officials also said the cost of developing the pension registry was split between the pension fund industry and the insurance schemes industry, based on their share of the workplace retirement plan market. It took about 3 years and cost about 10 million euro to develop the new pension registry, according to Dutch government officials we interviewed.

Switzerland

At a glance: The Swiss system is an example of how a retirement account can follow a participant from job to job. Participants are required to transfer inactive workplace retirement accounts left with previous employers, according to Swiss government officials we interviewed. There are a variety of defined benefit and defined contribution plan types in Switzerland. As accounts move from plan to plan, conversion rules established in law govern the value of transferred assets.

Pension registry: There is no pension registry providing consolidated, current, online workplace retirement account information in Switzerland, according to Swiss government officials.
Swiss officials said that when participants need information on a workplace retirement account started at a past employer, they refer to information provided by the employer or plan or contact the Guarantee Fund, which provides insolvency insurance to plans in Switzerland. Participants can contact officials at the Guarantee Fund by phone or e-mail to identify accounts that were forced out of their plan because they were inactive, according to Swiss officials and the Guarantee Fund's 2013 annual report. Swiss officials said participants can use information on their inactive accounts to roll them over to the plan of their new employer. They said participants are required by Swiss law to transfer their retirement account when they change jobs, and because enrollment is generally mandatory, officials said employers will often help employees roll their money from their old plan to their new plan. Individuals without a new plan are required to purchase a vested benefit account at a bank or insurance company.

United Kingdom

At a glance: The Parliament of the United Kingdom adopted the Pensions Act 2014, which will transfer small inactive workplace retirement accounts to an individual's active plan. Before the legislation, plans had ultimate discretion over whether or not to accept a transfer, according to a U.K. government report, and the onus for pursuing a transfer rested on individuals. Regulations will now be made to automatically transfer workplace retirement benefits, according to a U.K. official, preventing a plan from declining to accept such transfers. That process generally applies to the defined contribution plans in which most U.K. participants are automatically enrolled. Before the Pensions Act 2014, small, inactive retirement accounts were being reduced by fees and managed separately, in an inefficient manner.

Pension registry: There is no pension registry in the United Kingdom providing direct access to consolidated retirement account information online, according to U.K. government officials. Individuals get information on lost accounts through a government service called the Pensions Tracing Service. Participants use the service to trace lost workplace or private retirement accounts based on information the participant supplies. The Pensions Tracing Service requests information from participants on their workplace or personal pension plan, such as names, addresses, dates of employment or participation in the plan, job title, and industry. If the trace is successful, the Pensions Tracing Service provides the current name or contact details of the plan administrator to the individual. Individuals can then use the contact information for the plan administrator to determine their eligibility for retirement benefits and, if eligible, to claim them, according to a U.K. government research report. Participants can access the Pensions Tracing Service online, or by phone or mail. The Pensions Tracing Service is a free service available to the general public in the United Kingdom, provided by the U.K. Department for Work and Pensions.

In addition to the contact named above, Tamara Cross (Assistant Director), Mindy Bowman and Angie Jacobs (Analysts-in-Charge), Ted Leslie, Najeema Washington, and Seyda Wentworth made key contributions to this report. James Bennett, Jennifer Gregory, Kathy Leslie, Frank Todisco, Walter Vance, Kathleen van Gelder, and Craig Winslow also provided support.

Related GAO Products

Private Pensions: Clarity of Required Reports and Disclosures Could Be Improved. GAO-14-92. Washington, D.C.: November 21, 2013.
401(k) Plans: Labor and IRS Could Improve the Rollover Process for Participants. GAO-13-30. Washington, D.C.: March 7, 2013.

Social Security Statements: Observations on SSA's Plans for the Social Security Statement. GAO-11-787T. Washington, D.C.: July 8, 2011.

Defined Contribution Plans: Key Information on Target Date Funds as Default Investments Should Be Provided to Plan Sponsors and Participants. GAO-11-118. Washington, D.C.: January 31, 2011.

Individual Retirement Accounts: Government Actions Could Encourage More Employers to Offer IRAs to Employees. GAO-08-590. Washington, D.C.: June 4, 2008.
Millions of employees change jobs each year and some leave their savings in their former employers' 401(k) plans. If their accounts are small enough and they do not instruct the plan to do otherwise, plans can transfer their savings into an IRA without their consent. GAO was asked to examine implications for 401(k) plan participants of being forced out of plans and into these IRAs. GAO examined: (1) what happens over time to the savings of participants forced out of their plans, (2) the challenges 401(k) plan participants face keeping track of retirement savings in general, and (3) how other countries address similar challenges of inactive accounts. GAO's review included projecting forced-transfer IRA outcomes over time using current fee and return data from 10 providers, and interviews with stakeholders in the United States, Australia, Belgium, Denmark, the Netherlands, Switzerland, and the United Kingdom. When a participant has saved less than $5,000 in a 401(k) plan and changes jobs without indicating what should be done with the money, the plan can transfer the account savings—a forced transfer—into an individual retirement account (IRA). Savings in these IRAs are intended to be preserved by the conservative investments allowed under Department of Labor (DOL) regulations. However, GAO found that because fees outpaced returns in most of the IRAs analyzed, these account balances tended to decrease over time. Without alternatives to forced-transfer IRAs, current law permits billions in participant savings to be poorly invested for the long term. GAO also found that a provision in law allows a plan to disregard previous rollovers when determining if a balance is small enough to force out. For example, a plan can force out a participant with a balance of $20,000 if less than $5,000 is attributable to contributions other than rollover contributions. Some 401(k) plan participants find it difficult to keep track of their savings, particularly when they change jobs, because of challenges with consolidation, communication, and information. First, individuals who accrue multiple accounts over the course of a career may be unable to consolidate their accounts by rolling over savings from one employer's plan to the next. Second, maintaining communication with a former employer's plan can be challenging if companies are restructured and plans are terminated or merged and renamed. Third, key information on lost accounts may be held by different plans, service providers, or government agencies, and participants may not know where to turn for assistance. Although the Social Security Administration provides individuals with information on benefits they may have from former employers' plans, the information is not provided in a consolidated or timely manner that would be useful to recipients. The six countries GAO reviewed address challenges of inactive accounts by using forced transfers that help preserve account value and by providing a variety of tracking tools referred to as pension registries. For example, officials in two countries told GAO that inactive accounts are consolidated there by law, without participant consent, in money-making investment vehicles. In the United Kingdom, officials said, savings are consolidated in a participant's new plan; in Switzerland, such savings are invested together in a single fund. In Australia, small, inactive accounts are held by a federal agency that preserves their real value by regulation until they are claimed.
In addition, GAO found that Australia, the Netherlands, and Denmark have pension registries, not always established by law or regulation, which provide participants with a single source of online information on their new and old retirement accounts. Participants in the United States, in contrast, often lack the information needed to keep track of their accounts. No single agency has responsibility for consolidating retirement account information for participants, and so far, the pension industry has not taken on the task. Without a pension registry for individuals to access current, consolidated retirement account information, the challenges participants face in tracking accounts over time can be expected to continue. GAO recommends that Congress consider (1) amending current law to permit alternative default destinations for plans to use when transferring participant accounts out of plans, and (2) repealing a provision that allows plans to disregard rollovers when identifying balances eligible for transfer to an IRA. Among other things, GAO also recommends that DOL convene a task force to explore the possibility of establishing a national pension registry. DOL and SSA each disagreed with one of GAO's recommendations. GAO maintains the need for all its recommendations.
For the past several decades, the United States has enjoyed relatively inexpensive and plentiful energy supplies, relying primarily on market forces to determine the energy mix that provides the most reliable and least expensive sources of energy—primarily oil, natural gas, and coal. In 1973, oil cost about $15 per barrel (in inflation-adjusted terms) and accounted for 96 percent of the energy used in the transportation sector and 17 percent of the energy used to generate electricity. As shown in figure 2, the 2004 U.S. energy portfolio is similar to the 1973 energy portfolio. In 2004, oil accounted for 98 percent of energy consumed for transportation, and coal and natural gas accounted for about 71 percent of the energy used to generate electricity. Renewable energy—primarily hydropower—remains at 6 percent of U.S. energy consumption. However, since 1973, U.S. crude oil imports have grown from 36 percent of consumption to 66 percent of consumption today, and crude oil prices have jumped, particularly in recent years, to today's $60-per-barrel level. Despite growing dependence on foreign energy sources, DOE's budget authority for renewable, fossil, and nuclear energy R&D dropped from $5.5 billion (in real terms) in fiscal year 1978 to $793 million in fiscal year 2005—a decline of over 85 percent. As shown in figure 3, renewable, fossil, and nuclear energy R&D budget authority each peaked in the late 1970s before falling sharply in the 1980s. Total budget authority for the three energy R&D programs has risen after bottoming out in fiscal year 1998.

DOE's renewable R&D program has focused on ethanol, wind, and solar technologies, making steady incremental progress over the past 29 years in reducing their costs. DOE's goal is for biofuels production in 2030 to replace 30 percent of current gasoline demand, or about 60 billion gallons per year. In 2005, ethanol refiners produced 3.9 billion gallons of ethanol, primarily from corn, that was used (1) as a substitute for methyl tertiary-butyl ether, known as MTBE, which oil refineries have used to oxygenate gasoline and (2) to make E85, a blend of 85 percent ethanol and 15 percent gasoline for use in flex fuel vehicles. To achieve its production goal, DOE is developing additional sources of cellulosic biomass—such as agricultural residues, energy crops, and forest wastes—to minimize adverse effects on food prices. In recent years, DOE's wind program shifted from high-wind sites to low-wind and offshore sites. Low-wind sites are far more plentiful than high-wind sites and are located closer to electricity load centers, which can substantially reduce the cost of connecting to the electricity transmission grid. Low-wind and offshore-wind energy must address design and upfront capital costs to be competitive. DOE's solar R&D program focuses on improving photovoltaic systems, heat and light production, and utility-size solar power plants. DOE is exploring thin-film technologies to reduce the manufacturing costs of photovoltaic cells, which convert sunlight into electricity. Similarly, DOE's solar heating and lighting R&D program is developing technologies that use sunlight for various thermal applications, particularly space heating and cooling. DOE is also working with industry and states to develop utility-size solar power plants to convert the sun's energy into high-temperature heat that is used to generate electricity.
Beginning in the mid-1980s, DOE's fossil energy R&D provided funding through the Clean Coal Technology Program to demonstrate technologies for reducing sulfur dioxide and nitrogen oxide emissions. DOE also has focused on developing and demonstrating advanced integrated gasification combined cycle (IGCC) technologies. More recently, DOE proposed a $1 billion advanced coal-based power plant R&D project called FutureGen—cost-shared between DOE (76 percent) and industry (24 percent)—which will demonstrate how IGCC technology can both reduce harmful emissions and improve efficiency by integrating IGCC with carbon capture and sequestration technologies for the long-term storage of carbon dioxide. According to DOE, FutureGen is designed to be the first "zero-emissions" coal-based power plant and is expected to be operational by 2015.

Beginning in fiscal year 1999, DOE's nuclear energy R&D program shifted from improving safety and efficiency of nuclear power reactors to developing advanced reactor technologies by focusing on (1) the Nuclear Power 2010 initiative, in an effort to stimulate electric power companies to construct and operate new reactors; (2) the Global Nuclear Energy Partnership, or GNEP, to develop and demonstrate technologies for reprocessing spent nuclear fuel that could recover the fuel for reuse, reduce radioactive waste, and minimize proliferation threats; and (3) the Generation IV Nuclear Energy Systems Initiative, or Gen IV, to develop new fourth-generation advanced reactor technologies intended to reduce disposal requirements and manufacture hydrogen by about 2020 to 2030.

Advanced renewable, fossil, and nuclear energy technologies all face key challenges to their deployment into the market. The primary renewable energy technologies with the potential to substantially expand their existing production capacity during the next 25 years are ethanol, a partial substitute for gasoline in transportation, and wind and solar energy technologies for generating electricity. For advanced fossil technologies, the primary challenges are controlling emissions of mercury and carbon dioxide generated by conventional coal-fired plants by using coal gasification technologies, which cost about 20 percent more to construct than conventional coal-fired plants, and demonstrating the technological feasibility of the long-term storage of carbon dioxide captured by a large-scale coal-fired power plant. For advanced nuclear technologies, investors face substantial risk because of nuclear reactors' high capital costs and long construction time frames and uncertainty about the Nuclear Regulatory Commission's (NRC) review of license applications for new reactors.

One of ethanol's biggest challenges is to cost-effectively produce ethanol while diversifying the biomass energy sources so it can grow from its current 3-percent market share. DOE is exploring technologies to use cellulosic biomass from, for example, agricultural residues or fast-growing grasses and trees. In addition, ethanol requires an independent transportation, storage, and distribution infrastructure because its corrosive qualities and water solubility prevent it from using, for example, existing oil pipelines to transport the product from the Midwest to the east or west coasts. As a result, fewer than 1,000 fueling stations nationwide provide E85, compared with 176,000 stations that dispense gasoline. Ethanol also needs to become more cost competitive. Even with the recent spikes in gasoline prices, ethanol producers rely on federal tax incentives to compete. In October 2006, Consumer Reports estimated that drivers paying $2.91 per gallon for E85 actually paid about $3.99 for the energy-equivalent amount of a gallon of gasoline because the distance vehicles traveled per gallon declined by 27 percent.
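The Consumer Reports figure follows from a one-line energy-equivalence calculation. The sketch below is our illustration, not Consumer Reports' methodology; the variable names are hypothetical, and it assumes the 27 percent figure is the reduction in miles per gallon relative to gasoline.

```python
# Gasoline-equivalent price of E85, assuming a 27 percent drop in miles
# per gallon: a driver needs 1 / (1 - 0.27) gallons of E85 to travel as
# far as on one gallon of gasoline.
e85_pump_price = 2.91     # dollars per gallon of E85 (October 2006)
mpg_reduction = 0.27      # fractional loss in fuel economy vs. gasoline

gasoline_equivalent_price = e85_pump_price / (1 - mpg_reduction)
print(f"${gasoline_equivalent_price:.2f}")  # prints $3.99
```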
Finally, congressional earmarks of DOE's biomass R&D funding rose from 14 percent of the fiscal year 2000 funds to 57 percent ($52 million) of the fiscal year 2006 funds, according to a DOE program official.

Both wind and solar technologies have experienced substantial growth in recent years, but both face important challenges for future growth. In particular, wind investors pay substantial upfront capital costs to build a wind farm and connect the farm to the power transmission grid, which can cost $100,000 or more per mile on average, according to DOE officials. Because both wind energy and solar energy are intermittent, utilities have been skeptical about using them, relying instead on large baseload power plants that operate full time and are more accessible to the transmission grid. In contrast, wind turbines operate the equivalent of less than 40 percent of the hours in a year because of the intermittency of wind. In addition, the electricity that is generated must be immediately used or transmitted to the grid because it cannot be cost-effectively stored. For the wind industry to expand from high-wind sites to low-wind and offshore locations, DOE also needs to develop bigger wind turbines with longer blades mounted on taller towers, requiring improved designs and materials for blade and drive train components. In addition, offshore wind development faces such technical challenges as understanding the effects of wave and ocean current loads on the base of the structures. The wind industry also faces concerns about environmental impacts, including bird and bat fatalities caused by wind turbines. Finally, investors interested in developing wind energy have relied on the federal production tax credit as a financial incentive to construct wind farms. The credit has periodically expired, resulting in a boom-and-bust cycle for the wind power industry.

Solar energy also faces the challenge of developing inexpensive photovoltaic solar cells. As a result of R&D efforts, photovoltaic cells, consisting mostly of crystalline-silicon materials, are becoming increasingly efficient, converting nearly 40 percent of sunlight into electricity for some applications, but the cells are expensive for the typical homeowner. DOE is exploring how to reduce manufacturing costs through thin-film technologies, but at a cost of efficiency. DOE's challenge is to increase efficiency and reduce costs in the thin-film technologies.

Reducing emissions from coal-fired power plants continues to be the priority for DOE's fossil energy R&D. Having significantly reduced sulfur dioxide and nitrogen oxide emissions, DOE is now focusing on reducing mercury and carbon dioxide emissions. Gasification technologies, such as the IGCC configuration, hold the most promise, but at a 20 percent higher cost than conventional coal-fired power plants. To address global warming concerns, DOE's challenge is to reduce the cost of gasification technologies and demonstrate the large-scale sequestration and long-term storage of carbon dioxide.

A significant obstacle facing nuclear power is the high upfront capital costs.
No electric power company has applied for an NRC license to construct a new nuclear power plant in almost 30 years, in large part because of a long legacy of cost overruns, schedule delays, and cancellations. Industry officials report that new nuclear power plants can cost between $1.5 billion and $4 billion to construct, assuming no problems in the licensing and construction process, with additional expenses for connecting the plant to transmission lines. In addition, investors have grown concerned about the disposal of a legacy of spent nuclear fuel. While NRC has revised its licensing process to address past concerns over licensing delays and added costs because of requirements to retrofit plants, investors are uncertain of the effectiveness of the revised regulations. Recently, the Massachusetts Institute of Technology (MIT) and the University of Chicago issued studies comparing nuclear power's costs with those of other forms of generating electricity. Both studies concluded that, assuming no unexpected costs or delays in licensing and construction, nuclear power is only marginally competitive with conventional coal and natural gas and, even then, only if the nuclear power industry significantly reduces anticipated construction times. MIT also reported, however, that if carbon were to be regulated, nuclear energy would be much more competitive with coal and natural gas.

While federal R&D has declined in recent years, the states have enacted legislation or developed initiatives to stimulate the deployment of renewable energy technologies, primarily to address their growing energy demands, adverse environmental impacts, and their concern for a reliable, diversified energy portfolio. As of 2006, (1) 39 states have established interconnection and net metering rules that require electric power companies to connect renewable energy sources to the power transmission grid and credit, for example, the monthly electricity bills of residents with solar-electric systems when they generate more power than they use; (2) 22 states have established renewable portfolio standards requiring or encouraging that a fixed percentage of the state's electricity be generated from renewable energy sources; and (3) 45 states offer various tax credits, grants, or loans. For example, renewable energy accounts for 3 percent of Texas' electricity consumption because Texas enacted legislation in 1999 and 2005 that created a renewable portfolio standard requiring electric utilities to meet renewable energy capacity standards.

We identified six countries—Brazil, Denmark, Germany, Japan, Spain, and France—that illustrate a range of financial initiatives and mandates to stimulate the development and deployment of advanced renewable, fossil, and nuclear energy technologies. Through mandates and incentives, Brazil initiated an ethanol program in 1975 that eventually led to an end to Brazil's dependence on imported oil. Denmark focused on wind energy and, in 2005, derived 19 percent of its electricity from wind energy. Germany began a more diversified renewable energy approach in 2000 and has a goal to increase the share of renewable energy consumption to at least 50 percent by 2050. Japan subsidized the cost of residential solar systems for 10 years, resulting in the installation of solar systems on over 253,000 homes and the price of residential solar systems falling by more than half.
Spain hopes to lead the way for European Union investments in an IGCC coal power plant, improving efficiency and generating fewer emissions than conventional coal-fired plants. Finally, France has led Europe in nuclear energy and plans to deploy new nuclear power plants within the next decade. The United States remains the world’s largest oil consumer. In the wake of increasing energy costs with the attendant threat to national security and the growing recognition that fossil fuel consumption is contributing to global climate change, the nation is once again assessing how best to stimulate the deployment of advanced energy technologies. However, it is unlikely that DOE’s current level of R&D funding or the nation’s current energy policies will be sufficient to deploy advanced energy technologies in the next 25 years. Without sustained high energy prices or concerted, high-profile federal government leadership, U.S. consumers are unlikely to change their energy-use patterns, and the United States will continue to rely upon its current energy portfolio. Specifically, government leadership is needed to overcome technological and market barriers to deploying advanced energy technologies that would reduce the nation’s vulnerability to oil supply disruptions and adverse environmental effects of burning fossil fuels. To meet the nation’s rising demand for energy, reduce its economic and national security vulnerability to crude oil supply disruptions, and minimize adverse environmental effects, our December 2006 report recommended that the Congress consider further stimulating the development and deployment of a diversified energy portfolio by focusing R&D funding on advanced energy technologies. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Richard Cheston, Robert Sanchez, and Kerry Lipsitz made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For decades, the nation has benefited from relatively inexpensive energy, but it has also grown reliant on fossil fuels—oil, natural gas, and coal. Periodic imported oil supply disruptions have led to price shocks, yet the nation's dependence on imported energy is greater than ever. Fossil fuel emissions of carbon dioxide—linked to global warming—have also raised environmental concerns. The Department of Energy (DOE) has funded research and development (R&D) on advanced renewable, fossil, and nuclear energy technologies. GAO's report entitled DOE: Key Challenges Remain for Developing and Deploying Advanced Energy Technologies to Meet Future Needs examined the (1) R&D funding trends and strategies for developing advanced energy technologies; (2) key barriers to developing and deploying advanced energy technologies; and (3) efforts of the states and six selected countries to develop and deploy advanced energy technologies. GAO reviewed DOE R&D budget data and strategic plans and obtained the views of experts in DOE, industry, and academia, as well as state and foreign government officials. DOE's budget authority for energy R&D, when adjusted for inflation, fell 85 percent from its peak in fiscal year 1978 to fiscal year 2005. Energy R&D funding in the late 1970s was robust in response to constricted oil supplies and an ensuing energy crisis, but R&D funding plunged when oil prices returned to their historic levels in the mid-1980s. DOE's R&D efforts have resulted in steady incremental progress in reducing costs for renewable energy, reducing harmful emissions of coal-fired power plants, and improving safety and efficiency for nuclear energy. Nevertheless, the nation's dependence on conventional fossil fuels remains virtually the same as 30 years ago. Further development and deployment of advanced renewable, fossil, and nuclear energy technologies face several key challenges.

High Capital Costs: The high capital costs of advanced energy technologies worry risk-averse investors. For example, solar cells made to convert solar energy into electricity for homeowners and businesses have typically been too expensive to compete with fossil fuels. DOE's R&D efforts include developing new materials for solar cells that could decrease manufacturing costs.

Environmental Concerns: Advanced energy technologies need to address harmful environmental effects, including bird and bat fatalities caused by wind turbines, carbon dioxide and mercury emissions from coal-fired power plants, and spent nuclear fuel from nuclear power reactors.

Technology-Specific Challenges: Challenges that are unique to each technology also create barriers to development and deployment. Ethanol, for example, will need to be manufactured with more cost-competitive technologies using agricultural residues or other cellulosic materials in order to expand beyond corn. Other challenges include developing new wind technologies to expand into low-wind and offshore locations; developing advanced coal gasification technologies to further reduce harmful emissions and high capital costs; and working with the nuclear power industry to deploy a new generation of reactors and develop the next generation to enable reactors to reprocess highly radioactive spent nuclear fuel or produce hydrogen.

Many states and foreign countries have forged ahead of the federal government by successfully stimulating the deployment of renewable energy technologies.
For example, renewable energy accounts for 3 percent of Texas' electricity consumption because Texas enacted legislation in 1999 and 2005 requiring its electric utilities to meet renewable energy capacity standards. Similarly, Denmark has used mandates and financial incentives to promote wind energy, which provided 19 percent of its electricity in 2005.
The low priority assigned to increasing revenue results, in part, from the importance or emphasis given to other values and concerns, especially protecting resources and providing goods and services. Language in federal statutes implies that maximizing revenue should not be the overriding criterion in managing national forests. Moreover, increasingly, legislative and administrative decisions and judicial interpretations have required the Forest Service to give priority to non-revenue-generating uses over uses that can and have produced revenue. For example, the Endangered Species Act and other environmental and planning laws and their judicial interpretations limit the agency’s ability to generate revenue, requiring instead that priority be given to protecting species’ diversity and other natural resources, including clean water and clean air. In addition, both the Congress and the administration have increasingly set aside National Forest System lands for conservation—as wilderness, wild and scenic rivers, national monuments, and recreational areas. Only limited revenue-generating uses, such as timber sales and oil and gas leasing, are allowed in some of these areas. When the Forest Service can generate revenue, it is sometimes required to provide goods and services at less than their fair market value. For instance, the fee system for ski areas on national forests, developed by the ski industry and enacted into law in 1996, does not ensure that fees collected from ski areas reflect fair market value. Other legislative decisions not to charge fees for the use of most recreational sites and areas managed directly by the agency reflect a long-standing philosophy of free access to public lands. In addition, federal statutes and regulations have narrowly defined the instances in which the Forest Service can charge fees for noncommercial recreational activities, such as hunting and fishing by individuals on national forests, and the agency generally defers to state laws regulating these activities. As a result, forest managers do not charge individuals for hunting and fishing on their lands. Other legislative requirements that limit the generation of revenue from activities such as hardrock mining and livestock grazing reflect a desire to promote the economic stability of certain historic commodity uses. For example, the Mining Law of 1872 was enacted to promote the exploration and development of domestic mineral resources as well as the settlement of the western United States. Under the act’s provisions, the federal government receives no financial compensation for hardrock minerals, such as gold and silver, extracted from Forest Service and other federal lands. In contrast, the 11 western states that lease state-owned lands for mining purposes impose a royalty on minerals extracted from those lands. Similarly, the formula that the Forest Service uses to charge for grazing livestock on its lands keeps fees low to promote the economic stability of western livestock grazing operators with federal permits. In addition, revenue-retention and revenue-sharing provisions discourage efforts to control costs. For example, legislation allows the Forest Service to retain a portion of the revenue it generates from timber sales and requires the agency to share a portion of that revenue with states and counties, without deducting its costs. The costs to prepare and administer the sales are funded primarily from annual appropriations rather than from the revenue generated by the sales. 
As a result, neither the agency nor the states and counties have an incentive to control costs, and the Forest Service may be encouraged to sell timber at prices that would not always allow it to recover its costs. From fiscal year 1992 through fiscal year 1997, the Forest Service spent about $2.5 billion in appropriated funds and other moneys to prepare and administer timber sales but returned less than $600 million in timber sale revenue to the General Fund of the U.S. Treasury. When the Congress has given the Forest Service the authority to obtain fair market value for goods or to recover costs for services, the agency often has not done so. As a result, forgone revenue has cost taxpayers hundreds of millions of dollars, as the following examples from our prior work show. In June 1997, we reported that the sealed bid auction method is significantly and positively related to higher bid premiums on timber sales. However, the Forest Service used oral bids at single-bidder sales rather than sealed bids, resulting in an estimated decrease in timber sale receipts of $56 million from fiscal year 1992 through fiscal year 1996. In December 1996, we reported that, in many instances, the Forest Service has not obtained fair market fees for commercial activities on the national forests, including resort lodges, marinas, and guide services, or for special noncommercial uses, such as private recreational cabins and special group events. Fees for such activities are the second largest generator of revenue for the agency, after timber sales. The Forest Service's fee system, which sets fees for most commercial uses other than ski operations, has not been updated for nearly 30 years and generally limits fees to less than 3 percent of a permittee's gross revenue. In comparison, fees for similar commercial uses of nearby state-held lands averaged 5 to 15 percent of a permittee's total revenue. In December 1996, we also reported that although the Forest Service has been authorized to recover the costs incurred in reviewing and processing all types of special-use permit applications since as far back as 1952, it has not done so. On the basis of information provided by the agency, we estimated that in 1994 the costs to review and process special-use permits were about $13 million. In April 1996, we reported that the Forest Service's fees for rights-of-way for oil and gas pipelines, power lines, and communication lines frequently did not reflect fair market value. Agency officials estimated that in many cases—particularly in high-value areas near major cities—the Forest Service may have been charging as little as 10 percent of the fair market value. The Forest Service's failure to obtain fair market value for goods or recover costs for services when authorized by the Congress occurs, in part, because the agency lacks a financial incentive to do so. One incentive would be to allow the agency to retain and spend the revenue generated to address its unmet needs. For example, from the end of World War II through the late 1980s, the Forest Service emphasized timber production on national forests, in part, because a substantial portion of the receipts from timber sales is distributed into a number of funds and accounts that the agency uses to finance various activities on a sale area. Even now, many forest managers have the opportunity to increase their budgets by increasing timber sales.
Conversely, before fiscal year 1996, the Land and Water Conservation Act of 1965, as amended, required that revenue raised through collections of recreational fees be deposited in a special U.S. Treasury account. The funds in this account could become available only through congressional appropriations and were generally treated as a part of, rather than a supplement to, the Forest Service’s regular appropriations. However, in fiscal year 1996, the Congress authorized the fee demonstration program to test recreational fees as a source of additional financial resources for the Forest Service and three other federal land management agencies. The demonstration program legislation allows these agencies to experiment with new or increased fees at up to 100 sites per agency. The Congress directed that at least 80 percent of the revenue collected under the program be spent at the unit collecting the fees. The remaining 20 percent can be spent at the discretion of each agency. In essence, the more revenue that a national forest can generate through new or increased fees, the more it will have to spend on improving conditions on the forest. By allowing the agency to retain the fees collected, the Congress created a powerful incentive for forest managers to emphasize fee collections. Gross revenue from recreational fees on the national forests increased from $10.0 million in fiscal year 1996 to $18.3 million in fiscal year 1997, or by 83 percent, and to $26.3 million in fiscal year 1998, or by 163 percent compared with fiscal year 1996. Five sites each generated over $1 million in fiscal year 1998 compared with only two sites in fiscal year 1997. Two sites—the Mount St. Helens National Volcanic Monument on the Gifford Pinchot National Forest in Washington State and the Enterprise Forest Project in Southern California—each generated over $2.3 million in fiscal year 1998. The legislation also provided an opportunity for the four federal land management agencies to be creative and innovative in developing and testing fees by giving them the flexibility to develop a wide range of fee proposals. As a result, the Forest Service has, among other things, developed new methods for collecting fees and has experimented with more businesslike practices, such as peak-period pricing. These practices can help address visitors’ and resource management needs and can lower operating costs. According to Forest Service officials, the agency is evaluating whether to issue regulations that would allow forest managers to charge fees to recover their costs to review and process special-use permit applications. The administration also plans to forward legislative proposals to the Congress in the near future that would allow the agency to retain and spend all of the revenue generated by fees for commercial filming and photography on the national forests. Other legislative changes being considered by the agency would allow it to retain and spend all or a portion of the (1) revenue generated by fees charged to recover the costs to review and process special-use permit applications and (2) fees collected for resort lodges, marinas, guide services, private recreational cabins, special group events, and other commercial and noncommercial activities on the national forests. On the basis of our work, we offer the following observations on the Forest Service’s ongoing efforts to secure alternative sources of revenue. 
First, sustained oversight by the Congress will be needed to ensure that the agency maximizes revenue under existing legislative authorities. For instance, according to Forest Service officials, the agency is evaluating whether to issue regulations to allow forest managers to charge fees to recover their costs to review and process special-use permit applications. However, the agency has been authorized by the Congress to recover these costs since 1952 and has twice in the past 12 years developed, but not finalized, draft regulations to implement the authority. According to Forest Service headquarters officials, both times, staff assigned to develop and publish the regulations were reassigned to other higher-priority tasks. As a result, the agency estimates that it forgoes $5 million to $7 million annually.

Second, new legislation that would allow the Forest Service to retain and spend more of the revenue generated by fees would provide forest managers with additional incentive to emphasize fee collections. However, providing the agency with this authority at this time would involve risks and difficult trade-offs. In particular, the Forest Service would not be able to accurately account for how it spent the money and what it accomplished with it. While the agency has made progress in recent years, it is still far from achieving financial accountability and possibly a decade or more away from being fully accountable for its performance. Because of its serious long-standing financial management deficiencies and the problems it has encountered in implementing its new accounting system, we recently designated the Forest Service's financial management as a high-risk area vulnerable to waste, fraud, abuse, and mismanagement. In addition, revenue that the Forest Service retained and spent would generally be treated as a part of, rather than an addition to, its regular appropriations and would therefore fall under the limits on discretionary spending imposed by the Budget Enforcement Act, as amended. Allowing the agency to retain fee revenue—rather than depositing the money in the General Fund of the Treasury—would also reduce the Congress's ability to use these funds for other priorities. Furthermore, while this fee revenue may be initially earmarked for the Forest Service, nothing would prevent the Congress from using the revenue to offset, rather than supplement, the agency's regular appropriations.

Finally, new legislation being proposed or considered by the Forest Service is limited to special-use fees and, as such, does not address other potential sources of revenue. For instance, in a July 1998 report, a team of Forest Service employees identified steps that the agency should take to improve the way it conducts its business. In addition to recreational and special-use fees, the team identified the minerals and geology program and the relicensing of hydroelectric sites on the national forests as the greatest opportunities for securing alternative sources of revenue. In addition, we have reported that enacting legislation to impose a royalty on hardrock minerals extracted from Forest Service and other federal lands could generate hundreds of millions of dollars in increased revenue. However, allowing the Forest Service to collect, retain, and spend more of the revenue generated by goods and services on the national forests would require difficult policy choices and trade-offs. For example, collecting recreational fees conflicts with the long-standing philosophy of free access to public lands.
Imposing a royalty on hardrock minerals extracted from national forests conflicts with the desire to promote the economic stability of this historic commodity use. And allowing forest managers to retain and spend revenue from oil and gas leasing and production would give them a strong financial incentive to lease lands that they might otherwise set aside for resource protection or conservation. Therefore, if the Congress believes that increasing revenue from the sale or use of natural resources should be a mission priority for the Forest Service, it will need to work with the agency to identify legislative and other changes that are needed to clarify and modify the Congress's intent and expectations for revenue generation relative to ecological, social, and other values and concerns.

Mr. Chairman, this concludes our prepared statement. We will be pleased to respond to any questions that you or Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the barriers and opportunities for generating revenue on lands managed by the Forest Service. GAO noted that: (1) legislative and administrative decisions and judicial interpretations of statutory requirements have required the agency to shift its emphasis from uses that generate revenue, such as producing timber, to those that do not, such as protecting species and their habitats; (2) the Forest Service is required by law to continue providing certain goods and services at less than fair market value; (3) certain legislative provisions also serve as disincentives to either increasing revenue or decreasing costs; (4) because the costs are funded from annual appropriations rather than from the revenue generated, the agency does not have an incentive to control costs; (5) when Congress has provided the Forest Service with the authority to obtain fair market value for certain uses, or to recover costs for services, the agency often has not done so; (6) as a result, the Forest Service forgoes at least $50 million in revenue annually; (7) given a financial incentive and flexibility, the Forest Service can and will increase revenue; (8) for example, the recreational fee demonstration program, first authorized by Congress in fiscal year (FY) 1996, allows the agency to: (a) test new or increased fees at up to 100 sites; and (b) retain the revenue to help address unmet needs for visitor services, repairs and maintenance, and resource management; (9) by allowing the agency to retain the fees collected, Congress created an incentive for forest managers to emphasize fee collections; (10) gross revenue from recreational fees on the national forests increased from $10.0 million in FY 1996 to $26.3 million in FY 1998; (11) the administration plans to forward legislative proposals to Congress, and the Forest Service is considering other legislative changes that would allow the agency to collect, retain, and spend more fee revenue; (12) however, allowing forest managers to retain and spend all or a portion of the revenue they collect would involve risks and difficult trade-offs; (13) in particular, the Forest Service is still far from achieving financial and performance accountability and thus cannot accurately account for how it spends money and what it accomplishes with it; and (14) allowing the agency to collect, retain, and spend more of the revenue generated by goods and services on the national forests would also require difficult trade-offs or policy choices between increasing revenue and other values and concerns.
When disasters such as floods, tornadoes, or earthquakes strike, state and local governments are called upon to help citizens cope. Assistance from FEMA may be provided if the President, at a state governor's request, declares that an emergency or disaster exists and that federal resources are required to supplement state and local resources. The 1988 Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5121 and following) authorizes the President to issue major disaster or emergency declarations and specifies the types of assistance the President may authorize. The assistance includes temporary housing and other benefits for individuals as well as public assistance. The public assistance program funds the repair of eligible public facilities—such as roads, government buildings, utilities, and hospitals—that are damaged in natural disasters. Under the program, FEMA has obligated over $6.5 billion (in constant 1995 dollars) for disasters that occurred during fiscal years 1989 through 1994.

FEMA may make public assistance grants to state and local governments and certain nonprofit organizations for three general purposes: debris removal, emergency protective measures, and permanent restoration. Generally, the grants are to cover not less than 75 percent of the eligible costs. Over the years, the Congress has increased eligibility for public assistance through legislation that expanded the categories of assistance and/or specified the persons or organizations eligible to receive the assistance. FEMA is responsible for developing regulations and guidance to implement the program. Following a disaster declaration, FEMA helps survey damaged facilities and prepares damage survey reports (DSRs) that contain estimates of repair costs. Officials in FEMA's regional offices make the initial eligibility determinations. The applicants may appeal these decisions, first to the regional office and subsequently to FEMA headquarters. For disasters declared in fiscal years 1989 through 1994, FEMA projects that public assistance grants for permanent repairs and restorations will total over $5.2 billion. Decisions on eligibility effectively determine the level of federal spending for public assistance, affecting the amounts of grants and of FEMA's and applicants' administrative costs. The importance of clear criteria is heightened because in large disasters FEMA often uses temporary personnel with limited training to help prepare and process applications.

Our review of FEMA regulations and implementing guidance, and discussions with FEMA officials responsible for making eligibility determinations, revealed a need to clarify the criteria related to the standards (building codes) to which damaged facilities should be restored. Generally, FEMA's regulations provide that the agency will fund the restoration of an eligible facility on the basis of its design as it existed immediately before the disaster and in accordance with applicable standards. For a number of reasons, determining what standards are "applicable" can be contentious. For example, following the January 1994 Northridge (California) earthquake, a decision on assistance for restoring damaged hospitals was delayed for 2 years because of a dispute over which standards were applicable. To be considered "applicable," the standards must—among other things—be in a formally adopted written ordinance of the jurisdiction in which the facility is located or be state or federal requirements.
The standards do not necessarily have to be in effect at the time of the disaster; if new standards are adopted before FEMA has approved the DSR for the permanent restoration of a facility in the jurisdiction, the work done to meet these standards may be eligible for public assistance. FEMA regional officials cited a need to better define the authority for adopting and approving standards.

Similarly, the criteria for determining the eligibility of certain private nonprofit facilities are unclear. The Stafford Act provides that, in addition to specific types of private nonprofit facilities such as educational institutions and medical facilities, "other" private nonprofit facilities that "provide essential services of a governmental nature to the general public" may be eligible for assistance. When developing regulations to implement the legislation, FEMA relied on an accompanying House report to define the "other" category. The report's examples included museums, zoos, community centers, libraries, shelters for the homeless, senior citizens' centers, rehabilitation facilities, and sheltered workshops. FEMA's regulations incorporated the list of examples from the House report but recognized that other similar facilities could be included. FEMA experienced problems in applying this regulation because, among other things, the wide range of services provided by state and local governments made it difficult to determine whether services were of a governmental nature. In 1993, FEMA amended its regulations to limit eligible "other" private nonprofit facilities to those specifically included in the House report and those facilities whose primary purpose is the provision of health and safety services. However, FEMA officials have still found it difficult to determine whether facilities are eligible. FEMA's Inspector General has cited examples of private nonprofits that do not appear to provide essential government services yet received public assistance funding. For example, following the Northridge earthquake, a small performing arts theater received about $1.5 million to repair earthquake damage because it offered discount tickets to senior citizens and provided acting workshops for youth and seniors.

Clear criteria are important for controlling federal costs and helping to ensure consistent and equitable eligibility determinations. For example, depending on which set of standards—which determine the scope of work needed for permanent restoration—were deemed "applicable," FEMA's costs of restoring one of the hospitals damaged in the Northridge earthquake ranged from $3.9 million to $64 million. (The latter estimate is based on the cost of demolishing and replacing the hospital.) Additionally, without clear criteria, inconsistent or inequitable eligibility determinations and time-consuming appeals by grantees and subgrantees may be more likely to occur. According to FEMA officials, between fiscal year 1990 and the end of fiscal year 1995, there were 882 first-level appeals of public assistance eligibility determinations. FEMA headquarters had begun logging in second- and third-level appeals in January 1993 and could not quantify the number of such appeals that occurred before then; but from January 1993 to the end of March 1996, there were 104 second-level appeals and 30 third-level appeals. Although FEMA may always expect some appeals, clearer guidance on applying eligibility criteria could help reduce their number.
The need for clearer, more definitive criteria dealing with the eligibility for public assistance takes on added importance because of FEMA's use of temporary personnel with limited training to help prepare and process DSRs, which are used in determining the scope of work eligible for funding. The number of large disasters during the 1990s has resulted in a great number of DSRs; for example, over 17,000 after the Northridge earthquake and over 48,000 after the 1993 Midwest floods. FEMA regional officials working on the recovery from the Northridge earthquake pointed out that the lack of training directly results in poor-quality DSRs that may cause overpayments or underpayments to public assistance recipients.

According to FEMA regional officials, decisions made in determining eligibility following one disaster have not been systematically codified or disseminated to FEMA personnel to serve as a precedent in subsequent disasters. The regulations were intended to be supplemented with guidance, examples, and training to clarify eligibility criteria and help ensure their consistent application, but because of competing workloads, this did not occur as envisioned. FEMA's written guidance supplementing the regulations includes a manual published in draft in 1992 and policy memorandums. FEMA and other officials recognize the need to clarify the criteria and improve policy dissemination. At a January 1996 hearing, the Director of FEMA noted that in previous disasters FEMA staff worked without having policies in place that addressed public assistance, making eligibility determinations difficult. FEMA plans to republish and subsequently update the public assistance manual and has begun offering a new training course for officials who prepare DSRs. Also, FEMA has recently taken steps to improve policy dissemination. Examples include (1) a compendium of policy material compiled by one FEMA regional office, which FEMA headquarters is circulating to the other regions; (2) the development of a new system of disseminating policy memorandums, including a standardized format and numbering system; and (3) the dissemination—by headquarters to all regional offices—of the results of second- and third-level appeals.

To ensure that expenditures are limited to eligible items, FEMA relies largely on states' (grantees') certifications. Further limited assurance is provided by audits. When FEMA approves a DSR, it obligates an amount equal to the estimated federal share of the project's cost. The obligation makes these funds available to the state to draw upon as needed by the subgrantees. If a subgrantee wishes to modify a project after a DSR is approved, or experiences cost overruns, it must apply through the state to FEMA for an amended or new DSR. This gives FEMA the opportunity to review supporting documentation justifying the modification and/or cost overrun. In accordance with a governmentwide effort launched in 1988 to simplify federal grant administration, FEMA relies on states—in their role as grantees—to ensure that expenditures are limited to eligible items. The states are responsible for disbursements to subgrantees and certify at the completion of each subgrantee's project and the closeout of each disaster that all disbursements have been proper and eligible under the approved DSRs. FEMA does not specify what actions the states should take to enable them to make the certifications, but provides that inspections and audits can be used.
FEMA has no reporting requirements for subgrantees but expects grantees to impose reporting requirements on subgrantees so that the grantees can submit necessary reports. Most disasters stay open for several years before reaching the closeout stage. FEMA officials involved in the closeout process in the San Francisco, Atlanta, and Boston regions told us that they review the states' closeout paperwork to verify the accuracy of the reported costs, but they rely on the states to ensure the eligibility of costs.

Independent audits serve as a further check on the eligibility of items funded by public assistance grants, although the audit coverage is somewhat limited. FEMA's Office of Inspector General (OIG) audits recipients on a selective basis and attempts to audit any disaster when asked to by a FEMA regional office. Officials in the OIG's Eastern District Office could not estimate their audit coverage but said that a significant percentage of the dollars were audited by focusing on where the large sums of money went. For example, although the officials had looked at only about 20 of the several hundred public assistance subgrantees for Hurricane Hugo, they believed that those subgrantees represented about $200 million of the $240 million in public assistance costs. Officials in the Western District Office said that less than 10 percent of the disasters receive some sort of audit coverage by the OIG. Overall, they believe that probably less than one percent of DSRs are covered. States may also perform audits of specific subgrantees. Currently, California is the only state that has an arrangement with FEMA's OIG to perform audits that meet generally accepted auditing standards. (Audit coverage in California is particularly important because in recent years California has received far more public assistance funds than any other state.) OIG officials said that they have attempted to negotiate for similar audit coverage by other states, but none have agreed to provide it, generally citing the difficulty of hiring and paying for the audit staff and keeping a sustained audit effort under way in light of the sporadic nature of FEMA's disaster assistance.

FEMA may obtain additional assurances about the use of its funds from audits of subgrantees conducted as part of the "single audit" process. State and local governments and nonprofit organizations that receive $100,000 or more of federal funds in a year must have a "single audit" that includes an audit of their financial statements and additional testing of their federal programs. Auditors conducting single audits must test the internal controls and compliance with laws and regulations for programs that meet specified dollar criteria. The largest programs, in terms of expenditures, are therefore tested. Entities that receive $25,000 to $100,000 in federal assistance in a year have the option of having a single audit or an audit in accordance with the requirements of each program that they administer.

Because the public assistance officials in FEMA's 10 regional offices are involved in the day-to-day operations of the public assistance program, giving them a high degree of expertise, we obtained their recommendations for reducing the costs of future public assistance. We also asked the officials to identify potential obstacles to implementing those recommendations.
We asked the National Emergency Management Association, which represents state emergency management officials, to respond to the options that the FEMA officials generated because implementing many of the options would affect the states. Because the available records did not permit quantifying the impact of each option on past public assistance expenditures, and because future costs will be driven in part by the number and scope of declared disasters, the impact on the public assistance costs of future disasters is uncertain.

Options that (1) the FEMA regional officials strongly recommended and (2) the National Emergency Management Association endorsed for further consideration are the following:

- Better define which local authorities govern the standards applicable to the permanent restoration of damaged facilities.
- Limit the time period following a disaster during which those authorities can establish new standards applicable to the restoration.
- Eliminate the eligibility of facilities that are owned by redevelopment agencies and are awaiting investment by a public-private partnership.
- Restrict the eligibility of public facilities to those being actively used for public purposes at the time of the disaster.
- Reduce the number of times that recipients may appeal a FEMA decision on eligibility of work.
- Improve insurance requirements by (1) eliminating states' current authority to waive mandatory purchase of property insurance otherwise required as a condition of FEMA's financial assistance and (2) requiring applicants to obtain at least partial insurance, if it is reasonably available.

Additional options strongly recommended by the FEMA officials but not specifically endorsed for further consideration by the National Emergency Management Association include the following:

- Limit funding for facilities used to temporarily relocate subgrantees during appeals, because the appeals process can take several years. This option would be comparable to the insurance industry's practice of calculating maximum allowable temporary relocation costs.
- Eliminate the eligibility of revenue-generating private nonprofit organizations.
- Eliminate funding from FEMA for some water control projects.
- Limit funding for permanent restoration to the eligible cost of upgrading only the parts of structures damaged by the disaster. (Applicants would bear the expense of upgrading undamaged parts of the structures.)
- Eliminate the eligibility of publicly owned facilities that are being rented out to generate income. For example, facilities owned by local governments and rented to the private sector for use as warehouses, restaurants, stadiums, etc., would not be eligible.
- Eliminate or reduce the eligibility of facilities when the lack of reasonable pre-disaster maintenance contributes to the scope of the damage from a disaster.
- Eliminate the eligibility of the credit toward the local share of the costs of public assistance for volunteer labor and donated equipment and material.
- Increase the percentage of damage required for FEMA to replace a structure (rather than repair it) to a threshold higher than the current 50 percent.

The National Emergency Management Association proposed that considerable savings in the federal costs of public assistance could be realized by reducing the federal administrative structures.
The association also endorsed for further consideration the following options, which FEMA respondents identified but did not most strongly recommend:

- Eliminate the eligibility of postdisaster "beach renourishment," such as pumping sand from the ocean to reinforce the beach.
- Limit the scope of emergency work to the legislative intent. (The association believes that assistance for debris removal and emergency protective measures has been used for permanent repairs.)
- Eliminate the eligibility of revenue-producing recreational facilities, e.g., golf courses and swimming pools.

Clearer and more comprehensive criteria (supplemented with specific examples) that are systematically disseminated could help ensure that eligibility determinations are consistent and equitable and could help control the costs of future public assistance. To the extent that the criteria are more restrictive, the costs of public assistance in the future could be less than they would otherwise be. In the 1990s, the potential adverse effects of a lack of clear criteria have become more significant because of (1) an increase in large, severe disasters and (2) the need to use temporary employees with limited training in the process of inspecting damage and preparing damage survey reports.

A number of FEMA public assistance officials' recommendations are consistent with options proposed by FEMA's Inspector General, with our prior work, and with our current review. Furthermore, the options highlight a number of instances in which existing eligibility criteria need to be clarified or strengthened with additional guidance. Our May report contains recommendations designed to clarify and help ensure consistent application of the criteria and to identify changes that should be implemented.
GAO discussed the Federal Emergency Management Agency's (FEMA) public disaster assistance program. GAO noted that: (1) FEMA program criteria are ambiguous; (2) criteria clarifications are needed to determine which damaged facilities should be restored and the eligibility of nonprofit facilities' services for assistance; (3) inconsistent or inequitable eligibility determinations, time-consuming appeals, and waste are more likely to occur if eligibility criteria are not clear and current; (4) FEMA's use of temporary employees with limited training to prepare damage survey reports makes the need for clearer criteria more urgent; (5) FEMA has not systematically updated or disseminated eligibility policy changes to its regional offices, but it plans to do so; (6) although it approves specific subgrantee projects, FEMA relies on states as public assistance grantees to certify that expenditures are limited to eligible items; (7) as an additional, limited control over disbursements, FEMA has independent auditors or its Office of Inspector General audit some subgrantees; and (8) options to reduce costs include better defining local authorities that govern establishment of restoration standards, eliminating or restricting eligibility of certain facilities, placing limits on the appeals process, improving insurance requirements, limiting temporary relocation costs, and increasing the damage percentage for facility replacement.
Body armor for law enforcement applications includes ballistic-resistant and stab-resistant body armor—usually worn in the form of a vest—that provides coverage and protection primarily for the torso. Ballistic-resistant body armor protects against bullet penetrations and the blunt trauma associated with bullet impacts. This body armor includes soft body armor that protects against handgun bullets and less flexible tactical body armor composed of soft and hard components that protects against rifle bullets. Stab-resistant body armor protects against knives or spikes. Figure 1 depicts examples of ballistic-resistant and stab-resistant body armor. A fuller discussion of the NIJ standards appears in a later section of this report.

From the inception of the BVP program in fiscal year 1999 through fiscal year 2011, the program awarded about $340 million to help state and local jurisdictions procure nearly 1 million vests to protect their law enforcement officers. Specifically, the program awarded large jurisdictions about $131 million and small jurisdictions nearly $208 million, consistent with the program's statutory preference for jurisdictions with fewer than 100,000 residents. In fiscal year 2011, the BVP program implemented a policy that requires jurisdictions to have mandatory wear policies in place to secure awards, which means that law enforcement agencies must establish rules about when and under what circumstances body armor must be worn. In addition, the program requires that jurisdictions use this funding to purchase only ballistic-resistant and stab-resistant body armor that complies with current NIJ standards. Jurisdictions can use BVP funds to purchase only one vest per officer over the course of their vest replacement cycles, at a maximum cost of $2,250 per vest. Finally, the BVP program requires that recipients not combine BVP and JAG funding when procuring body armor with BVP awards. Jurisdictions apply for BVP awards and reimbursable payments through the online BVP system.

From fiscal years 2006 through 2011, the JAG program awarded about $4 billion, including about $2 billion in funding from the American Recovery and Reinvestment Act of 2009, to help state and local jurisdictions fund a wide variety of criminal justice activities, including corrections, prosecution and courts, and law enforcement, among others. Within the "law enforcement" area, the JAG program permits grantees to purchase equipment, such as ballistic-resistant and stab-resistant vests. However, BJA does not know how much grantees have spent on body armor because it is not required to track expenditures for specific purposes. According to preliminary information from our survey of more than 3,900 grantees that had received JAG awards from fiscal years 2005 through 2010, 222 of 1,639 respondents—or about 14 percent—noted that their jurisdictions had used JAG funds to procure ballistic-resistant body armor in fiscal year 2010. Another 37 grantees—or about 2 percent—noted that they had used JAG funding for stab-resistant vest purchases during the same fiscal year. According to BJA, more than 1,000 JAG awards are made each year, and from fiscal years 2006 through 2011, 357 grantees indicated to BJA that they planned to use JAG awards they received during this period to procure ballistic-resistant vests.

NIJ's research has led to the development of its body armor standards and also informs periodic revisions to these standards.
In particular, NIJ’s research has supported studies to enhance compliance test methods; augment ballistic materials; improve the design, comfort, and coverage of body armor; explore the effect of increased body armor coverage on the ability of officers to comfortably carry out their duties; and examine the effects of physical and environmental factors, such as extreme temperatures, on the performance and wear and tear of body armor. NIJ also is exploring ways to enhance its body armor testing methods. For example NIJ is working through DOD’s TSWG to simulate aging on the ballistic resistant panels contained in hard body armor and then conducting age-regression studies to assess their degradation, looking for ways to simulate extreme temperature and other environmental and physical conditions and improve related testing mechanisms for wear and tear, and partnering with the Royal Canadian Mounted Police and Canada’s Defense Research Establishment Valcartier to develop a protocol and specifications for testing the capacity of a vest to withstand multiple gunshots within a very small target area. NIJ also serves as an information resource on body armor by posting the results of its research and other relevant information to its website and managing listserves of body armor news for law enforcement. Based on its research and other information, NIJ also develops videos on body armor procurement and usage and hosts workshops on its standards to generate feedback and explore body armor issues with users, researchers, and developers. NIJ has been setting voluntary body armor performance standards since 1972. It is the only federal government entity that sets body armor standards and administers a program to test commercially available body armor for compliance with the standards so that the armor will perform as expected. NIJ is currently working to update its ballistic-resistant body armor standard, last revised in 2008, and its stab-resistant body armor standard, established in 2000. The current NIJ standard for ballistic-resistant body armor establishes minimum performance requirements and test methods intended to protect against gunfire. ballistic performance, as shown in appendix II. For any of these performance levels, NIJ’s test protocol requires that the body armor protects against blunt trauma by specifying that a bullet does not cause an indentation on the back of the body armor that is greater than 44 millimeters. NIJ measures the depth of this indentation on the clay material on which the body armor is mounted, as illustrated in figure 2. NIJ, Ballistic Resistance of Body Armor, NIJ Standard-0101.06 (Washington, D.C.: July 2008). NIJ does not permit body armor industry representatives to be members of the STC. According to NIJ, this provision was put in place to avoid conflicts of interest and to facilitate the participation of law enforcement practitioners. NIJ holds workshops to inform manufactures and expects that they will participate in the public review of the draft standard and related documents that the STC produces. The standard contains the minimum design and performance requirements that the body armor must meet, as well as the test methods to be used to assess the performance. The conformity assessment requirements document includes all requirements for a third party independent conformity assessment organization to demonstrate that body armor meets the standard and typically includes periodic factory surveillance and follow-up testing of production items. 
The selection and application guide provides a nontechnical description of the standard and conformity assessment requirements; performance levels, if applicable; and guidance on procurement, selection, care, maintenance, training, and administrative issues.

Regarding NIJ's update of its stab-resistant body armor standard from 2000, the STC expects to finish its draft documents—the standard and the related requirements and guide—by June 2012. At that point, NIJ plans to post them for public review so that body armor manufacturers, and any other interested parties, may submit comments. The STC then plans to address the comments and refine its draft documents, and NIJ plans to have any necessary validation testing performed. Once NIJ reviews and publishes the updated standard and related documents, compliance testing of body armor against the new standard will begin. NIJ hopes to complete this entire process by December 2012. In terms of updating its ballistic-resistant body armor standard from 2008, NIJ's Body Armor Technology Working Group held a meeting to identify needs and requirements in December 2011. NIJ expects to establish the STC by May 2012 and finalize the standard in November 2013.

To test body armor for compliance with its standards, NIJ uses its National Law Enforcement and Corrections Technology Center (NLECTC) to administer its compliance testing program. During compliance testing, manufacturers register with the compliance testing program, submit body armor model application documents to the program, and send body armor model samples to an NIJ-approved laboratory. NIJ-approved laboratories tested 159 body armor models under the NIJ compliance testing program in 2010—137 models of ballistic-resistant body armor and 22 stab-resistant models. Of the 159 models, 81, or about half, passed compliance testing, and NIJ added them to the appropriate compliant product list. We include additional details on the controls that NIJ has designed to manage its compliance testing process in the next section of this report.

Within its BVP program, BJA has designed several controls to check the eligibility of grantee payment requests, help prevent improper payments to grantees, and ensure grantee compliance with program requirements. BJA has designed several controls within the online BVP system to ensure the eligibility of payment requests. Specifically, the online BVP system is designed to

- allow only jurisdictions approved through the award process to submit payment requests, to ensure the eligibility of the jurisdictions;
- require that the highest elected official in the jurisdiction, or his or her designee, electronically verify payment requests, to ensure accountability;
- allow BVP funding recipients to request payments only for purchased vest models approved by NIJ, which appear on the drop-down list within the online system, to ensure that funds are used only for NIJ-compliant body armor;
- require BVP funding recipients to manually enter details from the purchase invoice, including the quantity, date ordered, and unit price, to ensure that the body armor was purchased within the 2-year period specified in the terms of the BVP award and to enhance accountability by allowing the request to be traced back to a specific purchase; and
- not allow BVP funding recipients to enter costs exceeding the authorized limit of $2,250 per vest.
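To make the logic of these per-request checks concrete, the following is a minimal sketch, assuming hypothetical field names, an illustrative stand-in model list, and a single validation function; the actual online BVP system is a web application, not this code.

```python
# A minimal sketch of the BVP payment-request checks described above.
# All names here are illustrative assumptions, not the real BVP system.
from datetime import date, timedelta

NIJ_COMPLIANT_MODELS = {"ModelA-IIIA", "ModelB-II"}   # stand-in for NIJ's drop-down list
MAX_COST_PER_VEST = 2250                              # authorized per-vest limit
AWARD_WINDOW = timedelta(days=730)                    # 2-year purchase period in the award terms

def validate_request(jurisdiction_approved: bool, official_verified: bool,
                     model: str, unit_price: float,
                     order_date: date, award_date: date) -> list[str]:
    """Return the reasons a payment request would be rejected, if any."""
    problems = []
    if not jurisdiction_approved:
        problems.append("jurisdiction not approved through the award process")
    if not official_verified:
        problems.append("highest elected official (or designee) has not verified")
    if model not in NIJ_COMPLIANT_MODELS:
        problems.append("vest model is not on the NIJ-compliant list")
    if unit_price > MAX_COST_PER_VEST:
        problems.append("unit price exceeds the $2,250 per-vest limit")
    if not (award_date <= order_date <= award_date + AWARD_WINDOW):
        problems.append("purchase falls outside the 2-year award period")
    return problems
```

An empty returned list would correspond to a request that clears all of the system-level checks listed above.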
To help prevent improper payments, BJA procedures call for BJA to review monthly batches of all payment requests submitted during the previous month to (1) detect anomalies between the total number of vests purchased by each jurisdiction and the number of officers in the jurisdiction and (2) identify potential duplicate requests. To detect anomalies in the number of vests purchased by a jurisdiction, BJA is to compare the number of vests the jurisdiction purchased using BVP funding over the previous 3-year period to the number of officers in the jurisdiction. If the number of vests the jurisdiction purchased during the 3-year period exceeds the number of officers in the jurisdiction by more than 10 percent, then BJA is to ask the jurisdiction to provide a response to support the large number of vests purchased.

The Office of Audit, Assessment, and Management (OAAM)—which supports DOJ's grant efforts by coordinating and developing grant policies across the agency and overseeing and monitoring grantees and grant programs—reviewed BJA's verification process for its payment programs in November 2011 (OAAM, Review of the Bureau of Justice Assistance Verification Process for Payment Programs (Washington, D.C.: November 2011)). BJA's reviews are also to be supplemented by the financial monitoring site visits that OJP's Office of the Chief Financial Officer plans to conduct this year. This type of ongoing monitoring is consistent with standards for internal control and could be integral to helping BJA with the effective stewardship of government resources.

To help ensure compliance with its new fiscal year 2011 requirement that jurisdictions have mandatory body armor wear policies in place, the BVP program asked a random sample of 110 of the 4,960 jurisdictions to which it awarded fiscal year 2011 funds to submit copies of their mandatory wear policies for BJA's review. In addition, BJA officials told us they are randomly selecting 5 percent of the jurisdictions requesting payments from fiscal year 2011 awards to obtain a copy of their mandatory wear policy as part of BJA's monthly payment request reviews. Seeking supporting documentation from a random selection of all grantees has been identified as a grant management best practice by DOJ's Office of the Inspector General.

BJA has designed several controls, but it could take two key actions to strengthen them: better managing undisbursed funds from BVP grants that have closed and improving efforts to reduce the risk of grantee noncompliance with program requirements.

The BVP program has not deobligated undisbursed funds for future use from grant awards whose terms have ended. BJA could improve its financial controls by better managing its obligations and disbursements for grants that have closed. Figure 3 shows the trends in BVP program awards (obligations) and disbursements, or reimbursements, from fiscal years 1999 through 2011. In most years, disbursements generally track with obligations; however, they show greater differences in some years. From the start of fiscal year 1999 through November 2011, the BVP program had awarded—or obligated—approximately $340 million to grantees. Of this amount, the program disbursed about $247 million to grantees through reimbursements. The $93 million difference reflects funds that BJA has awarded but for which grantees have not sought reimbursement.
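Before turning to these undisbursed funds, the 10 percent vest-to-officer screen in BJA's monthly review described above can be made concrete with a brief sketch; the function name and threshold handling are illustrative assumptions, not BJA's actual procedure.

```python
# A brief sketch of the 10 percent anomaly screen described above.
def needs_followup(vests_purchased_3yr: int, officers: int) -> bool:
    """Flag a jurisdiction whose BVP vest purchases over the previous
    3-year period exceed its officer count by more than 10 percent,
    triggering a request for a supporting response."""
    return vests_purchased_3yr > 1.10 * officers

# A jurisdiction with 100 officers would be flagged at 115 vests
# (15 percent over) but not at 108 vests (8 percent over).
print(needs_followup(115, 100))  # True
print(needs_followup(108, 100))  # False
```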
According to BJA officials, several reasons explain why grantees might not seek reimbursements from BJA: their grant term has not yet ended or they have been awarded an extension, they decided not to purchase some or all of the intended vests, or they purchased vests using funds from other sources. BJA reports that the $93 million in undisbursed funds can be broken down in the following manner (see also fig. 4):

- About $14 million in funds that BJA deobligated. BJA first awarded this money from fiscal years 1999 through 2008, but because grantees never claimed it, BJA was able to deobligate the money. According to BJA officials, once they deobligated the $14 million, they used it in two ways: (1) $8 million was used to offset a 2009 rescission in DOJ's budget and (2) the balance helped fund additional BVP program awards.
- About $27 million in funds that BJA could deobligate. BJA first awarded this money from fiscal years 2002 through 2009. The grant terms for each of these grantees have ended and, as a result, grantees are no longer eligible for reimbursement. Thus, BJA could deobligate funds from these grants that have closed.
- About $52 million in funds from awards whose terms have not yet ended. BJA awarded the bulk of this money in either fiscal year 2010 or fiscal year 2011, so grantees can still submit payment requests and the funds remain available for grantee reimbursement.

In response to our audit work, BVP program officials told us that, as of February 2012, they and their colleagues in the Office of the Chief Financial Officer were in the process of examining the $27 million available for possible deobligation and considering how to use it. However, DOJ had not yet made a final decision on this matter before we finalized this report, and officials stated that a decision likely would not be made until September 2012.

Once a grant's term has ended, a granting agency typically closes out the grant and deobligates the funds. We have previously reported that grant closeout is an important final point of accountability for grantees, ensuring that they have met all program requirements. Closing out grants also allows agencies to identify and redirect funds to other projects and priorities or return the funding to the Treasury. In the case of the BVP program, since Congress appropriates its funds through no-year appropriations, DOJ does not have to return deobligated BVP funds to the Treasury. Instead, it could enhance its management of BVP funding through its grants closeout process by, for example, redirecting any funds from closed grants to grantees in future award cycles or reducing the amount it requests in new appropriations. Given that the BVP program requested $30 million—and received about $23 million—in fiscal year 2012, deobligating this $27 million could have significant benefits.

The BVP program lists its requirements, such as those related to document retention and the prohibition on combining BVP and JAG funds, in only a few places, increasing the risk that grantees will not be aware of them. By expanding publicity of the program requirements, BJA could reduce the risk of noncompliance and increase the efficiency of its operations. Specifically, the BVP program requires that jurisdictions retain documentation on all BVP transactions for at least 3 years and prohibits the use of JAG funding to help pay for the portion of the grantees' costs that the BVP program does not cover.
Currently, these requirements are specifically cited only in the "Frequently Asked Questions" section of the BVP program's website. They are not included in other grant documents, such as the solicitation, or in the online BVP system, where grantees could more easily notice them. Emphasizing the need to comply with grant award requirements and including clear terms and conditions in funding award documents are leading practices in improving grant accountability and are fundamental to internal control standards. BJA could improve its emphasis by, for example, including these requirements in its solicitation announcing the availability of BVP funds and in its online system for tracking fund use.

None of the officials we met with from the 10 jurisdictions was aware of any specific BVP documentation retention requirements. Further, officials from 4 of these jurisdictions were not aware of the prohibition on using JAG funds as matching funds for the BVP program. All of the officials told us that, as a matter of practice, however, they retain their documents and had not been combining JAG and BVP funds. It was not within our scope to independently verify their compliance or assess the extent to which all BVP program grantees were aware of the documentation retention and matching fund requirements. Nevertheless, the fact that all 10 of the jurisdictions within our sample were unaware of the documentation retention requirement and 4 jurisdictions were unaware of the prohibition on combining BVP and JAG funds raises questions about the risks associated with noncompliance, such as financial mismanagement. Further, since the Office of the Chief Financial Officer has not yet begun any on-site financial monitoring of BVP grantees, it will be difficult for BJA to assess and mitigate these risks until the site visits are under way.

BJA officials told us that the Help Desk had commonly referred BVP recipients to the "Frequently Asked Questions" section of the website but acknowledged that better disseminating and publicizing information would help ensure that BVP recipients comply with the documentation and matching fund requirements. BJA officials stated that BJA was planning to include information on the prohibition on using JAG funds as matching funds for the BVP program in the fiscal year 2012 BVP announcement and application that they plan to release in April 2012, partly in response to our review. The officials did not have plans to further publicize the documentation retention requirement. Emphasizing the need to comply with these program requirements could help BJA improve grant accountability.

BJA also has developed an instructional manual to assist jurisdictions in using the online BVP system to complete applications and funding requests. However, none of the officials we met with from the 10 jurisdictions was aware of this resource, and they all indicated that they rely on the Help Desk when they have questions concerning the online system. BJA officials told us they wanted to make the manual easily accessible to jurisdictions by making it available through the program's website but acknowledged that including links to the manual in the grant solicitation and the online system could help further raise grantees' awareness of this resource and the information it provides. BJA officials stated that they plan to include information on the manual in the fiscal year 2012 BVP announcement and application that they expect to release in April 2012, partly in response to our review.
Further disseminating information about this resource would be consistent with standards for internal control and could help improve the efficiency of the program by providing jurisdictions with relevant information up front and potentially reducing the number of calls and emails to the Help Desk.

BJA could improve consistency across its body armor grant programs by harmonizing JAG and BVP purchase and wear requirements. It also could strengthen its monitoring practices for grantees' compliance with existing program requirements. Further, BJA could enhance its tracking of which grantees use JAG awards to purchase body armor to facilitate compliance with all existing and any new requirements it might add.

Unlike the BVP program, the JAG program does not require that grantees using JAG funding for body armor purchases have mandatory wear policies in place or purchase armor that is NIJ compliant. BJA could enhance its grant management controls by harmonizing requirements across the BVP and JAG programs so that both are holding grantees accountable to the same standards designed to ensure officer safety. We have previously identified establishing mutually reinforcing strategies and compatible policies and procedures as key coordination practices. BJA officials told us that the mandatory wear and NIJ compliance requirements were implemented for the BVP program because jurisdictions use BVP funding more often than JAG funding to purchase body armor. They told us in January 2012 that, as a result of our audit work, they planned to begin a review to consider inclusion of these requirements in the JAG program. BJA officials did not provide an estimate for how long such a review would take and did not state whether such requirements would be included. However, the officials acknowledged that they had not considered addressing the inconsistencies before. Establishing body armor requirements within JAG that are consistent with the BVP program could help BJA better promote officer safety. This could help reduce the risk that officers do not wear the body armor that was purchased with federal funds or that they are wearing body armor that does not meet NIJ standards, given that both our survey and BJA's data show that JAG grantees are using funds to purchase body armor.

BJA could strengthen its monitoring practices to better ensure compliance with the prohibition on combining JAG and BVP program awards by documenting pertinent monitoring procedures. Currently, BJA grant managers perform desk reviews, in which officials review grant documentation off-site, to assess compliance with general programmatic requirements. BJA officials told us that during these desk reviews, JAG grant managers use a checklist to guide their monitoring, and the officials acknowledged that this checklist did not contain specifics for monitoring instances where BVP and JAG funding were combined. Officials said that grant managers are trained to check for inappropriate accounting practices no matter what program they are reviewing, but they did not provide evidence of this guidance in the training curriculum and acknowledged that a documented procedure to specifically check for the combining of BVP and JAG funding was needed. They also acknowledged that the cost of documenting this monitoring step would not be prohibitive. Documenting grant managers' desk review procedures for monitoring compliance with this requirement would be consistent with standards for internal control in the federal government.
In addition, such documentation could help ensure consistency in grant managers' monitoring practices, which in turn could help BJA better ensure grantees' compliance with JAG program requirements.

BJA also could strengthen its guidance to JAG grantees on the prohibition against combining BVP and JAG funds to purchase body armor. BJA officials acknowledged that, as with the BVP program, the current "Frequently Asked Questions" section of the JAG program's website is the only place where grantees can learn of the program requirement that JAG funds not be used as matching funds for the BVP program. Currently, the prohibition is not contained in the JAG program grant solicitation or within the online Grants Management System (GMS). BJA officials recognized the importance of grantees' compliance with the prohibition on combining JAG and BVP funds and explained two additional controls they are planning to implement to enhance the information they provide to grantees on this topic, partly in response to our audit work. First, they have drafted a new section of the "Frequently Asked Questions" document that they plan to post on their website, pending final review, to better inform grantees of the prohibition on using JAG funds as matching funds for the BVP program. Second, they drafted a "special condition" that describes the prohibition on combining BVP and JAG funding that they plan to include in the JAG grant agreement, once this condition has been approved. The special condition will require prospective grantees to certify that they will not use JAG funds to match their BVP funds or combine JAG and BVP funds to purchase the same vests. The officials told us they expected to include these documents in the fiscal year 2012 grant cycle, which they thought would be announced in March 2012. Improved dissemination of this requirement could help ensure conformity with internal control standards and leading grant management practices.

BJA has limited visibility over which JAG grantees are using their awards for body armor purchases. BJA could enhance its tracking to know which grantees used JAG funds for this purpose and, as a result, be better informed and better positioned to target its monitoring for compliance with existing body armor requirements and any new ones the JAG program might add, consistent with standards for internal control. Currently, BJA uses GMS to track JAG spending across more than 150 specific categories—each associated with a "project identifier." BJA officials explained that since fiscal year 2011, they have required potential grantees to select up to five identifiers that reflect the significant ways in which they planned to use their JAG funds. If the applicants do not select any identifiers, or if JAG grant managers believe different identifiers are more appropriate, the grant managers can select as many identifiers as they deem appropriate and enter them directly into GMS. Although "bulletproof vest" is among the project identifiers, no project identifier exists that could be used for stab-resistant vests. Officials told us that, in response to our audit work, they would consider adding an identifier for "stab-resistant" vests in the future. However, BJA had not made a decision on this matter before we finalized our report.
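As an illustration of how project identifiers could support this kind of tracking, the following is a minimal sketch; the identifier strings (including the potential "stab-resistant vest" identifier discussed above), the award IDs, and the function are hypothetical, and GMS itself is a grants management application, not this code.

```python
# A hypothetical sketch of flagging JAG awards whose project identifiers
# indicate body armor spending. "stab-resistant vest" is the potential new
# identifier discussed above, not an existing GMS identifier.
BODY_ARMOR_IDENTIFIERS = {"bulletproof vest", "stab-resistant vest"}  # subset of 150+
MAX_IDENTIFIERS_PER_AWARD = 5  # grantees select up to five per application

def body_armor_awards(awards: dict[str, set[str]]) -> list[str]:
    """Return the award IDs whose selected identifiers indicate body armor
    purchases, so monitoring can be targeted at those grantees."""
    flagged = []
    for award_id, identifiers in awards.items():
        if len(identifiers) > MAX_IDENTIFIERS_PER_AWARD:
            continue  # invalid selection; a real system would reject this earlier
        if identifiers & BODY_ARMOR_IDENTIFIERS:
            flagged.append(award_id)
    return flagged

# Example: one hypothetical grantee selected "bulletproof vest".
print(body_armor_awards({"2011-DJ-001": {"bulletproof vest", "corrections"}}))
```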
Although BJA officials acknowledge that project identifiers have limitations when used to track spending, they noted that enhancing GMS with mechanisms more precise than the identifiers would take a significant financial investment. Thus, they agree that maximizing the utility of the existing project identifier system—by adding another vest category—would be a low-cost way of better understanding JAG spending on body armor in the meantime.

In addition to body armor research that NIJ funds through cooperative agreements with universities and research institutions, NIJ also has a collaborative body armor research effort with NIST. Under an interagency agreement, NIJ and NIST negotiate an annual fiscal year program plan and statement of work for NIST to provide technical and research services to support NIJ's standards and compliance testing programs, including test laboratory accreditation. We have previously reported that establishing agency plans, such as the NIST program plan, is a key collaboration practice that can reinforce agency accountability and ensure that goals are consistent and mutually reinforcing. For example, selected fiscal year 2011 NIST projects to support NIJ body armor efforts included (1) research on the links between mechanical damage (e.g., stitching, bending, folding, or stretching) of common body armor materials and ballistic performance, to determine how well the test protocol in the current ballistic-resistant body armor standard simulates real-life mechanical wear, and (2) proficiency testing to understand how different body armor test laboratories compare with each other and to support enhancements to the laboratory accreditation program.

NIJ has also designed mechanisms to leverage and share information on body armor with DOD and federal, state, and local law enforcement practitioners. For example, NIJ participates in the Technical Support Working Group (TSWG) of DOD's interagency Combating Terrorism Technical Support Office, which conducts research and development to identify and address the needs of federal, state, and local organizations that have responsibilities to prevent and respond to terrorism. TSWG has a personnel protection subgroup that focuses on developing techniques that improve the performance of body armor by reducing weight and optimizing material performance, among other things. NIJ participates in this subgroup along with NIST, the U.S. Secret Service, and the Department of Energy, among others. Currently, TSWG is leveraging resources and expertise from across participating agencies to conduct research on body armor technology to reduce blunt trauma and optimize the design of multithreat body armor worn beneath an officer's uniform, as well as similarly worn lightweight armor, among other things.

In addition, NIJ has established the Law Enforcement and Corrections Technology Advisory Council and the Body Armor Technology Working Group, which consist of federal, state, and local law enforcement practitioners, to identify the body armor needs of and solicit opinions from the law enforcement community. NIJ and NIST take the practitioners' input into account when developing the annual body armor research program plan. For example, the fiscal year 2010 program plan called for NIST to convene a workshop to evaluate the test methods for hard body armor in response to concerns raised by the Body Armor Technology Working Group. Finally, NIJ holds public conferences and workshops where the results of body armor research conducted by NIJ, DOD, NIST, and others are presented.
These coordination mechanisms help NIJ leverage resources to identify and address body armor needs, consistent with key practices to help enhance and sustain collaboration among federal agencies. Another way that NIJ coordinates with stakeholders in the body armor arena is through its new process to update its standards, which we described in an earlier section of this report. All six of the body armor manufacturers and both of the ballistic-resistant and stab-resistant materials manufacturers we met with expressed concerns that the new standards revision process would not afford them sufficient opportunities to provide input. However, according to NIJ officials, any interested party, including manufacturers, may participate in the process to develop the standards by providing input at public workshops or providing comments on the draft document. Given that this new standards revision process was still under way during the production of our report, it was too soon to tell how effective the process would be in leveraging stakeholders' knowledge and meeting globally accepted principles for stakeholder involvement in standards development, such as openness, due process, and transparency.

NIJ requires that body armor models being tested for NIJ compliance be examined at a laboratory that NIJ has approved. To obtain such approval, a laboratory must (1) be accredited by NIST's National Voluntary Laboratory Accreditation Program as meeting general international standards for laboratory technical competence and quality management, as well as meeting specific technical requirements to perform the body armor tests contained in the NIJ standards; (2) be an independent, third-party laboratory and conduct all body armor compliance testing within the United States; and (3) demonstrate its freedom from potential conflicts of interest and maintain an independent decisional relationship from its clients, affiliates, contractors, and other organizations. In addition to undergoing laboratory ballistic or stab-resistance testing, the body armor models must meet workmanship and labeling requirements. To ensure the integrity of the compliance testing results, laboratories send the test results directly to NLECTC, whose staff review the test results for compliance. NIJ then reviews the compliance test data and NLECTC's recommendation and makes the final compliance decision. Figure 5 illustrates this process.

To further enhance the compliance testing program and ensure that the body armor used by law enforcement and corrections officers is safe and reliable, NIJ has implemented a follow-up inspection and testing requirement. Under this requirement, each body armor manufacturer with an NIJ-compliant ballistic-resistant body armor model is subject to six follow-up inspections and tests over a 60-month period, consisting of (1) inspections of recently manufactured body armor samples to determine if the body armor in production continues to be constructed in the same way as the samples that were submitted for compliance testing and (2) ballistic testing to ensure that unnoticed or unintentional variations that could affect the performance of the armor have not occurred during the manufacturing process. Currently, only NIJ-compliant ballistic-resistant body armor is subject to follow-up inspection and testing, but NIJ plans to implement a follow-up inspection and testing requirement for NIJ-compliant stab-resistant body armor following the issuance of the new stab-resistant body armor standard in December 2012.
NIJ is taking steps to increase the uniformity of compliance testing procedures to address factors that may affect the outcome of ballistic or stab compliance testing. Officials we met with from all six of the body armor manufacturers expressed concerns that the results of the compliance tests may be affected by factors not controlled for in the standards' testing protocols. For example, three manufacturers said that they believe that variations in the clay that body armor is mounted to during ballistic testing could affect test results. NIJ has provided funding to NIST to conduct research to develop guidelines to standardize the process for building the clay-filled backing material fixtures used in ballistic testing to improve the repeatability of tests. NIJ expects to incorporate the results of this research in the update of the ballistic-resistant body armor standard. In addition, five of the six manufacturers raised concerns about the treatment of female body armor, citing the lack of a clear definition of what constitutes female body armor and the lack of specific, detailed protocols for testing female body armor. NIJ has recognized this as a challenge for the compliance testing program and has provided funding to NIST to conduct research to develop standard definitions of body armor types and standardized test methods for assessing the performance of contoured body armor designs. NIJ expects to incorporate the results of this research in the update of both the stab- and ballistic-resistant body armor standards. Additionally, NIST officials told us that they were working with NIJ to develop proficiency testing protocols to compare testing results across NIJ-approved laboratories as part of NIST's effort to meet international standards for this type of compliance testing within the next 3 years.

Body armor's ability to protect an officer during a critical incident depends upon (1) whether the officer is wearing body armor and (2) the level of performance and the effectiveness of the armor he or she is wearing. A number of factors can affect the use and effectiveness of body armor. For example, agency policies as well as the comfort, fit, and coverage of the body armor can affect use. Body armor fit and coverage can also affect the effectiveness of the armor, along with factors including degradation because of wear and tear, care and maintenance, and exposure to environmental conditions. DOJ has taken steps to address these factors and has efforts under way to further advance the use and effectiveness of body armor. In particular, NIJ expects to complete an evaluation within the next 3 years on the impact of its body armor efforts on law enforcement practices and policies and on body armor design and quality.

Agency policies. Several DOJ activities have addressed agency policies affecting body armor use. BVP funding for body armor has been instrumental in helping jurisdictions provide body armor for their officers' use, according to all 10 of the jurisdictions in our sample. BJA provided funding to support the development of IACP's model body armor wear policy. Once BJA required BVP applicants to have a written mandatory wear policy in place to receive funding, it made the IACP model policy available to jurisdictions upon request to assist them in fulfilling this requirement. NIJ has produced an informational video on body armor that highlights the benefits of the BVP program. See video (www.gao.gov/multimedia/video#video_id=588456) providing information on the BVP program.

Comfort, fit, and coverage.
Comfort, fit, and coverage affect both the use and effectiveness of body armor. For example, body armor can create discomfort for an officer through reduced mobility, increased weight, heat buildup under the armor, and chafing—thereby causing him or her to discontinue its use. Officials from one law enforcement association noted that complaints about body armor heat buildup are not restricted to officers in hot climates, such as Arizona, and that officers in temperate climates, such as Washington State, also report experiencing discomfort from heat. In addition, if the body armor is poorly fitting, it can both create discomfort and reduce the total coverage area. Body armor that extends too low in the front can cause discomfort by hitting the officer's gun belt, which can cause the vest to ride up toward the officer's throat or pinch the skin of the abdomen between the gun belt and the armor. Body armor that is not wide enough can leave portions of the officer's sides unprotected. One manufacturer explained that if body armor does not fit properly, it can develop set wrinkles that become weak spots in the armor and reduce its protection. Designing comfortable, well-fitting body armor for female officers is a particular challenge, according to the six body armor manufacturers in our sample. One manufacturer explained that constructing formfitting bust cups requires additional stitching that can weaken the ballistic materials; as a result, more layers of ballistic materials may be needed to compensate, making the vests thicker and less comfortable. Many manufacturers, including all six in our sample, will custom fit body armor to the specific body contours of individual officers.

DOJ's activities to address comfort, fit, and coverage include NIJ taking the following actions:

- Issuing guidance that advises agencies to take comfort into account when selecting body armor and provides information on design elements that can affect comfort. NIJ's body armor guidance also provides information on elements of proper fit and advises agencies to inspect body armor routinely to ensure proper fit. In addition, the NIJ guidance advises agencies to select body armor that provides full front, side, and back protection and includes information to help agencies select body armor that offers an appropriate balance of protection and comfort. This guidance is available on NIJ's website and the website NLECTC operates.
- Funding a study on the effect of body armor use on core body temperature to gain a better understanding of comfort issues. NIJ also plans to present the issue of including ergonomic or "wearability" test protocols to the STC as it considers revisions of the ballistic-resistant body armor standard.
- Providing funding to NIST to develop standard definitions of body armor types and standardized test methods for assessing the performance of contoured body armor designs for females. In addition, NIJ plans to bring the issue of female body armor testing methods to the STC developing the ballistic-resistant body armor standards for its consideration.
- Producing a video on body armor that provides information on body armor fit and coverage. See video (www.gao.gov/multimedia/video#video_id=588457).

Wear and tear. Age alone does not degrade the ballistic-resistant properties of body armor, but wear and tear from normal use can contribute to the deterioration of body armor's performance over time.
There is little conclusive data on the extent to which normal wear and tear affects the useful lifespan of body armor. According to NIJ, many manufacturers, including the six we met with, offer 5-year warranties on their body armor, but this is not necessarily a reflection of the service life of the armor. DOJ does have several activities under way, however, that address wear and tear factors. In particular:

- NIJ's ballistic-resistant body armor test protocols examine body armor's performance after undergoing mechanical wear in a tumbler for 10 days while being exposed to hot and humid conditions. Appendix IV depicts a tumbler used for conditioning ballistic-resistant body armor before testing.
- NIJ's body armor guidance contains information on body armor life expectancy and replacement policies, and encourages agencies to visually inspect armor for signs of excessive wear and tear at least once a year.
- NIJ and NIST are jointly researching the properties of used body armor and how ballistic materials change over time, the relationship between changes in the ballistic materials and ballistic performance that could inform test methods for used armor, body armor designs that are less vulnerable to mechanical damage from wear and tear, and artificial aging methods that could be used to predict the service life of armor.
- NIJ has produced a video that provides information on inspecting body armor for signs of wear and tear. See video (www.gao.gov/multimedia/video#video_id=588458).

Care and maintenance. Dry cleaning solvents, harsh detergents, bleach, and accumulated soap residue can damage body armor and curtail its effectiveness. Further, improper storage can lead to the development of set wrinkles, stretching, and exposure to environmental conditions that can degrade performance. For example, hanging body armor on a hanger may stretch out the elastic shoulder straps and reduce the ballistic-resistant panels' proper coverage across the torso. DOJ has taken several actions to address these care and maintenance factors. Specifically, NIJ requires that all NIJ-compliant body armor contain labels that include care instructions; NIJ's body armor guidance also includes information on caring for body armor; and NIJ has produced an informational video on body armor that discusses body armor care. See video (www.gao.gov/multimedia/video#video_id=588459).

Exposure to environmental conditions. Exposure to environmental conditions of extreme temperature, moisture, humidity, and ultraviolet light can degrade ballistic-resistant materials. However, the amount of moisture associated with normal perspiration is not sufficient to affect ballistic performance, and most commercially manufactured armor is treated with water-repellent materials or enclosed in water-resistant covers. NIJ has several activities that aim to address these environmental factors:

- NIJ has ballistic-resistant body armor test protocols that examine armors' performance after being submerged in water and being environmentally conditioned for 10 days in a tumbler that subjects the armor to heat and humidity and mechanical wear. Appendix IV shows a tumbler for conditioning ballistic-resistant body armor before testing.
- NIJ plans to present the issue of including environmental testing protocols to the STC as it considers revisions to the stab-resistant body armor standard.
NIJ and NIST are jointly researching the effect of heat, humidity, and moisture on the strength of newer body armor materials and exploring how to verify or improve the environmental conditioning protocol in the ballistic-resistant body armor standard.

Body armor has demonstrated its ability to better protect law enforcement officers, and DOJ has a number of efforts under way to promote its use and improve its effectiveness. Further, in managing its funding programs, BJA has designed several financial controls and has plans to further enhance grantee monitoring. Given the importance of the body armor initiatives under way to the safety of law enforcement officers—and the importance of sound financial management to program operations—opportunities exist for BJA to improve in several areas. In particular, deobligating undisbursed funds from BVP grant awards whose terms have ended and that have closed could help prevent improper accounting and enhance its management of program funds. Further, increasing grantees' awareness of the documentation retention requirement could help ensure grantees' accountability in the use of federal funds. Additionally, harmonizing requirements across the BVP and JAG programs could improve consistency in the department's efforts to ensure law enforcement officers' protection. Finally, by fully documenting its procedures for monitoring compliance with program requirements and improving its tracking of which JAG grantees are using funds for stab-resistant body armor purchases, BJA could better target its compliance efforts.

To enhance management of body armor funding, improve grantee accountability in the use of federal funds, reduce the risk of grantee noncompliance with program requirements, and ensure consistency in the department's efforts to promote law enforcement officer safety, we recommend that the BJA Director take the following five actions:

1. Deobligate undisbursed funds from grants in the BVP program that have closed.
2. Expand information available to BVP grantees on the current program requirement for jurisdictions to retain documentation on all transactions for at least 3 years.
3. Establish requirements within the JAG program that grantees using the money for body armor purchases have written mandatory wear policies in place and that they are permitted to purchase only NIJ-compliant body armor.
4. Document procedures for desk review checks on compliance with JAG program requirements.
5. Establish a project identifier within GMS to track stab-resistant body armor.

We provided a draft of this report to DOJ for comment. We received written comments on the draft report, which are reprinted in appendix V. DOJ agreed or agreed in part with all five recommendations in the report, and we believe that DOJ's planned actions address the intent of each. Specifically:

- DOJ agreed with the recommendation that the department deobligate undisbursed funds from BVP grants that have closed and said that, in the absence of statutory restrictions stating otherwise, BJA intends to use the deobligated, undisbursed BVP funds to supplement appropriations in fiscal years 2012 and 2013.
- DOJ agreed with the recommendation that it expand the information available to BVP grantees on the program requirement that jurisdictions retain documentation for at least 3 years and said it will add language in the fiscal year 2012 BVP program requirements to address this issue.
- DOJ generally agreed with the recommendation that it establish requirements within the JAG program that grantees using the money for body armor purchases have written mandatory wear policies in place and that they are permitted to purchase only NIJ-compliant body armor. DOJ stated that it had sufficient legal authority to establish these requirements in the JAG program, but noted that it plans to implement such requirements carefully to avoid impeding the ability of local jurisdictions to purchase ballistic equipment that does not have standards, such as K-9 ballistic vests, and to accommodate other JAG program requirements.
- DOJ agreed, in part, with the recommendation that it document procedures for its checks on compliance with JAG program requirements, acknowledging the importance of closely monitoring this requirement. However, it stated that it did not believe that desk reviews are the best mechanism for ensuring that grantees are separately tracking and administering JAG and BVP funds and stated that it would develop and institute additional controls beyond desk reviews to ensure grantees' compliance.
- DOJ agreed with the recommendation that it establish a project identifier within the Grants Management System to track stab-resistant body armor and stated that it will add a project identifier for stab-resistant vests during the fiscal year 2012 JAG program application process.

DOJ also provided technical comments, which we incorporated into the report as appropriate.

We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

This report answers the following questions: (1) What efforts does the Department of Justice (DOJ) have under way to support state and local law enforcement's use of body armor? (2) To what extent has DOJ designed controls over these efforts and coordinated them with stakeholders within and outside of the department? (3) What factors affect body armor's use and effectiveness, and how has DOJ addressed these factors?

To address all three questions, we obtained and assessed body armor-related documents from the Bureau of Justice Assistance (BJA) and the National Institute of Justice (NIJ), such as program requirements, procedures, processes, and budget information for fiscal years 1999 through 2011, and interviewed BJA and NIJ officials. In addition, we attended NIJ workshops in 2011 on its body armor standards and observed body armor testing firsthand. To identify DOJ's body armor efforts to support state and local law enforcement, we examined program data on BJA's Bulletproof Vest Partnership (BVP) program for fiscal years 1999 through 2011 as well as its Edward Byrne Justice Assistance Grant (JAG) program for fiscal years 2006 through 2011. These are two grant programs supporting state and local law enforcement's purchases of body armor. To assess the reliability of the BVP data, we talked with BJA officials about data quality control procedures and reviewed relevant documentation. We determined that the data were sufficiently reliable for the purposes of this report.
We also examined preliminary information from a GAO survey of the more than 3,900 JAG grantees that had received awards from fiscal years 2005 through 2010 to determine the extent to which they had procured ballistic-resistant and stab-resistant body armor in fiscal year 2010. The survey data included in this report reflect a 42 percent response rate and are not generalizable to all JAG grantees.

To evaluate the extent to which DOJ designed controls over and coordinated its body armor efforts, we assessed DOJ's body armor program policies, procedures, processes, and coordination efforts using standards for internal control in the federal government and leading practices for grant management and stakeholder coordination. We also discussed body armor efforts and coordination issues with federal officials inside and outside of DOJ. In particular, we interviewed officials from DOJ's law enforcement components, including the Bureau of Alcohol, Tobacco, Firearms and Explosives; the Bureau of Prisons; the Drug Enforcement Administration; the Federal Bureau of Investigation; and the U.S. Marshals Service. Furthermore, we interviewed officials from the Department of Defense's Technical Support Working Group subgroup for personnel protection and the Department of Commerce's National Institute of Standards and Technology who are involved in body armor research, standards, and testing to discuss their efforts and the extent to which they coordinate with DOJ.

We also discussed body armor use with male and female law enforcement officers who wear body armor and visited their offices to see the body armor that had been purchased using federal funds. Unlike a random sample, a nonprobability sample is more deliberatively chosen, meaning that some elements of the population being studied have either no chance or an unknown chance of being selected as part of the sample. Although the information from these nonprobability samples is not generalizable, it provides valuable insight into body armor issues.

We obtained perspectives on NIJ's coordination efforts and body armor standards and compliance testing programs from interviews with nonprobability samples of six body armor manufacturers, two NIJ-approved body armor testing laboratories, and two body armor materials manufacturers. We selected our sample of six body armor manufacturers based upon the size of the company and the types of armor produced. The six manufacturers were Armor Express, Force One, Paraclete, Point Blank, Safariland, and US Armor. We selected the following two NIJ-approved laboratories to visit based upon their proximity to our office: Chesapeake Testing and H.P. White Laboratory. We also met with officials from DuPont and Honeywell, two body armor materials manufacturers, based upon their officials' availability to meet at a location near GAO. Although the information from these nonprobability samples is not generalizable, it provides valuable insight into body armor issues.

For our analysis of the factors affecting body armor's use and effectiveness, we reviewed body armor literature and discussed these factors with the officials we interviewed for the second question, described above. We also reviewed BJA and NIJ programmatic information, such as program requirements and research plans, to determine the extent to which DOJ had taken actions to address these factors. The documents reviewed cover the period from 1974 through 2012, which generally corresponds to the time period of DOJ's body armor efforts.
During the course of our review, the Point Blank and Paraclete body armor brands were acquired by another body armor manufacturer. Our selection of these manufacturers and the information we obtained from them preceded the acquisitions.

We conducted this performance audit from March 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The stab-resistant body armor standard defines three levels of performance for stab-resistant body armor. For each of the three protection levels, the test protocol requires that the knife blade or spike impact the armor test sample at two energy levels. NIJ established a maximum allowable penetration limit of 7 millimeters for the first energy level, based upon research indicating that internal injuries to organs would be extremely unlikely at this depth. The second energy level is an "overtest" to ensure that an adequate margin of safety exists. NIJ defined a maximum allowable penetration limit of 20 millimeters for this second energy level. Table 2 illustrates the performance standards at each level.

In addition to the contact named above, Joy Booth, Assistant Director, and Juan Tapia-Videla, Analyst-in-Charge, managed this assignment. Heather May and Ana Ivelisse Aviles made significant contributions to the work. Stanley Kostyla assisted with design and methodology. Willie Commons III provided legal support. Katherine Davis provided assistance in report preparation, and Lydia Araya made contributions to the figures and videos presented in the report.
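The two-level penetration check described above lends itself to a simple worked example. The following is a minimal sketch assuming hypothetical measured penetration depths; the function and data names are our own illustration, not part of the NIJ standard or its test apparatus.

```python
# Minimal sketch of the stab-resistant standard's two-level penetration
# check described above: a 7 mm maximum at the first energy level ("E1")
# and a 20 mm maximum at the second, "overtest" energy level ("E2").
# Data and names are hypothetical illustrations.

MAX_PENETRATION_MM = {"E1": 7.0, "E2": 20.0}

def sample_within_limits(measurements):
    """measurements maps an energy level to the observed penetration
    depths (in millimeters) for one armor test sample."""
    return all(
        depth <= MAX_PENETRATION_MM[level]
        for level, depths in measurements.items()
        for depth in depths
    )

# A hypothetical sample: several strikes recorded at each energy level.
sample = {"E1": [3.2, 5.9, 6.8], "E2": [11.0, 19.4]}
print(sample_within_limits(sample))  # True: every strike is within its limit
```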
Since 1987, body armor—in the form of ballistic-resistant and stab-resistant vests—has reportedly saved the lives of over 3,000 law enforcement officers nationwide. Recognizing body armor's value, the Department of Justice (DOJ)—through its Bureau of Justice Assistance (BJA) and its National Institute of Justice (NIJ)—has implemented initiatives to support state and local law enforcement agencies' use of body armor.

GAO was asked to examine (1) DOJ's efforts to support the use of body armor, (2) the extent to which DOJ has designed controls to manage and coordinate these efforts, and (3) the factors affecting body armor's use and effectiveness and steps DOJ has taken to address them. GAO reviewed information on DOJ's efforts, and interviewed officials from BJA, NIJ, 6 manufacturers, 2 laboratories, 3 law enforcement associations, 10 state and local jurisdictions, and 12 stakeholders in and outside of government. GAO selected these organizations nonrandomly based in part on their size and location. GAO also examined body armor literature on key factors affecting body armor's use and effectiveness and reviewed DOJ's efforts to address these factors.

The Department of Justice (DOJ) has a number of initiatives to support body armor use by state and local law enforcement, including funding, research, standards development, and testing programs. Two Bureau of Justice Assistance (BJA) grant programs provide funding to state and local law enforcement to facilitate their body armor purchases. The Bulletproof Vest Partnership (BVP) program offers 2-year grants on a reimbursable basis. The Edward Byrne Memorial Justice Assistance Grant (JAG) program provides 4-year grant money up front that can be used to fund body armor procurement along with other criminal justice activities. Since the BVP program's inception in 1999, it has reimbursed grantees about $247 million for their purchases of nearly 1 million vests. The JAG program has provided nearly $4 billion from fiscal years 2006 through 2011, but BJA does not know how much of this amount grantees have spent on body armor because it is not required to track expenditures for specific purposes. BJA reports that from fiscal years 2006 through 2011, 357 grantees intended to use JAG funds for ballistic-resistant vest procurement, but it does not track how many grantees intended to purchase stab-resistant vests. The National Institute of Justice (NIJ) sponsors body armor research, establishes body armor performance standards, and oversees body armor testing for compliance.

DOJ designed several internal controls to manage and coordinate BJA's and NIJ's body armor activities, but could take steps to strengthen them, consistent with standards for internal control. For example, the BVP program has not deobligated about $27 million in undisbursed funds from grant awards whose terms have ended. To strengthen fund management, DOJ could deobligate these funds for grants that have closed and, for example, apply the amounts to new awards or reduce requests for future budgets. Also, unlike the BVP program, the JAG program does not require that the body armor purchased be NIJ compliant or that officers be mandated to wear the armor purchased. To promote officer safety and harmonize the BVP and JAG programs, DOJ could establish consistent body armor requirements.
Factors affecting body armor use and effectiveness include law enforcement agencies' policies mandating wear; the comfort, fit, and coverage of the vests; degradation caused by wear and tear; and exposure to environmental conditions. Among other efforts to address these factors, DOJ has revised its standards and compliance tests to incorporate the latest technology.

GAO recommends that, among other actions, DOJ deobligate undisbursed funds from grants in the BVP program that have closed, establish consistent requirements within its body armor grant programs, and track grantees' intended stab-resistant vest purchases. DOJ generally agreed with the recommendations.
Between 1988 and 1995, the Department of Defense (DOD), acting under special legislative authorities, conducted four rounds of base realignments and closures (BRAC). According to DOD's calculations, when all BRAC actions from those rounds are completed, no later than 2001, DOD will have reduced its domestic military basing structure by about 20 percent. DOD believes it needs to reduce its domestic basing infrastructure even further to bring it more into line with reductions in its force structure and funding levels and free up funds for other programs, including modernization. Consequently, in 1997 and 1998, the Secretary of Defense requested the Congress to authorize additional rounds of base closures. However, the Congress continues to have many questions about the four BRAC rounds and has not been willing to authorize additional ones to date. Some in the Congress, noting the lengthy time frame allowed for closures and realignments to be completed, have suggested that additional BRAC rounds should not be authorized until prior recommendations have been implemented and the effects of those decisions fully assessed. Some members have also raised questions about the adequacy of DOD's accounting for the costs and savings associated with BRAC decisions, including environmental restoration costs and other costs to the government not borne directly by DOD; the extent to which environmental restoration associated with BRAC might continue beyond 2001; and the economic impact on communities affected by closures and their ability to recover.

DOD has characterized the four rounds of BRAC actions as representing about 20 percent of its major bases, producing decisions to close 97 out of 495 major domestic installations and many smaller ones and to realign many other facilities. However, trying to fully assess the magnitude of closures, tally the precise numbers of bases closed or realigned, or differentiate between the two is difficult. For example, individual BRAC commission recommendations may have included actions affecting multiple bases. Additionally, BRAC commissions in the later rounds made changes, or what are termed "redirects," to prior BRAC decisions. In total, the four BRAC rounds produced 499 recommendations affecting about 450 military activities.

In our 1995 report on the BRAC process, we noted that the term base closure often leaves the impression that a larger facility is being closed. However, that may not actually be the case. Military installations are diverse and can include a base, camp, post, station, yard, center, home port, or leased facility and can vary in size from a few acres to hundreds of thousands of acres. Further, an installation may house more than one mission or function. For example, in 1993 the Navy closed the Norfolk Naval Aviation Depot, which was located on the Norfolk Navy Base, which included the Norfolk Navy Station, Supply Center, and Air Station. Our report noted that full closures may involve relatively small facilities, rather than the stereotypical large military base. It also noted that the number of bases recommended for closure or realignment in a given BRAC round was often difficult to precisely tabulate because closure decisions did not necessarily completely close facilities. In the BRAC process, decisions generally were made to either close or realign facilities.
While the 1990 BRAC enabling legislation did not specifically define what is meant by "close," it did define a realignment as any action that reduces and relocates functions and civilian positions. Our 1995 report noted that an individual BRAC recommendation may actually affect a variety of activities and functions without fully closing an installation. More specifically, the nature of closures and realignments was such that both could result in the closure of portions of facilities, and the distinction between the two was not always clear. For example, our 1997 report on BRAC lessons learned contained a listing of base closure decisions DOD reported as major closures. Excluded from that list was the BRAC 1995 decision regarding Kelly Air Force Base, Texas, which DOD characterized as a major base realignment. The actual decision included shifting a portion of the base's property to the adjacent Lackland Air Force Base and moving the depot maintenance workload of the Air Logistics Center located on Kelly to other DOD depots or to private sector commercial activities as determined by the Defense Depot Maintenance Council. Some closures, as well as realignments, such as those involving the Army's Fort Pickett, Virginia, and Fort Hunter Liggett, California, essentially call for cessation of active military presence on the installations while retaining nearly all of the property for use by reserve components. Finally, efforts to precisely determine the numbers of bases closed or realigned are complicated by changes that are made to BRAC decisions in later BRAC rounds. The BRAC process allowed DOD to propose changes to previous commission recommendations, or redirects, while it was considering new base closures in rounds conducted in 1991, 1993, and 1995. Redirects often meant redirecting the planned movement or activity to a base other than the one cited as the receiving base in a prior BRAC round.

By law, DOD must initiate closure or realignment actions no later than 2 years after the President submits the recommended BRAC list to the Congress and must complete implementation within 6 years. However, this 6-year period refers only to the time permitted to implement realignment or closure decisions, such as moving functions from one base to another or halting military activities on a base as a base closes. DOD's involvement on an installation can go beyond the 6 years as it completes the process of cleaning up environmental contamination on the bases and disposing of the unneeded property. DOD must comply with cleanup standards and processes associated with laws, regulations, and executive orders in conducting assessments and cleanup of its base closure property. DOD spends about $5 billion annually to fulfill its environmental mission, including compliance and cleanup of contamination from hazardous substances and waste on active, closing, and formerly used DOD sites. While DOD has an ongoing environmental program at each of its military bases, the decision to close a military base and dispose of unneeded property can require expedited cleanups that may not have otherwise occurred. The time needed to accomplish required cleanup activities can extend many years beyond the 6 years allowed under BRAC legislation for ceasing military operations and closing a base. The status of cleanup activities can also affect transferring title of the property from the federal government to others.

The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 (42 U.S.C.
9601 et seq.) provides the framework for responding to contamination problems. CERCLA authorizes the federal government to respond to spills and other releases of hazardous substances. It generally requires that the government warrant that all remedial action necessary to protect human health and the environment has been taken before property is transferred by the United States to nonfederal entities, such as communities or private parties. While CERCLA had originally authorized property transfers to nonfederal ownership only after all remedial action had been taken, the act was amended in 1996 to expedite transfer of contaminated property. Now such property, under some circumstances, can be transferred to nonfederal users before all remedial action has been taken. However, remedial action must still be taken at some point. Given the large amount of land being affected by the BRAC process and the delays that could be encountered due to environmental cleanup, the Congress included provisions in the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160) that were intended to stimulate base reuse prior to property transfer. That legislation authorized the military services to lease property to facilitate state or local economic reuse without limiting the length of a lease. Previous leases were subject to certain limitations, including a term not to exceed 5 years and DOD’s right to revoke the leases at will. Although leasing property allows its reuse before cleanup has been completed, DOD is still liable for environmental cleanup costs. Once property is no longer needed by a federal agency, the property is declared excess by the agency and is offered to other federal agencies to satisfy their requirements. Excess property that is not selected by federal agencies is declared surplus to the federal government. At that point, the Federal Property and Administrative Services Act of 1949 authorizes disposal of the property through a variety of means, including transfers to states and local governments for public benefit purposes and negotiated or public sales. Additionally, a 1993 amendment to the BRAC legislation states that under certain circumstances, surplus real property can be transferred to local redevelopment authorities under economic development conveyances for economic development and job creation purposes. This section enables communities to obtain property under more flexible finance and payment terms than previously existed. For example, a community can request property at less than fair market value if it can show the discount is needed for economic development. An important step for communities as they seek to recover from the adverse effects of base closures is to organize local base reuse authorities to interact with DOD on base closure, property disposal, and reuse issues. As shown in figure 1.1, local reuse authorities generally seek surplus property under one of the public benefit transfer or economic development authorities because these can be no-cost or no-initial cost acquisitions. If the property reuse does not meet the requirements for these conveyances, local reuse authorities can still pursue a negotiated sale without competing with other interested parties. Any surplus property that remains is available for sale to the general public. While our previous work has shown that BRAC savings are likely to be substantial, accounting precisely for the costs and savings of BRAC actions is a difficult task. 
DOD does not have systems in place to track and update savings. Further, some costs associated with BRAC actions, such as federal assistance to BRAC-affected communities, are not included in BRAC implementation budgets and are not considered when calculating overall costs. We have previously reported that savings from prior BRAC rounds are expected to be substantial, although DOD has not always documented them well or updated them on a regular basis so as to provide the precision needed to support savings claims. Likewise, as stated in our July 1997 report, significant net savings are likely once up-front closure costs have been paid, although such costs have been higher than initially estimated and have caused net savings not to be realized as quickly as DOD projected.

The first publicly released costs and savings forecasts from BRAC actions are the numbers typically associated with DOD's list of proposed closures and realignments that are endorsed by the commission. DOD's and the commissions' initial BRAC decision-making did not include the cost of environmental restoration, in keeping with DOD's long-standing policy of not considering such costs in its BRAC decision-making, whereas subsequent BRAC implementation budget estimates do. This policy is based on DOD's obligation to clean up contaminated sites on military bases regardless of whether they are closed. We agree with DOD in not considering these costs in developing its cost and savings estimates as a basis for base closure recommendations. At the same time, we agree with DOD's position that environmental restoration costs are a liability to it regardless of its base closure decisions, and we have reported that these costs are substantial. The subsequent inclusion of environmental cleanup costs in DOD's budget has the practical effect of reducing the short-term savings from BRAC actions and delaying the beginning of net annual recurring savings.

We have also reported that another difficulty in precisely determining BRAC savings is that accounting systems—not just those in DOD—are designed to record disbursements, not savings. The services develop savings estimates at the time they are developing initial BRAC implementation budgets, and these are reported in DOD's BRAC budget justifications. Because DOD's accounting systems do not track savings, updating these estimates requires a separate data tracking system, which DOD does not have. The lack of updates is problematic because initial savings estimates are based on forecasted data that can change during actual implementation, thereby increasing or decreasing the amount of savings. We have recommended that, regardless of whether the Congress authorizes future BRAC rounds, DOD needs to improve its periodic updating and reporting of savings projections from prior BRAC decisions. As stated in our July 1997 report, this information has been needed to strengthen DOD's budgeting process and ensure that correct assumptions were being made regarding expected reductions in base operating costs, as well as to provide greater precision to DOD's estimates of BRAC savings. We have also noted that not all federal costs associated with implementing base closures are included in DOD's BRAC implementation budgets.
We previously reported that various forms of federal assistance have been made available to communities, including planning assistance to help communities determine how they could best develop the property, training grants to provide the workforce with new skills, and grants to improve the infrastructure on bases. Our 1996 report stated that over $780 million in direct financial assistance to areas affected by the 1988, 1991, and 1993 BRAC rounds was not included in the BRAC budget.

The economic impact on communities affected by BRAC actions has been a long-standing source of public anxiety. Because of this concern, DOD included economic impact as one of eight criteria it used for making BRAC recommendations in the last three BRAC rounds. While economic impact did not play as large a role in initial BRAC deliberations as did other criteria and was not a key decision factor, such as military value, its importance was such that DOD components were required to calculate the economic impact of each of their recommendations. For BRAC 1995, where the cumulative economic impact of prior BRAC rounds also became a concern, we found little documentation indicating that DOD components had eliminated potential closure or realignment candidates from consideration for economic impact reasons. While defense civilian job loss and other adverse effects on communities are an inescapable byproduct of base closures, at least in the short term, we noted in our July 1997 report that some limited studies indicated that, in a number of BRAC-affected communities, the local economies appeared to be able to absorb the economic losses, though some communities were faring better than others. To some extent, it appears that the various federal programs and benefits provided to those communities affected by BRAC actions helped to cushion the impact of base closures. Still unanswered were questions about overall changes in employment and income levels in the broad range of communities affected by BRAC actions, particularly those in less urban areas with less diverse economic bases.

In part because of lingering questions about the costs and savings generated by previous BRAC rounds, in 1997 the Congress required the Secretary of Defense to report on the costs and savings attributable to prior BRAC rounds and the need, if any, for additional BRAC rounds, among other issues. DOD issued its report in April 1998 and concluded that BRAC costs were below or close to its original estimates and that BRAC actions would save billions of dollars after up-front costs were paid. DOD emphasized that excess capacity in its installations warrants two additional BRAC rounds and that upkeep for unneeded installations wastes resources needed for modernization. DOD also reported that BRAC rounds enhanced military capabilities primarily by enabling the services to consolidate activities and shift funding from infrastructure support to other priorities. In our review of DOD's report, we agreed that BRAC savings would be substantial after up-front costs were paid but questioned the preciseness of the estimates. We also agreed that DOD had excess capacity at its installations, but questioned DOD's methodology for assessing its infrastructure capacity.

To assist the Congress should it consider the need for additional BRAC rounds in the future, we reviewed a number of important issues associated with the prior rounds. At the request of Mr. John E.
Sununu, House of Representatives, we are providing information that addresses (1) DOD's progress in completing action on BRAC recommendations and transferring unneeded base property to other users, (2) the precision of DOD's estimates of BRAC costs and savings, (3) environmental cleanup progress and estimated associated costs, and (4) reported trends in economic recovery in communities affected by base closures.

To determine whether DOD has taken action on BRAC commissions' recommendations as required by law, we compiled a comprehensive listing of recommended actions included in the commissions' reports. Because DOD reports typically focus on major closures and realignments, and because it is not readily apparent what constitutes a major action (the military services define the term differently), our listing is as complete as possible. We compared the commissions' recommended actions to military service and defense agency data to determine if they were completed within the 6-year period specified by law. We also performed a comparative analysis of the completed actions by round and the time to complete them. To assure that we were using the most reliable data available, we followed up to reconcile discrepancies. While we examined the timing of the completed actions based on March 1998 data, we did not attempt to determine whether the specific actions taken complied with the commissions' recommendations.

To assess DOD's progress in transferring unneeded base property to other users, we reviewed property disposition plans as of September 30, 1997, and compared the plans with available data on actual property transfers. We collected transfer data from the services and defense agencies and reconciled discrepancies with data from our prior reviews. We validated selected data by visiting several closing bases and comparing their property records to those provided by the services' and defense agencies' BRAC offices. The bases where we performed work included Lowry Air Force Base, Colorado; Mather Air Force Base, California; Mare Island Naval Shipyard, California; Defense Distribution Depot, Ogden, Utah; Tooele Army Depot, Utah; Cameron Station, Virginia; and Vint Hill Farms Station, Virginia. Our visits provided us with a mix of service and defense agency BRAC sites across various closure rounds.

To determine to what extent DOD has routinely updated its cost and savings estimates for BRAC actions, we relied, in part, on our prior BRAC reports and reviewed Congressional Budget Office, DOD, DOD Office of Inspector General, and service audit agency reports. We also interviewed officials in the DOD Comptroller office and the BRAC and budget offices of the military services and two defense agencies—the Defense Logistics Agency and the Defense Information Systems Agency—to obtain their views concerning DOD policy, procedures, and practices for updating cost and savings estimates. To determine how frequently these estimates were updated, we compared estimates presented in DOD's fiscal year 1993-99 BRAC budget submissions for the 1991, 1993, and 1995 rounds. We did not evaluate the 1988 round because DOD and military service officials cited numerous budget estimation difficulties with BRAC 1988 activities. While we did not independently determine the reliability of the budget data we used for our analysis, we did examine data included in the services' and DOD's budget submissions to ensure that the figures were consistent.
In this regard, we found some inconsistencies and informed appropriate officials who took corrective actions. To assess the completeness of DOD’s cost and savings estimates for BRAC-related actions, we reviewed data included in the estimates. Because two major cost elements—expected environmental costs beyond 2001 and certain federal agency economic assistance provided to BRAC-affected communities—were not included in the estimates and not used to calculate savings, we obtained available cost data for these elements to assess their relative impact on BRAC net savings. To determine DOD’s progress and costs associated with its environmental work at BRAC bases, we analyzed DOD documentation on environmental program initiatives and met with officials from the military services, the Defense Logistics Agency, and the Office of the Deputy Under Secretary of Defense for Environmental Security to discuss difficulties in cleaning BRAC bases and overall program status; contacted U.S. Environmental Protection Agency officials to obtain financial data and their views on DOD’s environmental cleanup efforts; spoke with California, Colorado, and Utah environmental regulators to obtain their views on the cleanup process; and visited several BRAC bases to discuss environmental issues with base officials and community personnel. The bases where we performed work were Lowry Air Force Base; Mather Air Force Base; Mare Island Naval Shipyard; Fort Ord, California; Defense Distribution Depot, Ogden, Utah; and Tooele Army Depot. These bases provided us a mix of service and defense agency BRAC sites across various BRAC rounds. Some sites afforded us an opportunity to gain insights into specific environmental issues. For example, the Fort Ord site has extensive unexploded ordnance (UXO) contamination, which presents a costly and challenging cleanup task for DOD. Because DOD has not developed a total environmental cost estimate for its BRAC bases, we developed such an estimate, using available program cost data from various DOD financial sources. We had to reconcile discrepancies in environmental cost data in multiple DOD documents in order to use the most reliable data for developing that estimate. Even so, the estimate is subject to variability because of unknowns and unresolved cleanup issues associated with UXO. To gain a sense of the potential costs of removing UXO, we discussed the matter with DOD and Environmental Protection Agency officials. To assess the economic recovery of communities affected by base closures and realignments, we reviewed several studies dealing with this issue. We also (1) performed an economic assessment of communities where more than 300 civilian jobs were eliminated in the four closure rounds and (2) visited the surrounding communities of six major base closures. In performing our economic assessment, we used unemployment rates and per capita income as measures for analyzing changes in the economic condition of affected communities. We chose to use unemployment rates and per capita income as key performance measures because (1) DOD used these measures in assessing the economic condition of local areas in its economic impact analysis for recommended BRAC locations in the closure rounds and (2) these measures are commonly used by economists to gauge changes in the economic health of an area over time. 
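To make the indicator comparison concrete, the sketch below works through the kind of community-versus-national check described above. It is a minimal sketch: the figures are invented for illustration, and the structure and names are our own rather than LMI's or the bureaus' actual data formats.

```python
# Minimal sketch of the indicator comparison described above: checking a
# BRAC-affected community's unemployment rate and per capita income
# against national averages over time. All figures are hypothetical.

national = {
    # year: (unemployment rate %, per capita income $)
    1991: (6.8, 19800),
    1995: (5.6, 23300),
}

community = {
    1991: (8.1, 17900),
    1995: (5.4, 22100),
}

for year in sorted(national):
    nat_unemp, nat_income = national[year]
    com_unemp, com_income = community[year]
    print(
        f"{year}: unemployment {com_unemp:.1f}% vs national {nat_unemp:.1f}% "
        f"({'above' if com_unemp > nat_unemp else 'at or below'} average); "
        f"per capita income ${com_income:,} vs ${nat_income:,}"
    )
```

A comparison of this kind shows whether a community tracks, lags, or overtakes national averages over time, but, as noted above, it cannot by itself isolate how much of any change is attributable to a BRAC action.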
During our site visits, we collected additional information to (1) enhance our understanding of the relationship between base closures and local communities and (2) provide a close-up view of how a base closure affects individual communities. To establish a baseline for our economic analysis, we obtained selected economic indicator data from the Logistics Management Institute (LMI), a Federally Funded Research and Development Center that maintains a database of key economic data for impact areas surrounding base closures during the four rounds. The data obtained were multiyear data (1988 through September 30, 1997) on total employment, unemployment rate, total income, per capita income, and population for local economic impact areas that experienced a base closure. The employment data originated in the Department of Labor's Bureau of Labor Statistics, and the income and population data, which were only available through 1995, came from the Department of Commerce's Bureau of Economic Analysis. The economic impact areas, based on 1990 census data, were defined using accepted standard definitions for metropolitan and nonmetropolitan statistical areas and reflected the impact areas used in the 1995 BRAC round. The 1995 BRAC areas were configured to reflect the residences of the majority of military and civilian employees at an activity. LMI routinely validates data and reconciles discrepancies as necessary. We also performed a limited reliability assessment of the data by comparing selected data to Bureau of Labor Statistics and Bureau of Economic Analysis data available on those agencies' Internet sites. We did not find any discrepancies.

In analyzing the economic condition of BRAC-affected communities over time, we compared each community's unemployment rate and per capita income to the corresponding national averages for the period from the four BRAC rounds to the present, to assess whether the communities were performing below national averages. We analyzed the data for bases closed under BRAC that had government and contractor civilian personnel reductions of 300 or more. While our assessment does provide an overall picture of how these selected communities compare to other communities based on national averages, it does not necessarily isolate the condition, or the changes in that condition, that may be attributable to a BRAC action.

In selecting sites for our visits, we sought to satisfy several criteria: significant civilian job loss; at least one site from each military service; geographic diversity; at least one major shipyard or depot complex; and a mix of urban and rural sites. We focused on 1991 BRAC round sites because DOD and communities in that round had the benefit of experience gained in the 1988 round, while the 1993 and 1995 rounds had not progressed far enough to allow an assessment of recovery. Our site visits included Philadelphia Naval Base and Shipyard, Pennsylvania; Naval Air Station, Chase Field, Texas; Eaker Air Force Base, Arkansas; Castle Air Force Base, California; Fort Devens, Massachusetts; and Fort Benjamin Harrison, Indiana. At these sites, we met with various local officials, including business leaders and government officials, to gain their perspective on how the closures affected their communities and how the communities recovered. While information of this nature reflects unique experiences and thus presents a limited basis for drawing general conclusions about the impacts and recovery of all communities undergoing base closures, we were able to highlight common trends and themes.
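The type of comparison described above can be illustrated with a brief sketch. The following example is purely illustrative: the community and national figures are hypothetical and do not come from the LMI database or our analysis. It simply shows the logic of flagging years in which an impact area trails national averages on either indicator.

    # Illustrative sketch only: hypothetical figures, not LMI's database or
    # our actual analysis. Flags years in which a BRAC-affected impact area
    # trails the nation on either indicator.

    community = {  # hypothetical multiyear indicators for one impact area
        1991: {"unemployment": 7.9, "income": 17800},
        1993: {"unemployment": 8.4, "income": 18350},
        1995: {"unemployment": 6.1, "income": 19900},
    }
    national = {  # hypothetical national averages for the same years
        1991: {"unemployment": 6.8, "income": 19100},
        1993: {"unemployment": 6.9, "income": 20300},
        1995: {"unemployment": 5.6, "income": 22200},
    }

    for year in sorted(community):
        c, n = community[year], national[year]
        below = c["unemployment"] > n["unemployment"] or c["income"] < n["income"]
        print(f"{year}: unemployment {c['unemployment']}% vs {n['unemployment']}%, "
              f"per capita income ${c['income']:,} vs ${n['income']:,}"
              + (" -- below national averages" if below else ""))

In our actual analysis, the indicators came from the Bureau of Labor Statistics and the Bureau of Economic Analysis, as described above.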
In performing site visits, we asked local officials to discuss how base reuse contributes to economic recovery, and some of those discussions covered governmental assistance and the property disposal process. We also collected data on certain federal assistance provided to BRAC communities (see app. I). Because of data problems and the subsequent inability to make valid projections or generalizations, we did not track the after-closure employment status and job quality of specific individuals who lost their jobs due to base closures. Personnel data were generally incomplete or not readily available at closing bases, and local employment officials had only limited relevant data. We did, however, obtain data on the estimated number of civilian jobs lost and actual jobs created at major base closures and realignments for the four rounds (see app. II). We performed our review between August 1997 and September 1998 in accordance with generally accepted government auditing standards. We obtained DOD comments on a draft of this report. The comments have been summarized in chapters 2 through 5 and are presented in their entirety in appendix V.

By the end of fiscal year 1998, DOD had completed action on about 85 percent of the 451 BRAC commissions' recommendations for the four BRAC rounds. The four BRAC commissions actually generated 499 recommendations; however, only 451 of these ultimately required action because 48 were changed in some manner by recommendations of a later commission. According to DOD documentation, all of the 1988 and 1991 round recommendations were completed within the statutory 6-year period. Furthermore, from the first round to the second, the services accelerated the pace at which they completed recommendations, from an average of just under 5-1/2 years for the first round to just over 3 years for the second. DOD's plans to complete the remaining 1993 and 1995 round recommendations indicate that the pace will be consistent with the 1991 round.

Despite timely completion of BRAC-recommended actions, disposal of unneeded base property is proceeding slowly. About 464,000 acres were designated as unneeded real property at closing or realigning locations, but, as of March 1998, only about 31 percent of the property designated for nonfederal users had actually been transferred by formal deed, and only 8 percent of the property designated for federal entities had actually been transferred. DOD and service officials cited various impediments, such as environmental cleanup, that extend property disposal time frames. To help ease this situation, DOD has been using interim leasing to get usable property to users more quickly until a deed transfer can be issued. Nonetheless, DOD has much to do before it completes the transfer of its unneeded property.

DOD has typically reported to the Congress on its progress in implementing BRAC actions that the services have defined as major. According to a DOD official, DOD has completed 77 of 152 major recommendations. However, what constitutes a major or minor recommendation is not always apparent because the services define these terms differently. We analyzed all BRAC commissions' recommendations directed to the military departments and defense agencies. Our count of 499 recommendations is based on the BRAC commissions' reports, which are somewhat arbitrary in the way they enumerate recommendations. For example, a closure or a realignment in which several missions are disestablished or relocated may count as one recommendation or several.
The types of recommendations are shown in figure 2.1. Overall, according to DOD data, 383, or about 85 percent, of the 451 recommendations were completed as of September 30, 1998, including all recommendations associated with the 1988 and 1991 rounds; 68 actions remain in process. For the 1993 and 1995 rounds, the completion rates were 87 and 60 percent, respectively, at that time. Further, DOD reported completing recommendations within mandated time frames. The statutory completion dates for the four rounds were September 30, 1995; July 11, 1997; July 2, 1999; and July 13, 2001, respectively. Our review showed 1988 and 1991 round recommendations were completed within the required time frames. DOD's schedule for the 1993 and 1995 rounds also anticipates completion within mandated time frames. According to DOD, the sooner a BRAC recommendation is completed, the faster savings can begin to materialize and unneeded property can be transferred to users who can benefit by putting the property to alternative use. We agree that, to the extent military operations at a closing base can be terminated earlier than expected, recurring savings can begin to accrue sooner and the property disposal process can get underway earlier.

The average time required to complete a BRAC recommendation has been shortened in every round since the 1988 round, which took an average of nearly 5-1/2 years to complete. The subsequent rounds were each over two-thirds complete after 3 years. Service officials generally attributed the faster completion rate to lessons learned during the first round. However, they added that implementation of individual recommendations could be slowed by unavailability of funds or the complexity of actions required to construct new facilities and move organizations and units. The cumulative pace of completion for each round and the average completion pace for all four rounds are shown in figure 2.2.

BRAC-affected installations contained about 464,000 acres that the individual military services and components did not need. Property disposition has been decided for about 79 percent of this acreage. Plans indicate that federal entities, including DOD activities, are the largest recipient of this property. As of September 30, 1997, 46 percent, or about 213,000 acres, of the unneeded BRAC property was to be retained by the federal government; 33 percent, or about 154,000 acres, was slated for nonfederal users such as state and local authorities or private parties; and the disposition of 21 percent, or about 98,000 acres, had not yet been determined. However, only about 8 and 31 percent of the property designated for federal and nonfederal recipients, respectively, had been transferred. DOD officials cited various factors that affect property disposal. These factors include the iterative process of preparing site-specific reuse plans, environmental cleanup, preparing conveyance documentation, and, in some cases, communities' delays in assuming responsibility for the property. To get more property to users faster, DOD has been leasing property for several years, pending transfer of title.

As shown in figure 2.3, DOD data indicate that a substantial portion of BRAC acreage will be retained by DOD or transferred to other federal agencies. Most of the property to be retained by the federal government is to go to the Fish and Wildlife Service, Department of the Interior, for use as wildlife habitats (see fig. 2.4).
Other federal agencies, such as the National Park Service, the Federal Aviation Administration, and the Department of Veterans Affairs, are also to receive property. Further, DOD intends to retain property for, among other things, administrative space for the Defense Finance and Accounting Service. As previously noted, DOD is actually retaining more property than this because, in many cases during the BRAC process, the property of an active military service base was turned over to a reserve component without being declared excess; such actions would not be displayed in the figure. In particular, available DOD data indicate that over 330,000 acres of BRAC property are being retained for use by the reserve components. About 324,000 acres of this amount are attributable to five Army BRAC 1995 round bases—Fort Hunter Liggett, California; Fort Chaffee, Arkansas; Fort Pickett, Virginia; Fort Dix, New Jersey; and Fort McClellan, Alabama.

In transferring property to nonfederal entities, several conveyance methods—public benefit transfers, economic development conveyances, and sales—are used (see fig. 2.5). Through public benefit transfers, property can usually be obtained at no cost for public benefit purposes such as airports, parks and recreation, education, and homeless assistance. Through economic development conveyances, property can usually be obtained at no cost or no initial cost for economic development and job creation purposes. To use this authority, however, a nonfederal entity must show that economic development and job creation cannot be accomplished under established sales or public benefit transfers. Finally, property can be sold. Our work at seven BRAC sites showed the various forms of property conveyance the communities were using to obtain property. Appendix III provides a summary of the status of property disposition at these sites.

In the early years of BRAC, DOD was projecting higher revenue from land sales than it is now experiencing. DOD originally projected about $4.7 billion in revenue from such sales for the four closure rounds; however, according to the fiscal year 1999 budget, total expected sales are about $122 million for those rounds. The decrease in sales is attributable primarily to national policy changes and legislation that emphasize assisting communities that are losing bases.

While DOD has plans for transferring most of its unneeded property, actual transfers are much less than planned. Overall, DOD data indicate that about 14 percent, or about 64,000 acres, of the 464,000 acres of unneeded property has been transferred to federal or nonfederal entities. Specifically, about 17,000 acres have been transferred to federal entities and about 47,000 acres have been transferred to nonfederal entities. Excluding the property for which no plans have been established for final disposition, DOD has reportedly transferred about 8 percent of the property designated for federal entities and about 31 percent of the property designated for nonfederal entities.

Progress in transferring title of BRAC property to users is slowed by many factors. Planning for reuse can be a lengthy process, and many actions must precede disposition. For example, the Defense Base Closure and Realignment Act of 1990, as amended, requires the Secretary of Defense to consult with local authorities about their plans before transferring former military property.
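As a check, the transfer percentages reported above follow from the acreage figures (our arithmetic, rounded):

\[
\frac{17{,}000}{213{,}000} \approx 8\% \text{ (federal)}, \qquad \frac{47{,}000}{154{,}000} \approx 31\% \text{ (nonfederal)}, \qquad \frac{17{,}000 + 47{,}000}{464{,}000} \approx 14\% \text{ (overall)}.
\]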
The law also states that the Secretaries of Defense and of Housing and Urban Development must review and approve the reuse plan of a local redevelopment authority before DOD can transfer property to assist the homeless. In addition, DOD guidelines require that a redevelopment authority complete a reuse plan before DOD can transfer property for economic redevelopment and job creation purposes. Furthermore, the need to address environmental contamination can also delay final disposition. (See ch. 4 for a discussion of environmental laws and regulations and other environmental issues.) Finally, according to DOD officials, some communities are not prepared to assume responsibility for control of unneeded base property. Specifically, communities need to, among other things, establish an organization to administer prospective property, determine uses, and arrange financing to provide for property protection, maintenance, and improvements.

While awaiting property transfers, communities can sometimes begin using base property through interim leasing. Military service leasing policies and practices provide opportunities for communities to lease property before environmental cleanup and final disposal are complete and then find tenants to sublease it. According to community representatives, leasing is a useful interim measure to promote reuse and job creation. Leasing can also benefit DOD, because the community assumes responsibility for, and pays the costs of, protecting and maintaining the property. Interim leasing may not always be viable, however. Prospective tenants may experience financing difficulties or may be reluctant to sublease property while DOD retains title. For example, DOD and community officials told us that tenants may have difficulty obtaining financing for redevelopment because banks are disinclined to lend money under these circumstances. Also, since much of the property under consideration has remaining environmental contamination, there are liability issues to be addressed, and tenants are reluctant to lease until these are resolved.

The services do not centrally maintain leasing information and could not readily provide comprehensive data. However, the service data we were able to obtain indicated that during the second quarter of fiscal year 1998, nearly 38,000 acres, or 8 percent of the unneeded BRAC acreage, were operating under some type of lease. According to these data, about 25 percent of the property planned for nonfederal recipients and awaiting transfer was under interim leases. Three of the sites where we performed work on property disposal (see app. III) were using leases while actions for final disposal progressed. The conditions we noted regarding leases are summarized below:

At the former Mather Air Force Base, California, about 93 percent of the property requested under an economic development conveyance is operated under an interim lease. The remaining property under this conveyance has already been deeded, although a portion of the property devoted to family housing has been vacant since the base closed in 1993 and has increasingly deteriorated as negotiations continued between the Air Force and the community over property transfer. Agreement was recently reached for a negotiated sale of the property. Also, the airport property is under a 55-year lease to Sacramento County, California, pending a public benefit conveyance.
At the former Vint Hill Farms Station, Virginia, the Army has approved several interim leases and is planning an additional lease to support development of a golf course.

At the former Mare Island Naval Shipyard, California, the Navy and the local reuse authority have entered into a short-term lease for about 48 percent of the property requested under an economic development conveyance. As of July 1998, the local authority had 58 subleases that covered over 178 acres of land and buildings.

DOD has reportedly completed most of the commissions' recommendations and has accelerated the pace of completion since the 1988 round. Those recommendations that remain outstanding are generally attributable to the 1993 and 1995 rounds, and DOD's plans call for closing them out within required time frames. However, the actual transfer of unneeded base property has been slow due to a variety of factors. Activities and rules governing the disposition process, while designed to ensure that all requirements of applicable laws and regulations are met, contribute to the slow rate of progress. This situation has been somewhat eased by the use of leases. Nonetheless, DOD has much to do before it completes its task of transferring remaining BRAC property it no longer needs. DOD stated that its goal in property disposal is to convey property as quickly as possible, both to advance the local communities' economic recovery and to accelerate DOD savings by eliminating the costs associated with maintaining the property. However, DOD acknowledged that property transfer is a complex process involving many challenges, including the time needed to clean up BRAC property. In this regard, DOD stated it supports a variety of initiatives to accelerate, refine, or simplify the process.

Through 2001, DOD estimates it will achieve a net savings of about $14 billion as a result of BRAC actions. Beyond 2001, DOD expects to save about $5.7 billion annually. Because DOD is relying on BRAC savings to help free up funds for future defense programs, such as weapons modernization, and has adjusted its prospective budgets to reflect savings, it is important that savings estimates be adjusted to reflect experience. The services have updated costs annually, but they have not routinely updated savings. The lack of current data on savings raises doubts about the precision of net savings estimates, and the estimates should be considered rough orders of magnitude. In addition, DOD cost estimates exclude two categories of closure-related costs. First, one-time costs of over $1 billion in federal financial assistance provided to communities affected by BRAC actions are excluded. While these costs are incurred by the federal government, they are not funded through BRAC budget accounts. Second, DOD has not included estimated costs of at least $2.4 billion to complete environmental cleanup at BRAC bases in its annual savings projections beyond 2001. Including these costs would reduce overall savings and delay the point at which net savings begin, even though the impact is relatively small. Despite these omissions and the lack of current savings data, our prior work and the work of others, such as the DOD Inspector General, indicate that BRAC net annual savings will be substantial once implementation costs have been offset. DOD expects that the four BRAC rounds will cumulatively result in substantial net savings through 2001 and in additional ongoing recurring savings after that time.
DOD expects one-time costs of about $23 billion for the period 1990 through 2001, while achieving total savings of almost $37 billion, resulting in net savings of about $14 billion (see fig. 3.1). As shown in the figure, DOD reports that cumulative BRAC savings are expected to surpass cumulative BRAC costs for the first time in fiscal year 1998. If community assistance costs of over $1 billion are considered a BRAC cost and included in the costs and savings calculations, the breakeven point for costs and savings would occur later in fiscal year 1998.

BRAC costs and savings differ by round because of variations in the number and scope of closures and realignments in each round. The BRAC 1991 round is the only one in which DOD expects to achieve a net savings during the 6-year implementation period; after the implementation periods, however, DOD expects substantial recurring savings for all BRAC rounds. The highest costs occurred in the BRAC 1993 round, but this round also accounted for the highest level of estimated recurring net annual savings. The lowest costs occurred in the BRAC 1988 round, but this round is expected to produce the lowest estimated annual recurring savings. For the 6-year implementation periods for the rounds, total estimated costs are slightly higher than total estimated savings; however, following 2001, DOD estimates annual recurring savings of $5.7 billion (see table 3.1).

Potential costs and savings of a BRAC action were factors the BRAC commissions considered in recommending which bases to realign and close. DOD developed initial cost and savings estimates by using its Cost of Base Realignment Actions (COBRA) model to compare various alternative BRAC actions. While COBRA was useful in the decision-making process, it was not intended to produce data for developing specific cost and savings estimates for any particular action that was to be implemented. After BRAC decisions were finalized, DOD intended to replace the COBRA estimates with more refined estimates for submission in its annual budgets to the Congress. Starting in fiscal year 1993, DOD was required to update these estimates on an annual basis in its budget submissions.

The COBRA model consists of a set of formulas that incorporate standard factors, such as moving and construction costs, as well as base-specific data, such as average salaries and overhead cost computations. It incorporates data pertaining to three major cost elements—the current cost of operations, the cost of operations after a BRAC action, and the cost of implementing the action. In our analyses of the BRAC commissions' recommendations for the four BRAC rounds, we found and reported on various problems with COBRA. Improvements were made to the model after each BRAC round. In our review of the 1995 BRAC round, we stated that COBRA estimates are only a starting point for preparing BRAC implementation budgets and that COBRA is a comparative tool rather than a precise indicator of budget costs and savings. DOD agrees that COBRA provides a methodology for consistently estimating costs and savings for alternative closure options but notes that COBRA estimates are not intended to be used in its budget submissions. DOD submits cost and savings estimates for BRAC actions with its annual budget. COBRA estimates were a starting point for the military services in preparing initial BRAC implementation budgets.
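As a rough check, the net savings figure cited at the beginning of this chapter follows directly from DOD's reported totals (our arithmetic, rounded):

\[
\underbrace{\$37\ \text{billion}}_{\text{estimated savings, 1990--2001}} - \underbrace{\$23\ \text{billion}}_{\text{one-time costs}} \approx \$14\ \text{billion in net savings through 2001}.
\]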
BRAC legislation, supplemented by DOD Financial Management Regulations, requires that for fiscal year 1993 and thereafter, DOD submit annual schedules estimating BRAC costs and savings, as well as the period during which savings are to be achieved. DOD components are required to prepare budget justification books for each BRAC commission's recommendations, with narrative and financial summary exhibits. Each service is also required to prepare a cost and savings exhibit for each base closure package, showing one-time implementation costs, anticipated revenues from land sales, and expected savings. The projected BRAC costs and savings are reported in the budget for the 6-year implementation period for each round. The Congress uses these estimates in appropriating funds annually for BRAC actions. Data developed for the budget submissions differ from those in COBRA for a variety of reasons, including the following:

Some factors in COBRA estimates are averages, whereas budget data are more specific.

COBRA costs are expressed in constant-year dollars; budgets are expressed in inflated dollars.

Environmental restoration costs are not included in COBRA estimates, but these costs are included in BRAC implementation budgets.

COBRA estimates show costs and savings pertinent to a given installation even if multiple tenants are involved; BRAC implementation budgets represent only a single component's costs.

Accurately gauging BRAC savings is important because DOD is depending on them to help fund future defense programs, such as weapons modernization. To the extent that the savings are greater than estimated, DOD could have more resources for future programs than anticipated; the opposite would hold true if the savings are less than estimated. DOD and service BRAC officials stated that estimated BRAC savings are formally applied to future annual budgets in the budget process. Estimated amounts of net savings projected at the beginning of a BRAC round are subtracted from the expected future cost of each service's plans in DOD's Future Years Defense Program (FYDP). These early estimates, according to DOD and service officials, are generally not updated with more current estimates of savings. Further, the services have discretion in how they apply the estimated savings. DOD officials told us, for example, that the Army distributes savings across a number of different budgetary accounts, while the Navy applies savings as a lump sum against future budget authority. We could not confirm that all BRAC savings estimates were applied to future budgets because they may be combined with savings from other initiatives or, as in the Army's case, distributed in small amounts across many accounts.

While DOD and its components have emphasized the importance of accurate and current cost estimates for their annual BRAC budgets, the military services have not placed a priority on updating BRAC savings estimates. DOD has consistently updated BRAC costs in its annual budget; however, the services seldom update estimates of BRAC savings and do not change savings estimates to reflect actual savings. Among the reasons savings estimates are not updated are that DOD's accounting systems are not designed to track savings and that updating savings has not been a high priority. For the BRAC 1991, 1993, and 1995 round budget submissions, the military components reviewed and revised their total cost estimates for base closures and realignments annually.
The components provide guidance to their major commands and installations detailing instructions for supporting the BRAC costs included in budget submissions. Each service's estimated costs in the budget requests showed annual changes of varying size. Costs for two defense agencies—the Defense Logistics Agency and the Defense Information Systems Agency—did not change in some years, but agency officials told us that the costs were carefully evaluated during the budget process. We did not verify the accuracy of the estimates; however, the DOD Inspector General, in a BRAC 1993 audit of costs and savings, noted that DOD has a reasonably effective process for updating BRAC cost estimates.

In contrast, savings updates were infrequent. Although our review showed the Defense Logistics Agency and the Defense Information Systems Agency updated savings projections annually, the services have seldom revised savings estimates, despite requirements to do so. The BRAC 1990 legislation required that, for fiscal year 1993 and thereafter, DOD submit annual schedules estimating the costs and savings of each BRAC action. In 1996, DOD provided additional budget guidance to the military components, requiring that savings estimates be based on the best projection of the savings that would actually accrue from approved realignments and closures. DOD Defense Planning Guidance issued that year stated that, as a matter of general policy, the military components should track actual BRAC savings and compare them with projected savings.

The Air Force has not updated its savings estimates, and the Army and the Navy have rarely done so. For the 1991, 1993, and 1995 BRAC rounds, each service had 11 opportunities in its annual budget submissions to update savings estimates for one round or another—for a total of 33 opportunities. Altogether, the services submitted a total of seven updates: the Navy updated savings in four budget submissions, and the Army updated savings in three.

In addition to not updating its savings estimates, the Air Force did not refine its initial COBRA estimates for its annual budget submissions. The Air Force's budget estimates consist of COBRA data, with adjustments for inflation and recurring cost increases at gaining installations. Air Force officials stated that the Air Force BRAC office never instructed major commands to update savings estimates. They stated that, at the outset, the Air Force decided not to update savings estimates because there was no accounting system to track savings changes and no resources to create one. These officials agreed that COBRA estimates are broad estimates that may differ from actual savings.

In contrast, the Navy refined COBRA estimates for its budget submission at the start of each round. Thereafter, according to Navy officials, it was Navy policy to update savings only when major BRAC changes occurred that could affect overall savings. For example, the Navy's 1998 budget submission for the 1995 round showed increased savings over the prior year's submission. Specifically, Navy officials stated that the decisions to privatize workloads at the Naval Air Warfare Center at Indianapolis, Indiana, and the Naval Surface Warfare Center at Louisville, Kentucky, instead of closing them and transferring some jobs to other locations, resulted in greater savings estimates at both locations. These centers were the only 1995 round installations for which the Navy updated the savings estimates; savings for other locations were neither reviewed nor revised.
However, we believe the revised savings estimates for these two locations may be overstated because our previous reviews of BRAC actions involving privatization have questioned both its cost-effectiveness and whether it reduces excess capacity. In particular, our 1996 report on the Navy's Naval Surface Warfare Center in Louisville showed that the plan for privatizing workloads in place will not reduce excess capacity in the remaining depots or the private sector and may prove more costly than transferring the work to other depots.

Like the Navy, the Army revised COBRA savings estimates into more precise estimates based on its BRAC implementation plans but, until recently, had not instructed commands to update initial savings estimates annually. Acting on Army Audit Agency recommendations, the Army updated its savings estimates for selected BRAC 1995 actions in the fiscal year 1999 budget. The Army Audit Agency reviewed costs incurred and avoided at 10 BRAC 1995 closures and developed revised savings estimates. In August 1997, the Army BRAC office instructed major commands to incorporate these revised savings estimates in the 1999 budget request and to update estimates annually in future budgets. The Army, however, did not review or revise savings estimates for any installations that were not included in the Army Audit Agency review.

Officials cited a number of reasons for not routinely updating savings estimates. BRAC officials told us that the emphasis in preparing the annual budget has always been on updating costs—not savings. Service officials stated that updating savings estimates would be very labor intensive and costly and that a fundamental limitation in updating savings is the lack of an accounting system that can track savings. Like other accounting systems, DOD's system is oriented toward tracking cost-related transactions, such as obligations and expenditures. In addition, as we reported in July 1997, some DOD and service officials stated that the possibility that the components' appropriations would be reduced by the amount of savings gives them a disincentive to track savings separately.

BRAC net savings estimates consist of a comparison of BRAC expenditures with anticipated savings, but they exclude some BRAC-related costs. First, expected environmental cleanup costs of at least $2.4 billion after 2001 are not included in annual recurring savings estimates. (See ch. 4 for a discussion of DOD's environmental program for BRAC bases.) Second, BRAC-related economic assistance costs, much of which is funded through agencies other than DOD, are not included in the calculation of one-time implementation savings. We identified about $1.1 billion that was provided in assistance for purposes such as base reuse planning, airport planning, job training, infrastructure improvements, and community economic development:

About $334 million was provided by the Department of Commerce's Economic Development Administration to assist communities with infrastructure improvements, building demolition, and revolving fund loans.

About $271 million was provided by the Federal Aviation Administration to assist with converting military airfields to civilian use.

About $210 million was provided by the Department of Labor to help communities retrain workers who lost their jobs because of closures.

About $231 million was provided by DOD's Office of Economic Adjustment to help communities plan the reuse of BRAC bases.
About $90 million in unemployment compensation was provided for employees who lost jobs during the four BRAC rounds. According to DOD, data were not available to provide base-by-base estimates for this cost.

Despite the imprecision associated with DOD's cost and savings estimates, our analysis continues to show that BRAC actions will result in substantial long-term savings after the costs of closing and realigning bases are incurred. For example, we reported in April 1996 that overall base support costs for DOD had been reduced, although DOD's reporting system could not indicate how much of the reduction was due to BRAC and how much was due to force structure or other changes. We found that DOD had expected to reduce annual base support costs by $11.5 billion by fiscal year 1997 from a fiscal year 1988 baseline, resulting in a cumulative reduction over the period of about $59 billion. In addition, an Army Audit Agency audit concluded that BRAC actions would result in overall savings, although savings estimates were not precise. In its July 1997 report, the Army Audit Agency concluded that savings would be substantial after full implementation for the 10 BRAC 1995 sites it had examined but that annual recurring savings beyond the implementation period were 16 percent less than the major commands' estimates.

DOD Inspector General audits have also concluded that savings will be substantial. The Inspector General's report on bases closed during BRAC 1993 stated that, for the implementation period, savings will overtake costs sooner than expected. DOD's original budget estimate for the 1993 round indicated costs of $8.3 billion and savings of $7.4 billion, for a net cost of $900 million. The Inspector General's audit showed that the costs were closer to $6.8 billion and that savings could approach $9.2 billion, which would result in up to $2.4 billion in net savings. The report indicated that the greater savings were due to factors such as obligations for one-time implementation costs that were never adjusted to reflect actual disbursements, canceled military construction projects, and a smaller increase in overhead costs than originally projected at a base receiving work from a closing base. Additionally, some undefined portion of the savings included personnel reductions that could not be solely attributed to BRAC.

The Inspector General's audit of selected BRAC 1995 closures showed variation between budget estimates and implementation experience. The audit of 23 closed bases noted that savings during the implementation period were within 1.4 percent of budget estimates and costs were within 4.3 percent. The audit, however, excluded costs and savings from two activities—the Naval Air Warfare Center in Indianapolis and the Naval Surface Warfare Center in Louisville—that were privatized in place, and our prior reviews have raised cost-effectiveness questions about privatization-in-place efforts. As noted previously, our 1996 report on the Navy's Louisville activity showed that the plan for privatizing workloads may prove more costly than transferring the work to other depots having underutilized capacity.

DOD is depending on BRAC savings to help fund future defense programs. Although evidence indicates that BRAC savings should be substantial, savings estimates have not been routinely updated, and certain costs are not considered in developing the estimates, thereby calling into question the degree of precision associated with the expected savings.
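The swing in the BRAC 1993 round figures cited above can be made explicit (our arithmetic from the reported estimates):

\[
\text{Original budget: } \$7.4\text{B} - \$8.3\text{B} = -\$0.9\text{B (net cost)}; \qquad \text{IG audit: } \$9.2\text{B} - \$6.8\text{B} = \$2.4\text{B (net savings)}.
\]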
To the extent that actual BRAC savings differ from the estimated amounts applied to future budgets, DOD either will have to seek additional funds for programs it hoped to fund with BRAC savings or will have more funds available than anticipated. DOD concurred with our conclusion that BRAC savings will be substantial once implementation costs have been offset. DOD acknowledged that savings estimates are important because they help measure the value of the BRAC process. However, DOD stated that such estimates are difficult to track and update and that it does not maintain a separate system to account precisely for savings. Nonetheless, DOD stated it is taking measures to improve the accuracy of its savings estimates. For example, DOD noted that the DOD Comptroller, in a May 1998 memorandum to the military services, had reiterated the requirement to update savings estimates in annual budget submissions to the extent practical.

The process of making BRAC property available for transfer and reuse involves cleaning up environmental contamination resulting from years of military operations. While DOD had an environmental program at its military bases prior to BRAC 1988, the onset of realignments and closures and the desire to cease operations and transfer property as quickly as possible have heightened the interest in environmental cleanup. Addressing environmental problems has proven to be both costly and challenging for DOD. Although DOD has not compiled a total cost estimate, available DOD data indicate that BRAC environmental costs are likely to exceed $9 billion, of which at least $2.4 billion is needed to continue restoration after the BRAC implementation authority expires in fiscal year 2001. Cleanup is expected to continue many years beyond that time, and the potential for higher costs exists, given uncertainties associated with the extent of UXO cleanup and the monitoring of cleanup remedies needed at selected sites.

In the early years of the BRAC program, much of the emphasis was on site studies and investigations. Now, DOD has reported that, with much of that investigative work completed, the program's emphasis has shifted to actual cleanup. To expedite cleanup and help promote the transfer of BRAC property, DOD established the Fast-Track Cleanup program in fiscal year 1993 to remove needless delays in the cleanup process while protecting human health and the environment. Most of the key provisions of the program have been met. Further, DOD, the services, and regulators generally agree that the program has contributed to environmental program progress. However, while some of the steps leading to actual cleanup have been accelerated, cleanups themselves can still be lengthy, and projections for completing them extend well into the next century.

The BRAC environmental program involves restoring contaminated sites to meet property transfer requirements and ensuring that the property is in compliance with federal and state regulations. The program consists of restoration, closure-related compliance, and program planning and support activities. Restoration activities involve the cleanup of contamination caused by past disposal practices, which were accepted at the time but have proved damaging to the environment.
Compliance activities ensure that closing bases clean up hazardous waste following the specific practices outlined in environmental laws and regulations. Program planning is generally associated with examining the environmental consequences of property transfer and reuse decisions. Program support activities include program management, administration, travel, training, and other support requirements, such as funds provided to the federal and state environmental regulatory agencies and the Agency for Toxic Substances and Disease Registry.

Of the $23 billion estimated cost for the entire BRAC program through 2001, about $7.2 billion, or 31 percent, is associated with environmental protection efforts. Additional environmental costs of at least $2.4 billion are expected after that time because the duration of environmental activities depends on the level of cleanup required for reuse and the selected remedy. In some cases, the contamination problem can be addressed quickly, but in other cases, cleanups may require years to complete. The estimated costs after 2001 are expected to be incurred over a number of years and would therefore only slightly reduce DOD's projected annual recurring savings over the long term. Currently, available data indicate that environmental program costs at BRAC locations are expected to exceed $9 billion (see table 4.1); however, this estimate is conservative because DOD has not projected all costs for the program's duration. Further, costs could increase if (1) cleanup standards or intended property reuses are revised, (2) DOD undertakes significant UXO cleanups, or (3) selected remedies fail to clean up contaminated sites. Likewise, costs could decrease if (1) cleanup standards or intended property reuses are revised or (2) new cleanup technologies are developed and implemented. Over 40 percent of the $9.6 billion estimate had been obligated through fiscal year 1997. Over 75 percent of the total environmental cost is expected to be devoted to restoration actions. As noted in the table, some cost estimates are not all-inclusive because either DOD had not estimated future costs or the data were commingled with other environmental data.

A major potential compliance cost that is not included in DOD's estimate is the cleanup of UXO. Because DOD does not define the cleanup of UXO as a restoration activity, UXO cleanup costs are not included in DOD's estimate for the restoration of BRAC bases. For example, according to Fort Ord's base environmental coordinator, DOD's annual restoration report does not include the estimated $150 million cost of UXO cleanup at the fort. The Army indicated that such costs were not included in DOD's annual cleanup report because they were considered compliance, not restoration, costs. Regardless, UXO must be cleaned up or addressed in some manner before property can be transferred and reused.

While environmental cost estimates have risen over the years and the potential exists for even greater costs, DOD has decreased its cost estimate to complete BRAC cleanup at identified sites by about $900 million over the last year. Among the reasons the services have given for the decrease are factors such as enhanced estimating capability based on experience, improved site identification, and the use of innovative technology. As DOD noted, some early estimates were based on worst-case scenarios, which have generally not occurred.
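The environmental cost figures cited above reconcile as follows (our arithmetic from the reported amounts):

\[
\frac{\$7.2\text{B}}{\$23\text{B}} \approx 31\% \text{ of BRAC costs through 2001}; \qquad \$7.2\text{B} + \$2.4\text{B} = \$9.6\text{B in total environmental costs}.
\]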
DOD also sometimes assumed that it would be required by local redevelopment authorities to clean property to the highest cleanup standard, that of unrestricted use; this assumption has proved untrue in some cases. For example, at the Long Beach Naval Station, the estimated cost to complete cleanup at the installation decreased from $152.4 million in fiscal year 1996 to $85.4 million in fiscal year 1997. While the earlier estimate was based on dredging all contaminated harbor sediments, Navy officials said they were able to decrease the estimated cleanup cost by negotiating a reduced amount of dredging and cleanup with the community. Further, the adoption of some innovative cleanup technologies is expected to reduce costs.

Ten years into the cleanup process, the military services have voiced increased confidence in their environmental cleanup estimates for sites where contamination exists. This confidence is due, in part, to what they perceive as their enhanced experience in identifying contaminated sites and selecting appropriate cleanup methods. The services report that they have drawn on the experiences of successive closure rounds and on their continuing programs at active installations. Assessing the accuracy of estimates, however, is difficult because data upon which to base conclusions are limited. Fiscal year 1996 was the first full year in which the services used a new model, referred to as the cost-to-complete model, to develop their estimates. Whereas earlier estimates were based on completing "projects," which could involve multiple sites with differing cleanup requirements, the new model formulates estimates on a site-by-site basis. The services stated that these cost-to-complete estimates are based on current remedies and known contamination; the discovery of new contamination or the development of new technology could change them. The cost to complete cleanup could also increase if selected remedies are unsuccessful and other remedies are required.

While overall cleanup cost estimates for BRAC bases are decreasing, the processes of identifying, designing, and implementing a cleanup program are nonetheless costly. As we reported in 1996, key factors contributing to the high cost of cleanup are (1) the number of contaminated sites and difficulties associated with certain types of contamination, (2) the requirements of federal and state laws and regulations, (3) the lack of cost-effective cleanup technology, and (4) the intended property reuse.

Although most bases had some type of environmental cleanup activity while the bases were active, DOD officials told us that the requirements for disposing of property usually entail a more extensive review of potential contamination than is necessary for ongoing operations. As a result of such a review, more contaminated sites are often identified. While most BRAC bases have been closed and most investigative studies have been completed, new sites are still being identified. For example, DOD reported a total of 4,960 sites requiring cleanup in fiscal year 1997, an increase over the 4,787 sites reported in fiscal year 1996. As we have reported, the extent of site contamination is often difficult, time-consuming, and costly to investigate and may not be fully determined until environmental cleanup is underway.
For example, at the Tooele Army Depot, the base environmental coordinator indicated that by 1990 sufficient sites had been identified to place the depot on the National Priorities List (NPL), yet nine additional sites were identified after the property was selected for closure in 1993. With cleanup underway in 1995, another contaminated site was identified. The coordinator estimated that cleanup of the last site alone would cost an additional $12 million.

The type of contamination also affects cleanup costs. For example, cleaning up contaminated ground water, an environmental problem at many closing bases, is often expensive. Further, given available technology, cleaning up UXO is costly, labor intensive, time-consuming, and dangerous. According to a recent Defense Science Board Task Force report, DOD does not know the full extent of the UXO problem at its domestic bases, BRAC or otherwise, so it cannot accurately estimate cleanup costs. However, the Board's report indicates that over 15 million acres on about 1,500 sites are potentially contaminated with UXO. The report notes that even if only 5 percent of the suspected sites require cleanup, costs could exceed $15 billion. While BRAC bases represent only a portion of this acreage, UXO contamination is a potentially costly and unresolved problem at BRAC bases. Issues still to be determined are how much acreage will require cleanup and to what degree. According to DOD, efforts are underway to identify requirements and provide a comprehensive evaluation of the need for a UXO program, and the services are identifying UXO requirements in their budgetary planning. Also, DOD is developing policy delineating the methods it will use for UXO cleanup. Until that policy is published in mid-1999 and experience is gained using the methods, it will be difficult to predict reliably what the cleanup will cost.

As we reported in September 1996, the requirements of federal and state environmental laws and regulations have a significant impact on the cost of environmental cleanup. Under the existing environmental legal framework, the cleanup standards and processes associated with existing laws, regulations, and executive orders establish the procedures for conducting assessments and cleanup of DOD's base closure property. (See app. IV for a partial listing of these requirements.) In addition to federal requirements, states may have their own requirements. These requirements vary by state and, in some instances, may be more stringent than the federal requirements. For example, California has some drinking water standards that are more stringent than federal standards, and thus contamination there could be more costly to clean up.

In many cases, technology used to clean contaminated property may reduce the costs of cleanup. However, there is an understandable reluctance on the part of the regulatory community, the services, and the communities to experiment with unproven technology because of the risks associated with innovation. While innovative technology offers the potential for reducing the cost of cleanup, it also entails a risk that the desired goal will not be achieved; in that case, both time and money will be lost, and another remedy must be implemented. New technologies being tested offer the potential to greatly decrease the cost of cleaning up groundwater, UXO, and other contaminants, but their effectiveness has not yet been validated.
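For illustration, the Defense Science Board figures cited above imply a rough average cost per acre; the Board did not report such a figure, and we derive it here only to convey the order of magnitude:

\[
5\% \times 15{,}000{,}000 \text{ acres} = 750{,}000 \text{ acres}; \qquad \frac{\$15\ \text{billion}}{750{,}000 \text{ acres}} \approx \$20{,}000 \text{ per acre, at a minimum}.
\]

Actual per-acre costs would vary widely with site conditions and the density of ordnance.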
For example, at the former Mare Island Naval Shipyard, the Navy is testing a new technique that could significantly reduce the cost of cleaning up contaminated soil. An Environmental Protection Agency engineer noted that this technique could reduce the per-ton cleanup cost of contaminated soil from $1,000 to $300. Although initial results have been promising, a Navy official cautioned that the new technique has been tested only on a small area and that the results have not been validated. Following validation, the technique must also go through the approval and adoption process before it can be put into practice.

The cost of cleanup also depends partly on the intended reuse of the property, as the reuse in part determines the cleanup standards. For example, if there is interest in developing residential housing on a former industrial site, a higher level of cleanup will be required than if the property is slated for industrial reuse similar to its former use. The residential cleanup standard, which involves having no restrictions on the future use of the property, can be the highest and costliest to achieve. A less expensive alternative (at least in the short run) is to limit the reuse of the property and maintain institutional controls, such as deed restrictions, fences, and warning signs, to inform the public of restricted activities. While the services noted that estimates were initially developed based on the expectation that property would be cleaned to the highest standard, this has not always occurred. Both DOD and environmental regulators indicate that communities have generally been reasonable in their expectations for cleanup. For example, recognizing the magnitude of the UXO problem at the Army's Jefferson Proving Ground, the community has not sought to have the property cleaned up. Instead, it is considering making the area a wildlife refuge.

Fiscal year 1996 was a turning point for the BRAC environmental cleanup program, with a greater emphasis on cleanups than on studies to determine what cleanups are needed. According to DOD, cleanup efforts since fiscal year 1996 have shifted from the investigative arena to the implementation phase. Thus, for the first time since 1988, when the first closure round was announced, DOD reported that 55 percent of BRAC-obligated environmental funds were spent on cleanup activities and 45 percent on investigations. Prior to that year, more money was obligated for investigations than for cleanup, primarily because disposing of unneeded property requires a more comprehensive review of the property. Not only are these investigations time-consuming, but they often uncover contaminated sites not previously identified.

While DOD has made progress in identifying contaminated sites and developing solutions, cleanup actions at most sites have yet to be completed, and long-term monitoring may be needed at many sites. As a result, DOD will continue to have financial obligations at BRAC installations for many years, and it is difficult to estimate when operations and maintenance, long-term monitoring, and their associated costs will end. DOD has established milestones for (1) forming BRAC cleanup teams, (2) completing environmental baseline surveys, and (3) putting remedies in place or completing responses at its BRAC bases. DOD data indicate that it has achieved the first two goals.
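The Mare Island per-ton figures cited above represent a 70-percent reduction in unit cost. Applied to a hypothetical 10,000-ton parcel (our illustration, not a Navy estimate), the arithmetic is:

\[
\frac{\$1{,}000 - \$300}{\$1{,}000} = 70\%; \qquad 10{,}000 \text{ tons} \times (\$1{,}000 - \$300) = \$7\ \text{million avoided}.
\]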
The services are working toward the third milestone, set in defense planning guidance, of (1) having remedial systems in place or responses complete at 75 percent of the bases and 90 percent of the sites by 2001 and (2) having remedial systems in place or responses complete at 100 percent of the installations and sites by 2005. According to DOD, as of September 30, 1997, 77 of 205 BRAC installations had all remedial systems in place or had achieved responses complete. Twenty of the 77 bases had achieved response complete for all sites. In some instances, response complete is the end of any activity at a site; in other cases, however, long-term operations and maintenance and monitoring may still be needed, depending on the specific site conditions and the chosen remedy. For example, soil contamination can be addressed by physically removing the contaminated soil or by implementing some type of on-site soil treatment system. These activities have different time and cost requirements associated with their use. Additionally, the chosen remedy may need to be replaced or modified over time if it fails to achieve the expected cleanup standard or if a new cleanup method is warranted and adopted. To ensure that a remedy is effective and that cleanup goals are met, long-term monitoring may be necessary—possibly in perpetuity.

While DOD cannot provide dates when operations and maintenance and long-term monitoring will be completed, estimated long-term monitoring costs associated with remedies are included in its projected costs after 2001. DOD officials indicated that such estimates assume that site closeout will occur 5 years after the remedial action is completed. If hazardous substances remain at a site, a review of the site remedy is required by law at least every 5 years after the initiation of remedial action, to ensure that ongoing response actions remain protective of human health and the environment. However, it is possible that operations and maintenance and monitoring costs could continue beyond this period. BRAC-earmarked funding ceases in 2001, however, and although the services are committed to completing cleanup, the BRAC environmental program will then have to compete for funding with other DOD needs, such as active base cleanup and mission requirements. To the extent that funding available for BRAC cleanup is curtailed, the program's completion could be delayed.

The Air Force expects to spend more than any other service for environmental efforts after 2001. The Air Force estimates it will require $1.3 billion for cleanup, operations, and monitoring after that time. At McClellan Air Force Base, California, a 1995 BRAC activity, cleanup costs after 2001 are expected to be about $396 million, with cleanup completion, except for continued monitoring, expected in 2033. Activities associated with completing cleanup include operation of cleanup systems, sampling and analysis, long-term monitoring of contaminated ground water, landfill cap maintenance, institutional control monitoring, regulatory reporting, and performance reviews. The Air Force estimates that one-third of its installations will complete long-term monitoring and operations by 2011, another one-third by 2021, and the remaining one-third, where there is extensive groundwater contamination, some decades later. Mather Air Force Base is among the bases that will require many years of monitoring and operations, extending to an estimated closeout in 2069.
In September 1993, DOD established the Fast-Track Cleanup program to overcome obstacles associated with environmental cleanup and to help make BRAC property available quickly for transfer and reuse. DOD reports that 110 BRAC bases participate in the program, 32 of which are also NPL sites. Through this program, DOD expected to support the President’s Five Part Community Reinvestment program, which was established in July 1993 and made early community redevelopment of BRAC property a priority. According to DOD, the services, and regulators, the program has been successful in improving environmental cleanup progress, particularly in the processes leading up to the actual cleanup of contamination. However, actual cleanups can still be lengthy, depending on, among other factors, site conditions and available technology. In a January 1996 report, DOD asserted that cleanup schedules had been accelerated as a result of the program; we did not, however, independently verify DOD’s findings. Further, our analysis showed that most key program provisions had been met. The key provisions are (1) establishing cleanup teams at major BRAC bases, (2) making clean parcels quickly available for transfer and reuse, (3) providing indemnification, and (4) accelerating the review process associated with requirements of the National Environmental Policy Act. While DOD has been successful in meeting the first three provisions, it has not been fully successful in meeting the fourth. In addition to the specified program provisions, several mechanisms were developed to support the program. Two of the mechanisms focus on identifying and documenting properties that are clean or that are in the process of cleanup and can thus be transferred or leased to the community. The third mechanism, which is generally referred to as early transfer authority, makes it possible to transfer property before it has been cleaned up, thus making it available for reuse more quickly. DOD has created BRAC cleanup teams at its major bases. The teams, made up of state and federal regulators and service officials, were developed with the expectation that they would find ways to expedite cleanup actions to prepare real property for transfer and reuse. By working together and fostering communication and coordination, DOD hoped to avoid slow, uncoordinated reviews and comments and to have a forum to settle disagreements over cleanup standards and methods. DOD indicated that the creation of the teams has reduced the time and costs to complete cleanup actions. For example, DOD reported in January 1996 that the program eliminated nearly 80 years from the cleanup process and that more than $100 million was saved due to the early involvement of stakeholders in that process. Team members we spoke with during our site visits agreed that the collaborative effort has created a more efficient working environment, allowing them to make decisions more quickly, resolve disputes, and thus save time and money. However, many of the cleanup activities are still lengthy. Thus, while the initial steps of the cleanup process were shortened (i.e., reaching agreement on both the level of cleanup and the remedy), actual physical cleanups may extend over many years. DOD has also been successful in making clean parcels of BRAC property immediately available for transfer and reuse.
Under the requirements of the Community Environmental Response Facilitation Act, DOD is to seek concurrence from the Environmental Protection Agency on the identification of uncontaminated parcels within 18 months of the BRAC round being approved. DOD data indicate that it has fulfilled this requirement, identifying approximately 100,000 acres of uncontaminated property for disposal from all four BRAC rounds. In 1993, the Congress authorized DOD to indemnify future owners for the cleanup of contamination resulting from past DOD operations. According to DOD, this allows it to more readily lease or transfer real property and promote reuse. DOD, however, has not in all instances met the fourth provision of speeding the review process associated with the National Environmental Policy Act. By statute, DOD is required, to the extent practicable, to complete any environmental impact analysis required with respect to an installation and any redevelopment plan for an installation no later than 1 year after the redevelopment plan is submitted. This requirement significantly shortens the usual time frame of 2 to 4 years. DOD officials acknowledge, however, that this requirement has not been met in all instances and are attempting to determine the cause of the delays. DOD reports that, as of September 1998, 37 of the 101 installations that it tracks had not completed the required environmental documentation within the specified time frame; another 30 were in the process of preparing the documentation, and their compliance is undetermined at this point. In an effort to achieve the Fast-Track Cleanup program’s goal of making property available for reuse as quickly as possible, DOD has developed additional mechanisms for speeding up the availability of unneeded base property. In 1994, DOD developed two mechanisms to identify and document properties that are clean and thus can be transferred or that are in the process of cleanup and can thus be leased to the community. These mechanisms are referred to as the Findings of Suitability to Lease and the Findings of Suitability to Transfer. According to DOD officials and regulators, the documents serve to (1) act as a link between the environmental efforts and community reuse and (2) inform the public about the types of contamination on the base, actions taken or to be taken to address the problems, and restrictions associated with the use of that property. This information is important for both the environmental and real estate sides of the reuse and transfer process. As of September 30, 1997, DOD reported that lease or transfer documentation had been prepared for 25 percent of the acres that were available for transfer. Of about 438,000 acres at 112 major BRAC installations, 43,000 acres had completed transfer documentation, and 68,000 acres had completed lease documentation (together, about 111,000 acres, or roughly 25 percent). In fiscal year 1997, DOD obtained the early transfer authority to transfer property before all remedial actions have been taken. To assure new owners of DOD’s commitment to cleaning up contamination after a transfer occurs, deeds contain an assurance that necessary response actions to clean up the property will be taken, along with a schedule for completing those actions. Also, the deed is to contain use restrictions and schedules to facilitate uninterrupted response actions. While this authority allows DOD to make property available for reuse more quickly, it is too early to determine what impact this will have on property transfers.
As of July 1998, only acreage at Grissom and Mather Air Force Bases had been transferred under this authority. Several other reuse authorities, including those at Griffiss Air Force Base, Naval Air Station Memphis, and Tooele Army Depot, are pursuing early transfers. Concerns, however, are being raised. For example, during a meeting between the Army and state and local reuse authority officials over the early transfer of Tooele Army Depot property, the issue of enforcement of land use restrictions was raised. State officials wanted to know how restrictions would be monitored and enforced and by whom, because the Army would no longer hold the property’s deed and therefore would lack enforcement powers. According to DOD and Environmental Protection Agency officials, these issues are being examined. As is the case for its active bases, cleaning up environmental contamination on BRAC bases has proven to be costly and challenging for DOD. However, it is a task that must be done to comply with environmental laws and facilitate the transfer of unneeded property to other users. While DOD has made progress from the earlier BRAC years, when much of its effort was largely devoted to investigative studies, and has established initiatives to expedite cleanup, many cleanup activities remain. As a result, DOD expects to continue its environmental efforts beyond 2001, the final year of BRAC implementation authority. Further, DOD estimates that $2.4 billion will be required after 2001, not including estimated costs for UXO cleanup, which could prove substantial. Until this issue is fully addressed and questions about how long sites will require monitoring before achieving closeout are answered, determining the overall cost of the program will be difficult. DOD stated that the time and cost associated with cleanup at BRAC bases are driven by the regulatory framework. Nonetheless, DOD cited its Fast-Track Cleanup program as one initiative that has accelerated the cleanup process through partnerships with state and regulatory agencies as well as with local communities. DOD believes these partnerships produce more cost-effective cleanups with consideration to future reuse and community concerns. The expected negative economic impact of base closures on local communities has long been a concern for the citizens of those communities, as well as Members of Congress. A base closure can result in the loss of hundreds or even thousands of jobs in a community. Nevertheless, most communities where bases were closed under the four BRAC rounds have fared relatively well over time. A majority of such communities had 1997 unemployment rates that were lower than or equal to the national average and had per capita income growth rates that exceeded the national average during 1991-95. A few communities, however, continued to experience high unemployment rates and/or declining per capita incomes. Our work at six selected base closure sites with varying populations, economic circumstances, and geography showed not only that the surrounding communities were recovering from BRAC but also that the transition was not necessarily easy. Community officials told us, in general, that they were recovering from the impacts of base closure and were optimistic about the future of their communities. Many of these officials credited the strong national economy and diversifying economic activity in their regions as key to their economic recovery.
At the same time, they pointed to the considerable difficulties, frustrations, and losses that communities experience as they adjust to the loss of military jobs and the redevelopment of base property. These pains of adjustment included decreasing retail sales at some establishments, leading to some business closings; declining residential real estate values in areas predominantly populated by base personnel; and social losses felt in local schools, churches, and organizations that benefited from military personnel and their families. Selected economic indicators for BRAC-affected communities compared favorably to national averages. We used unemployment rates and real per capita income growth rates as indicators of the economic health of those communities where base closures occurred during the prior BRAC rounds. We identified 62 communities involving 88 base closures in which government and contractor civilian job loss was estimated to be 300 or more. Unemployment rates for BRAC-affected communities compared favorably with national averages. About two-thirds of the communities affected by recent base closures (42 of 62) had a 1997 unemployment rate at or below the national rate of 5.1 percent. This situation compared favorably to when the BRAC process was beginning in 1988. At that time, 37 communities, or 60 percent of the 62 communities, had unemployment rates at or below the U.S. average (then 5.5 percent). Of the BRAC-affected communities with higher-than-average 1997 unemployment rates, only two—the Merced area surrounding the now-closed Castle Air Force Base and the Salinas area surrounding the now-closed Fort Ord (both in California)—had double-digit unemployment rates: 15 percent and 10.3 percent, respectively. A comparison of the communities’ 1997 unemployment rates to the national rate of 5.1 percent is shown in figure 5.1. Similarly, a June 1996 report by the Congressional Research Service found that a majority of the localities affected by BRAC actions had unemployment rates that were near to or well below the 1995 U.S. rate of 5.7 percent. It stated that most communities affected by any one of the BRAC rounds “have a relatively low degree of economic vulnerability to job losses that are estimated to result from these actions.” As with unemployment rates, real per capita income growth rates for BRAC-affected communities compared favorably with national averages. From 1991 to 1995, 63 percent, or 31, of the 49 areas (excluding the 1995 round) had an estimated average per capita income growth rate that was at or above the average of 1.5 percent for the nation. Of the 18 communities below the national average during this period, 13 had average per capita income growth rates above zero percent, and 5 had declining income (see fig. 5.2). These figures show some improvement since the 1988-91 period, when the BRAC process was just beginning to take effect and the U.S. average rate of growth was only 0.2 percent. At that time, 55 percent, or 27, of the 49 communities had estimated average rates of real growth in per capita income at or above the national average. Twenty of the 49 communities showed decreases in per capita income during this period. Because a less diversified economy might make smaller communities more vulnerable to the adverse effects of a base closure, we analyzed their economic performance separately.
As shown in figure 5.3, 10 of the 18 small city and rural areas, or 56 percent, had a 1997 unemployment rate above the U.S. average, compared to 32 percent of BRAC-affected communities overall. On the other hand, 10 of 14 communities (again excluding those involved only in the 1995 round), or 71 percent, had a per capita income growth rate that was greater than or equal to the national average between 1991 and 1995, a higher proportion than that of BRAC-affected communities overall (see fig. 5.4). In general, the communities where we performed work reported suffering initial economic disruption, followed by recovery. Less tangible, but harder to correct, were social losses resulting from the departure of base personnel, such as the loss of the cultural diversity that base personnel and their families brought to the local communities. As factors in economic recovery, officials pointed to the strong national economy, diversifying local economies, government assistance, and base redevelopment. However, some local officials were dissatisfied with the pace of redevelopment, citing delays in the transfer of base property. (See ch. 2 for our discussion on DOD’s progress in transferring base property.) Through our work at the surrounding communities of six major base closures, we were able to learn how each community was unique in how it drew on local and regional strengths to recover from the job losses associated with base closures. We also identified common economic impacts and trends across the communities. The local impact areas for Fort Benjamin Harrison, Fort Devens, and the Philadelphia Naval Base and Shipyard fell within large metropolitan regions. These areas had low 1997 unemployment rates and 1991-95 average real per capita income growth rates near or higher than the national average and past trends. The rural area around Eaker Air Force Base had a relatively high 1997 unemployment rate compared to the national average, though significantly lower than the 1988 rate of 13.5 percent, and the average real per capita income growth rate was considerably higher than the national average. In contrast, the rural area surrounding Merced and Atwater had a high unemployment rate and declining real per capita income, though the rate of decline decreased in 1991-95 compared to 1988-91. Local officials told us that Merced and surrounding communities have a high unemployment rate because of the large seasonal employment associated with the agriculture and canning industries and the large Hmong and Punjabi populations that have migrated into the area and are still assimilating into the American culture. The other rural area that showed some economic decline was Beeville, Texas. Though its 1997 unemployment rate was relatively low compared to the 13.2 percent it experienced in 1993, its per capita income growth rate declined from a healthy 2.9 percent during 1988-91 to a below-average 0.5 percent during 1991-95. Local officials told us that the new prisons have created many new jobs and boosted the population in the Beeville area, but the decline in income growth suggests that the level of total personal income has not kept pace with the population growth. However, prisoners are counted in the population estimates used to calculate per capita income; because they add to the population denominator while earning little income, they may explain much of the decline in the rate of growth. Table 5.1 shows preclosure and recent economic data for each of the local impact areas representing the communities we visited.
Our findings are consistent with a 1996 report by the RAND National Defense Research Institute, which studied the impact of three base closures on neighboring California communities. It concluded that “while some of the communities did indeed suffer, the effects were not catastrophic, not nearly as severe as forecasted.” Impacts of closure that officials conveyed to us included initial economic disruption caused by the news of impending closure; decreasing retail sales at some establishments, leading businesses to close; declining residential real estate values in areas predominantly populated by base personnel; and social losses felt in local schools, churches, and organizations that benefited from active, educated military personnel and families. Examples of how a base closure affects the surrounding community and its business establishments, schools, real estate markets, and social network, as provided by local officials, are shown in figure 5.5. We did not independently verify the data. Local officials from each of the communities we visited described the initial reaction to the announcement of a base closure as one of anger, fear, panic, and denial. They said that people in the affected area feared the worst, in some cases predicting the dissolution of their town itself. At the very least, the loss of the base was expected to cause significant economic disruption. The rumors of a closure generated fear throughout the community, dampening consumer spending on major items and business expansion. This initial public reaction resulted in real economic impacts, such as a drop in real estate values and car sales. Officials from several communities told us that the announcement of the closure and previous threats of closure were more damaging to economic activity in the area than the actual closure. Each of the communities made an effort to reverse the decision, but eventually resigned itself to the loss and organized a base reuse authority to represent its interests in the base’s redevelopment. Generally, we were told that the citizens and businesses overcame the turmoil associated with base closure and adjusted their lives to a new environment. For the communities we visited, the closure of a military base led to a decline in retail sales, affecting some stores more than others and forcing some to close. Local officials said the businesses affected the most included new and used car dealers, clubs, small personal service businesses such as barbers, and some nearby “mom & pop” stores. On the other hand, some local officials emphasized that it was often difficult to determine whether the demise of a business was caused by a base closure or other economic factors. Two officials from communities outside of Fort Devens suggested that the recent growth in large discount stores and chains also hurt small retail businesses during the same period as the base closure. A local business official in Blytheville said that some businesses survived the closure of Eaker Air Force Base and were now doing better than ever, while others failed because they could not seem to adjust their business plans to serve a new environment. Some cases were more clearly attributable to the base closure. For example, officials in Beeville pointed to the demise of several small businesses, including a convenience store and a janitorial service that contracted with the base. At the same time, we were told by local officials that the economic impact of the departure of base personnel was not as severe as had been feared.
Some local officials believed that military bases tended to be closed environments where personnel spent much of their income on base to take advantage of favorable prices at the commissary and post exchange. Also, local business officials in Beeville told us that many of the Navy officers and pilots and their families may have spent much of their disposable income in the nearby urban areas of San Antonio and Corpus Christi. Local officials cited three events following a base closure that they believe can cause residential real estate values to decline. First, the demand for housing drops as base employees and their incomes leave an area. Second, base housing may be placed on the market, increasing the supply of housing. Third, DOD often purchases the off-base housing units of transferring base personnel and places these units back in the market for resale, also increasing supply. The net result of these factors is an increase in the supply of housing units at the same time that a community may be losing the people who would most likely be buying homes. Local officials from Atwater (Castle Air Force Base area), Gosnell (Eaker Air Force Base area), and Ayer and Shirley (Fort Devens area) described how rental units that catered to single service personnel had to lower rents and perhaps offer weekly rents to stay in business. In two communities, local officials told us that the result was an influx of a less stable population, which often led to undesirable conditions, such as increased crime and disorderly conduct and a drain on public assistance resources. Several officials from Atwater mentioned that DOD’s program to purchase housing from transferring military and defense personnel lowered housing values. However, officials from communities surrounding Eaker Air Force Base and Fort Devens told us that the market for single-family homes has recovered and in some cases has exceeded preclosure levels. For example, housing values have increased in the communities surrounding Eaker Air Force Base. The communities we visited generally regretted the loss of base personnel, with whom they had good relationships. The loss was often described as a cultural loss rather than an economic one. This loss was less pronounced in the urban areas, but in the rural towns, the bases had brought in people with diverse backgrounds from various parts of the country. Officials described how local institutions benefited from these outsiders’ viewpoints and experiences, particularly in communities where the military people became involved with the local government, the schools, and the arts. An official from one of the communities near Fort Devens remarked on the high quality of the people who had come into the community to work at the Army Intelligence school. In Beeville, some local officials told us about the pride they took in being the home of Chase Field, which trained naval pilots. Base employees were also affected by an installation’s closure. While many base employees accept transfers to other facilities during a base closure, those who choose to remain in the local community may face periods of unemployment. In cases where the military base provided most of the high-paying, high-skilled jobs for the area, as was the case at Castle Air Force Base and Naval Air Station Chase Field, some former base employees who chose to remain in the area reportedly had difficulty finding a job at a comparable salary.
Several factors play a role in determining the fate of the economies of closure communities and their recovery (see fig. 5.6). Officials from several of the communities we visited cited the strong national or regional economy as one explanation of why their communities were able to avoid economic devastation and find new areas for economic growth. The national unemployment rate for 1997 was the lowest in a generation. Officials from the communities surrounding Castle and Eaker Air Force Bases said employers are now finding their communities attractive because these rural areas have higher unemployment rates and therefore a large population looking for jobs. These observations are consistent with a 1993 report in which the Congressional Budget Office reviewed the impacts of DOD’s downsizing on defense workers, stating that the best solution for displaced defense workers is a growing economy. Officials from each of the communities emphasized the importance of having other local industries that could soften the impact of job losses from a base closure. Urban communities, as officials from the more urban areas confirmed, are better able to absorb the job losses from a base closure because they have more diversified economies that provide a wider range of job and business opportunities. In a January 1998 report, we examined defense-related spending trends in New Mexico and the relationship between those trends and New Mexico’s economy. We reported that while defense-related spending has been declining in the state, the state’s gross product and total per capita income have been increasing and that this economic growth may be due to efforts to diversify the economy away from defense. Officials also pointed to several other economic forces at work in their regions at the time of a closure, during the transition period, and at the current time. For example, officials from the communities surrounding Fort Devens said that at the time of the closure, the area was suffering from the downsizing and restructuring of the computer industry. Today, those same communities are benefiting from the economic growth in the larger Boston metropolitan area. Philadelphia has been going through deindustrialization for the past 20 years. Officials from Philadelphia said their city has also been losing jobs and population for many years; the closure of the shipyard was not the first big loss it has experienced. However, at the time the closure was announced, the shipyard was the largest manufacturing concern in the region, and one official said that it is difficult for any city to lose such a large employer even if the loss does not fundamentally hurt the local economy of a large metropolitan area like Philadelphia. Figure 5.7 describes the economic and regional context of the base closure for the communities we visited. The rural areas we visited, where agriculture has historically dominated the economy, have benefited from their efforts to diversify. In Blytheville, Arkansas, for example, where Eaker Air Force Base closed, the steel industry found a foothold in the late 1980s, before the announcement of the base closure, and has been a growing presence ever since. The Blytheville area is attractive to the steel companies because of its access to the Mississippi River and a major interstate as well as an available labor pool. Beeville, Texas, where Chase Field closed, has a long history of farming and ranching, but has recently benefited from an expanding state prison industry.
In these cases, the emergence of major employers was coincidental with the base closure, but officials in both towns noted the importance of these employers to recovery. The redevelopment of base property is widely viewed as a key component of economic recovery for communities experiencing economic dislocation due to jobs lost from base closures. The closure of a base makes buildings and land available for a new use that can generate new economic activity in the local community. DOD’s Office of Economic Adjustment surveys the local reuse authorities representing base closures from all four rounds on the number of jobs that have been created from redevelopment of bases. As of March 1998, the Office of Economic Adjustment reported that reuse of base property from closed bases had generated almost 48,000 new jobs (compared with an estimated 100,000 government civilian and contractor job losses from BRAC actions). Table 5.2 shows the number of jobs created from redevelopment of base property at the six closed bases we visited. According to the local officials we met with, publicizing redevelopment goals and efforts for former bases is a key strategy for attracting industry and helping communities gain confidence in recovery from the closure. For example, Philadelphia officials recently closed a deal with Kvaerner Shipbuilding of Norway that will bring several hundred shipbuilding jobs back to the shipyard. Though this deal will not replace the roughly 7,000 shipyard jobs lost to the closure, it has helped to allay fears that the shipyard would stay idle in the long term. Officials from other communities stressed the importance of successful base redevelopment to their communities’ long-term economic health. We did not attempt to assess the extent to which government assistance programs sped the economic recovery of communities experiencing base closures. However, some officials emphasized that federal assistance in the form of planning and infrastructure grants helps communities overcome many barriers to redevelopment, such as the complex property disposal process and deteriorating or outdated infrastructure. Specifically, local officials told us that Office of Economic Adjustment grants helped them plan for redeveloping base property and Economic Development Administration grants provided funding for infrastructure improvements to integrate base property into the community’s infrastructure. A recent study requested by the Economic Development Administration and prepared by a research team led by Rutgers University evaluated the success of the Economic Development Administration’s defense adjustment grants in helping local communities diversify away from dependence on former military bases or defense contractors. The study concluded that the assistance succeeded in aiding job creation and economic recovery from base closures and defense downsizing. In helping base employees adjust to closures, the communities took advantage of federal, state, and local programs to provide displaced workers with career transition counseling, job retraining, and placement services. One major effort to assist displaced workers occurred in Philadelphia. According to Navy data, about 8,000 civilian jobs were eliminated by the shipyard’s closure from 1991 to 1996. Of these 8,000 employees, about 1,400 were laid off, 2,000 accepted separation incentives, and almost 2,000 transferred to other military installations, while hundreds left through retirement, disability separation, and resignation.
The Philadelphia base created a career transition center that provided one-on-one counseling to over 4,000 workers, as well as skills assessments, workshops, on-site retraining, and information on career choices. The center formed partnerships with the Private Industry Council, the state employment office, and local colleges to ensure that every opportunity for retraining and assistance was used. The shipyard developed flexible training plans for the employees, and the Navy reassigned people to new positions that supported their training. One official expressed frustration that more shipyard workers did not use the training opportunities and suggested that a barrier to assisting workforces similar to the one at the Philadelphia shipyard is the older age of this workforce. Most of the shipyard workforce had been doing shipyard work all their working lives and did not want to start at the bottom again or learn a new trade, even though the Philadelphia area has many jobs, such as in construction, that would be suitable with some retraining. The most consistent major concern cited by the officials in the six communities we visited was that the transfer of property to the reuse authority was slow. (See ch. 2 for a discussion on DOD’s progress in transferring base property.) In the case of Eaker Air Force Base, some of the property was conveyed to the reuse authority through an economic development conveyance just this past September. The Bee Development Authority still does not have title to a large portion of Chase Field. The local reuse authority for Castle Air Force Base is in the process of obtaining an economic development conveyance. In each of these cases, the base had been closed sometime between 1993 and 1996. However, both the Fort Benjamin Harrison and Fort Devens reuse authorities have title to base property, and the Fort Devens authority has been especially successful in turning over property to commercial enterprises. One problem caused by transfer delays is the increased cost of rehabilitating the facilities, which continue to deteriorate from the time of closure to the transfer of title. This situation is occurring in Beeville, Texas, despite the fact that a large portion of the base was transferred to the state of Texas through a public benefit conveyance for state prison facilities. Officials from the Bee Development Authority said they wish to diversify the local economy by attracting manufacturing to the area; they see the remaining base property as an asset to attract such development. However, a large hangar and office facility is deteriorating because the reuse authority does not have the money to maintain it, nor can it attract businesses that would supply maintenance funds without title to the facility. Two Beeville officials suggested that the absence of a DOD base transition coordinator, an on-site official who serves as an advocate for the community and a local point of contact with the federal government, may have contributed to the local authority’s problems. Local officials stated that DOD officials responsible for property disposal do not seem to understand that delaying property conveyance is bad for business. Some local officials told us they do not think that the responsible offices have enough real estate expertise. For example, some officials told us that property appraisals did not consider the cost of bringing a building up to local health and safety codes and therefore overvalued the property.
Consistent with DOD statements in chapter 2, local officials acknowledged that some of the delay is due to property disposal process requirements. In addition, some local officials said transition delays are due to the lengthy environmental cleanup process. DOD officials agreed that the property disposal process can be frustrating to base reuse and economic recovery efforts but explained that DOD was using all available policy options to speed the process and remain within the boundaries of the law. A DOD official also noted that 1991 base closures may not have benefited as much from initiatives begun in 1993 to speed the process of transferring property to communities. These initiatives included the creation of economic development conveyances and base transition coordinators. Many officials said that once the transition is completed, they will be able to attract tenants, and they believed that in the long run the community could generate more economic activity than when the base was active and accrue other quality-of-life dividends, such as parks and recreation facilities. A majority of base closure communities have been able to absorb the economic loss without a significant economic decline. A growing national economy and a diverse regional economy play significant roles in economic recovery, making it easier for communities to absorb job losses and generate new business activity. However, some communities are not economically strong based on economic indicators and may have incurred deeper and longer-lasting economic impacts from base closures. Local officials said the impact from base closure was not as bad as they had feared. Though some communities encountered negative economic impacts during the transition from the announcement of base closure to recovery, local officials said they are optimistic about the long-term outlook for their communities. They told us they now view a base closure as an opportunity for their community to craft a new identity for itself and diversify the local economy. To the extent that redevelopment of the base may play a role in economic recovery, the speed of the property disposal process remains a local concern. DOD agreed that most base closure communities have been able to absorb the economic loss associated with closures and show positive economic growth at or above national averages. DOD cited this as a tribute to the initiative and persistence of local and state redevelopment officials who take advantage of the regional opportunities that an expanding national economy can offer. DOD stated it will continue to support the base redevelopment efforts of local and state officials as they transition to a more diversified economy.
Pursuant to a congressional request, GAO reviewed: (1) the Department of Defense's (DOD) progress in completing action on military base realignments and closures (BRAC) recommendations and transferring unneeded base property to other users; (2) the precision of DOD's estimates of BRAC costs and savings; (3) environmental cleanup progress and estimated associated costs; and (4) reported trends in economic recovery in communities affected by base closures. GAO noted that: (1) by September 30, 1998, DOD had completed actions on about 85 percent of the four BRAC commissions' 451 recommendations; (2) in taking actions on the recommendations, DOD declared about 464,000 acres of base property as excess; (3) as of September 30, 1997, 46 percent of the unneeded BRAC property was to be retained by the federal government, 33 percent was slated for nonfederal users, and the disposition of 21 percent had not yet been decided; (4) 8 percent of the property slated for federal use has been transferred, while 31 percent of the property slated for nonfederal use has been transferred; (5) DOD officials noted a number of obstacles that must be overcome before transfer can occur; (6) by 2001, DOD estimates it will have spent $23 billion on BRAC and saved $37 billion in costs it would have incurred if BRAC actions had not occurred, for a net savings of $14 billion; (7) beyond 2001, when the last of the four rounds is complete, DOD expects to save $5.7 billion annually as a result of BRAC actions; (8) however, the cost estimates exclude certain types of federally incurred costs, some of which are funded outside of DOD BRAC budget accounts, while the savings estimates have not been routinely updated and thus are not precise; (9) a major cost factor in BRAC actions, as well as a major obstacle to the disposal of unneeded property, is the need for environmental cleanup at BRAC bases; (10) both the eventual cost and the completion date for the BRAC-related environmental program are uncertain; (11) however, available DOD data indicate that the total environmental cost will likely exceed $9 billion and that cleanup activities will extend well beyond 2001; (12) the potential for higher costs exists, given uncertainties associated with the extent of cleanup of unexploded ordnance and monitoring of cleanup remedies needed at selected sites; (13) DOD has made progress since the earlier BRAC years when it was investigating sites for contamination; and (14) the majority of communities surrounding closed bases are faring well economically in relation to the national average, according to the latest data available at the time of GAO's analysis, and show some improvement since the time closures were beginning in 1988.
In 2000, in response to calls from Congress, SEC directed U.S. stock and options markets to change from quoting equity securities and options in fractions of a dollar, such as 1/16th, to quoting in decimals. Proponents of this change believed decimal pricing would make stock prices easier for investors to understand, align U.S. markets with other major stock markets of the world, and lower investors’ trading costs by narrowing spreads to as little as one penny. At the time of SEC’s order, U.S. markets were the only major securities markets in the world still trading in fractions. After a phase-in period of several months, the major exchanges and Nasdaq began using decimal pricing for all quotes on equity securities and options on April 9, 2001. The national securities markets, including the New York Stock Exchange (NYSE) and Nasdaq, chose to allow quoting on their markets with an MPV, or tick size, of one penny. The MPV is the minimum increment in which stock prices on these markets are allowed to be quoted. However, even before the transition to decimal pricing, some stocks were trading in increments of less than the MPV, such as 1/256th of a dollar. Since U.S. markets converted to decimal pricing, professional traders trading outside the national securities markets have been the primary users of subpenny prices. Although the national securities markets set their MPVs at one penny, several electronic trading systems, known as electronic communication networks (ECNs), allow their customers to quote prices and execute trades in subpenny increments. When quotes from these proprietary systems are displayed to traders outside the proprietary systems, the quotes are rounded to a full penny increment (down for buy orders and up for sell orders) to comply with the required one-penny MPV of the national securities markets. In such instances, orders executed against these quotes receive the subpenny price. According to SEC staff and others, although several ECNs initially allowed quoting in subpennies, some have curtailed the use of such quotes. At the time we prepared this statement, we were aware of only two ECNs that allowed quoting in subpennies—Instinet’s INET and Brut ECN—for a few selected stocks. The extent to which stocks are quoted in subpenny increments appears to be limited. According to SEC staff, data on the extent to which subpenny increments are used to quote securities across all U.S. equity markets are not routinely reported or readily available. However, a 2001 Nasdaq report to SEC that reviewed trading in stocks listed on its market showed that fewer than 15 percent of limit orders were submitted in subpennies after decimals were introduced. A vast majority of the subpenny limit orders cited in the 2001 Nasdaq report were handled by a single ECN. SEC staff also conducted a study of the use of subpennies in trading that took place between April 21 and 25, 2003, and found that subpenny trades accounted for about 13 percent of trades in Nasdaq stocks, 10 percent of trades in American Stock Exchange stocks, and 1 percent of the trades in NYSE stocks. These trade execution data, however, do not directly demonstrate the extent of subpenny quoting, because trades may be executed using the subpenny increment for other reasons. For example, some institutional investors may ask their broker-dealers to execute orders at the weighted average price at which a stock traded on a particular day. This weighted average price can be carried out to several decimal places.
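To make these pricing mechanics concrete, the short Python sketch below is our own illustration, not code drawn from any market's actual systems, and the prices and share quantities in it are hypothetical. It shows how a volume-weighted average price can land on a subpenny value and how a subpenny quote would be rounded to a full penny for display under the convention described above.

import math

def vwap(trades):
    """Volume-weighted average price; can carry out past the penny."""
    total_shares = sum(qty for _, qty in trades)
    return sum(price * qty for price, qty in trades) / total_shares

def display_price(price, side):
    """Round a subpenny quote to a full penny for display on a penny-MPV
    market: down for buy orders, up for sell orders."""
    cents = price * 100
    rounded = math.floor(cents) if side == "buy" else math.ceil(cents)
    return rounded / 100

day_trades = [(10.12, 300), (10.13, 500), (10.11, 200)]  # hypothetical (price, shares)
print(round(vwap(day_trades), 4))      # 10.123, an average price in subpennies
print(display_price(10.123, "buy"))    # 10.12: buy quote rounded down
print(display_price(10.123, "sell"))   # 10.13: sell quote rounded up

As the statement notes, only traders with access to an ECN's own systems would see the 10.123 quote; traders outside those systems would see only the rounded penny prices.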
Representatives of one ECN told us that it allowed traders to quote certain stocks in subpennies because its customers wanted to be able to quote in these increments. They also said that this use of subpenny quotes was a way to differentiate their business from that of their competitors. In addition, these ECN representatives said that subpenny quoting enhanced the efficiency of trading in certain actively traded securities, such as the Nasdaq 100 Index Tracking Stock (QQQ). According to SEC staff and market participants with whom we spoke, subpenny quotes are used primarily by professional traders, such as day traders or traders for hedge funds, to gain a competitive price advantage over other traders. However, some ECNs that were allowing their customers to use subpenny quoting more widely have significantly curtailed the number of stocks that can be quoted in subpennies. According to a representative at one ECN, its share of the total trading volumes of these stocks increased rather than declined after it stopped quoting in subpennies. Although some market participants saw benefits to subpenny pricing, most cited various disadvantages to the use of subpenny quotes. Some market participants said subpenny quoting allowed traders to raise the priority of their orders. For example, a representative of one ECN told us that when a large number of traders were all quoting the same full penny price, one trader could increase the chances of executing a trade by improving the price by a subpenny increment. This representative said that the customers on the other side of the trade also benefited from the subpenny increment, as their orders were executed at slightly better prices. ECNs we contacted also told us that subpenny pricing allowed for more efficient and competitive markets. For example, a one-cent MPV could act as an artificial constraint on pricing for stocks that trade actively. According to representatives of one ECN, allowing such actively traded stocks to trade in increments of less than a penny allows buyers and sellers to discover a stock’s true market price. However, most of the market participants we contacted mainly cited disadvantages to subpenny quoting. First, many participants told us that the benefits of subpenny pricing accrue to professional traders but not to the general investing public. Representatives of one firm with whom we spoke told us that quotes in subpenny increments were available to professional traders who pay to access the proprietary trading systems the ECNs operate. Through these proprietary systems, professional traders can use fast order routing systems to obtain the subpenny prices, which may be better than those that are publicly displayed on other markets that use one-cent MPVs. According to market participants, many broker-dealers do not accept orders from their customers in subpenny increments, and so the average investor generally cannot access the subpenny quotes. A representative of a large broker-dealer stated at an April 2004 SEC hearing that his firm had stopped allowing clients to submit orders priced in subpenny increments for this reason. Further, representatives at one securities market argued that the integrity of the securities markets was reduced when some traders have advantages over others.
Many of the market participants we contacted told us that quoting in subpenny increments also resulted in more instances of traders “stepping ahead” of large limit orders. According to some market participants, reduced MPVs that accompanied decimal pricing have negatively affected traders displaying large orders at one price. These traders find that their orders go unexecuted or have to be resubmitted when other traders step ahead of them by quoting a better price in increasingly small amounts. These participants argued that at higher MPVs, which were previously 1/8th or 1/16th of a dollar per share, traders stepping ahead of other orders were taking a greater financial risk if their orders were executed and prices then moved against them. However, market participants with whom we spoke said subpenny increments were generally an economically insignificant amount and that traders using them faced much lower financial risk. Recent SEC and Nasdaq studies of subpenny trading found that most trades executed in subpenny increments clustered at prices 1/10th of a cent above and below the next full penny increment, suggesting that subpenny quotes were primarily being used to gain priority over other orders and were not otherwise the result of natural trading activities. Market participants also told us that the more likely it is that a trader can step ahead of other orders—as they can by using subpenny quotes—the less likely traders are to enter their limit orders, which are an important source of liquidity. This reduced incentive to enter limit orders also reduces the number of shares displayed for sale and potentially affects liquidity and market efficiency. Furthermore, some market participants also saw subpenny quoting as reducing market transparency for retail investors and depth for institutional investors. When the MPV decreases, for example to subpennies, the number of potential prices at which shares can be quoted—called price points—increases, because displayed liquidity is spread over more price points. For example, subpenny quotes using 1/10th of a penny ($.001) increase the number of price points to 1,000 per dollar. This affects retail investors, because fewer shares are generally quoted at the only prices visible to them—the current best prices for purchase or sale. This affects institutional investors, because the more price points that must be considered, the more difficult it becomes to determine whether sufficient shares are available to fill larger orders. Market participants said that quotes in a subpenny pricing environment change more rapidly (a phenomenon known as quote flickering) and make determining the actual prices at which shares are available more difficult. Quote flickering reduces broker-dealers’ ability to determine whether the trades they have conducted satisfy their regulatory responsibility to obtain the best execution price for their clients. Finally, some market participants told us that subpenny pricing has the potential to greatly increase the processing and transmission capacity requirements for the market data systems that transmit price and trade information, causing firms to expend resources to redesign electronic systems. SEC’s proposed rule to prohibit market participants from pricing stocks in increments of less than one penny appears to be widely supported. 
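The arithmetic behind the stepping-ahead and price-point concerns discussed above is simple to illustrate. The following sketch is our own example, using a hypothetical 10,000-share resting limit order; it compares the number of quotable price points per dollar and the dollars a trader must commit to step ahead of the order by one tick under the fractional, penny, and subpenny increments mentioned above.

from fractions import Fraction

ORDER_SHARES = 10_000  # hypothetical resting limit order

ticks = {
    "1/8 dollar": Fraction(1, 8),
    "1/16 dollar": Fraction(1, 16),
    "one penny": Fraction(1, 100),
    "1/10th penny": Fraction(1, 1000),
}

for name, tick in ticks.items():
    price_points = 1 / tick                  # quotable prices per dollar
    step_cost = float(tick * ORDER_SHARES)   # dollars at risk to gain price priority
    print(f"{name}: {price_points} price points per dollar; "
          f"${step_cost:,.2f} to step ahead of the order")

At a tick of 1/8th of a dollar, stepping ahead of the 10,000-share order costs $1,250; at 1/10th of a penny, it costs only $10, which is consistent with participants' view that subpenny stepping ahead carries little financial risk.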
SEC’s proposed rule to prohibit market participants from pricing stocks in increments of less than one penny appears to be widely supported. As part of its proposed rule changes to Regulation NMS, SEC has proposed establishing a uniform pricing standard for stocks that trade in all market centers, which SEC defines as exchanges, over-the-counter market makers, specialists, and ECNs. Specifically, SEC proposes to prohibit market participants from accepting, ranking, or displaying orders, quotes, or indications of interest in a pricing increment finer than a penny in any stock, unless the stock has a share price of less than one dollar. The proposed rule would not prohibit executing trades in increments of less than one penny, which most markets currently permit, because there are instances when subpenny trading is appropriate—for example, when the trade’s price is based on some averaging mechanism. According to SEC staff, this change would address differences in pricing that exist across markets and that benefit some investors at the expense of the general investing public. According to the staff, banning subpenny pricing should also reduce the extent to which limit orders lose priority because of subpenny pricing, thereby preserving incentives to display limit orders, which are an important source of liquidity for the markets. Most market participants we have contacted to date and most commenting on SEC’s proposal appear to support a ban on subpenny pricing for stocks priced at one dollar or more. Of the over 500 comment letters available on SEC’s Web site as of July 16, 2004, we determined that about 50 provided comments on the proposed ban. Of these, 86 percent of the commenters supported banning subpenny quoting. According to NYSE and Nasdaq representatives with whom we spoke, the current existence of quotes that not all investors can access is a significant reason for their support of SEC’s proposed subpenny prohibition. Nasdaq’s support for banning subpenny quoting comes despite its 2003 filing with SEC for a proposed rule change that would have permitted Nasdaq to adopt an MPV of 1/10th of one cent for its listed securities. According to the Nasdaq representatives, if SEC does not prohibit subpenny quoting, Nasdaq would want SEC approval to begin quoting in subpennies in order to compete with ECNs. Nasdaq subsequently withdrew its proposed rule change, presumably because SEC is proposing to ban subpennies in its proposed changes to Regulation NMS. Representatives at several institutional investors and broker-dealer firms also agreed that quoting in subpenny increments should be prohibited. In its June 30, 2004, comment letter to SEC, the Investment Company Institute (which represents the interests of the $7 trillion mutual fund industry) stated that quoting in subpennies eliminates many of the benefits brought by decimal pricing and exacerbates many of the unintended consequences that have arisen in the securities markets since its implementation, consequences that have proven harmful to mutual funds and their shareholders. However, other market participants and other commenters opposed SEC’s proposal to ban subpenny quoting. Several of the organizations that opposed a ban said that subpenny quotes give traders more ability to improve the prices they offer to others. A group of 10 academic researchers that commented to SEC argued that the impacts of subpenny quoting on market transparency could be resolved with technology. For example, data vendors can choose to update quotes only when there are meaningful changes.
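The quoting restriction at the center of these comments can be stated compactly. The sketch below reflects our reading of the proposal as summarized above, not SEC's actual rule text: an order price may be accepted, ranked, or displayed only if it falls on a full penny increment, with the stated exception for stocks priced under one dollar. As discussed, trade executions in finer increments would remain permissible.

from decimal import Decimal

def quote_increment_ok(order_price: str) -> bool:
    """True if an order's price could be accepted, ranked, or displayed
    under the proposed ban on subpenny quoting."""
    price = Decimal(order_price)
    if price < 1:
        return True  # the ban would not apply to stocks priced under $1
    return price % Decimal("0.01") == 0

print(quote_increment_ok("10.12"))   # True: a full penny increment
print(quote_increment_ok("10.124"))  # False: a subpenny quote in a $10 stock
print(quote_increment_ok("0.995"))   # True: share price under one dollar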
A letter from a university regulatory research center noted that banning subpenny quoting could stifle innovation in the way that quotes are displayed to investors. For example, graphical displays could replace flickering quotes with fluid motion and use patterns and shapes to help investors recognize changes. A ban could also reduce incentives for other market participants to invest in innovative technologies. Opinions among some ECNs were mixed, with roughly an equal number supporting and opposing SEC’s proposal to ban subpenny quoting. Representatives of two ECNs indicated that SEC should not enact a ban, arguing that tick size is best determined by demand in the marketplace. Furthermore, representatives of two ECNs noted that stocks that trade at a spread of a penny benefit from the increased efficiency afforded by subpenny increments; one representative noted that a penny MPV artificially constrains price discovery for these stocks. In addition, this representative said that stocks with low share prices should be quoted in subpenny increments because subpennies become economically significant when the share price is a few dollars or less. Finally, these representatives said that as more traders and firms upgrade their trading technology, they may find more advantages from quoting in subpennies and that a regulatory ban enacted now might become an unpopular constraint in the future. One of the ECNs supports SEC’s proposal to ban subpenny quoting because its customers prefer not to have subpennies used on its system. At the time we prepared this statement, we had not yet talked to entities that are reported to be key users of subpenny quotes and who may be opposed to SEC’s proposal, such as day traders, hedge funds, or entities whose sole business is computer-enabled trading. At the request of this Committee’s Subcommittee on Securities and Investment, we are conducting additional work to review the impact of decimal pricing on the securities markets, securities firms, and retail and institutional investors. To conduct this work, we are reviewing relevant regulatory, academic, and industry studies that address decimal pricing impacts. We are also interviewing and obtaining information from market participants, including the following: securities markets, including stock and options markets; securities firms, including broker-dealers that conduct large-block trading, market makers, and exchange specialists; industry associations, including those representing securities traders, broker-dealers, and mutual funds; trade analysis firms; institutional investors, including pension and mutual fund investment managers; and academic researchers who have studied trading and decimal pricing. To identify trends and changes since decimal pricing was introduced, we are also attempting to collect and analyze data on the characteristics of markets, firms, and investors and the impact of decimalization on these entities (table 1). In addition, we plan to conduct research and analysis using a comprehensive electronic database of quotes and trades that have occurred on U.S. stock markets. The Trade and Quote (TAQ) database offered by NYSE consolidates all quotes and trades that have occurred on NYSE, Nasdaq, the American Stock Exchange, and the regional exchanges. As part of this research, we plan to expand and extend analysis done for a recently published study on the impact of decimal pricing on trade execution costs and market quality, including volatility and liquidity.
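One analysis planned with the TAQ data, described further in the list that follows, is a test for price clustering. As a simple illustration, the sketch below is our own example with hypothetical trade prices, not GAO's analysis code. It counts the share of executions landing on 5-cent multiples; with 100 penny price points per dollar, an even spread would place about 20 percent of trades on such multiples, so a much larger share would suggest clustering despite the one-cent tick.

from collections import Counter

# Hypothetical execution prices as they might appear in a consolidated trade file
trade_prices = [10.25, 10.26, 10.25, 10.30, 10.31, 10.25, 10.50, 10.30, 10.05]

# Sub-dollar cents digit of each trade (e.g., 10.25 -> 25)
cents = Counter(round(price * 100) % 100 for price in trade_prices)

at_nickels = sum(count for cent, count in cents.items() if cent % 5 == 0)
share = at_nickels / len(trade_prices)
print(f"Share of trades at 5-cent multiples: {share:.0%}")  # 78% in this toy sample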
Among the types of information we plan to analyze using this database are quotation sizes (i.e., the number of shares being quoted), the percentage of trades and shares executed at prices less or greater than the best quoted price prevailing at the time of execution, and the volatility of returns from investing. We plan to use this analysis to shed light on how trade execution costs and market quality may have changed in transitioning from a fractional to a decimal pricing environment. In addition to the variables considered in the published study, we plan to gather data on trade size and the numbers of trades and quotes that may provide evidence on changes in trading behavior. We also plan to analyze the TAQ data to identify whether and to what extent clustering occurs, that is, whether quotes or trade executions occur more frequently than would be expected at particular price points (e.g., multiples of 5 cents and 10 cents) despite the existence of the one-cent tick.

Because we are continuing to review issues relating to decimal pricing, we do not have definitive conclusions on subpenny pricing at this time. Our work to date has shown that subpenny quoting can provide advantages to some traders but can also create disadvantages for others and potentially impair incentives to display liquidity. A significant majority of market participants appear to support SEC's proposed ban on quoting in subpennies, but little information is available on the impact of using these quotes. On the one hand, given that such quotes are currently used only in a few trading venues and for a limited range of stocks, SEC's proposed ban would probably not result in a significant change for the overall markets or most investors. On the other hand, if SEC did not ban subpenny quotes, it is possible that exchanges and other markets would want to quote in subpennies, a change that could have a significant impact on U.S. equity markets. Still, a ban would take away the ability of individual markets and investors to choose whether to use subpenny quotes if they decide their use would be advantageous. Subsequent changes in market structure, technology, and investor needs could require SEC to reconsider whether the use of subpenny quotes would be appropriate at some future date.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. For questions concerning this testimony, please contact Cody Goebel at (202) 512-8678 or [email protected]. Other key contributors to this statement were Jordan Corey, Emily Chalmers, Joe Hunter, Kathryn Supinski, and Richard Vagnoni.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2001, U.S. stock and options markets, which had previously quoted prices in fractions, began quoting in decimals. Since then, various positive and negative effects have been attributed to the transition to decimal pricing. As part of this transition, the major stock markets chose one penny ($.01) as the minimum price variation for quoting prices for orders to buy or sell. However, some electronic trading systems allowed their customers to quote in increments of less than a penny (such as $.001). The use of subpenny prices for securities trades has proved controversial, and the Securities and Exchange Commission (SEC) has proposed a ban on subpenny quoting for stocks priced above one dollar across all U.S. markets. As part of ongoing work that examines a range of issues relating to decimal pricing, GAO reviewed (1) how widely subpenny prices are used and by whom, (2) the advantages and disadvantages of subpenny pricing cited by market participants, and (3) market participants' reactions to SEC's proposed ban.

Data on the extent to which market participants are quoting in subpenny increments across all U.S. equity markets are not routinely reported or readily available. However, studies of limited scope conducted by regulators and one market found that subpenny prices were not widely used. For example, a 2001 study of Nasdaq stocks done by the Nasdaq Stock Market found that subpenny increments were used in less than 15 percent of the orders that specified a price (limit orders). Currently, the major markets do not allow subpenny quoting, but a few electronic trading systems that match customer orders do. On electronic trading systems, professional traders (such as those employed by hedge funds) use subpenny quotes to gain a competitive price advantage over other orders.

However, many market participants GAO interviewed cited numerous disadvantages to the use of subpenny quoting. They argued that subpenny quotes primarily benefit the professional traders who subscribe to market data systems displaying subpenny prices and who use fast systems to transmit their orders to take advantage of such prices. As a result, most investors do not benefit from subpenny quotes because they do not use these systems and because many broker-dealers do not accept orders from their customers in subpenny increments. In addition, participants said that subpenny quotes allow some traders to step ahead of others' orders for an economically insignificant amount. They said this discourages other traders from submitting limit orders and reduces overall transparency and liquidity in the markets.

Based on the work GAO has conducted to date, including a limited review of comments on SEC's proposal to ban subpenny quoting, most market participants support SEC's proposed action. However, some organizations opposed to the ban said that it could reduce the ability of traders to offer better prices, stifle technological innovation, and reduce market participants' incentive to invest in better systems. Although some electronic trading systems supported the ban, others indicated that the decision to use subpenny quotes should be left to market participants who, as technology advances, may increasingly find subpenny quotes more useful than they do today. In addition to reviewing subpenny pricing, GAO continues to review the broader impacts of decimal pricing on markets, securities firms, and investors.
As part of this work, we plan to conduct original analysis using a comprehensive database of trades and quotes from U.S. markets to identify trends in quoted spreads, clustering of quotes and trades across certain prices, and other potential changes since decimal pricing was introduced.
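To illustrate the kind of clustering measurement described in this statement, the following is a minimal sketch assuming TAQ-style trade records loaded into a pandas DataFrame. The column name, function, and toy sample are hypothetical and do not represent the methodology of the published study we plan to extend.

```python
import pandas as pd

def clustering_shares(trades: pd.DataFrame) -> pd.Series:
    """Share of trades landing on each final penny digit (0-9).

    Absent clustering, each of the 10 possible final penny digits of a
    price should attract roughly 10 percent of trades; overrepresentation
    of 0 and 5 (multiples of 10 cents and 5 cents) would suggest that
    clustering persists despite the one-cent tick.
    """
    # Convert dollar prices to integer cents, then take the last digit.
    cents = (trades["price"] * 100).round().astype(int)
    last_digit = cents % 10
    return last_digit.value_counts(normalize=True).sort_index()

# Toy sample: prices ending in multiples of 5 cents dominate.
sample = pd.DataFrame({"price": [10.00, 10.05, 10.10, 10.05, 10.03, 10.10]})
print(clustering_shares(sample))
```

An analysis along these lines, run over the full TAQ record rather than a toy sample, is one way the distribution of quotes and executions across price points could be compared against the even distribution that an unclustered market would produce.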
As our past work has found, climate-related and extreme weather impacts on physical infrastructure such as buildings, roads, and bridges, as well as on federal lands, increase federal fiscal exposures. Infrastructure is typically designed to withstand and operate within historical climate patterns. However, according to NRC, as the climate changes, historical patterns do not provide reliable predictions of the future, in particular those related to extreme weather events. Thus, infrastructure designs may underestimate potential climate-related impacts over their design life, which can range up to 50 to 100 years. Federal agencies responsible for the long-term management of federal lands face similar impacts. Climate-related impacts can increase the operating and maintenance costs of infrastructure and federal lands or decrease the infrastructure's life span, leading to increased fiscal exposures for the federal government that are not fully reflected in the budget. Key examples from our recent work include (1) Department of Defense (DOD) facilities, (2) other large federal facilities such as National Aeronautics and Space Administration (NASA) centers, and (3) federal lands such as National Parks.

DOD manages a global real-estate portfolio that includes over 555,000 facilities and 28 million acres of land with a replacement value that DOD estimates at close to $850 billion. Within the United States, the department's extensive infrastructure of bases and training ranges, which is critical to maintaining military readiness, extends across the country, including Alaska and Hawaii. DOD incurs substantial costs for infrastructure, with a base budget for military construction and family housing totaling more than $9.8 billion in fiscal year 2014. As we reported in May 2014, this infrastructure is vulnerable to the potential impacts of climate change, including increased drought and more frequent and severe extreme weather events in certain locations. In its 2014 Quadrennial Defense Review, DOD stated that the impacts of climate change may increase the frequency, scale, and complexity of future missions, while undermining the capacity of domestic installations to support training activities. For example, in our May 2014 report on DOD infrastructure adaptation (GAO-14-446), we found that drought contributed to wildfires at an Army installation in Alaska that delayed certain units' training, affected the use of certain systems in training, and decreased the realism of the training (see fig. 1). (Adaptation is defined as adjustments to natural or human systems in response to actual or expected climate change.)

The federal government owns and operates hundreds of thousands of non-defense buildings and facilities that a changing climate could affect. For example, NASA's real property holdings include more than 5,000 buildings and other structures such as wind tunnels, laboratories, launch pads, and test stands. In total, these NASA assets, many of which are located in vulnerable coastal areas, represent more than $32 billion in current replacement value. Our April 2013 report on infrastructure adaptation showed the vulnerability of Johnson Space Center and its mission control center, often referred to as the nerve center for America's human space program. As shown in figure 3, the center is located in Houston, Texas, near Galveston Bay and the Gulf of Mexico. Johnson Space Center's facilities, conservatively valued at $2.3 billion, are vulnerable to storm surge and sea level rise because of their location on the Gulf Coast.
The federal government manages nearly 30 percent of the land in the United States for a variety of purposes, such as recreation, grazing, timber, and habitat for fish and wildlife. Specifically, federal agencies manage natural resources on about 650 million acres of land, including 401 national park units and 155 national forests. As we reported in May 2013, these resources are vulnerable to changes in the climate, including increases in air and water temperatures, wildfires, and drought; forests stressed by drought becoming more vulnerable to insect infestations; rising sea levels; and reduced snow cover and retreating glaciers. In addition, various species are expected to be at risk of becoming extinct due to the loss of habitat critical to their survival. Many of these changes have already been observed on federally managed lands and waters and are expected to continue, and one of the areas where the federal government's fiscal exposure is expected to increase is in its role as the manager of large amounts of land and other natural resources. According to USGCRP's May 2014 National Climate Assessment, hotter and drier weather and earlier snowmelt mean that wildfires in the West start earlier in the spring, last later into the fall, and burn more acres. Appropriations for the federal government's wildland fire management activities have tripled, averaging over $3 billion annually in recent years, up from about $1 billion in fiscal year 1999.

As we have previously reported, improved climate-related technical assistance to all levels of government can help limit federal fiscal exposures. Existing federal efforts encourage a decentralized approach to such assistance, with federal agencies incorporating climate-related information into their planning, operations, policies, and programs and establishing their own methods for collecting, storing, and disseminating climate-related data. Reflecting this approach, technical assistance from the federal government to state and local governments also exists in an uncoordinated confederation of networks and institutions. As we reported in our February 2013 high-risk update, the challenge is to develop a cohesive approach at the federal level that also informs action at the state and local levels.

The Executive Office of the President and federal agencies have many efforts underway to increase the resilience of federal infrastructure and programs. For example, executive orders issued in 2009 and 2013 directed agencies to create climate change adaptation plans that integrate consideration of climate change into their operations and overall mission objectives, including the costs and benefits of improving climate adaptation and resilience with real-property investments and construction of new facilities. Recognizing these and many other emerging efforts, our prior work shows that federal decision makers still need help understanding how to build resilience into their infrastructure and planning processes, even with the creation of strategic policy documents and high-level agency guidance. For example, in our May 2014 report (GAO-14-446), we found that DOD requires selected infrastructure planning efforts for existing and future infrastructure to account for climate change impacts, but its planners did not have key information necessary to make decisions that account for climate and related risks. We recommended that DOD provide further information to installation planners and clarify actions that account for climate change in planning documents. DOD concurred with our recommendations.
The federal government invests tens of billions of dollars annually in infrastructure projects prioritized and supervised by state and local governments. In total, the United States has about 4 million miles of roads and 30,000 wastewater treatment and collection facilities. According to a 2010 Congressional Budget Office report, total public spending on transportation and water infrastructure exceeds $300 billion annually, with roughly 25 percent of this amount coming from the federal government and the rest coming from state and local governments. (Congressional Budget Office, Public Spending on Transportation and Water Infrastructure, Pub. No. 4088 (Washington, D.C.: November 2010).) However, the federal government plays a limited role in project-level planning for transportation and wastewater infrastructure, and state and local efforts to consider climate change in infrastructure planning have occurred primarily on a limited, ad hoc basis. The federal government has a key interest in helping state and local decision makers increase their resilience to climate change and extreme weather events because uninsured losses may increase the federal government's fiscal exposure through federal disaster assistance programs. (An estimate cited in this statement put a potential reduction in the national gross domestic product at about $7.8 billion; a figure in the original statement shows Louisiana State Highway 1 leading to Port Fourchon.)

We found in April 2013 that infrastructure decision makers have not systematically incorporated potential climate change impacts in planning for roads, bridges, and wastewater management systems because, among other factors, they face challenges identifying and obtaining available climate change information best suited for their projects. Even when good scientific information is available, it may not be in the actionable, practical form needed for decision makers to use in planning and designing infrastructure. Such decision makers work with traditional engineering processes, which often require very specific and discrete information. Moreover, local decision makers, who, in this case, specialize in infrastructure planning, not climate science, need assistance from experts who can help them translate available climate change information into something that is locally relevant. In our site visits to several locations where decision makers overcame these challenges, including Louisiana State Highway 1, state and local officials emphasized the role that the federal government could play in helping to increase local resilience.

Any effective adaptation strategy must recognize that state and local governments are on the front lines in both responding to immediate weather-related disasters and in preparing for the potential longer-term impacts associated with climate change. We reported in October 2009 that insufficient site-specific data, such as local temperature and precipitation projections, complicate state and local decisions to justify the current costs of adaptation efforts for potentially less certain future benefits. We recommended that the appropriate entities within the Executive Office of the President develop a strategic plan for adaptation that, among other things, identifies mechanisms to increase the capacity of federal, state, and local agencies to incorporate information about current and potential climate change impacts into government decision making.
USGCRP's April 2012 strategic plan for climate change science recognizes this need by identifying enhanced information management and sharing as a key objective. According to this plan, USGCRP is pursuing the development of a global change information system to leverage existing climate-related tools, services, and portals from federal agencies. In our April 2013 report, we concluded that the federal government could help state and local efforts to increase their resilience by (1) improving access to and use of available climate-related information, (2) providing officials with improved access to technical assistance, and (3) helping officials consider climate change in their planning processes. As a result, we recommended, among other things, that the Executive Director of USGCRP or other federal entity designated by the Executive Office of the President work with relevant agencies to identify for decision makers the "best available" climate-related information for infrastructure planning and update this information over time, and to clarify sources of local assistance for incorporating climate-related information and analysis into infrastructure planning, and communicate how such assistance will be provided over time.

These entities have not directly responded to our recommendations, but the President's June 2013 Climate Action Plan and November 2013 Executive Order 13653 drew attention to the need for improved technical assistance. For example, the Executive Order directs numerous federal agencies, supported by USGCRP, to work together to develop and provide authoritative, easily accessible, usable, and timely data, information, and decision-support tools on climate preparedness and resilience. In addition, on July 16, 2014, the President announced a series of actions to help state, local, and tribal leaders prepare their communities for the impacts of climate change by developing more resilient infrastructure and rebuilding existing infrastructure stronger and smarter.

We have work under way assessing the strengths and limitations of governmentwide options to meet the climate-related information needs of federal, state, local, and private sector decision makers. We also have work under way exploring, among other things, the risks extreme weather events and climate change pose to public health, agriculture, public transit systems, and federal insurance programs. This work may help identify other steps the federal government could take to limit its fiscal exposure and make our communities more resilient to extreme weather events.

Chairman Murray, Ranking Member Sessions, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions you have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Alfredo Gomez, Director; Michael Hix, Assistant Director; Jeanette Soares; Kiki Theodoropoulos; and Joseph Dean Thompson made key contributions to this testimony.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
According to the United States Global Change Research Program, certain types of extreme weather events have become more frequent or intense, including prolonged periods of heat, heavy downpours, and, in some regions, floods and droughts. While it is not possible to link any individual weather event to climate change, the impacts of these events affect many sectors of our economy, including the budgets of federal, state, and local governments. GAO focuses particular attention on government operations it identifies as posing a "high risk" to the American taxpayer and, in February 2013, added to its High Risk List the area Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks. GAO's past work has identified a variety of fiscal exposures: responsibilities, programs, and activities that may explicitly or implicitly expose the federal government to future spending. This testimony is based on reports GAO issued from August 2007 to May 2014, and discusses (1) federal fiscal exposures resulting from climate-related and extreme weather impacts on critical infrastructure and federal lands, and (2) how improved federal technical assistance to all levels of government can help reduce climate-related fiscal exposures. GAO is not making new recommendations but has made numerous recommendations in prior reports on this topic, which are in varying states of implementation by the Executive Office of the President and federal agencies.

Climate change and related extreme weather impacts on infrastructure and federal lands increase fiscal exposures that the federal budget does not fully reflect. Investing in resilience, that is, actions to reduce potential future losses rather than waiting for an event to occur and paying for recovery afterward, can reduce the potential impacts of climate-related events. Implementing resilience measures creates additional up-front costs but could also confer benefits, such as a reduction in future damages from climate-related events. Key examples of vulnerable infrastructure and federal lands GAO has identified include:

Department of Defense (DOD) facilities. DOD manages a global real-estate portfolio that includes over 555,000 facilities and 28 million acres of land with a replacement value DOD estimates at close to $850 billion. This infrastructure is vulnerable to the potential impacts of climate change and related extreme weather events. For example, in May 2014, GAO reported that a military base in the desert Southwest experienced a rain event in August 2013 in which about 1 year's worth of rain fell in 80 minutes. The flooding caused by the storm damaged more than 160 facilities, 8 roads, 1 bridge, and 11,000 linear feet of fencing, resulting in an estimated $64 million in damages.

Other large federal facilities. The federal government owns and operates hundreds of thousands of other facilities that a changing climate could affect. For example, the National Aeronautics and Space Administration (NASA) manages more than 5,000 buildings and other structures. GAO reported in April 2013 that, in total, these NASA assets, many of which are in coastal areas vulnerable to storm surge and sea level rise, represent more than $32 billion in current replacement value.

Federal lands. The federal government manages nearly 30 percent of the land in the United States (about 650 million acres), including 401 national park units and 155 national forests.
GAO reported in May 2013 that these resources are vulnerable to changes in the climate, including the possibility of more frequent and severe droughts and wildfires. Appropriations for federal wildland fire management activities have tripled since 1999, averaging over $3 billion annually in recent years. GAO has reported that improved climate-related technical assistance to all levels of government can help limit federal fiscal exposures. The federal government invests tens of billions of dollars annually in infrastructure projects that state and local governments prioritize, such as roads and bridges. Total public spending on transportation and water infrastructure exceeds $300 billion annually, with about 25 percent coming from the federal government and the rest from state and local governments. GAO's April 2013 report on infrastructure adaptation concluded that the federal government could help state and local efforts to increase their resilience by (1) improving access to and use of available climate-related information, (2) providing officials with improved access to technical assistance, and (3) helping officials consider climate change in their planning processes.
The U.S. military routinely uses contracted support in contingency operations. Military forces will often be significantly augmented with contracted support because of the continual introduction of high-technology equipment, coupled with force structure and manning limitations, and the high pace of operations. Accordingly, DOD has recognized that the planning for and integration of contracted support into joint operations is important for the successful execution of military operations. Moreover, the Secretary of Defense's January 2011 memorandum addresses the need to better plan for operational contract support at the strategic and operational levels.

The following describes the roles of various DOD offices involved in planning for operational contract support:

The Under Secretary of Defense for Personnel and Readiness is responsible for policy, plans, and program development for the total force, which includes military, DOD civilian, and DOD contractor personnel.

The Under Secretary of Defense for Acquisition, Technology and Logistics has the overall responsibility for the performance of the department's acquisition system, including establishing and publishing policies and procedures governing the operations of the acquisition system and the administrative oversight of defense contracts.

The Chairman of the Joint Chiefs of Staff has specific responsibilities in the areas of strategic direction as well as in strategic and contingency planning. The Joint Staff Logistics Directorate (J-4) provides plans, policy, guidance, and oversight on joint logistics, including joint contingency operational contract support matters.

The Army, Navy, Marine Corps, and Air Force (under their respective Secretaries) are responsible for planning and executing contract support to their own forces unless directed otherwise by a combatant commander. The Secretaries of the military departments have been tasked in the Secretary of Defense's January 2011 memorandum to assess how total force data (that is, the mix of military forces, contractors, and civilians) can inform planning and to assess opportunities for in-sourcing contracted capabilities that represent a high risk to the warfighter.

The geographic combatant commands plan and oversee the military operations for their areas of responsibility. Combatant commanders are assigned military service components that assist them with further planning and execution of the missions (see fig. 1).

The Defense Logistics Agency, at the request of the combatant commands, has two expert planners from its Joint Contingency Acquisition Support Office placed in the logistics offices at each combatant command to improve the incorporation of operational contract support into combatant command plans.

The combatant commands and their components create plans to prepare for possible missions in their area. This planning begins with broad strategic guidance provided by the President, the Secretary of Defense, and the Chairman of the Joint Chiefs of Staff. This strategic guidance includes DOD documents, such as the Guidance for the Employment of the Force and the Joint Strategic Capabilities Plan, which tell combatant commanders what to plan for within their areas of responsibility. On the basis of the strategic guidance, combatant command planners write an operation plan to address particular contingencies. During this stage, a combatant commander can also task and provide guidance to the component commands to develop supporting plans for an operation plan.
DOD doctrine suggests that, as a plan is developed, frequent dialogue between planners and senior DOD leadership is necessary to ensure that results are sufficient and feasible to meet mission objectives. An operation plan describes how DOD will respond to a potential contingency that might require the use of military force. Such plans are used to deal with a wide range of events, such as terrorism, hostile foreign nations, and natural disasters. An operation plan consists of a base plan and annexes. A base plan describes the concept of operations, major forces, sustainment concept, and anticipated timelines for completing the mission. Base plans are written following a five-paragraph structure—Situation, Mission, Execution, Administration and Logistics, and Command and Control. Plans will generally include assumptions that are relevant to the development or successful execution of the plan and the concept of operation that the commander plans to use to accomplish the mission, including the forces involved, the phasing of operations, and the general nature and purpose of operations to be conducted. In addition to the base plan, operation plans sometimes include annexes that provide further details on areas such as intelligence (Annex B), operations (Annex C), logistics (Annex D), personnel (Annex E), communications (Annex K), and operational contract support (Annex W)—the latter generally includes information such as contract support, contracting capabilities, and capacities support estimates. While Annex D includes operational contract support considerations, we have previously reported that because DOD has typically relied on contractors in areas beyond logistics, it is important for DOD to conduct up-front planning for the use of contractors in all functional areas, not just logistics. In 2010, we recommended that the Chairman of the Joint Chiefs of Staff require all base plans and nonlogistics annexes (e.g., intelligence and communication) to address the potential need for contractor support where appropriate. OSD and the Joint Staff have taken steps to integrate operational contract support into departmental planning, but the Navy, Marine Corps, and Air Force have not issued comprehensive guidance for integrating operational contract support throughout each service’s planning efforts. OSD and the Joint Staff have issued several new or revised policies, undertaken other actions, and are revising other policies regarding operational contract support. These efforts are described in figure 2. OSD and the Joint Staff have issued several new or revised policies on planning for operational contract support. In January 2011, the Secretary of Defense issued a memorandum to address the risks introduced by DOD’s level of dependency on contractors, its future total force mix, and the need to better plan for operational contract support in the future. The memorandum required, among other things, that the Under Secretary of Defense for Policy integrate operational contract support considerations into strategic planning documents and provide policy guidance on planning for contracted support in force planning scenario development. Also, the memorandum required that the Chairman of the Joint Chiefs of Staff collaboratively develop procedures to support operational contract support planning in the Joint Operation Planning and Execution System, including contractor support estimates and visibility of contractors accompanying the force. 
A memorandum issued by the Director of the Joint Staff in June 2011 further assigned Joint Staff directors to either lead or support specific tasks to implement the Secretary of Defense’s direction. Working with the Joint Staff, the Under Secretary of Defense for Policy completed revisions in April 2011 to the Guidance for the Employment of the Force requiring that the combatant commands, together with their service components and relevant combat support agencies, plan for the integration of contracted support and contractor management in all phases of military operations. Additionally, the Joint Staff completed revisions in April 2011 to the Joint Strategic Capabilities Plan. The revisions included the requirement that operational contract support planning must occur at all plan levels and that, at a minimum, plans will identify anticipated contract support requirements by joint capability area, phase of operation, and area of need. In addition to policy revisions, OSD and the Joint Staff have undertaken additional actions to ensure that planners better integrate operational contract support into the planning process. For example, at the request of the combatant commands, the Joint Staff has conducted operational contract support training that explains changes to the planning requirements for the integration of operational contract support. Specifically, the Joint Staff held training seminars for operational contract support planners at Central Command and Southern Command in November 2011, at Pacific Command in January 2012, and at Africa and European Commands in May 2012 to highlight recent changes in guidance and processes. In addition, the Functional Capabilities Integration Board, which was created in 2010, is actively monitoring ongoing operational contract support–related efforts across DOD and the progress toward timely completion of the direction in the Secretary’s January 2011 memorandum. Chaired by the Deputy Assistant Secretary of Defense for Program Support and the Vice Director for Logistics of the Joint Staff, the Functional Capabilities Integration Board is a senior executive–level body that includes officials from OSD, the military services, defense agencies, and the Joint Staff. The board meets quarterly to conduct independent assessments and analyses of operational contract support capabilities (to include supporting doctrine, organization, training, materiel, leadership and education, personnel, and facilities of the armed forces). It also seeks to establish and assess ways to improve performance and processes for assessing operational contract support readiness. OSD and the Joint Staff are also revising other guidance on operational contract support and examining the extent to which operational contract support is integrated in DOD’s planning for operations, as noted in figure 2. For example, the Joint Staff is overseeing an effort to revise a key doctrine document, Joint Publication 4-10, to incorporate, among other things, lessons learned from the Iraq and Afghanistan wars. Joint Publication 4-10 establishes doctrine for planning, conducting, and assessing operational contract support integration and contractor management functions in support of joint operations. The joint doctrine in the publication applies to the combatant commands and their service components, subunified commands, joint task forces, the services, and defense agencies in support of joint operations. 
According to Joint Staff officials, they expect to complete revisions to the guidance in November 2013. Moreover, the Joint Staff recently published, on October 18, 2012, the Chairman of the Joint Chiefs of Staff Manual 3130.03 on Adaptive Planning and Execution, Planning Formats and Guidance, which replaces the Joint Operation Planning and Execution System Volume II. This manual will be used by joint commanders and war planners to monitor, plan, and execute mobilization, deployment, employment, and sustainment activities associated with joint operations and provide users with access to joint operations planning policies and procedures. Specifically, the Adaptive Planning and Execution manual requires that functional planners identify major support functions planned for commercial support sourcing. The manual also references Annex W in the instructions for many of the individual annexes. Also, the Adaptive Planning and Execution manual, in keeping with the Secretary of Defense’s memorandum requirements, requires the expansion of the Annex W, the operational contract support annex, in operation plans to include appendixes on (1) estimates of contracting capabilities and capacities support, (2) a contractor management plan, and (3) estimates of contractor support. As stated in the recently revised Joint Strategic Capabilities Plan, operational contract support planning is now required in much greater detail because of the volume of participation and resulting lessons learned in current operations. Additionally, the Joint Staff is drafting the Chairman of the Joint Chiefs of Staff Manual 4300.01, which will include information on integrating operational contract support in joint logistics planning. Specifically, the manual will assist logistics planners in developing procedures and guidance for a logistics planning process that effectively integrates, synchronizes, prioritizes, and focuses joint logistics capabilities on achieving a supported commander’s operational objectives and desired effects for various types of plans, including contingency plans, tasked in the Joint Strategic Capabilities Plan or as directed by the combatant commander. The Joint Staff’s revisions will include setting minimum requirements for operational contract support by plan level and providing templates and tools for planners to use to estimate contractor support and contracting capabilities. DOD officials stated that this manual will be published following revisions to the Logistics Supplement to the Joint Strategic Capabilities Plan, expected by the end of 2012. Finally, OSD has drafted an action plan, in conjunction with the Joint Staff and military services, that will establish operational contract support objectives and performance measures in the department’s attempt to fully institutionalize operational contract support by the end of fiscal year 2016. Specifically, the draft action plan identifies major actions and the projected cost to institutionalize operational contract support capabilities and capacity across the doctrine, organization, training, materiel, leadership and education, personnel, facilities and policy spectrum. It identifies timelines and lead organizations for specified tasks to help guide operational contract support planning and programming initiatives required to resolve urgent capability gaps. DOD officials stated that they expect the draft of the action plan to be approved by January 2013. 
The military services, with the exception of the Army, have not issued comprehensive guidance to enable the integration of operational contract support into their planning efforts, thus limiting the institutionalization of operational contract support at the service level. Joint Publication 4-10, issued in 2008, notes that each military service, under its respective military department, is responsible for planning and executing contracting support to its forces, unless otherwise directed by the combatant commander. Joint Publication 4-10 also notes that the military services are responsible for integrating identified contract requirements into training. Further, the Secretary of Defense’s January 2011 memorandum also directs the military services to take certain actions, which could improve how they plan for and use operational contract support. In large part because of the Army’s leading role in the major contingencies over the past decade that required it to employ operational contract support, the Army has issued guidance and created various organizations for integrating contract support into its planning and for developing related training. The Secretary of the Army established an independent panel, known as the Gansler Commission, which issued a final report in October 2007 that highlighted issues and needs for better military operations and cited critical deficiencies in the Army’s contracting and contract management. Since that report’s issuance, the Army has made it a priority to address highlighted deficiencies in areas such as guidance and training. In particular, the Army has issued service-specific guidance for integrating contract support into the service. For example, the Army issued Army Regulation 715-9 in 2011 that provides guidance regarding planning and managing operational contract support for the nonacquisition force, such as operational commanders or contracting officer’s representatives. It describes responsibilities, policy, and implementing procedures for operational contract support. Specifically, the regulation describes, among other things, planning, requirements definition, and oversight in the context of contracted support. For example, the regulation notes that, in general, contracted support will be utilized after full consideration of all sources of support, including deployable civilians. The Army also developed a manual containing tactics, techniques, and procedures for operational contract support. The manual provides “how to” guidance about operational contract support for Army operational commanders and their nonacquisition officer staff. In addition, the manual describes the roles of Army officials and organizations regarding operational contract support and serves as the primary reference document for execution of operational contract support planning, integration, and oversight tasks provided in other guidance, including Army Regulation 715-9. Moreover, the manual contains checklists that include considerations related to operational contract support. Furthermore, in direct response to the Gansler Commission report, the Army created the Army Contracting Command in 2008, which performs the majority of the contracting work for the Army, including assisting in operational contract support planning needs, training development, and execution. Other Army entities, along with DOD’s Defense Acquisition University, have developed and are implementing additional operational contract support training initiatives. 
Key Army training initiatives include the following:

The Expeditionary Contracting Command, a subordinate of the Army Contracting Command, provides contracting support to the Army and other federal organizations at installations outside of the United States. Additionally, the Expeditionary Contracting Command has seven contracting support brigades that provide direct support to Army service component commanders, including providing predeployment contingency contracting unit training, which includes training contracting officer's representatives.

The Assistant Secretary of the Army (Acquisition, Logistics and Technology) Integration Office works with other entities, including the Expeditionary Contracting Command, to develop collective and individual training standards and material for acquisition and nonacquisition personnel involved with the planning, requirements definition, contracting, and management of operational contract support. Also, the integration office oversees the incorporation of planning of operational contract support for brigade-level training.

The Army Contracting Command, along with other entities, developed and launched enhanced training of contracting officer's representatives through the Defense Acquisition University. This course assists with deploying more prepared and trained contracting officer's representatives into contingencies.

Finally, according to statements of senior Army officials before the Commission on Wartime Contracting, the Assistant Secretary of the Army (Acquisition, Logistics and Technology) chartered the Operational Contracting Support and Policy Directorate in December 2009. As described in an Army briefing, this directorate develops, issues, manages, and measures the effectiveness of policies regarding operational contract support. According to the briefing, it provides strategic contract management and oversight of the U.S. Central Command Joint Theater Support Contracting Command, an organization that provides theater contracting support to the combined joint operations area of Afghanistan. The directorate provides oversight of contingency contracting operations in Iraq and Afghanistan as the focal point for the Army for contracting in-theater. According to the briefing, among other activities, such as validation of operational contract support in doctrine, organization, training, materiel, leadership, and personnel considerations, members of the directorate also serve on DOD's Operational Contract Support Functional Capabilities and Integration Board. In addition, this directorate, along with the Army's other efforts, assists the Army in meeting the operational contract support requirements in both Joint Publication 4-10 and the Secretary of Defense's January 2011 memorandum.
While the Navy, Marine Corps, and Air Force have developed some training and other efforts to improve the planning and use of operational contract support, they have not developed service-specific guidance detailing how operational contract support will be integrated into each of their services' planning and execution efforts for contingency operations. Navy, Marine Corps, and Air Force officials told us that they generally do not have a major role in operational contract support because the Army has been the lead service for contracting in present conflicts.

Navy officials acknowledged that some sailors need to understand the role that operational contract support plays in their deployed locations. As a result, Navy officials have included information about operational contract support in logistics training, which provides a basic overview of contract execution. However, as acknowledged by Navy officials, the Navy has not issued guidance that includes information regarding roles and responsibilities for ensuring better execution of planning, integration, and oversight of operational contract support within the service.

Marine Corps officials acknowledged that the Marine Corps has also not issued service-wide, specific operational contract support guidance, although they explained that the service's role in the Afghanistan and Iraq contingencies showed that operational contract support was important and that the Marine Corps needed specific, related training. In Afghanistan, the Marine Corps established two operational contract support cells, which have managed contract support and management-related activities for the service and provided oversight of operational contract support training to relevant personnel. In addition, a 2009 Marine Corps reference publication on contingency contracting contains doctrinal information for commanders and their staff members to plan for and obtain contracting support when deployed. While this information is helpful to commanders and their staff to understand the process for contingency contracting, the document does not comprehensively describe how the Marine Corps plans to integrate operational contract support throughout the service. Further, as part of planning before deployment, the Marine Corps identified and trained contracting officer's representatives prior to their deployment. Additionally, officials told us that the Marine Corps has begun to include some operational contract support in training, in areas such as regulations related to contracting. While the Marine Corps has incorporated some operational contract support in predeployment planning, the training is limited and, according to Marine Corps officials, it is at the commanders' discretion to include it in their units' training. According to Marine Corps officials, the Marine Corps has not provided guidance detailing the roles and responsibilities for nonacquisition personnel on how operational contract support will be integrated into the Marine Corps' planning and execution efforts for contingency operations. Because training often changes, there is no permanent mechanism to maintain operational contract support competencies, such as contractor oversight, in planning or training within the Marine Corps, thus limiting full institutionalization of operational contract support.

Air Force officials acknowledged that the Air Force has not developed service-wide guidance regarding the integration of operational contract support within the service.
The guidance that officials did identify is focused on the role of contracting officer's representatives and actions for deployed commanders and contingency contracting officers to take during initial deployment (such as establishing shelter requirements and other needs of the unit). We identified similar acquisition-related implementing guidance related to basic contingency contracting actions during phases of an operation, as well as implementing guidance regarding review of operation plans for contractor support integration plans, contractor management plans, and other contracting considerations. However, these documents do not comprehensively describe how the Air Force plans to integrate operational contract support throughout the service. Further, it is not clear how familiar officials are with these documents, as they were not mentioned during the course of our discussions. The Air Force's current training related to operational contract support is limited to the contract familiarization training provided to contracting officer's representatives, which is typical of the training provided to contracting officer's representatives in all services. As a result, the integration of operational contract support throughout the Air Force's planning is limited because, as acknowledged by Air Force officials, the Air Force has not issued comprehensive guidance explaining the roles and responsibilities for the execution of planning, integration, and oversight of operational contract support within the service.

Thus, while the Navy, Marine Corps, and Air Force have developed some training and other individual efforts to familiarize servicemembers with operational contract support, these services have not issued comprehensive guidance to assist in fully institutionalizing operational contract support. DOD and service officials told us that they do not need to plan for operational contract support in advance because the Army has been the lead service in recent conflicts. However, according to DOD, the Navy, Marine Corps, and Air Force spent over a billion dollars combined for contracted services in Afghanistan in fiscal year 2011, demonstrating that these services have used contracted support for which planning should have occurred. Without specific service-wide guidance to help institutionalize operational contract support, the other services may not fully understand their role in operational contract support and may not be prepared to execute operational contract support in the future, when it is possible that one of these services, instead of the Army, will play a leading role. Further, unless the services' guidance describes how each service plans to integrate operational contract support into each organization, including planning for contingency operations and training, the other services' planning efforts may not reflect the full extent of the use of contract support and the attendant cost and requirements for oversight.

The combatant commands and their components have begun to incorporate operational contract support into their planning, but they have not fully integrated operational contract support into their planning for contingencies. While the combatant commands and their components have taken steps to integrate operational contract support into contingency planning, mostly in the area of logistics, they are not planning for such support across all areas, such as intelligence and communications, that are likely to use contractors in future contingencies.
We found that DOD’s efforts to fully integrate operational contract support at the command and component levels are hindered by not training all planners about new operational contract support requirements, a lack of focus of operational contract support planners on areas beyond logistics, and not providing operational contract support planning expertise at the commands’ components. The combatant commands and their components have taken some positive steps to integrate operational contract support in their planning processes. According to DOD officials, at the time of this review, there were 95 plans with 45 approved Annex Ws. In addition, our current review of selected operation plans at each of the combatant commands found that officials are now including planning assumptions about operational contract support within either the base plan or Annex W. For example, in a draft humanitarian assistance and foreign disaster response plan that we reviewed, officials at Southern Command had included an Annex W that integrated assumptions for operational contract support. Similarly, officials incorporated assumptions for operational contract support in operation base plans and Annex Ws that we examined at Central Command, Pacific Command, Africa Command, and European Command. This integration of operational contract support is an improvement from February 2010 when we found that only 4 of 89 operation plans had approved Annex Ws. Also, the Joint Staff has developed training for logistics officials at the combatant commands and components to better understand how to integrate operational contract support into their planning processes. The Joint Staff’s training informs planners of requirements in the Guidance for the Employment of the Force that the combatant commands, together with their service components and relevant combat support agencies, plan for the integration of contracted support into all phases of military operations. The training also makes officials aware of the new requirement in the Joint Strategic Capabilities Plan that states that geographic combatant commands, together with their service components and logistics planners, will synchronize and integrate contracted support into military operations. According to U.S. Central Command officials, they have already begun to employ the new guidance shared with them during the training for developing the Annex W. We also noted several other positive efforts to integrate operational contract support into planning. For example, the Defense Logistics Agency, at the request of the combatant commands, has assigned two expert planners from its Joint Contingency Acquisition Support Office to the logistics offices at each combatant command to improve the incorporation of operational contract support into combatant command plans. Also, U.S. Africa Command has developed its own instruction to help the command integrate operational contract support into its planning process. While the combatant commands and their components have taken steps to integrate operational contract support into contingency planning, mostly in the area of logistics, they are not planning for such support across all areas—such as intelligence and communications—that are likely to use contractors in future contingencies. 
Under regulations and DOD Instruction 3020.41, when officials anticipate the need for contractor personnel and equipment to support military operations, military planners are directed to develop orchestrated, synchronized, detailed, and fully developed contract support integration plans and contractor management plans as components of concept plans and operational plans in accordance with appropriate strategic planning guidance. The regulations and instruction also state that plans should contain additional contract support guidance, as appropriate, in applicable annexes and appendixes within the respective plans. Our previous work has shown that DOD has typically relied on contractors in areas beyond logistics, and thus we have emphasized the importance of up-front planning for their use across all functional areas.

A Joint Staff official indicated that, in 2011, the Joint Staff added a requirement to incorporate planning for operational contract support in the base plan and Annex W of all operation plans, but the official noted that the requirement did not include incorporating operational contract support in the nonlogistics annexes of plans. In a briefing document on changes and anticipated changes to strategic and planning guidance, the Joint Staff suggested that, among other things, planners would be required to include assumptions for the use of contractor support in paragraph one of the base plan and provide estimates of contractors in the Annex W. Although the combatant commands and their components have integrated operational contract support in the base plans and Annex Ws we reviewed, they did not, at the time we reviewed their plans, have more-specific and comprehensive guidance within the key operations planning system manual for integrating planning for operational contract support across all functional areas where contractors might be used.

As a result, officials working in areas outside of logistics were not integrating operational contract support into their respective sections of plans. For example, nonlogistics officials at Central Command, such as those in the communications (Annex K) and intelligence (Annex B) divisions, stated that they do not plan for operational contract support in their respective annexes although contract support had been utilized in the past in these areas. Similarly, at Southern Command, we found that operational contract support was incorporated in Annex D (logistics) and Annex W (operational contract support), but its Annex Ks (communications) and Annex Bs (intelligence) did not directly contain considerations for operational contract support. Moreover, nonlogistics officials at Africa Command stated that they did not incorporate considerations of operational contract support in annexes other than Annex W. Some nonlogistics officials at Central Command further stated that they tend to assume the logistics planners will address the need to incorporate operational contract support throughout operation plans, but we found that this was not occurring. Finally, nonlogistics officials at Pacific Command stated that they had also used contracted support in past operations but believed they did not need to plan for operational contract support until a contingency was under way.
The Under Secretary of Defense for Policy completed revisions in April 2011 to include broad language in the Guidance for the Employment of the Force requiring that the combatant commands plan for the integration of contracted support and contractor management in all phases of military operations. However, only in October 2012 did the Joint Staff issue the Adaptive Planning and Execution manual, which calls on functional planners beyond the logistics area to identify major support functions planned for commercial support sourcing. Because the manual is so new, its effect remains to be determined. Until all functional area planners begin to integrate operational contract support into their respective sections of plans, the combatant commands and their components risk being unprepared to fully plan for the use of contractors in contingencies. Without plans that adequately consider the use of contract support in areas beyond logistics, DOD has an increased risk of being unprepared to manage deployed contractor personnel and services and to provide necessary oversight during contingencies. As previously discussed, the Joint Staff J-4 has developed training on the requirements for planning for operational contract support at the combatant commands and their components, but, up to this point, this training has been focused on planners only in the logistics area and not on planners in all functional areas. Joint Staff J-4 officials stated that they are developing an operational contract support planning and execution training course to train all strategic and operational planners on the specific requirements and complexities of planning for operational contract support in all functional areas and types of operations. However, this training has not been fully developed and implemented. According to regulations and DOD guidance, the Chairman of the Joint Chiefs of Staff will incorporate, where appropriate, program management and elements of the operational contract support guidance into joint training. Further, according to the Secretary of Defense's January 2011 memorandum, the Chairman of the Joint Chiefs of Staff shall sustain ongoing efforts and initiate new efforts to institutionalize processes, tools, and doctrine that facilitate and strengthen planning for operational contract support and, by extension, joint operational contract support training, exercises, and execution. Although the Joint Staff has developed new training for planning for operational contract support, this training is not focused on training officials from functional areas other than logistics. According to a Joint Staff official, while the training was open to all planners, it was focused on training operational contract support planners on the new operational contract support requirements and guidance. Given the lack of training across all functional areas and the absence, until recently, of more-specific and comprehensive joint operations planning guidance on including operational contract support in plans, we found that planning by the combatant commands and their components included limited integration of operational contract support in areas where contracted support has been used, such as communications or intelligence. For example, some officials involved in intelligence and communications planning at Central Command acknowledged that, while contracted support has been used in these areas in recent operations, they had not initially planned for the capability.
In addition, in the operation plans that we reviewed at the various combatant commands, the potential use of contracted support was not mentioned in any of the nonlogistics annexes. In our previous work, we reported weaknesses in DOD's planning for using contractors to support future military operations and noted that DOD risked being unprepared to provide the management and oversight of contractor personnel deployed in contingencies. In this review, we found that, in some cases, officials outside of the logistics area were unaware of the planning requirements for operational contract support that are outlined in the Guidance for the Employment of the Force. Without training to incorporate operational contract support into all areas of their plans, the combatant commands and components risk not fully understanding the extent to which they will be relying on contractors to support combat operations outside of the logistics area and may be unprepared to provide the necessary management and oversight of deployed contractor personnel. Until DOD takes steps to address these gaps, it may be limited in its ability to fully institutionalize operational contract support in planning for current and future contingency operations at both the combatant command and component levels—where the planning for specific operations generally occurs. At the request of the combatant commands, the Defense Logistics Agency has assigned planners from its Joint Contingency Acquisition Support Office to assist all combatant commands with the integration of operational contract support into the commands' planning. However, the two planners embedded within each combatant command are not integrated across all functional areas and are not always focused on working with the planners from all the functional directorates to integrate operational contract support in all areas of plans. According to guidance from the Chairman of the Joint Chiefs of Staff, OSD established the Joint Contingency Acquisition Support Office within the Defense Logistics Agency in July 2008 as one of several initiatives to respond to congressional mandates in the John Warner National Defense Authorization Act for Fiscal Year 2007. As explained in the guidance, DOD viewed the act as requiring the department to adopt a preplanned organizational approach to program management and to provide a deployable team during contingency operations when requested, to ensure jointness and cross-service coordination. According to the guidance, the purpose of the Joint Contingency Acquisition Support Office is to help synchronize, integrate, and manage the implementation and execution of operational contract support among diverse communities in support of U.S. government objectives during peacetime and contingency operations. As envisioned by the guidance, the planners of the Joint Contingency Acquisition Support Office assigned to each combatant command would enable joint operational contract support planning and strengthen combatant commands' planning for contingencies.
Specifically, among other things, the guidance directs the Joint Contingency Acquisition Support Office, when requested, to provide resources and expertise to the combatant commands to conduct deliberate operational contract support planning, and establish and implement program management strategies to address and resolve operational contract support challenges; assist combatant commands in preparation of plans and orders by drafting, coordinating, and establishing Annex Ws; and participate in exercises, training, meetings, and conferences to integrate and advance operational contract support across DOD. All planners from the Joint Contingency Acquisition Support Office have been organizationally placed within the logistics directorate at each of the combatant commands. According to combatant command officials, these planners have helped to integrate operational contract support into combatant command planning through their participation in planning meetings, communication of new planning requirements for operational contract support to the combatant command planners, and the development—and sometimes the writing—of the Annex W for certain plans. However, because these planners are placed within the logistics directorates, the planners are not integrated across all functional areas and are not always focused on working with all planners at the combatant commands to enable planning for the use of contracted support. Some planners, such as those at U.S. Southern Command, coordinate with combatant command planners from the nonlogistics areas and have helped these planners to become aware of operational contract support considerations. Other planners, such as those at U.S. Central Command, focus on integrating operational contract support into the logistics annex and Annex W sections of plans and are not involved in other areas such as communications or intelligence, which are areas that also have relied on contracted support in recent operations. The Secretary of Defense’s January 2011 memorandum calls for better planning for contracted support at the strategic and operational levels. Further, our prior work on DOD’s development of contract support plans recommended that the Chairman of the Joint Chiefs of Staff require personnel to address the potential need for contractor support where appropriate. In addition, the DOD guidance for combatant commander employment of the Joint Contingency Acquisition Support Office calls for the office, when requested, to embed planners from the Joint Contingency Acquisition Support Office within the combatant commands to enable joint operational contract support planning, and to integrate and synchronize operational contract support efforts across DOD and other partners. Without full coordination of the planners from the Joint Contingency Acquisition Support Office with all planners at the combatant commands to incorporate operational contract support into all areas of their plans, the combatant commands risk not fully understanding the extent to which they will be relying on contractors to support combat operations outside of the logistics area and may be unprepared to provide the necessary management and oversight of deployed contractor personnel. Until DOD takes steps to address these gaps, it may be limited in its ability to fully institutionalize operational contract support in planning for current and future contingency operations. 
While the Defense Logistics Agency, at the request of the combatant commands, has provided planning expertise to aid combatant commands in integrating operational contract support into planning, the combatant commands' components have not been provided such expertise to aid them in meeting their operational contract support planning requirements. The Defense Logistics Agency's two planners are assigned to each combatant command itself, not to its components. As a result, the components face difficulties incorporating operational contract support considerations into their planning efforts. After a combatant command plan is developed, it is sent to the combatant command's components for those organizations to develop their own plans to support the combatant command's requirements, including the requirements for the integration of operational contract support. For example, a component may be required to develop its own Annex W to support a combatant command's Annex W within a particular plan. This level of planning is essential since components generally identify and provide the resources necessary to support the combatant command's requirements in order to accomplish the mission of the specific operation. Without this expertise, component planners are limited in their ability to integrate operational contract support into their plans to support combatant command requirements and, in some cases, are unaware of the overall requirements to integrate operational contract support into their planning as directed by the combatant commander. For example, some component officials with whom we met stated they were unfamiliar with the operational contract support planning requirements found in DOD's strategic planning guidance, such as the Guidance for the Employment of the Force. Some component officials also stated that they were not familiar with how to write an Annex W to support the combatant command requirements. There was consensus among the component officials whom we interviewed, as well as several combatant command officials, that the components would benefit from additional training or expertise in planning for operational contract support. The Secretary of Defense's memorandum regarding DOD's implementation of operational contract support requires the Chairman of the Joint Chiefs of Staff to take various steps to improve operational contract support planning. In addition, as described by DOD guidance for the employment of the Joint Contingency Acquisition Support Office, combatant commands are responsible for strategic theater planning, and the joint force commander and component commands are responsible for operational planning. Further, the DOD guidance states that the Joint Contingency Acquisition Support Office's mission is to bring its enabling capability to support planning activities at the strategic and operational levels. However, the Joint Staff has not acted to ensure that both the combatant commands and their components have planning expertise to address operational contract support in planning for operations. As a result, the components may not be able to fully integrate operational contract support into their planning for contingency operations; therefore, they may be unprepared to manage deployed contractor personnel and provide the necessary oversight. The Secretary of Defense's January 2011 memorandum, and several of the ongoing and recently completed efforts we have noted in this report, illustrate the department's recognition of and commitment to integrating operational contract support throughout all aspects of military planning.
While progress has been made at high levels within the department to emphasize an awareness of operational contract support, DOD has not yet fully institutionalized planning for operational contract support throughout the military services, or at the combatant commands or components where much of the operational planning occurs for contingencies. Although the Army has made strides in creating guidance and training on the importance of planning for operational contract support because of the challenges it encountered in Iraq and Afghanistan, the other military services have not taken additional steps to develop and implement comprehensive guidance within each service to ensure the full institutionalization of operational contract support. Moreover, at the combatant commands and components, there is a lack of emphasis on training for all planners, operational contract support planners are not working with all planners, and expertise on operational contract support is not provided for component planners. As a result, these challenges hinder DOD’s ability to achieve the cultural change that we called for 2 years ago—a change that emphasizes an awareness of operational contract support throughout all entities of the department. Without a focus on recent changes in planning guidance and more training on incorporating operational contract support in all areas of operation plans—not just in the logistics area—DOD may face challenges to successfully plan for the use of contractors in critical areas such as intelligence and communications. Similarly, without the operational contract support planners assisting the commands with planning in all areas and without such expertise at the service component commands, DOD risks being unprepared to manage deployed contractor personnel and provide the necessary oversight in the next contingency. To further the integration of operational contract support into all of the services’ planning, we recommend that the Secretary of Defense direct the Secretaries of the Navy and Air Force to provide comprehensive service-wide guidance for the Navy, Marine Corps, and Air Force that describes how each service should integrate operational contract support into its respective organization to include planning for contingency operations. To further the integration of operational contract support into all areas of the operation planning process, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to focus its training about operational contract support, which is currently focused on the logistics planners, on training all planners at the combatant commands and components as necessary. To further enable all planners at the combatant commands to integrate operational contract support into plans across their functional areas, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to identify and implement actions by the combatant commanders needed to ensure that planners from the Joint Contingency Acquisition Support Office supporting the combatant commands expand their focus to work with planners throughout all functional areas. To enable the integration of operational contract support into service component command–level planning efforts, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to work with the military services as necessary to improve the level of expertise in operational contract support for the combatant commands’ components. 
In written comments on a draft of this report, DOD concurred with three of our recommendations and partially concurred with one. DOD’s comments are reprinted in appendix II. DOD also provided technical comments which we have incorporated where appropriate. DOD concurred with our recommendation that the Secretary of Defense direct the Secretaries of the Navy and the Air Force to provide comprehensive service-wide guidance for the Navy, Marine Corps, and Air Force that describes how each service should integrate operational contract support into its respective organization to include planning for contingency operations. DOD stated that the Marine Corps has made significant progress in integrating operational contract support into its warfighting capabilities. DOD noted that the Marine Corps uses Marine Corps Reference Publication 4-11E, “Contingency Contracting,” dated February 12, 2009, which it described as the service-wide guidance on contingency contracting support. According to DOD, this publication contains doctrine for commanders and their staff to plan for and obtain contracting support when deployed. DOD also noted that the Marine Corps has integrated operational contract support with respect to its primary mission, with a focus on support to the Marine Air Ground Task Force. While our report acknowledges the progress made by the services to integrate operational contract support into service training, as well as acknowledging the Marine Corps’ use of Marine Corps Reference Publication 4-11E, we believe that the Navy, Marine Corps, and Air Force should develop comprehensive service-wide guidance to fully institutionalize operational contract support. DOD also agreed with our recommendation that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to focus its training about operational contract support on training all planners at the combatant commands and components as necessary. DOD stated that the Joint Staff is working with the services and the geographic combatant commands to develop an appropriate training plan and gather the necessary resources to conduct operational contract support training. We agree that if fully implemented this action could address this recommendation. DOD partially concurred with our recommendation that the Secretary of Defense direct the Director of the Defense Logistics Agency to identify and implement actions needed to ensure that planners from the Joint Contingency Acquisition Support Office expand their focus to work with planners throughout all functional areas at the combatant commands. DOD agreed with the thrust of our recommendation—the need for efforts to broaden the focus of planners from the Joint Contingency Acquisition Support Office as part of an effort to integrate operational contract support in combatant command planning. DOD stated, however, that the combatant commands—not the Joint Contingency Acquisition Support Office—are responsible for operational contract support planning across “all functional areas.” DOD also stated that, when requested, the Joint Contingency Acquisition Support Office operational contract support planners support combatant commands in meeting this planning requirement. DOD noted that the geographic combatant commanders are responsible for conducting the planning of their respective war plans, not the Defense Logistics Agency. 
Consequently, DOD stated that, to enable all planners at the combatant commands to integrate operational contract support into plans across their functional areas, the Secretary of Defense should direct the Chairman of the Joint Chiefs of Staff to continue efforts to develop operational contract support planning capabilities and encourage the geographic combatant commanders to utilize the Joint Contingency Acquisition Support Office for planning. We recognize that the Joint Contingency Acquisition Support Office supports the combatant commands in their efforts to incorporate operational contract support planning within their respective war plans, and that the individual combatant commanders are ultimately responsible for how they utilize embedded Joint Contingency Acquisition Support Office planners. We acknowledge that the Chairman of the Joint Chiefs of Staff is in a position to encourage the combatant commanders to utilize the Joint Contingency Acquisition Support Office planners. As such, we agree that the Chairman of the Joint Chiefs of Staff would be an appropriate official to implement our recommendation, and we have revised our recommendation accordingly. However, we continue to believe that the Defense Logistics Agency, which is responsible for the Joint Contingency Acquisition Support Office, must ensure that its planners are prepared to assist the combatant commanders in these efforts, when requested. Full implementation of the recommendation would therefore likely necessitate cooperation and coordination by the Defense Logistics Agency, the Joint Contingency Acquisition Support Office, the Chairman of the Joint Chiefs of Staff, and the geographic combatant commands. Finally, in concurring with our recommendation that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to work with the military services as necessary to improve the level of expertise in operational contract support for the combatant commands’ components, DOD stated that the Joint Staff is taking action to integrate operational contract support into the services’ component command-level planning efforts. DOD also stated that the Joint Staff is developing a Joint Professional Military Education course that focuses on the planning and execution of operational contract support. DOD noted that this course is additive to other courses offered by Defense Acquisition University as well as courses offered by the U.S. Army Logistics University. According to DOD, the Joint Staff will continue to work with the other services on operational contract support issues. We agree that if DOD takes these actions, these efforts could address our recommendation. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
To determine the extent to which the Department of Defense (DOD) is integrating planning for operational contract support through efforts of the Office of the Secretary of Defense (OSD), Joint Staff, and military services, we collected and analyzed documentation such as planning guidance and policies related to the integration of operational contract support into DOD's planning for contingency operations. Specifically, we analyzed the implementation of DOD guidance, such as Joint Publication 4-10, the Secretary of Defense's memorandum on Strategic and Operational Planning for Operational Contract Support and Workforce Mix, and related policies and instruction through discussions with officials from OSD and the Joint Staff to understand the various efforts to address the integration of operational contract support throughout the department. We reviewed and analyzed provisions from the Guidance for the Employment of the Force to understand the new requirements DOD has in place for planning for operational contract support in all phases of military operations. We also spoke with officials specifically focused on integrating operational contract support department-wide, such as officials from the Operational Contract Support Functional Capabilities Integration Board, to obtain their perspective on the progress the department has made in integrating operational contract support, as well as to learn of related initiatives. We reviewed DOD guidance on the civilian expeditionary workforce and interviewed officials from OSD, the Joint Staff, the civilian expeditionary workforce program office, military services, and combatant commands to understand the intent and the status of the development of the civilian expeditionary workforce program. To determine how the military services have integrated operational contract support into their operations, we collected and analyzed service-specific documentation related to operational contract support initiatives from each of the services and met with officials from the Army, Navy, Marine Corps, and Air Force. We also held discussions with officials from each of the services to gain an understanding of how each organization has implemented operational contract support. Further, we reviewed related GAO reports on operational contract support, as well as related reports issued by other agencies. To determine the extent to which DOD is integrating planning for operational contract support in operations planning at the combatant commands and their components, we reviewed plans, such as operation and contingency plans, and other efforts, such as specific related guidance, to understand the commands' and components' implementation of requirements to integrate operational contract support. As mentioned above, we interviewed officials from OSD and the Joint Staff in order to assess the extent to which DOD has integrated operational contract support requirements for planning in policies. We then spoke with officials from all of the geographic combatant commands (except Northern Command) and their components regarding their knowledge of the requirements and the extent to which they are planning for operational contract support. Additionally, we spoke to officials to gain knowledge about their current processes for planning for contingency operations.
During our meetings with the combatant commands, we spoke with officials from various directorates, such as strategic plans, logistics, and intelligence, in order to obtain an understanding of the extent to which operational contract support is being planned for in the base plan and the directorates' respective annexes. During these meetings, we also reviewed sample operation plans and annexes to analyze the extent to which DOD has integrated operational contract support considerations in its contingency planning. To determine the level at which the combatant commands and components are integrating operational contract support into plans, we asked combatant command and component officials to provide sample operation plans that included base plans with operational contract support considerations, Annex Ws, and other functional area annexes that also contained operational contract support language. Further, we obtained and analyzed specific policies the combatant commands and service component commands had in place governing the planning for operational contract support in their contingency and operation plans. Our review focused on DOD's planning efforts and thus did not include an examination of how operational contract support is integrated in professional military education or in the execution of current operations. We visited or contacted the following organizations during our review:

Office of the Under Secretary of Defense for Personnel and Readiness, Washington, D.C.
Civilian Expeditionary Workforce Program Office, Washington, D.C.
Office of the Under Secretary of Defense for Policy, Washington, D.C.
Force Development, Washington, D.C.
Office of the Deputy Assistant Secretary of Defense (Program Support), Washington, D.C.
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington, D.C.
Operational Contract Support Functional Capabilities Integration Board
U.S. Africa Command, Stuttgart, Germany, and several of its service components
U.S. Central Command, Tampa, Florida, and several of its service components
U.S. European Command, Stuttgart, Germany, and several of its service components
U.S. Pacific Command, Honolulu, Hawaii, and several of its service components
U.S. Southern Command, Miami, Florida, and several of its service components
Chairman, Joint Chiefs of Staff
Joint Staff J-4 (Logistics) Directorate, Washington, D.C.
Department of the Army
Acquisition, Logistics, and Technology-Integration Office, Hopewell, Virginia
G-43, Strategic Operations, Washington, D.C.
Manpower and Reserve Affairs, Washington, D.C.
Department of the Navy
U.S. Navy Headquarters, Washington, D.C.
Navy Expeditionary Contracting Command, Little Creek, Virginia
U.S. Marine Corps Headquarters, Washington, D.C.
Department of the Air Force
U.S. Air Force Headquarters, Acquisition, Washington, D.C.

We conducted this performance audit from January 2012 to February 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Alissa Czyz, Assistant Director; Marilyn Wasleski, Assistant Director; Hia Quach; Michael Shaughnessy; Yong Song; and Natasha Wilder made key contributions to this report. Richard Powelson and Amie Steele provided assistance in report preparation.
Operational Contract Support: Sustained DOD Leadership Needed to Better Prepare for Future Contingencies. GAO-12-1026T. Washington, D.C.: September 12, 2012. Iraq and Afghanistan: Agencies Are Taking Steps to Improve Data on Contracting but Need to Standardize Reporting. GAO-12-977R. Washington, D.C.: September 12, 2012. Iraq and Afghanistan: State and DOD Should Ensure Interagency Acquisitions Are Effectively Managed and Comply with Fiscal Law. GAO-12-750. Washington, D.C.: August 2, 2012. Contingency Contracting: Agency Actions to Address Recommendations by the Commission on Wartime Contracting in Iraq and Afghanistan. GAO-12-854R. Washington, D.C.: August 1, 2012. Defense Acquisition Workforce: Improved Processes, Guidance, and Planning Needed to Enhance Use of Workforce Funds. GAO-12-747R. Washington, D.C.: June 20, 2012. Operational Contract Support: Management and Oversight Improvements Needed in Afghanistan. GAO-12-290. Washington, D.C.: March 29, 2012. Acquisition Workforce: DOD’s Efforts to Rebuild Capacity Have Shown Some Progress. GAO-12-232T. Washington, D.C.: November 16, 2011. Defense Contract Management Agency: Amid Ongoing Efforts to Rebuild Capacity, Several Factors Present Challenges in Meeting Its Missions. GAO-12-83. Washington, D.C.: November 3, 2011. Defense Acquisition Workforce: Better Identification, Development, and Oversight Needed for Personnel Involved in Acquiring Services. GAO-11-892. Washington, D.C.: September 28, 2011. Contingency Contracting: Improved Planning and Management Oversight Needed to Address Challenges with Closing Contracts. GAO-11-891. Washington, D.C.: September 27, 2011. Iraq Drawdown: Opportunities Exist to Improve Equipment Visibility, Contractor Demobilization, and Clarity of Post-2011 DOD Role. GAO-11-774. Washington, D.C.: September 16, 2011. Warfighter Support: DOD Needs to Improve Its Planning for Using Contractors to Support Future Military Operations. GAO-10-472. Washington, D.C.: March 30, 2010. Contingency Contracting: DOD, State, and USAID Continue to Face Challenges in Tracking Contractor Personnel and Contracts in Iraq and Afghanistan. GAO-10-1. Washington, D.C.: October 1, 2009. Contingency Contract Management: DOD Needs to Develop and Finalize Background Screening and Other Standards for Private Security Contractors. GAO-09-351. Washington, D.C.: July 31, 2009. Contingency Contracting: DOD, State, and USAID Contracts and Contractor Personnel in Iraq and Afghanistan. GAO-09-19. Washington, D.C.: October 1, 2008. Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight. GAO-08-572T. Washington, D.C.: March 11, 2008.
DOD has relied extensively on contractors for operations in Iraq and Afghanistan over the past decade. At the height of Operation Iraqi Freedom, the number of contractors exceeded the number of military personnel, and a similar situation is occurring in Afghanistan. In January 2011, the Secretary of Defense issued a memorandum noting the risk of DOD's level of dependency on contractors and outlined actions to institutionalize changes necessary to influence how the department plans for contracted support in contingency operations. The memorandum also called for leveraging the civilian expeditionary workforce to reduce DOD's reliance on contractors, but this workforce is not yet fully developed. GAO was asked to examine DOD's progress in planning for operational contract support. Our review determined how DOD is integrating operational contract support into its planning through efforts of the (1) OSD, Joint Staff, and military services, and (2) combatant commands and their components. To conduct its work, GAO evaluated DOD operational contract support guidance and documents and met with officials at various DOD offices. The Office of the Secretary of Defense (OSD), the Joint Staff, and the services have taken steps to integrate operational contract support into planning for contingency operations. For example, in April 2011, the Under Secretary of Defense for Policy, working with the Joint Staff, revised the Guidance for the Employment of the Force to require planning for operational contract support in all phases of military operations. Further, in December 2011, the Department of Defense (DOD) revised an instruction and issued corresponding regulations establishing policies and procedures for operational contract support. The Army issued service-specific guidance that describes roles, responsibilities, and requirements to help integrate operational contract support into its planning efforts for contingency operations. However, the Navy, Marine Corps, and Air Force have not issued similar comprehensive guidance for integrating operational contract support throughout each service. Instead, these services have taken actions such as developing training and other individual efforts to familiarize servicemembers with operational contract support. According to service officials, one reason that they have not issued comprehensive guidance similar to the Army's guidance is because the Navy, Marine Corps, and Air Force have not been the lead service for contracting in recent operations. However, these services combined spent over a billion dollars for contracted services in Afghanistan in fiscal year 2011. Without specific, service-wide guidance, the other services' future planning efforts may not reflect the full extent of the use of contract support and the attendant cost and need for oversight. The combatant commands and their components have begun to incorporate operational contract support into their planning for contingencies, but they have not fully integrated operational contract support in all functional areas. We found that the combatant commands and components are not planning for the potential use of contractors in areas where they may be needed beyond logistics such as communications. Recognizing the problem, DOD, in October 2012, issued guidance that calls on functional planners beyond the logistics area to identify major support functions planned for commercial support sourcing. 
GAO also found that officials involved with logistics planning at the commands receive training from the Joint Staff and assistance from embedded operational contract support planners to help integrate operational contract support into logistics planning. However, officials involved in planning for other areas that have used contractors in past operations, such as intelligence, do not receive such training. Further, the embedded operational contract support planners do not focus on areas beyond logistics. Moreover, while the combatant commands have embedded experts to assist with operational contract support planning, the military service components do not have such expertise. Without training for all planners, a broader focus beyond logistics for embedded planners, and expertise offered at the military service components, DOD risks being unprepared to plan and manage deployed contractor personnel and may not be able to provide the necessary oversight during future contingencies. GAO recommends that the Navy, Marine Corps, and Air Force provide guidance on planning for operational contract support; that the Joint Staff provide training for all planners; that the planners broaden their focus to include areas beyond logistics; and that expertise be offered to service components to further integrate operational contract support into plans. DOD generally agreed with the recommendations.
The DTV transition has been in progress for over two decades. With a firm date established in law, all full-power television broadcasters will cease broadcasting their analog signal by February 17, 2009. There are numerous benefits to transitioning to digital-only broadcast signals, such as enabling better quality television picture and sound reception and using the radiofrequency spectrum more efficiently than analog transmission. With traditional analog technology, pictures and sounds are converted into "waveform" electrical signals for transmission through the radiofrequency spectrum, while digital technology converts these pictures and sounds into a stream of digits consisting of zeros and ones for transmission. While the digital signal disperses over distances, a digital receiver can adjust and recreate the missing zeros and ones from the digital transmission, thus making the digital picture and sound near perfect until significant fading occurs, at which point no picture can be seen. To facilitate the digital transition, Congress and FCC temporarily provided each eligible full-power television station (both commercial and noncommercial educational stations, including public stations) with additional spectrum so they could begin broadcasting a digital signal. This companion, or paired, digital channel simulcasts the analog program content in digital format. Assignment of the paired digital channel began in 1997 with the hope that operating this digital channel would help stations learn about broadcasting a digital signal, in addition to raising consumer interest and understanding about the digital transition. The paired digital channel was intended to be used for a limited period until all stations were assigned a final digital channel and were able to broadcast on it. FCC completed the digital channel assignment for most stations in August 2007. A station's final digital channel could be (1) the same channel as its paired digital channel, (2) the same channel that its analog signal uses to broadcast, or (3) an entirely new channel. The Digital Television Transition and Public Safety Act of 2005 addresses the responsibilities of FCC related to the DTV transition. The act directs FCC to require full-power television stations to cease analog broadcasting after February 17, 2009. Stations are responsible for meeting this requirement and being prepared to commence digital broadcasting by this date; stations not ready to commence digital broadcasting risk losing interference protection and operating authority. The capability to provide a digital broadcast signal often involves a large outlay of capital and effort by broadcast stations. Sometimes a new broadcast tower is required, or significant modifications must be made to an existing tower. While a new antenna could cost a station several hundred thousand dollars, an industry association stated that stations could spend as much as $2 million to purchase and install a new broadcast tower, antenna, and equipment. If new towers or antennas are not required, stations may still need to alter or upgrade existing towers. Alterations may include moving the digital antenna from a side-mounted antenna to the top of the tower to increase the coverage of the digital signal. Upgrades to an existing tower may include strengthening a tower before additional antennas can be added.
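The contrast described above between analog and digital reception, gradual degradation on the one hand and a near-perfect picture that disappears abruptly once fading becomes severe on the other, is often called the "cliff effect." The toy Python sketch below illustrates that behavior; the 10 percent error threshold and the analog quality bands are invented for illustration and are not parameters of the ATSC broadcast standard.

    # Toy illustration of the "cliff effect": digital reception stays near
    # perfect until errors exceed what error correction can repair, then
    # fails entirely, while analog reception degrades gradually.
    def digital_picture(bit_error_rate, correction_limit=0.10):
        if bit_error_rate <= correction_limit:
            return "near-perfect picture"  # receiver recreates damaged bits
        return "no picture"                # past the cliff, decoding fails

    def analog_picture(signal_strength):
        if signal_strength > 0.8:
            return "clear picture"
        if signal_strength > 0.3:
            return "snowy but watchable picture"
        return "mostly static"

    for ber in (0.01, 0.09, 0.11):
        print(f"digital at {ber:.0%} bit errors: {digital_picture(ber)}")
    # prints near-perfect picture at 1% and 9% bit errors, no picture at 11%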
For stations building new towers, installing new antennas, or making changes to existing structures, the stations must plan in advance to order the proper equipment and schedule construction crews. In September 2007, FCC adopted an order designed to ensure that all cable subscribers, including those with analog television sets, can view digital broadcasts after the transition. FCC stated that all cable operators must make all broadcast signals viewable to all subscribers and cannot degrade any signal so that a difference in the cable signal and the broadcast signal would be perceptible to a viewer. According to the order, cable operators can meet this requirement in one of two ways, either (1) carry the signals of commercial and noncommercial must-carry stations in analog format to all analog cable subscribers or (2) for all-digital systems, carry those signals in a digital-only format, provided all subscribers with analog television sets have the proper equipment to view the digital signals. This requirement ensures that subscribers will have the ability to view a digital signal or an analog signal, depending on which best suits their equipment. While this ruling did not address satellite companies, FCC is considering how to apply the content and degradation requirements to satellite carriage of digital broadcast signals, and the commission expects to complete this ruling before the transition. Satellite companies already transmit digital signals to subscribers by digitizing broadcasters’ analog signals. Most broadcasters have made significant progress in preparing their stations for the transition to digital, with 91 percent of survey respondents reporting that they were already transmitting a digital signal. Of the broadcasters already transmitting a digital signal and responding to our survey, 68 percent indicated that they are broadcasting their digital signal at full strength. In addition, 68 percent of survey respondents are broadcasting their digital signal on the channel from which they will be broadcasting after the transition. A small number of stations responding to our survey (9 percent) have yet to begin broadcasting a digital signal, but almost all of those stations expect to be broadcasting digitally by February 17, 2009. Our survey of broadcast television stations found that almost all stations (91 percent of respondents) are transmitting a digital signal. Of those stations transmitting a digital signal, the operating status of these survey respondents, as of February 8, 2008, is shown in figure 1. As figure 2 shows, 68 percent of stations that responded to our survey said that their digital channel will not move after the transition. However, about one third of stations are currently operating on a temporary digital channel and will move to another channel to complete their transition to digital. Twenty-three percent of survey respondents said they will abandon their current digital channel to begin broadcasting digitally at the channel location currently occupied by their analog channel. Approximately 9 percent of survey respondents will have to move to a completely new channel once the transition is complete. Our survey of broadcast stations found that 97 stations, or 9 percent, are not broadcasting a digital signal. On the basis of the information provided by survey respondents, these stations serve a smaller number of households, on average, compared with those stations broadcasting a digital signal. 
In particular, survey respondents that are not broadcasting digitally transmit their analog signal to approximately 350,000 households, on average, compared with the average of nearly 775,000 households from stations responding to our survey that are already broadcasting digitally. Almost all of these stations that are not yet broadcasting digitally noted that they plan to have their digital signal operational by February 17, 2009. Three stations responded that they were not planning to broadcast a digital signal by February 17, 2009. According to FCC, stations that are not currently transmitting a digital signal either (1) were granted a license to operate a digital signal along with their analog signal but have yet to begin broadcasting digitally or (2) were not given a digital license and plan to turn off their analog signal at the same time that they turn on their digital signal—known as “flash cutting.” According to our survey, 5 percent (61 stations) of the stations indicated that they plan to flash cut to a digital-only broadcast. According to FCC, flash cutting may present challenges, since it will involve stations’ ending their analog television operations and beginning their digital television operations on their current analog channel or, in some cases, will require that a station change to a new channel to be fully operational. Of those stations responding to our survey that plan to flash cut, only 21 percent had begun constructing final digital facilities at the time of our survey. Furthermore, 64 percent of the flash cutters responding to our survey noted that they need to order equipment to complete their digital facilities. Before the transition to digital can be finalized, some stations still have to resolve technical, coordination, or other issues. According to stations responding to our survey, a major technical task for over 13 percent of the stations is the relocation of their digital or analog antenna. Other stations responding to our survey indicated that they have coordination issues to resolve prior to completing the transition, such as the U.S. government reaching agreements with the Canadian and Mexican governments and coordinating with cable providers and satellite companies. Our survey also found that other issues, such as the construction of broadcast towers or financial constraints, have affected some stations’ ability to finalize their digital facilities. Broadcast stations and industry representatives have stated that technical issues might affect television stations’ ability to finalize digital operations. Technical issues that some stations need to address include (1) antenna and equipment replacement or relocation and (2) channel relocation. One of the major tasks that many television stations have to complete to build their digital facilities is to install a digital antenna on the top of the broadcast tower, where the analog antenna resides. According to a broadcast industry representative, many stations need to have their digital antenna at the top of the tower to fully replicate the area that their analog service covers. 
The broadcast industry representative stated that stations have two options in placing their digital antenna at the top of the broadcast tower: (1) move the digital antenna to the top now, and buy a new side-mounted analog antenna, which would ensure that the analog signal continues until it is switched off and that the digital signal would be at full power, or (2) keep the analog antenna at the top of the tower until it is turned off on February 17, 2009, and then install the digital antenna at the top of the tower. The industry representative stated that both options, however, present problems for broadcast stations. For the first option, stations may have to purchase a new analog antenna, which will only be used for a few months. Also, as a result of the analog antenna being side mounted, stations' analog broadcast coverage area would be reduced by 2 to 9 percent of the viewing market. Stations agreed that they might have to reduce their analog service prior to the transition date. For example, the owner of a station in Minnesota commented that it may not be possible to complete the construction of its digital facilities without significantly disrupting its analog operations as well as its digital operations. The owner said the power of its analog signal would have to be significantly reduced before February 17, 2009, which would affect a large number of its viewers. Several survey respondents that were already broadcasting a digital signal reported that they needed to take additional steps to complete their digital facilities. According to our survey results, 151 stations (13 percent) indicated that they needed to relocate their digital or analog antenna on a current tower, reinforce an existing tower to allow for additional antennas, or coordinate antenna placement on another tower. Figure 3 shows the number of stations that need to complete these various steps, with some stations reporting that they have to complete multiple steps. FCC recognizes that there are many technical issues associated with antenna and equipment replacement or relocation that might force stations to terminate analog signals prior to the transition date. For example, FCC noted that there are 49 stations that have documented problems with side-mounted analog antennas. These stations will have to relocate their analog antenna to another location on their tower and operate with reduced analog facilities as they complete the transition. Other stations may have a tower at capacity, preventing the installation of an additional antenna on the tower. According to FCC, these stations will have to terminate analog operations prior to the end of the transition to mount their digital antenna. In addition, stations with an antenna that is located on a shared tower may need to reduce or terminate analog signals as the stations coordinate the configuration of their final digital facilities. Still other stations have equipment currently in use with their analog operations that they plan to use with their digital operations. Such a situation will force stations to terminate their analog signals prior to the transition so that the equipment can be reconfigured for the final digital facilities. Although FCC established February 17, 2009, as the new construction deadline for stations facing unique technical challenges, FCC will also consider stations' requests to operate their digital facilities at less than full power until August 18, 2009—provided the stations continue to serve at least 85 percent of their viewers.
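The side-mounting trade-off discussed above reduces to simple percentage arithmetic. The short Python sketch below applies the reported 2 to 9 percent coverage-loss range to the roughly 775,000-household average reported earlier by survey respondents already broadcasting digitally; that average is used purely as an illustrative market size, since the actual loss would depend on each station's own market.

    # Apply the reported 2-9 percent analog coverage loss from side-mounting
    # an antenna to an illustrative market of 775,000 households.
    avg_households = 775_000
    for reduction in (0.02, 0.09):
        lost = int(avg_households * reduction)
        print(f"{reduction:.0%} coverage loss: about {lost:,} households")
    # 2% coverage loss: about 15,500 households
    # 9% coverage loss: about 69,750 households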
According to an antenna manufacturer with whom we spoke, stations will need to place orders for their antenna by the second quarter of 2008 for the stations to be prepared for the February 17, 2009, deadline. According to this manufacturer, the amount of time needed to design, order, and install an antenna can range from 6 weeks to 9 months, depending on its complexity. This manufacturer said a typical antenna serving one station requires about 4 or 5 months, from design to installation. In its third periodic review and order on the DTV transition, FCC noted that absent extraordinary circumstances, it would no longer consider a lack of equipment as a valid reason for granting an extension of time to construct facilities. FCC also said that stations demonstrating that they placed equipment orders well in advance will be considered eligible for an extension on these grounds. Antenna work and replacement could be hampered by weather conditions for towers located in northern climates and on higher elevations. According to an antenna manufacturer with whom we spoke, although antenna work can be done during the winter months, it can be much more difficult, take longer, and entail additional costs. According to this manufacturer, winds over 10 miles an hour can be problematic for installing equipment. Installation crews need several days of limited wind speed to complete antenna work. In addition, ice and snow can present safety issues when installing antennas on towers. FCC recognizes that for some stations, work cannot be completed because of weather conditions, and that those stations facing legitimate delays will be considered for construction extensions. For example, if a station has a side-mounted digital antenna and can demonstrate that weather considerations would force it to reduce or terminate its analog signal well before the transition date to complete building its final facility, it might qualify for an early reduction or termination of analog service prior to February 17, 2009. FCC states that in such situations, it could be preferable to accept a limited loss of analog service for a short time prior to the transition date to ensure the station is able to complete its transition to digital. FCC notes that the stations facing the most significant amount of construction to finalize their facilities are those that are moving to a different channel. According to FCC, 643 stations will move to a different channel to complete the transition. FCC states that 514 of these stations will relocate their current digital channel to their analog channel. Stations might prefer to relocate their digital channel to the analog channel because it is the channel that viewers recognize. For example, one station we visited has its digital signal on channel 16 but plans to relocate the digital signal to channel 9, which is the station's current analog channel and the channel number people recognize for that station. In addition, stations currently located on channels 52 through 69 need to relocate their channel because these channel frequencies will be used for public safety and new wireless services after the transition. According to FCC, 129 stations will move to a completely new channel once the transition is complete.
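The lead times cited by the antenna manufacturer above also imply a simple back-calculation of ordering deadlines. The Python sketch below subtracts the 6-week-to-9-month range from the February 17, 2009, transition date; a month is approximated as 30 days, an assumption made only to keep the illustration simple.

    # Work backward from the transition date to the latest safe order date.
    # A month is approximated as 30 days for illustration only.
    from datetime import date, timedelta

    TRANSITION = date(2009, 2, 17)

    def latest_order_date(lead_time):
        return TRANSITION - lead_time

    for label, lead in [("simple antenna (6 weeks)", timedelta(weeks=6)),
                        ("typical antenna (5 months)", timedelta(days=5 * 30)),
                        ("complex antenna (9 months)", timedelta(days=9 * 30))]:
        print(f"{label}: order no later than {latest_order_date(lead)}")

The 9-month case works out to late May 2008, which is consistent with the manufacturer's advice that orders be placed by the second quarter of 2008.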
Moves to a completely new channel entail additional challenges for these stations because they may need to address such issues as (1) whether they can use any of their current analog or digital equipment, (2) whether their viewers will be affected during construction of their digital facilities, and (3) whether they will have to coordinate with other stations because the channel they are moving to will be occupied by another station until the transition date. Because of the issues associated with channel relocation, FCC is allowing stations moving to a different digital channel to cease operations on their pretransition digital channels and begin operating digitally on their new channels before the transition date. Stations can operate on their new channel before the transition date provided (1) the early transitioning stations will not cause impermissible interference to another station and (2) the early transitioning stations continue to serve their existing viewers for the remainder of the transition, and commence their full-power, authorized posttransition operations upon expiration of the February 17, 2009, transition deadline. In addition, stations that are moving to a different digital channel for posttransition operations may temporarily remain on their pretransition channel while they complete construction of their final digital facilities. Stations can remain on their pretransition channel provided (1) they build facilities serving at least the same population that receives their current analog television and digital services so that over-the-air viewers will not lose service and (2) they do not cause impermissible interference to other stations or prevent other stations from making their transition. Coordination issues might affect television broadcast stations' ability to finalize their digital operations, according to stations that responded to our survey and our discussions with broadcast stations and industry representatives. Coordination issues that some stations face include (1) U.S. government coordination with Canadian or Mexican governments, (2) coordination with cable providers and satellite companies, and (3) coordination with other broadcast stations. For some stations located along the northern and southern borders of the United States, agreements must be reached with the Canadian and Mexican governments regarding the coverage of the stations' digital signals that transmit across the borders. According to FCC officials, there are 139 and 43 U.S. stations that operate along the Canadian and Mexican borders, respectively. FCC officials stated that agreements are in place for most of these stations, and FCC expects agreements to be reached for all of the remaining stations. In responding to our survey, the stations that require coordination with a foreign government noted that different levels of coordination had taken place, as illustrated in figure 4. However, in responding to our survey, most stations with a signal that penetrates into Canada or Mexico were not concerned about analog interference. In particular, 81 percent of respondents operating along the Mexican border were not concerned about interference, while 86 percent along the Canadian border were not concerned about such interference.
In responding to our survey question regarding coordination with the Mexican and Canadian governments, one station commented that the lack of concurrence from the Mexican government has created significant concern about the station’s ability to transition to its final digital operations, and that an agreement is needed as soon as possible. Another survey respondent stated that the Canadian government’s objection to its final channel assignment came very late in the process and will seriously jeopardize its ability to build its digital facilities by the transition date. Another station that responded to our survey expressed concern about whether Canadian coordination would be completed by the 2009 deadline. In its third periodic review and order, FCC stated that it will consider extensions of construction deadlines for stations encountering delays in cases where resolution of issues related to international coordination is truly beyond the control of the station. FCC also stated that if agreements cannot be reached, stations might be required to construct facilities with a smaller area of signal coverage. At the time of this report, there was a set of companion bills in the Senate and House known as the DTV Border Fix Act, which, if enacted, would authorize FCC to allow full-power television stations serving communities located within 50 miles of the U.S.-Mexican border to continue operating an analog signal until February 17, 2014. Among other requirements, stations seeking an extension would have to satisfy FCC that continued analog operation would be in the public interest. As part of finalizing the transition to DTV, cable providers and satellite companies will need to make sure that their facilities receive digital signals from television stations when the analog signals terminate. In its third periodic review and order, FCC made no rules concerning the coordination among broadcast stations, cable providers, and satellite companies. However, FCC reiterated that broadcasters must work with cable providers and satellite companies to ensure a successful transition. Many broadcast stations are currently coordinating with cable providers and satellite companies. As shown in figure 5, 55 percent of the stations responding to our survey indicated that they are currently coordinating with cable providers, and 50 percent indicated that they are currently coordinating with satellite companies. In addition, nearly 35 percent of stations responding to our survey indicated that they plan to coordinate with cable providers, and 36 percent indicated that they plan to coordinate with satellite companies. One percent of stations responding to our survey indicated that they were neither coordinating with nor planning to coordinate with cable providers, and 5 percent indicated that they were neither coordinating with nor planning to coordinate with satellite companies. With some stations moving to a new channel or changing the coverage area of their broadcast signal, cable providers told us there is uncertainty about whether their cable head-ends will continue to receive the broadcast signals. For example, if a broadcaster’s digital coverage area differs from its analog coverage area, there is a possibility the cable head-end will no longer be able to receive that signal.
Approximately 32 percent of survey respondents that are carried by cable, satellite, or both indicated that they are concerned their digital signal may not reach one or more cable providers’ or satellite companies’ facilities once the transition has occurred. One cable provider told us this issue could be particularly problematic in smaller markets where head-ends rely on over-the-air broadcasts to pull in the broadcast signals. A cable provider and satellite company also told us that they need broadcast stations to inform them of their coverage areas, or signal contours, as soon as possible to help them identify areas where the digital signal may not reach cable head-ends or satellite receiver facilities. This information is important because even when stations do have their digital facilities fully operational, they may not broadcast their digital signal to the exact coverage area that their analog signal covered. As shown in figure 6, the digital signal coverage of a station can differ from its analog signal coverage. Officials with one cable provider with whom we spoke indicated that, given potential changes in signal coverage areas, the provider might need to reposition its antennas or otherwise update its head-ends so that it can continue to receive the broadcast signals. The officials went on to say that since their company has hundreds of head-ends, it could be time-consuming to update them. Officials of a satellite company told us that any change in the signal coverage area could seriously affect the company’s ability to retransmit broadcast signals and might require it to build new facilities in the altered coverage area. Information from our survey indicates that some stations will have a different digital signal coverage area compared with their analog signal coverage area. Of our survey respondents, 24 percent reported that their digital signal coverage area will vary from their analog coverage area. While some of these stations’ digital coverage areas could be larger than their analog coverage areas, some stations’ digital coverage areas will be smaller, at least in places, than their analog coverage areas. This is evident from the 11 percent of stations responding to our survey that reported anticipating a loss of over-the-air viewers after the transition to digital. On average, those stations anticipating decreased coverage areas expect to lose 23,000 viewers. According to our survey, 101 stations (9 percent) that have to relocate their current digital channel are moving to another channel that might be occupied by another station. Of these 101 stations, 13 survey respondents indicated that they are working with the other station to resolve coordination issues. According to a broadcast industry representative, the movement of channels will require television stations to closely coordinate with each other to minimize interference issues. The industry representative stated that the movement of channels could cause interference for neighboring channels if stations move too early or if the neighboring stations move too late. The industry representative further stated that compounding this challenge is the fact that analog signals will be turned off on February 17, 2009. The construction of broadcast towers or financial constraints might affect some stations during their transition. Stations that must change their DTV tower locations might face considerable challenges, especially if they must construct a new tower.
Nineteen stations responding to our survey indicated that they needed to construct a broadcast tower to build their digital facilities. In addition, 62 stations responding to our survey indicated that they needed to reinforce an existing broadcast tower to finalize their facilities. A major television broadcast network stated that equipment manufacturing constraints and the limited number of tower crews and other key equipment installation resources available between now and the transition date will impede stations’ movement to final digital channels by February 17, 2009. A representative with a major tower construction company told us that the company is already booked 6 months into 2008, and that other construction crews also have full schedules. The company representative stated that he believes a significant number of stations will wait until early 2008 to begin making inquiries about work needed on broadcast towers. According to FCC, stations needing new tower facilities should consider whether any existing towers can be used or whether a new tower must be constructed. FCC states that because of the lead times involved in purchasing or leasing land with the appropriate federal government clearances, local and state zoning requirements, and varying timelines for designing and constructing the new tower, stations must begin planning as soon as possible to have all of the work completed by the deadline. As with antenna work, winter weather could hamper tower construction in northern climates and at higher elevations. Television stations commented that working on towers in the winter months can be problematic, if not impossible. For example, a major broadcast network commented that many station transmitting sites are not readily accessible during the winter, especially to cranes and other heavy equipment necessary for tower rigging and equipment installation. In fact, the broadcaster commented that snow and ice make one of its stations accessible only by a special vehicle from October until March. Another station commented that it has been difficult to perform heavy construction at a remote and high-altitude transmitter site, and that the short weather window, difficult access, and complex work make the transition date hard to meet. A representative of a major tower construction company stated that weather is always a factor when determining the amount of time a project takes. The company representative stated that subzero conditions and ice are not conducive to tower work, and, although the work can be done, it is very dangerous and takes much longer to complete. Stations encountering financial constraints may also have difficulty completing the digital transition. According to our survey, 38 stations noted that financial constraints had been an issue during the process of constructing their final digital facilities. In addition, 39 stations that are broadcasting a digital signal but have yet to begin building their final digital facilities indicated that financial constraints were a reason they had not yet started construction. Furthermore, another 33 stations, or 42 percent of stations not yet broadcasting a digital signal, indicated that financial constraints contributed to delays in building their final digital facilities. One station commented that the digital transition has been a financial drain on small-market television stations.
This station noted that the cost for the equipment is the same whether the station serves a small or large market, but large-market stations have a much higher financial base to pay for the equipment. In its third periodic review, FCC acknowledged that some stations face financial obstacles to completing construction, but stated that it is imperative that stations devise and implement a plan to complete their final digital facilities. FCC established criteria for extending construction deadlines for final digital facilities on the grounds of financial hardship. To obtain an extension on these grounds, FCC requires a station to demonstrate that it (1) is the subject of a bankruptcy or receivership proceeding or (2) has experienced a negative cash flow for the past 3 years. FCC stated that while adopting the tighter financial hardship standard, it recognizes that some stations, including some noncommercial educational stations and some smaller stations, face extraordinary financial circumstances that do not fit within the new financial hardship criteria but may warrant an extension of time to finalize construction. Two stations that responded to our survey stated that they would qualify under FCC’s new criteria of financial hardship. One station commented that it was in the process of filing for bankruptcy after 3 years of negative cash flow. Another station commented that it would qualify for financial hardship because of costs associated with locating its analog antenna and operating with a digital-only signal for a period of time, which resulted in a 30 percent drop in viewers and, consequently, a negative cash flow. FCC’s actions have provided guidance to broadcast stations throughout the transition process. A recent FCC ruling addressed many issues important to broadcasters and provided increased flexibility for broadcasters in completing DTV transition tasks. At the time we completed our survey, however, some broadcasters were waiting for FCC decisions before they could finalize their transition plans. For many years, FCC has orchestrated the DTV transition, using its rulemaking process to guide broadcast stations through important milestones. FCC determined that establishing a digital standard for broadcasters was critical to beginning the transition to digital broadcasting; the establishment of a digital standard was completed with the adoption of an order in 1996. Since then, FCC has taken additional actions to continue moving broadcasters toward the digital transition, as shown in table 1. For example, FCC assigned paired digital channels for stations that would be broadcasting both a digital and an analog signal prior to the digital transition. These paired digital channels were important to allow broadcasters time to gain experience in operating a digital service, stimulate interest in the DTV transition, and encourage consumers to begin purchasing digital equipment. In its December 2007 third periodic review and order, FCC finalized a number of actions to facilitate broadcasters’ completion of the DTV transition.
For example, the third periodic review and order addressed, among other things, (1) time frames for television stations to complete construction of their digital facilities; (2) information all full-power television stations must provide to FCC by February 19, 2008, detailing each station’s current transition status, any additional steps needed to commence its full digital operations, and its timeline to meet the February 17, 2009, transition deadline; (3) when and for how long stations will be permitted to reduce or cease service on their analog or paired digital channel; and (4) guidelines for rapid approval of minor expansions of authorized service areas for stations that are moving their digital channel for posttransition operations, to allow these stations additional flexibility to use their existing analog antenna. In our survey of broadcast stations, 128 respondents indicated they were “awaiting action from FCC” to complete building their final digital facilities. When we followed up with these stations after they had responded to our survey, our analysis suggested that many of the actions stations were awaiting had been addressed in FCC’s third periodic review and order. However, at that time, a few broadcasters still had issues that required FCC decisions—such as approval of a construction permit, petitions to alter their signal power, or FCC reconsideration of their final digital channel assignment. According to FCC, approximately 100 petitions for reconsideration of final DTV channel assignments were filed by broadcasters. FCC said these petitions required engineering analysis to determine their feasibility and their impact on other stations. FCC told us that the analysis had been completed, and it released its decisions regarding the petitions in early March 2008. FCC noted that it believes broadcasters have everything they need from the commission to proceed with construction of their final digital facilities. We provided a draft of this report to FCC for its review and comment. In response, FCC noted that since our survey results were based on information received from broadcast stations between December 2007 and February 2008, the percentages we cite do not necessarily match information FCC would derive from its records. FCC also provided technical comments that we incorporated in this report where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to interested congressional committees and the Chairman of the Federal Communications Commission. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.
The objectives of this report are to provide information on technical issues surrounding the digital television (DTV) transition, specifically, (1) the status of broadcast stations in transitioning to digital, (2) the extent to which broadcast stations are encountering issues during the DTV transition and how these issues affect the broadcast community, and (3) the actions the Federal Communications Commission (FCC) has taken to guide broadcasters in the DTV transition and how those actions have affected the broadcast community. To obtain information on the status of the television broadcast industry in transitioning to digital and the issues broadcasters were encountering, we developed and administered a Web-based survey. Our intent was to survey all full-power commercial and noncommercial broadcast television stations in the 50 states and the District of Columbia. We asked the broadcast stations questions related to their (1) digital facilities and plans, (2) issues affecting the digital conversion, (3) antenna locations, (4) DTV information advertisements and public service announcements, (5) digital signal contour and coordination with cable and satellite, (6) relocation of digital channels, (7) digital and analog signal coverage, (8) international issues, and (9) translator stations. The initial sample frame for the study was all FCC-licensed full-power television stations as of June 2007—a total of 1,747 stations. Since FCC did not maintain e-mail addresses for the licensed broadcasters at that time, we needed to obtain contact information on the broadcasters through other sources. We requested and received contact information from the following sources: the Association of Public Television Stations, ABC, CBS, NBC, CW, FOX, and Telemundo. In total, we received contact information for 1,058 stations. For the remaining 625 stations, we spent 1 week compiling a list of contact information. Of the 1,747 broadcasters on FCC’s list, we surveyed the 1,682 stations located in the 50 states and the District of Columbia for which we could obtain contact information. In several instances, we identified stations that were not on FCC’s list of full-power broadcast stations or for which we did not initially have contact information, and we subsequently sent the survey to these stations. From September 27, 2007, through October 16, 2007, we conducted a series of pretests with general managers of broadcast television stations to help further refine our questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. Upon completion of the pretests and development of the final survey questions and format, we sent an announcement of the upcoming survey to 1,682 broadcast television stations on November 30, 2007. These stations were notified that the survey was available online on December 7, 2007. We sent follow-up e-mail messages to nonrespondents on December 14, 2007; December 21, 2007; January 8, 2008; and January 9, 2008, and then attempted to contact by telephone those stations that had not completed the survey. The survey was available online until February 8, 2008. Of the 1,682 broadcast stations that were asked to complete the survey, we received 1,122 completed surveys, for an overall response rate of 66.7 percent. Of those completed questionnaires, 72 percent were from commercial stations and 28 percent were from noncommercial stations.
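The arithmetic behind these response figures is simple to verify. The following is a minimal sketch (Python) that recomputes the reported rates from the counts above; the variable names are ours, and the commercial/noncommercial counts are approximations derived from the reported percentages rather than figures taken from the survey records.

```python
# Recompute the survey response statistics reported above.
surveyed = 1682    # stations asked to complete the survey
completed = 1122   # completed questionnaires received

response_rate = completed / surveyed
print(f"Overall response rate: {response_rate:.1%}")  # 66.7%

# Approximate split of completed questionnaires by station type,
# derived from the reported 72/28 percent breakdown (illustrative only).
commercial = round(completed * 0.72)    # about 808 stations
noncommercial = completed - commercial  # about 314 stations
print(f"Commercial: {commercial}; noncommercial: {noncommercial}")
```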
The practical difficulties of conducting surveys may introduce errors commonly referred to as “nonsampling errors.” For example, questions may be misinterpreted, and respondents’ answers may differ from those of broadcast stations that did not respond to the survey. To minimize nonsampling errors, we pretested the survey and conducted numerous follow-up contacts with nonrespondents. In addition, steps were taken during data analysis to further minimize errors, such as performing computer analyses to identify inconsistencies and having an independent reviewer check the data analysis. We also conducted a nonresponse bias analysis, comparing our survey estimates with estimates obtained from FCC records, and found small but statistically significant differences. Because of the differences identified through the bias analysis, we decided to provide estimates only for respondents and not to project our results to the population. The survey results were reliable enough for our purpose because the bias does not appear to be more than a few percentage points. A difference of 5 percentage points in any of our estimates would not affect our findings. To view the survey and a more complete tabulation of the results, go to http://www.gao.gov/cgi-bin/getrpt?GAO-08-528SP. Furthermore, we reviewed relevant law, public comments, proposed rules, and other industry and private sector documents. We interviewed officials with FCC as well as a wide variety of industry and other private sector stakeholders with an interest in the DTV transition, such as commercial and noncommercial broadcasters; antenna and equipment manufacturers; tower construction companies; and industry advocacy groups, such as the National Association of Broadcasters and the Association for Maximum Service Television. We conducted this performance audit from April 2007 through April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, other key contributors to this report were Sally Moino, Assistant Director; Andy Clinton; Colin Fallon; Simon Galed; Eric Hudson; Bert Japikse; Aaron Kaminsky; and Andrew Stavisky. Digital Television Transition: Increased Federal Planning and Risk Management Could Further Facilitate the DTV Transition. GAO-08-43. Washington, D.C.: November 19, 2007. Digital Television Transition: Preliminary Information on Progress of the DTV Transition. GAO-08-191T. Washington, D.C.: October 17, 2007. Digital Television Transition: Preliminary Information on Initial Consumer Education Efforts. GAO-07-1248T. Washington, D.C.: September 19, 2007. Digital Television Transition: Issues Related to an Information Campaign Regarding the Transition. GAO-05-940R. Washington, D.C.: September 6, 2005. Digital Television Transition: Questions on Administrative Costs of an Equipment Subsidy Program. GAO-05-837R. Washington, D.C.: June 20, 2005. Digital Broadcast Television Transition: Several Challenges Could Arise in Administering a Subsidy Program for DTV Equipment. GAO-05-623T. Washington, D.C.: May 26, 2005. Digital Broadcast Television Transition: Estimated Cost of Supporting Set-Top Boxes to Help Advance the DTV Transition. GAO-05-258T.
Washington, D.C.: February 17, 2005. Telecommunications: German DTV Transition Differs from U.S. Transition in Many Respects, but Certain Key Challenges Are Similar. GAO-04-926T. Washington, D.C.: July 21, 2004. Telecommunications: Additional Federal Efforts Could Help Advance Digital Television Transition. GAO-03-7. Washington, D.C.: November 8, 2002. Telecommunications: Many Broadcasters Will Not Meet May 2002 Digital Television Deadline. GAO-02-466. Washington, D.C.: April 23, 2002.
The Digital Television Transition and Public Safety Act of 2005 requires all full-power television stations in the United States to cease analog broadcasting by February 17, 2009, a changeover known as the digital television (DTV) transition. Prior to the transition date, the television broadcast industry must take a series of actions to ensure that over-the-air programming will continue to be available to television households once the transition is complete. For example, broadcast stations must obtain, install, and test the equipment needed to finalize their digital facilities, and some stations will need to coordinate the movement of channels on the day the analog signal ceases transmission. This requested report examines (1) the status of broadcast stations in transitioning to digital, (2) the extent to which broadcast stations are encountering issues, and (3) the actions the Federal Communications Commission (FCC) has taken to guide broadcasters in the digital transition. To address these issues, GAO conducted a Web-based survey of full-power television broadcast stations. GAO surveyed 1,682 stations and obtained completed questionnaires from 1,122 stations, for a response rate of 66.7 percent. GAO also reviewed legal, agency, and industry documents and interviewed public, private, and other stakeholders. We provided FCC with a draft of this report, and FCC provided technical comments that we incorporated where appropriate. Television broadcast stations have made substantial progress in transitioning to digital television, with the vast majority already transmitting a digital signal. Approximately 91 percent of the 1,122 full-power stations responding to our survey are currently transmitting a digital signal, with approximately 68 percent of survey respondents transmitting their digital signal at full strength and 68 percent transmitting their digital signal on the channel from which they will broadcast after the transition date. However, some stations still need to complete construction of their final digital facilities, and others need to relocate their digital channel to complete the transition. For example, 23 percent of survey respondents indicated they will be moving their digital channel to their analog channel. In addition, other stations need to move to a completely new channel. While almost all full-power stations are already broadcasting a digital signal, 9 percent of stations responding to our survey indicated that they are not currently broadcasting digitally. Almost all of these stations, however, indicated that they plan to have their digital signal operational by February 17, 2009. Some stations, including those already broadcasting a digital signal, need to resolve various technical, coordination, or other issues before their transition to digital is complete. For example, over 13 percent of stations responding to our survey reported that they need to install or relocate their digital or analog antennas. Some of these stations still need to order equipment, such as antennas, to build their final digital facilities. Furthermore, stations may have coordination issues to address to complete their final digital facilities. In particular, some stations are awaiting agreements with the Canadian and Mexican governments regarding their signals crossing the borders of these respective countries before they can complete their digital facilities.
Stations also need to coordinate with cable providers and satellite companies to ensure that cable and satellite facilities receive digital signals when the analog signals are turned off. Lastly, the construction of broadcast towers or financial constraints might affect some stations during their transition. FCC’s actions have provided guidance to broadcasters throughout the digital transition, but at the time we completed our survey, some broadcasters were awaiting FCC decisions. Since 1987, FCC has guided broadcasters with a series of rulemakings and orders, including assigning digital broadcast channels and developing timelines for the construction of digital facilities. Furthermore, FCC has conducted periodic reviews of the transition and released a ruling on its third periodic review on December 31, 2007, in which it addressed a number of important DTV issues. However, some stations responding to our survey indicated that they needed decisions from FCC, such as approval of a construction permit or of changes to their final digital channel. FCC said that it would address remaining issues quickly, and with the release of an order in March 2008, FCC stated that it believes broadcasters have everything they need from the commission to proceed with construction of their final digital facilities.
Title XIX of the Social Security Act establishes Medicaid as a joint federal-state program to finance health care for certain low-income, aged, or disabled individuals. Medicaid is an open-ended entitlement program, under which the federal government is obligated to pay its share of expenditures for covered services provided to eligible individuals under each state’s federally approved Medicaid plan. States operate their Medicaid programs by paying qualified health care providers for a range of covered services provided to eligible beneficiaries and then seeking reimbursement for the federal share of those payments. CMS has an important role in ensuring that states comply with certain statutory Medicaid payment principles when claiming federal reimbursements for payments made to institutional and other providers who serve Medicaid beneficiaries. For example, Medicaid payments by law must be “consistent with efficiency, economy, and quality care,” and states must share in Medicaid costs in proportions established according to a statutory formula. Within broad federal requirements, each state administers and operates its Medicaid program in accordance with a state Medicaid plan, which must be approved by CMS. A state Medicaid plan details the populations a state’s program serves, the services the program covers (such as physicians’ services, nursing home care, and inpatient hospital care), and the rates of and methods for calculating payments to providers. State Medicaid plans generally do not detail the specific arrangements a state uses to finance the nonfederal share of program spending. Title XIX of the Social Security Act allows states to derive up to 60 percent of the nonfederal share from local governments, as long as the state itself contributes at least 40 percent. Over the last several years, CMS has taken a number of steps to help ensure the fiscal integrity of the Medicaid program. These include making internal organizational changes that centralize the review of states’ Medicaid financing arrangements and hiring additional staff to review each state’s Medicaid financing. The agency also published in May 2007 a final rule related to Medicaid payment and financing. This rule would, among other things, limit payments to government providers to their cost of providing Medicaid services. Congress has imposed a moratorium on this rule until May 25, 2008. From 1994 through 2005, we reported numerous times on financing arrangements that create the illusion of a valid state Medicaid expenditure to a health care provider. Payments under these arrangements enabled states to claim federal matching funds regardless of whether the program services paid for had actually been provided. As various schemes came to light, Congress and CMS took several actions from 1987 through 2002, through law and regulation, to curtail them (see table 1). Many of these arrangements involved payments between states and government-owned or government-operated providers, such as local government-operated nursing homes. They also involved supplemental payments—payments states made to these providers separate from and in addition to those made at a state’s standard Medicaid payment rate. The supplemental payments connected with these arrangements were illusory, however, because states required these government providers to return part or all of the payments to the states.
Because government entities were involved, all or a portion of the supplemental payments could be returned to the state through an IGT. Financing arrangements involving illusory payments to Medicaid providers have significant fiscal implications for the federal government and states. The exact amount of additional federal Medicaid funds generated through these arrangements is not known, but was in the billions of dollars. For example, a 2001 regulation to curtail states’ misuse of the UPL for certain provider payments was estimated to have saved the federal government approximately $17 billion from fiscal year 2002 through fiscal year 2006. In 2003, we designated Medicaid to be a program at high risk of mismanagement, waste, and abuse, in part because of concerns about states’ use of inappropriate financing arrangements. States’ use of these creative financing mechanisms undermined the federal-state Medicaid partnership as well as the program’s fiscal integrity in at least three ways. First, inappropriate state financing arrangements effectively increased the federal matching rate established under federal law by increasing federal expenditures while state contributions remained unchanged or even decreased. Figure 1 illustrates a state’s arrangement in place in 2004 in which the state increased federal expenditures without a commensurate increase in state spending. In this case, the state made a $41 million supplemental payment to a local government hospital. Under its Medicaid matching formula, the state paid $10.5 million and CMS paid $30.5 million as the federal share of the supplemental payment. However, after receiving the supplemental payment, the hospital transferred back to the state approximately $39 million of the $41 million payment, retaining just $2 million. Creating the illusion of a $41 million hospital payment when only $2 million was actually retained by the provider enabled the state to obtain additional federal reimbursements without effectively contributing a nonfederal share—in this case, the state actually netted $28.5 million as a result of the arrangement. Second, CMS had no assurance that these increased federal matching payments were retained by the providers and used to pay for Medicaid services. Federal Medicaid matching funds are intended for Medicaid-covered services for the Medicaid-eligible individuals on whose behalf payments are made. However, under these arrangements payments for such Medicaid-covered services were returned to the states, which could then use the returned funds at their own discretion. In 2004, we examined how six states with large supplemental payment financing arrangements involving nursing homes used the federal funds they generated. As in the past, some states deposited excess funds from financing arrangements into their general funds, which might or might not be used for Medicaid purposes. Table 2 provides further information on how states used their funds from supplemental payment arrangements, as reported by the six states we reviewed in 2004. Third, these state financing arrangements undermined the fiscal integrity of the Medicaid program because they enabled states to make payments to government providers that could significantly exceed their costs. In our view, this practice was inconsistent with the statutory requirement that states ensure that Medicaid payments are economical and efficient.
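To make the dollar flows in the figure 1 example concrete, the following is a minimal sketch (Python) that walks through the arithmetic described above. It reproduces only the amounts reported in the text; the variable names are ours, and the sketch is an illustration of the arithmetic, not a model of any state’s actual financing arrangement.

```python
# Dollar flows in the 2004 financing arrangement illustrated in figure 1.
# Amounts are in millions of dollars and are taken from the text above.

payment_to_hospital = 41.0  # supplemental payment to the local government hospital
state_share = 10.5          # nonfederal share paid by the state
federal_share = 30.5        # federal matching share paid by CMS
returned_to_state = 39.0    # amount the hospital transferred back to the state

# The state and federal shares together fund the supplemental payment.
assert state_share + federal_share == payment_to_hospital

hospital_retained = payment_to_hospital - returned_to_state  # $2.0 million
state_net_gain = returned_to_state - state_share             # $28.5 million

print(f"Hospital retained: ${hospital_retained:.1f} million")
print(f"State net gain:    ${state_net_gain:.1f} million")
```

The sketch shows why the payment was illusory: the provider kept only $2 million of a nominal $41 million payment, while the state netted $28.5 million in federal funds without a commensurate state outlay.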
Our March 2007 report on a recent CMS oversight initiative to end certain financing arrangements in which providers did not retain the payments provides context for CMS’s May 2007 rule. Responding to concerns about states’ continuing use of creative financing arrangements to shift costs to the federal government, CMS began taking steps in August 2003 to end inappropriate state financing arrangements by closely reviewing state plan amendments on a state-by-state basis. As a result of the CMS initiative, from August 2003 through August 2006, 29 states ended one or more arrangements for financing supplemental payments because providers were not retaining the Medicaid payments for which states had received federal matching funds. We found CMS’s actions under its oversight initiative to be consistent with Medicaid payment principles—for example, that payment for services be consistent with efficiency, economy, and quality of care. We also found, however, that CMS’s initiative to end inappropriate financing arrangements lacked transparency, in that CMS had not issued written guidance about the specific approval standards for state financing arrangements. CMS’s initiative was a departure from the agency’s past oversight approach, which did not focus on whether individual providers were retaining the supplemental payments they received. When we contacted the 29 states that ended a financing arrangement under the initiative from August 2003 through August 2006, only 8 reported that they had received any written guidance or clarification from CMS regarding appropriate and inappropriate financing arrangements. CMS had not used any of the means by which it typically provides information to states about the Medicaid program, such as its published state Medicaid manual, standard letters issued to all state Medicaid directors, or technical guidance manuals, to inform states about the specific standards it used for reviewing and approving states’ financing arrangements. State officials told us that it was not always clear what financing arrangements CMS would allow and why arrangements approved in the past would no longer be approved. Twenty-four of the 29 states reported that CMS had changed its policy regarding financing arrangements, and 1 state challenged CMS’s disapproval of its state plan amendment, in part on the grounds that CMS changed its policy regarding payment arrangements and should have done so through rulemaking. The lack of transparency in CMS’s review standards raised questions about the consistency with which states had been treated in ending their financing arrangements. We consequently recommended that CMS issue guidance to clarify allowable financing arrangements. Our recommendation for CMS to issue guidance on allowable financing arrangements paralleled a recommendation we had made in earlier work reviewing states’ use of consultants on a contingency-fee basis to maximize federal Medicaid revenues. Problematic projects—those in which claims for federal matching funds appeared to be inconsistent with CMS’s policy or with federal law, or that, as with inappropriate supplemental payment arrangements, undermined Medicaid’s fiscal integrity—involved Medicaid payments to government entities and categories of claims for which federal requirements had been inconsistently applied, were evolving, or were not specific. We recommended that CMS establish or clarify and communicate its policies in these areas, including supplemental payment arrangements.
CMS responded that clarifying guidance was under development for targeted case management, rehabilitation services, and supplemental payment arrangements. We have ongoing work examining the amount and distribution of states’ Medicaid supplemental payments, but have not reported on the May 2007 rule or other rules related to Medicaid financing issued this year. Certain elements of the May 2007 rule relate to the concerns our past work has raised. Some aspects of the final rule appear to be responsive to recommendations from our past work, to the extent that its implementation could help ensure that Medicaid providers, on whose behalf states receive federal matching funds, retain the payments made by the state. The extent to which the rule would address concerns about the transparency of CMS’s initiative and review standards will depend on how CMS implements it. As the nation’s health care safety net, the Medicaid program is of critical importance to beneficiaries and the providers that serve them. The federal government and states have a responsibility to administer the program in a manner that ensures that expenditures benefit those low-income people for whom benefits were intended. With expenditures totaling more than $300 billion per year, accountability for these significant program expenditures is critical to providing those assurances. Ensuring the program’s long-term fiscal sustainability is important for beneficiaries, providers, states, and the federal government. For more than a decade, we have reported on various methods that states have used to inappropriately maximize federal Medicaid reimbursement, and we have made recommendations to end these inappropriate financing arrangements. Supplemental payments involving government providers have resulted in billions of excess federal dollars for states, yet accountability for these payments—assurances that they are retained by providers of Medicaid services to Medicaid beneficiaries—has been lacking. CMS has taken important steps in recent years to improve its financial management of Medicaid, yet more can be done. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or members of the subcommittee may have. For information regarding this testimony, please contact James Cosgrove at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Katherine Iritani, Assistant Director; Carolyn Yocom, Assistant Director; Ted Burik; Tim Bushfield; Tom Moscovitch; and Terry Saiki also made key contributions to this testimony. Medicaid Financing: Long-Standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-255T. Washington, D.C.: November 1, 2007. Medicaid Financing: Federal Oversight Initiative Is Consistent with Medicaid Payment Principles but Needs Greater Transparency. GAO-07-214. Washington, D.C.: March 30, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006. Medicaid: States’ Efforts to Maximize Federal Reimbursements Highlight Need for Improved Federal Oversight. GAO-05-836T. Washington, D.C.: June 28, 2005.
Medicaid Financing: States’ Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. GAO-05-748. Washington, D.C.: June 28, 2005. Medicaid: Intergovernmental Transfers Have Facilitated State Financing Schemes. GAO-04-574T. Washington, D.C.: March 18, 2004. Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004. Major Management Challenges and Program Risks: Department of Health and Human Services. GAO-03-101. Washington, D.C.: January 2003. Medicaid: HCFA Reversed Its Position and Approved Additional State Financing Schemes. GAO-02-147. Washington, D.C.: October 30, 2001. Medicaid: State Financing Schemes Again Drive Up Federal Payments. GAO/T-HEHS-00-193. Washington, D.C.: September 6, 2000. Medicaid in Schools: Improper Payments Demand Improvements in HCFA Oversight. GAO/HEHS/OSI-00-69. Washington, D.C.: April 5, 2000. Medicaid in Schools: Poor Oversight and Improper Payments Compromise Potential Benefit. GAO/T-HEHS/OSI-00-87. Washington, D.C.: April 5, 2000. Medicaid: States Use Illusory Approaches to Shift Program Costs to Federal Government. GAO/HEHS-94-133. Washington, D.C.: August 1, 1994. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Medicaid, a joint federal-state program, financed health care for about 59 million low-income people in fiscal year 2006. States have considerable flexibility in deciding which medical services and individuals to cover and how much to pay providers, and the federal government reimburses a portion of states' expenditures according to a formula established by law. The Centers for Medicare & Medicaid Services (CMS) is the federal agency responsible for overseeing Medicaid. Growing pressures on federal and state budgets have increased tensions between the federal government and states regarding this program, including concerns about whether states were appropriately financing their share of the program. GAO's testimony describes findings from prior work conducted from 1994 through March 2007 on (1) certain inappropriate state Medicaid financing arrangements and their implications for Medicaid's fiscal integrity and (2) outcomes and transparency of a CMS oversight initiative begun in 2003 to end such inappropriate arrangements. GAO has reported for more than a decade on varied financing arrangements that inappropriately increase federal Medicaid matching payments. In reports issued from 1994 through 2005, GAO found that some states had received federal matching funds by paying certain government providers, such as county-operated nursing homes, amounts that greatly exceeded established Medicaid rates. States would then bill CMS for the federal share of the payment. However, these large payments were often temporary, since some states required the providers to return most or all of the payment. States used the federal matching funds obtained in making these payments as they wished. Such financing arrangements had significant fiscal implications for the federal government and states. The exact amount of additional federal Medicaid funds generated through these arrangements is unknown, but was in the billions of dollars. Because such financing arrangements effectively increase the federal Medicaid share above what is established by law, they threaten the fiscal integrity of Medicaid's federal and state partnership. They shift costs inappropriately from the states to the federal government and take funding intended for covered Medicaid costs from providers, who, under these arrangements, do not retain the full payments. In 2003, CMS began an oversight initiative that by August 2006 resulted in 29 states ending one or more inappropriate financing arrangements. Under the initiative, CMS sought satisfactory assurances that a state was ending financing arrangements that the agency found to be inappropriate. According to CMS, the arrangements had to be ended because the providers did not retain all payments made to them but returned all or a portion to the states. GAO reported in 2007 that although CMS's initiative was consistent with Medicaid payment principles, its implementation was not transparent. CMS had not used any of the means by which it normally provides states with information about Medicaid program requirements, such as the published state Medicaid manual, standard letters issued to all state Medicaid directors, or technical guidance manuals. Such guidance could have helped by informing states about the specific standards used for reviewing and approving states' financing arrangements. In May 2007, CMS issued a final rule that, if implemented, would, among other things, limit Medicaid payments to government providers' costs. GAO has not reviewed the substance of the May 2007 rule.
The extent to which the May 2007 rule would respond to GAO's concerns about the transparency of CMS's initiative and review standards will depend on how CMS implements it.
Transnational criminal organizations use subterranean, aerial, and maritime smuggling methods to try to avoid the security measures designed to address traditional overland smuggling routes. These smuggling methods—which are further described below—include, but are not limited to, illicit cross-border tunnels, ultralight aircraft, panga boats, recreational maritime vessels, and self-propelled semi-submersible and fully submersible vessels. While the use of some of these conveyances is longstanding, DHS has identified changes in transnational criminal organizations’ tactics, techniques, and procedures in using them that present new or different challenges to border security. Cross-border tunnels. Cross-border tunnels are man-made sub-surface passageways that could be used to conceal the movement of humans or contraband and circumvent U.S. border defenses. Cross-border tunnels can be classified into one of four categories based on the predominant features of the tunnel, as described below and shown in figure 1: Sophisticated tunnels are elaborately constructed, may be of significant length and depth, and may have shoring, lighting, electricity, ventilation, and railways. Rudimentary tunnels are crudely constructed and shallow. Interconnecting tunnels exploit and connect to underground municipal infrastructure, such as storm water and sewage systems. Interconnecting tunnels typically connect to a rudimentary or sophisticated tunnel to operate; however, in these cases the entire tunnel would be classified into one category based on the predominant features of the tunnel. The exclusive unaltered use of underground municipal infrastructure to transport people or contraband is not considered a cross-border tunnel, but is another subterranean threat. Mechanically bored tunnels are those that are constructed primarily by mechanical means, instead of by human diggers. Such mechanical means can include horizontal directional drilling devices and tunnel boring machines. These tunnels are generally lined with piping. Ultralight aircraft. As shown in figure 2, ultralight aircraft are single-seat aircraft that have an empty weight of about 250 pounds or less. Smugglers modify ultralight aircraft to carry drug loads by, for example, attaching large metal baskets. Maritime vessels. Selected maritime smuggling methods include panga boats, recreational vessels, and self-propelled semi-submersible and fully submersible vessels, which are further described below and shown in figure 3. Panga boats are open-hulled, flat-bottomed fishing vessels designed to arrive and depart directly from a beach. These vessels are between 20 and 60 feet long and are fitted with one or more outboard motors. Recreational vessels are motorized vessels and sailboats used for leisure activities. Smugglers can exploit the ubiquity of legitimate recreational activity to blend in and avoid detection using these vessels. Self-propelled semi-submersible and fully submersible vessels have low profiles designed to have low radar reflectivity, making them difficult to detect. Semi-submersible vessels generally cut through the water at wave height, while fully submersible vessels can be entirely submerged below the surface. Multiple components within DHS have responsibilities for addressing subterranean, aerial, and maritime smuggling, including ICE HSI, Coast Guard, and CBP’s Border Patrol and AMO. Their specific roles and responsibilities with regard to the selected smuggling methods are discussed below. Cross-border tunnels.
CBP and ICE HSI share primary responsibility for countering cross-border tunnel threats. ICE HSI is responsible for cross-border tunnel investigations, Border Patrol is the primary component for interdiction, and CBP is responsible for the remediation of illicit tunnels. Both Border Patrol and ICE HSI efforts can lead to the identification of likely tunnel locations. In 2013, CBP established a Tunnel Program Management Office (TPMO) within Border Patrol to lead and coordinate CBP counter-tunnel efforts. Ultralight aircraft. AMO, Border Patrol, and ICE HSI have primary responsibility for countering ultralight aircraft smuggling. AMO’s Air and Marine Operations Center (AMOC) is to surveil the airspace above the nation’s border and identify the criminal use of noncommercial air conveyances, including ultralight aircraft. AMO and Border Patrol are responsible for responding to and interdicting ultralight aircraft used for smuggling, and ICE HSI is responsible for investigating ultralight aircraft incursions. Maritime vessels. Coast Guard, AMO, and Border Patrol share responsibility for patrolling the U.S. maritime borders and territorial sea (i.e., maritime approaches 12 nautical miles seaward of the U.S. coast) to interdict drugs and foreign nationals illegally entering the United States. Coast Guard is the lead federal maritime law enforcement agency on the high seas (waters beyond 12 nautical miles seaward of the U.S. coast). ICE HSI and AMO may investigate cross-border maritime smuggling. In addition, within DHS, DHS S&T and CBP’s Office of Acquisition (formerly the Office of Technology Innovation and Acquisition) are responsible for assisting DHS components in obtaining technology that can help them address the threats posed by the selected smuggling methods. DHS S&T is responsible for leading research and development, demonstration, testing, and evaluation to help bridge capability gaps. CBP’s Office of Acquisition is responsible for providing policy and acquisition oversight across CBP to help obtain products and services that enhance border security. Outside of DHS, DOD is the lead federal agency for the detection and monitoring of aerial and maritime transit of illegal drugs into the United States and operates systems, such as radar systems, that can be used in support of DHS and other federal, state, and local law enforcement activities. Our analysis of Border Patrol tunnel data showed that there were 67 cross-border tunnels discovered along U.S. borders from fiscal years 2011 through 2016, all located on the southwest border, as shown in figure 4. Nearly all cross-border tunnels—62 of 67—were discovered in Border Patrol’s Tucson, Arizona, or San Diego, California, sectors, and the remaining 5 were discovered in the El Centro, California, and Yuma, Arizona, Border Patrol sectors. Additionally, the number of discovered cross-border tunnels generally declined during the period, with 18 tunnels discovered in fiscal year 2011 and 9 tunnels discovered in fiscal year 2016. However, CBP’s 2015 tunnel report to Congress found that illicit cross-border tunnels are a persistent threat to national security and that increased border enforcement efforts would likely continue to push illicit cross-border smuggling underground. Our analysis of Border Patrol tunnel data also showed that sophisticated and interconnecting tunnels were the most common types discovered from fiscal years 2011 through 2016.
In particular, 54 of the 67 discovered cross-border tunnels from fiscal years 2011 through 2016 were sophisticated and interconnecting types, while 13 were rudimentary or mechanically bored. Additionally, most drug seizures associated with cross-border tunnels involved marijuana. For example, 21 of the 23 seizures involved marijuana, resulting in over 106,600 pounds of seized marijuana. Our analysis of AMO data showed that there were 534 suspected ultralight aircraft incursions from fiscal years 2011 through 2016, all but one located on the southwest border in Arizona, California, New Mexico, and Texas. The number of suspected ultralight incursions declined each year from fiscal years 2011 through 2016. For example, according to AMO data, the overall number of suspected ultralight aircraft incursions declined from 199 in fiscal year 2011 to 28 in fiscal year 2016, as shown in figure 5. However, as discussed later in this report, AMO reports that ultralight aircraft are a flexible threat and a surge in activity could occur in any or all of the southwest border sectors. For example, while the overall number of ultralight aircraft incursions declined, an increase in ultralight activity occurred in Texas in fiscal year 2016. More specifically, 18 suspected ultralight aircraft incursions were detected in Texas in fiscal year 2016, compared to 5 suspected ultralight aircraft incursions for all of fiscal years 2011 through 2015. Additionally, most drug seizures associated with ultralight aircraft incursions were of marijuana. For example, more than 98 percent (100 of 102) of the seizures involved marijuana, resulting in over 22,000 pounds of seized marijuana. Less than 2 percent (2 of 102) of these seizures involved methamphetamine, which resulted in nearly 8 kilograms of methamphetamine seized. Our analysis of Consolidated Counterdrug Database (CCDB) data shows that the majority of known maritime drug smuggling events involving panga boats and recreational vessels along U.S. mainland borders—nearly 76 percent (234 of 309)—took place on the west coast, specifically California, and over 24 percent (75 of 309) took place on the southeast coast, northeast coast, and the southwest border. As depicted in figure 6, our analysis of CCDB data also showed that the number of known panga boat and recreational vessel drug smuggling events varied from fiscal years 2011 through 2016, with the highest number of events (82 of 309) occurring in fiscal year 2013 and the lowest numbers occurring in fiscal years 2015 and 2016, with 32 and 29 events, respectively. However, the actual number of maritime smuggling events and the amount of drugs smuggled by these methods is unknown. Additionally, a higher proportion of events—nearly 65 percent (200 of 309)—involved motorized, open-hulled vessels, such as panga boats, and a lower proportion of events—over 35 percent (109 of 309)—involved recreational vessels. Our analysis also showed that the majority of known panga boat and recreational vessel drug smuggling events—nearly 86 percent (265 of 309)—involved marijuana, resulting in over 413,400 pounds of seized marijuana from fiscal years 2011 through 2016. Nearly 14 percent (42 of 309) involved cocaine, resulting in over 3,200 kilograms of seized cocaine, and nearly 1 percent (2 of 309) involved methamphetamine, resulting in nearly 300 kilograms of seized methamphetamine from fiscal years 2011 through 2016.
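The shares reported in this analysis follow directly from the underlying counts. The following is a minimal sketch (Python) of that tabulation, using only the aggregate counts cited above; the groupings and names are ours for illustration, and actual CCDB records are more detailed.

```python
# Illustrative tabulation of maritime smuggling event shares, fiscal years
# 2011 through 2016, from the aggregate counts reported above (309 events).
events_by_region = {"west coast (California)": 234,
                    "southeast/northeast coasts and southwest border": 75}
events_by_vessel = {"motorized, open-hulled (e.g., panga boats)": 200,
                    "recreational vessels": 109}
events_by_drug = {"marijuana": 265, "cocaine": 42, "methamphetamine": 2}

def shares(counts):
    """Return each category's share of the total event count."""
    total = sum(counts.values())
    return {category: count / total for category, count in counts.items()}

for counts in (events_by_region, events_by_vessel, events_by_drug):
    for category, share in shares(counts).items():
        print(f"{category}: {share:.1%}")  # e.g., west coast = 75.7 percent
```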
Our analysis of Coast Guard data from fiscal years 2011 through 2016 showed that the majority of the known migrants being smuggled along U.S. mainland borders using panga boats and recreational vessels were interdicted off the Florida coast. Specifically, Coast Guard interdicted nearly 92 percent (1,798 of 1,963) of these migrants off the Florida coast (e.g., the North and South Florida Straits) and over 8 percent (165 of 1,963) on the southwest border or southern California coastline. Our analysis of Coast Guard data also showed that the number of migrants Coast Guard interdicted in maritime smuggling-related events on panga boats and recreational vessels varied over time but generally increased from fiscal years 2011 through 2016. In particular, the lowest numbers were interdicted in fiscal years 2011 (211) and 2013 (239), and the highest number was interdicted in fiscal year 2016 (443), as shown in figure 7. Additionally, of the migrants interdicted from fiscal years 2011 through 2016, about 72 percent (1,374 of 1,899) were on recreational vessels and about 28 percent (525 of 1,899) were on panga boats. In fiscal year 2016, this trend changed, and Coast Guard interdicted over half (233 of 443) of these maritime migrants on panga boats and under half (210 of 443) on recreational vessels.

DHS has taken steps to assess the risks from smuggling by cross-border tunnels, ultralight aircraft, panga boats, and recreational vessels. Specifically, DHS has assessed the threat of these selected smuggling methods by identifying geographic areas that have experienced a greater incidence of smuggling and transnational criminal organization smuggling tactics. DHS has also assessed vulnerabilities by identifying capability gaps that affect the department's ability to address the threats posed by the selected smuggling methods.

Cross-border tunnels. To assess the risk from cross-border tunnels, CBP commissioned a 2010 assessment that identified areas along the southern borders of California, Arizona, New Mexico, and western Texas as having a high risk from tunneling activity, based on factors such as soil composition, water table, and known tunneling activity. However, this assessment did not analyze the risk from rudimentary, interconnecting, or mechanically bored tunnels. CBP is currently in the process of obtaining a tunnel risk assessment tool that is to compute an estimated statistical likelihood for each of the four types of illicit tunnels along the southwest border. Further, unlike the 2010 assessment, this tool is to use a web-based platform that can be updated to allow risk to be reassessed on an ongoing basis. CBP officials expect this tool to be completed in June 2017.

ICE HSI and Border Patrol have also conducted intelligence assessments to identify areas that are at a higher risk from tunneling based on factors such as transnational criminal organization smuggling tactics and past tunneling activity. For example, a 2014 ICE HSI intelligence report states that transnational criminal organizations primarily use tunnels to transport narcotics, particularly marijuana, which is an important source of profit. Marijuana is also relatively bulky, and tunnels have the advantage of being able to accommodate large drug loads, according to the assessment. CBP has also identified capability gaps that affect its ability to address cross-border tunnels.
As part of the process to acquire tunnel detection technology, CBP sought an independent examination of factors that affect counter-tunnel capabilities using a framework that assesses the state of doctrine; organization; training; materiel; leadership; personnel; facilities; and regulations, grants, and standards. The analysis was issued in June 2013 and identified some gaps in tunnel technology as well as non-technological capability gaps in doctrine, among other things. CBP's TPMO is responsible for addressing these capability gaps, and we discuss the status of key efforts later in this report.

Ultralight aircraft. To assess the risk from ultralight aircraft, AMO has analyzed ultralight aircraft data and, as previously discussed, found that the majority of ultralight incursions have occurred in Arizona and California, with a recent uptick in activity in Texas. AMO has reported in its General Aviation Threat Assessments that ultralight aircraft are a flexible smuggling method and that a surge in activity could occur in any or all of the southwest border sectors, shifting when there is an increased law enforcement presence. To keep relevant Border Patrol agents informed of trends and recent ultralight aircraft activity, AMO and Border Patrol officials stationed at AMO's AMOC, which monitors the airspace on the border, send each Border Patrol sector along the southwest border a monthly briefing and provide real-time coordination during incursions, as discussed later in this report. AMO has also analyzed ultralight aircraft smuggling tactics and found that ultralight aircraft smugglers generally will not land in the United States and instead will airdrop the narcotics load in order to quickly return to Mexico. AMO officials explained that pilots use this method hoping to avoid arrest. According to AMO analysis, the narcotics are generally dropped within a couple of miles of a main road so that the smugglers can quickly collect the narcotics and blend in with other vehicle traffic on the road. Additionally, AMO reports that ultralight aircraft smugglers operate like subcontractors for transnational criminal organizations and are financially responsible for the narcotics they transport. As a result, ultralight aircraft primarily transport low- to mid-grade marijuana, with an average load size of around 200 pounds, because the cost of higher-value narcotics is prohibitive and the risk of destroying a load during the airdrop is too great.

CBP has identified gaps in its air domain awareness and made finding a technical solution a priority in 2009. CBP has efforts underway to address these capability gaps, which we discuss later in this report. According to CBP analysis, CBP has sufficient capabilities to respond to and resolve detected ultralight aircraft incursions, and changes in non-technical capabilities, such as increased manpower, will not significantly enhance its ability to address the threat posed by ultralight aircraft.

Maritime vessels. To assess the risk from maritime smuggling through noncommercial vessels such as panga boats and recreational vessels, Coast Guard has produced annual cross-border drug smuggling intelligence assessments since 2014.
These assessments have consistently identified Coast Guard Districts 11, 8, and 7, which cover the coastal borders of California and the Southeast United States from Texas through the east coasts of Florida, Georgia, and South Carolina, as well as Puerto Rico, as the primary threat areas for cross-border drug smuggling by noncommercial vessels. Further, the Coast Guard intelligence assessments identified marijuana smuggling from Mexico to California by panga boats as a primary threat to the U.S. mainland. In the fiscal year 2015 assessment, the most recent available, Coast Guard found that, as in previous years, panga boat smuggling routes tended to be hundreds of miles offshore, with intended destinations north of Los Angeles—most often between Santa Barbara and San Luis Obispo, California—to avoid U.S. maritime law enforcement.

Coast Guard has also assessed the risk from maritime migration through its biennial National Maritime Strategic Risk Assessment. Coast Guard's 2014 risk assessment, the most recent available, found that illegal maritime immigration was associated with societal costs and threats to the safety of the migrants at sea, and ranked it 16th among the 27 incident types assessed in terms of the impact and severity of risks. Specifically, Coast Guard found that the risk from maritime migration was lower than the risks from drug smuggling, natural disasters, and overfishing, among others, and greater than the risks from events such as an accidental hazardous material release or a debris or sewage discharge.

The National Maritime Strategic Risk Assessment is designed to analyze risk at a national level to help inform resource allocation decisions and does not provide assessments at the local level or by vessel type; however, Coast Guard and its DHS partners have also conducted regional assessments. DHS's Southern Border and Approaches Campaign Plan identified maritime migration from Cuba, Hispaniola, and the Bahamas as the primary illegal maritime migration threat, and Florida-based Coast Guard, AMO, ICE HSI, and Border Patrol have analyzed maritime migration from these areas. For example, intelligence assessments issued from 2015 through spring 2016 found that there has been an increase in Cuban maritime migration that will likely continue due to perceptions that U.S. immigration policies for Cubans will change. Coast Guard assessments show that most Cuban migrants use homemade vessels known as rustics, rafts, or chugs to travel to the Florida Keys; however, transnational criminal organizations commonly use stolen or personally owned recreational vessels to transport migrants, according to a fiscal year 2016 AMO assessment. Coast Guard, ICE HSI, AMO, and Border Patrol reported that another key maritime migrant smuggling route is from the Bahamas to Southeast Florida, a trip that can be as short as 45 nautical miles.

Coast Guard data show that maritime migrant smuggling occurs less frequently along the California coast, but California-based Coast Guard, AMO, ICE HSI, and Border Patrol officials we met with have also assessed this threat. Personal watercraft, such as jet skis, were the most commonly used vessels to smuggle migrants in the region, according to a joint fiscal year 2015 California Coastal Region assessment, though recreational vessels and panga boats were also used. In addition, the assessment reports that most migrant smuggling routes along the California coastal region are destined for locations south of Los Angeles, California.
Coast Guard and CBP have assessed maritime security capability gaps through DHS S&T's Integrated Product Team process, which brings together component leaders to identify and prioritize technological capability gaps. As discussed later, DHS S&T has projects underway to enhance maritime domain awareness. Border Patrol and AMO have also initiated their own capability gap assessments to identify gaps and technical and nontechnical solutions across the range of each component's responsibilities, to include maritime security. Border Patrol is implementing its capability gap assessment and is expected to complete the documentation of requirements to address capability gaps in all Border Patrol sectors in 2019, according to officials. AMO expects to complete its capability gap assessment by the end of fiscal year 2017.

Coast Guard, AMO, Border Patrol, and ICE HSI all capture information on the types of maritime vessels used for smuggling drugs and migrants to inform their counter-smuggling efforts; however, the use of different terminology for vessels in different regions and data systems has impeded DHS's ability to develop a full picture of the risks from panga boat and recreational vessel smuggling nationwide. For example, as shown in table 1, the definition of a panga in the interagency California coastal region intelligence assessment and data system differs from the definition in Coast Guard's intelligence assessments, with the former specifying that a panga would have "multiple" outboard motors while the latter states that a panga would have "one or two" outboard motors. Furthermore, both definitions of pangas overlap with other categories of vessels, including "lanchas," which Coast Guard has defined as open-hulled vessels with one outboard motor used in the Gulf Coast region, and "go-fasts," which Coast Guard has defined as open-hulled vessels with one or more outboard motors that can operate at 25 knots in shallow water. The panga boats that have been used to smuggle drugs in the California coastal region are classified as "lanchas" and "go-fast" vessels in the government-wide CCDB and as "go-fasts" in a Coast Guard report and a national counternarcotic strategy. Additionally, the term "go-fast" is used by Coast Guard, AMO, Border Patrol, and ICE HSI in various assessments to describe maritime smuggling methods in Florida. However, the go-fast smuggling in Florida includes vessels that can blend in with the recreational boating traffic in the area—or what the California coastal region DHS partners would term a "pleasure craft."

AMO officials stated that differences in terminology do not impact operations at the local level, as officials are familiar with smuggling methods in their area and the local vernacular; however, these differences make it difficult to synthesize information across components and regions to get a full picture of the threats posed by panga boat and recreational vessel smuggling nationwide. Facilitating this type of comprehensive assessment could help better inform management decisions, including resource allocation decisions.
For example, Coast Guard and AMO officials we met with in California and Florida stated that vessel types are associated with different smuggling tactics that require different operational responses; recreational vessels used for smuggling narcotics that blend into legitimate recreational maritime traffic may require additional tools, such as human intelligence and training, canines, and non-intrusive inspection equipment, to identify suspect vessels and hidden compartments. In comparison, the officials told us that more "overt" forms of smuggling, such as panga boats with large unconcealed or minimally concealed drug loads, and recreational vessels overcrowded with migrants, are relatively easier to address since they can be detected and identified as suspect by maritime patrol aircraft.

Differences in regional parlance and varying options in different databases have contributed to the lack of standardized definitions and categories of vessels, according to Coast Guard, AMO, and ICE HSI officials. Managers of the interagency CCDB recognized the issue of overlapping vessel definitions and are planning to revise the vessel options to eliminate overlapping categories, such as by consolidating "lancha" and "panga" under the category of "go-fast." However, these changes will not affect the other databases or threat assessments used by DHS.

Key considerations for implementing interagency collaborative mechanisms state that developing a common terminology can help bridge organizational cultures to enhance and sustain interagency efforts. DHS has also recognized the importance of common definitions and produces an annual DHS Lexicon to define terms, reduce the possibility of misunderstanding when communicating across the department, and help DHS develop and manage knowledge, information, and data. Coast Guard, AMO, Border Patrol, and ICE HSI officials agreed that it would be beneficial to have standard vessel definitions DHS-wide to enhance the quality of data and intelligence assessments and facilitate information sharing across agencies. However, Coast Guard and ICE HSI officials noted that it could be challenging to identify all relevant data systems that use vessel types and to determine how to reconcile older data with new categories.

While we recognize that this could be challenging, there are upcoming opportunities DHS could leverage to efficiently develop and promulgate common vessel definitions and categories. For example, once changes to CCDB vessel categories are finalized, relevant DHS components could consider whether these vessel categories will meet their needs. Additionally, in the next year DHS plans to draft a new Small Vessel Security Strategy to address the risks that terrorists will use small vessels for transportation or an attack, which could be used as a forum for developing standard definitions for the various types of small vessels for inclusion in the DHS Lexicon and use in future threat assessments. If updating all databases proves to be difficult or costly, components could, for example, create common terminology by documenting a crosswalk that demonstrates the relationship between their vessel categories and established DHS-wide vessel definitions, as sketched below. By standardizing definitions of panga boats and recreational vessels in the DHS Lexicon for use in future threat assessments, DHS would be better able to leverage its threat assessments to develop a clearer and more comprehensive picture of the threats posed by these maritime smuggling methods across the nation.
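To make the crosswalk idea concrete, the following is a minimal sketch, not an actual DHS data model. The regional terms reflect the usage described above, but the standardized category names, the region labels, and the specific mappings are illustrative assumptions only.

```python
# Hypothetical crosswalk from (region, local vessel term) to a
# standardized, DHS-wide category. All names here are illustrative.
VESSEL_CROSSWALK = {
    ("california_coastal", "panga"): "open-hulled outboard vessel",
    ("gulf_coast", "lancha"): "open-hulled outboard vessel",
    ("california_coastal", "pleasure craft"): "recreational vessel",
    ("florida", "pleasure craft"): "recreational vessel",
}

def standardize(region: str, local_term: str) -> str:
    """Map a regional vessel label to a DHS-wide category.

    Unmapped labels are flagged rather than silently passed through,
    so gaps and ambiguities in the crosswalk surface during analysis.
    """
    key = (region, local_term.lower())
    return VESSEL_CROSSWALK.get(key, f"UNMAPPED: {region}/{local_term}")

# Records drawn from different systems can then be compared on one scale.
print(standardize("gulf_coast", "lancha"))
print(standardize("california_coastal", "panga"))
# An ambiguous cross-regional term such as "go-fast" surfaces as a gap
# that the components would need to resolve when building the crosswalk.
print(standardize("florida", "go-fast"))
```

A documented mapping of this kind could let components translate existing records into common categories for cross-regional analysis without immediately reworking every database.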
Having a complete picture of these maritime smuggling threats could, in turn, help better inform management decisions, including resource allocation decisions.

DHS components have established various coordination mechanisms to address smuggling by cross-border tunnels, ultralight aircraft, panga boats, and recreational maritime vessels, and to improve coordination among federal, state, and local partners. As previously discussed, in 2015 DHS established a Border Security Integrated Product Team composed of representatives from DHS S&T, CBP, ICE, and Coast Guard to identify technology gaps and prioritize research and development efforts for enhancing border security. We discuss research and development projects that address the selected smuggling methods later in this report. In addition, DHS has established coordination mechanisms that specifically target the selected smuggling methods.

Cross-border tunnels. DHS has established two interagency Border Enforcement Security Task Force (BEST) Tunnel Task Forces to conduct investigations into cross-border tunnel incursions. These Tunnel Task Forces are located in San Ysidro, California, and Nogales, Arizona, within the two Border Patrol sectors with the highest number of illicit cross-border tunnels found. Participants in the task forces include Border Patrol, ICE HSI, and the Department of Justice's Drug Enforcement Administration, among others. State and local law enforcement officials sometimes provide additional support during a tunnel investigation. For instance, state and local police will, at times, help provide personnel to surveil or search a warehouse suspected of housing a tunnel exit. In addition to participating in the interagency Tunnel Task Force, Border Patrol established a Western Corridor Tunnel Interdiction Group in California to patrol the subterranean drainage infrastructure and to locate, map, and monitor interconnected tunnels. Also, Border Patrol and ICE officials in other sectors where tunnels pose threats have established informal task forces and partnerships to facilitate information sharing and leverage intelligence and resources for counter-tunnel efforts. For instance, Border Patrol and ICE HSI officials in El Centro, California, stated they have monthly meetings to discuss trends and share information.

DHS further coordinates with other federal partners, such as DOD, to identify common tunnel requirements, test tunnel technologies, and exchange tunnel-related information. For instance, DHS officials participate in annual meetings led by the DOD Combating Terrorism Technical Support Office to discuss subterranean trends, developments, requirements, and new and emerging technologies, and to build relationships. Additionally, Border Patrol officials in Nogales, Arizona, coordinate with DOD's Combating Terrorism Technical Support Office and Asymmetric Warfare Group to test tunnel technology and operational scenarios in tunnels.

Ultralight aircraft. AMO's AMOC surveils border airspace for ultralight aircraft incursions and works with AMO and Border Patrol agents in the field to interdict ultralight aircraft drug loads and crews. Currently, AMOC's Air and Marine Operations Surveillance System can help detect ultralight aircraft. AMOC officials stated they can manually monitor movement patterns on border airspace radar feeds and look for indicators of ultralight activity. When AMOC officials detect a possible ultralight incursion, they then call the relevant Border Patrol and ICE HSI stations.
Conversely, if Border Patrol agents or another federal law enforcement partner suspects a possible ultralight aircraft incursion, they call AMOC to confirm detection on radar. Border Patrol and ICE HSI representatives stationed at AMOC stated that their co-location further facilitates interagency coordination. AMO and Border Patrol officials noted that transnational criminal organizations have employed countermeasures to thwart their efforts. For example, transnational criminal organizations use drones and scouts to conduct counter-surveillance. To help mitigate these challenges, select Border Patrol sectors and ICE HSI field offices created ad hoc coordination mechanisms and operations to partner and better focus resources when the threat posed by ultralight aircraft is high in their areas of responsibility. These sectors and offices also established tip lines for the general public to report suspicious air activity and instruct their agents on ultralight detection methods.

DHS also coordinates with DOD to share information related to aerial incursions, identify technical solutions, and coordinate assets to support interdiction efforts. For example, DHS leverages DOD and Federal Aviation Administration radars to feed into the Air and Marine Operations Surveillance System. Conversely, AMOC officials stated that AMOC has also provided a number of DOD entities access to the Air and Marine Operations Surveillance System to help enhance their domain awareness and identify suspicious targets.

Maritime vessels. In 2011, DHS established Regional Coordinating Mechanisms (RECOM) to coordinate interagency operations and avoid duplicative efforts to address U.S. mainland threats in the maritime domain, including panga boats and recreational vessels. There are RECOMs in California, Florida, and Texas, addressing the primary threat areas for maritime smuggling. Participants include Coast Guard, CBP's AMO and Border Patrol, ICE HSI, and the U.S. Attorney's Offices, and may also include state and local law enforcement. To address maritime smuggling, RECOM partners host joint teleconferences to create interagency interdiction plans, coordinate asset deployment and schedules to de-conflict missions, and discuss post-interdiction prosecution of migrant cases. RECOMs also serve to share information and intelligence among partners on maritime smuggling threats and trends.

While DHS component officials identified some challenges in addressing maritime smuggling, they also reported that the RECOMs help mitigate these challenges. For example, DHS component officials noted that the vastness of the maritime environment precludes DHS officials from having full awareness of the presence of maritime vessels, including panga boats and recreational vessels. However, the RECOMs coordinate and leverage each partner's resources in order to maximize assets and expand coverage. Additionally, Coast Guard and AMO perform routine patrols with aerial and marine assets to monitor potential smuggling routes, conduct public outreach at marinas regarding smuggling, surveil for transnational criminal organization scouts, and perform random searches of recreational vessels with canines. Coast Guard officials indicated that the Federal Emergency Management Agency's Operation Stonegarden grants have been instrumental in involving state and local law enforcement agencies in coastal border security operations.
For example, Operation Stonegarden local law enforcement partners helped the San Francisco RECOM interdict 10 subjects involved in a panga boat landing in 2015. RECOMs also conduct maritime smuggling investigations. For example, the San Diego BEST Marine Task Force is the investigative entity for the San Diego RECOM. Participants in the Marine Task Force include ICE HSI, Border Patrol, AMO, Coast Guard's Investigative Service, the San Diego Harbor Police, the San Diego Sheriff's Department, and the California Army National Guard.

As with cross-border tunnels and ultralight aircraft, DHS components also coordinate with DOD to share information and leverage technical solutions for addressing maritime smuggling. For example, DHS components and DOD share some cross-border drug removal data in order to increase domain awareness. Additionally, DHS coordinates with DOD to address maritime smuggling in the transit zone through the Joint Interagency Task Force South—a national task force that facilitates international and interagency interdiction of illicit maritime trafficking. Joint Interagency Task Force South officials told us the task force primarily operates in the transit zone rather than along U.S. mainland borders due to the large quantities of narcotics being moved from source countries through the transit zone.

DHS's approach to countering cross-border tunnels centers on collaboration to leverage the efforts of multiple agencies; however, no comprehensive department-level standard operating procedures have been established to provide strategic guidance and facilitate information sharing departmentwide. As previously discussed, ICE is the primary agency responsible for tunnel investigations, and CBP is responsible for tunnel interdiction and remediation. Both ICE and CBP have designated an authority within their agency for counter-tunnel responsibilities. Specifically, ICE HSI has designated a Unit Chief in its Contraband Smuggling Unit as responsible for oversight and coordination of ICE tunnel investigations at the headquarters level, among other things. CBP designated Border Patrol as the primary point of contact for tunnels within CBP in 2010 and tasked it with establishing standardized detection and reporting procedures for CBP entities. CBP later formed the TPMO in 2013 to serve as CBP's centralized coordination point for addressing tunnels. However, as of November 2016, neither the ICE nor the CBP authority had established standard operating procedures guiding how the agencies should individually or collectively address tunnels used for smuggling. A tunnel capability gap assessment commissioned by CBP in 2013 found that while standard operating procedures existed in some sectors, CBP did not have an accepted set of tactics, techniques, and procedures, such as best practices and tunnel activity indicators. The ICE-led BEST Tunnel Task Forces also do not have documented standard operating procedures for addressing tunnels.

In studies, CBP and ICE have identified the absence of standard operating procedures as a challenge. For example, the CBP capability gap assessment found that DHS personnel located in different areas had inconsistent knowledge of the primary methods for addressing tunnels and that selected Border Patrol personnel conducting tunnel prediction operations may not have access to all pertinent information.
During the course of our audit work, we further found that establishing standard operating procedures could strengthen DHS's counter-tunnel efforts. Specifically, we found that not all officials addressing cross-border tunnels were aware of—and thus were not accessing—all relevant DHS systems or offices with tunnel information. For example, ICE HSI officials we met with at one location were unaware of the TPMO or any national tunnel office. Further, TPMO, ICE HSI, and Border Patrol officials told us that standard operating procedures for tunnels could be beneficial. For example, ICE HSI and Border Patrol officials from three different sectors indicated a national-level office could help support counter-tunnel efforts by providing guidance, training, and strategic-level insight on tunnels. For instance, ICE HSI officials from one sector said it would be helpful to have guidance on detecting different types of tunnels and on the investigative techniques for detecting tunnels used across sectors.

In recognition of these issues, both the CBP and ICE assessments recommended that DHS establish standard operating procedures for addressing tunnels in order to formalize methods and enhance information sharing for operational coordination. CBP accepted the CBP capability gap assessment's recommendation and tasked the TPMO with leading the effort to provide strategic-level guidance and direction for CBP counter-tunnel efforts. However, according to the Assistant Chief who leads the TPMO, it has not yet developed standard operating procedures due to a lack of personnel and resources. According to the ICE HSI Unit Chief responsible for oversight and coordination of ICE tunnel investigations, no standard operating procedures could be drafted that would address the needs of specific locales, given the different operational areas. Additionally, both TPMO and ICE HSI officials at the headquarters level stated that establishing standard operating procedures is unnecessary because current coordination is effective and CBP and ICE have general memoranda of understanding from 2004 and 2006 that govern their coordination.

While we recognize that there are different types of tunnel threats in varying geographic environments and that CBP and ICE coordinate to address tunnels, counter-tunnel standard operating procedures could include best practices and procedures applicable to all sectors—such as procedures for reporting and accessing information on tunnels—as well as key differentiated information to account for the distinct operational areas. Further, CBP and ICE assessments have recommended establishing standard operating procedures for counter-tunnel efforts, and the general CBP-ICE memoranda of understanding do not speak specifically to counter-tunnel coordination procedures. Additionally, the DHS Office of Inspector General (OIG) recommended in a 2012 report that DHS designate an authority to provide leadership, strategy, and coordination of DHS counter-tunnel efforts across DHS components. The OIG identified the lack of a department-level focal point for tunnels as a concern and stated that it increased the risk of DHS not achieving its goal of disrupting criminal organizations that engage in cross-border smuggling.
As an example, the DHS OIG reported that there were not sufficient policies or procedures in place to ensure that, when acquiring tunnel detection technology, CBP would take into account ICE HSI investigative requirements, such as the need for covert use so as not to alert criminals to the presence of law enforcement. At the time, CBP and ICE stated they would designate a co-chaired committee to satisfy the recommendation. DHS approved this decision in February 2013. However, according to the TPMO, the co-chaired committee has never convened, nor has it needed to take action. The ICE HSI Unit Chief responsible for tunnel coordination and oversight was unaware of the existence of the committee. Convening this CBP-ICE committee to establish standard operating procedures could help provide strategic guidance that addresses the complexity of the threats posed by cross-border tunnels and ensure information is shared among the range of agencies involved. Once convened, this committee could also take the lead on other strategic counter-tunnel efforts, such as developing training.

Standards for Internal Control in the Federal Government calls for agencies to implement control activities, such as policies, to help achieve objectives and ensure accountability for stewardship of government resources. Additionally, these standards state that control activities should be documented in, for example, management directives, administrative policies, or operating manuals. Leadership is a key feature of successful interagency collaboration, and we have previously reported that it is often beneficial to designate one lead in order to centralize accountability and expedite decision making. We have also previously reported that establishing a focal point with sufficient time, responsibility, authority, and resources can help ensure successful implementation of complex interagency and intergovernmental undertakings. While developing standard operating procedures for detecting, identifying, and addressing cross-border tunnels may require some investment of resources, having such standardized procedures could reduce resource requirements over time by increasing the efficiency of counter-tunnel efforts through formalized information sharing and established protocols. Furthermore, according to the tunnel capability gap assessment, having standard operating procedures reduces the likelihood of gaps or conflict in roles and responsibilities among staff and minimizes the likelihood that information and partnerships may be lost during personnel changes.

DHS currently uses multiple existing technological solutions and is researching additional technologies to address smuggling by cross-border tunnels, ultralight aircraft, and the selected maritime methods.

Cross-border tunnels. DHS initiated a Cross-Border Tunnel Threat program to acquire tunnel detection technology in 2012 and is currently completing an analysis of alternatives to evaluate different technology options. CBP's preliminary concept of operations for tunnel detection technology states that detection capability is required in border environments that vary from urban, to coastal, to desert, to rugged, mountainous terrain. According to CBP officials, completion of the analysis of alternatives has been delayed as of November 2016 for a number of reasons, including delays in obtaining security clearances for the contractor. CBP officials are currently determining new acquisition time frames.
In the meantime, DHS is leveraging multiple existing tunnel technologies. DHS S&T is also in the process of developing additional technologies for predicting, detecting, tracking, and interdicting cross-border tunnels, but these projects are in the research and development phase. For example, DHS S&T is developing technology to determine how long ago a clandestine tunnel was built and to infer the types of contraband and number of people that may have gone through the tunnel over that period of time. Appendix II provides more details on potential tunnel technology projects being researched and developed.

Ultralight aircraft. AMO and Border Patrol are using existing radar and surveillance camera technology, including DOD and Federal Aviation Administration radars, the Tethered Aerostat Radar System, Remote Video Surveillance Systems, Integrated Fixed Towers, and Mobile Surveillance Capabilities, to detect and track ultralight aircraft. The Tethered Aerostat Radar System has been helpful in detecting some ultralight incursions, according to AMO and Border Patrol officials we interviewed.

Maritime vessels. Coast Guard and AMO use both marine and aerial assets equipped with sensors, such as cameras and forward-looking infrared radar, for surveillance and targeted interdictions of maritime vessels used for smuggling, including panga boats and recreational vessels. They also employ existing technologies, such as X-ray machines, to identify hidden compartments of maritime vessels. Additionally, DHS leverages existing DOD maritime technology, such as a system called Minotaur, which integrates and processes sensor data from multiple sources for surveillance aircraft. DHS S&T is in the process of developing additional technologies to be used for predicting, detecting, tracking, and interdicting illicit maritime vessels, but these technologies are not yet deployed. For example, DHS S&T is developing the Integrated Maritime Domain Enterprise and Coastal Surveillance System software to integrate multiple data systems and create new maritime security common operating data to share across DHS components. Appendix II provides more details on research and development technology projects.

CBP is considering various technological solutions to address ultralight aircraft but does not have a plan to assess how the solutions will meet its operational needs. After Border Patrol identified ultralight aircraft incursions as a high-priority threat, it requested assistance from CBP in September 2009 to identify a technology solution to aid in the detection and interdiction of ultralights. In response, CBP initiated the Ultralight Aircraft Detection acquisition program to acquire a technological solution. In 2011, CBP formalized the operational need for an Ultralight Aircraft Detection program in an Operational Needs Statement, in which it justified its need for the technology by referring the reader to capability gaps it had documented in a Mission Needs Statement for Small Dark Aircraft-Low Flying Aircraft Detection, an ongoing research and development project for technology to address the threats posed by ultralight aircraft and other low-flying aircraft. CBP deployed a limited number of Ultralight Aircraft Detection systems to detect ultralight aircraft along both the southern and northern borders. In June 2015, CBP, in accordance with recommendations from AMO and Border Patrol, ceased operational use of the Ultralight Aircraft Detection systems.
CBP officials explained that a quick-buy acquisition strategy and limited institutional technical knowledge contributed to poorly defined requirements and the acquisition of an Ultralight Aircraft Detection radar with limited capability. In 2015, CBP began a technology demonstration to assess the ability of DOD's Lightweight Surveillance Target Acquisition Radar systems to aid in the detection of low-flying aircraft along the southwest border. Once again, CBP used the Small Dark Aircraft-Low Flying Aircraft Detection Mission Needs Statement to describe the operational needs that Lightweight Surveillance Target Acquisition Radar was intended to address. The three ultralight aircraft technological solutions are further described in table 2.

Although CBP used the same mission needs statement for the three projects, CBP officials stated that CBP is demonstrating separate ultralight aircraft technological solutions to address different geographic areas. For example, CBP officials told us the Small Dark Aircraft-Low Flying Aircraft Detection technology demonstration is geared toward identifying a technical solution for addressing aerial smuggling on the northern border, which is more mountainous and remote, while the Ultralight Aircraft Detection program was meant to detect and track ultralight aircraft along the southern border, which has relatively flatter terrain. While we recognize that there may need to be multiple technical solutions to address the threats posed by ultralight aircraft and account for operational differences, such as terrain and manpower, CBP has not assessed and documented how the technological solutions will fully address Border Patrol's and AMO's operational needs to detect ultralight aircraft in all operational environments or how these solutions fit into broader air domain awareness efforts.

While the Ultralight Aircraft Detection program is no longer being pursued, there are a number of efforts that could be used to address ultralight aircraft smuggling. Both the Small Dark Aircraft-Low Flying Aircraft Detection and Lightweight Surveillance Target Acquisition Radar systems are still being demonstrated and considered as potential solutions to acquire to address ultralight aircraft. Furthermore, DHS S&T plans to extend the Small Dark Aircraft-Low Flying Aircraft Detection project to the southern border to help detect and track low-flying aircraft. Additionally, CBP intends to replace or modernize the Tethered Aerostat Radar System and states in its acquisition documentation that it is seeking alternative capabilities to improve target detection of low-flying aircraft, among other things. DHS has also identified small unmanned aerial systems as an emerging smuggling method, and CBP is starting to look for technological solutions to address this new threat that potentially could also detect ultralight aircraft.

We have previously identified the need for agencies to evaluate alternatives by considering the costs and benefits of different measures and to document management's decisions and the rationale for the investment of resources. Additionally, Standards for Internal Control in the Federal Government states that significant events—including decisions—need to be clearly documented and that the documentation should be readily available for examination.
There are multiple ongoing analytical efforts that CBP could leverage to analyze how the alternative technologies for detecting and tracking ultralight aircraft address operational needs in various environments, as well as the associated costs and benefits. For example, AMO and the Johns Hopkins University Applied Physics Laboratory are leading the development of a formal Capability Gap Assessment process to gather mission needs and elicit capability gaps in both the air and maritime domains from the field; AMO and CBP's Office of Acquisition are jointly developing a comprehensive capabilities analysis report for air domain awareness; and AMO and DHS S&T have a Value Focused Modeling project to estimate the return on investment of AMOC's existing radars and sensor technologies. CBP officials acknowledged the benefit of analyzing alternatives and plan to analyze the costs and benefits of some alternatives (e.g., existing Tethered Aerostat Radar Systems) as part of the process to determine whether or not to acquire the Small Dark Aircraft-Low Flying Aircraft Detection system. However, they did not say that this analysis would include all alternative approaches for addressing ultralight aircraft—such as modernized Tethered Aerostat Radar Systems, the Lightweight Surveillance Target Acquisition Radar, or any solutions DHS S&T is developing. By assessing and documenting how the ongoing technology demonstrations and any other potential technological solutions for detecting and tracking ultralight aircraft will fully address operational needs, CBP could be better positioned to use its resources more effectively and ensure the technological solutions selected will fully meet operational needs prior to making investment decisions. Documenting such assessments, consistent with standards for internal control, prior to making investment decisions would also enhance transparency by providing stakeholders visibility into the rationales for investment decisions over time.

DHS has established, or is in the process of establishing, high-level smuggling-related performance measures, and DHS components collect data regarding the prevalence of cross-border tunnel, ultralight aircraft, and selected maritime smuggling, but DHS has not assessed the effectiveness of its efforts specific to addressing these smuggling methods. With respect to high-level smuggling-related performance measures, DHS has, for example, established a performance measure through which it monitors and reports, in its Annual Performance Report, on the percentage of ICE's drug investigations resulting in the disruption or dismantlement of high-threat transnational drug trafficking organizations or individuals. This performance measure includes data on any investigations in which smugglers leveraged cross-border tunnels, ultralight aircraft, panga boats, or recreational vessels, but it does not separately assess investigative performance by these conveyances. Additionally, CBP AMO is in the process of developing high-level measures related to interdiction, investigation, and domain awareness. These high-level performance measures may include performance and capabilities relevant to ultralight aircraft incursions, such as the amount of radar coverage for detecting a range of aerial threats within a given volume of airspace along the southwest border, but will not be specific to ultralight aircraft, according to AMO officials.
Furthermore, DHS established performance measures through which Coast Guard reports on maritime migrant interdiction effectiveness and cocaine removal rates in the transit zone, but there is not a unified effort among all DHS components responsible for maritime security to jointly assess their performance in addressing panga boat and recreational vessel maritime smuggling at U.S. mainland borders. Additionally, DHS components collect various data regarding the prevalence of cross-border tunnel, ultralight aircraft, and selected maritime smuggling methods, but have not established performance measures and associated targets to assess the effectiveness of their efforts specific to addressing cross-border tunnels, ultralight aircraft, or non-traditional maritime threats, as described below.

Cross-border tunnels. Border Patrol's TPMO tracks and reports to Congress DHS's official tunnel data, such as the number, location, type, and dimensions of cross-border tunnels, and plans to use this information in its new threat assessment tool discussed earlier in this report. Similarly, the ICE HSI Unit Chief responsible for tunnel coordination and oversight stated that ICE also collects data on cross-border tunnels. However, Border Patrol and ICE HSI have not used tunnel-related information to assess their collective performance to, for example, help identify effective approaches to discover tunnels, including technologies, investigative approaches, or patrols.

Ultralight aircraft. CBP AMO collects various data regarding ultralight aircraft incursions and has developed performance measures specific to its efforts to address ultralight aircraft, but these measures do not have targets that would allow it to gauge progress toward goals. Specifically, CBP AMO collects ultralight data regarding the number and location of suspected ultralight aircraft incursions, how the ultralight aircraft was detected (i.e., by technology such as ground or aerostat radars, or reported by an individual who heard or saw the ultralight aircraft activity), whether a law enforcement response was coordinated, and whether there was an arrest or seizure. AMOC uses these data to track certain performance measures specific to its efforts to address ultralight aircraft, such as the percent of detected ultralight aircraft incursions where AMOC coordinated a law enforcement response and the percent of ultralight incursions that resulted in a violation (e.g., an arrest or seizure). For example, in fiscal year 2015, AMOC reported that it coordinated a law enforcement response for 94 percent of the suspected ultralight aircraft incursions and that 32 percent of the suspected ultralight aircraft incursions resulted in a violation. However, AMO and Border Patrol have not assessed their performance against targets in order to determine whether these rates represent a satisfactory level of performance, given the level of risk and investment.

Maritime vessels. Coast Guard and the RECOMs collect data on drug and migrant interdictions by fiscal year and vessel type, as well as the number of arrests and the outcomes of the cases prosecuted. However, data collection efforts are not consistent across RECOMs, and there are no established performance measures and targets to monitor, for example, how maritime smuggling events are identified and detected, the number or percent of detected smuggling events that resulted in interdictions, or the number of interdictions that resulted in prosecutions.
DHS component officials provided a variety of reasons why they have not established and monitored performance measures and targets to assess the effectiveness of DHS efforts to address cross-border tunnel, ultralight aircraft, and selected maritime smuggling, such as limited resources, the difficulty of measuring unknown information, the limitations of measures focused on specific smuggling methods, and the difficulty of jointly establishing and monitoring performance among DHS components. For example, according to the Border Patrol TPMO Assistant Chief, the office has not established performance measures due to its previously discussed limited resources and the difficulty of fully measuring the effectiveness of counter-tunnel efforts. In particular, Border Patrol officials told us that it is difficult to measure the performance of activities that detect and deter threats to the United States where the total prevalence of smuggling is unknown, as is the case with cross-border tunnels. Furthermore, ICE HSI and CBP AMO officials reported that performance measures that focus on a particular smuggling method or conveyance would not be the best measure of their efforts to combat smuggling. For example, AMO officials reported that higher-level measures linked to AMO's goals and objectives can better address the full range of smuggling methods. Additionally, CBP and ICE officials told us that jointly establishing and monitoring performance measures and targets is difficult due to the components' different missions. For example, a performance measure relevant to CBP, such as the number of miles of border under surveillance for tunnels, would not be relevant to ICE.

We recognize the challenges associated with resource constraints and establishing performance measures, as well as the value of higher-level performance measures; however, agency resources are being invested to address cross-border tunnel, ultralight aircraft, and selected maritime smuggling methods, and without some type of performance measurement, DHS does not have reasonable assurance that efforts to address these selected smuggling methods are effective. DHS already collects data on these selected smuggling threats that could be leveraged to mitigate the resources needed to measure performance. Additionally, DHS could leverage other DHS efforts as avenues to establish performance measures. For example, DHS could utilize the planned Maritime Security Coordination Working Group to establish performance measures and targets related to assessing the effectiveness of efforts to address panga boats and recreational vessels. Further, even if it is not possible to collect certain data, such as the full universe of cross-border tunnels, GAO, the Office of Management and Budget, and the Performance Improvement Council's Law Enforcement Measures Working Group—an interagency effort to address issues related to law enforcement performance measures chaired by the Office of Management and Budget—have previously reported that agencies can use proxy measures to determine how well a program is functioning. These proxy measures should be closely tied to the desired outcome—preventing smuggling through cross-border tunnels—and could include measures such as the percent of tunnels discovered prior to completion or the percent of tunnels discovered prior to being operational, as illustrated in the sketch below.
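To show how such a proxy measure could be computed and monitored against a target, the following is a minimal sketch under stated assumptions: the record format, the operational-status flag, and the 50 percent target are hypothetical illustrations, not actual DHS data or an established measure.

```python
from dataclasses import dataclass

@dataclass
class TunnelDiscovery:
    """One discovered cross-border tunnel (hypothetical record format)."""
    fiscal_year: int
    operational_when_found: bool  # had the tunnel been completed and used?

def pct_found_before_operational(records: list) -> float:
    """Proxy measure: percent of discovered tunnels found before they
    became operational. A proxy is used because the full universe of
    tunnels, including undiscovered ones, is unknowable."""
    if not records:
        return 0.0
    early = sum(1 for r in records if not r.operational_when_found)
    return 100 * early / len(records)

# Hypothetical discovery records and target, for illustration only.
fy_records = [
    TunnelDiscovery(2016, operational_when_found=False),
    TunnelDiscovery(2016, operational_when_found=True),
    TunnelDiscovery(2016, operational_when_found=False),
]
TARGET = 50.0  # hypothetical target, in percent

actual = pct_found_before_operational(fy_records)
status = "met" if actual >= TARGET else "not met"
print(f"FY2016: {actual:.0f}% of tunnels found before becoming "
      f"operational; target of {TARGET:.0f}% {status}")
```

Comparing each year's value against an agreed target is what turns the data components already collect into the kind of performance monitoring described in the internal control standards discussed below.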
In addition, by assessing performance for each selected smuggling method, DHS could obtain valuable information on the relative risks these methods pose compared with other methods, capture the overall smuggling threat picture, and better inform resource allocation decisions for addressing smuggling. Moreover, assessing performance information across components could help components determine cost-effective means for improving performance. For example, if CBP established and monitored performance measures related to subterranean domain awareness, both ICE HSI and Border Patrol could use this information to inform investigations and patrol operations.

Standards for Internal Control in the Federal Government states that regular monitoring is needed to assess the quality of performance over time and that information should be communicated to management to achieve its objectives and ensure any issues are resolved. We have previously reported that performance measures are important management tools that can be used by individual programs or initiatives and that performance measures should have quantifiable, numerical targets or other measurable values to allow for easier comparison with actual performance. Furthermore, interagency collaboration best practices call for federal agencies engaged in collaborative efforts to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. By working together to establish performance measures and regularly monitor performance against targets, managers could obtain valuable information on successful approaches and areas that could be improved to help ensure that both technology investments and operational responses to address cross-border tunnel, ultralight aircraft, and selected maritime smuggling are effective.

As transnational criminal organizations have adapted their techniques to smuggle drugs and humans through cross-border tunnels, ultralight aircraft, panga boats, and recreational vessels to evade detection, it is vital that DHS respond accordingly in its border security enforcement efforts. DHS has taken steps to assess and address the risk posed by these smuggling methods, but opportunities exist to ensure these efforts are effective and that managers and stakeholders have the information needed to make decisions. In particular, by standardizing vessel definitions departmentwide in the DHS Lexicon for use in future threat assessments, DHS would be better able to develop a more comprehensive picture of the threats posed by panga boats and recreational vessels across the nation and to leverage its maritime threat assessments in decision making. Additionally, convening the CBP-ICE committee to establish standard operating procedures for addressing cross-border tunnels could help provide strategic guidance that addresses the complexity of the threats posed by cross-border tunnels and ensure information is shared among the range of agencies involved. DHS has also invested in technology to help detect and track subterranean, aerial, and maritime smuggling, including various technologies to help detect and track ultralight aircraft; however, CBP has not assessed and documented how all of the alternative ultralight aircraft technical solutions being considered will fully address operational requirements or the costs and benefits associated with different approaches.
This type of analysis could help better position CBP to use its resources more effectively and ensure that the technology solutions selected will fully meet operational needs prior to making investment decisions. Furthermore, DHS has not assessed its performance in addressing any of the selected smuggling methods. By establishing performance measures and regularly monitoring performance against targets, managers could obtain valuable information on successful approaches and areas that could be improved to help ensure that both technology investments and operational responses to address smuggling through cross-border tunnels, ultralight aircraft, panga boats, and recreational vessels are effective.

To help ensure that efforts to address smuggling through cross-border tunnels, ultralight aircraft, panga boats, and recreational vessels are effective and that managers and stakeholders have the information needed to make decisions, we recommend the Secretary of Homeland Security take the following six actions:

1. develop standardized, departmentwide definitions for maritime vessels used for smuggling in the DHS Lexicon;

2. direct the CBP-ICE tunnel committee to convene and establish standard operating procedures for addressing cross-border tunnels, including procedures for sharing information;

3. direct the Commissioner of CBP to assess and document how the alternative technological solutions being considered will fully meet operational needs related to ultralight aircraft;

4. direct the Commissioner of CBP and the Director of ICE to jointly establish and monitor performance measures and targets related to cross-border tunnels;

5. direct the Commissioner of CBP to establish and monitor performance targets related to ultralight aircraft; and

6. direct the Commandant of the Coast Guard, the Commissioner of CBP, and the Director of ICE to establish and monitor RECOM performance measures and targets related to panga boat and recreational vessel smuggling.

We provided a draft of this report to DHS and DOD for their review and comment. DHS provided written comments, which are summarized below and reproduced in full in appendix III; DOD did not provide written comments. DHS concurred with four of the six recommendations in the report and described actions underway or planned to address them. DHS did not concur with two recommendations in the report. DHS also provided technical comments, which we incorporated as appropriate.

With regard to the first recommendation, that DHS develop definitions for maritime vessels used for smuggling in the DHS Lexicon, DHS concurred and described planned actions to address the recommendation. According to DHS, in March 2017, the DHS Lexicon Program created a terminology working group composed of Coast Guard, CBP, ICE, and other DHS maritime subject matter experts to address the terminology and definition issues identified in our report. The terminology and definitions would then be published for use across the department. If implemented effectively, these actions should address the intent of the recommendation.

With regard to the second recommendation, that the CBP-ICE tunnel committee convene and establish standard operating procedures for addressing cross-border tunnels, DHS did not concur. DHS noted that there are memoranda of understanding between CBP and ICE outlining how they work together and share information, as well as component-specific procedures in place at the local sector level, that DHS believes constitute the procedures we recommended.
In addition, DHS cited the establishment of multi-agency BEST Tunnel Task Forces in the areas at the highest risk for cross-border tunnel activity as helping to ensure collaboration and information sharing. However, CBP and ICE agreed that there may be benefits from strengthening existing operating procedures and stated that they plan to review, revise, and potentially consolidate procedures as they deem appropriate. We continue to believe that establishing national-level, joint CBP-ICE standard operating procedures for addressing cross-border tunnels could help ensure that information is shared by CBP and ICE across all locations and minimize the likelihood that information and partnerships are lost during personnel changes. As discussed in this report, the memoranda of understanding between CBP and ICE do not speak specifically to counter-tunnel coordination procedures. Additionally, as we reported, both CBP and ICE have completed studies that identified the absence of standard operating procedures as a challenge and recommended the establishment of departmental standard operating procedures. During the course of our audit work, we further found that not all officials addressing cross-border tunnels were aware of relevant information systems or the TPMO, and ICE HSI and Border Patrol officials from three different sectors indicated additional guidance, training, and strategic-level insights would be helpful. While we agree that strengthening existing operating procedures could be helpful, not all sectors or relevant officials may benefit without the establishment of standard operating procedures that apply nationwide. For example, as noted in our report, the BEST Tunnel Task Forces do not currently have documented standard operating procedures for addressing tunnels. Establishing departmental standard operating procedures for tunnels could help ensure that all relevant ICE and CBP officials have guidance on how to address tunnels. With regard to the third recommendation, that CBP assess and document how alternative technological solutions being considered will fully meet operational needs related to ultralight aircraft, DHS concurred. DHS stated that AMO is developing a Capability Analysis Report, a Mission Need Statement, and a Concept of Operations for air domain awareness that will result in validated requirements for ultralight aircraft, among other threats. In addition, DHS stated that subsequent to these efforts, it will prepare operational requirements documents that will specify how technological solutions will meet the requirements. If implemented effectively, these actions should address the intent of the recommendation. With regard to the fourth recommendation, that CBP and ICE jointly establish and monitor performance measures and targets related to cross-border tunnels, DHS concurred and stated that CBP and ICE will work together to harmonize performance data collection efforts and develop performance measures and targets. However, DHS stated that it would be premature to establish measures and targets prior to making tunnel detection technology acquisition and deployment decisions, and therefore will wait to develop them until it has addressed its technology challenges. We believe that DHS could benefit from establishing some performance measures and targets prior to making technology decisions. 
As discussed in this report, DHS initiated a program to acquire tunnel detection technology in 2012 and has under way an analysis of alternatives to evaluate different technology options, but this analysis has been delayed and CBP has not yet determined new time frames. Given that CBP has been working to acquire additional tunnel detection technology for several years and time frames for its acquisition have not been determined, we believe that establishing some measures and targets in the interim could help inform DHS’s current efforts to address cross-border tunnels and provide insights relevant to its tunnel detection technology acquisition decisions. Once fully implemented, DHS’s planned actions should address the intent of the recommendation. With regard to the fifth recommendation, that CBP establish and monitor performance targets related to ultralight aircraft, DHS concurred and stated that CBP’s AMO and Border Patrol are developing a joint performance measure and targets for interdicting ultralight aircraft. According to DHS, AMO and Border Patrol plan to document how the measure will be defined and validate the data reporting process. If implemented effectively, these actions should address the intent of the recommendation. With regard to the sixth recommendation, that Coast Guard, CBP, and ICE establish and monitor RECOM performance measures and targets related to panga boat and recreational vessel smuggling, DHS did not concur. DHS stated that it believes that by establishing common terminology to address our first recommendation, the RECOMs will have more reliable, usable analyses to inform their maritime interdiction efforts. However, it did not believe that performance measures and targets related to smuggling by panga boats would provide the most useful strategic assessment of operations to prevent all illicit trafficking, regardless of area of operations or mode of transportation. DHS also cited the recent creation of the DHS Office of Policy, Strategy, and Plans, which is to work with Coast Guard, CBP, ICE, and other components and offices to better evaluate the effectiveness of all operations that work to prevent the illegal entry of goods and people into the country, as appropriate. Additionally, DHS stated that it will continue to work with the Office of National Drug Control Policy to create a set of enterprise-wide, strategic-level measures of performance for drug supply reduction activities. DHS requested that we consider this recommendation resolved and closed. We continue to believe that by jointly establishing performance measures and targets related to panga boat and recreational vessel smuggling, Coast Guard, CBP, and ICE could track valuable performance information, such as how panga boat and recreational vessel smuggling events are identified and detected, or the percent of these detected smuggling events that result in interdictions, to help inform their collaborative efforts to address maritime smuggling. We agree that creating common terminology is a positive step; however, without performance measures or targets, DHS does not have reasonable assurance that its collaborative efforts and investments to counter cross-border smuggling by panga boats and recreational vessels are effective.
Similarly, we recognize the value of high-level strategic performance measures; however, these types of measures may not provide sufficiently detailed performance information to allow DHS to identify successful approaches to addressing smuggling by panga boats and recreational vessels and areas for improvement. Further, performance measures and targets related to panga boat and recreational vessel smuggling could, in turn, better position DHS to understand the overall smuggling threat picture and better inform resource allocation decisions for addressing smuggling and drug supply reduction activities. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report addresses the following questions:

1. What do Department of Homeland Security (DHS) data show about the prevalence of smuggling by cross-border tunnel, ultralight aircraft, and selected maritime methods from fiscal years 2011 through 2016?
2. To what extent has DHS assessed the risks from smuggling by these methods?
3. How has DHS addressed smuggling by these methods?
4. To what extent has DHS assessed the results of its efforts to address smuggling by these methods?

To address these questions, we focused our review on smuggling across U.S. mainland borders, including coastal borders, and we selected the following smuggling methods: cross-border tunnels, ultralight aircraft, panga boats, recreational maritime vessels, and self-propelled semi-submersible and fully submersible vessels. We selected these smuggling methods to include only those that would occur between ports of entry through means other than overland (given our focus on subterranean, aerial, and maritime smuggling); have been identified in strategy documents or by senior DHS officials and DHS officials with whom we met as a challenge or risk; and were of a magnitude such that DHS had taken steps to address them. We analyzed DHS policies, procedures, reports, and data regarding the selected smuggling methods from fiscal years 2011 through 2016. We also conducted site visits to San Diego, El Centro, and Riverside, California; Nogales and Yuma, Arizona; and Miami and Key West, Florida. During these visits, we observed DHS approaches to addressing the selected smuggling methods and interviewed cognizant officials from U.S. Coast Guard (Coast Guard); U.S. Immigration and Customs Enforcement (ICE) Homeland Security Investigations (HSI); and U.S. Customs and Border Protection’s (CBP) U.S. Border Patrol (Border Patrol) and Air and Marine Operations (AMO) about their efforts. We selected these locations based upon a combination of factors, including the past detected use of the selected smuggling methods and the presence of coordinated DHS efforts to counter them in these areas. The information gathered from our site visits is not generalizable to other locations but provides insights into DHS’s responses to these incursions and efforts to use risk and performance information to stop future smuggling incidents using these methods.
Additionally, we interviewed headquarters officials from the Coast Guard; CBP’s Border Patrol, AMO, and Office of Acquisition, the office responsible for CBP’s acquisition of products and services; ICE HSI; and DHS’s Science and Technology Directorate (S&T), the office responsible for leading research and development efforts across the department, to obtain information and perspectives on their efforts to assess and address threats posed by the selected smuggling methods. To determine what DHS data show about the prevalence of smuggling by cross-border tunnel, ultralight aircraft, and the selected maritime methods, we obtained and analyzed DHS data from fiscal years 2011 through 2016. Due to the illicit nature of smuggling, there are limitations to identifying the total number of smuggling events. Therefore, in this report we discuss the number of known smuggling events, such as the numbers of discovered cross-border tunnels and detected ultralight aircraft and maritime drug and migrant smuggling events. We assessed the reliability of these data by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the systems that produced them, and (3) interviewing agency officials knowledgeable about the data. Additionally, where possible, we compared the data to similar data DHS previously reported in such products as a congressional report on cross-border tunnels and an AMO management report that included data on aerial incursions. We found the data sufficiently reliable for the purposes of reporting trends in the selected smuggling methods from fiscal years 2011 through 2016. To determine the extent to which DHS has assessed the risk from smuggling by cross-border tunnel, ultralight aircraft, and the selected maritime methods, we analyzed Coast Guard, AMO, Border Patrol, and ICE HSI risk, threat, and intelligence assessments. We also reviewed CBP, DHS S&T, and Coast Guard documentation on capability gaps, such as capability assessments and requirements documents. We evaluated these efforts against GAO’s risk management framework and leading practices for interagency collaboration. To determine how DHS has addressed smuggling by the selected methods, we analyzed DHS policies, procedures, training documents, and documentation on developing and acquiring new technology, such as project plans, mission needs statements, and concepts of operations. We also interviewed officials from Border Patrol, AMO, ICE HSI, and Coast Guard to determine the extent to which they have established mechanisms to coordinate assets and operations related to smuggling by cross-border tunnels, ultralight aircraft, and the selected maritime conveyances, and associated challenges. Further, we met with Department of Defense (DOD) officials from the offices and organizations that have been involved in DHS efforts to address the selected smuggling methods to determine the extent to which DHS has coordinated with DOD to leverage any efforts or technologies. We assessed these efforts against GAO’s leading practices for interagency collaboration and risk management framework. To examine the extent to which DHS has assessed the results of its efforts to address the selected smuggling methods, we analyzed DHS and component performance reports, threat assessments, and strategic planning documents to determine what measures are in place to track the effectiveness of DHS’s counter cross-border tunnel, ultralight aircraft, and maritime smuggling efforts.
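For illustration only, the electronic testing for obvious errors in accuracy and completeness described earlier in this appendix could resemble the following minimal sketch. The field names, value ranges, and records are entirely hypothetical and are not drawn from DHS systems; the sketch simply shows the kinds of duplicate, missing-value, and out-of-range checks such testing might involve.

```python
# Minimal sketch of electronic testing for accuracy and completeness.
# All field names and records are hypothetical, not actual DHS data.
from datetime import date

records = [
    {"event_id": "T-001", "fiscal_year": 2012, "method": "tunnel", "date": date(2012, 3, 14)},
    {"event_id": "T-001", "fiscal_year": 2012, "method": "tunnel", "date": date(2012, 3, 14)},  # duplicate
    {"event_id": "U-017", "fiscal_year": 2020, "method": "ultralight", "date": date(2020, 1, 2)},  # out of range
    {"event_id": "P-204", "fiscal_year": 2015, "method": None, "date": date(2015, 7, 9)},  # missing field
]

def check_records(records, fy_range=(2011, 2016)):
    """Flag duplicates, missing values, and fiscal years outside the review period."""
    issues = []
    seen_ids = set()
    for r in records:
        if r["event_id"] in seen_ids:
            issues.append((r["event_id"], "duplicate event_id"))
        seen_ids.add(r["event_id"])
        if any(v is None for v in r.values()):
            issues.append((r["event_id"], "missing value"))
        if not fy_range[0] <= r["fiscal_year"] <= fy_range[1]:
            issues.append((r["event_id"], "fiscal year outside review period"))
    return issues

for event_id, problem in check_records(records):
    print(f"{event_id}: {problem}")
```

Records flagged by checks of this kind would then be followed up with agency officials knowledgeable about the data, consistent with the reliability assessment approach described above.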
We also interviewed DHS officials to determine how they use performance information to inform decision making. We assessed DHS’s performance monitoring efforts against Standards for Internal Control in the Federal Government and performance assessment best practices. We conducted this performance audit from November 2015 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A number of technology systems are being researched and developed by the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) to address selected subterranean, aerial, and maritime smuggling threats. Below is an overview of these ongoing DHS S&T projects. In addition to the contact named above, Taylor Matheson (Assistant Director), David Alexander, Chuck Bausell, Jr., Dominick Dale, Wendy Dye, Megan Erwin, Eric Hauswirth, Kelsey Hawley, Susan Hsu, Richard Hung, Heather May, and Sasan J. “Jon” Najmi made key contributions to this report.
As DHS has increased the security of overland smuggling routes, transnational criminal organizations have adapted their techniques to smuggle drugs and humans through alternative methods. These methods include cross-border tunnels, ultralight aircraft, panga boats, and recreational maritime vessels. While these methods account for a small proportion of known smuggling, they can be used to transport significant quantities of drugs or for terrorist activity. GAO was asked to review DHS's efforts to address subterranean, aerial, and maritime smuggling. This report addresses, among other things, (1) the known prevalence of the aforementioned smuggling methods, (2) efforts to address them, and (3) efforts to assess the results of activities to counter them. GAO analyzed relevant procedures, reports, and data for fiscal years 2011 through 2016. GAO also interviewed DHS officials and conducted site visits to locations in California, Arizona, and Florida, chosen based upon past detection of smuggling by the selected methods, among other things. The information from the site visits is not generalizable, but provided valuable insights. GAO's analysis of Department of Homeland Security (DHS) data showed that there were 67 discovered cross-border tunnels, 534 detected ultralight aircraft incursions, and 309 detected drug smuggling incidents involving panga boats (a type of fishing vessel) and recreational vessels along U.S. mainland borders from fiscal years 2011 through 2016. The number of known smuggling events involving these methods generally declined over this period, but they remain threats. DHS has established various coordination mechanisms and invested in technology to address selected smuggling methods in the subterranean, aerial, and maritime domains. For example, DHS established interagency task forces to investigate cross-border tunnels. However, DHS has not established comprehensive standard operating procedures for addressing cross-border tunnels, and GAO found that relevant officials were not aware of all DHS systems or offices with tunnel information. By establishing procedures for addressing cross-border tunnels, DHS could provide strategic guidance and facilitate information sharing departmentwide, consistent with standards for internal control. DHS has also invested or plans to invest in at least five technology projects to help detect and track ultralight aircraft. However, DHS has not assessed and documented how all of the alternative ultralight aircraft technical solutions it is considering will fully address operational requirements or the costs and benefits associated with these different solutions. This type of analysis could help better position DHS to use its resources effectively and ensure that operational needs are met, consistent with risk management best practices. DHS has established high-level smuggling performance measures and collects data on smuggling by tunnels, ultralight aircraft, panga boats, and recreational vessels; however, DHS has not assessed its efforts specific to addressing these smuggling methods to, for example, compare the percent of detected panga boat and recreational vessel smuggling events that are interdicted against targeted performance levels.
By establishing measures and regularly monitoring performance against targets, managers could obtain valuable information on successful approaches and areas that could be improved to help ensure that technology investments and operational responses to address these smuggling methods are effective, consistent with standards for internal control. This is a public version of a For Official Use Only—Law Enforcement Sensitive report that GAO issued in February 2017. Information DHS deemed For Official Use Only—Law Enforcement Sensitive has been redacted. GAO is making six recommendations, including that DHS establish procedures for addressing tunnels, assess ultralight aircraft technology, and establish performance measures and targets. DHS concurred with four recommendations and disagreed with those to establish tunnel procedures and maritime performance measures, citing other efforts. GAO believes the recommendations remain valid, as discussed in the report.
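For illustration only, the measure-versus-target monitoring described above could resemble the following minimal sketch. The event counts and the target value are hypothetical, not actual DHS figures; the sketch simply computes the percent of detected smuggling events that were interdicted each fiscal year and compares it with a stated target.

```python
# Minimal sketch of monitoring a performance measure against a target.
# Counts and the target are hypothetical, not actual DHS figures.
detected_by_year = {2014: 60, 2015: 48, 2016: 41}      # detected smuggling events
interdicted_by_year = {2014: 33, 2015: 30, 2016: 29}   # of those, events interdicted

TARGET_RATE = 0.65  # hypothetical target: interdict 65 percent of detected events

for year in sorted(detected_by_year):
    rate = interdicted_by_year[year] / detected_by_year[year]
    status = "met" if rate >= TARGET_RATE else "not met"
    print(f"FY{year}: interdiction rate {rate:.0%} (target {TARGET_RATE:.0%} {status})")
```

A measure tracked in this way would give managers the kind of year-over-year comparison against targets that the report recommends for identifying successful approaches and areas for improvement.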
The following information provides details about our agents’ experiences and observations entering the United States from Mexico at border crossings in California and Texas and at two crossings in Arizona. California: On February 9, 2006, two agents entered California from Mexico on foot. One of the agents presented as identification a counterfeit West Virginia driver’s license and the other presented a counterfeit Virginia driver’s license. The CBP officers on duty asked both agents if they were U.S. citizens and both responded that they were. The officers also asked the agents if they were bringing anything into the United States from Mexico and both answered that they were not. The CBP officers did not request any other documents to prove citizenship and allowed both agents to enter the United States. Texas: On February 23, 2006, two agents crossed the border from Mexico into Texas on foot. When the first agent arrived at the checkpoint, a CBP officer asked him for his citizenship information; the agent responded that he was from the United States. The officer also asked if the agent had brought back anything from Mexico. The agent responded that he had not, and the officer told him that he could enter the United States. At this point, the agent asked the CBP officer if he wished to see any identification. The officer replied “OK, that would be good.” The agent began to remove his counterfeit Virginia driver’s license from his wallet and the inspector said “That’s fine, you can go.” The CBP officer never looked at the driver’s license. When the second agent reached the checkpoint, another CBP officer asked him for his citizenship information and he responded that he was from the United States. The CBP officer asked the agent if he had purchased anything in Mexico and the agent replied that he had not. He was then asked to show some form of identification and he produced a counterfeit West Virginia driver’s license. The CBP inspector briefly looked at the driver’s license and then told the agent he could enter the United States. Arizona, first crossing: On March 14, 2006, two agents arrived at the border crossing between Mexico and Arizona in a rental vehicle. Upon request, the agents gave the CBP officer a counterfeit West Virginia driver’s license and a counterfeit Virginia driver’s license as identification. As the CBP officer reviewed the licenses, he asked the agents if they were U.S. citizens and they responded that they were. The officer also asked if the agents had purchased anything in Mexico and they said they had not. The CBP officer then requested that the agents open the trunk of their vehicle. The agents heard the inspector tap on several parts of the side of the vehicle, first with his hand and then with what appeared to be a wand. The officer closed the trunk of the vehicle, returned the agents’ driver’s licenses, and allowed them to enter the United States. Arizona, second crossing: On March 15, 2006, two agents again entered Arizona from Mexico on foot at a different location than the previous day. One of the agents carried a counterfeit West Virginia driver’s license and a counterfeit West Virginia birth certificate. The other carried a counterfeit Virginia driver’s license and a counterfeit New York birth certificate. As the agents were about to cross the border, another agent who had crossed the border earlier using his genuine identification phoned to inform them that the CBP officer on duty had swiped his Virginia driver’s license through a scanner.
Because the counterfeit driver’s licenses the agents were carrying had fake magnetic strips, the agents decided that in the event they were questioned about their licenses, they would tell the CBP officers that the strips had become demagnetized. When the agents entered the checkpoint area, they saw that they were the only people crossing the border at that time. The agents observed three CBP officers on duty; one was manning the checkpoint and the other two were standing a short distance away. The officer manning the checkpoint was sitting at a cubicle with a computer and what appeared to be a card scanner. The agents engaged this officer in conversation to distract him from scanning their driver’s licenses. After a few moments, the CBP officer asked the agents if they were both U.S. citizens and they said that they were. He then asked if they had purchased anything in Mexico and they said no. The officer then told them to have a nice day and allowed them to enter the United States. He never asked for any form of identification. The following information provides details about our agents’ experiences and observations entering the United States from Canada at Michigan, New York, Idaho, and Washington border crossings. Michigan: On May 1, 2006, two agents drove in a rental vehicle to a border crossing in Michigan. When asked for identification by the CBP officer on duty, the agents presented a counterfeit West Virginia driver’s license and a counterfeit Virginia driver’s license. As the CBP officer examined the licenses, he asked the agents if they were U.S. citizens and they responded that they were. The CBP officer then asked if the agents had birth certificates. One agent presented a counterfeit New York birth certificate and the other presented a counterfeit West Virginia birth certificate. The agents observed that the CBP officer checked the birth certificates against the driver’s licenses to see if the dates and names matched. The CBP officer then asked the agents if they had purchased anything in Canada and they responded that they had not. The officer also asked what the agents were doing in Canada and they responded that they had been visiting a casino. The CBP officer then returned the agents’ documentation and allowed them to enter the United States. New York, first crossing: On May 3, 2006, two agents entered New York in a rental vehicle from Canada. The agents handed the CBP officer on duty counterfeit driver’s licenses from West Virginia and Virginia. The CBP officer asked for the agents’ country of citizenship and the agents responded that they were from the United States. The CBP officer also asked the agents why they had visited Canada. The agents responded that they had been gambling in the casinos. The CBP officer told the agents to have a nice day and allowed them to enter the United States. New York, second crossing: On the same date, the same two agents crossed back into Canada and re-entered New York at a different location. The agents handed the CBP officer at the checkpoint the same two counterfeit driver’s licenses from West Virginia and Virginia. The officer asked the agents what they were doing in Canada and they replied that they had been gambling at a casino. The officer then asked the agents how much money they were bringing back into the country and they told him they had approximately $325, combined. The officer next asked the agent driving the car to step out of the vehicle and open the trunk.
As the agent complied, he noticed that the officer placed the two driver’s licenses on the counter in his booth. The officer asked the agent whose car they were driving and the agent told him that it was a rental. A second officer then asked the agent to stand away from the vehicle and take his hands out of his pockets. The first officer inspected the trunk of the vehicle, which was empty. At this point, the officer handed back the two driver’s licenses and told the agents to proceed into the United States. Idaho: On May 23, 2006, two agents drove in a rental vehicle to a border crossing in Idaho. The agents handed the CBP officer on duty a counterfeit West Virginia driver’s license and a counterfeit Virginia driver’s license. As the CBP officer examined the licenses, he asked the agents if they were U.S. citizens and they responded that they were. The CBP officer then asked if the agents had birth certificates. One agent presented a counterfeit New York birth certificate and the other presented a counterfeit West Virginia birth certificate. The agents observed that the CBP officer checked the birth certificates against the driver’s licenses to see if the dates and names matched. The officer also asked what the agents were doing in Canada and they responded that they had been sightseeing. The CBP officer then returned the agents’ documentation and allowed them to enter the United States. Washington: On May 24, 2006, two agents drove in a rental vehicle to a border crossing checkpoint in Washington. When the agents arrived at the border, they noticed that no one was at the checkpoint booth at the side of the road. Shortly thereafter, a CBP officer emerged from a building near the checkpoint booth and asked the agents to state their nationality. The agents responded that they were Americans. The CBP officer next asked the agents where they were born, and they responded New York and West Virginia. The agents then handed the CBP officer their counterfeit West Virginia and Virginia driver’s licenses. The officer looked at the licenses briefly and asked the agents why they had visited Canada. The agents responded that they had a day off from a conference that they were attending in Washington and decided to do some sightseeing. The CBP officer returned the agents’ identification and allowed them to enter the United States. We conducted a corrective action briefing with officials from CBP on June 9, 2006, about the results of our investigation. CBP agreed its officers are not able to identify all forms of counterfeit identification presented at land border crossings. CBP officials also stated that they fully support the newly promulgated Western Hemisphere Travel Initiative, which will require all travelers, including U.S. citizens, within the Western Hemisphere to have a passport or other secure identification deemed sufficient by the Secretary of Homeland Security to enter or reenter the United States. The current timeline proposes that the new requirements will apply to all land border crossings beginning on December 31, 2007. The proposed timeline was developed pursuant to the Intelligence Reform and Terrorism Prevention Act of 2004.
The act requires the Secretary of Homeland Security, in consultation with the Secretary of State, to implement a plan no later than January 1, 2008, to strengthen the border screening process through the use of passports and other secure documentation in recognition of the fact that additional safeguards are needed to ensure that terrorists cannot enter the United States. However, the Senate recently passed a bill to extend the implementation deadline from January 1, 2008, to June 1, 2009. The Senate bill would also authorize the Secretary of State, in consultation with the Secretary of Homeland Security, to develop a travel document known as a Passport Card to facilitate travel of U.S. citizens to Canada, Mexico, the countries located in the Caribbean, and Bermuda. We did not assess whether this initiative would be fully implemented by either the January 2008 or June 2009 deadline or whether it would be effective in preventing terrorists from entering the United States. The results of our current work indicate that (1) CBP officers at the nine land border crossings tested did not detect the counterfeit identification we used and (2) people who enter the United States via land crossings are not always asked to present identification. Furthermore, our periodic tests since 2002 clearly show that CBP officers are unable to effectively identify counterfeit driver’s licenses, birth certificates, and other documents. This vulnerability potentially allows terrorists or others involved in criminal activity to pass freely into the United States from Canada or Mexico with little or no chance of being detected. It will be critical that the new initiative requiring travelers within the Western Hemisphere to present passports or other accepted documents to enter the United States address the vulnerabilities shown by our work. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Currently, U.S. citizens are not required to present a passport when entering the United States from countries in the Western Hemisphere. However, U.S. citizens are required to establish citizenship to a CBP officer's satisfaction. On its Web site, U.S. Customs and Border Protection (CBP) advises U.S. citizens that an officer may ask for identification documents as proof of citizenship, including birth certificates or baptismal records and a photo identification document. In 2003, we testified that CBP officers were not readily capable of identifying whether individuals seeking entry into the United States were using counterfeit identification to prove citizenship. Specifically, our agents were able to easily enter the United States from Canada and Mexico using fictitious names and counterfeit driver's licenses and birth certificates. Later in 2003 and 2004, we continued to be able to successfully enter the United States using counterfeit identification at land border crossings, but were denied entry on one occasion. Because of concerns that these weaknesses could be exploited by terrorists or others involved in criminal activity, Congress requested that we assess the current status of security at the nation's borders. Specifically, Congress requested that we conduct a follow-up investigation to determine whether the vulnerabilities exposed in our prior work continue to exist. Agents successfully entered the United States using fictitious driver's licenses and other bogus documentation through nine land ports of entry on the northern and southern borders. CBP officers never questioned the authenticity of the counterfeit documents presented at any of the nine crossings. On three occasions--in California, Texas, and Arizona--agents crossed the border on foot. At two of these locations--Texas and Arizona--CBP allowed the agents entry into the United States without asking for or inspecting any identification documents. After completing our investigation, we briefed officials from CBP on June 9, 2006. CBP agreed that its officers are not able to identify all forms of counterfeit identification presented at land border crossings and fully supports a new initiative that will require all travelers to present a passport before entering the United States. We did not assess whether this initiative would be effective in preventing terrorists from entering the United States or whether it would fully address the vulnerabilities shown by our work.
The NAS consists of a wide assortment of technologies operated by FAA, other federal agencies, such as DOD, and industry participants such as airlines. Technology transfer may be defined as the process by which technology or knowledge developed by one entity is applied and used by another. Technology transfer may involve the transfer of equipment, research, architecture, knowledge, procedures, or software code, or involve data integration. Technology transfer also encompasses the process by which research is transitioned from one entity and then developed and matured by another through testing and additional applied research until ultimately deployed. This report focuses on the mechanisms used to transfer research and technology between partner agencies and private industry and FAA, which can include the transfer of FAA and partner agency research to the private sector to develop a technology, or the transfer of research or technology developed by partner agencies or the private sector to FAA. Since the origination of the NextGen effort, several mechanisms intended to facilitate coordination and technology transfer among FAA and partner agencies have been established. Congress created JPDO within FAA as the primary mechanism for interagency and private-sector coordination for NextGen. JPDO’s enabling legislation states that JPDO’s responsibility with regard to technology transfer is “facilitating the transfer of technology from research programs such as the National Aeronautics and Space Administration program and the Department of Defense Advanced Research Projects Agency program to federal agencies with operational responsibilities and to the private sector.” JPDO developed an Integrated Work Plan that recommends primary and support responsibilities to partner agencies for research and development of various technological aspects of NextGen. (See fig. 1.) JPDO is also responsible for overseeing and coordinating NextGen research activities within the federal government and ensuring that new technologies are used to their fullest potential in aircraft and the air traffic control system. The memorandums of understanding among the partner agencies also require that the partner agencies have the mechanisms in place to coordinate and align their NextGen activities, including their NextGen-related budgets, acquisitions, and research and development. The legislation also directed the Secretary of Transportation to establish a Senior Policy Committee, to be chaired by the Secretary, to provide NextGen policy guidance and review, and to facilitate coordination and planning of NextGen by the partner agencies. To help implement the responsibilities described in the legislation, each partner agency assigned a liaison to JPDO, as well as staff in some cases. In addition, several working groups were created to facilitate collaboration between partner agencies and the private sector, and the NextGen Institute was established as a forum for private industry involvement in NextGen planning and other activities. As initial NextGen planning was completed and the focus turned to implementation, JPDO’s role changed to focus on long-term research beyond 2018. Furthermore, in 2010 a new JPDO Director was appointed (the office’s fourth Director in its 7 years of existence) and JPDO was moved organizationally within FAA to raise its prominence and enable it to better serve as a mechanism for interagency collaboration.
Because NextGen implementation also requires expertise, research, and technology from the private sector, FAA has developed processes and mechanisms for interacting with the private sector. FAA views its Acquisition Management System (AMS) as the primary mechanism for transferring research and technology from the private sector. FAA’s AMS establishes policy and guidance for all aspects of the acquisition lifecycle, and the AMS contracting process is designed to help FAA procure products and services from sources offering the best value to satisfy FAA’s mission needs. FAA and NASA have created four research transition teams as mechanisms to transition the complicated technologies that do not fit within a single FAA office’s purview under FAA’s structure. The teams cover approximately half of all research and development activities conducted by NASA’s Airspace Systems Program—a group assigned to directly address fundamental NextGen needs. Each team addresses a specific issue area that (1) is considered a high priority, (2) has defined projects and deliverables, and (3) requires the coordination of multiple offices within FAA or NASA. Involving planning and operational personnel early is meant to avoid making decisions in isolation that may waste resources and time. Consistent with key practices that can help enhance and sustain interagency collaboration, these teams identify common outcomes, establish a joint strategy to achieve that outcome, and define each agency’s role and responsibilities, allowing FAA and NASA to overcome differences in agency missions, cultures, and established ways of doing business. Each research transition team develops and documents a plan that defines the scope of its efforts and the products to be developed. The plans outline a delivery schedule and the maturity level to which products will be developed. They also identify how products will be used by FAA in its investment decision process, what NASA will provide to FAA, and what FAA’s involvement will be in the conduct of the research. For example, one team’s plan includes development of a decision support tool to help manage the assignment and scheduling of runways at multiple airports to optimize operations. For this product, NASA is scheduled to deliver technical papers in 2012 and a software prototype in 2013. At the time of the scheduled transition to FAA in 2014, the tool should be at a prescribed level of technical maturity and FAA will make an implementation decision later that year. Most of the four research transition teams have not yet delivered products and, while stakeholders are optimistic, whether technologies developed by these teams are ultimately implemented will largely depend on how well coordination occurs across multiple FAA offices involved in implementation. Research transition teams’ products identified for development are expected to be transferred to FAA predominantly from 2012 through 2015. As of April 2011, NASA has delivered two final products and several interim informational products to FAA—including concept feasibility papers, an algorithm related to efficient flow in congested airspace, and data from a joint simulation. Going forward, stakeholders and participants with whom we spoke generally expressed optimism about the research transition teams’ ability to transfer NASA work to FAA and into the NAS.
However, some stakeholders noted that success requires high-level commitment from each agency and effective team leads. Specifically, one NASA official noted that FAA’s research transition team leads do not have the authority to make final decisions about the implementation of a given technology. Therefore, the success of the team’s product will ultimately depend on that team lead’s ability to work across various FAA offices to negotiate and coordinate a solution. FAA and NASA also use other technology transfer mechanisms—including interagency agreements and test facility demonstrations—which have historically faced challenges at the point where the technology is handed off from NASA to FAA, but have nonetheless resulted in successful transfer and implementation of technology. Past technology transfer efforts between NASA and FAA faced challenges at the transfer point between invention and acquisition, referred to as the “valley of death.” At this point in the process, NASA has at times had limited funding to continue beyond fundamental research, while the technology was not yet mature enough for FAA to assume the risks of investing in a technology that had not been demonstrated with a prototype or similar evidence. FAA and NASA officials have said the transition is still a challenge, but both are working to address this issue through interagency agreements that commit NASA to maturing research to a more advanced technological level than it has at times in the past. Both interagency agreements and test facility demonstrations were used in the development and transfer of the Traffic Management Advisor, a program NASA developed, which uses graphical displays and alerts to increase situational awareness for air traffic controllers and traffic management coordinators. Through an interagency agreement, the two agencies established the necessary data feeds and two-way computer interfaces to support the program. NASA demonstrated the system’s capabilities at the NextGen test facility in North Texas where it also conducted operational evaluations. NASA successfully transferred the program to FAA, which, after reengineering it for operational use, deployed it throughout the United States. In some instances, the mechanisms FAA and NASA use to collaborate and transfer technologies have resulted in implementation of that technology in the NAS—as with Traffic Management Advisor; in others, the mechanisms have resulted in less tangible outcomes but nonetheless represent successful transfer in our view. For example, according to NASA officials, much of what is transferred between NASA and FAA is technical knowledge (e.g., an informational report or an algorithm) as opposed to a piece of hardware or new software. These products may not necessarily lead to immediate deployments, but the knowledge transferred may inform future decisions, lead to applied research, or be the precursors to future operational trials. In other instances, these mechanisms may produce a proven technology that is ultimately not implemented by FAA, but can be successfully transferred to the private sector. For example, NASA developed a decision support tool intended to assist controllers in identifying the optimal route given wind conditions. Though operational evaluation testing was successful, FAA chose not to pursue full-scale development of the capability because it ultimately did not consider the capability to be a controller function.
However, Boeing has since leveraged NASA’s work to develop Boeing Direct Routes, a service that uses advanced software algorithms to automatically alert an airline’s operations centers and flight crew when a simpler, more fuel-efficient path is available, permitting the operations center to propose those routes to FAA controllers for approval. Boeing predicts that the service will result in measurable decreases in aircraft fuel usage and emissions. In this case, even though FAA—NASA’s intended customer—did not deploy the technology, it was successfully transferred to the private sector and will be used in the NAS to produce anticipated benefits consistent with NextGen goals. Collaboration between FAA and Commerce, specifically the National Oceanic and Atmospheric Administration (NOAA), has been facilitated by the creation of the NextGen Executive Weather Panel (the Executive Panel). Weather has a tremendous impact on aviation operations and accounts for approximately 70 percent of all air traffic delays. Assimilating weather information into air-traffic management decisions so that decision-makers can better identify where and when aircraft can fly safely is a key goal of NextGen. It also requires significant collaboration and coordination across agencies and the private sector to transfer the data, knowledge, and technology necessary. (See sidebar and fig. 2.) [Sidebar: FAA is primarily responsible for the air traffic management-weather integration process and for directing research and development of aviation-specific weather information and functionality. NASA is involved as a major developer of air traffic management tools and techniques, and weather integration methodologies. Federal partners are also to involve the private sector in decisions that may affect them.] In order to improve communication and coordination related to NextGen weather, the Senior Policy Committee approved the Executive Panel to act as the primary policy and decision-making body for NextGen weather issues. The Executive Panel is composed of high-level representatives from FAA, NOAA, DOD, NASA, and JPDO. According to one JPDO official, the Executive Panel is akin to the research transition team construct used by FAA and NASA in that it provides senior executive-level oversight and coordination of interagency activities related to delivering NextGen weather capabilities. While the Executive Panel provides a forum for senior-level direction, it has not connected researchers from NOAA with program and operation staff at FAA or identified specific technology development transition plans as the FAA and NASA teams have. Progress is also being made in defining each agency’s roles and responsibilities, though this task has not been completed. For instance, FAA and NOAA have a memorandum of understanding from 2004 that generally establishes the responsibilities of each agency for meeting aviation weather requirements, and in 2010, the agencies jointly completed an integrated management plan for NextGen Network-Enabled Weather and the NextGen 4-D Weather Data Cube. In addition, the two have come to agreements on financial responsibility for some weather projects. For example, FAA and Commerce have come to an overall agreement that the National Weather Service will fund the development of the NextGen 4-D Weather Data Cube and FAA will fund the development of the NextGen Network-Enabled Weather capability, which is expected to connect to the Cube for weather data.
There is also agreement that funding for any research and development or capabilities that are aviation-unique (e.g., turbulence forecasting) would need to be negotiated between the two agencies. However, FAA and Commerce have not developed an overarching strategy that would identify those specific capabilities in advance. Development of a research management plan is one step expected to facilitate the process to meet NextGen weather needs by the partner agencies, clarify roles and responsibilities, and improve the process for transitioning FAA weather research into National Weather Service operations. As with other agencies, any lack of coordination between FAA and Commerce could result in duplicative research and inefficient use of resources at both agencies. FAA and Commerce use additional mechanisms to coordinate their research and have transitioned some weather technology. For instance, FAA, NOAA, and NASA have held joint research program reviews in each of the last 2 years to enhance collaboration and identify duplications in efforts, according to FAA. Researchers from several NOAA laboratories and forecast centers have also collaborated with FAA in research planning, development, and assessment as well as implementation of research results through interagency agreements. According to NOAA officials, NOAA has worked with FAA to coordinate and align program goals and requirements to meet NextGen weather needs, and in the last 2 years, FAA transitioned two weather technologies to NOAA’s National Weather Service. In addition, a team from FAA and NOAA’s National Weather Service, sponsored by JPDO, has begun to develop the functional requirements for NextGen aviation weather systems, and the two agencies continue to work together on additional weather-related planning efforts. DOD has not completed an inventory of its research and development portfolio related to NextGen, impeding FAA’s ability to identify and leverage potentially useful research, technology, or expertise from DOD. JPDO has recommended that DOD have primary responsibility for six research and development activities and provide support for an additional 47. In December 2007, DOD designated the Air Force as the lead service for the agency’s NextGen involvement, and, in the formal agreement that established roles and responsibilities for JPDO and the partner agencies, DOD agreed to develop mechanisms to align its NextGen-related research and development efforts with JPDO’s Integrated Work Plan. Air Force officials expected to have completed a comprehensive list of DOD’s NextGen-related research and development activities and programs, as well as a roadmap to facilitate technology transfer, by November 2009. In June 2010, the DOT Office of the Inspector General recommended that FAA develop a plan to identify research and technologies from DOD’s research and development portfolio that could be used for NextGen and establish a mechanism to coordinate and transfer that information to the appropriate FAA program or development offices. According to JPDO, it has established contacts with various DOD organizations, but has only begun to develop a plan to review and identify DOD research and technologies potentially useful for NextGen. As of March 2011, DOD had compiled a preliminary but incomplete list of its NextGen-related research and development. According to DOD officials, the office underestimated the size and complexity of the task.
As a result of progress made during 2010 and 2011, it has become clear that the original tasking was not the ideal approach. Instead, DOD plans to form technical teams with representatives from the research and development bodies within each agency to identify critical NextGen research and development needs and, using that list of specific needs, to identify programs that may address them. This process is currently being applied to the area of unmanned aircraft systems in an interagency effort led by JPDO. At the same time, DOD’s ability to identify potentially useful research and technology may be impeded by FAA’s inability to identify the scope of its needs. Though JPDO has identified the research and development activities needed to deliver NextGen, according to DOD officials, FAA has not, in some cases, described its NextGen technological gaps with enough specificity for DOD to help identify where its research and development efforts and expertise may provide benefit. As we have previously reported, a key aspect of successful agency coordination is identifying and addressing needs by leveraging resources. Collaborating agencies can accomplish this by identifying the human, information technology, and physical and financial resources needed to initiate or sustain their collaborative effort. However, without an inventory, DOD, JPDO, and FAA have been unable to identify all the resources at DOD that may be useful for NextGen, or the budgetary resources that DOD puts toward NextGen-related activities. Lack of coordination between FAA and DOD could result in duplicative research and inefficient use of resources at both agencies. Although DOD has liaisons at FAA and JPDO, according to DOD and JPDO officials, communication challenges continue to impede coordination and collaboration between the agencies. DOD has assigned a liaison to JPDO with experience in net-centric operations, one of the areas in which stakeholders view DOD expertise as an important contribution to NextGen. DOD also co-chairs JPDO’s Net-Centric Operations Working Group and contributes as a member of various other JPDO committees, boards, and working groups. In addition, in 2010 DOD assigned a liaison from the Air Force Research Laboratory to FAA’s NextGen and Operations Planning, Research and Technology Development Office to act as a conduit into DOD’s research base. We have previously reported that as agencies bring diverse cultures to collaborative efforts, it is important to address those differences in a way that will enable a cohesive working relationship and create the mutual trust required to enhance and sustain such a collaborative effort. In particular, according to DOD officials, differences in terminology and culture across agencies create communication challenges between FAA and DOD. DOD research plans were developed according to DOD needs, using DOD’s terminology, not with potential connections to NextGen and civil aviation in mind. To understand the extent to which DOD research can address NextGen needs, DOD officials stated that subject matter experts from both FAA and DOD with extensive knowledge of DOD research and NextGen would need to review the existing research, determine what connections exist to NextGen plans, and develop a method of communicating and translating how DOD research supports NextGen activities. Existing mechanisms for collaboration between FAA and DOD are not currently designed or equipped to accomplish this task.
DHS’s collaboration is important in several areas of NextGen research, particularly related to unmanned aircraft systems and cyber security; however, thus far, DHS’s participation has been limited in these key areas. DHS plans to use unmanned aircraft systems to monitor the nation’s borders and plays a key role in the initiative to safeguard federal government systems from cyber threats and attacks, including conducting and coordinating cyber security research and development. DHS has collaborated with the partner agencies on NextGen as the co-chair of JPDO’s Aviation Security Working Group, one of nine working groups that JPDO established to solve problems and make fact-based recommendations to be integrated into NextGen. According to DHS officials, DHS helped develop the security component of NextGen planning and has been an active participant, since JPDO’s inception, through the working group it co-chairs. DHS has also been involved in NextGen integrated surveillance planning and coordination efforts in collaboration with FAA and DOD. Though these are steps toward identifying common outcomes and joint strategies, in other important areas DHS has had limited participation in NextGen. JPDO has recommended that DHS be the agency with primary responsibility for 19 research and development activities and provide support for an additional 18. Many of the activities for which DHS is primarily responsible are related to baggage screening and other security functions, not air traffic management functions where FAA would be the implementer. However, like DOD, DHS has not identified and aligned its NextGen-related research and development activities as it agreed to do in the formal agreement that established the roles and responsibilities of JPDO and the partner agencies, and has not identified the budget figures associated with NextGen activities. In addition, according to DHS officials and other partner agencies, DHS was not involved in early planning for activities at JPDO specifically related to cyber security. DHS officials commented that sometimes DHS does not participate in events either because it is not invited or because it chooses not to. Limited collaboration between DHS and FAA could result in conflicts in NextGen priorities and needs in the future. As we have previously reported, such a lack of collaboration can result in marginalizing NextGen areas that affect DHS. Further, given DHS’s responsibility for cyber security, lack of coordination in this area could result in FAA not fully leveraging technologies developed by DHS. DHS and JPDO collaboration efforts may improve with the assignment of a new executive representative. In October 2010, DHS’s executive representative to JPDO left the agency, and DHS did not initially identify a replacement. According to one JPDO official, participation in work on integrated surveillance began to lag at that point, although according to DHS, its efforts through JPDO’s Aviation Security Working Group continued. DHS assigned a new executive representative and back-up in January 2011 and integrated surveillance work has resumed. FAA and partner agencies are working to address previously identified research gaps, though coordination is an issue in some areas. In 2008, JPDO conducted a cross-agency gap analysis intended to identify major differences between NextGen planning documents and partner agency plans and budgets.
JPDO identified gaps in key research and implementation focus areas that are critical to NextGen and involve joint agency missions and expenditures. The areas where gaps were identified included unmanned aircraft systems, human factors, and airspace security. According to FAA's chief scientist for NextGen development, efforts are underway in each of these areas. For instance, FAA, in partnership with JPDO, and the partner agencies are defining the research and development needs for operating unmanned aircraft systems in domestic airspace and are developing a joint concept of operations and research roadmap. In late 2010, JPDO sponsored a workshop on unmanned aircraft systems that brought together subject-matter experts and executives from FAA, JPDO, DOD, and NASA. The workshop focused on critical and crosscutting long-term research and development issues and was a step toward JPDO's goal of having the technologies, procedures, standards, and policies in place to achieve full integration of unmanned aircraft systems. However, DHS, which will be one of the primary operators of these systems in domestic airspace, did not participate. A lack of coordination could result in a duplication of research or an inefficient use of resources. With regard to human factors, as we have previously reported, FAA and NASA are coordinating their NextGen human factors research using a variety of mechanisms—including research advisory committees, interagency agreements, and research transition teams. FAA has also created a human factors portfolio to identify and address priority human factors issues. In addition, in February 2011, FAA and NASA completed a cross-agency human factors plan, as JPDO and we recommended. Finally, with respect to airspace security, according to FAA, it is engaging with both DOD and DHS through JPDO-sponsored events. However, FAA is unable to move forward with some of its airspace security research and development because DHS has not involved the appropriate personnel needed to move the issue area beyond the concept development phase.

Broadly speaking, FAA's Acquisition Management System (AMS) provides a framework for FAA to undertake research and development of concepts and technologies, advance a technology to the point where FAA can define the requirements to meet its needs, and then either identify existing technology that meets those needs or request proposals from industry to develop the technology. Within the AMS, FAA may use several mechanisms at various stages to conduct outreach, collaborate with private sector firms, and transfer technology. (See table 1.) In particular, FAA may use several types of research and development agreements between itself and the private sector as mechanisms to facilitate technology transfer. These agreements include cooperative research and development agreements, memorandums of agreement, memorandums of understanding, and other transaction authority. Cooperative research and development agreements allow FAA to share facilities, equipment, services, or other resources with private industry, academia, or state and local government agencies and are part of meeting FAA's technology transfer program requirements. As of January 6, 2011, FAA's Research and Technology Development Office had over 20 such agreements with industry or academia.
Prior to pursuing an acquisition, the agency is required under the AMS to conduct a market analysis to determine whether the needed capability exists in the marketplace or has to be obtained through the acquisition process; additional market analysis may also be conducted as FAA moves forward with an acquisition. FAA may publicly request proposals from private industry to develop the technology, and any private sector entity can submit its proposal for meeting FAA's requirements and compete against other entities for the contract award. However, stakeholders said that, under some circumstances, the AMS can lack the flexibility for FAA to consider alternative technologies or new ideas for certain technologies or subsystems within an acquisition once the process is underway. According to several industry stakeholders we spoke with, if they have a technology they believe is worth considering to improve some aspect or meet some need of a system that is being developed at FAA—such as a piece of software or some data that may be relevant to improve decision-making—there is no clear entry gate for getting that technology considered. Other stakeholders said that FAA has difficulty considering technologies that cut across programs and offices, and one stakeholder said that such ideas may not be considered because there is no clear “home” or “champion” within FAA for the technology. Similar issues have been encountered for technologies that NASA developed, which resulted in the creation of the research transition teams discussed previously. In the past, we have recommended that FAA improve its ability to manage portfolios of capabilities across program offices. On the other hand, at a certain point FAA must be able to commit resources, finalize plans, and stop considering alternatives in order to move forward with implementing a new system. Furthermore, according to agency officials, once FAA makes a decision to pursue a particular technological path, it can become costly to change course; therefore, any benefits of changing course must be weighed against the costs. Nonetheless, industry stakeholders suggested that additional avenues to consider alternative technologies could be made available and could result in technologies that enable FAA to meet its mission more efficiently. We have made recommendations to FAA over the years to improve its AMS process.

To address this issue at least in part, FAA has recently designed another contracting tool to provide it with research and development and systems engineering support to integrate NextGen concepts, procedures, and technologies into the NAS, which may provide some additional flexibility for collaboration and technology transfer with industry. The Systems Engineering 2020 (SE 2020) contracts are a set of multiple-award umbrella contracts of up to 10 years, worth approximately $6.4 billion. Under SE 2020, FAA will be able to have participating firms support NextGen implementation activities such as concept exploration, modeling and simulation, and prototype development. By pooling engineering expertise under a single contracting vehicle, FAA believes it will be able to more quickly obligate funds and issue task orders, which is intended to result in implementing NextGen more quickly. FAA officials believe that structuring the umbrella contract to include small businesses will encourage more small businesses to participate in its efforts to implement NextGen. Firms that have not been selected will not be able to participate in the SE 2020 program.
However, according to some industry officials, the program's ability to more quickly obligate funds and issue and complete task orders has yet to be fully demonstrated, and stakeholders we spoke with expressed concerns about whether FAA's efforts to expedite the work will mean missing out on the expertise of excluded companies.

FAA also has an unsolicited proposal evaluation process that is designed as a mechanism for private industry to offer unique ideas or approaches outside FAA's competitive procurement process; however, it has not proven to be a significant source of new technology for FAA. From 2008 to 2010, FAA received 56 unsolicited proposals from private industry and rejected all but one of them. The most common reasons for rejection, according to FAA, were that the proposals were not unique and innovative or that FAA already had an effort in place to meet that requirement. (See table 2.) In general, we found that FAA's reasons for rejecting proposals met FAA's established criteria for evaluating unsolicited proposals. However, FAA evaluators told us that the “unique and innovative” criterion was a difficult one for proposals to meet, because technologies often build on previous technologies. Furthermore, if a firm submitting an unsolicited proposal is to receive a sole source contract, competitive procurement principles require a finding that no company other than the one submitting the unsolicited proposal can provide the technology. If this is not the case, competitive proposals must be sought. Some participants told us that technologies should not be eliminated from consideration even if their application is not entirely unique and contracts to implement them might have to be awarded competitively. FAA evaluators commented that there was little guidance on how to interpret the criteria for evaluating unsolicited proposals, the unique and innovative criterion in particular. Some suggested that additional guidance on applying the criteria could be provided, or that a review panel could be set up to assist in reviewing the ideas contained in these proposals. Participants also told us that the process, in some cases, is not collaborative, which may hinder FAA from leveraging potentially valuable technologies. Other participants explained that FAA's written response sometimes did not reflect a full understanding of what a company was offering, so in these cases the companies would have liked an opportunity to clarify the merits of their proposal. Although FAA says that companies whose proposals are rejected can meet with the program offices to discuss reasons for rejection, some companies told us this opportunity was not always provided. Where there are disagreements between FAA and companies submitting unsolicited proposals over FAA's stated reason for rejecting a proposal, FAA is not required to discuss why a submission was rejected or how it might be improved.

FAA conducts various outreach events with its research stakeholders, including those in industry, to exchange information among stakeholders currently engaged in collaborative technology projects and to communicate NextGen's direction to potential collaborators. From 2008 through 2010, over 300 outreach events were held during which FAA presented technical information focused on planned or ongoing NextGen projects and programs.
Seminars, conferences, and industry days are designed to inform industry about where FAA is headed with regard to NextGen and any changes that may have occurred in NextGen's direction in the last year. The identification of technologies for use in NextGen is not necessarily a goal of many of these efforts. Although technology identification or transfer may not occur at these events, they can create and reinforce working and personal relationships between leading experts and researchers in the air traffic management research and development community, create opportunities to share available research results, and maintain consensus between FAA and industry on major issues. Some FAA and industry events, however, have had more of a collaborative purpose, creating opportunities for information and technical exchanges. Technical interchange meetings, workshops, and demonstrations are designed to address select technical issues and have been used to try to identify existing technologies or to communicate to private sector stakeholders specific technological or research needs that they can address. These meetings can result in the identification of existing technologies that FAA can use to meet a specific need. For example, FAA's Global Navigation Satellite System Program Office recently sponsored a workshop for a broad range of industry and partner agency stakeholders to come together to discuss needs and potential solutions for a back-up system that could support the Global Positioning System if satellites became unavailable. The purpose of the workshop was to work collaboratively with partner agencies and industry to identify existing technologies and systems that could be modified to provide a viable backup system. One industry participant we spoke with told us that the workshop was highly collaborative and had positive results in terms of focusing on technology that could be leveraged by FAA. However, according to participants in other events, it is often unclear what happens after these events in terms of taking the next steps to transfer knowledge or technology or working with FAA to develop solutions. FAA keeps documentation of what occurs at these meetings, including information on outcomes from each event. Our review of this documentation found that few events documented concrete outcomes or identified next steps to further develop ideas or technologies identified and discussed at an event.

JPDO is reassessing the role and structure of the NextGen Institute as a mechanism for collaboration and technology transfer with industry. The DOT Inspector General recommended in June 2010 that JPDO determine whether there is a continued need for the Institute and, if there is, to redefine its roles and responsibilities to avoid duplication with other private-sector organizations. The NextGen Institute was established in March 2005 as the mechanism through which JPDO would access private-sector expertise in a fair and balanced framework that embraces all individuals, industry, and user segments for application to NextGen activities and tasks. However, participation in the Institute diminished over time as funding became uncertain. Recently, a new Executive Director was named for the Institute, and JPDO is working closely with the new Executive Director and the Institute Management Council—which oversees the policy, recommendations, and products of the NextGen Institute—to identify a course of action that is embraced by industry stakeholders.
According to several private-sector stakeholders we spoke with, the NextGen Institute could serve as a valuable mechanism for FAA and industry collaboration if properly designed and structured.

While not necessarily a technology transfer mechanism, RTCA—a private, not-for-profit corporation that develops consensus-based recommendations within the aviation community on communications, navigation, surveillance, and air traffic management system issues—is a key source of FAA and industry collaboration. For example, in 2009 RTCA convened the NextGen Midterm Implementation Task Force at the request of FAA, which brought together key stakeholders in the aviation community. The Task Force reached a consensus within the aviation community to focus on implementing capabilities in the NAS that take advantage of existing technologies and capabilities aboard aircraft. In addition, RTCA has recently created the NextGen Advisory Committee, which is composed of top-level executives representing various parts of the aviation and aerospace industries, as well as airports, air traffic management, and various other public and private stakeholder groups.

Some NextGen test facilities serve as a forum in which private companies may learn and partner with each other and, eventually, enter into technology acquisition agreements with FAA with reduced risk. The FAA Technical Center test facility in Atlantic City, New Jersey, and the Embry-Riddle test facility in Daytona Beach, Florida, provide places where integration and testing with industry can take place without affecting day-to-day air traffic operations. They also enable industry and government to ensure that new technologies will integrate with systems currently in the NAS and, according to a senior FAA official, allow FAA to leverage private sector funding, expertise, and technologies. For example, in November 2008, several companies, including Lockheed Martin and Boeing, were involved in an FAA demonstration at Embry-Riddle on how current and forecasted weather information can be integrated into FAA's traffic management and en route automation systems. Also at Embry-Riddle, Lockheed Martin is funding some work in conjunction with US Airways on a new time-based traffic flow management system designed to provide increased gate-to-gate air traffic predictability.

The success of these test facilities as opportunities to leverage private-sector resources depends in large part on the extent to which the private sector perceives benefits from its participation. Representatives of firms participating in test facility activities told us that tangible results, in terms of implementation of technologies developed, were important to maintain private sector interest and that it was not always clear what happened to technologies that were successfully tested at these sites. In June 2010, the DOT Inspector General also reported that demonstrations may not provide a clear path to implementation and are sometimes not outcome-focused. We have also reported that FAA should increase its focus on performance and outcomes. One of the difficulties cited by officials at these test facilities was that if a technology being tested did not have a place in one of the NAS Enterprise Architecture Infrastructure Roadmaps, then there was no implementation plan for that technology and no next steps to get that technology into the NAS.
For example, NASA was developing the Precision Departure Release Capability, a software technology that links Traffic Management Advisor to other information to better plan flight departures by minimizing delays once passengers have boarded the plane. This technology, however, was not a capability or technology that was part of the Enterprise Architecture Roadmap, and NASA had difficulty finding support for it, notwithstanding its merit and FAA's interest in pursuing it. According to NASA officials who worked on the capability, the process for getting a technology into a roadmap was not transparent to participants at the test facilities, and it took considerable time and effort to eventually get the capability included in the roadmap and garner support.

To advance aviation partnerships and the development and transfer of aviation technologies, the concept for a Next Generation Aviation Research and Technology Park was developed through a collaborative effort by local, county, state, and federal agencies; academia; and private sector interests. As a result of this effort, FAA entered into a lease and memorandum of understanding with the South Jersey Economic Development District to build a Next Generation Research and Technology Park adjacent to the William J. Hughes Technical Center near Atlantic City, New Jersey. The lease transfers control of 58 acres of FAA property for construction of the complex. The park is a partnership that is intended to engage industry in a broad spectrum of research projects, with access to state-of-the-art federal laboratories. The establishment of this park will help encourage the transfer of scientific and technical information, data, and know-how to the private sector and is consistent with FAA's technology transfer program order. The park will offer a central location for FAA's industry partners to perform research, development, testing, integration, and verification of the technologies, concepts, and procedures required by NextGen. According to FAA, this private-sector engagement in research has the potential to save significant time and expense in bringing new products to market and to reduce the time to deliver NextGen components. The park is intended to complement the NextGen demonstration capabilities at Embry-Riddle Aeronautical University in Daytona Beach, Florida. Advanced NextGen technologies developed and tested at the Technical Center will be demonstrated in an operational environment at Daytona Beach and then returned to the Technical Center for integration with the current NAS and other components of NextGen.

Transforming the nation's air transportation system is a technically complex undertaking that will affect FAA's activities and missions, as well as those of federal partner agencies and the private sector. NextGen's success depends, in significant part, on FAA's ability to leverage the research and technology efforts of these agencies and firms. While much has been done to develop mechanisms for effective research and technology transfer, some mechanisms have not been successful in ensuring that FAA is leveraging the research and technologies of its partners. In particular, FAA and DOD have yet to completely identify DOD's potentially beneficial research and technology. In addition, collaboration between FAA and DHS in identifying areas for joint research and technology development is limited.
Effective transfer of research and technology requires effective collaboration, and we have previously found that interagency collaboration is enhanced when agencies, among other things, define common outcomes, identify and address needs, establish joint strategies, agree on roles and responsibilities, and establish compatible policies, procedures, and other means to operate across agency boundaries. FAA's collaborative mechanisms with DOD and DHS fall short of fulfilling these criteria. FAA's ability to identify potentially useful DOD and DHS research and technology has been impeded because DOD and DHS have not completely identified the research and development in their portfolios that is applicable to NextGen, while DOD's ability to identify potentially useful research and technology may be impeded because FAA has not made the scope of its needs clear with enough specificity. Further, communication between DOD and FAA has been hampered by differing vocabularies and terms, and mechanisms have not yet been developed to help the agencies work across agency boundaries. While we have noted these issues in several reports over the years and the DOT Inspector General has made recommendations for FAA to develop a plan to review DOD's research, we find that much remains to be done in this area to improve communication and collaboration between the agencies. Unless FAA and its partner agencies communicate and jointly identify ongoing research and technology development that is relevant to NextGen efforts, FAA will not be able to fully leverage the potential of its partner agencies' research and technology development efforts.

In this report, as well as in a previous report, we note that FAA and its partner agencies have struggled to develop an integrated budget document that tracks partner agencies' involvement in NextGen, determines whether funding is adequate for specific efforts, and tracks the overall cost of NextGen. Failure to complete this effort makes it difficult for FAA and the Congress to understand the extent to which FAA is leveraging the research efforts of its partners to achieve the NextGen vision. We have an open recommendation to FAA with regard to developing this integrated budget and are monitoring actions related to our recommendation. We are therefore not making recommendations about this issue in this report.

We also discuss several issues throughout the report with respect to how FAA collaborates with the private sector to transfer research and technology. For example, while FAA conducts market analyses, holds numerous events with industry, enters into various collaborative agreements, and has numerous mechanisms—such as the NextGen Institute, demonstrations, and testing facilities—to collaborate with industry and provide opportunities for technology transfer, it is not always clear what comes out of these mechanisms, and some in industry have indicated that, despite all of these collaborative activities, it is not always evident what the “entry points” to FAA are for getting technologies or ideas considered. Nonetheless, numerous mechanisms exist, and additional mechanisms are being reconsidered or are still under development, such as the NextGen Institute and the Research and Technology Park. We also found that FAA's AMS process can limit FAA's ability to consider alternatives in some cases and that FAA has difficulty considering technology solutions that cut across several programs or offices at FAA.
We have made several recommendations to FAA over the years to address these issues. We have recommended that FAA improve its AMS process, improve its ability to manage portfolios of capabilities across program offices, and increase its focus on performance and outcomes, which FAA has begun to implement. Moreover, the DOT Inspector General made a recommendation in 2010 for FAA to reassess the current role of and continued need for the NextGen Institute and to ensure that it is a useful resource and not duplicative of other mechanisms designed to work with private industry. We are therefore not making any further recommendations to FAA in these areas, but we encourage FAA to continue its efforts to address existing recommendations.

To more fully leverage the potential of NextGen partner agencies' research and technology development efforts, we recommend that the Secretary of Transportation direct the Administrator of the FAA to work with the Secretaries of Defense and Homeland Security to develop mechanisms that will further clarify NextGen interagency collaborative priorities and enhance technology transfer between the agencies. These mechanisms should focus on improving interagency communication about the specific needs, outcomes, and existing research that FAA has for NextGen, as well as the existing research and technology development portfolios within DOD and DHS that may be applicable to NextGen. These mechanisms should aim to improve the ability of the agencies to leverage resources or transfer knowledge or technology among each other, consistent with the key practices for successful collaboration that we lay out in this report.

We provided a draft of this report to the Departments of Transportation, Defense, Homeland Security, and Commerce; NASA; and the Office of Science and Technology Policy. The Department of Transportation provided technical comments by e-mail, which we incorporated as appropriate, but did not comment on whether it agreed with our recommendation. The Department of Defense provided written comments, which are reproduced in appendix I. DOD concurred with our recommendation and highlighted the existing mechanisms it has that support agency collaboration and technology transfer. The Department of Homeland Security provided written comments, which are reproduced in appendix II. DHS also concurred with our recommendation and mentioned a newly formed mechanism—the Air Domain Awareness Board—that will support technology transfer discussions among DHS, FAA, JPDO, and other stakeholders in relation to NextGen. These mechanisms are positive steps toward NextGen technology transfer among the partner agencies. However, as our recommendation further states, DOD and DHS should ensure that relevant research and development activities that could support NextGen are identified within these or other mechanisms and that appropriate steps are taken to develop mechanisms to effectively transfer any identified research and technology. Because the mechanisms DOD and DHS identified have not yet demonstrated these results, we believe that fully implementing the recommendation remains important beyond the existing mechanisms used by DOD and DHS. The Office of Science and Technology Policy provided one technical comment by e-mail, which we incorporated. The Department of Commerce and NASA had no comments.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date.
At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, the Administrator of the Federal Aviation Administration, NASA, DOD, DHS, Commerce, the Office of Science and Technology Policy, and other parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

In addition to the contact named above, individuals making key contributions to this report include Andrew Von Ah (Assistant Director), Richard Hung, Bert Japikse, Delwen Jones, Kieran McCarthy, Josh Ormond, Taylor Reeves, Richard Scott, Maria Stattel, and Jessica Wintfeld.
The Federal Aviation Administration (FAA) is developing and implementing a broad transformation of the national airspace system known as the Next Generation Air Transportation System (NextGen). NextGen is a complex undertaking that requires new technologies and supporting infrastructure and involves the activities of several agencies as well as private industry. This report provides information on the effectiveness of (1) FAA's and the federal partner agencies' mechanisms for collaborating and leveraging resources to develop and implement NextGen, and (2) FAA's mechanisms for working with and transferring technology to or from private industry. To do this, we assessed FAA and partner agency mechanisms against applicable agreements, the agencies' own guidance for these activities, and applicable key practices that GAO has reported can enhance federal collaborative efforts.

Some mechanisms for FAA and partner agency collaboration are effective, though others fail to ensure that research and technology from the partner agencies and industry are fully used by FAA. Some mechanisms used by FAA and the National Aeronautics and Space Administration (NASA) for coordinating research and transferring technology are consistent with several key practices in interagency coordination. For instance, FAA and NASA use research transition teams to coordinate research and transfer technologies from NASA to FAA. The design of these teams is consistent with several key practices GAO has identified in previous work that can enhance interagency coordination, such as identifying common outcomes, establishing a joint strategy to achieve those outcomes, and defining each agency's roles and responsibilities. This allows the agencies to overcome differences in mission, culture, and ways of doing business. However, mechanisms for collaborating with other partner agencies do not always ensure that FAA effectively leverages agency resources. For example, the mechanisms used by FAA, DOD, and DHS have not yet resulted in a full determination of what research, technology, or expertise FAA can leverage to benefit NextGen. Further, collaboration between FAA, DOD, and DHS may be limited by differing priorities. Finally, FAA and the Joint Planning and Development Office--an interagency organization created to plan and coordinate research for NextGen--have not fully coordinated the partner agencies' research efforts, though they are working to address research gaps. A lack of coordination could result in a duplication of research or an inefficient use of resources.

Numerous mechanisms are available to FAA to collaborate with industry to identify and transfer technology to advance NextGen, but some lack flexibility and outcomes can be unclear. Within its Acquisition Management System (AMS), FAA may use several mechanisms at various stages to conduct outreach, collaborate with private-sector firms, or transfer technology. In particular, FAA may use several types of research and development agreements between itself and the private sector as mechanisms to facilitate technology transfer. However, stakeholders said that the system can lack flexibility, in some circumstances, to consider alternative technologies or new ideas once the process is underway. GAO has made recommendations in the past to improve FAA's AMS. FAA has begun to implement these recommendations.
FAA is beginning to use a new, possibly more flexible, contracting vehicle--Systems Engineering 2020--to acquire the research, development, and systems engineering support needed to integrate NextGen concepts. FAA also reviews unsolicited proposals as a mechanism for private industry to offer unique ideas or approaches outside of the competitive procurement process. However, FAA's unsolicited proposal process is not a significant source of new technology for FAA. Other mechanisms, such as outreach events with private industry and NextGen test facilities, might enhance knowledge and result in technology transfer, but outcomes, such as specific benefits, from some of these mechanisms can be unclear.

GAO recommends that FAA and the Departments of Defense (DOD) and Homeland Security (DHS) work together to develop mechanisms that will enhance collaboration and technology transfer between the agencies. GAO and others have outstanding recommendations related to interaction with industry that FAA has begun to address, and GAO makes no further recommendations in this report. DOD and DHS concurred with the recommendation, while FAA did not comment on whether it agreed.
Over the past two decades—from 1991 through 2012—there was a substantial increase in the number of FLSA lawsuits filed, with most of the increase occurring in the period from fiscal year 2001 through 2012. As shown in figure 1, in 1991, 1,327 lawsuits were filed; by 2012, that number had increased over 500 percent, to 8,148. FLSA lawsuits can be filed by DOL on behalf of employees or by private individuals. Private FLSA lawsuits can either be filed by individuals or on behalf of a group of individuals in a type of lawsuit known as a “collective action.” The court will generally certify whether a lawsuit meets the requirements to proceed as a collective action. The court may deny certification to a proposed collective action or decertify an existing collective action if the court determines that the plaintiffs are not “similarly situated” with respect to the factual and legal issues to be decided. In such cases, the court may permit the members to individually file private FLSA lawsuits. Collective actions can serve to reduce the burden on courts and protect plaintiffs by reducing costs for individuals and incentivizing attorneys to represent workers in pursuit of claims under the law. They may also protect employers from facing the burden of many individual lawsuits; however, they can also be costly to employers because they may result in large amounts of damages. For fiscal year 2012, we found that an estimated 58 percent of the FLSA lawsuits filed in federal district court were filed individually, and 40 percent were filed as collective actions. An estimated 16 percent of the FLSA lawsuits filed in fiscal year 2012 (about a quarter of all individually filed lawsuits), however, were originally part of a collective action that was decertified (see fig. 2).

Federal courts in most states experienced increases in the number of FLSA lawsuits filed between 1991 and 2012, but large increases were concentrated in a few states, including Florida, New York, and Alabama. Of all FLSA lawsuits filed since 2001, more than half were filed in these three states, and in 2012, about 43 percent of all FLSA lawsuits were filed in Florida (33 percent) or New York (10 percent). In both Florida and New York, growth in the number of FLSA lawsuits filed was generally steady, while changes in Alabama involved sharp increases in fiscal years 2007 and 2012, with far fewer lawsuits filed in other years (see fig. 3). Each spike in Alabama coincided with the decertification of at least one large collective action, which likely resulted in multiple individual lawsuits. For example, in fiscal year 2007, 2,496 FLSA lawsuits (about one-third of all FLSA lawsuits) were filed in Alabama, up from 48 FLSA lawsuits filed in Alabama in fiscal year 2006. In August 2006, a federal district court in Alabama decertified a collective action filed by managers of Dollar General stores. In its motion to decertify, the defendant estimated that the collective action contained approximately 2,470 plaintiffs.

In fiscal year 2012, an estimated 97 percent of FLSA lawsuits were filed against private sector employers, and an estimated 57 percent of FLSA lawsuits were filed against employers in four industry areas: accommodations and food services; manufacturing; construction; and “other services,” which includes services such as laundry services, domestic work, and nail salons.
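The percentage figures above follow directly from the counts and shares cited. As a minimal illustrative sketch (using only the rounded numbers reported in the text, not new data), the arithmetic can be checked as follows:

```python
# Arithmetic check of the filing figures cited above; all inputs are the
# rounded figures reported in the text, not new data.

filed_1991 = 1327  # FLSA lawsuits filed in 1991
filed_2012 = 8148  # FLSA lawsuits filed in 2012

growth = (filed_2012 - filed_1991) / filed_1991
print(f"Growth, 1991-2012: {growth:.0%}")  # ~514%, i.e., "over 500 percent"

individual_share = 0.58   # estimated share of FY 2012 lawsuits filed individually
decertified_share = 0.16  # estimated share originally part of a decertified collective action

# Decertified lawsuits as a fraction of individually filed lawsuits
print(f"Decertified share of individual filings: {decertified_share / individual_share:.0%}")
# ~28%, consistent with "about a quarter of all individually filed lawsuits"
```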
Almost one-quarter of all FLSA lawsuits filed in fiscal year 2012 (an estimated 23 percent) were filed by workers in the accommodations and food service industry, which includes hotels, restaurants, and bars. At the same time, almost 20 percent of FLSA lawsuits filed in fiscal year 2012 were filed by workers in the manufacturing industry. In our sample, most of the lawsuits involving the manufacturing industry were filed by workers in the automobile manufacturing industry in Alabama, and most were individual lawsuits filed by workers who were originally part of one of two collective actions that had been decertified.

FLSA lawsuits filed in fiscal year 2012 included a variety of different types of alleged FLSA violations, and many included allegations of more than one type of violation. An estimated 95 percent of the FLSA lawsuits filed in fiscal year 2012 alleged violations of the FLSA's overtime provision, which requires certain types of workers to be paid at one and a half times their regular rate for any hours worked over 40 during a workweek. Almost one-third of the lawsuits contained allegations that the worker or workers were not paid the federal minimum wage. We also identified more specific allegations about how workers claimed their employers violated the FLSA. For example, nearly 30 percent of the lawsuits contained allegations that workers were required to work “off-the-clock” so that they would not need to be paid for that time. In addition, the majority of lawsuits contained other FLSA allegations, such as that the employer failed to keep proper records of hours worked by the employees, failed to post or provide information about the FLSA, as required, or violated requirements pertaining to tipped workers such as restaurant wait staff (see fig. 4). An estimated 14 percent of FLSA lawsuits filed in federal district court in fiscal year 2012 included an allegation of retaliation.

While the statute of limitations for filing an FLSA claim is 2 years (3 years if the violation is “willful”), New York state law provides a 6-year statute of limitations for filing state wage and hour lawsuits. A longer statute of limitations may increase potential financial damages in such cases because more pay periods are involved and because more workers may be involved. Adding a New York state wage and hour claim to an FLSA lawsuit in federal court could expand the potential damages, which, according to several stakeholders, may influence decisions about where and whether to file a lawsuit. In addition, according to multiple stakeholders we interviewed, because Florida lacks a state overtime law, those who wish to file a lawsuit seeking overtime compensation generally must do so under the FLSA.

Ambiguity in applying the law and regulations. Ambiguity in applying the FLSA statute or regulations—particularly the exemption for executive, administrative, and professional workers—was cited as a factor by a number of stakeholders. In 2004, DOL issued a final rule updating and revising its regulations in an attempt to clarify this exemption and provided guidance about the changes, but a few stakeholders told us there is still significant confusion among employers about which workers should be classified as exempt under these categories.

Industry trends. As mentioned previously, about one-quarter of FLSA lawsuits filed in fiscal year 2012 were filed by workers in the accommodations and food service industry.
Nationally, service jobs, including those in the leisure and hospitality industry, increased from 2000 to 2010, while most other industries lost jobs during that period. Federal judges in New York and Florida attributed some of the concentration of such litigation in their districts to the large number of restaurants and other service industry jobs in which wage and hour violations are more common than in some other industries. An academic who focuses on labor and employment relations told us that changes in the management structure in the retail and restaurant industry may have contributed to the rise in FLSA lawsuits. For example, frontline managers who were once exempt have become nonexempt as their nonmanagerial duties have increased as a portion of their overall duties.

We also reviewed DOL's annual process for determining how to target its enforcement and compliance assistance resources. The agency targets for enforcement industries that, according to its recent enforcement data, have a higher likelihood of FLSA violations, along with other factors. In addition, according to WHD internal guidance, the agency's annual enforcement plans should contain strategies to engage related stakeholders in preventing such violations. For example, if a WHD office plans to investigate restaurants to identify potential violations of the FLSA, it should also develop strategies to engage restaurant trade associations about FLSA-related issues so that these stakeholders can help bring about compliance in the industry. However, DOL does not compile and analyze relevant data, such as information on the subjects or the number of requests for assistance it receives from employers and workers, to help determine what additional or revised guidance employers may need to help them comply with the FLSA. In developing its guidance on the FLSA, WHD does not use a systematic approach that includes analyzing this type of data. In addition, WHD does not have a routine, data-based process for assessing the adequacy of its guidance. For example, WHD does not analyze trends in the types of FLSA-related questions it receives. This type of information could be used to develop new guidance or improve the guidance WHD provides to employers and workers on the requirements of the FLSA.

Because of these issues, we recommended that WHD develop a systematic approach for identifying areas of confusion about the requirements of the FLSA that contribute to possible violations and for improving the guidance it provides to employers and workers in those areas. This approach could include compiling and analyzing data on requests for guidance on issues related to the FLSA, and gathering and using input from FLSA stakeholders or other users of existing guidance through an advisory panel or other means. While improved DOL guidance on the FLSA might not affect the number of lawsuits filed, it could increase the efficiency and effectiveness of DOL's efforts to help employers voluntarily comply with the FLSA. A clearer picture of the needs of employers and workers would allow WHD to more efficiently design and target its compliance assistance efforts, which may, in turn, result in fewer FLSA violations. WHD agreed with our recommendation that the agency develop a systematic approach for identifying and considering areas of confusion that contribute to possible FLSA violations to help inform the development and assessment of its guidance.
WHD stated that it is in the process of developing systems to further analyze trends in communications received from stakeholders such as workers and employers and will include findings from this analysis as part of its process for developing new or revised guidance.

In closing, while there has been a significant increase in FLSA lawsuits over the last decade, it is difficult to determine the reasons for the increase. It could suggest that FLSA violations have become more prevalent, that FLSA violations have been reported and pursued more frequently than before, or a combination of the two. It is also difficult to determine the effect that the increase in FLSA lawsuits has had on employers and their ability to hire workers. However, the ability of workers to bring such suits is an integral part of FLSA enforcement because of the limits on DOL's capacity to ensure that all employers are in compliance with the FLSA.

Chairman Walberg, Ranking Member Courtney, and members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you may have.

For further information regarding this statement, please contact Andrew Sherrill at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Betty Ward-Zukerman (Assistant Director), Catherine Roark (Analyst in Charge), David Barish, James Bennett, Sarah Cornetto, Joel Green, Kathy Leslie, Ying Long, Sheila McCoy, Jean McSween, and Amber Yancey-Carroll.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The FLSA sets federal minimum wage and overtime pay requirements applicable to millions of U.S. workers and allows workers to sue employers for violating these requirements. Questions have been raised about the effect of FLSA lawsuits on employers and workers and about WHD's enforcement and compliance assistance efforts as the number of lawsuits has increased. This statement examines what is known about the number of FLSA lawsuits filed and how WHD plans its FLSA enforcement and compliance assistance efforts. It is based on the results of a previous GAO report issued in December 2013. In conducting the earlier work, GAO analyzed federal district court data from fiscal years 1991 to 2012 and reviewed selected documents from a representative sample of lawsuits filed in federal district court in fiscal year 2012. GAO also reviewed DOL's planning and performance documents and interviewed DOL officials as well as stakeholders, including federal judges, plaintiff and defense attorneys who specialize in FLSA cases, officials from organizations representing workers and employers, and academics.

Substantial increases occurred over the last decade in the number of civil lawsuits filed in federal district court alleging violations of the Fair Labor Standards Act of 1938, as amended (FLSA). Federal courts in most states experienced increases in the number of FLSA lawsuits filed, but large increases were concentrated in a few states, including Florida and New York. Many factors may contribute to this general trend; however, the factor cited most often by stakeholders GAO interviewed—including attorneys and judges—was attorneys' increased willingness to take on such cases. In fiscal year 2012, an estimated 97 percent of FLSA lawsuits were filed against private sector employers, often from the accommodations and food services industry, and 95 percent of the lawsuits filed included allegations of overtime violations.

The Department of Labor's Wage and Hour Division (WHD) has an annual process for planning how it will target its enforcement and compliance assistance resources to help prevent and identify potential FLSA violations. In planning its enforcement efforts, WHD targets industries that, according to its recent enforcement data, have a higher likelihood of FLSA violations. WHD, however, does not have a systematic approach that includes analyzing relevant data, such as the number of requests for assistance it receives from employers and workers, to develop its guidance, as recommended by best practices previously identified by GAO. In addition, WHD does not have a routine, data-based process for assessing the adequacy of its guidance. For example, WHD does not analyze trends in the types of FLSA-related questions it receives from employers or workers. According to plaintiff and defense attorneys GAO interviewed, more FLSA guidance from WHD would be helpful, such as guidance on how to determine whether certain types of workers are exempt from the overtime pay and other requirements of the FLSA.

In its December 2013 report, GAO recommended that the Secretary of Labor direct the WHD Administrator to develop a systematic approach for identifying and considering areas of confusion that contribute to possible FLSA violations to help inform the development and assessment of its guidance. WHD agreed with the recommendation and described its plans to address it.
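The overtime and minimum wage provisions at issue in most of these lawsuits reduce to simple arithmetic. The sketch below illustrates the basic rule described in this statement, time and a half for hours worked over 40 in a workweek, using hypothetical hours and pay rates; it deliberately ignores the complications (tips, exemptions, regular-rate adjustments) that drive much of the actual litigation.

```python
# Minimal illustration of the FLSA pay rules discussed above: covered,
# nonexempt workers must receive at least the federal minimum wage and one
# and a half times their regular rate for hours worked over 40 in a
# workweek. The hours and rates below are hypothetical, not drawn from any
# lawsuit in the sample reviewed.

FEDERAL_MINIMUM_WAGE = 7.25  # dollars per hour, the federal rate since 2009

def weekly_pay_owed(hours_worked: float, regular_rate: float) -> float:
    """Return the minimum weekly pay the FLSA's basic rules would require."""
    rate = max(regular_rate, FEDERAL_MINIMUM_WAGE)  # the rate may not fall below the minimum wage
    straight_time = min(hours_worked, 40) * rate
    overtime = max(hours_worked - 40, 0) * rate * 1.5  # time and a half over 40 hours
    return straight_time + overtime

# Example: 48 hours at a $10.00 regular rate
print(weekly_pay_owed(48, 10.00))  # 400.0 straight time + 120.0 overtime = 520.0
```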
Under the DERP, DOD is authorized to identify, investigate, and clean up environmental contamination and other hazards at FUDS as well as active installations. To that end, DOD has established restoration goals and identified over 31,000 sites that are eligible for cleanup, including more than 21,000 sites on active installations, more than 5,000 sites on installations identified for Base Realignment and Closure (BRAC), and 4,700 FUDS. The DERP was established by section 211 of the Superfund Amendments and Reauthorization Act of 1986 (SARA), which amended the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). Under the DERP, DOD's activities addressing hazardous substances, pollutants, or contaminants are required to be carried out consistent with section 120 of CERCLA. DOD delegated its authority for administering the cleanup of FUDS to the Army, which in turn delegated its execution to the Army Corps of Engineers (the Corps). Funding for cleanup activities comes from the Environmental Restoration and BRAC accounts. The Environmental Restoration account funds cleanup at active sites and FUDS properties; of the $1.4 billion obligated in fiscal year 2007, FUDS property obligations totaled $116.5 million for addressing hazardous substances and $102.9 million for munitions response.

To be eligible for FUDS cleanup, a property must have been owned by, leased to, possessed by, or otherwise controlled by DOD during the activities that led to the presence of hazards. These hazards may include unsafe buildings, structures, or debris, such as weakened load-bearing walls; hazardous, toxic, and radioactive substances, which include contaminants such as arsenic, certain paints, some solvents, and petroleum; containerized hazardous, toxic, and radioactive waste, such as transformers and aboveground or underground storage tanks containing petroleum, solvents, or other chemicals that have been released into the environment; and ordnance and explosive materials, such as military munitions and chemical warfare agents.

To determine if a property is eligible for cleanup under the FUDS program, the Corps conducts a preliminary assessment of eligibility to determine whether the property was ever owned or controlled by DOD and if hazards caused by DOD's use may be present. If the Corps determines that the property was owned or controlled by DOD but does not find evidence of any hazards caused by DOD, it designates the property as “no DOD action indicated” (NDAI). If, however, the Corps determines that a DOD-caused hazard may be present, the Corps begins to further study and/or clean up the hazard, consistent with CERCLA. The CERCLA process generally includes the following phases: preliminary assessment, site inspection, remedial investigation/feasibility study, remedial design/remedial action, and long-term monitoring.

To address the release of hazardous substances, pollutants, or contaminants resulting from past practices that pose environmental health and safety risks at both active sites and FUDS, DOD established the Installation Restoration Program (IRP) in 1985 under the DERP. In fiscal year 2007, the Corps had 2,612 FUDS in the IRP. DOD has developed performance metrics and comprehensive goals to assess progress toward the agency's IRP goals.
These goals include progress in reaching a CERCLA cleanup phase at the site level, progress toward achieving a “remedy in place” or “response complete” status at the installation level, and progress in achieving overall relative-risk reduction. Specific targets are included in DOD's annual report to Congress.

To better focus its munitions cleanup activities at both active sites and FUDS, DOD established the Military Munitions Response Program (MMRP) in September 2001, as part of the DERP, specifically to address potential explosive and environmental hazards associated with munitions. The objectives of the program include compiling a comprehensive inventory of military munitions sites, establishing a prioritization protocol for sequencing work at these sites, and establishing program goals and performance measures to evaluate progress. In December 2001, shortly after DOD established the program, the Congress passed the National Defense Authorization Act for Fiscal Year 2002, which, among other things, required DOD to develop an initial inventory of defense sites, other than military ranges still in operation, that are known or suspected to contain military munitions by May 31, 2003, and to provide annual updates thereafter. DOD provides these updates as part of its annual report to Congress on Defense environmental programs; in its fiscal year 2007 report, DOD identified 3,537 sites suspected or known to have munitions contamination, an increase of 221 sites from fiscal year 2006. Table 1 provides a summary of DOD performance goals for the MMRP and IRP.

The principal government entities involved in the Spring Valley cleanup include the Corps, the Environmental Protection Agency (EPA), and the District of Columbia. The Corps has led the effort of identifying, investigating, and cleaning up contamination at the site, whereas EPA has primarily consulted with and provided technical assistance to the Corps and the District of Columbia. The District of Columbia's Department of Health has monitored the cleanup's status and adequacy, including, according to the department, assessing the human health risks associated with any exposure to remaining hazards at Spring Valley. Additionally, advisory entities were created to further facilitate decision-making on technical topics.

In 2002, we reported that cleanup progress included the identification and removal of a large number of hazards, including buried ordnance, chemical warfare agents in glass containers, and arsenic-contaminated soil. By April 2002, the Corps had identified and removed 5,623 cubic yards of arsenic-contaminated soil from 3 properties and removed 667 pieces of ordnance––25 of which were chemical munitions––and 101 bottles of chemicals. A March 2009 project overview report by the Corps indicated that, in 2004, the Corps excavated 474 drums of soil and recovered more than 800 items, such as construction debris, ordnance scrap, and laboratory glassware and ceramic pieces. The report also indicated that, by 2006, the Corps had removed 5,500 cubic yards of soil, 117 munitions debris items, 6 intact munitions items, and 31 intact containers; in addition, the excavation, backfilling, and restoration of the debris field that contained these materials was completed.
We reported in 2002 that the primary health risks that influenced cleanup activities were (1) the possibility of injury or death from exploding or leaking ordnance and containers of chemical warfare agents and (2) potential long-term health problems, such as cancers and other health conditions, from exposure to arsenic-contaminated soil. A study by the Department of Health and Human Services' Agency for Toxic Substances and Disease Registry found no evidence of significant exposure to arsenic in the individuals tested in 2002. In 2003, the Corps discovered perchlorate in groundwater at the site and installed at least 38 monitoring wells for sampling. Sampling results identified elevated levels of perchlorate in the project area. Further investigation is underway, with more wells and sampling planned in 2009.

In April 2002, the Army estimated that the remaining cleanup activities at Spring Valley would take 5 years to complete. Total costs for the project were estimated at $145.9 million in fiscal year 2002; by fiscal year 2007, the estimated total costs had increased to $173.7 million. Figure 1 presents information on the annual cost to complete and annual amounts spent to date from 2003 to the present at the Spring Valley site.

When we reviewed the Spring Valley cleanup in 2002, we found that the Army determined that there was no evidence of large-scale burials of hazards remaining at Spring Valley before it received all technical input. For example, while the Army's Toxic and Hazardous Materials Agency reviewed work done by American University and documentation from additional sources, it also contracted with EPA's Environmental Photographic Interpretation Center to review available aerial photographs of the site taken during the World War I era. However, the photographs were not received or reviewed until 1993, according to EPA officials. Despite never having received technical input from EPA on the aerial photographs, in 1986 the Army concluded that if any materials were buried in the vicinity of the university, the amounts were probably limited to small quantities and no further action was needed. However, as we now know, subsequent investigations by the Army discovered additional ordnance in large burial pits and widespread arsenic-contaminated soil.

The experience at Spring Valley is by no means a unique occurrence. Our review of other FUDS nationwide found significant shortcomings in the Corps' use of available information and guidance for making decisions relating to cleanup of contamination at these sites. For example, in 2002, we reported that the Corps did not have a sound basis for determining that about 1,468 of 3,840 FUDS properties––38 percent––did not need further study or cleanup action. Specifically, we found no evidence that the Corps reviewed or obtained information that would allow it to identify all the potential hazards at these properties or that it took sufficient steps to assess the presence of potential hazards. We found that for about 74 percent of all NDAI properties, the site assessment files were incomplete—that is, the files lacked information, such as site maps or photos, that would show facilities, such as ammunition storage facilities, that could indicate the presence of hazards (e.g., unexploded ordnance). We also found that for about 60 percent of all NDAI properties, the Corps may not have contacted all the current owners to obtain information about potential hazards present on the site.
The Corps also appeared to have overlooked or dismissed information in its possession that indicated hazards might be present. For example, at a nearly 1,900-acre site previously used as an airfield by both the Army and the Navy, the file included a map showing bomb and fuse storage units on the site that would suggest the possible presence of ordnance-related hazards; however, we found no evidence that the Corps searched for such hazards. The files also contained no evidence that the Corps took sufficient steps to assess the presence of potential hazards. For example, although Corps guidance calls for a site visit to look for signs of potential hazards, we estimated that the Corps did not conduct the required site visit for 686, or about 18 percent, of all NDAI properties. We found that these problems occurred in part because the Corps’ guidance did not specify (1) what documents or level of detail the agency should obtain when looking for information on the prior uses of and the facilities located at FUDS properties to identify potential hazards or (2) how to assess the presence of potential hazards. For example, some Corps district staff stated that there was no guidance showing the types of hazards normally found at certain types of facilities. We concluded that, since many properties may not have been properly assessed, the Corps did not know the number of additional properties that may require cleanup, the hazards that were present at those properties, the risk associated with these hazards, the length of time needed for cleanup, or the cost to clean up the properties. To address these problems, we recommended that the Corps develop more specific guidelines and procedures for identifying and assessing potential hazards at FUDS and use them to review NDAI files and determine which properties should be reassessed. DOD told us that it has implemented this recommendation; however, according to one major association of state regulators, problems persist in how the Corps makes NDAI determinations in many cases. In 2008, the association published a fact sheet indicating, among other things, that the evidence collected is not adequate for making determinations. We will be reviewing some aspects of this decision-making process as part of our ongoing work on FUDS and the MMRP. At Spring Valley, the Corps’ estimate of the cost to complete cleanup of the site increased about sixfold, from about $21 million to about $124 million, from fiscal year 1997 through fiscal year 2001. Factors such as the future discovery of hazards made it inherently challenging for the Corps to estimate the costs for completing cleanup activities at the site. Future estimates of the cost to complete cleanup of the site also depend on assumptions about how many properties require the removal of arsenic-contaminated soil and how many properties need to be surveyed and excavated to remove possible buried hazards. As these assumptions have changed, the cost to clean up Spring Valley has continued to rise; the most recent estimate, for fiscal year 2007, is $173.7 million. The challenges of estimating the costs of the Spring Valley cleanup are common to many FUDS, and our past work has shown that incomplete data on site conditions and emerging contaminants can interfere with the development of accurate cost and schedule estimates. For example, in 2004, we evaluated DOD’s MMRP and found several weaknesses in preliminary cost estimates for numerous sites.
We found that a variety of factors, including the modeling tool used to compile cost estimates, contributed to these weaknesses. Specifically, when detailed, site-specific information was not available for all sites, we found that DOD used estimates, including assumptions about the amount of acreage known or suspected of containing military munitions, when preparing its cost projections. As a result, the cost estimates varied widely during the life of some cleanup projects. For example, the Corps confirmed the presence of unexploded ordnance at Camp Maxey in Texas and, in 2000, estimated cleanup costs at $45 million. In its fiscal year 2002 annual report, DOD reported that the estimated total cost had nearly tripled, to $130 million; then, in June 2003, the estimate decreased to about $73 million, still 62 percent more than the original cost estimate. The main factors behind these shifting cost estimates, according to the project manager, were changes in the acreage requiring underground removal of ordnance and changes in the amount of ordnance found. To address the challenges of estimating costs, schedules, and other aspects of munitions response, we made a number of recommendations related to various elements of DOD’s comprehensive plan for identifying, assessing, and cleaning up military munitions at potentially contaminated sites. In its response to our 2004 report and recommendations, DOD said that it was working on developing better cost estimates and that the Corps would designate 84 percent of its environmental restoration budget in fiscal year 2007 for investigations and cleanup actions. According to DOD, this funding would help the Corps gather more site-specific information, which in turn could be used to better determine the expected cost to complete cleanup at FUDS. These concerns are not limited to FUDS; they affect operational ranges as well. When we reviewed the development of DOD’s cost estimates for addressing potential liabilities associated with unexploded ordnance, discarded military munitions, and munitions constituents on operational ranges, we found that DOD’s cost estimates for cleanup were questionable because the estimates were based on inconsistent data and unvalidated assumptions. The presence of newly identified contaminants at sites needing cleanup further complicates DOD’s efforts to develop reliable cost estimates. In 2004, we found that DOD did not have a comprehensive policy requiring sampling or cleanup of the more than 200 chemical contaminants associated with military munitions on operational ranges. Of these 200 contaminants, 20 are of great concern to DOD due to their widespread use and potential environmental impact, including perchlorate. According to our 2005 report, perchlorate has been found in the drinking water, groundwater, surface water, or soil in 35 states, the District of Columbia (including the Spring Valley site), and 2 commonwealths of the United States. In its 2007 Annual Report to Congress, DOD indicated that new requirements to address emerging contaminants like perchlorate will drive its investments in cleanup, require modifications in plans and programs, and require adjustments to total cleanup and cost-to-complete estimates. However, there is limited information on the potential costs of addressing these emerging contaminants and how their cleanup may affect overall site cleanup schedules.
This is partly because none of these munitions constituents are currently regulated by a federal drinking water standard under the Safe Drinking Water Act, although perchlorate, for example, is the subject of a federal interim health advisory and several state drinking water standards. Our 2004 report recommended that DOD provide specific funding for comprehensive sampling for perchlorate at sites where no sampling had been conducted; although DOD disagreed at the time, it recently took action to sample hundreds of locations nationwide. Spring Valley has received priority funding due to its proximity to the nation’s capital and its high visibility; however, our past work shows that this is not the case with most FUDS. Over the past 10 years, DOD has invested nearly $42 billion in its environmental programs, which include compliance, restoration, natural resources conservation, and pollution prevention activities. In fiscal year 2007, DOD obligated approximately $4 billion for environmental activities, but only $1.4 billion of this total was used for DERP environmental restoration activities at active installations and FUDS. Of this amount, $1.2 billion funded cleanup of hazardous substances, pollutants, and contaminants from past DOD activities through the IRP, and $215.8 million funded activities to address unexploded ordnance, discarded military munitions, and munitions constituents through the MMRP. Figure 2 shows expenditures through fiscal year 2007, DOD’s estimated costs to complete, and the fiscal year 2007 obligations for the IRP and MMRP at active sites and FUDS. DOD requests separate funding amounts for active sites and FUDS cleanup programs based on specific DERP restoration goals and the total number of sites in each program’s inventory. Goals are set separately for the IRP and MMRP, and target dates for cleanup of high-priority sites differ between these programs. Furthermore, while DOD has established department-wide goals, each service has its own goals, which may differ, and determines the allocation of funds between the IRP and MMRP. Specifically, for the IRP, the DOD goal is to have a remedy in place or response complete for all active sites and FUDS by fiscal year 2020. However, DOD has requested much greater budgets for active sites than for FUDS. For example, for fiscal year 2009, DOD requested $257.8 million for FUDS, only about one-fifth of the amount requested for active sites. Similarly, obligations in fiscal year 2007 totaled $969.8 million for active sites, whereas FUDS obligations totaled only $219.4 million. According to the most recent annual report to Congress, DOD does not expect to complete the IRP goal for FUDS until fiscal year 2060. DOD is aiming to complete cleanup of IRP sites much earlier than MMRP sites, even if higher-risk MMRP sites have not yet been addressed. For the MMRP, DOD’s first goal was to complete preliminary assessments for FUDS as well as active sites by the end of fiscal year 2007. DOD reported that it has reached this goal for 96 percent of MMRP sites. However, it is not clear if this percentage includes sites recently added to the site inventory. DOD also has an MMRP goal of completing all site inspections by the end of fiscal year 2010 but has not yet set a goal for achieving remedy in place or response complete. Our ongoing reviews of the FUDS and MMRP programs will include more in-depth analyses of the prioritization processes used by DOD for active sites and FUDS.
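For reference, the funding figures above can be cross-checked with simple arithmetic. The following sketch uses only amounts reported in this statement; the active-site request figure is implied by the one-fifth comparison rather than reported directly:

\[ \underbrace{\$1.2\ \text{billion}}_{\text{IRP}} + \underbrace{\$215.8\ \text{million}}_{\text{MMRP}} \approx \$1.4\ \text{billion in DERP restoration obligations (fiscal year 2007)} \]
\[ \text{implied fiscal year 2009 active-site request} \approx 5 \times \$257.8\ \text{million} \approx \$1.3\ \text{billion} \]

The same kind of check supports the roughly sixfold cost growth cited earlier for Spring Valley: \( \$124\ \text{million} \div \$21\ \text{million} \approx 5.9 \).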
In our 2002 report on Spring Valley, we reported that the Corps, EPA, and the District of Columbia had made progress on site cleanup by adopting a partnership approach for making cleanup decisions. Importantly, they established a systematic means of communicating information to, and receiving input from, the residents of Spring Valley and other interested members of the public. While the entities did not agree on all cleanup decisions, officials of all three entities (the Corps, the District of Columbia, and EPA) stated that the partnership had been working effectively. However, we have found that this kind of cooperation and coordination does not always occur at other sites nationwide. For example, in 2003, we conducted a survey to determine how the Corps coordinates with state regulators during the assessment and cleanup of FUDS. We found that the Corps did not involve the states consistently and that EPA had little involvement in the cleanup of most FUDS. We found that the Corps informed states of upcoming work at hazardous waste projects 53 percent of the time and requested states’ input and participation 50 percent of the time. We reported that federal and state regulators believed that better coordination with the Corps regarding cleanup at FUDS would increase public confidence in the cleanups and improve their effectiveness. Some state regulators told us that inadequate Corps coordination has made it more difficult for them to carry out their regulatory responsibilities at FUDS properties and that, because of their lack of involvement, they have frequently questioned Corps cleanup decisions at FUDS. Conversely, when Corps coordination has occurred, states have been more likely to agree with Corps decisions. Several states also told us that they would like to see EPA become more involved in the cleanup process, for example, by participating in preliminary assessments of eligibility or providing states with funds to review Corps work. EPA also believed that a better-coordinated effort among all parties would improve the effectiveness of cleanup at FUDS and increase public confidence in the actions taken at these sites, but it emphasized that it did not expect its involvement to be consistent across all phases of work; rather, it would increase its involvement at a site when conditions warranted, for example, if there were “imminent and substantial endangerment” or if it had concerns about the appropriateness of the cleanup. We also found that EPA and DOD disagreed on EPA’s role in the FUDS program. Although EPA is the primary regulator for the FUDS that are on the National Priorities List, the states are typically the primary regulatory agency involved for all other FUDS. EPA told us that its role at some of these unlisted FUDS should be greater because it believes it can help improve the effectiveness of the cleanups and increase public confidence in the program. DOD and some states disagreed with this position because they do not believe there is a need for additional EPA oversight of DOD’s work at unlisted FUDS properties where the state is the lead regulator. We concluded in 2003 that the lack of a good working relationship between two federal cleanup agencies may hamper efforts to properly assess properties for cleanup and may, in some cases, result in some duplication of effort.
We also concluded in this 2003 report that a factor behind the historical lack of consistency in the Corps’ coordination with regulators could be that DOD and Corps guidance does not offer specific requirements that describe exactly how the Corps should involve regulators. To address these shortcomings, we recommended that DOD and the Corps develop clear and specific guidance that explicitly includes, among other things, what coordination should take place during preliminary assessments of eligibility on projects involving ordnance and explosive waste. We also recommended that DOD and the Corps assess recent efforts to improve coordination at the national as well as the district level and promote wider distribution of best practices, and that they work with EPA to clarify their respective roles in the cleanup of former defense sites that are not on the National Priorities List. DOD, responding for itself and the Corps, generally agreed with our recommendations and has since implemented additional changes to improve its coordination with regulators, including revising its guidance to include step-by-step procedures for regulatory coordination at each phase of FUDS cleanup. However, we have not reassessed DOD’s efforts or reviewed its coordination efforts since our 2003 report. In addition to better coordination with regulators, our past work has shown that the Corps frequently did not notify property owners of its determinations that their properties did not need further action, as called for in its guidance, or instruct the owners to contact the Corps if evidence of DOD-caused hazards was found later. In 2002, we estimated that the Corps failed to notify current owners of its determinations for about 72 percent of the properties that the Corps determined did not need further study or cleanup action. Even when the Corps notified the owners of its determinations, we estimated that for 91 percent of these properties it did not instruct the owners to contact the Corps if evidence of potential hazards was found later. In some cases, several years elapsed before the Corps notified owners of its determinations. We concluded that this lack of communication with property owners hindered the Corps’ ability to reconsider, when appropriate, its determinations that no further study or cleanup action was necessary. As a result of our findings, we recommended that the Corps consistently implement procedures to ensure that owners are notified of NDAI determinations and of its policy of reconsidering its determinations if evidence of DOD-caused hazards is found later. DOD has implemented this recommendation, although we have not reviewed its implementation. In conclusion, Mr. Chairman, as we move forward on the cleanup of the Spring Valley site, we believe that the lessons learned from DOD’s national environmental cleanup programs provide valuable insights that could guide decision-making and also inform the oversight process. The experience at the national level tells us that while not all the information that DOD needs is always available, it is imperative that the information that is available be duly considered when developing cleanup plans and estimates. Moreover, involving regulators and property owners can also better ensure that DOD has the best information on which to base its decisions. Finally, it is important to recognize that emerging and unexpected situations can cause significant changes in both costs and schedules, which could also have funding implications for specific cleanup sites.
This concludes my prepared statement. I will be happy to respond to any questions from you or other Members of the Subcommittee. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Anu Mittal at (202) 512-3841 or [email protected]. Key contributors to this testimony were Diane Raynes, Elizabeth Beardsley, Alison O’Neill, Justin Mausel, and Amanda Leisoo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Under the Defense Environmental Restoration Program (DERP), the Department of Defense (DOD) has charged the Army Corps of Engineers (the Corps) with cleaning up 4,700 formerly used defense sites (FUDS) and active sites that were under its jurisdiction when they were initially contaminated. The 661-acre Spring Valley site in Washington, D.C., is one such site. Like many other FUDS, the Spring Valley site was used by the U.S. Army during World War I for research and testing of chemical agents, equipment, and munitions. Most of the site is now privately owned and includes private residences, a hospital, and several commercial properties. The primary threats at the site are buried munitions, elevated arsenic in site soils, and laboratory waste; perchlorate was also found onsite. This testimony discusses GAO's past work relating to remediation efforts at FUDS and military munitions sites to provide context for issues at Spring Valley. Specifically, it addresses: (1) the impact that shortcomings in information and guidance can have on decision-making; (2) the impact that incomplete data can have on cost estimates and schedules; (3) how funding for a particular site may be influenced by overall program goals; and (4) how better coordination can increase public confidence in cleanups and facilitate effective decision-making. GAO has made several prior recommendations that address these issues, with which, in most cases, the agency concurred. GAO's past work has found significant shortcomings in the Corps' use of available information and guidance for making decisions relating to cleanup of FUDS. For example, in 2002, GAO found that the Army had determined that there was no evidence of large-scale burials of hazards remaining at Spring Valley before it had received all technical input. This experience is not unique. In a 2002 national study, GAO reported that the Corps did not have a sound basis for determining that about 1,468 of 3,840 FUDS properties (38 percent) did not need further study or cleanup action. GAO attributed these shortcomings to limitations in the Corps guidance, which did not specify what documents or level of detail the agency should obtain to identify potential hazards at FUDS or how to assess the presence of potential hazards. GAO's past work has also shown that incomplete data on site conditions and emerging contaminants can interfere with the development of accurate cost and schedule estimates. At Spring Valley, the Corps' estimates of cleanup costs increased about sixfold, from about $21 million to about $124 million, from fiscal year 1997 through fiscal year 2001. As assumptions about site conditions changed and new hazards were discovered, the estimates continued to rise and currently stand at about $174 million. Again, these problems are not unique. In 2004, GAO evaluated DOD's cleanup of sites with military munitions and found several similar weaknesses in preliminary cost estimates for numerous sites across the country. GAO's past work has shown that funding available for specific sites may be influenced by overall program goals and other priorities. Spring Valley has received priority funding due to its proximity to a major metropolitan area and its high visibility; however, GAO's past work shows that this is usually not the case with most FUDS sites.
Over the past 10 years, DOD has invested nearly $42 billion in its environmental programs, but it typically requests and receives a relatively smaller amount of funding for environmental restoration activities at FUDS sites compared to the funding available for active sites. GAO's past work has found that better coordination and communication with regulators and property owners can increase public confidence and facilitate effective decision-making for contaminated sites. With regard to Spring Valley, GAO reported in 2002 that the Corps, the Environmental Protection Agency (EPA), and the District of Columbia had made progress because they had adopted a partnership approach to cleanup decisions. However, this kind of cooperation and coordination does not always occur nationwide. For example, in 2003, GAO reported that the Corps informed states of upcoming work and requested input from them only about half of the time. Similarly, GAO found that the Corps did not always communicate with property owners about the decisions it made regarding contamination at FUDS sites, and more often than not it did not inform property owners about how to contact the Corps in the event that further hazardous substances were identified at the site.
The Department of Defense’s military compensation package includes a collection of pays and benefits used to recruit and retain active duty servicemembers, including basic pay, allowances for housing and subsistence, and federal tax advantages. In addition, servicemembers can be provided with compensation for specific duties, occupations, or conditions of service in the form of S&I pays. As we reported in 2011, DOD and the services are authorized to offer various S&I pays that provide targeted monetary incentives to specific groups of personnel to influence staffing levels in specific situations in which less costly methods have proven inadequate or impractical. The services use a variety of S&I pay programs to help meet staffing targets for the three high-skill occupations we selected as case studies for this review (see table 1). These S&I pay programs are generally used to improve accession and retention of servicemembers. We discuss the services’ use of S&I pays to support these occupations in more detail in appendixes II through IV. In its 2008 Report of the Tenth Quadrennial Review of Military Compensation (QRMC), DOD recommended consolidating the more than 60 S&I pays into 8 broad categories in order to increase the pay system’s flexibility and effectiveness as a force management tool. These categories include enlisted force management pay, officer force management pay, nuclear officer force management pay, aviation officer force management pay, health professions officer force management pay, hazardous duty pay, assignment or special duty pay, and skill incentive or proficiency pay. Prior to the release of DOD’s Tenth QRMC in 2008, Congress authorized the consolidation of the 60 legacy authorities into 8 authorities based on these categories. This change is required to be completed by January 28, 2018. In addition to the 8 consolidated authorities, existing authorities for the 15-year career status bonus and the critical skills retention bonus were retained. According to DOD, as of October 2016, 5 of the 8 consolidated special pay authorities had been fully or partially implemented by revising and transitioning existing S&I pay programs in conformance with the new consolidated authorities. According to a DOD official, implementation of the remaining 3 consolidated authorities is expected to be completed by October 2017. In June 2011, we recommended that DOD monitor its efforts in consolidating S&I pay programs under its new authorities to determine whether consolidation resulted in greater flexibility. DOD officials had previously stated that they would not be able to assess whether the consolidation resulted in greater flexibility until the consolidation is complete. See appendix I for additional details on DOD’s implementation of the consolidation effort. Within the Office of the Secretary of Defense, the Deputy Under Secretary of Defense for Personnel and Readiness is responsible for DOD personnel policy and total force management. The Principal Deputy Under Secretary of Defense for Personnel and Readiness, under the Under Secretary of Defense for Personnel and Readiness, is responsible for providing overall guidance in the administration of the enlistment bonus, accession bonus for new officers in critical skills, selective reenlistment bonus, and critical skills retention bonus programs. It is DOD policy that the military services use enlistment, accession, reenlistment, and retention bonuses as incentives in meeting personnel requirements.
The intent of bonuses is to attract and retain servicemembers in specific skills or career fields in which less costly methods have proven inadequate or impractical. According to policy, the military services must exercise this authority in the most cost-effective manner, considering bonus employment in relation to overall skill, training, and utilization requirements. Military skills selected for the award of enlistment, accession, reenlistment, and/or retention bonuses must be essential to the accomplishment of defense missions. DOD has experienced an overall decrease in active duty S&I pay obligations since fiscal year 2005, but it does not report comparable data on Reserve Component S&I pay programs. Our analysis of DOD’s annual budget data shows that obligations for S&I pays for active duty military personnel, after accounting for inflation, decreased by 42 percent from fiscal year 2005 through fiscal year 2015, from $5.8 billion to $3.4 billion. This decrease coincided with a 12 percent decline in active duty military average strengths. DOD officials attributed the decrease to a combination of reduced overseas contingency operations, a reduced annual average strength of the force, and a favorable recruiting climate. DOD does not report complete information on S&I pay obligations for the Reserve Components, in part because DOD’s Reserve Components are not required to separately collect and report all S&I pay obligations in annual budget materials provided to Congress, thus limiting the extent to which we could identify and evaluate changes occurring within Reserve Components’ S&I pay programs. Our analysis of DOD budget data shows that from fiscal year 2005 through fiscal year 2015 the department’s active duty S&I pay obligations decreased by 42 percent, from $5.8 billion to $3.4 billion (see figure 1). In comparison, during the same period, total active duty military personnel obligations decreased by 10 percent, largely due to end strength reductions. Obligations for active duty S&I pays increased from $5.8 billion in fiscal year 2005 to $7.1 billion in fiscal year 2008 (by 22 percent), largely due to the increased use of S&I pays by the Army and the Marine Corps. Service officials attributed the increase to the Army and Marine Corps Grow-the-Force initiative. After peaking in 2008, active duty S&I pay obligations declined to $3.4 billion in fiscal year 2015. DOD officials attributed this decrease to a combination of reduced overseas contingency operations, a drawdown in forces, and an economic recession that led to a more favorable recruiting climate and less need to offer S&I pays. As shown in figure 1, the 42 percent S&I pay obligation decrease from fiscal years 2005 through 2015 also coincided with a 12 percent decline in active duty military average strengths, demonstrating the services’ ability to adjust certain S&I pays in response to changing economic conditions and labor market dynamics, as well as trends in the number of military personnel. From fiscal years 2005 through 2015, obligations for S&I pays varied across the pay categories under which legacy S&I pays are being consolidated (see figure 2). Specifically, since the peak in fiscal year 2008, obligations in all but three consolidated S&I pay categories decreased.
For example, obligations for hazardous duty pay, which consolidated legacy pays that traditionally operated under entitlement authority and were paid to servicemembers performing hazardous duties enumerated in statute, peaked at $1.2 billion in fiscal year 2008 due to the operational tempo and then steadily declined to $259 million in fiscal year 2015 (by 79 percent) as a result of the drawdown in forces. Similarly, the general bonus pay for enlisted members, which accounted for 29 percent of the total obligations for S&I pays from fiscal years 2005 through 2015, grew from $1.7 billion in fiscal year 2005 to $2.5 billion in fiscal year 2008 (by 47 percent) and subsequently declined to $727 million in fiscal year 2015 (by 71 percent). Service officials attributed the increase to the Army and Marine Corps Grow-the-Force initiative, which resulted in an increased use of enlistment and retention bonuses, and attributed the subsequent decrease to the drawdown in forces and the economic recession. Service officials noted that while obligations and the number of Selective Reenlistment Bonus (SRB) contracts have declined overall since fiscal year 2008, for certain high-demand specialties, such as special operations, cyber, and nuclear personnel, obligations and the numbers of bonus contracts have increased due to the need to retain these personnel. The Reserve Components did not consistently collect and report complete obligation data for each S&I pay program. Specifically, the Reserve Components’ budget justification materials did not contain obligation data for S&I pays provided to Guard and reserve members to the same level of detail as the active component, and the Marine Corps Reserve was the only Reserve Component able to provide total obligations for S&I pays. Depending on the type of duty they are performing, Reserve Component members may be eligible for special and incentive pays, such as aviation career incentive pay, foreign language proficiency pay, special pays for health professionals, diving duty pay, hazardous duty pays, and others. Reservists are generally eligible for special and incentive pays during active duty training under the same conditions as active component personnel. Typically, they may receive a pro-rated portion of the full monthly amount corresponding to the number of days served. Reserve Component members may also be eligible for special and incentive pays during inactive duty for training, and they typically receive such compensation at a rate proportional to the amount of inactive duty compensation they receive (i.e., one-thirtieth of the monthly rate for each unit training assembly); a worked illustration of this proration appears after this discussion. Our review of the services’ annual budget materials found that the services did not report Reserve Component S&I pay obligations in their annual budget materials in a manner consistent with the active component. This was because DOD’s Financial Management Regulation, which provides guidance for a uniform budget and accounting classification that is to be used for preparing budget estimates, including the budget justification materials we reviewed, does not require the services to do so. For the active military personnel budget materials, the regulation contains guidance and a framework that require the services to separately report obligations for each S&I pay.
In contrast, the regulation requires the Reserve Components to list obligations for certain bonuses but does not require them to report obligations for all S&I pays separately; instead, it specifies that many Reserve Component S&I pays be grouped together with other military personnel obligations under a single budget activity. For example, in accordance with DOD guidance, all the Reserve Components reported obligations for accession, reenlistment, and enlistment bonuses for their administration and support personnel. In addition to the bonuses, the Navy Reserve separately identified obligations for aviation continuation pay and foreign language proficiency pay in its annual budget materials for its administration and support personnel. The Air Force Reserve also separately identified obligations for foreign language proficiency pay in its annual budget materials for its administration and support personnel. However, for many of the other S&I pays, the services grouped S&I pay obligations with other military personnel obligations under a single budget activity, as is allowed under the regulation. We requested that the Reserve Components provide us with S&I pay obligation data that were not contained in annual budget materials, but the Marine Corps Reserve was the only Reserve Component that was able to provide obligations for each S&I pay for all the years included in our review (fiscal years 2005 through 2015). Army, Navy, and Air Force officials told us that their systems were not originally designed to collect and report obligations for individual S&I pays for the Reserve Components. As a result, the Army and the Air Force could not provide additional data. The Navy provided some additional data, but we determined that these data were not reliable because of inconsistencies and incompleteness. For example, the Navy could not provide obligation data for all S&I pays, instead providing execution data. Further, according to Navy officials, certain S&I pays are not consistently categorized in their system, making it difficult to identify cost trends in these S&I pays over time. In addition, these services could not provide the portion of their Reserve Components’ military personnel budgets that S&I pay obligations represent. S&I pay obligations for the Marine Corps Reserve accounted for roughly 2 percent ($172 million) of its total military personnel budget on average from fiscal years 2005 through 2015. However, this percentage may not be representative of all the services’ Reserve Components, as the services’ reliance on S&I pays can vary, just as we observed variability among the active components. As with the active duty trend, Marine Corps Reserve data indicated that obligations peaked, in this case in fiscal year 2009, due in part to an increased offering of new enlistment bonuses. According to Marine Corps officials, this increase helped to support recruitment and retention of additional Marines required to sustain two major theater combat operations as well as to provide forces to Special Operations, Cyberspace Operations, and various headquarters staffs. The Office of the Under Secretary of Defense (Comptroller) has established budgetary information as a priority area for DOD’s Financial Improvement and Audit Readiness Plan. The Comptroller’s memorandum establishing these priorities states that, because budgetary information is used widely and regularly for management, DOD will place the highest priority on improving its budgetary information and processes.
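The percentage changes and the proration rule cited above follow directly from the reported figures. The following is a rough worked illustration using only amounts reported in this statement, except for the monthly rate in the proration example, which is a hypothetical figure chosen for ease of arithmetic:

\[ \frac{\$5.8\ \text{billion} - \$3.4\ \text{billion}}{\$5.8\ \text{billion}} \approx 41.4\% \approx 42\% \quad \text{(decline in active duty S\&I pay obligations, fiscal years 2005-2015)} \]
\[ \frac{\$2.5\ \text{billion} - \$727\ \text{million}}{\$2.5\ \text{billion}} \approx 71\% \quad \text{(decline in general bonus pay for enlisted members since fiscal year 2008)} \]
\[ \text{inactive duty proration: } \frac{1}{30} \times \$300\ \text{per month} = \$10\ \text{per unit training assembly (hypothetical \$300 monthly rate)} \]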
In support of DOD’s policy to use the most efficient and cost-effective processes in the military services’ recruitment of new personnel, DOD components are required to track and report all resource information applicable to enlisted members and officers who join the military services, including recruitment incentives. Furthermore, according to key statutes and accounting standards, agencies should develop and report cost information. Besides demonstrating financial accountability in the use of taxpayer dollars by showing the full cost of federal programs, cost information can be used by the Congress and federal executives in making decisions about allocating federal resources, authorizing and modifying programs, and evaluating program performance. Cost information can also be used by program managers in making managerial decisions to improve operating economy and efficiency. According to the Office of the Under Secretary of Defense (Comptroller), there is no requirement to collect and report obligations separately for each of the Reserve Components’ S&I pays. For example, guidance such as DOD’s Financial Management Regulation for DOD’s annual budget justification materials does not require the Reserve Components to collect and report such data. As a result, with the exception of the Marine Corps Reserve, the Reserve Components have not collected and reported S&I pay obligations separately. Furthermore, officials noted that there is no requirement to collect this information because Reserve Component personnel generally accrue S&I pays at a much lower rate than do active duty personnel. DOD officials told us that the services would likely need to make programming changes to various financial and personnel systems in order to separately track and report Reserve Component S&I pay obligations in their budget materials. However, DOD officials were unable to provide estimates of the costs associated with making such changes, and they told us that DOD has not explored other approaches that may be cost-effective for collecting and reporting such information. According to federal internal control standards, agencies should have financial data to determine whether they are meeting their goals for accountability for the effective and efficient use of resources, which would apply to DOD in gauging the cost-effectiveness of changes to its financial and personnel systems for tracking and reporting S&I pay obligations for reservists. Although S&I pay amounts provided to Reserve Component servicemembers are most likely a fraction of what is paid to active component servicemembers, the total amounts could add up to hundreds of millions of dollars over time, on the basis of data provided by the Marine Corps. Furthermore, according to Marine Corps officials, analysts were able to obtain data on S&I obligations for the fiscal years under our review using their financial pay systems without making any changes to these systems. Until DOD explores cost-effective approaches to collect and report S&I pay program data for the Reserve Components, DOD may not know the full cost of its S&I pay programs, may not be able to make fully informed decisions about resource allocation, and may not be able to evaluate program performance over time. The military services have largely applied key principles of effective human capital management in using S&I pay programs to retain servicemembers within our selected case study occupations (nuclear propulsion, aviation, and cybersecurity).
However, the consistency with which the services applied these principles varied by service and by occupation. DOD and the services have not taken steps to fully ensure consistent application of these principles in some S&I pay programs for the selected occupations and thereby help ensure effectiveness in the programs’ design. Our review found that the military services largely applied key principles of effective human capital management in selected S&I pay programs; however, the consistency with which they applied the principles varied across the services and across the selected nuclear, aviation, and cybersecurity occupations. In our March 2002 report on strategic human capital management, we stated that making targeted investments in employees is a critical success factor associated with acquiring, developing, and retaining talent. Our report noted that, in making such investments, agencies must consider competing demands and limited resources and must balance and prioritize those factors. Similarly, in its Eleventh QRMC, DOD outlined a number of “core elements” for ensuring that investments in S&I pay strategies are cost-effective and optimize limited resources. On the basis of our prior work and the recommendations from DOD’s QRMC, we selected seven key principles of human capital management that can be applied to assess whether the services’ S&I pay programs are designed to ensure their effectiveness. These seven key principles of human capital management include, among other things, decision-making about human capital investment that is based largely on the expected improvement of agency results and is implemented in a manner that fosters top talent; consideration of replacement costs when deciding whether to invest in recruitment and retention programs; and assessments of civilian supply, demand, and wages that inform updates to agency plans as needed. Figure 3 lists the seven key principles and our assessment of the extent to which they were addressed in the military services’ S&I pay programs for each of our three case study occupations. Based on our analysis of military service policies and guidance, annual S&I pay program proposals submitted to the Office of the Secretary of Defense (Personnel and Readiness), and interviews with officials, we determined that the services largely applied key human capital principles to the S&I pay programs for the three selected occupations (nuclear propulsion, aviation, and cybersecurity), although the extent to which the principles were applied varied by service and by occupation. The Navy’s nuclear propulsion program demonstrated consistent application of all seven principles throughout the use of S&I pays for both officers and enlisted personnel. We found that the Navy uses a four-part approach to planning, implementing, and monitoring its S&I pay programs for nuclear-trained personnel to ensure effectiveness in nuclear personnel recruitment and retention. Together, these practices align with the principles of effective human capital management. For example, the Navy’s approach addresses key principle #2 by considering the high replacement costs of its nuclear personnel (up to $986,000 per trainee) in justifying a strategy that prioritizes investment in retention initiatives over new accessions or recruits.
In addition, the Navy sets optimal bonus amounts for nuclear officers and enlisted personnel by monitoring civilian nuclear salaries and employment demand (key principle #7), studying the effects of changes to bonus amounts on retention behavior (key principle #5), and making changes to bonus amounts as appropriate (key principle #4). Moreover, the Navy makes informed decisions about its investments in S&I pay programs for nuclear personnel by using both quantitative and qualitative models for predicting numbers of personnel and retention rates as accurately as possible (key principle #6). Finally, Navy officials perform periodic personnel audits to ensure that recipients of its nuclear-related S&I pays are continuing to meet eligibility criteria, thereby helping to ensure that only qualified members are retained (key principle #3). We found that the Navy fully addressed each of the seven key principles of effective human capital management in managing its program to retain pilots. The Army, the Marine Corps, and the Air Force largely applied the principles, but we found that the extent to which the services addressed the key principles varied (see figure 3). For example, all of the services identified opportunities to improve their S&I pay programs and incorporated these changes into the next planning cycle (key principle #6). The Navy and the Marine Corps addressed key principle #6 for their Aviation Continuation Pay (ACP) programs by (1) offering different pay amounts to pilots by specific platform (the model of aircraft a pilot operates) and (2) reducing or suspending the pay when staffing goals had been achieved. In contrast, the Air Force offered ACP to broad categories of pilots across multiple platforms, and it generally offered the maximum amount allowed by law. Only the Navy and the Marine Corps fully incorporated quality measures into decisions to offer S&I pays to pilots (key principle #3). For example, the Navy incorporated quality into its ACP program by targeting the bonus to pilots on track for Department Head positions and canceling bonus contracts for pilots who were not promoted to Department Head. In contrast, the Air Force considered the expected positive effect on retention as a factor for offering ACP but did not specifically consider the relative quality of pilots within a targeted community as a factor for awarding an ACP contract. In addition, the services varied in how they incorporated a review of the civilian aviation sector in their decisions to offer retention bonuses to pilots (key principle #7). For example, the Army has not reviewed or considered commercial aviation in the context of its S&I pay program for pilots, largely because the Army provided ACP only to special operations pilots, and the skill set required for this mission does not have a clear civilian-sector equivalent. The Navy fully addressed this principle by specifically identifying comparable salary levels for commercial aviation pilots. The Air Force and the Marine Corps partially addressed this principle by considering the relationship between the compensation offered to their pilots and to commercial aviation pilots, but they did not specifically identify comparable salary levels and use them to determine retention bonus amounts. In addition, the services reached different conclusions about the extent to which the civilian aviation sector competes with the military for pilots.
Specifically, the Navy stated that airline compensation would have to increase in order to have a significant impact on the retention of Navy pilots, and the Marine Corps reported that the potential increase in hiring by commercial airlines did not warrant the offering of ACP bonuses in fiscal year 2013. In contrast, the Air Force’s reports endorsing aviator retention bonuses stated that civilian aviation compensation factored into the Air Force’s decision to keep bonus amounts at the statutory limit of $25,000 per year. In February 2014, we reported that commercial aviation compensation decreased by almost 10 percent in constant dollars from 2000 to 2012. In July 2016, DOD reported to Congress on aviation-related S&I pays. DOD’s report stated that the military has experienced high levels of pilot retention as a result of decreased civilian airline pilot salaries, significantly reduced civilian airline pilot hiring, increased military pay and benefits, and an increased sense of duty and patriotism after the events of September 11, 2001. However, the report added that the department anticipated that increased hiring by commercial airlines over the ensuing 5 to 7 years could necessitate increasing bonus amounts from $25,000 per year to a range of $38,500 to $62,500 per year. As such, DOD’s report requested that Congress consider increasing the rates of Aviation Career Incentive Pay and specifically increase the maximum authorized level of Aviation Continuation Pay from $25,000 per year to $35,000 per year. Similar to our findings with regard to aviation-related S&I pay programs, we found that the services were also not consistently applying the principles of effective human capital management in implementing S&I pay programs for their cybersecurity personnel. As shown in figure 3 above, our assessment found that although none of the services fully addressed all seven principles for the cybersecurity occupation, they all addressed each principle at least partially. Each service consistently addressed three of the seven principles: having clear and consistently applied criteria (key principle #1), considering the replacement cost of personnel (key principle #2), and identifying opportunities for improvement and incorporating them in planning (key principle #6). For example, the Army, the Navy, the Air Force, and the Marine Corps have all addressed several principles through their development of criteria to guide their decisions to invest in SRBs for cybersecurity personnel. Examples of those criteria include growing requirements, personnel replacement costs (including training), and mission criticality of the skill set. Service officials stated that they considered replacement costs and noted that replacing these personnel would be more costly than offering an SRB. According to service officials, depending on the military occupational specialty, after initial military training, specialized training may take from 8 months to 3 years. Service officials cited costs to train their cyber forces as ranging from about $23,000 to over $500,000. We found that the Navy and the Marine Corps have taken steps to implement their S&I pay programs in a way that would help retain top-performing personnel in the cybersecurity occupation (key principle #3).
For example, in order to retain the most qualified personnel, in fiscal year 2012 the Marine Corps began to use a rating system that helps decision-makers differentiate Marines’ performance during the reenlistment process. According to Army and Air Force officials, the purpose of the SRB program is to retain adequate numbers of qualified enlisted personnel serving in critical skills, and the bonus generally was not designed to target top performers. Further, we found that only the Army has tailored its SRB program to target cybersecurity personnel within non-designated cyber career fields (key principle #4). Specifically, the Army further targets personnel in career fields by location and skill, which enables it to reach cybersecurity personnel in non-designated cyber career fields. The Marine Corps and the Air Force do not target cybersecurity personnel in non-designated cyber career fields. According to Navy officials, the Navy does not have designated cybersecurity career fields and does not directly target cybersecurity personnel when offering bonuses. In addition, the services varied in how they incorporated a review of the civilian cybersecurity sector in their decisions to offer S&I pays to cybersecurity personnel (key principle #7). For example, as part of determining the amount to offer, the Army and the Navy considered the wages of civilians in cyber-related career fields. The Navy noted in its justification for offering selective reenlistment bonuses that sailors within cyber-related career fields could qualify for positions in the civilian workforce with salaries starting at $90,000 and a $5,000 to $10,000 sign-on bonus. According to Marine Corps and Air Force officials, the Marine Corps and the Air Force did not consider civilian wages in cyber-related career fields when determining whether to offer a retention bonus. As noted above, the military services largely incorporated key principles of effective human capital management into the S&I pay programs used for the nuclear propulsion, aviation, and cybersecurity occupations. However, our review found that DOD and the services have not taken steps to fully ensure consistent application of the principles in some S&I pay programs for these selected occupations and to ensure effective program design. First, although DOD reports have stated that S&I pays are used efficiently, we found that DOD has not taken steps to support this conclusion. Specifically, DOD has not reviewed whether its S&I pay programs have incorporated the key principles of human capital management that we identified, or whether they have used resources efficiently, because DOD officials told us that the services and different occupations have unique needs that make comparison and assessment difficult. DOD guidance pertaining to enumerated S&I pay programs generally requires the Assistant Secretary of Defense for Personnel and Readiness, using delegated authority, to monitor and propose revisions to bonus and special pay programs. For example, DOD Directive 1304.21 directs the Principal Deputy Under Secretary of Defense for Personnel and Readiness, acting under the Under Secretary of Defense for Personnel and Readiness, to monitor bonus programs of the military services and recommend measures required to attain the most efficient use of resources devoted to programs on enlistment bonuses, accession bonuses for new officers in critical skills, selective reenlistment bonuses, and critical skills retention bonuses for active members.
Consistent with the policy, the Office of the Under Secretary of Defense for Personnel and Readiness monitors the services’ S&I pay programs, including bonuses. However, on the basis of interviews with DOD officials, we found that DOD has not systematically included in its monitoring efforts a review of whether S&I pay programs have used resources efficiently, nor has it developed the measures required to attain efficient use of S&I pay program resources. The officials stated that DOD has contracted with a research organization to develop a quantitative modeling tool that would enable the services to set cost-efficient bonus amounts. Such a tool may help the services to consistently apply human capital management principle #3 in instances where they are not already doing so in their S&I pay programs for aviation and cybersecurity (see figure 3). This principle calls for investment decisions to be based on expected improvement in agency results and implemented in a manner that fosters top talent. Depending on the inputs to the modeling tool, once developed, it could also help the Army, the Marine Corps, and the Air Force to address principle #7 by assessing civilian supply, demand, and wages and by updating their plans as needed for their S&I pay programs. According to DOD officials, however, progress on this effort has been slowed by competing priorities, that is, by the department’s focus on adjusting the military retirement system. In the absence of measures for ensuring efficiency in S&I pay programs, DOD and the services generally assess their S&I pay programs’ effectiveness by the extent to which they achieve desired staffing targets. However, this approach does not ensure that S&I pay programs are using resources in the most efficient manner, as DOD guidance requires. Until DOD reviews whether its S&I pay programs have incorporated the key principles of human capital management that we identified, reviews whether the programs have used resources efficiently, and prioritizes and completes the establishment of measures for efficient use of resources, DOD and the services may lack assurance that S&I pay programs are effective and that resources are optimized for the greatest return on investment. Second, on the basis of our interviews with DOD officials, we found that the department has not assessed the extent to which its non-monetary incentives could result in the retention of personnel at a lower cost than S&I pays and with equal or better effectiveness. An assessment of this kind would help the services to consistently apply human capital principles #4 and #5 in their cybersecurity S&I pay programs (see figure 3). Specifically, an assessment of non-monetary incentives would help ensure that approaches are tailored to meet the services’ needs by identifying and evaluating unique staffing issues, and by collecting and reviewing historical retention data to evaluate the effects and performance of S&I pay programs. In our case study review of S&I pay programs associated with the nuclear propulsion program, Navy officials told us that changes to S&I pays provide only short-term solutions when retention shortfalls are caused by servicemembers’ quality-of-life concerns about things like deployment lengths and geographic instability. As a result, Navy officials told us that they use a variety of non-monetary incentives to complement S&I pays for retention purposes, including guarantees for shore duty and graduate education opportunities.
We found that DOD and the services also take steps to understand which non-monetary incentives improve servicemember satisfaction and retention. For example, through its periodic Status of Forces surveys, DOD collects information from servicemembers on their satisfaction with non-monetary incentives and their plans to leave or stay in the military, among other things. In addition, DOD officials told us that the services collect feedback from servicemembers who are separating to understand their reasons for leaving. However, according to DOD officials, they have not taken steps to routinely leverage existing feedback mechanisms and evaluate whether these non-monetary approaches can be expanded as less costly alternatives to address retention challenges, because they believe that S&I pay programs may be more efficient than non-monetary incentives. Without conducting routine assessments of the impact of non-monetary incentive approaches on retention behavior and on the necessary levels of S&I pays, DOD and the services do not know whether they are using the most efficient and effective combination of incentives for achieving retention objectives at the lowest possible cost.

Third, with regard to key principle #3, department-level guidance on S&I pay programs does not explicitly incorporate personnel performance into eligibility criteria or retention decisions as a way to foster top talent and improve program results. For example, DOD guidance we reviewed on critical skills retention bonuses does not include explicit provisions addressing personnel performance that would ensure that monetary incentives are targeted to top performers. At the service level, some but not all S&I pay programs for the three case study occupations included direction for targeting pays to personnel based on their levels of performance, consistent with principle #3. For example, the Navy reported that it implements its program for awarding aviation retention bonuses to pilots in a way that explicitly connects the bonus contract to a pilot's successful promotion to a department head position. If the pilot fails to be promoted, the bonus contract is canceled. In addition, the Navy's instruction on its nuclear officer incentive pay program considers performance by excluding servicemembers who are undergoing disciplinary actions, who have failed to maintain nuclear qualifications, or who have failed to be selected for promotion, among other things. DOD officials told us that S&I pay programs were not designed to target top performers and that the services use other means to recognize performance, such as promotions.

Federal internal control standards emphasize the importance of establishing clear and consistent agency objectives, which would apply to DOD's determination about incorporating personnel performance into eligibility criteria or retention decisions for its S&I pay programs. Until the services clarify existing guidance for S&I pay programs regarding the extent to which personnel performance should be incorporated into retention decisions where appropriate, consistent with principle #3, the application and understanding of the guidance may be inconsistent among service officials responsible for managing S&I pay programs.
Finally, with regard to key principle #4, which calls for tailoring approaches for meeting organizational needs by evaluating unique staffing issues, we found that the military services have awarded SRBs to cybersecurity personnel in accordance with their broader military occupational specialty rather than tailoring the awards toward the skill sets within those specialties that have specific or unique staffing shortfalls. According to information received from service officials, the services have some cybersecurity-specific career fields; however, each military service continues to assign cybersecurity personnel to military occupational specialties that include other types of personnel skill sets, such as intelligence or information technology. However, the Army recently began to tailor its SRB program to target cybersecurity personnel. The Navy, the Marine Corps, and the Air Force—unlike the Army—have not imposed other conditions by which to direct SRBs to personnel with specific cybersecurity skill sets within a broader military occupational specialty. According to service officials, cybersecurity is an emerging occupation, and the services have not yet assigned all of their cybersecurity personnel to cybersecurity-designated career fields. Marine Corps officials told us, for example, that cybersecurity within the enlisted community is often a secondary skill set associated with other primary specialties. According to DOD officials, the Marine Corps has the ability to target SRBs to a secondary skill; however, according to Marine Corps officials, the Marine Corps has not begun to do this for the cybersecurity community. As a result, these services have been unable to award SRBs to personnel with cybersecurity skill sets without also awarding them to personnel with other skill sets within the same occupational specialty who may not have the same staffing needs.

DOD's policy on the enlisted bonus program states that the SRB may be used to obtain the reenlistment or voluntary extension of an enlistment in exchange for a member's agreement to serve for a specified period in at least one of the following reenlistment or extension categories: a designated military skill, career field, unit, or grade; or to meet some other condition or conditions imposed by the Secretary of the Military Department concerned. Until the services develop approaches to directly target SRBs to personnel with cybersecurity skill sets, they may award SRBs to specialties that include non-cybersecurity personnel for whom the SRB is unneeded. Further, without consistently targeting their SRBs toward specific skill sets, the services may not be using the SRBs as cost-effectively as possible.

Strategic management of S&I pay programs is important to support DOD's ability to sustain its all-volunteer force, providing a suite of flexible compensation approaches that can be used to address staffing issues more efficiently than basic pay increases. In addition, the effective use of S&I pay programs, like other components of military compensation, is important for the efficient use of budgetary resources. As DOD officials seek to efficiently manage the department's budget, S&I pay programs are likely to be a continued area in which to find efficiencies and cost savings.
However, without exploring cost-effective approaches to collect and report complete obligation data for each S&I pay program for the Reserve Components, DOD may not know the full cost of its S&I pay programs, may not be able to make fully informed decisions about resource allocation, and may not be able to evaluate program performance over time. According to DOD officials, DOD has also not reviewed the extent to which the services' S&I pay programs incorporate key principles of effective human capital management, or whether S&I pay programs have used resources efficiently; nor has it prioritized and completed the establishment of measures for ensuring the efficient use of resources. Furthermore, the military services do not consistently apply key human capital management principles to their S&I pay programs, such as by using non-monetary incentives to retain personnel or incorporating personnel performance into eligibility criteria or retention decisions as a way to foster top talent and improve program results. In addition, service officials told us that the military services have not completely identified cyber workforces, thereby limiting their ability to target S&I pays to critical specialties. Without addressing these issues, DOD and the services may not be able to ensure that S&I pay programs are effectively designed and that resources are optimized for the greatest return on investment.

To facilitate DOD's oversight of the military services' S&I pay programs, and to fully ensure the effectiveness of these programs, we recommend that the Secretary of Defense take the following five actions:

- Direct the Under Secretary of Defense (Comptroller), in coordination with the military services, to explore cost-effective approaches to collect and report S&I pay program data for the Reserve Components.

- Direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the military services, to:
  - review whether S&I pay programs have incorporated key principles of effective human capital management and used resources efficiently, and prioritize and complete the establishment of measures for the efficient use of resources;
  - routinely assess the impact of non-monetary incentive approaches on retention behavior and on the necessary levels of S&I pays; and
  - clarify existing guidance for S&I pay programs regarding the extent to which personnel performance should be incorporated into retention decisions.

- Direct the Secretaries of the Military Departments to develop approaches to directly target SRBs to cybersecurity skill sets.

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix VI, DOD concurred with three of our recommendations and partially concurred with two. DOD also provided technical comments on the draft report, which we incorporated as appropriate.

In regard to our first recommendation—to explore cost-effective approaches to collect and report S&I pay program data for the Reserve Components—DOD concurred, adding that it will maintain its focus on the recruiting and retention pays for both the active and reserve components and will continue to work with the Reserve Components to strengthen the collection of the remaining special and incentive pays. This action could meet the intent of our recommendation if it results in DOD exploring approaches to collect and report more complete and consistent data on S&I pays for the Reserve Components.
In regard to our second recommendation—to review whether S&I pay programs have incorporated key principles of effective human capital management and used resources efficiently, and to prioritize and complete the establishment of measures for the efficient use of resources—DOD partially concurred. In its written comments, DOD stated that it does use key principles of effective human capital management and that, although those principles are not articulated in the same way as GAO's, they share common goals and results. We agree that there are similarities, and, as noted in this report, DOD has demonstrated that it has used many of them. DOD stated that it will support the opportunity to review and improve upon the principles and methods to assess the efficiency of its S&I pay programs and, where appropriate, will incorporate these principles in future DOD policy issuances and updates. We continue to believe that fully implementing the key principles of effective human capital management that we identified would help DOD and the services to ensure that S&I pay programs are effectively designed and that resources are optimized for the greatest return on investment.

In regard to our third and fourth recommendations—to routinely assess the impact of non-monetary incentive approaches on retention behavior and on the necessary levels of S&I pays, and to clarify existing guidance for S&I pay programs regarding the extent to which personnel performance should be incorporated into retention decisions—DOD concurred. In its written comments, DOD provided examples of non-monetary incentives used by the services as alternatives to cash pays and bonuses. DOD also noted that the department will clarify existing guidance regarding the extent to which personnel performance will be incorporated into retention decisions.

In regard to our fifth recommendation—to develop approaches to directly target SRBs to cybersecurity skill sets—DOD partially concurred. In its written comments, DOD stated that the services are responsible for developing their personnel requirements in order to meet individual service needs and that it has provided the services with the necessary staffing tools to recruit and retain servicemembers in the cybersecurity skill sets. DOD also noted that it is crucial for the services to retain their flexibility to utilize these pays and benefits to address service-specific shortfalls within their cybersecurity workforce, and that it will assist the services in growing and maintaining their cybersecurity workforce through existing and future DOD policies. We recognize that the services are responsible for their specific personnel requirements and that flexibility is important. However, as noted in our report, each military service has assigned cybersecurity personnel to military occupational specialties that include other types of personnel skill sets, such as intelligence or information technology. As a result, because the services offer SRBs by military occupational specialty, the services may award SRBs to specialties that include non-cybersecurity personnel for whom the SRB is unneeded. Therefore, we continue to believe that there are benefits to developing approaches to target cybersecurity personnel in non-designated cybersecurity fields.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the Army, the Navy, and the Air Force, and the Commandant of the Marine Corps.
In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

In its September 2008 Tenth Quadrennial Review of Military Compensation, the Department of Defense (DOD) reported that the basic pay table provided limited flexibility to tailor the department's human capital approaches. DOD also noted that for many special pays, detailed eligibility rules and precise payment amounts are set in statute and could be changed only by congressional action. As a result, when staffing needs or market conditions changed, managers sometimes could not adjust special and incentive (S&I) pay eligibility criteria or payment levels in response to those changing circumstances. DOD recommended that the more than 60 S&I pays be replaced with 8 broad categories. In addition to these 8 categories, existing authorities for the 15-year career status bonus and the critical skills retention bonus would be retained. The review identified three benefits of consolidation: (1) increasing the services' flexibility to allocate resources to those areas that would most effectively meet staffing needs; (2) decreasing the number of pays and therefore reducing the administrative burden of managing over 60 different pays with different sets of rules and funding streams; and (3) authorizing the services to allocate S&I pay to their highest priority staffing needs, which would allow the services to respond quickly to changing staffing needs throughout the fiscal year. The National Defense Authorization Act for Fiscal Year 2008 authorized the consolidation and required DOD to complete the transition by January 28, 2018. DOD began implementing the consolidation in 2008 and, according to a DOD official, expects the process to be completed by October 2017 (see figure 4). According to DOD officials and our analysis of updated DOD guidance, as of October 2016, DOD had at least partially transitioned 5 of the 8 consolidated S&I pay program authorities (see table 2). DOD had also fully transitioned the authorities for the 2 legacy pays that were retained. According to a DOD official, implementation of the remaining 3 consolidated authorities is expected to be completed by October 2017.

Officers and enlisted personnel in the Navy's nuclear propulsion program are tasked with safely operating and maintaining the nuclear reactors that power the Navy's fleet of aircraft carriers and submarines (see figure 5). Navy officials described the nuclear propulsion program as rigorous, technically demanding, and staffed with highly trained and skilled personnel. Sailors in the nuclear propulsion occupation totaled nearly 23,000 in fiscal year 2016 (about 6 percent of the Navy's active and Reserve Component personnel), including approximately 5,700 officers and about 17,200 enlisted members. The cadre of nuclear officers includes specialties such as surface warfare officers, submarine warfare officers, engineering duty officers, naval reactor engineers, pilots, and naval flight officers.
Enlisted personnel in the nuclear propulsion program serve as operators and supervisors in the following types of skills, among others: electronics technicians, electrician's mates, engineering laboratory technicians, and machinist's mates. Before their assignment to a nuclear billet, officers and enlisted personnel attend 6 months of classroom study at the Navy's nuclear power school, followed by another 6 months of nuclear prototype training, where they acquire hands-on experience. Navy officials estimated that the cost of nuclear training was about $405,000 per officer or enlisted student as of fiscal year 2016. In addition, the cost of accessing and retaining a nuclear officer through his or her initial service obligation, including expenses for undergraduate education and salary, is estimated to have been $581,000 as of fiscal year 2016.

As of 2016, the U.S. nuclear industry included 60 commercially operating nuclear power plants across 30 states and employed more than 100,000 people. The median pay for a civilian nuclear engineer was about $103,000 per year in 2015, according to the U.S. Bureau of Labor Statistics. Officers and enlisted members of the Navy's nuclear propulsion program can transition easily to civilian employment in the nuclear industry for several reasons, according to Navy officials. First, officials stated that civilian nuclear jobs directly correlate with skills that nuclear personnel acquire in the Navy, and officers can complete civilian nuclear certifications on their first attempt nearly twice as frequently as other new hires can. Second, officials told us that civilian employers can train a new employee hired from the Navy with fewer resources and in about half the time compared with training an employee hired from outside the nuclear propulsion program. Finally, Navy officials told us that, because of a wave of expected civilian retirements through 2021, more than 45,000 civilian nuclear jobs may become available. On the basis of feedback from nuclear personnel leaving the Navy, Navy officials also told us that high salaries in the civilian sector and the appeal of geographic stability are factors that influence retention in the nuclear propulsion program. The Navy estimates that about 80 percent of transitioning nuclear officers accept jobs in technical management in the civilian nuclear industry.

To help meet retention and recruitment goals, the Navy has maintained a long-standing program of special and incentive (S&I) pays for nuclear propulsion personnel. Navy officials told us that these pays are a last line of defense, coupled with non-monetary incentives, for mitigating declining retention in the nuclear community. Currently, there are 11 S&I pays available in the Navy for the recruitment and retention of nuclear personnel. Five of the 11 pays are limited to officers in the nuclear propulsion community and are described in table 3. The other six pays, for enlisted personnel, are discussed later in this appendix. Of the five pays shown in table 3, nuclear officers may receive only one at a time. A nuclear officer will generally receive one or more of these pays over his or her career. The Navy manages the first four pays in table 3 collectively and refers to them as the Nuclear Officer Incentive Pay (NOIP) program. Compared with S&I pays available to officers in other Navy occupations, the community of nuclear propulsion officers ranks second to the medical community in terms of total possible S&I pay compensation over a 30-year career.
Specifically, the Navy estimated that in fiscal year 2015 a medical officer could earn about $1.6 million in S&I pays over his or her career, while a nuclear propulsion officer could earn approximately $1.1 million over a career. The total amount of possible career S&I pays for a nuclear officer is about twice that of the next highest compensated career group—about $530,000 for a Sea, Air, Land (SEAL) officer over his or her career.

There are six S&I pays available to enlisted nuclear personnel in connection with their service in nuclear occupations. These six pays are shown in table 4 and are discussed in further detail below. Of the six S&I pays for enlisted nuclear personnel shown in table 4, all but Enlisted Supervisor Retention Pay (ESRP) are also offered across the Navy to select occupations or specialties outside of the nuclear field. Enlistees selected for nuclear training may be eligible for an enlistment bonus if they complete training and attain their nuclear rating. After attaining their nuclear rating, nuclear-trained enlisted personnel are then eligible to receive monthly Special Duty Assignment Pay (SDAP) of up to $600 per month, depending on their billet. Many enlisted nuclear personnel will also apply for and receive one or more reenlistment bonuses over their careers. Specifically, eligible members may receive a Selective Reenlistment Bonus (SRB) if they have fewer than 10 years of service and an ESRP bonus once they have more than 10 years of service. In addition to monthly SDAP and one or more SRB and ESRP bonuses, nuclear-trained enlisted personnel may also be eligible for monthly Sea Duty Incentive Pay (SDIP) and monthly Assignment Incentive Pay (AIP) at some point (or points) in their careers. SDIP and AIP are limited to sailors who apply and are selected to fill certain critical billets. Within the nuclear propulsion program, SDIP payments of either $450 or $1,000 per month are limited to two specific types of supervisory billets on submarines at sea. As of 2016, AIP amounts are $166.67 per month and are available only to sailors who volunteer to serve as Nuclear Power Training Unit instructors.

The Navy obligated more than $169 million per year in constant 2015 dollars, on average, on nuclear-related S&I pays from fiscal years 2010 through 2015. This $169 million average represented approximately 11 percent of the Navy's average annual obligations for special and incentive pays to all of its active duty personnel during that same period. Although for fiscal years 2010 through 2015 the Navy's total annual obligations for all S&I pays declined by about 17 percent, S&I pays for nuclear personnel increased by 2 percent over the same period in constant 2015 dollars. Retention bonuses for officers and enlisted personnel—specifically, NOIP and SRB—accounted for the largest total obligations of the Navy's S&I pays that we analyzed further for the nuclear occupation (see figure 6).

For fiscal years 2010 through 2015, total obligations for officer S&I pays declined by approximately 7 percent, from approximately $80 million to about $75 million in constant 2015 dollars. NOIP obligations accounted for about 99 percent of those obligations. By contrast, SSIP comprised around 1 percent of obligations over that period because the Navy limits its use to a goal of retaining 25 O-5 or O-6 submarine officers each year.
As shown in figure 7, NOIP obligations for Nuclear Officer Continuation Pay (COPAY), Nuclear Career Annual Incentive Bonuses (AIB), and Nuclear Career Accession Bonuses (NCAB), in particular, varied in accordance with yearly changes in the number of recipients of those pays. Obligations for COPAY, AIB, and NCAB increased in fiscal years 2011 and 2012, following years in which the Navy fell short of officer retention goals. Because retention levels for nuclear surface warfare officers declined each year from 2010 to 2014, according to NOIP program documentation, the Navy increased the COPAY rate for this group of officers from $30,000 to $35,000 (in nominal dollars) per year in December 2014.

For fiscal years 2010 through 2015, total obligations for S&I pays for nuclear enlisted personnel increased by nearly 11 percent, from about $85 million to approximately $94 million in constant 2015 dollars. SRB obligations accounted for about half or more of those total obligations each year. As shown in figure 8, SRB obligations generally increased in proportion with yearly changes in the number of recipients, rising overall by about 45 percent from fiscal year 2010 to fiscal year 2015 in constant 2015 dollars. SRB obligations for nuclear personnel also varied from year to year because of changes the Navy made to the program in terms of the possible amounts that sailors were eligible to receive. Specifically, award ceilings and bonus multiples were increased or decreased for certain nuclear ratings and reenlistment "zones" at different times. For example, in April 2014 seven different nuclear ratings became eligible for an increased, newly established ceiling of $100,000. The Navy also increased the multiples associated with a few ratings and reenlistment zones at that time. On the other hand, in September 2012 the Navy decreased the bonus multiples for three nuclear ratings.

Figure 9 shows that, for fiscal years 2010 through 2015, total obligations for SDAP to nuclear enlisted personnel decreased overall from $27 million to $26 million (about 5 percent) in constant 2015 dollars, while the number of recipients increased (by approximately 9 percent). Our analysis showed that the downward trend in SDAP obligations relative to the increased number of recipients is attributable in part to the effects of inflation: the SDAP award levels have remained the same since December 2006.

The Navy's obligations for ESRP declined steadily each year from fiscal year 2010 through fiscal year 2014 in constant 2015 dollars (see figure 10). Figure 10 also shows that the number of ESRP recipients declined in all years except fiscal year 2012. The increase in that year corresponded with the Navy's January 2012 restructuring of the ESRP program. Specifically, the Navy increased the maximum eligibility period for enlisted personnel from 20 to 23 years of service. This change increased the number of sailors eligible for the bonus and ultimately increased the total number of reenlistments. In addition, the Navy reconfigured its reenlistment zones for ESRP at that time, including narrowing the zone associated with its most expensive contracts and thereby decreasing their associated costs. According to Navy program documents we reviewed, the reconfiguration of ESRP incentivized retention of the most qualified senior enlisted sailors and linked ESRP eligibility to continued achievement of career milestones.
The result, according to the program documents, has been an overall improvement in program outcomes in terms of the sailors retained, with lower yearly costs.

Each of the military services relies on pilots to operate aircraft (see figure 11). For the purpose of this case study, we define "pilot" as a servicemember directly responsible for the flight operations of an aircraft. We include both traditionally piloted aircraft (aircraft with crew on board, both fixed-wing and rotary-wing) and remotely piloted aircraft (aircraft without crew on board). However, due to data limitations, we have included non-pilot aviators in some trend analyses and have noted when we have done so. Before qualifying to operate aircraft for military missions, pilots must complete a series of training requirements. The length of this training varies by specific type of aircraft, ranging from 18 weeks for remotely piloted aircraft (RPA) operators on certain platforms to 5 years for a fighter jet pilot. The cost of replacing an experienced pilot can be significant. For example, in 2014 the Army estimated the cost of replacing a special operations rotary-wing pilot with 6 years of experience at $8.8 million, and in 2014 the Air Force estimated the cost of replacing a fighter pilot with 5 years of experience at $9 million.

According to the U.S. Bureau of Labor Statistics, the U.S. civilian-sector aviation industry employed 238,400 airline and commercial pilots, copilots, and flight engineers in 2014. The U.S. military employed approximately 34,100 active-duty pilots in the same timeframe. In February 2014, we reported that U.S. airlines have historically recruited military pilots and that these pilots can be competitive in the commercial aviation sector because of the flying hours they accumulate while in the military. However, we also noted that, according to some airline representatives, the percentage of pilots with military experience who were hired by airlines declined from about 70 percent prior to 2001 to about 30 percent as of 2013.

Most of the services discussed the relationship between their military pilot workforces and the civilian aviation sector in their annual reports analyzing the Aviation Continuation Pay (ACP) program and its impact on retention. Our analysis found that the services' conclusions about the level of competition represented by recruitment from the commercial aviation sector varied. Specifically, the Air Force concluded that it needed to increase bonus amounts to retain sufficient numbers of pilots, while the Navy and the Marine Corps concluded that they did not. For example, in reports supporting the ACP programs in fiscal years 2010 through 2015, the Air Force consistently stated that it expected increased recruitment of pilots from the commercial aviation sector, and it cited this as justification for continued offerings of retention bonuses. In contrast, the Navy stated that airline compensation would have to increase in order to have a significant impact on the retention of Navy pilots. Also, the Marine Corps reported that the potential increase in hiring by commercial airlines did not warrant the reinstatement of ACP bonuses for fiscal year 2013.
The National Defense Authorization Act for Fiscal Year 2016 required the Department of Defense (DOD) to report to Congress by February 1, 2016, on a market-based compensation approach to the retention of aviation officers that considers the pay and allowances offered by commercial airlines to pilots and the propensity of pilots to leave the Air Force to become commercial airline pilots. DOD responded in July 2016 with a report in support of increasing aviation-related S&I pays. DOD's report noted that the military has experienced high levels of pilot retention as a result of decreased civilian airline pilot salaries, significantly reduced civilian airline pilot hiring, increased military pay and benefits, and an increased sense of duty and patriotism after the events of September 11, 2001. However, the report added that the department—based on a study by the RAND Corporation—anticipated that increased hiring by commercial airlines over the next 5 to 7 years would require bonus amounts to increase from $25,000 per year to a range of $38,500 to $62,500 per year. As such, DOD requested that Congress consider increasing the rates of Aviation Career Incentive Pay and increasing the maximum authorized level of Aviation Continuation Pay from $25,000 per year to $35,000 per year.

According to DOD officials, all traditionally piloted aircraft are operated by officers, while some RPAs are operated by enlisted personnel. Specifically, the Marine Corps and the Army rely on enlisted personnel to operate their RPAs, while officers generally operate Air Force and Navy RPAs. The Air Force initially assigned servicemembers from the traditionally piloted aircraft community to pilot RPAs until a dedicated RPA-pilot career track was established. The Navy uses only pilots rated on traditionally piloted aircraft to fly RPAs and, according to officials, does not plan to create a designated RPA career track. These differences in how the services staff RPA positions, combined with statutory and policy limitations on offering aviation-specific S&I pays for the RPA community, have led to a variety of S&I pays being used to retain RPA operators. Specifically, per statute and DOD policy, only pilots rated on traditionally piloted aircraft have been provided Aviation Career Incentive Pay (ACIP). In addition, the decision of the Army and the Marine Corps to use enlisted personnel to operate RPAs has meant that these pilots cannot receive ACIP or ACP, as only officers qualify for these pays. Instead, the Army and the Marine Corps have used the Selective Reenlistment Bonus (SRB) to retain these pilots.

In April 2014 we reported on the need for the Air Force to assess its workforce mix for RPA operators. Specifically, we recommended that the Air Force develop a recruiting and retention strategy for RPA operators and evaluate using alternative personnel populations to operate RPAs. The Air Force concurred with our recommendation to develop a recruiting and retention strategy, stating that it expected to have a recruiting and retention strategy tailored to the specific needs and challenges of RPA pilots by October 2015; however, this strategy has not been completed. The Air Force partially concurred with our recommendation to evaluate the viability of using alternative personnel populations as RPA pilots, but as of December 2016 this recommendation had not been implemented.
The military services used a variety of special and incentive pay programs to retain pilots in fiscal years 2010 through 2015, including combinations of up to four different types of pays (see table 5).

ACIP is offered only to officers. DOD defines this pay as a financial incentive for officers to pursue a military career as aviators. The pay was first offered in 1974, in part to compensate aviators for the inherent dangers associated with military flight. Until October 2016, DOD policy for ACIP did not recognize operation of an RPA as aerial flight, and therefore RPA pilots who were not graduates of traditional undergraduate pilot training were not authorized to receive ACIP. From fiscal years 2010 through 2015, the pay levels for ACIP established in statute varied by years of aviation service and ranged from $125 to $840 per month ($1,500 to $10,080 per year) for pilots rated on traditionally piloted aircraft. For example, the rate for pilots with over 6 to 14 years of aviation service (who accounted for 37 percent of military pilots for fiscal years 2010 through 2015) has not changed since 1989; because of inflation, this equates to a net 57 percent decrease in the purchasing power of the benefit. Recent statutory changes have allowed DOD to begin offering ACIP to RPA pilots, up to a maximum of $1,000 per month ($12,000 per year). DOD has updated guidance to reflect this change and has also provided the services with the option not to provide ACIP to all pilots. Our analysis of ACIP obligations from fiscal years 2010 through 2015 shows an overall decrease of 15 percent. Figure 12 displays how obligations for ACIP decreased across all four military services from fiscal years 2010 through 2015. During this time the population of active duty pilots operating traditionally piloted aircraft declined by 5 percent.
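To make the inflation arithmetic above concrete, the short sketch below applies the standard real-value calculation: a nominal rate that has been fixed since 1989 is divided by the cumulative growth in the price level. The report's 57 percent figure implies a cumulative price-level increase of roughly 2.33x between 1989 and 2015; that factor, and the use of the $840 top-of-range monthly rate, are illustrative assumptions rather than figures drawn from DOD data.

# Illustrative only: real value in 2015 of a nominal ACIP rate fixed since 1989.
# The 2.33 factor is inferred from the 57 percent decline cited above
# (1 / (1 - 0.57) is approximately 2.33); the $840 rate is the top of the
# statutory monthly range and is used here purely as an example.
nominal_monthly_rate = 840          # dollars per month, unchanged in nominal terms
cumulative_price_factor = 2.33      # assumed 1989-to-2015 price-level growth

real_rate_2015 = nominal_monthly_rate / cumulative_price_factor
decline = 1 - real_rate_2015 / nominal_monthly_rate

print(f"Real 2015 value: ${real_rate_2015:,.0f} per month")   # about $361
print(f"Purchasing-power decline: {decline:.0%}")              # about 57%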
ACP is offered only to officers. The pay was first authorized in fiscal year 1981, and DOD defines it as a financial incentive to retain qualified, experienced officer aviators who have completed—or are within 1 year of completing—any active duty service commitment incurred for undergraduate aviator service. According to DOD officials, in practice, pilots generally qualify for ACP at approximately 10 years of aviation service—the end of their initial active duty service obligation. From fiscal years 2010 through 2015 the level of pay was set by each service and was capped at $25,000 per year for pilots operating traditionally piloted aircraft. During this time, most of the services offered contracts up to 5 or 6 years long, but in fiscal year 2013 the Air Force began offering 9-year contracts to fighter pilots for a total contract amount of $225,000. Starting in fiscal year 2015, the Air Force offered 9-year contracts to all pilots for a total contract amount of up to $225,000. The services may target ACP to specific groups of aviators, adjust the pay amounts on an annual basis, or choose not to offer the pay at all. Table 6 shows that the services implemented ACP differently for pilots of different types of aircraft and that their implementation approaches generally varied from year to year.

The Reserve Components also provided ACP to pilots. Specifically, the Marine Corps Reserve, the Air Force Reserve, the Air National Guard, and the Navy Reserve all offered ACP to pilots for at least a portion of fiscal years 2010 through 2015. As with the active component, the Reserve Components used ACP to retain pilots to help personnel inventories meet requirements.

For fiscal years 2010 through 2015, obligations for ACP decreased by 53 percent across DOD. Figure 13 shows how the extent of this decrease varied for each of the services, largely because of different policy decisions at the service level about how to implement the pay. These implementation approaches are discussed in further detail below.

We found that each of the services took a different approach to implementing ACP. Specifically, there are differences in how the services identified the target population and established the contract amounts offered. These approaches resulted in different amounts of ACP offered (see table 6). Some of the primary differences we identified are that the Air Force offered ACP to broad categories of pilots (for example, fighter pilots and RPA pilots), while the Navy and the Marine Corps offered ACP by specific platform (that is, the model of aircraft a pilot operates). The Army offered ACP to a small group of elite special operations rotary-wing pilots. Also, the Marine Corps suspended ACP offerings in fiscal year 2012 and is the only service to have done so.

The SRB is available only to enlisted personnel and therefore is used as an aviation pay only by the Army and the Marine Corps, because the Navy and the Air Force do not generally use enlisted personnel to pilot aircraft. DOD defines this pay as an incentive for enlisted personnel to reenlist in positions experiencing retention challenges. The level of pay is set by each service, not to exceed $30,000 for each year of obligated service on active duty. The Army and the Marine Corps have offered the SRB to enlisted personnel operating RPAs. Specifically, in fiscal years 2010 through 2015 the Army provided 783 bonuses to RPA operators, at an average rate of $9,501 per year, and the Marine Corps provided 123 bonuses to RPA operators, at an average rate of $2,376 per year.

Both officers and enlisted personnel may qualify for AIP. DOD defines this as a pay used to encourage members to volunteer for difficult-to-fill jobs or assignments in less desirable locations. The level of pay is set by each service, up to a cap of $3,000 per month ($36,000 per year). For fiscal years 2010 through 2015, according to service officials, the Army offered AIP to pilots in the Special Operations Aviation Regiment, and the Air Force offered AIP to RPA operators. Air Force officials told us that their intent was to use AIP to allow pilots rated only on RPAs to be compensated at a level comparable to that of RPA operators who were rated on traditionally piloted aircraft and who were receiving ACIP.

The Department of Defense's (DOD) cybersecurity workforce includes military personnel within the active and Reserve Components, DOD civilians, and contractors who all work together to accomplish DOD's three primary cyber missions: (1) to defend DOD networks, systems, and information; (2) to defend the U.S. homeland and U.S. national interests against cyberattacks of significant consequence; and (3) to provide cyber support to military operational and contingency plans. The cybersecurity workforce includes various roles, such as designing and building secure information networks and systems, monitoring and detecting unauthorized activity in the cyberspace domain, and performing offensive and defensive cyberspace operations in support of the full range of military operations (see figure 14).
In November 2011 we reported that DOD faced challenges in determining the appropriate size for its cybersecurity workforce because of variations in how work is defined and the lack of a designated cybersecurity career field identifier for all of its cybersecurity personnel. Further, we reported that DOD had established a cybersecurity workforce plan, but that the plan only partially described strategies to address gaps in human capital approaches and critical skills and competencies, and that it did not address performance management or recruiting flexibilities. In addition, the plan only partially described building the capacity to support workforce strategies. We recommended that DOD either update its departmentwide cybersecurity workforce plan or ensure that departmental components have plans that appropriately address human capital approaches, critical skills, competencies, and supporting requirements for its cybersecurity workforce strategies. DOD concurred and implemented this recommendation by updating its cybersecurity workforce plan.

DOD policy calls for maintaining a total force management perspective to provide qualified cyberspace government civilian and military personnel to identified and authorized positions, augmented where appropriate by contracted services support. These personnel function as an integrated workforce with complementary skill sets to provide an agile, flexible response to DOD requirements. In May 2011 we reported that DOD needed to define cybersecurity personnel with greater specificity in order for the military services to organize, train, and equip cyber forces. We recommended that DOD develop and publish detailed policies and guidance pertaining to categories of personnel who can conduct the various forms of cyberspace operations. DOD agreed with this recommendation and has taken steps to implement it. DOD asked the Institute for Defense Analyses to assess the current and projected total force mix for DOD's Cyber Mission Force and, if possible, to suggest alternative staffing plans. The Institute for Defense Analyses issued its report in August 2016.

According to the U.S. Bureau of Labor Statistics, in 2014 more than 82,000 information security analysts were employed in the United States, and in 2015 the median annual wage for information security analysts was $90,120. According to information obtained from the services, there is a need to offer S&I pays to the military cybersecurity workforce in order to compete with the civilian sector, which includes government, DOD contractors, and corporations. For example, the Navy noted in its justification for offering selective reenlistment bonuses (SRBs) that sailors within cyber-related career fields could qualify for positions in the civilian workforce with salaries starting at $90,000 with a $5,000 to $10,000 sign-on bonus.

Service officials told us that for fiscal years 2010 through 2015 the monetary incentive they primarily relied on to retain cybersecurity personnel was the SRB. In addition to the SRB, starting in fiscal year 2015 the Army offered assignment incentive pay (AIP) and special duty assignment pay (SDAP) to a select group of cybersecurity personnel working at the Army Cyber Command. Starting in fiscal year 2016 the Air Force offered SDAP to those in a designated cybersecurity career field. Of the retention bonuses and assignment pays being offered to cybersecurity personnel, officers are eligible only for AIP.
According to service officials, during this same period S&I pays to officers in cybersecurity career fields were not as necessary as they were for enlisted personnel, because the services had not experienced the same growth and retention concerns as they had with enlisted personnel. However, an Air Force official noted that due to low staffing for Cyberspace Operations officers, the Air Force is currently assessing whether to offer the officer retention bonus. According to service officials, the services have the flexibility they need to effectively use S&I pays to address retention issues in the cybersecurity workforce. Table 7 shows the different monetary S&I pays that the services have used to incentivize cybersecurity personnel to remain in cyber-related career fields for fiscal years 2010 through 2015.

For fiscal years 2010 through 2015, the services offered SRBs to cyber-related career fields. According to DOD, the SRB is the primary monetary force-shaping tool to achieve required enlisted retention requirements by occupation or skill set. The level of pay is set by each service and is limited by the DOD-imposed cap of $25,000 for each year of obligated service in an active component. The maximum amount of SRB payments that servicemembers can receive is $100,000 per contract for the active component. Factors that military service officials cited for determining SRB bonuses for cyber-related career fields include growing requirements, mission criticality of the skill set, and replacement costs. Officials also noted that they take into account previous retention trends, current budget constraints, and input from community managers. All of the services offer bonuses by career field identifier, which identifies the basic skills and training the servicemember holds. The Navy may also offer SRBs by rating and offers bonuses to personnel within the Cryptologic Technician (Networks) and Information Systems Technician ratings; as a result, all qualified personnel in a career field within the rating are eligible for the bonus. According to Navy officials, the Navy offers SRBs by rating only when necessary, and in most cases it uses a combination of rating, career field, and zone. On the basis of our review of 13 SRB announcements for fiscal years 2010 through 2015, the Navy offered a bonus to those in the Cryptologic Technician (Networks) rating in all 13 announcements and to those in the Information Systems Technician rating in 11 of the 13 announcements.

The Army adjusts the amount of the SRB based on rank, additional skills above the basic skills required for the career field, and location; the Navy and the Air Force adjust the bonus multiplier based on zone; and the Marine Corps adjusts the amount based on rank and zone. For example, the Army's award amounts for Cryptologic Network Warfare Specialists for a 4-year reenlistment varied from $8,000 to $41,800 for fiscal years 2010 through 2015, depending on the rank, zone, skill, and location. Table 8 depicts the amounts offered for fiscal years 2010 through 2015 for selected career fields.
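To illustrate how these parameters interact, the sketch below computes a capped SRB award using the commonly described SRB structure, in which monthly basic pay is multiplied by a skill- and zone-specific multiple and by the years of additional obligated service. The report does not spell out the services' exact formulas, so the formula and the sample inputs are assumptions; the $25,000-per-year and $100,000-per-contract caps are the DOD limits cited above.

# Hedged sketch, not any service's actual implementation: a typical SRB
# award computation with the DOD caps cited above applied.
def srb_award(monthly_basic_pay: float, multiple: float, years: int) -> float:
    """Return a capped SRB award for one active component reenlistment contract."""
    per_year_cap = 25_000    # DOD cap per year of obligated service
    contract_cap = 100_000   # DOD cap per contract
    raw_award = monthly_basic_pay * multiple * years
    return min(raw_award, per_year_cap * years, contract_cap)

# Hypothetical sailor: $3,200 monthly basic pay, zone multiple of 6.5, 4-year
# reenlistment; the raw award of $83,200 falls within both caps.
print(srb_award(3_200, 6.5, 4))    # 83200.0
# Hypothetical higher-paid case: the raw award of $144,000 is capped at $100,000.
print(srb_award(4_500, 8.0, 4))    # 100000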
In addition to offering SRBs, the Army has recently begun to offer AIP and SDAP to select cybersecurity personnel, and the Air Force has recently begun to offer SDAP. In 2015 the Army first approved the use of SDAP and AIP to target certain cybersecurity personnel. As of February 2015 these pays targeted qualified cybersecurity personnel within the Army Cyber Command's Cyber Mission Force. According to Army documentation, SDAP was approved because those in the Army's Cyber Mission Force require special qualifications met through rigorous screening or special schooling. Also according to Army documentation, AIP was approved because positions within the Army's Cyber Mission Force were considered difficult to fill. According to DOD policy, AIP is a temporary compensation tool to provide an additional monetary incentive to encourage servicemembers to volunteer for select difficult-to-fill or less desirable assignments, locations, or units designated by, and under the conditions of service specified by, the Secretary concerned. According to Army officials, the Army is exploring other approaches to further incentivize cybersecurity personnel within the Army Cyber Command's Cyber Mission Force. Unlike the bonuses that are offered to soldiers qualified in a specified career field, the Army does not offer AIP or SDAP by career field; instead, it offers these pays only to personnel who are permanently assigned to authorized positions within the Cyber Mission Force and have completed the appropriate training and certifications. According to an Army official, in fiscal year 2015 the Army obligated $151,000 for 85 soldiers within the Cyber Mission Force to receive SDAP and obligated $310,000 for 128 personnel within the Cyber Mission Force to receive AIP. As noted in table 9, eligible cyber personnel receive monthly payments of between $350 and $800.

According to an Air Force official, in fiscal year 2016 the Air Force first offered SDAP to cybersecurity personnel in career field Cyber Warfare Operations (1B4X1). The Air Force offered SDAP because servicemembers in career field 1B4X1 are considered to have highly specialized training and skill sets. Eligible cyber enlisted personnel receive monthly payments of between $150 and $225.

As noted earlier, total obligations for all enlisted SRBs have been declining; however, SRB obligations for certain cybersecurity career fields have increased in recent years. For example, as shown in figure 15, after a significant decline from fiscal year 2010, Army obligations for cyber-related SRBs have been increasing since fiscal year 2014, and as shown in figures 16 and 17, Navy and Marine Corps obligations for cyber-related SRBs have been increasing since fiscal year 2012. According to Army officials, the Army did not start to target cyber until 2013; the higher obligation amount seen in figure 15 for fiscal year 2010 was not due to a focus on cyber but was in part due to shortages in high-density career fields, such as 25B and 25U. As noted earlier, in addition to cyber personnel, these career fields contain other types of personnel, such as information technology personnel. According to Navy officials, the decrease in fiscal year 2012 and the subsequent increase reflect the reenlistment need for the skills at the given time. According to Marine Corps officials, obligations for cyber-related fields decreased in fiscal years 2011 and 2012 because of reductions in force structure as the Marine Corps approached its reduced 2012 overall inventory goal; for certain career fields containing cybersecurity personnel, however, force structure increased in fiscal year 2015 to support the build-up of the Cyber Mission Force, which DOD began in 2012. The Air Force was unable to provide reliable SRB obligations for its cyber-related career fields.
The average per-person SRB amounts the services offered to cyber-related career fields for fiscal years 2010 through 2015 varied, as shown in table 10, ranging from about $2,277 to about $71,788. During this time, the Navy's Interactive Operators consistently received the highest average amounts, ranging from about $58,594 to $71,788. The Army on average paid the lowest bonus amounts to personnel within cyber-related career fields.

According to service officials, the Reserve Components may also use retention bonuses to retain personnel in cyber-related career fields. However, according to information received from service officials, the number of S&I pays that have been offered to retain cybersecurity personnel within the Reserve Components is relatively small. For example, according to documentation provided by the Navy, for fiscal years 2010 through 2015 the Navy Reserve expended about $379,000 to retain 68 personnel in cyber-related career fields. According to Marine Corps officials, the Marine Corps Reserve offers those in the Cyber Security Technician career field a retention bonus, and the current requirement for this career field is 12 personnel. Further, according to Air Force officials, the Air National Guard has not offered bonuses in the past to cyber-related career fields, and the Air Force Reserve did not offer cyber-related career field bonuses in fiscal year 2015 and did not have plans to offer bonuses in fiscal year 2016. For these reasons, we did not include the Reserve Components in our case study for cybersecurity.

A Senate Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2016 included a provision that we review the effectiveness of the Department of Defense's (DOD) special and incentive (S&I) pay programs. This report assesses (1) trends in DOD obligations for S&I pay programs for fiscal years 2005 through 2015 and the extent to which DOD reports such obligations department-wide, and (2) the extent to which the military services applied key principles of effective human capital management in the design of S&I pay programs for recruitment and retention of servicemembers in selected high-skill occupations for fiscal years 2010 through 2015.

To address our first objective, we analyzed obligations for S&I pays from the military services' active and Reserve Component military personnel accounts for fiscal years 2005 through 2015. We selected this timeframe to enable us to evaluate trends over time, and fiscal year 2015 was the most recent year of available obligations data at the time of our review. We obtained these data from the annual budget justification materials that DOD and the military services published in connection with their military personnel appropriations requests. We normalized S&I pay obligations data to constant fiscal year 2015 dollars using the series of military personnel deflators for fiscal years 2005 through 2015 published in DOD's National Defense Budget Estimates for Fiscal Year 2015. To analyze trends in S&I pay obligations over time across the active military service components, we obtained obligations for the more than 60 S&I pays across the services and grouped them into nine categories in accordance with the consolidated pay categories authorized by Congress in the National Defense Authorization Act for Fiscal Year 2008. We compared each active component's total S&I pay obligations from fiscal years 2005 through 2015 with each service's total military personnel obligations.
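The sketch below illustrates the constant-dollar normalization described above: each year's nominal obligations are divided by that year's deflator, with the series indexed so that fiscal year 2015 equals 1.0. The deflator values and obligation amounts are placeholders chosen for illustration, not figures from DOD's National Defense Budget Estimates or from this review.

# Illustrative only: converting nominal obligations (in millions of dollars)
# to constant FY2015 dollars using a deflator series indexed to FY2015 = 1.0.
# All values below are assumed placeholders, not DOD data.
deflators = {2011: 0.936, 2012: 0.953, 2013: 0.968, 2014: 0.984, 2015: 1.000}
nominal_obligations = {2011: 88.0, 2012: 90.0, 2013: 91.0, 2014: 92.5, 2015: 94.0}

constant_2015 = {
    fy: nominal_obligations[fy] / deflators[fy]   # deflate each year's amount
    for fy in sorted(deflators)
}

for fy, amount in constant_2015.items():
    print(f"FY{fy}: ${amount:,.1f} million in constant FY2015 dollars")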
In addition, we obtained average strength numbers from the annual budget justification materials, compared these data with the services' S&I pay obligations, and assessed any possible correlation. We discussed with service officials the factors that may have contributed to the trends we identified in the S&I pay obligations data we reviewed.

To assess the reliability of the data on S&I pays, we assessed the completeness of the data and compared the data against other data sources. GAO has designated DOD's financial management area as high risk due to long-standing deficiencies in DOD's systems, processes, and internal controls. Since some of these systems provide the data used in the budgeting process, there are limitations to the use of DOD budget data. However, based on discussions with appropriate DOD officials and our comparison of the trends in the budget data against other data sources, we determined that the S&I pay obligation data for the active component are sufficiently reliable for showing overall trends for S&I pays. However, we determined that data for Reserve Component S&I pays were unreliable due to incompleteness or inconsistency, as described earlier in this report. We compared the available Reserve Component data with guidance established in DOD's Financial Management Regulation—which provides guidance for a uniform budget and accounting classification that is to be used for preparing budget estimates, including the budget justification materials we reviewed—and with federal internal control standards, to determine the extent to which the guidance was followed. Further, we examined DOD policy, key statutes, and accounting standards for developing and reporting cost information to determine the extent to which they were followed by DOD when reporting on S&I pay obligations.

To address our second objective, we selected a non-generalizable sample of three high-skill occupations from across the military services to review in greater detail as case studies. For the purposes of this review, we limited the scope and definition of "high-skill occupations" to the six occupations identified in Senate Report 114-49: nuclear maintenance and engineering (i.e., nuclear propulsion), pilots (i.e., aviation), critical language skills, information technology, cyber warfare (i.e., cybersecurity), and special operations. In selecting case studies from this pool of occupations, we sought to include (1) a mix of S&I pay programs associated with an occupation-specific pay authority and S&I pay programs that apply authorities that are available across occupation types; (2) programs containing pays in varying stages of implementing the consolidation authorities established in the National Defense Authorization Act for Fiscal Year 2008; and (3) at least one emerging occupation. On the basis of these three criteria, we selected nuclear propulsion, aviation, and cybersecurity as case studies. We selected nuclear propulsion because there are occupation-specific pays for nuclear personnel and because the consolidated special bonus and incentive pay authorities for nuclear officers have been completely implemented. We selected aviation because there are occupation-specific pays for aviators and because the consolidated special aviation incentive pay and bonus authorities have not been fully implemented.
In addition, we selected aviation because of the recent policy changes for aviation-related S&I pays involving remotely piloted aircraft pilots. Lastly, we selected cybersecurity because it is an emerging career field. While the information obtained from the case studies is not generalizable to all occupations across DOD, it enabled us to obtain perspectives on how the services use their S&I pay programs for the three high-skill occupations we selected.

To determine the extent to which the military services applied key principles of effective human capital management in the design of their S&I pay programs associated with our selected occupations, we reviewed DOD and service policies and guidance on the special and incentive pays used for the three occupations we selected. We analyzed available data and reports from the military services on the eligibility criteria, pay amounts, and numbers of recipients in each of the occupations, and we discussed with cognizant officials the context for changes to the S&I pay programs targeting these occupations. For our selected occupations, we reviewed the services' retention, assignment, and officer accession S&I pay obligations to analyze trends and to understand reasons for any fluctuations we identified from fiscal years 2010 through 2015. To identify key principles of effective human capital management, we reviewed a compilation of GAO's body of work on human capital management, DOD's Eleventh Quadrennial Review of Military Compensation, and the DOD Diversity and Inclusion Strategic Plan 2012-2017. Using these sources, we assembled a list of seven key principles of effective human capital management. To determine the extent to which the services met these key principles in the design of their S&I pay programs, we compared the principles with service-provided documentation on the S&I pay programs for our three case studies (including policies and guidance, administrative messages, and program proposals and justifications) as well as with information obtained through interviews with service officials. Our analysts then used this comparison to determine whether each service's approach to the pay programs for our case studies was consistent with each principle. Specifically, for each case study of S&I pay programs associated with nuclear propulsion, aviation, and cybersecurity, an analyst independently assessed whether each service's processes for planning, implementing, and monitoring the pay programs addressed, partially addressed, or did not address the seven principles of human capital management. By addressed, we mean the principle was applied throughout the program and demonstrated through documentation and testimonial information from interviews with service officials; by partially addressed, we mean one or more parts of the principle, but not all parts, were explicitly addressed (e.g., the principle is addressed for one or a few pays within a program, but not for all, or the principle is demonstrated through policy but not through implementation); and by not addressed, we mean that no part of the principle was explicitly addressed in our review of program documentation or interviews with officials. Following the initial case study assessments, a second analyst reviewed all available documentation and testimonial evidence for each principle and each service's programs and made an independent assessment of the extent to which the principle was addressed.
Where the two analysts disagreed, they discussed the evidence and reached consensus on the extent to which the principle in question was addressed. Once the assessment process was completed, we reviewed our results with cognizant DOD and service officials and incorporated their feedback or additional information where appropriate. In addition to the key principles for effective human capital management, we also compared aspects of DOD's application of S&I pay program guidance with federal internal control standards that emphasize the importance of establishing clear and consistent agency objectives. To determine DOD's progress in consolidating S&I pay programs from legacy statutory authorities to the new authorities established in the National Defense Authorization Act for Fiscal Year 2008, we met with cognizant DOD officials and obtained and reviewed documentation related to DOD's implementation and status of the pay consolidation, such as updated or new DOD instructions resulting from the consolidation effort.

We interviewed officials or, where appropriate, obtained documentation at the following organizations:

Office of the Secretary of Defense
Office of the Under Secretary of Defense for Personnel and Readiness
Office of the Under Secretary of Defense, Comptroller
Office of the Assistant Secretary of Defense for Manpower and Reserve Affairs
Office of the Assistant Secretary of Defense for Health Affairs
Defense Manpower Data Center

Department of the Air Force
Office of the Assistant Secretary of the Air Force for Financial Management and Comptroller
Office of the Deputy Chief of Staff for Manpower, Personnel and Services (A1)

Department of the Army
Office of the Deputy Chief of Staff of the Army, G-1
Army Human Resources Command
Office of the Chief of the Army Reserve
Office of the Director of the Army National Guard

Department of the Navy
Office of the Assistant Secretary of the Navy for Manpower and Reserve Affairs
Office of the Deputy Chief of Naval Operations for Manpower, Personnel, Training, and Education (N1)
Office of the Deputy Commandant of the Marine Corps for Programs and Resources
Office of the Deputy Commandant of the Marine Corps for Manpower and Reserve Affairs

We conducted this performance audit from July 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, key contributors to this report were Beverly Schladt, Assistant Director; Melissa Blanco, Aica Dizon, Farrah Graham, Foster Kerrison, Amie Lesser, Felicia Lopez, Martin Mayer, Shahrzad Nikoo, Stephanie Santoso, John Van Schaik, and Cheryl Weissman.

Unmanned Aerial Systems: Further Actions Needed to Fully Address Air Force and Army Pilot Workforce Challenges. GAO-16-527T. Washington, D.C.: March 16, 2016.

Military Recruiting: Army National Guard Needs to Continue Monitoring, Collect Better Data, and Assess Incentives Programs. GAO-16-36. Washington, D.C.: November 17, 2015.

Unmanned Aerial Systems: Actions Needed to Improve DOD Pilot Training. GAO-15-461. Washington, D.C.: May 14, 2015.

Human Capital: DOD Should Fully Develop Its Civilian Strategic Workforce Plan to Aid Decision Makers. GAO-14-565. Washington, D.C.: July 9, 2014.

Air Force: Actions Needed to Strengthen Management of Unmanned Aerial System Pilots. GAO-14-316.
Washington, D.C.: April 10, 2014.

Imminent Danger Pay: Actions Needed Regarding Pay Designations in the U.S. Central Command Area of Responsibility. GAO-14-230R. Washington, D.C.: January 30, 2014.

Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD's Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.

Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. Washington, D.C.: November 29, 2011.

Military Cash Incentives: DOD Should Coordinate and Monitor Its Efforts to Achieve Cost-Effective Bonuses and Special Pays. GAO-11-631. Washington, D.C.: June 21, 2011.

Military Personnel: Military and Civilian Pay Comparisons Present Challenges and Are One of Many Tools in Assessing Compensation. GAO-10-561R. Washington, D.C.: April 1, 2010.

Military Personnel: DOD Needs to Improve the Transparency and Reassess the Reasonableness, Appropriateness, Affordability, and Sustainability of Its Military Compensation System. GAO-05-798. Washington, D.C.: July 19, 2005.

A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
DOD uses S&I pay programs to compensate and incentivize servicemembers for occupations that are dangerous, less desirable, or require special skills. Senate Report 114-49 included a provision for GAO to review the effectiveness of DOD's S&I pay programs. This report assesses (1) trends in DOD obligations for S&I pay programs for fiscal years 2005 through 2015 and the extent to which DOD reports such obligations department-wide; and (2) the extent to which the military services applied key principles of effective human capital management in the design of S&I pay programs for selected high-skill occupations for fiscal years 2010 through 2015. GAO analyzed DOD S&I pay obligations for fiscal years 2005 through 2015; reviewed a nongeneralizable sample of S&I pay programs for nuclear propulsion, aviation, and cybersecurity occupations, chosen based on their pay programs' attributes; compared DOD and service policies and documents with key principles of effective human capital management; and interviewed DOD officials.

The Department of Defense's (DOD) special and incentive (S&I) pay obligations for active duty servicemembers decreased from $5.8 billion in fiscal year 2005 to $3.4 billion in fiscal year 2015 (about 42 percent) in constant 2015 dollars. DOD officials attributed the decrease to a combination of reduced overseas contingency operations, a reduced annual average strength of the force, and a favorable recruiting climate. DOD does not collect and report complete S&I obligation data for the reserve components because, according to officials, there is no requirement to do so and the services would likely need to make changes to their financial and personnel systems to separately track the obligations. However, according to officials, DOD has not explored cost-effective approaches to collect and report this information, which would better position the department to know the full cost of its S&I pay programs.

The military services largely applied key principles of effective human capital management in the design of their S&I pay programs for nuclear propulsion, aviation, and cybersecurity occupations. However, the application of these key principles varied by service and occupation. Only the Navy's S&I pay programs for nuclear propulsion and aviation fully addressed all seven principles; programs for other occupations and services generally exhibited a mixture of full and partial application. GAO found that, according to officials, DOD and the services had not taken steps to fully ensure consistent application of the principles. For example, DOD has not reviewed the extent to which its S&I pay programs have incorporated principles of effective human capital management and used resources efficiently, and it has not established related measures to ensure efficient use of resources. Without such measures, DOD and the services generally assess the effectiveness of S&I pay programs by the extent to which they achieve desired staffing targets. However, this approach does not ensure that S&I pay programs are using resources in the most efficient manner, as DOD guidance requires. Until DOD reviews the extent to which S&I pay programs have incorporated human capital management principles and used resources efficiently, and develops related measures for efficient use of resources, DOD and the services may lack assurance that S&I pay programs are effective and that resources are optimized for the greatest return on investment.
GAO is making five recommendations, including that DOD explore reporting reserve S&I pay program data, review the incorporation of human capital management principles and use of resources, and develop related measures. DOD concurred with three recommendations and partially concurred with two. GAO continues to believe that actions to fully address these two recommendations are needed, as discussed in the report.
Major real property-holding agencies and OMB have made progress toward strategically managing federal real property. In April 2007, we found that in response to the President's Management Agenda (PMA) real property initiative and a related executive order, agencies covered under the executive order had, among other things, designated senior real property officers, established asset management plans, standardized real property data reporting, and adopted various performance measures to track progress. The administration had also established a Federal Real Property Council (FRPC) that guides reform efforts. Under the real property initiative, OMB has been evaluating the status and progress of agencies' real property management improvement efforts since the third quarter of fiscal year 2004 using a quarterly scorecard that color codes agencies' progress—green for success, yellow for mixed results, and red for unsatisfactory. As figure 1 shows, according to OMB's analysis, many of these agencies have made progress in accurately accounting for, maintaining, and managing their real property assets so as to efficiently meet their goals and objectives. As of the first quarter of 2009, 10 of the 15 agencies evaluated had achieved green status. According to OMB, the agencies achieving green status have established 3-year timelines for meeting the goals identified in their asset management plans; provided evidence that they are implementing their asset management plans; used real property inventory information and performance measures in decision making; and managed their real property in accordance with their strategic plan, asset management plan, and performance measures. (For more information on the criteria OMB uses to evaluate agencies' efforts, see app. I.)

OMB has also taken some additional steps to improve real property management governmentwide. According to OMB, the federal government disposed of excess real property valued at $1 billion in fiscal year 2008, bringing the total to over $8 billion since fiscal year 2004. OMB also reported success in developing a comprehensive database of federal real property assets, the Federal Real Property Profile (FRPP). OMB recently took further action to improve the reliability of FRPP data by implementing a recommendation we made in April 2007 to develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in the FRPP. According to OMB officials, OMB now requires agency-specific validation and verification plans and has developed a FRPP validation protocol to certify agency data. These actions are positive steps toward eventually developing a database that can be used to improve real property management governmentwide. However, it may take some time for these actions to result in consistently reliable data, and, as described later in this testimony, in recent work we have continued to find problems with the reliability and usefulness of FRPP data. Furthermore, our work over the past year has found some other positive steps that some agencies have taken to address ongoing challenges. Specifically: In September 2008, we found that from fiscal year 2005 through 2007, VA made significant progress in reducing underutilized space (space not used to full capacity) in its buildings from 15.4 million square feet to 5.6 million square feet.
We also found that VA's use of various legal authorities, such as its enhanced use lease (EUL) authority, which allows it to enter into long-term agreements with public and private entities for the use of VA property in exchange for cash or in-kind consideration, likely contributed to its overall reduction of underutilized space since fiscal year 2005. However, our work also shows that VA does not track its use of these authorities or the associated space reductions.

In spite of some progress made by OMB and agencies in managing their real property portfolios, our recent work has found that agencies continue to struggle with the long-standing problems that led us to identify federal real property as high risk: an overreliance on costly leasing (and challenges GSA faces in its leasing contracting); unreliable data; underutilized and excess property and repair and maintenance backlogs; and ongoing security challenges faced by agencies and, in particular, by the Federal Protective Service (FPS), which is charged with protecting GSA buildings.

One of the major reasons for our designation of federal real property as a high-risk area in January 2003 was the government's overreliance on costly leasing. Under certain conditions, such as fulfilling short-term space needs, leasing may be a lower-cost option than ownership. However, our work over the years has shown that building ownership often costs less than operating leases, especially for long-term space needs. In January 2008, we reported that federal agencies' extensive reliance on leasing has continued, and that federal agencies occupied about 398 million square feet of leased building space domestically in fiscal year 2006, according to FRPP data. GSA, USPS, and USDA leased about 71 percent of this space, mostly for offices, and the military services leased another 17 percent. For fiscal year 2008, GSA reported that for the first time, it leased more space than it owned. In 10 GSA and USPS leases that we examined in the January 2008 report, decisions to lease space that would be more cost-effective to own were driven by the limited availability of capital for building ownership and other considerations, such as operational efficiency and security. For example, for four of seven GSA leases we analyzed, leasing was more costly over time than construction—by an estimated $83.3 million over 30 years. Although ownership through construction is often the least expensive option, federal budget scorekeeping rules require the full cost of this option to be recorded up front in the budget, whereas only the annual lease payment and cancellation costs need to be recorded for operating leases, reducing the up-front commitment even though the leases are generally more costly over time. USPS is not subject to the scorekeeping rules and cited operational efficiency and limited capital as its main reasons for leasing. While OMB has made progress in addressing long-standing real property problems, its efforts to address the leasing challenge have been limited. We have raised this issue for almost 20 years. Several alternative approaches have been discussed by various stakeholders, including scoring operating leases the same as ownership, but none has been implemented. In our 2008 report, we recommended that OMB, in consultation with the Federal Real Property Council and key stakeholders, develop a strategy to reduce agencies' reliance on leased space for long-term needs when ownership would be less costly. OMB agreed with our recommendation.
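The pattern behind long-run cost comparisons like the $83.3 million estimate above can be illustrated with a minimal sketch. All figures below are hypothetical placeholders, not drawn from the leases GAO analyzed, and the calculation omits present-value discounting for simplicity.

```python
# Minimal sketch comparing the long-run cost of leasing versus owning
# space (all dollar figures, in millions, are hypothetical placeholders).
# Note: only the annual lease payment is scored in the budget each year,
# while construction must be scored in full up front, which is the
# scorekeeping asymmetry described above.

def cumulative_lease_cost(annual_lease: float, years: int,
                          escalation: float = 0.02) -> float:
    """Total lease payments over the period, with annual escalation."""
    return sum(annual_lease * (1 + escalation) ** y for y in range(years))

def ownership_cost(construction: float, annual_operations: float,
                   years: int) -> float:
    """Up-front construction cost plus annual operating costs."""
    return construction + annual_operations * years

years = 30
lease_total = cumulative_lease_cost(annual_lease=10.0, years=years)
own_total = ownership_cost(construction=150.0, annual_operations=4.0,
                           years=years)

print(f"30-year lease cost:  ${lease_total:,.1f}M")
print(f"30-year owning cost: ${own_total:,.1f}M")
print(f"Difference:          ${lease_total - own_total:,.1f}M")
```

With these placeholder figures, leasing costs roughly $135 million more over 30 years even though its first-year budget score is a small fraction of the construction cost, which is the incentive problem the scorekeeping rules create.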
According to OMB officials, in response to that recommendation, an OMB working group conducted an analysis of lease performance. OMB is currently using this analysis as it works with officials of the new administration to assess overall real property priorities in order to establish a roadmap for further action.

With GSA's ongoing reliance on leasing, it is critical that GSA manage its in-house and contracted leasing activities effectively. However, in January 2007, we identified numerous areas in GSA's implementation of four contracts for national broker services that warranted improvement. Our findings were particularly significant since, over time, GSA expects to outsource the vast majority of its expiring lease workload. At one time, GSA performed lease acquisition, management, and administration functions entirely in-house. In 1997, however, GSA started entering into contracts for real estate services to carry out a portion of its leasing program, and in October 2004, GSA awarded four contracts to perform broker services nationwide (national broker services), with contract performance beginning on April 1, 2005. GSA awarded two of the four contracts to dual-agency brokerage firms—firms that represent both building owners and tenants (in this case, GSA acting on behalf of a tenant agency). The other two awardees were tenant-only brokerage firms—firms that represent only the tenant in real estate transactions. Because using a dual-agency brokerage firm creates an increased potential for conflicts of interest, federal contracting requirements ordinarily would prohibit federal agencies from using dual-agency brokers, but GSA waived the requirements, as allowed, to increase competition for the leasing contracts. When the contracts were awarded, GSA planned to shift at least 50 percent of its expiring lease workload to the four awardees in the first year of the contracts and to increase their share of GSA's expiring leases to approximately 90 percent by 2010—the fifth and final year of the contracts. As of May 30, 2009, GSA estimated that the total value of the four contracts was $485.6 million. We reviewed GSA's administration of the four national broker services contracts (i.e., the national broker services program) for the first year of the contracts, which ended March 31, 2006. In our January 2007 report, we identified a wide variety of issues related to GSA's early implementation of these contracts. Problems included inadequate controls to (1) prevent conflicts of interest and (2) ensure compliance with federal requirements for safeguarding federal information and information systems used on behalf of GSA by the four national brokers. We also reported, among other matters, that GSA had not developed a method for quantifying what, if any, savings had resulted from the contracts or for distributing work to the brokers on the basis of their performance, as it had planned. We made 11 recommendations designed to improve GSA's overall management of the national broker services program. As figure 2 shows, GSA has implemented 7 of these 11 recommendations; has taken action to implement another recommendation; and, after consideration, has decided not to implement the remaining 3. (For more details on the issues we reported in January 2007 and GSA's actions to address our recommendations, see app. II.) We are encouraged by GSA's actions on our recommendations but have not evaluated their impact.
Quality governmentwide and agency-specific data are critical for addressing the wide range of problems facing the government in the real property area, including excess and unneeded property, deterioration, and security concerns. In April 2007, we reported that although some agencies had made progress in collecting and reporting standardized real property data for FRPP, data reliability was still a challenge at some of the agencies, and agencies lacked a standard framework for data validation. We are pleased that OMB has implemented our recommendation to develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in the FRPP, as noted earlier. However, in the past 2 years, we have found the following problems with FRPP data:

In our January 2008 report on agencies' leasing, we found that, while FRPP data were generally reliable for describing the leased inventory, data quality concerns, such as missing data, would limit the usefulness of FRPP for other purposes, such as strategic decision making.

In our October 2008 report on federal agencies' repair and maintenance backlogs, we found that the way six agencies define and estimate their repair needs or backlogs varies. We also found that, according to OMB officials, FRPP's definition of repair needs was purposefully vague so agencies could use their existing data collection and reporting processes. Moreover, we found that condition indexes, which agencies report to FRPP, cannot be compared across agencies because their repair estimates are not comparable. As a result, these condition indexes cannot be used to understand the relative condition or management of agencies' assets and should not be used to inform or prioritize funding decisions among agencies. In this report, we recommended that OMB, in consultation with the Federal Accounting Standards Advisory Board, explore the potential for adding a uniform reporting requirement to FRPP to capture the government's fiscal exposure related to real property repair and maintenance. OMB agreed with our recommendation.

In our February 2009 report on agencies' authorities to retain proceeds from the sale of real property, we found that, because of inconsistent and unreliable reporting, governmentwide data reported to FRPP were not sufficiently reliable to analyze the extent to which the six agencies with authority to sell real property and retain the proceeds from such sales actually sold real property. Such data weaknesses reduce the effectiveness of the FRPP as a tool to enable governmentwide comparisons of real property efforts, such as the effort to reduce the government's portfolio of unneeded property.

Furthermore, although USPS is not required to submit data to FRPP, in December 2007, we found reliability issues with USPS data that also compromised the usefulness of the data for examining USPS's real property performance. Specifically, we found that USPS's Facility Database—developed in 2003 to capture and maintain facility data—has numerous reliability problems and is not used as a centralized source for facility data, in part because of its reliability problems. Moreover, even if the data in the Facility Database were reliable, the database would not help USPS measure facility management performance because it does not track performance indicators, nor does it archive data for tracking trends.
In April 2007, we reported that among the problems with real property management that agencies continued to face were excess and underutilized property, deteriorating facilities, and maintenance and repair backlogs. We reported that some federal agencies maintain a significant amount of excess and underutilized property. For example, we found that Energy, DHS, and NASA reported that over 10 percent of their facilities were excess or underutilized. Agencies may also underestimate their underutilized property if their data are not reliable. For example, in 2007, we found during limited site visits to USPS facilities that six of the facilities we visited had vacant space that local employees said could be leased, but these facilities were not listed as having vacant, leasable space in USPS's Facilities Database (see fig. 3). At that time, USPS officials acknowledged the vacancies we cited and noted that local officials have few incentives to report facilities' vacant, leasable space in the database. Underutilized properties present significant potential risks to federal agencies because they are costly to maintain and could be put to more cost-beneficial uses or sold to generate revenue for the government.

In 2007, we also reported that addressing the needs of aging and deteriorating federal facilities remains a problem for major real property-holding agencies and that, according to recent estimates, tens of billions of dollars will be needed to repair or restore these assets so that they are fully functional. In October 2008, we reported that agency repair backlog estimates are not comparable and do not accurately capture the government's fiscal exposure. We found that the six agencies we reviewed had different processes in place to periodically assess the condition of their assets and that they also generally used these processes to identify repair and maintenance backlogs for their assets. Five agencies identified repair needs of between $2.3 billion (NASA) and $12 billion (DOI). GSA reported $7 billion in repair needs. The sixth agency, DOD, did not report on its repair needs. Table 1 provides a summary of each agency's estimate of repair needs.

In addition to other ongoing real property management challenges, the threat of terrorism has increased the emphasis on physical security for federal real property assets. In 2007, we reported that all nine major real property-holding agencies reported using risk-based approaches to prioritize security needs, as we have suggested, but cited a lack of resources for security enhancements as an ongoing problem. For example, according to GSA officials, obtaining funding for security countermeasures, both security fixtures and equipment, is a challenge not only within GSA but for GSA's tenant agencies as well. Moreover, last week we testified before the Senate Committee on Homeland Security and Governmental Affairs that preliminary results show that the Federal Protective Service's (FPS) ability to protect federal facilities is hampered by weaknesses in its contract security guard program. We found that FPS does not fully ensure that its contract security guards have the training and certifications required to be deployed to a federal facility and has limited assurance that its guards are complying with post orders. For example, FPS does not have specific national guidance on when and how guard inspections should be performed, and its inspections of guard posts at federal facilities are inconsistent, with quality varying across the six regions we visited.
Moreover, we identified substantial security vulnerabilities related to FPS's guard program. GAO investigators carrying the components for an improvised explosive device successfully passed undetected through security checkpoints monitored by FPS's guards at each of the 10 level IV federal facilities where we conducted covert testing. Once the investigators passed the access control points, they assembled the explosive device and walked freely around several floors of these level IV facilities with the device in a briefcase. In response to our briefing on these findings, FPS has recently taken some actions, including increasing the frequency of intrusion testing and guard inspections. However, implementing these changes may be challenging, according to FPS. We previously testified before this subcommittee in 2008 that FPS faces operational challenges, funding challenges, and limitations with performance measures to assess the effectiveness of its efforts to protect federal facilities. We recommended, among other things, that the Secretary of DHS direct the Director of FPS to develop and implement a strategic approach to better manage its staffing resources, evaluate current and alternative funding mechanisms, and develop appropriate performance measures. DHS agreed with the recommendations, and according to FPS officials, FPS is working on implementing them.

As GAO has reported in the past, real property management problems have been exacerbated by deep-rooted obstacles that include competing stakeholder interests, various legal and budget-related limitations, and weaknesses in agencies' capital planning. While reforms to date are positive, the new administration and Congress will be challenged to sustain reform momentum and reach consensus on how the obstacles should be addressed. In 2007, we found that some major real property-holding agencies reported that competing local, state, and political interests often impede their ability to make real property management decisions, such as decisions about disposing of unneeded property and acquiring real property. For example, we found that USPS was no longer pursuing a 2002 goal of reducing the number of "redundant, low-value" retail facilities, in part because of legal restrictions on and political pressures against closing them. To close a post office, USPS is required to, among other things, formally announce its intention to close the facility, analyze the impact of the closure on the community, and solicit comments from the community. Similarly, VA officials reported that disposal is often not an option for most properties because of political stakeholders and constituencies, including historic building advocates or local communities that want to maintain their relationship with VA. In addition, Interior officials reported that the department faces significant challenges in balancing the needs and concerns of local and state governments, historical preservation offices, political interests, and others, particularly when coupled with budget constraints. If the interests of competing stakeholders are not appropriately addressed early in the planning stage, they can adversely affect the cost, schedule, and scope of a project. Despite its significance, the obstacle of competing stakeholder interests has gone unaddressed in the real property initiative. It is important to note, however, that there is precedent for lessening the impact of competing stakeholder interests.
Base Realignment and Closure (BRAC) decisions, by design, are intended to be removed from the political process, and Congress approves all BRAC decisions as a whole. OMB staff said they recognize the significance of the obstacle and told us that FRPC would begin to address the issue after the inventory is established and other reforms are initiated. But until this issue is addressed, less than optimal decisions based on factors other than what is best for the government as a whole may continue.

As discussed earlier, budgetary limitations that hinder agencies' ability to fund ownership lead agencies to rely on costly leased space to meet new space needs. Furthermore, the administrative complexity and costs of disposing of federal property continue to hamper efforts by some agencies to address their excess and underutilized real property problems. Federal agencies are required by law to assess and pay for any environmental cleanup that may be needed before disposing of a property—a process that may require years of study and result in significant costs. As valuable as these legal requirements are, their administrative complexity and the associated costs of complying with them create disincentives to the disposal of excess property. For example, we reported that VA, like all federal agencies, must comply with federal laws and regulations governing property disposal that are intended to protect subsequent users of the property from environmental hazards and to preserve historically significant sites, among other purposes. We have reported that some VA managers have retained excess property because the administrative complexity and costs of complying with these requirements were disincentives to disposal. Additionally, some agencies reported that the costs of cleanup and demolition sometimes exceed the costs of continuing to maintain a property that has been shut down. In such cases, in the short run, it can be more economically beneficial to retain the asset in a shut-down status.

Some federal agencies have been granted authorities to enter into EULs or to retain proceeds from the sale of real property. In February 2009, we reported that the 10 largest real property-holding agencies have different authorities for entering into EULs and retaining proceeds from the sale of real property, including whether the agency can use any retained proceeds without further congressional action, such as an annual appropriation act, as shown in table 2. Officials at five of the six agencies with the authority to retain proceeds from the sale of real property (the Forest Service, GSA, State, USPS, and VA) said this authority is a strong incentive to sell real property. Officials at the five agencies that do not have the authority to retain proceeds from the sale of real property (DOE, DOI, DOJ, NASA, and USDA except for the Forest Service) said they would like to have such expanded authorities to help manage their real property portfolios. However, officials at two of those agencies said that, because of challenges such as the security needs or remote locations of most of their properties, it was unlikely that they would sell many properties. We have previously found that, for agencies that are required to fund the costs of preparing property for disposal, the inability to retain any of the proceeds acts as an additional disincentive to disposing of real property.
As we have testified previously, it seems reasonable to allow agencies to retain enough of the proceeds to recoup the costs of disposal, and it may make sense to permit agencies to retain additional proceeds for reinvestment in real property where a need exists. However, in considering whether to allow federal agencies to retain proceeds from real property transactions, it is important for Congress to ensure that it maintains appropriate control and oversight over these funds, including the ability to redistribute the funds to accommodate changing needs. Two current initiatives relate to these issues. The administration's fiscal year 2010 budget includes a real property legislative proposal that, among other things, would permit agencies to retain the net proceeds from the transfer or sale of real property, subject to further congressional action. On May 19, 2009, H.R. 2495, the Federal Real Property Disposal Enhancement Act of 2009, was introduced in the House of Representatives; this bill, like the administration's legislative proposal, would authorize federal agencies to retain net proceeds from the transfer or sale of real property subject to further congressional action. Additionally, both the administration's legislative proposal and H.R. 2495 would establish a pilot program for the expedited disposal of federal real property.

Over the years, we have reported that prudent capital planning can help agencies make the most of limited resources, and that failure to make timely and effective capital acquisitions can result in acquisitions that cost more than anticipated, fall behind schedule, and fail to meet mission needs and goals. In addition, Congress and OMB have acknowledged the need to improve federal decision making in the area of capital investment. A number of laws enacted in the 1990s placed increased emphasis on improving capital decision-making practices, and OMB's Capital Programming Guide and its revisions to Circular A-11 have attempted to address the government's shortcomings in this area. However, we have continued to find limitations in OMB's efforts to improve capital planning governmentwide. Real property is one of the major types of capital assets that agencies acquire, and shortcomings in the capital planning and decision-making area therefore have clear implications for the administration's real property initiative. Yet while OMB staff said that agency asset management plans are supposed to align with their capital plans, OMB does not assess whether the plans are aligned. Moreover, we found that guidance for the asset management plans does not discuss how these plans should be linked with agencies' broader capital planning efforts outlined in the Capital Programming Guide. Without a clear linkage or crosswalk between the guidance for the two documents, agencies may not link them, and the relationship between real property goals specified in the asset management plans and longer-term capital plans may not be clear. In April 2007, we recommended that OMB, in conjunction with the FRPC, establish a clearer link between agencies' efforts under the real property initiative and broader capital planning guidance. According to OMB officials, OMB is currently considering options to strengthen agencies' application of the capital planning process as part of Circular A-11, with a focus on preventing cost overruns and schedule delays.
In 2007, we concluded that the executive order on real property management and the addition of real property to the PMA provided a good foundation for strategically managing federal real property and addressing long-standing problems. These efforts directly addressed the concerns we had raised in past high-risk reports about the lack of a governmentwide focus on real property management problems and generally constitute what we envisioned as a transformation strategy for this area. However, we found that these efforts were in the early stages of implementation, and the problems that led to our high-risk designation—excess property, repair backlogs, data issues, reliance on costly leasing, and security challenges—still existed. As a result, this area remains high risk until agencies show significant results in eliminating the problems by, for example, reducing inventories of excess facilities and making headway in addressing the repair backlog. While the prior administration took several steps to overcome some obstacles in the real property area, the obstacles posed by competing local, state, and political interests went largely unaddressed, and the linkage between the real property initiative and broader agency capital planning efforts is not clear. In 2007, we recommended that OMB, in conjunction with the FRPC, develop an action plan for how the FRPC will address these key problems. According to OMB officials, these key problems are among those being considered as OMB works with administration officials to assess overall real property priorities in order to establish a roadmap for further action. While reforms to date are positive, the new administration and Congress will be challenged to sustain reform momentum and reach consensus on how the ongoing obstacles should be addressed.

Madam Chair, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Mark Goldstein at (202) 512-2834 or by email at [email protected]. Key contributions to this testimony were also made by Keith Cunningham, Dwayne Curry, Susan Michal-Smith, Steven Rabinowitz, Kathleen Turner, and Alwynne Wilbur.

In April 2007, we found that adding real property asset management to the President's Management Agenda (PMA) had increased its visibility as a key management challenge and focused greater attention on real property issues across the government. As part of this effort, the Office of Management and Budget (OMB) identified goals for agencies to achieve in right-sizing their real property portfolios. To achieve these goals and gauge an agency's success in accurately accounting for, maintaining, and managing its real property assets so as to efficiently meet its goals and objectives, the administration established the real property scorecard in the third quarter of fiscal year 2004. The scorecard consists of 13 standards that agencies must meet to achieve the highest status—green—as shown in figure 1. These 13 standards include the 8 standards needed to achieve yellow status, plus 5 additional standards. An agency reaches green or yellow status if it meets all of the standards for success listed in the corresponding column in figure 1, and red status if it has any of the shortcomings listed in the column for red standards.
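The scorecard's status rules lend themselves to a compact illustration. The following is a minimal sketch of the green/yellow/red logic as described above; the standard identifiers are placeholders, not OMB's actual 13 standards, and the fallback to red when the yellow standards are unmet is our interpretation of the rules.

```python
# Minimal sketch of the OMB real property scorecard status logic
# described above. Standard identifiers are hypothetical placeholders.

YELLOW_STANDARDS = {f"yellow_{i}" for i in range(1, 9)}          # 8 standards
ADDITIONAL_STANDARDS = {f"additional_{i}" for i in range(1, 6)}  # 5 more
GREEN_STANDARDS = YELLOW_STANDARDS | ADDITIONAL_STANDARDS        # all 13

def scorecard_status(standards_met: set, red_shortcomings: set) -> str:
    """Return 'green', 'yellow', or 'red' for an agency's quarterly rating."""
    if red_shortcomings:                   # any red shortcoming forces red
        return "red"
    if GREEN_STANDARDS <= standards_met:   # all 13 standards met
        return "green"
    if YELLOW_STANDARDS <= standards_met:  # the 8 yellow standards met
        return "yellow"
    return "red"

# Example: an agency meeting only the 8 yellow standards rates yellow.
print(scorecard_status(set(YELLOW_STANDARDS), set()))  # -> yellow
```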
In January 2003, GAO designated federal real property as a high-risk area because of long-standing problems with excess and underutilized property, deteriorating facilities, unreliable real property data, overreliance on costly leasing, and security challenges. In January 2009, GAO found that agencies had taken some positive steps to address real property issues but that some of the core problems that led to the designation of this area as high risk persist. This testimony focuses on (1) progress made by major real property-holding agencies to strategically manage real property, (2) ongoing problems GAO has identified in recent work regarding agencies' efforts to address real property issues, and (3) underlying obstacles GAO has identified through prior work as hampering agencies' real property reform efforts governmentwide.

OMB and real property-holding agencies have made progress in strategically managing real property. In response to an administration reform initiative and a related executive order, agencies have, among other things, established asset management plans, standardized data, and adopted performance measures. According to OMB, the federal government disposed of excess real property valued at $1 billion in fiscal year 2008, bringing the total to over $8 billion since fiscal year 2004. OMB also reported success in developing a comprehensive database of federal real property assets and implemented a GAO recommendation to improve the reliability of the data in this database by developing a framework to validate these data. GAO also found that the Department of Veterans Affairs has made significant progress in reducing underutilized space. In another report, GAO found that the six agencies reviewed have processes in place to prioritize maintenance and repair items. While these actions represent positive steps, some of the long-standing problems that led GAO to designate this area as high risk persist. Although GAO's work over the years has shown that building ownership often costs less than operating leases, especially for long-term space needs, in 2008 the General Services Administration (GSA), which acts as the government's leasing agent, leased more property than it owned for the first time. Given GSA's ongoing reliance on leasing, it is critical that GSA manage its leasing activities effectively. However, in January 2007, GAO identified numerous areas that warranted improvement in GSA's implementation of four contracts for national broker services for its leasing program. GSA has implemented 7 of GAO's 11 recommendations to improve these contracting efforts. Although GAO is encouraged by GSA's actions on these recommendations, GAO has not evaluated their impact. Moreover, in more recent work, GAO has continued to find that the government's real property data are not always reliable and that agencies continue to retain excess property and face challenges from repair and maintenance backlogs. Regarding security, GAO testified on July 8, 2009, that preliminary results show that the ability of the Federal Protective Service (FPS), which provides security services for about 9,000 GSA facilities, to protect federal facilities is hampered by weaknesses in its contract security guard program. Among other things, GAO investigators carrying the components for an improvised explosive device successfully passed undetected through security checkpoints monitored by FPS's guards at each of the 10 federal facilities where GAO conducted covert testing.
As GAO has reported in the past, real property management problems have been exacerbated by deep-rooted obstacles that include competing stakeholder interests, various budgetary and legal limitations, and weaknesses in agencies' capital planning. While reforms to date are positive, the new administration and Congress will be challenged to sustain reform momentum and reach consensus on how such obstacles should be addressed.
TRICARE has three options for its eligible beneficiaries: TRICARE Prime, a program in which beneficiaries enroll and receive care in a managed network similar to a health maintenance organization (HMO); TRICARE Extra, a program in which beneficiaries receive care from a network of preferred providers; and TRICARE Standard, a fee-for-service program that requires no network use. The programs vary according to the amount beneficiaries must contribute towards the cost of their care and according to the choices beneficiaries have in selecting providers. In TRICARE Prime, the program in which active duty personnel must enroll, the beneficiaries must select a primary care manager (PCM) who either provides care or authorizes referrals to specialists. Most beneficiaries who enroll in TRICARE Prime select their primary care providers from MTFs, while other enrollees select their PCMs from the civilian network. Regardless of their status—military or civilian—PCMs may refer Prime beneficiaries to providers in either MTFs or TRICARE’s civilian provider network. Both TRICARE Extra and TRICARE Standard require co-payments, but beneficiaries do not enroll with or have their care managed by PCMs. Beneficiaries choosing TRICARE Extra use the same civilian provider network available to those in TRICARE Prime, and beneficiaries choosing TRICARE Standard are not required to use providers in any network. For these beneficiaries, care can be provided at an MTF when space is available. DOD employs four civilian health care companies or managed care support contractors (contractors) that are responsible for developing and maintaining the civilian provider network that complements the care delivered by MTFs. The contractors recruit civilian providers into a network of PCMs and specialists who provide care to beneficiaries enrolled in TRICARE Prime. This network also serves as the network of preferred providers for beneficiaries who use TRICARE Extra. In 2002, contractors reported that the civilian network included about 37,000 PCMs and 134,000 specialists. The contractors are also responsible for ensuring adequate access to health care, referring and authorizing beneficiaries for health care, educating providers and beneficiaries about TRICARE benefits, ensuring providers are credentialed, and processing claims. In their network agreements with civilian providers, contractors establish reimbursement rates and certain requirements for submitting claims. Reimbursement rates cannot be greater than Medicare rates unless DOD authorizes a higher rate. DOD’s four contractors manage the delivery of care to beneficiaries in 11 TRICARE regions. DOD is currently analyzing proposals to award new civilian health care contracts, and when they are awarded in 2003, DOD will reorganize the 11 regions into 3—North, South, and West—with a single contract for each region. Contractors will be responsible for developing a new civilian provider network that will become operational in April 2004. Under these new contracts DOD will continue to emphasize maximizing the role of MTFs in providing care. The Office of the Assistant Secretary of Defense for Health Affairs (Health Affairs) establishes TRICARE policy and has overall responsibility for the program. The TRICARE Management Activity (TMA), under Health Affairs, is responsible for awarding and administering the TRICARE contracts. DOD has delegated oversight of the provider network to the local level through the regional TRICARE lead agent. 
The lead agent for each region coordinates the services provided by MTFs and civilian network providers. The lead agents respond to direction from Health Affairs but report directly to their respective Surgeons General. In overseeing the network, lead agents have staff assigned to MTFs who interact locally with contractor representatives, respond to beneficiary complaints as needed, and report back to the lead agent. DOD's contracts for civilian health care are intended to enhance and support MTF capabilities in providing care to millions of TRICARE beneficiaries. Contractors are required to establish and maintain the network of civilian providers in all catchment areas, at base realignment and closure sites, in other contract-specified areas, and in noncatchment areas where a contractor deems it cost-effective. In the remaining areas, a network is not required. DOD requires that contractors have a sufficient number and mix of providers, both primary care and specialists, necessary to satisfy the needs of beneficiaries enrolled in the Prime option. Specifically, it is the responsibility of the contractors to ensure that the network has at least one full-time equivalent PCM for every 2,000 TRICARE Prime enrollees and one full-time equivalent provider (both PCMs and specialists) for every 1,200 TRICARE Prime enrollees. In addition, DOD has access-to-care standards that are designed to ensure that Prime beneficiaries receive timely care. The access standards require the following: appointment wait times shall not exceed 24 hours for urgent care, 1 week for routine care, or 4 weeks for well-patient and specialty care; office wait times shall not exceed 30 minutes for nonemergency care; and travel times shall not exceed 30 minutes for routine care and 1 hour for specialty care. DOD does not specify access standards for eligible beneficiaries who do not enroll in TRICARE Prime. However, DOD requires that contractors provide information to, and assist, all beneficiaries—regardless of which option they choose—in finding a participating provider in their area.

DOD has delegated oversight of the civilian provider network to the regional TRICARE lead agents. The lead agents told us they use the following tools and information to oversee the network.

Network Adequacy Reporting—Contractors are required to provide quarterly reports to the lead agents. The reports contain information on the status of the network, such as the number and type of specialists, a list of primary care managers, and data on adherence to the access standards. The reports may also contain information on steps the contractors have taken to address any network inadequacies.

Beneficiary Complaints—The complaints come directly from beneficiaries and through other sources, such as the contractor or MTFs.

In addition to these tools, lead agents periodically monitor contractor compliance by reviewing performance related to specific contract requirements, including requirements related to network adequacy. Lead agents also told us they periodically schedule reviews of special issues related to network adequacy, such as conducting telephone surveys of providers to determine whether they are accepting TRICARE patients. In addition, lead agents stated they meet regularly with MTF and contractor representatives to discuss network adequacy and access to care. If the lead agents determine that a network is inadequate, they have formal enforcement actions they may use to correct deficiencies.
However, lead agents told us that few of these actions have been issued. They said they prefer to address deficiencies informally rather than take formal actions, particularly in areas where they do not believe the contractor can correct the deficiency because of local market conditions. For example, rather than taking a formal enforcement action, one lead agent worked with the contractor to arrange for a specialist from one area to travel to another area periodically.

DOD's ability to effectively oversee—and thus guarantee the adequacy of—the TRICARE civilian provider network is hindered by (1) flaws in its required provider-to-beneficiary ratios, (2) incomplete reporting on beneficiaries' access to providers, and (3) the absence of a systematic assessment of complaints. Although DOD has required its network to meet established ratios of providers to beneficiaries, the ratios may underestimate the number of providers needed in an area. Similarly, although DOD has certain requirements governing beneficiary access to available providers, the information reported to DOD on this access is often incomplete, making it difficult to assess compliance with the requirements. Finally, when beneficiaries complain about availability or access in their network, these complaints can be directed to different DOD entities, with no guarantee that the complaints will be compiled and analyzed in the aggregate to identify possible trends or patterns and correct network problems.

In some cases, the provider-to-beneficiary ratios underestimate the number of providers, particularly specialists, needed in an area. This underestimation occurs because, in calculating the ratios, contractors do not always include the total number of Prime enrollees within the area. Instead, they base their ratio calculations on the total number of beneficiaries enrolled with civilian PCMs and do not count beneficiaries enrolled with PCMs in MTFs. The ratio is most likely to result in an underestimation of the need for providers in areas in which the MTF is a clinic or small hospital with a limited availability of specialists. Moreover, in reporting whether their network meets the established ratios, different contractors make different assumptions about the level of participation on the part of civilian network providers. These assumptions may or may not be accurate, and they have a significant effect on the number of providers required in the network. Contractors generally assume that between 10 and 20 percent of their providers' practices are dedicated to TRICARE Prime beneficiaries. Therefore, if a contractor assumes that 20 percent of all providers' practices are dedicated to TRICARE Prime rather than 10 percent, the contractor will need half as many providers in the network to meet the prescribed ratio standard (see the sketch following this discussion).

In the network adequacy reports we reviewed, managed care support contractors did not always report all the information required by DOD to assess compliance with the access standards. Specifically, for the network adequacy reports we reviewed from 5 of the 11 TRICARE regions, we found that contractors reported less than half of the required information on access standards for appointment wait, office wait, and travel times. Some contractors reported more information than others, but none reported all the required access information. Contractors said they had difficulties in capturing and reporting information to demonstrate compliance with the access standards.
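To make the ratio arithmetic above concrete, the following is a minimal sketch. The enrollment figures are hypothetical placeholders, and the calculation is an interpretation of the ratio standard as described above, not DOD's or the contractors' actual methodology.

```python
# Minimal sketch of the provider-to-beneficiary ratio arithmetic
# described above (enrollment figures are hypothetical).

PRIME_ENROLLEES_PER_PCM = 2000  # standard: 1 FTE PCM per 2,000 enrollees

def required_pcms(total_prime_enrollees: int,
                  civilian_enrollees_only: bool,
                  mtf_enrollees: int,
                  practice_share: float) -> float:
    """Number of network PCMs needed to meet the ratio standard.

    practice_share is the assumed fraction of each provider's practice
    dedicated to TRICARE Prime (contractors assumed 10 to 20 percent).
    """
    enrollees = total_prime_enrollees
    if civilian_enrollees_only:
        enrollees -= mtf_enrollees  # undercounts demand where MTFs are small
    full_time_equivalents = enrollees / PRIME_ENROLLEES_PER_PCM
    return full_time_equivalents / practice_share

total, mtf = 40_000, 15_000
for share in (0.10, 0.20):
    all_enrollees = required_pcms(total, False, mtf, share)
    civilian_only = required_pcms(total, True, mtf, share)
    print(f"practice share {share:.0%}: "
          f"{all_enrollees:.0f} PCMs counting all enrollees, "
          f"{civilian_only:.0f} counting only civilian-enrolled")
```

The example shows both flaws at once: counting only civilian-enrolled beneficiaries lowers the apparent need, and doubling the assumed practice share from 10 to 20 percent halves the required network size.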
Also on the access standards, two contractors collected some access information that the lead agents chose not to use. Most of the DOD lead agents we interviewed told us that because information on access standards is not fully reported, they monitor compliance with the access standards by reviewing beneficiary complaints. Beneficiaries can complain about access to care either orally or in writing to the relevant contractor, their local MTF, or the regional lead agent. Because beneficiary complaints are received through numerous venues, often handled informally on a case-by-case basis, and not centrally evaluated, it is difficult for DOD to assess the extent of any systemic access problems. The TRICARE Management Activity (TMA) has a central database of complaints it has received, but complaints directed to MTFs, lead agents, or contractors may not reach this database. While contractor and lead agent officials told us they have received few complaints about network problems, this small number of complaints could indicate either overall satisfaction with care or a general lack of knowledge about how or to whom to complain. Additionally, a small number of complaints, particularly when spread among many sources, limits DOD's ability to identify specific trends of systemic problems related to network adequacy within TRICARE. DOD and contractors have reported three factors that may contribute to network inadequacy: geographic location, low reimbursement rates, and administrative requirements. While reimbursement rates and administrative requirements may have created dissatisfaction among providers, it is not clear how much these factors have affected the network because the information the contractors provide to DOD is not sufficient to reliably measure network adequacy. DOD and contractors have reported regional shortages of certain types of specialists in rural areas. For example, they reported shortages of endocrinologists in the Upper Peninsula of Michigan and of dermatologists in New Mexico. Additionally, in some instances, TRICARE officials and contractors have reported difficulties in recruiting providers into the TRICARE Prime network because in some areas providers will not join managed care programs. For example, contractor network data indicate that there have been long-standing provider shortages in TRICARE in areas such as eastern New Mexico, where the lead agent stated that the providers in that area have repeatedly refused to join any network. According to contractor officials, TRICARE Prime providers have expressed concerns about decreasing reimbursement rates. In addition, there have been reported instances in which groups of providers have banded together and refused to accept TRICARE patients because of their concerns with low reimbursement rates. One contractor identified low reimbursement rates as the most frequent cause of provider dissatisfaction. In addition to provider complaints, beneficiary advocacy groups, such as the Military Officers Association of America (MOAA), have cited numerous instances of providers refusing care to beneficiaries because of low reimbursement rates. By statute, DOD generally cannot pay TRICARE providers more than they would be paid under the Medicare fee schedule. In certain situations, DOD has the authority to pay network providers up to 115 percent of the Medicare fee. DOD's authority is limited to instances in which it has determined that access to health care is severely impaired within a locality.
In 2000, DOD increased reimbursement rates in rural Alaska in an attempt to entice more providers to join the network, but the new rates did not increase provider participation. In 2002, DOD increased reimbursement rates to 115 percent of the Medicare rate for the rest of Alaska. In 2003, DOD increased the rates for selected specialists in Idaho to address documented network shortcomings. In 1997, DOD also increased reimbursement rates for obstetrical care. These cases represent the only instances in which DOD has used its authority to pay above the Medicare rate. Because Medicare fees declined in 2002 and may decline further, some contractors are concerned that reimbursement rates may undermine the TRICARE network. Contractors also report that providers have expressed dissatisfaction with some TRICARE administrative requirements, such as credentialing, preauthorizations, and referrals. For example, many providers have complained about TRICARE's credentialing requirements. In TRICARE, a provider must be recredentialed every 2 years, compared with every 3 years in the private sector, and providers have said this cycle imposes cumbersome administrative requirements on them. Another widely reported concern about TRICARE administrative requirements relates to preauthorization and referral requirements. Civilian PCM providers are required to obtain preauthorizations from MTFs before referring patients for specialized care. While preauthorization is a standard managed care practice, providers complain that obtaining preauthorization adversely affects the quality of care provided to beneficiaries because it takes too much time. In addition, civilian PCMs have expressed concern that they cannot refer beneficiaries to the specialist of their choice because of MTFs' "right of first refusal," which gives an MTF discretion to care for the beneficiary or refer the care to a civilian provider. Nevertheless, there are no direct data confirming that low reimbursement rates or administrative burdens translate into widespread network inadequacies. We found that of the 2,156 providers who left one contractor's network during a 1-year period, 900 cited reasons for leaving; only 10 percent of these providers identified low reimbursement rates as a factor, and only 1 percent cited administrative burdens. DOD's new contracts for providing civilian health care, called TNEX, may address some network concerns raised by providers and beneficiaries but may create other areas of concern. Because the new contracts are not expected to be finalized until June 2003, the specific mechanisms DOD and the contractors will use to ensure network adequacy are not known. DOD plans to retain the access standards for appointment and office wait times, as well as the travel-time standards. However, instead of using provider-to-beneficiary ratios to measure network adequacy, TNEX requires that the network complement the clinical services provided by MTFs and promote access, quality, beneficiary satisfaction, and best value health care for the government. TNEX does not, however, specify how these goals will be measured. TNEX may reduce the administrative burden related to provider credentialing and patient referrals. Currently, TRICARE providers must follow TRICARE-specific requirements for credentialing. In contrast, TNEX will allow network providers to be credentialed through a nationally recognized accrediting organization. DOD officials stated this approach is more in line with industry practices.
Patient referral procedures will also change under TNEX. Referral requirements will be reduced, but the MTFs will still retain the "right of first refusal." On the other hand, TNEX may create a new administrative concern for contractors and providers by requiring that 100 percent of network claims submitted by providers be filed electronically. In fiscal year 2002, only 25 percent of processed claims were submitted electronically. Contractors stated that such a requirement could discourage providers from joining or staying in their network. DOD maintains, however, that electronic filing will reduce claims-processing costs.
During 2002, in testimony to the House Armed Services Committee, Subcommittee on Personnel, beneficiary groups described problems with access to care from TRICARE's civilian providers, and providers testified about their dissatisfaction with the TRICARE program, citing low reimbursement rates and administrative burdens. The Bob Stump National Defense Authorization Act for Fiscal Year 2003 required that GAO review DOD's oversight of TRICARE's network adequacy. In response, GAO is (1) describing how DOD oversees the adequacy of the civilian provider network, (2) assessing DOD's oversight of the adequacy of the civilian provider network, (3) describing the factors that may contribute to potential network inadequacy or instability, and (4) describing how the new contracts, expected to be awarded in June 2003, might affect network adequacy. GAO's analysis focused on TRICARE Prime--the managed care component of the TRICARE health care delivery system. This testimony summarizes GAO's findings to date. A full report will be issued later this year. To oversee the adequacy of the civilian network, DOD has established standards that are designed to ensure that its network has a sufficient number and mix of providers, both primary care and specialists, necessary to satisfy TRICARE Prime beneficiaries' needs. In addition, DOD has standards for appointment wait, office wait, and travel times that are designed to ensure that TRICARE Prime beneficiaries have adequate access to care. DOD has delegated oversight of the civilian provider network to lead agents, who are responsible for ensuring that these standards have been met. DOD's ability to effectively oversee--and thus guarantee the adequacy of--the TRICARE civilian provider network is hindered in several ways. First, the measurement used to determine whether there is a sufficient number of providers for the beneficiaries in an area does not account for the actual number of beneficiaries who may seek care or the availability of providers. In some cases, this may result in an underestimation of the number of providers needed in an area. Second, incomplete contractor reporting on access to care makes it difficult for DOD to assess compliance with this standard. Finally, DOD does not systematically collect and analyze beneficiary complaints, which might assist in identifying inadequacies in the TRICARE civilian provider network. DOD and its contractors have reported three factors that may contribute to potential network inadequacy: geographic location, low reimbursement rates, and administrative requirements. However, the information the contractors provide to DOD is not sufficient to measure the extent to which the TRICARE civilian provider network is inadequate. While reimbursement rates and administrative requirements may have created dissatisfaction among providers, it is not clear that these factors have resulted in insufficient numbers of providers in the network. The new contracts, which are expected to be awarded in June 2003, may result in improved network participation by addressing some network providers' concerns about administrative requirements. For example, the new contracts may simplify requirements for provider credentialing and referrals, two administrative procedures providers have complained about. However, according to contractors, the new contracts may also create requirements that could discourage provider participation, such as the new requirement that 100 percent of network claims submitted by providers be filed electronically.
Currently, only about 25 percent of such claims are submitted electronically.
Many categories of legal immigrants are currently eligible for SSI and AFDC benefits. SSI provides benefits to three groups of needy individuals: the aged (65 years old and older), the blind, and the disabled. AFDC provides benefits to needy families with children. Immigrants eligible for assistance include those classified by the Immigration and Naturalization Service (INS) as lawful permanent residents. Also eligible for benefits are certain other legally admitted immigrants, classified by public assistance programs as permanently residing in the United States under color of law (PRUCOL). Under the SSI and AFDC programs, the PRUCOL category includes immigrants such as refugees, asylees, and certain others whose deportation INS does not plan to enforce. (See glossary.) However, the two programs differ slightly in which small categories of immigrants they treat as PRUCOL. Some legal immigrants are admitted into the country under the financial sponsorship of a U.S. resident. The Immigration and Nationality Act of 1952, as amended, provides for the exclusion of any alien who is likely to become a public charge. Aliens can show prospective self-sufficiency through (1) proof of sufficient personal resources, (2) an offer of a job with adequate compensation, (3) posting of a public charge bond, or (4) an affidavit of support submitted on their behalf by a sponsor who preferably is a U.S. citizen or permanent resident. By signing the affidavit of support, sponsors attest to their ability and willingness to provide financial assistance to the immigrant. However, several courts have ruled that these affidavits of support are not legally binding. Concerned about the number of sponsored immigrants receiving public assistance, the Congress amended program statutes to include a sponsor-to-alien deeming period; that is, if a sponsored immigrant applies for public assistance within a certain period after admission, a portion of the sponsor's income and resources is deemed, or assumed, to be available for the immigrant's use (whether or not it is available in fact). This deeming provision is used to determine eligibility as well as benefit amount. For the AFDC program, this period is 3 years after admission to the United States as a permanent resident. In 1993, the deeming period for the SSI program was temporarily extended from 3 to 5 years, effective from January 1994 through September 1996. The deeming provisions do not apply if an immigrant becomes blind or disabled after admission to the United States as a permanent resident. Affidavits of support were amended so that sponsors currently agree to provide financial support to the immigrant for 3 years. The Responsibility and Empowerment Support Program Providing Employment, Child Care and Training Act of 1994 (reintroduced in the 104th Congress as H.R. 4, the Personal Responsibility Act of 1995, by a group of Republicans as part of their "Contract With America") would eliminate most legal immigrants' eligibility for SSI and AFDC, as well as food stamps, Medicaid, foster care and adoption assistance, education programs, and numerous other public assistance programs. Two groups would remain eligible: (1) refugees in the country for fewer than 6 years and (2) lawful permanent residents who are 75 years old or older and who have been in the country 5 years or more. The provisions of this proposal would go into effect 1 year after enactment, with no grandfathering provision.
In contrast, the administration's proposal would increase, to 5 years, the period during which sponsors' incomes would be deemed available to immigrants receiving AFDC, SSI, or food stamps. After the fifth year, sponsored immigrants would still receive benefits if their sponsor's income was below the U.S. median income. The proposal would become effective as of October 1995 and contains a grandfather clause protecting current recipients. Table 1 provides more detailed information on these two proposals. To determine the number and characteristics of immigrants receiving SSI and AFDC benefits, we analyzed data from SSI and AFDC administrative files, as well as published data from INS and the Bureau of the Census. To identify trends in immigrant and citizen use of SSI and AFDC, we reviewed published administrative data from 1983 through 1993. We used published data from INS's 1992 and 1993 Statistical Yearbooks and the March 1994 Supplement of the Census Bureau's Current Population Survey (CPS) to provide background on overall immigration. The March 1994 CPS reports recipiency data for 1993. To identify the characteristics of immigrant recipients who could lose benefits under the proposals, we reviewed current SSI and AFDC policies and four key welfare reform proposals. We also analyzed 1993 AFDC administrative data and SSI administrative data for December 1993. In addition, we reviewed a published study by the Social Security Administration (SSA) that included information about immigrants' use of SSI's aged, blind, and disabled benefits. INS defines immigrants as lawful permanent residents. For the purposes of this report, we also included as immigrants other categories of noncitizens who are eligible for SSI or AFDC: refugees, asylees, aliens granted stay of deportation by INS, and other PRUCOL individuals. We analyzed immigrant recipients' immigration status, length of time in the United States, and age—key characteristics in determining eligibility under the welfare reform proposals. To determine the impact of restricting or eliminating benefits for immigrants, we reviewed four key welfare reform proposals. We used H.R. 4 and the administration's proposal as examples of the range of options available. To assess the impact of the proposals on immigrants and their families, we interviewed officials from the SSI and AFDC programs and from INS, researchers from public policy groups, and state and local government officials with information about immigrants' use of assistance programs. Overall, immigrants as a group are more likely than citizens to be receiving SSI or AFDC benefits. Based on CPS data, immigrants receiving SSI or AFDC represented about 6 percent of all immigrants in 1993; in contrast, about 3.4 percent of citizens received such assistance. However, the total number of immigrants receiving SSI or AFDC is much lower than the number of citizens because legal immigrants represent only about 6 percent of the U.S. population. Based on 1993 administrative data, an estimated 18.6 million citizens received SSI or AFDC, compared with an estimated 1.4 million legal immigrants. Much of the difference in recipiency rates between immigrants and citizens can be explained by differences in their demographic characteristics and household composition. Immigrants are much more likely than citizens to be poor. In 1993, about 29 percent of immigrant households reported incomes below the poverty line, compared with 14 percent of citizen households.
Researchers have noted that immigrant households have larger numbers of small children and elderly or disabled persons and contain more members with relatively little schooling and low skill levels. These are all characteristics that increase the likelihood of welfare recipiency. Public policy has also played a role in immigrants' receipt of public assistance. Refugees and asylees are categories of immigrants who are much more likely to be on welfare than citizens or other immigrants. By virtue of their refugee or asylee status alone, they qualify immediately for assistance programs that may be restricted for other immigrants. Almost 83 percent of all immigrants receiving SSI or AFDC in 1993 resided in four states: California, New York, Florida, and Texas. This is not surprising given that over 68 percent of all immigrants resided in these states. Over one-half of the immigrants receiving these benefits lived in California. (See table 2.) Immigrants have increased dramatically as a percentage of all SSI recipients. The percentage of SSI recipients who were immigrants nearly tripled between 1983 and 1993, rising from 3.9 to 11.5 percent. This rise occurred because the number of immigrants receiving SSI grew at a much faster rate than the number of citizen recipients: the number of immigrants receiving SSI increased from 151,207 to 683,178, while the number of citizen recipients increased from approximately 3,750,300 to 5,301,200. In total, immigrants received an estimated $3.3 billion in SSI benefits in 1993. Between 1983 and 1993, the number of immigrants receiving aged benefits nearly quadrupled (from 106,600 to 416,420), while the number of citizens receiving aged benefits decreased by 25 percent (from 1,408,800 to 1,058,432). Consequently, aged immigrant recipients grew from 7.0 to 28.2 percent of all aged recipients. Over the same period, the number of immigrants receiving disabled benefits increased sixfold (from 44,600 to 266,730), while the number of citizens receiving disabled benefits increased by 81 percent (from approximately 2,341,500 to 4,242,800). As a result, the percentage of disabled recipients who were immigrants more than tripled, rising from 1.9 to 5.9 percent. (See fig. 1.) Immigrants as a percentage of all AFDC recipients grew at a lower rate than immigrants receiving SSI benefits. The immigrant share of adult AFDC recipients increased from 5.5 to 10.8 percent between 1983 and 1993. In 1993, almost 722,000 immigrants, including adults and children, received an estimated $1.2 billion in AFDC benefits. The characteristics of immigrants receiving SSI and AFDC differ, but data limitations prevent a complete analysis. Available data show that SSI immigrant recipients are more likely than citizens to be 75 years old or older—the age that H.R. 4 uses to determine eligibility. Most AFDC families containing immigrant recipients also contain citizen recipients. Only the immigrants in these families would lose benefits under some of the proposals—the citizen members of these families would remain eligible. Compared with SSI recipients, AFDC recipients are more likely to be refugees. However, available data provide an incomplete picture of immigrant recipients, and even less is known about their sponsors. As noted earlier, immigrants account for an increasingly greater percentage of the SSI aged program. Moreover, immigrant recipients are more likely than citizen recipients to be 75 years old or older.
In 1993, 26 percent of immigrants receiving SSI aged benefits were 75 years old or older; in contrast, 15.3 percent of citizen SSI recipients were 75 years old or older. Most immigrants receiving SSI are lawful permanent residents, and many have been in the country for over 5 years. Among immigrants receiving SSI benefits in 1993, over 76 percent were lawful permanent residents, 18 percent were refugees or asylees, and 6 percent were other PRUCOL immigrants. (See fig. 2.) Of the lawful permanent residents, over 56 percent had been in the country for 5 years or longer, and about 10 percent were 75 years old or older and had been in the country for 5 years or longer. Over 14 percent of refugees had been in the country for 6 years or longer. Questions have been raised about the growing number of elderly immigrants receiving SSI and the extent to which these immigrants entered the United States with a financial sponsor. While we cannot determine the extent to which immigrants receiving SSI are sponsored, SSA's data suggest that some immigrants apply for SSI benefits shortly after a deeming period would have expired. Analyses by SSA researchers indicate that about 25 percent of lawful permanent residents who applied for SSI benefits since 1980 applied soon after 3 years of U.S. residency; that is, soon after the sponsor's promise of support would have expired. Discussing the immigration status of AFDC recipients is complicated because AFDC is a family-based benefit, and each family member can have a different immigration status. Most AFDC households containing at least one immigrant also contain a citizen. Of AFDC households with at least one immigrant recipient, only about 19 percent contained no citizen (that is, all members of the household were immigrants). By contrast, over 64 percent were headed by an immigrant adult with at least one citizen child, and in about 9 percent of the households containing immigrants, at least one adult and at least one child were citizens. (See fig. 3.) Most immigrants receiving AFDC were either lawful permanent residents or refugees or asylees. Data on all immigrant recipients showed that 65.3 percent—over 471,000—were lawful permanent residents and almost 32.5 percent—over 234,000—were refugees or asylees. The remaining immigrant recipients fall into other PRUCOL categories. (See fig. 4.) No one source provides all the data needed to fully describe the characteristics of immigrants receiving benefits or of their sponsors. Administrative data from the SSI and AFDC programs may not reflect a recipient's current immigration status if the status changed and the immigrant did not notify the agency. For example, lawful permanent residents can become citizens after residing in the United States for 5 years and meeting other INS criteria, and refugees and asylees can become lawful permanent residents after 1 year of residency in the United States. Further, AFDC administrative data do not contain information on how long an immigrant recipient has resided in the United States. Additionally, computerized data on sponsors, their incomes, the amount of financial support they provide, and the number of immigrants they are sponsoring are not available from administrative sources or INS. SSI's new automated application system collects information on sponsors, but it cannot currently aggregate the data for national analyses. The AFDC program does not have any computerized data on sponsors of immigrant recipients.
INS collects this information when an immigrant first enters the country, but the data are not computerized. Given these data limitations, we were unable to assess the extent to which immigrants rely on sponsors for financial assistance or to determine sponsors' ability to support sponsored immigrants. The estimated number of immigrants affected by welfare reform proposals varies. H.R. 4, which eliminates eligibility for certain categories of immigrants, would eliminate benefits for the largest number of immigrant recipients. The impact of the administration's proposal, which would increase the sponsor's responsibility for supporting immigrants, is difficult to determine. Last year, CBO estimated cost savings for these two proposals. If these proposals were enacted, immigrants might change their behavior by, for example, applying for state-funded public assistance, naturalizing more quickly, or changing their immigration patterns. Under H.R. 4, only two groups of immigrants would remain eligible for benefits—refugees residing in the country fewer than 6 years and lawful permanent residents 75 years old or older who have resided in the United States for at least 5 years. An estimated 522,000 immigrants receiving SSI and an estimated 492,000 immigrants receiving AFDC—mostly lawful permanent residents—are in categories that would lose eligibility under this proposal. In addition, some of the approximately 230,000 refugee recipients may no longer be eligible if they have resided in the United States for 6 years or longer. CBO estimated that federal savings from this proposal for the SSI and AFDC programs would be $9.2 billion and $1 billion, respectively, over the period 1996-99. Adjusting administrative data to account for naturalizations, CBO estimated that 390,000 immigrants receiving SSI and 400,000 immigrants receiving AFDC would lose eligibility under this proposal. Greater federal savings are expected from the SSI program because (1) SSI provides a higher average monthly benefit per person ($407 for SSI immigrants, compared with $133 for AFDC) and (2) SSI benefits are solely a federal expenditure, while AFDC costs are shared between the federal government and the states. (A rough sketch of this arithmetic follows this section.) CBO estimated federal savings of $21.7 billion from all the public assistance programs affected by this proposal, including the SSI, AFDC, Food Stamp, and Medicaid programs. Determining the impact of extending the amount of time a sponsor's income is deemed available to the immigrant is difficult because of a lack of computerized data on sponsors, their incomes, and the number of immigrants they are sponsoring. Recognizing these limitations, CBO estimated that the administration's proposal would save nearly $2.9 billion in SSI, Medicaid, and AFDC benefits over the next 4 years and that more than 80 percent of these savings would come from the SSI program. An additional $400 million over 4 years would be saved by tightening SSI, Medicaid, and AFDC eligibility standards for immigrants to conform with stricter Food Stamp program criteria. While determining exactly how immigrant recipients would be affected by the various welfare reform proposals is difficult, these changes may have some effect on immigrants' behavior. No studies have quantified these effects; however, experts have suggested a number of possible outcomes. For example, some immigrants who lose eligibility may find themselves financially worse off.
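Before turning to behavioral effects, the relative magnitudes cited above can be checked with simple arithmetic. The sketch below is our own back-of-the-envelope illustration, not CBO's methodology, which also models caseload dynamics, naturalization behavior, and effective dates.

```python
# Back-of-the-envelope check on why H.R. 4's SSI savings dwarf its AFDC
# savings. Illustration only: CBO's estimates also reflect caseload
# dynamics, naturalization behavior, and effective dates.

ssi_losing, afdc_losing = 390_000, 400_000  # CBO-adjusted counts losing eligibility
ssi_monthly, afdc_monthly = 407, 133        # average monthly benefit per immigrant ($)
years = 4

ssi_gross = ssi_losing * ssi_monthly * 12 * years     # ~$7.6 billion, all federal
afdc_gross = afdc_losing * afdc_monthly * 12 * years  # ~$2.6 billion, shared with states

# The federal share of AFDC is 50 to about 80 percent of benefit costs,
# so federal AFDC savings are smaller still; CBO's published figures were
# $9.2 billion (SSI) and $1 billion (AFDC) over 1996-99.
print(f"SSI ~${ssi_gross / 1e9:.1f}B, AFDC ~${afdc_gross / 1e9:.1f}B over {years} years")
```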
Other immigrants who lose eligibility may find ways to increase their income by increasing their work effort or by relying more heavily on their sponsors (if they have one) for financial support. Also, immigrants may supplement their incomes by applying for state-funded public assistance or seek changes in their naturalization status that would result in the reinstatement of their benefits. Immigrants who lose eligibility for federal welfare programs may turn to state-funded public assistance programs, thus shifting costs to the states. State general assistance programs could not deny benefits to legal immigrants who lose federal eligibility: under the 1971 Supreme Court ruling in Graham v. Richardson, states cannot categorically restrict legal immigrants from receiving state benefits. As of 1992, state- or county-funded public assistance programs were operating in 42 states. California and New York, two states that have high concentrations of immigrants on public assistance and that operate state general assistance programs, could be greatly affected. As a result, the savings that states would accrue from their reduced share of AFDC benefits to immigrants could be offset by increased costs for state-funded general assistance. Immigrants may also change their naturalization and immigration patterns. Eliminating or restricting benefit eligibility may prompt more immigrants to become citizens to retain their eligibility, according to an Urban Institute study. CBO's $21.7 billion cost-savings estimate takes higher naturalization rates into account; however, the proposal's effect on naturalization rates is difficult to predict, and naturalization rates even higher than CBO assumed would lower actual program savings. Restricting legal immigrants' eligibility for benefits may also have longer-term effects on the number and composition of immigrants entering this country. Eliminating benefits for most legal immigrants could prompt some prospective immigrants to reconsider their decision to seek residence in this country. In addition, according to an INS official, potential sponsors may reconsider whether to assist others in entering this country if doing so could result in additional financial responsibility for the sponsor. As agreed with your office, we did not obtain written agency comments, but we did discuss the report with program officials at HHS, SSA, and INS. We also discussed the contents of the report with the Congressional Research Service, CBO, and other relevant research organizations. The officials generally agreed with the contents of this report but made some technical comments that we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Health and Human Services, the Commissioner of the Social Security Administration, the Commissioner of INS, and other interested parties. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix I. In addition to those named above, the following individuals made important contributions to this report: John Vocino, Senior Evaluator; Alicia Puente Cackley, Senior Economist; C. Robert DeRoy, Assistant Director (Computer Science); Paula A. Bonin, Senior Computer Specialist; Vanessa R. Taylor, Senior Evaluator (Computer Science); Steven R. Machlin, Senior Statistician.
Affidavit of support: Signed by a sponsor for an immigrant as an assurance that the immigrant will not become a public charge. Aliens found likely to become a public charge may not be admitted into the United States under the Immigration and Nationality Act.

Aid to Families with Dependent Children (AFDC): The AFDC program provides cash welfare payments for (1) needy children who have been deprived of parental support or care because their father or mother is absent from the home continuously, incapacitated, deceased, or unemployed; and (2) certain others in the household of a child recipient. Benefits may also be provided to needy women in the third trimester of their pregnancy. States define need, set their own benefit levels, establish income and resource limits, and supervise or administer the program. Federal funds pay from 50 to about 80 percent of the AFDC benefit costs in a state and 50 percent of administration costs.

Asylee: INS defines an asylee as an alien in the United States or at a port of entry who is unable or unwilling to return to his or her country of nationality because of persecution or a well-founded fear of persecution. Persecution or the fear thereof may be based on the alien's race, religion, nationality, membership in a particular social group, or political opinion. Asylees apply for status after entering the United States and are eligible to adjust to lawful permanent resident status after 1 year of continuous presence in the United States.

Deeming: If a sponsored immigrant applies for public assistance, the income and resources of the sponsor will be considered, or deemed, to be available to the sponsored immigrant, regardless of whether they are in fact available to the immigrant.

General assistance: State and locally funded programs designed to provide basic benefits to low-income people who are not eligible for federally funded cash assistance. States, counties, or other local governmental units determine general assistance benefit levels, eligibility criteria, and length of eligibility.

Lawful permanent resident: INS defines lawful permanent residents as persons lawfully accorded the privilege of residing permanently in the United States. They may be issued immigrant visas by the Department of State overseas or adjusted to permanent resident status by INS in the United States. Generally, a lawful permanent resident can apply for naturalization to become a U.S. citizen after living in the United States continuously for 5 years.

Naturalization: INS defines naturalization as the conferring, by any means, of citizenship upon a person after birth. Immigrants must meet certain requirements to be eligible to become naturalized citizens. Generally, they must be at least 18 years old, have been lawfully admitted for permanent residence, and have resided in the United States continuously for at least 5 years. They must also be able to speak, read, and write the English language; demonstrate a knowledge of U.S. government and history; and have good moral character.

PRUCOL: This term refers to immigrants who are considered "permanently residing under color of law." PRUCOL is not an immigration status provided by INS; rather, it is a term that covers many alien statuses and is used for the purpose of determining eligibility for AFDC, SSI, and Medicaid.

Refugee: INS defines a refugee as any person who is outside his or her country of nationality and unable or unwilling to return to that country because of persecution or a well-founded fear of persecution. Persecution or the fear thereof may be based on the alien's race, religion, nationality, membership in a particular social group, or political opinion.
Refugees apply for status outside the United States; they are eligible to adjust to lawful permanent resident status after 1 year of continuous presence in the United States.

Supplemental Security Income (SSI): The SSI program is a means-tested, federally administered income assistance program authorized by title XVI of the Social Security Act. Begun in 1974, SSI provides monthly cash payments in accordance with uniform, nationwide eligibility requirements to needy aged, blind, and disabled persons. The aged are defined as persons 65 years old and older. The blind are individuals with 20/200 vision or less with the use of a correcting lens in the person's better eye or those with tunnel vision of 20 degrees or less. Disabled individuals are those unable to engage in any substantial gainful activity by reason of a medically determined physical or mental impairment expected to result in death or that has lasted, or can be expected to last, for a continuous period of at least 12 months. Some states supplement federal SSI payments with state funds.

Sponsor: A sponsor is a person who has signed an affidavit of support on behalf of an alien seeking permanent residence in the United States.
Pursuant to a congressional request, GAO reviewed the effect of proposed welfare reform legislation on legal immigrant welfare recipients, focusing on: (1) legal immigrants' and citizens' use of the Supplemental Security Income (SSI) and Aid to Families with Dependent Children (AFDC) programs; (2) the numbers of legal immigrants receiving SSI or AFDC benefits; (3) the immigrant recipients that could lose benefits under the welfare reform proposals; and (4) the possible impacts of restricting immigrants' SSI and AFDC benefits on federal welfare programs. GAO found that: (1) a greater percentage of legal immigrants receive SSI or AFDC benefits than do citizens; (2) immigrants tend to be poorer than citizens and have more small children, more elderly or disabled family members, and more family members with minimal education and skill levels; (3) the number of immigrants receiving SSI benefits more than quadrupled between 1983 and 1993, and these immigrants now comprise over 11 percent of all SSI recipients; (4) legal immigrants received an estimated $1.2 billion in AFDC benefits in 1993; (5) most immigrant recipients are lawful permanent residents or refugees, and immigrant SSI recipients are more likely than citizen recipients to be 75 years old or older; (6) one welfare reform proposal would save $9.2 billion in SSI benefits and $1 billion in AFDC benefits over 4 years by dropping about 500,000 immigrant recipients from each program; (7) the Administration's proposal would affect fewer immigrants, extend the length of sponsorship, and tighten eligibility standards; (8) the two welfare reform proposals could save between $3.3 billion and $21.7 billion over 4 years; and (9) the loss of benefits could cause immigrants to change their immigration, work, and naturalization patterns or to turn to state welfare programs for support.
Before originating a residential mortgage loan, a lender assesses the risk of making the loan through a process called underwriting, in which the lender generally examines the borrower's credit history and capacity to pay back the mortgage and obtains a valuation of the property to be used as collateral for the loan. Lenders need to know the property's market value, that is, the probable price the property should bring in a competitive and open market, to assess their potential loss exposure if the borrower defaults. Lenders also use the value to calculate the loan-to-value ratio, which represents the proportion of the property's value being financed by the mortgage and is an indicator of the loan's risk level. (A worked loan-to-value example follows this background discussion.) Real estate can be valued using a number of methods, including appraisals, broker price opinions (BPO), and automated valuation models (AVM). Appraisals—the valuation method used in the large majority of mortgage transactions—are opinions of value based on market research and analysis as of a specific date. Appraisals are performed by state-licensed or -certified appraisers who are required to follow the Uniform Standards of Professional Appraisal Practice (USPAP). A BPO is an estimate of the probable selling price of a particular property prepared by a real estate broker, agent, or salesperson rather than by an appraiser. An AVM is a computerized model that estimates property values using public record data, such as tax records and information kept by county recorders, multiple listing services, and other real estate records. In 1986, the House Committee on Government Operations issued a report concluding that problematic appraisals were an important contributor to the losses that the federal government suffered during the savings and loan crisis. The report states that hundreds of savings and loans chartered or insured by the federal government were severely weakened or declared insolvent because faulty and fraudulent real estate appraisals provided documentation for loans larger than justified by the collateral's real value. In response, Congress incorporated provisions in Title XI of FIRREA that were intended to ensure that appraisals performed for federally related transactions were done (1) in writing, in accordance with uniform professional standards, and (2) by individuals whose competency has been demonstrated and whose professional conduct is subject to effective supervision. Various private, state, and federal entities have roles in the Title XI regulatory structure: The Appraisal Foundation. The Appraisal Foundation is a private not-for-profit corporation composed of groups from the real estate industry that works to foster professionalism in appraising. The foundation sponsors two independent boards with responsibilities under Title XI. The first of these, the Appraisal Standards Board, sets forth rules for developing an appraisal and reporting its results through USPAP. Title XI requires real estate appraisals performed in conjunction with federally related transactions to follow USPAP. The second board, the Appraiser Qualifications Board, establishes the minimum qualification criteria for state certification and licensing of real property appraisers. Title XI requires all state-licensed and -certified appraisers to meet the minimum education, experience, and examination requirements promulgated by the Appraiser Qualifications Board.
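As a concrete illustration of the loan-to-value calculation noted above, the following minimal sketch uses hypothetical dollar figures; the inflated-appraisal scenario shows why faulty valuations matter to underwriting.

```python
# Minimal sketch of the loan-to-value (LTV) calculation used in
# underwriting. All dollar figures below are hypothetical.

def loan_to_value(loan_amount: float, appraised_value: float) -> float:
    """Share of the property's value financed by the mortgage."""
    return loan_amount / appraised_value

loan = 180_000

# Against an accurate valuation, the loan finances 90 percent of the value.
print(f"{loan_to_value(loan, 200_000):.0%}")  # 90%

# An inflated appraisal makes the same loan look safer: reported LTV falls
# to 75 percent even though the lender's true exposure is unchanged.
print(f"{loan_to_value(loan, 240_000):.0%}")  # 75%
```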
The Appraisal Foundation disseminates information regarding USPAP and the appraiser qualification criteria, which are periodically revised and updated, to state and federal regulators, appraisers, users of appraisal services, and the general public. The foundation is funded primarily by sales of publications but also receives an annual grant from ASC. State-level regulatory entities. Title XI relies on the states to (1) implement the certification and licensing of all real estate appraisers and (2) monitor and supervise appraisers' compliance with appraisal standards and requirements. To assure the availability of certified and licensed appraisers, all 50 states, the District of Columbia, and four U.S. territories have adopted structures to regulate and supervise the appraisal industry. These structures typically consist of a state regulatory agency coupled with a board or commission that establishes education and experience requirements (consistent with or in excess of Appraiser Qualifications Board criteria), licenses and certifies appraisers, and monitors and enforces appraiser compliance. These regulatory agencies generally oversee the activities of appraisers for all types of transactions, including those that are federally related. Federal financial institutions regulators. Title XI places responsibility for regulating appraisals and "evaluations" performed in conjunction with federally related transactions with the Federal Reserve, FDIC, OCC, and NCUA. To meet this responsibility, these financial institution regulators have established requirements for appraisals and evaluations through regulations and have jointly issued Interagency Appraisal and Evaluation Guidelines. Among other things, appraisals for federally related transactions must, at a minimum, provide an estimate of market value, conform to USPAP, be in writing, and contain sufficient information and analysis to support the institution's decision to engage in the transaction. By regulation, loans that qualify for sale to a U.S. government agency or U.S. government-sponsored agency and loans that are wholly or partially insured or guaranteed by such agencies are exempt from the appraisal requirements. In addition, loans that involve residential real estate transactions in which the appraisal conforms to Fannie Mae or Freddie Mac appraisal standards are exempt from these appraisal requirements. Under authority granted by Title XI, the federal regulators also have adopted regulations that exempt federally related transactions of $250,000 or less from appraisal requirements, meaning that the services of a licensed or certified appraiser are not required (although an evaluation must be performed). The regulations provide a similar appraisal exemption for real estate-secured business loans of $1 million or less that are not dependent on the sale of, or rental income derived from, real estate as the primary source of repayment. The regulations and guidelines also specify the types of policies and procedures lenders should have in place to help ensure independence and credibility in the valuation process. Additionally, the federal regulators have developed procedures for examining the real estate lending activities of regulated institutions that include steps for assessing the completeness, adequacy, and appropriateness of these institutions' appraisal and evaluation policies and procedures. Appraisal Subcommittee.
ASC has responsibility for monitoring the implementation of Title XI by the private, state, and federal entities discussed previously. Among other things, ASC is responsible for (1) monitoring and reviewing the practices, procedures, activities, and organizational structure of the Appraisal Foundation—including making grants to the foundation in amounts that it deems appropriate to help defray costs associated with its Title XI activities; (2) monitoring the requirements established by the states and their appraiser regulatory agencies for the certification and licensing of appraisers; (3) monitoring the requirements established by the federal financial institutions regulators regarding appraisal standards for federally related transactions and determinations of which federally related transactions will require the services of state-licensed or -certified appraisers; and (4) maintaining a national registry of state-licensed and -certified appraisers who may perform appraisals in connection with federally related transactions. Among other responsibilities and authorities, the Dodd-Frank Act requires ASC to implement a national appraiser complaint hotline and provides ASC with limited rulemaking authority. ASC provides an annual report to Congress on its activities and financial status in the preceding year. For fiscal year 2010, ASC reported total expenses (including grants to the Appraisal Foundation) of approximately $3.6 million. Some 20 years after the passage of Title XI, questions remain about oversight of the appraisal industry and the quality of appraisals. Although the federal financial institutions regulators have had guidance since the 1990s to help ensure the independence of appraisers, during the mid-2000s some appraisers reported that loan officers and mortgage brokers pressured them to overvalue properties to help secure mortgage approvals. An investigation into allegations about a major lender's role in pressuring appraisers led to questions about what the enterprises (Fannie Mae and Freddie Mac), which had purchased many of the lender's mortgages, had done to ensure that the appraisals for the mortgages met the enterprises' requirements. A key outcome of that investigation was the enterprises' adoption of the Home Valuation Code of Conduct (HVCC), which set forth appraiser independence requirements for mortgages sold to the enterprises. Although the Dodd-Frank Act declared HVCC no longer in effect, it codified several of HVCC's provisions, and the enterprises have incorporated many of the other provisions into their requirements. As we reported in July 2011, appraiser independence requirements and other factors have increased the use of Appraisal Management Companies (AMC). Some appraisal industry participants are concerned that some AMCs may prioritize low costs and quick completion of assignments over appraiser competence, with negative consequences for appraisal quality. Moreover, according to the FBI, appraisal fraud—the deliberate overstatement or understatement of a home's appraised value—is an ongoing concern. Of the 817 mortgage fraud cases the FBI closed from the fourth quarter of fiscal year 2010 through the third quarter of fiscal year 2011, 92 involved appraisal fraud. ASC has been performing its monitoring role under Title XI, but several weaknesses have potentially limited its effectiveness.
In particular, ASC has not fully developed appropriate policies and procedures for monitoring state appraiser regulatory agencies, the federal financial institutions regulators, and the Appraisal Foundation. As part of its monitoring role, ASC also maintains a national registry of appraisers, which includes data on state disciplinary actions. ASC has improved its reviews of state compliance with Title XI, but its enforcement tools and procedures for reporting compliance levels have been limited. ASC has detailed policies and procedures for monitoring state appraiser regulatory programs and has issued 10 policy statements covering different aspects of states' implementation of Title XI requirements. The policy statements cover topics including submission of data to the national registry, license reciprocity (enabling an appraiser certified or licensed in one state to perform appraisals in other states), and programs for enforcing appraiser qualifications and standards. For example, Statement 6 provides that license reciprocity agreements should contain certain characteristics, such as recognizing and accepting successfully completed continuing education courses taken in the appraiser's home state. Statement 10 sets forth guidelines for enforcing Appraiser Qualifications Board criteria for appraiser certification and complaint resolution. The policy statements are designed to assist states in continuing to develop and maintain appropriate organizational and regulatory structures for certifying, licensing, and supervising real estate appraisers. These statements reflect the general framework that ASC uses to review a state's program for compliance with Title XI. ASC staff told us that they had initiated actions to update the policy statements to reflect Appraisal Standards Board changes to USPAP, modifications to Appraiser Qualifications Board criteria, emerging issues identified through state compliance reviews, and provisions in the Dodd-Frank Act. Apart from the policy statements, however, ASC has functioned without regulations and enforcement tools that could be useful in promoting state compliance with Title XI. Prior to the Dodd-Frank Act, Title XI did not give ASC rulemaking authority and provided it with only one enforcement option. ASC's policy statements on specific elements of Title XI take the form of policies rather than regulations, which may limit ASC's leverage over states that are not in compliance. As discussed later in this report, the Dodd-Frank Act provides ASC with limited rulemaking authority. Prior to the Dodd-Frank Act, the only enforcement action ASC could take under Title XI was to "derecognize" a state's appraiser regulatory program, which would prohibit all licensed or certified appraisers from that state from performing appraisals in conjunction with federally related transactions. ASC has never derecognized a state, and ASC officials told us that using this sanction would have a devastating effect on the real estate markets and financial institutions within the state. While ASC until recently had limited enforcement tools, it has had a number of tools to encourage state programs to comply with the policy statements and thus with Title XI requirements (see table 1). ASC's primary tools for monitoring the states are routine and follow-up compliance reviews, which are performed on site by ASC's four Policy Managers.
These reviews are designed to encourage adherence to Title XI requirements by identifying any instances of noncompliance or "areas of concern" and recommending corrective actions. ASC conveys its findings and recommendations to states through written reports. Examples of areas covered by the reviews include timeliness in resolving complaints about appraiser misconduct or wrongdoing; the degree to which education courses are consistent with Appraiser Qualifications Board criteria; the adequacy of state statutes and regulations on certifying and licensing appraisers; the timeliness and completeness of data submissions to the national registry and remittance of national registry fees; and validation of documentation supporting appraiser education and experience. ASC supplements these reviews with "contact visits" conducted on an as-needed basis and with off-site monitoring performed continuously. The Dodd-Frank Act contains 14 provisions that give ASC a number of new responsibilities and authorities. We identified 27 tasks associated with these provisions, ranging from complex undertakings to more straightforward administrative actions. Some of the more complex tasks include establishing and maintaining a national appraisal complaint hotline, making grants to state appraiser regulatory agencies, and implementing new rulemaking authority and enforcement tools. The act includes several other tasks, such as encouraging states to accept appraisal courses approved by the Appraiser Qualifications Board and to establish policies for issuing reciprocal licenses or certifications to qualified appraisers from other states. As of October 2011, ASC had completed several tasks that required no rulemaking or creation of new programs and was in various stages of progress on the others. Appendix IV provides a summary of all 27 tasks and their status as of October 2011. The Dodd-Frank Act requires ASC to determine whether a national hotline exists that receives complaints of noncompliance with appraisal independence standards and USPAP, including complaints from appraisers, individuals, or other entities concerning the improper influencing or attempted improper influencing of appraisers or the appraisal process. ASC completed this task in January 2011, within the statutory deadline, and reported that no such hotline existed. The Dodd-Frank Act also requires ASC to establish and operate such a national hotline, including a toll-free telephone number and an e-mail address, if it determined that one did not already exist. Additionally, the act requires ASC to refer hotline complaints to appropriate governmental bodies for further action. ASC has not fully addressed this requirement but has researched how other agencies operate hotlines and make complaint referrals. ASC officials told us that the hotline would require significant staff and funds and that they were exploring options for implementing it, including hiring a contractor. Appraisal industry stakeholders we spoke with identified a number of potential challenges in establishing and operating a hotline. They noted that creating and maintaining a hotline could be costly because it will likely require investments in staff and information technology to fully ensure that calls are properly received, screened, tracked, and referred to appropriate regulatory agencies. Stakeholders indicated that screening calls would be a critical and challenging task because frivolous complaints could overwhelm the system and identifying valid complaints would require knowledge of USPAP.
Some stakeholders we spoke with expressed concern about consumers using the hotline simply to report disagreement with an appraiser's valuation rather than to report USPAP violations, concerns about appraiser independence, or potential fraud. Some appraisers said that frivolous consumer complaints could hurt an appraiser's ability to get future appraisal assignments, while federal financial regulatory officials said that frivolous complaints from appraisers against lenders could lead to costly and time-consuming investigations. Additionally, industry stakeholders noted that the hotline would have value only if valid complaints were followed up and resolved, but they pointed out that some states lack the resources to handle their existing volume of complaints. Further, stakeholders said that deciding which regulatory entities should receive complaint referrals could be difficult in some cases and that differing state requirements for complaints (such as forms, procedures, and standards) could complicate the referral process. Nonetheless, appraisal industry stakeholders told us they believed that the hotline would offer several benefits. These included giving appraisers a central place to report when they feel they are being pressured, providing a conduit to forward complaints to appropriate entities, promoting the development of more uniform complaint and complaint follow-up procedures, and providing ASC with information that could be useful for its state and appraiser enforcement efforts. Among the state appraiser regulatory agencies we surveyed, views on establishing a hotline varied. For example, 13 of the 50 states responded that the hotline would improve their ability to regulate the appraisal industry in their state, while 9 viewed it as a hindrance. Of the remaining 28 respondents, 13 thought it would neither help nor hinder, 12 did not know, 2 commented on the potential for frivolous complaints, and 1 did not respond. Additionally, 25 of the 50 states responded that the establishment of a hotline would increase the number of complaints they received. The Dodd-Frank Act requires ASC to make grants to state appraiser regulatory agencies to support these agencies' compliance with Title XI, including processing and investigating complaints, enforcement activities, and submission of data to the national registry. As previously noted, timely investigation and resolution of complaints have been a persistent problem for a number of states. Most of the state appraiser regulatory agencies we surveyed expressed interest in applying for ASC grants once the program is implemented. Specifically, 34 of the 50 states responding to our survey indicated they would likely apply for a grant, while 8 said they were unlikely to do so, and 3 said they were neither likely nor unlikely to do so. States cited activities related to enforcement and complaints—such as training for prosecutors and investigation of complaints—as the most likely potential uses of grant funds. Other potential uses cited by states included technological improvements for submitting data to the national registry and hiring appraiser licensing staff. While generally supportive of the grant program, appraisal industry stakeholders and ASC officials we spoke with noted several potential hurdles. Several stakeholders raised concerns about whether ASC had adequate resources to fund grants or sufficient expertise in grant administration and oversight.
For example, officials from one appraisal industry group noted that ASC’s grant resources could be spread thin if numerous states apply and that states may not find small grants to be worthwhile. ASC officials said they were unsure whether a planned increase in the national registry fee—from $25 to $40 per appraiser credential, effective January 2012—would be adequate to fund the grants and oversee them, especially in light of recent declines in the number of appraisers. They also indicated that they would likely need to hire a grants specialist and an accountant to properly administer the grant program. Additionally, appraisal industry stakeholders cited challenges that ASC could face in designing the grant program and the decisions it will need to make. Some noted the challenge of designing grant eligibility and award criteria that (1) do not reward states that have weak appraiser regulatory programs because they use appraisal-related fee revenues (from state appraiser licensing and exam fees, for example) for purposes other than appraiser oversight and (2) will not create incentives for states to use less of their own resources for regulation of appraisers. They noted that some states direct (or “sweep”) appraisal-related revenues into the state’s general fund, which, in some cases, may contribute to underfunding of the state’s appraiser regulatory agency. Twenty-six of the 50 state agencies that responded to our survey reported that their state government had the authority to sweep revenues collected by the agency into the state’s general fund, and 19 of these 26 indicated that their state had exercised this authority. In addition, stakeholders had a range of views on what the grant award criteria should include. For example, some suggested flexible grants based on the number of complaints or the number of appraisers in a state. However, others, including an ASC board member, said that the grants should target specific, well-defined initiatives to help ensure that funds are used appropriately. The board member pointed to state investigator training funded through ASC grants to the Appraisal Foundation as an example of such an initiative. States responding to our survey identified other possible funding criteria, including the extent to which a state had established appropriate performance benchmarks and the state’s past efforts to address compliance deficiencies. The Dodd-Frank Act also gives ASC the authority to prescribe regulations in four areas: temporary practice, the national registry, information sharing, and enforcement. For purposes of prescribing regulations, the act requires ASC to establish an advisory committee of industry participants, including appraisers, lenders, consumer advocates, real estate agents, and government agencies, and hold meetings as necessary to support the development of regulations. Although ASC already has policy statements covering the four areas, appraisal industry stakeholders and ASC officials indicated that regulations could be expected to strengthen ASC’s leverage over states to comply with Title XI. In addition, ASC officials noted that rulemaking authority would allow them to establish mandatory state reporting requirements and provide them additional administrative options to address state noncompliance. However, as of October 2011, ASC had not established an advisory committee or drafted any regulations. ASC officials told us that these tasks were still in the early planning stage.
In addition to the rulemaking authority, the Dodd-Frank Act expands ASC’s enforcement tools. As previously discussed, ASC’s only enforcement option prior to the act was derecognition of a state’s appraiser regulatory program. The act gives ASC the authority to remove a state-licensed or -certified appraiser or a registered AMC from the national registry on an interim basis, not to exceed 90 days, pending state agency action on licensing, certification, registration, and disciplinary proceedings. It also authorizes ASC to impose (unspecified) interim actions and suspensions against a state agency as an alternative to, or in advance of, the derecognition of the agency. Many appraisal industry stakeholders we spoke with supported ASC’s new authorities because they will allow ASC to take a more flexible, targeted approach to enforcement. ASC has yet to implement these authorities and will face a number of decisions and challenges in doing so. ASC officials told us they would use their new rulemaking authority to promulgate regulations for removing an appraiser from the national registry. As part of the rulemaking, ASC officials said they plan to develop criteria for circumstances that warrant removal as well as due process procedures. Several appraisers we spoke with stressed the importance of having a process that will allow them to defend themselves prior to a removal action. Officials from state bank regulatory agencies told us that ASC may face challenges in collecting sufficient documentary evidence to justify removing an appraiser from the national registry because evidence collection is resource intensive. ASC officials said that determining the interim actions and suspensions they would take against state agencies also would be done through rulemaking, which can be a time-consuming process. Officials from several state appraiser regulatory agencies said that for such actions to be effective, they should be directed to higher levels of state government because the agencies have limited authority to make resource decisions or implement major changes. For example, some state appraiser regulatory agencies report to other agencies that control budget and policy decisions. ASC confronts the challenge of implementing the tasks associated with the Dodd-Frank Act with limited resources. As previously noted, ASC has a small staff and, in recent years, its revenues have declined while its expenses have grown. ASC has 10 staff members, including an Executive Director, a Deputy Executive Director, a General Counsel, 4 Policy Managers, an Information Management Specialist, and 2 Administrative Officers. ASC’s revenues—which come exclusively from national registry fees—rose (in nominal dollars) from $2.2 million in fiscal year 2000 to a peak of $3.2 million in fiscal year 2007 but declined to $2.8 million in fiscal year 2010 (see fig. 3). According to ASC officials, revenue from registry fees allowed ASC to carry out its Title XI responsibilities and accumulate approximately $6 million in reserves by fiscal year 2008. However, since 2007, the number of appraiser credentials in the registry has declined each year, causing ASC’s revenues to shrink. Pursuant to a Dodd-Frank Act provision, ASC increased its registry fee from $25 to $40 (a 60 percent increase) effective January 2012, which will likely increase ASC’s revenues. However, because the number of appraisers has been declining—by about 9.4 percent from 2007 through 2010—the fee increase may not result in a proportional rise in revenue.
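The arithmetic behind this point is straightforward. The following is a minimal sketch, assuming roughly 110,000 active appraiser credentials in 2010 (consistent with the national registry count reported in appendix II) and a flat $40 fee per credential; the inputs are illustrative, not ASC's actual projection model.

```python
# Back-of-the-envelope sketch of ASC registry-fee revenue under two
# credential-count scenarios. Assumptions (ours, not ASC's model):
# ~110,000 active credentials in 2010 and a $40 fee per credential.

FEE_PER_CREDENTIAL = 40        # dollars, effective January 2012
CREDENTIALS_2010 = 110_000     # approximate national registry count
DECLINE_2011_2014 = 0.094      # mirrors the 2007-2010 decline

def projected_revenue(credentials: int, fee: int = FEE_PER_CREDENTIAL) -> int:
    """Annual registry revenue if every credential pays the fee once."""
    return credentials * fee

flat = projected_revenue(CREDENTIALS_2010)
declining = projected_revenue(round(CREDENTIALS_2010 * (1 - DECLINE_2011_2014)))

print(f"2014 revenue, flat credentials:    ${flat / 1e6:.1f} million")       # ~$4.4 million
print(f"2014 revenue, 9.4 percent decline: ${declining / 1e6:.1f} million")  # ~$4.0 million
```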
To illustrate, ASC’s revenue in 2014 would be about $4.4 million if the number of appraiser credentials stayed at 2010 levels but would be about $4.0 million if the number of appraiser credentials fell by another 9.4 percent from 2011 through 2014. Although the Dodd-Frank Act also authorized ASC to collect registry fees from AMCs, revenues from this source may not be available for several years because regulations for AMC registration must be developed and implemented first. As shown in figure 3, ASC’s total expenses in nominal dollars increased from $2.2 million in fiscal year 2000 to $3.6 million in fiscal year 2010. ASC’s total expenses include operating expenses and grants to the Appraisal Foundation, both of which rose over that period. Operating expenses grew from $1.3 million in fiscal year 2000 to $2.3 million in fiscal year 2010, primarily due to an increase in personnel and administrative costs for conducting more frequent state compliance reviews. Grants to the Appraisal Foundation grew from $916,000 in fiscal year 2000 to $1.3 million in fiscal year 2010, partly to fund state investigator training courses. In fiscal years 2009 and 2010, ASC’s expenses exceeded its revenues by $380,581 and $782,046, respectively. ASC used reserve funds to cover these amounts, reducing the reserve to $4.8 million by the end of fiscal year 2010. In light of these resource and implementation challenges, ASC officials began developing a strategic plan in May 2011 that encompasses both its existing activities and its new responsibilities and authorities under the Dodd-Frank Act. ASC also developed a more limited project plan that focuses specifically on tasks and milestones stemming from the act. According to an ASC board member, ASC did not previously have a strategic plan, due partly to stability in its functions over the years. The board member said that the new responsibilities contained in the Dodd-Frank Act prompted ASC to undertake a full strategic planning effort. ASC officials told us that they hoped to complete the plan by the end of 2011. ASC officials told us that their strategic plan would include a mission statement and goals but did not provide specific information about the expected contents of the plan. Although ASC is not subject to the GPRA Modernization Act of 2010 (GPRAMA)—which amends the Government Performance and Results Act of 1993 (GPRA)—ASC officials told us that their plan would include GPRAMA’s general components. GPRAMA provides federal agencies with an approach to focusing on results and improving government performance by, among other things, developing strategic plans. Examples of GPRAMA plan components include a comprehensive agency mission statement; general goals and objectives, including outcome-oriented goals; and a description of how the goals and objectives are to be achieved, including the processes and resources required. Our analysis of HMDA data found that approximately 71 percent of first-lien mortgages for single-family (one- to four-unit) homes originated from calendar years 2006 through 2009 were less than or equal to $250,000—the regulatory threshold at or below which appraisals are not required for federally related transactions. As shown in figure 4, the percentage varied little by origination year, ranging from a low of 69 percent in 2006 to a high of 73 percent in 2008. For all four years combined, 41 percent of the mortgages were $150,000 or less, and 30 percent were from $150,001 to $250,000.
For the same 4-year period, we found that about 22 percent of mortgages for residential multifamily structures were at or below the $250,000 threshold, as were about 98 percent of mortgages for manufactured housing. The proportion of mortgages originated from 2006 through 2009 that were below the threshold varied considerably by state. The percentage of first-lien mortgages for single-family homes that were less than or equal to $250,000 ranged from a low of 32 percent in California and Hawaii to a high of 95 percent in North Dakota. Two states, New Mexico and South Carolina, represented the median percentage of 82 percent (see fig. 5). The only places in which more than half of the mortgage originations were greater than $250,000 were California, the District of Columbia, and Hawaii. In states that experienced some of the steepest declines in house prices during the 4 years we examined, the proportion of annual mortgage originations that fell below the threshold increased substantially over the period. For example, the proportion rose 25 percentage points in Nevada, 17 percentage points in California, and 8 percentage points in both Arizona and Florida. Despite the sizable proportion of residential mortgages at or below $250,000, the threshold has had limited impact in recent years on the percentage of mortgages with an appraisal because mortgage lenders, investors, and insurers generally require appraisals for mortgages regardless of amount. Due to the sharp contraction of the private mortgage market that began in 2007, the large majority of mortgage originations are currently purchased or insured by the enterprises and HUD’s Federal Housing Administration (FHA), which require appraisals on most mortgages. In 2010, enterprise-backed mortgages accounted for more than 65 percent of the market, and FHA-insured mortgages accounted for about 20 percent. As we reported in July 2011, data for the two enterprises combined showed that they required appraisals for 85 percent of the mortgages they bought in 2010 and 94 percent of the mortgages they bought in 2009 that were underwritten using their automated underwriting systems. FHA requires appraisals for all of the home purchase mortgages and most of the refinance mortgages it insures. Furthermore, lender valuation policies may exceed investor or insurer requirements in some situations. For example, lender risk-management policies may require the lender to obtain an appraisal even when the enterprises do not, or the lender may obtain an appraisal to better ensure that the mortgage complies with requirements for sale to either of the enterprises. The $250,000 threshold could become more consequential if the roles of the enterprises and FHA are scaled back in the future. The administration and Congress are considering options that would diminish the federal role in mortgage finance and help transition to a more privatized market by winding down the enterprises and reducing the size of FHA. If this were to occur, the proportion of mortgage originations not subject to the appraisal requirements of these entities could increase. If private investors and insurers were to impose less stringent appraisal requirements than the enterprises or FHA, more mortgages of $250,000 or less might not receive an appraisal. However, whether the private market will require appraisals for mortgages below the threshold is unclear at this time.
The perspectives of appraisal industry stakeholders we spoke with—including appraisers, lenders, and federal and state regulators—did not yield a consensus view on whether or how the $250,000 threshold or the $1 million threshold that applies to real estate-secured business loans should be revised. Although no stakeholders advocated higher thresholds, a number recommended lowering or eliminating them, while others thought no changes were necessary. In addition, some stakeholders suggested alternatives to fixed, national dollar thresholds. Appraiser industry groups, lending industry representatives, and some of the state regulators we contacted said that the appraisal exemption thresholds should be lower, in part to help manage the risk assumed by lending institutions. For example, 14 of the 50 state appraiser regulatory agencies that responded to our survey indicated that the $250,000 threshold should be lowered to either $50,000 or $100,000. Several of the parties we spoke with pointed out that the median sales price of homes in the United States is below $250,000, which exempts numerous mortgage transactions from regulatory appraisal requirements. An NCUA official noted that, in large numbers, smaller home mortgages or business loans can pose the same risks to lending institutions as larger ones, so smaller loans should not necessarily be exempt from appraisal requirements. Additionally, appraisal industry stakeholders indicated that “evaluations” that may be performed as an alternative to an appraisal may include methods that are less credible and reliable, such as AVMs. These stakeholders acknowledged that while appraisal requirements are currently driven by the enterprises and FHA, the roles of these entities could change. Additionally, while appraisals for residential mortgages are not intended to validate the purchase price of the property in question, some stakeholders believe that they serve a consumer protection function by providing objective information about the market value of a property that consumers can use in making buying decisions. One appraisal industry representative said this information can help homebuyers avoid immediately owing more on a property than the property is worth, a situation that can make resale or refinancing difficult or cost-prohibitive. The Dodd-Frank Act requires that any revisions to the $250,000 threshold take into account consumer protection considerations through the concurrence of CFPB. Other appraisal industry stakeholders, including some state appraiser and bank regulatory officials, felt that the appraisal thresholds should remain where they are. For example, 17 of the 50 state appraiser regulatory agencies that responded to our survey indicated that the $250,000 threshold should not be changed. A few of these stakeholders stated that lowering the threshold would potentially require more homebuyers to pay for appraisals, which are generally more expensive than other valuation methods. For example, according to mortgage industry participants, a typical appraisal can cost a consumer $300 to $450 on average, while a property valuation by an AVM can cost $5 to $25 (appraisal costs can vary considerably depending on the location and size of the property, among other factors; see GAO-11-653). One appraisal industry participant said that lower thresholds could impose appraisal requirements on more real estate-related transactions for which an appraisal is not necessary.
For example, he indicated that when the property in question is collateral for a loan that is much less than the probable value of the property, a cheaper and faster valuation method such as an AVM may be sufficient. An FDIC official said it was not clear that the exemption thresholds needed to be revised and noted that even for transactions below the thresholds, regulated financial institutions are expected to have a risk-based approach that determines when they will use an appraisal versus another method. Some appraisal industry stakeholders said that changes in real estate market conditions and variation in housing markets argued for thresholds tied to median property values at the state or regional level. For example, some of the respondents to our state survey noted that a national $250,000 threshold is largely irrelevant in some areas of the country. As previously shown in figure 5, in several states, over 90 percent of recent mortgages were $250,000 or less. Some stakeholders felt that the thresholds should not be based solely on the loan amount and should include other factors that affect credit risk, such as the borrower’s debt burden. The critical role of real estate appraisals in mortgage underwriting underscores the importance of effective regulation of the appraisal industry. Title XI of FIRREA created a complex regulatory structure that relies upon the actions of many state, federal, and private entities to help ensure the quality of appraisals and the qualifications of appraisers used in federally related transactions. ASC performs an important function within that structure by, among other things, monitoring the requirements and activities of some of the key entities—state appraiser regulatory agencies, the federal financial institutions regulators, and the Appraisal Foundation. Although ASC is carrying out its monitoring function, it has not developed appropriate policies and procedures for some of its activities, potentially limiting its effectiveness. First, ASC could improve how it assesses and reports on states’ overall compliance with Title XI. Specifically, developing and disclosing clear definitions of the compliance categories could help ensure consistent and transparent application of the categories and provide more useful information to Congress about states’ implementation of Title XI. Second, ASC could better delineate its role in monitoring the appraisal requirements of the federal financial institutions regulators and thereby strengthen accountability for this function. Third, ASC could enhance its policies for determining which Appraisal Foundation activities are eligible for grants to help ensure consistent funding decisions and improve the transparency of the grant process. Addressing these areas would also improve ASC’s compliance with federal internal control standards designed to promote the effectiveness and efficiency of agency operations. Provisions in the Dodd-Frank Act will help ASC carry out its Title XI monitoring functions but will also create challenges that will require effective long-term planning. The limited rulemaking and enhanced enforcement authorities the act provides to ASC address prior weaknesses in its ability to promote states’ compliance with Title XI.
Implementing these authorities will involve significant follow-on steps, including drafting regulations and developing criteria and processes to remove problem appraisers from the national registry. Other tasks stemming from the Dodd-Frank Act, such as establishing an appraiser hotline and a state grant program, require resources and involve difficult decisions. ASC is facing these tasks at a time when its costs have been increasing, and its revenues from national registry fees have fallen because of a decline in the number of appraisers. To help address these challenges, ASC has for the first time undertaken a strategic planning process. Although this process was not far enough along for us to examine the details of ASC’s plan, setting goals and identifying processes and resources necessary to achieve them could help ASC align its new responsibilities with its mission and aid in resource allocation decisions. To help ensure effective implementation of ASC’s Title XI and Dodd-Frank Act responsibilities and improve compliance with federal internal control standards, we recommend that the Chairman of ASC direct the ASC board and staff to take the following three actions:
1. clarify the definitions used to categorize states’ overall compliance with Title XI and include them in ASC’s compliance review and policy and procedures manuals, compliance review reports to states, and annual reports to Congress;
2. develop specific policies and procedures for monitoring the appraisal requirements of the federal financial institutions regulators and include them in ASC’s policy and procedures manual; and
3. develop specific criteria for assessing whether the grant activities of the Appraisal Foundation are Title XI-related and include these criteria in ASC’s policy and procedures manual.
We provided a draft of this report to ASC, CFPB, FDIC, the Federal Reserve, FHFA, HUD, NCUA, and OCC for their review and comment. We received written comments from the Chairman, ASC; the Assistant Director for Mortgage Markets, CFPB; the Executive Director, NCUA; and the Acting Comptroller of the Currency, which are reprinted in appendixes V through VIII. We also received technical comments from FDIC, the Federal Reserve, and OCC, which we incorporated where appropriate. FHFA and HUD did not provide comments on the draft report. In their written comments, ASC, NCUA, and OCC agreed with our recommendations. ASC noted that it had already taken preliminary actions to address our recommendations and would consider the report’s findings as it continues to implement its new authority under the Dodd-Frank Act. OCC also acknowledged the challenges ASC faces in implementing its new responsibilities and authority under the act. CFPB neither agreed nor disagreed with our recommendations but said that the report provided a comprehensive analysis of ASC’s role and highlighted resource and operating constraints that may challenge ASC’s ability to implement its new duties under the Dodd-Frank Act. CFPB also noted that if federal regulators contemplate revising the $250,000 appraisal exemption threshold, CFPB would evaluate whether the proposed change would provide reasonable protection for homebuyers. Additionally, CFPB indicated that it hoped to designate an ASC board member in the near future and that, in the meantime, CFPB serves on the ASC board in an advisory capacity.
We are sending copies of this report to the appropriate congressional committees, the Chairman of ASC, the Chairman of FFIEC, the Chairman of FDIC, the Chairman of the Federal Reserve, the Acting Director of FHFA, the Secretary of Housing and Urban Development, the Chairman of NCUA, the Acting Comptroller of the Currency, the Director of the Bureau of Consumer Financial Protection, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) requires GAO to examine the Appraisal Subcommittee’s (ASC) ability to carry out its functions, as well as related issues, including regulatory exemptions to appraisal requirements, state disciplinary actions against appraisers, and the extent to which a national appraisal repository would benefit ASC. Our objectives were to examine (1) how ASC is performing its functions under Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) that existed prior to the passage of the Dodd-Frank Act, (2) ASC’s plans and actions to implement provisions in the Dodd-Frank Act, and (3) analysis and stakeholder views on existing dollar-based exemptions to appraisal requirements for federally related transactions. For the first objective and for information that appears in appendix II, we also examined the number of state-licensed and -certified appraisers, as of December 31, 2010, and the number of disciplinary actions that states took against appraisers from 2001 through 2010. Finally, for information that appears in appendix III, we examined the views of appraisal industry stakeholders on the potential benefits and challenges of a national appraisal repository for ASC. To determine how ASC is performing its Title XI functions that existed prior to the passage of the Dodd-Frank Act, we reviewed Title XI of FIRREA and its legislative history. We reviewed ASC’s policies and procedures, including its rules of operation, policy and procedures manual, policy statements, compliance review manual, bulletins, and notices. We consulted GAO’s Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool to assess ASC’s policies and procedures. We reviewed a wide range of ASC reports and records relating to each of ASC’s functions. With respect to ASC’s monitoring of states, we reviewed reports on ASC’s compliance reviews of states from 2007 through 2010, state response letters to compliance reviews, and summary statistics in ASC’s annual reports to Congress on the results of compliance reviews. We analyzed this information to determine how often ASC reviewed states, the type and frequency of noncompliance problems ASC identified, and the number of states in each of three overall compliance categories (“in substantial compliance,” “not in substantial compliance,” and “not in compliance”). We identified states that ASC reviewed at least twice from 2007 through 2010 to determine any changes in these states’ overall compliance levels over that period.
Regarding ASC’s monitoring of the federal financial institutions regulators, we reviewed ASC board minutes from 2003 through 2010, ASC’s annual reports to Congress for those years, and a 2007 internal review of ASC’s operations, which addressed this monitoring responsibility. With respect to ASC’s monitoring of the Appraisal Foundation, we reviewed foundation grant proposals, statements of work, and reimbursement requests from 2003 through 2010; ASC decisions on grant proposals and reimbursement requests for that period; agreed-upon procedures reviews of the foundation from 2005 through 2010 by an independent auditing firm; and miscellaneous correspondence between ASC and the foundation. We also reviewed ASC’s annual reports to Congress and board meeting minutes from 2003 through 2010 to obtain information about the foundation’s activities and ASC’s monitoring process. Regarding the national registry, we analyzed selected information from ASC’s national registry database, including the number of active appraiser credentials by type and state as of December 31, 2010, and the number and types of disciplinary actions against appraisers that states took and reported from calendar years 2001 through 2010. To assess the reliability of the registry data, we (1) reviewed information related to data elements, system operations, and controls; (2) performed electronic testing for obvious errors in accuracy and completeness; and (3) interviewed ASC officials knowledgeable about the data. We concluded that the data elements we used were sufficiently reliable for our purposes. In addition to our document review and data analysis, we interviewed current ASC staff, including the Executive Director, Deputy Executive Director, and General Counsel, as well as a former ASC General Counsel. We also interviewed ASC board members, which, at the time of our fieldwork, included officials from the Federal Deposit Insurance Corporation (FDIC), Board of Governors of the Federal Reserve System (Federal Reserve), Federal Housing Finance Agency (FHFA), Department of Housing and Urban Development (HUD), National Credit Union Administration (NCUA), Office of the Comptroller of the Currency (OCC), and Office of Thrift Supervision (OTS). We also interviewed officials from the Federal Financial Institutions Examination Council (FFIEC); representatives of the Appraisal Foundation; state appraisal regulatory officials; and a range of other appraisal industry participants and stakeholders, including trade groups that represent appraisers and lenders, officials from the government-sponsored enterprises Fannie Mae and Freddie Mac (the enterprises), and officials from the Federal Bureau of Investigation (FBI). Finally, to support this objective and our other reporting objectives, we conducted a Web-based survey of appraiser regulatory agencies from the 50 states, the District of Columbia, and the U.S. territories of Guam, Northern Mariana Islands, Puerto Rico, and the Virgin Islands. During May 2011, we conducted four telephone pretests of the survey instrument with officials from different state regulatory agencies. The pretest results were incorporated into the survey questions as warranted. We fielded the survey to officials from the 55 state and territorial regulatory agencies on June 7, 2011. The survey had a closing deadline of July 8, 2011. Fifty of the 55 agencies completed the survey; the remaining five either did not start or did not finish the survey. 
Among other things, the survey collected information on how the state and territorial agencies carry out their Title XI responsibilities (including submitting data to the national registry and following up on complaints against appraisers); agency funding and staffing issues; and state views on ASC, appraisal-related provisions in the Dodd-Frank Act, and the $250,000 appraisal exemption threshold. The results are contained in an e-supplement to this report that includes the questions asked and a summary of the answers provided. View the e-supplement at GAO-12-198SP. To describe ASC’s plans and actions to implement Dodd-Frank Act provisions, we reviewed pertinent sections of the act and analyzed ASC records and other documents that described specific tasks stemming from the act and ASC’s progress in addressing them. These records and documents included ASC board meeting minutes, ASC Dodd-Frank Act summaries and implementation timelines, and Federal Register notices. We also interviewed ASC board members and staff about progress and challenges in implementing these tasks. To gain perspective on ASC’s resources for implementing the Dodd-Frank Act provisions, we reviewed information from ASC’s annual reports and financial statements. More specifically, we examined the number and responsibilities of ASC’s staff positions and ASC’s revenues, expenses, and reserves from fiscal years 2001 through 2010. In addition, we estimated ASC’s fee revenues in 2014 under two scenarios. The first assumed no change in the number of appraiser credentials after 2010, and the second assumed a 9.4 percent drop after 2010 (mirroring the decline that occurred from 2007 through 2010). To examine ASC’s strategic planning efforts, we interviewed ASC board members and staff about their planning process and time frames. We also reviewed the GPRA Modernization Act (GPRAMA), which provides a framework for federal agencies’ strategic plans. To examine existing dollar-based appraisal exemption thresholds, we analyzed data from FFIEC’s Home Mortgage Disclosure Act (HMDA) database and obtained stakeholder opinions about the thresholds. HMDA requires lending institutions to collect and publicly disclose information about housing loans and applications for such loans, including the loan type and amount, property type, and borrower characteristics. These data are the most comprehensive source of information on mortgage lending and are estimated to capture about 75 to 85 percent of conventional mortgages (those without government insurance or guarantees) and 90 to 95 percent of mortgages insured by HUD’s Federal Housing Administration. Lenders with small total assets and lenders that do not have a home or branch office in a metropolitan statistical area do not have to report HMDA data. We analyzed HMDA data from 2006 through 2009 to determine the proportion of mortgages less than or equal to $250,000—the regulatory threshold at or below which appraisals are not required for federally related transactions. We focused primarily on purchase and refinance mortgages for single-family (one- to four-unit) site-built residences. At the national level and for each state, we calculated the proportion of these mortgages that were $250,000 or less by year of origination and for all 4 years combined. In addition, for each state, we calculated the change in the proportion of mortgages at or below the $250,000 threshold from 2006 through 2009.
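To make the threshold computation concrete, the following is a minimal sketch of how such a proportion analysis could be carried out on loan-level records. The file name and column names (loan_amount in thousands of dollars, as HMDA reports it, action_year, and state) are hypothetical placeholders rather than the actual layout of FFIEC's files, and the records are assumed to be pre-filtered to first-lien, single-family purchase and refinance originations.

```python
# Sketch of the proportion-at-or-below-threshold analysis described above,
# assuming a loan-level extract already filtered to first-lien, single-family
# purchase and refinance originations. Column names are hypothetical;
# loan_amount is in thousands of dollars, as HMDA reports it.
import pandas as pd

THRESHOLD_THOUSANDS = 250  # the $250,000 appraisal exemption threshold

df = pd.read_csv("hmda_originations_2006_2009.csv")  # hypothetical extract
df["at_or_below"] = df["loan_amount"] <= THRESHOLD_THOUSANDS

# National share at or below the threshold, by origination year and overall.
by_year = df.groupby("action_year")["at_or_below"].mean()
overall = df["at_or_below"].mean()

# State-level shares, and each state's 2006-to-2009 change in that share.
by_state_year = df.groupby(["state", "action_year"])["at_or_below"].mean().unstack()
state_change = by_state_year[2009] - by_state_year[2006]

print(by_year.round(3))                   # e.g., ~0.69 in 2006 up to ~0.73 in 2008
print(f"overall share: {overall:.2%}")    # ~71 percent for all 4 years combined
print(state_change.sort_values(ascending=False).head())
```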
Using FHFA’s purchase-only house price index, we also examined the extent to which states with large increases in the proportion of mortgages at or below the threshold also experienced large house price declines over the 4-year period. We analyzed mortgages for residential multifamily housing (five or more units) and manufactured housing separately and at the national level only. Specifically, we calculated the proportions of these mortgages that were at or below the $250,000 threshold, combining data for 2006 through 2009. Due to a lack of readily available data, we were not able to perform a similar analysis for real estate-secured business loans, which are exempt from appraisal requirements when the loan amount is $1 million or less. To assess the reliability of the HMDA data we used, we reviewed documentation on the process used to collect and ensure the reliability and integrity of the data; reviewed Federal Reserve and HUD analysis of the data’s market coverage; conducted reasonableness checks on data elements to identify any missing, erroneous, or outlying data; and spoke with officials from the Federal Reserve and the Bureau of Consumer Financial Protection (also known as the Consumer Financial Protection Bureau or CFPB) knowledgeable about the data. We concluded that the data we used were sufficiently reliable for our purposes. To provide perspective on the impact of the $250,000 threshold, we relied on information in a report we issued in July 2011, which included information on the proportion of residential mortgage originations from 2006 through 2010 that had appraisals. In that report, we indicated that the enterprises and the Federal Housing Administration (FHA) have commanded a large share of the mortgage market in recent years and that these entities require appraisals on the large majority of the mortgages they back, both above and below $250,000. To obtain stakeholder views on the $250,000 and $1 million thresholds, we interviewed ASC board members and staff; officials from the federal financial institutions regulators, FHFA, HUD, and CFPB; and representatives from the Appraisal Foundation and state appraiser regulatory agencies. We also interviewed other appraisal industry participants, including trade groups that represent appraisers and lenders and officials from the enterprises. Additionally, we drew on the results of our state survey, which included questions about the $250,000 threshold. To obtain stakeholder views about whether new means of data collection, such as the establishment of a national appraisal repository, might assist ASC in carrying out its responsibilities, we interviewed ASC board members and staff; officials from federal financial institutions regulators, CFPB, FBI, FHFA, HUD, and the enterprises; representatives of the Appraisal Foundation; and state appraiser regulatory officials. We also interviewed representatives of trade groups that represent appraisers and lenders, as well as individual mortgage lenders, appraisers, and appraisal industry researchers. We conducted this performance audit from November 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
ASC’s national registry of state-licensed and -certified appraisers contains information on four classes of appraiser credentials: certified general, certified residential, licensed, and transitionally licensed. As of December 31, 2010, the database reported nearly 110,000 active appraiser credentials. The number of appraiser credentials reported by state appraiser regulatory agencies ranged from 8 in the Northern Mariana Islands to 13,050 in California (see table 3). Nationwide, certified general and certified residential appraiser credentials accounted for about 84 percent of the total appraiser credentials. As previously noted, the national registry contains information on disciplinary actions taken and reported by state regulators. Table 4 summarizes this information for calendar years 2001 through 2010. The Dodd-Frank Act directed us to examine whether new means of data collection, such as the establishment of a national repository of appraisal information, would benefit ASC’s ability to perform its functions. We spoke with a range of appraisal industry stakeholders, including appraisers, lenders, regulators, and ASC officials, about what a national repository might contain, its potential benefits and challenges, and the extent to which it would help ASC carry out its responsibilities. The Dodd-Frank Act does not specify the information that a national appraisal repository would contain if one were to be created. Appraisal industry stakeholders we spoke with identified a number of possibilities, ranging from a compilation of scanned appraisal reports to a searchable database of appraisal information such as the location and characteristics of the subject property, name of the appraiser and mortgage lender, appraised value, and properties used as “comparables.” Some stakeholders indicated that a repository could potentially be linked to other data such as geographic information (e.g., digital maps), mortgage and borrower characteristics (e.g., status of mortgage payments), and housing market and economic statistics (e.g., local sales activity and rental and vacancy rates). Stakeholders said that multiple listing services and other proprietary databases contain some of this information. While the potential uses of a repository would depend on who had access to it, appraisal industry stakeholders identified a variety of benefits that a repository could provide. Some indicated that a repository could help regulators detect problematic appraisals and appraisers. For example, knowing the entities associated with every appraisal (e.g., appraiser, appraisal management company, and lender) could help regulators identify patterns of questionable behavior by individuals or firms. Additionally, the ability to view appraisals of the same property over time and appraisals for nearby properties could help regulators identify outliers (i.e., unusually high or low values) that may merit further investigation. Appraisers also could benefit from a repository by having access to additional data with which to perform their valuations. For example, one ASC board member said a repository that included the selling price of the comparables used in each appraisal would give appraisers access to sales information in states where such data are not publicly disclosed. In addition, industry stakeholders indicated that an appraisal repository could be integrated with mortgage portfolio information to help manage financial risk—for example, by assessing relationships between appraisal quality and loan performance.
The government-sponsored enterprises Fannie Mae and Freddie Mac (the enterprises) have undertaken a joint effort, under the direction of FHFA, that illustrates this concept. Known as the Uniform Mortgage Data Program (UMDP), this effort will collect consistent appraisal and loan data for all mortgages the enterprises purchase from lenders and will produce a proprietary dataset for use by the enterprises and FHFA. According to officials from the enterprises, UMDP will allow the enterprises to work with lenders to resolve any concerns regarding appraisal quality prior to purchasing mortgages. While a repository could provide some benefits, appraisal industry stakeholders also identified a number of challenges related to data collection and analysis, access rights, and resources. For example, they indicated that reporting of appraisal data would need to be more standardized for the repository to be useful. They also said questions exist about the extent to which appraisal reports are proprietary and could be included in a database that would potentially be widely accessible. Some stakeholders said analyzing data in a repository would not be straightforward because potential differences in the scope of work for each appraisal (e.g., an interior and exterior inspection versus an exterior inspection only) would complicate comparison of appraisal results. Additionally, some stakeholders expressed concerns about who would have access to the repository and whether broad access would encroach upon the privacy of appraisers. Further, a number of stakeholders and ASC officials said that a national repository could be very costly to create and maintain. They indicated that ASC was not the appropriate agency to develop a repository because it lacks the necessary resources. Some stakeholders also said that development of a repository would partially duplicate the enterprises’ efforts under UMDP. Appraisal industry stakeholders and ASC officials questioned how much a national repository would help ASC carry out its monitoring responsibilities. They said that the high-level nature of ASC’s monitoring responsibilities did not require detailed information on individual appraisals. For example, ASC officials said it was unclear how a repository would help them monitor states’ appraiser regulatory programs, a process that involves examining state appraiser licensing and certification requirements and assessing their compliance with Title XI. Other industry stakeholders said they were not sure how ASC could use a repository because ASC is not charged with assessing appraisal quality or proactively identifying individual appraisers or institutions responsible for problem appraisals. Additionally, one appraisal industry participant noted that analyzing information from a repository could require expertise and resources that ASC may not currently have. Subtitle F, Section 1473, of the Dodd-Frank Act includes amendments to Title XI of FIRREA. These amendments expand ASC’s responsibilities and authorities. We identified 27 tasks for ASC stemming from the Dodd-Frank Act provisions. A description and the status of each task as of October 2011 are presented in the table below. In addition to the individual named above, Steve Westley, Assistant Director; Alexandra Martin-Arseneau; Yola Lewis; John McGrail; Marc Molino; Carl Ramirez; Kelly Rubin; Jerome Sandau; Jennifer Schwartz; Andrew Stavisky; and Jocelyn Yin made key contributions to this report.
Real estate appraisals have come under increased scrutiny in the wake of the recent mortgage crisis. Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 created an oversight structure for appraisals and appraisers that involves state, federal, and private entities. This structure includes ASC, a federal agency responsible for monitoring these entities’ Title XI-related activities. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) expanded ASC’s Title XI role and required GAO to examine ASC’s activities and exemptions to federal appraisal requirements. This report discusses (1) how ASC is carrying out its original Title XI responsibilities, (2) ASC’s actions and plans to implement Dodd-Frank Act provisions, and (3) regulatory dollar thresholds for determining when an appraisal is required. To do this work, GAO reviewed ASC records and reports, surveyed state appraiser regulatory agencies, analyzed government mortgage data, and interviewed industry stakeholders. The Appraisal Subcommittee (ASC) has been performing its monitoring role under Title XI, but several weaknesses have potentially limited its effectiveness. For example, Title XI did not originally provide ASC rulemaking and enforcement tools that could be useful in promoting state compliance. In addition, ASC has not reported or clearly defined the criteria it uses to assess states’ overall compliance levels. Title XI charges ASC with monitoring the appraisal requirements of the federal financial institutions regulators, but ASC has not defined the scope of this function—for example, by developing policies and procedures—and its monitoring activities have been limited. ASC also lacks specific policies for determining whether activities of the Appraisal Foundation (a private nonprofit organization that sets criteria for appraisals and appraisers) that are funded by ASC grants are Title XI-related. Not having appropriate policies and procedures is inconsistent with federal internal control standards designed to promote effectiveness and efficiency and limits the accountability and transparency of ASC’s activities. ASC faces potential resource and planning challenges in implementing some Dodd-Frank Act provisions. ASC has only 10 staff and is funded by appraiser registration fees that totaled $2.8 million in fiscal year 2010. The Dodd-Frank Act expands ASC’s responsibilities and authorities. For example, the act requires ASC to establish a national appraiser complaint hotline and provide grants to state appraiser regulatory agencies, and it gives ASC limited rulemaking and enhanced enforcement authorities to help address prior weaknesses. As of October 2011, ASC had completed several implementation tasks that required no rulemaking or creation of new programs and was in various stages of progress on the others. The potentially resource-intensive nature of some remaining tasks will require careful planning. For example, operating a complaint hotline may require investments in information technology and the creation of screening and follow-up procedures. Also, implementing a grant program will require ASC to set aside funds, develop funding criteria, and oversee grantees. ASC is in the process of developing a strategic plan to help carry out these efforts with available resources. 
GAO found that more than 70 percent of residential mortgages made from 2006 through 2009 were $250,000 or less—the regulatory threshold at or below which appraisals are not required for transactions involving federally regulated lenders. In recent years, however, the threshold has had a limited impact on the proportion of mortgages with appraisals because mortgage investors and insurers such as Fannie Mae, Freddie Mac, and the Federal Housing Administration have generally required appraisals for mortgages both above and below the threshold. While these entities currently dominate the mortgage market, federal plans to scale them back could lead to a more privatized market, and whether this market would impose similar requirements is not known. None of the appraisal industry stakeholders GAO spoke with argued for increasing the threshold. Some stakeholders said the threshold should be lowered or eliminated, citing potential benefits to risk management and consumer protection. Others noted potential downsides to lowering the threshold, such as requiring more borrowers to pay appraisal fees and requiring appraisals on more transactions for which cheaper and quicker valuation methods may be sufficient. To help ensure effective implementation of ASC’s original Title XI and additional Dodd-Frank Act responsibilities, ASC should clarify and report the criteria it uses to assess states’ overall compliance with Title XI and develop specific policies and procedures for its other monitoring functions. GAO provided a draft of this report to ASC and seven other agencies. ASC and two other agencies agreed with the report’s recommendations. One agency did not comment on the recommendations, and the others did not provide written comments.
The history of the C-27J is complex and involves several government agencies and private contractors. To meet the Army’s combat zone airlift supply mission, in 2007 a joint Army and Air Force program office awarded a contract for C-27Js to a company called Global Military Aircraft Systems—a partnership between L-3 Communications and the manufacturer, an Italian company called Alenia Aermacchi (a Finmeccanica company). L-3 Communications, the lead partner, installed U.S. military-specific equipment on the aircraft manufactured by Alenia Aermacchi. In addition, a separate division of L-3 Communications provided full logistics support to the Army and Air Force under the Global Military Aircraft Systems contract. The Air Force and Army began purchasing these aircraft without fully developing a logistics plan that established how to support the aircraft. Five years later, in 2012, the Air Force canceled the program, citing budget constraints and its determination that Air Force C-130s could provide nearly all of the Army’s desired capability. The Air Force had not finished its logistics plan at the time. When the program was canceled in 2012, the Department of Defense (DOD) had purchased 21 C-27Js—13 by the Army and 8 by the Air Force. By mid-2013, the 13 aircraft that the Air Force fielded had been sent to AMARG (colloquially referred to as the “boneyard”) for preservation, and 8 aircraft were still in production. The Air Force announced that these 21 aircraft would be made available to U.S. government agencies. In May 2012, the Coast Guard developed a business case, which estimated that receiving the 21 C-27J aircraft would save it $826 million, in fiscal year 2012 dollars, without changing planned performance targets. In August 2013, after USASOC also expressed interest in the aircraft, the Coast Guard wrote a letter to Congress using its prior analysis to estimate that receiving 14 C-27Js, instead of all 21 aircraft, would still save $799 million ($837 million in fiscal year 2015 dollars) over 30 years. Figure 1 illustrates the major events in the history of the C-27J. In fall 2013, DOD transferred 7 C-27J aircraft to USASOC: 3 that had since completed production and 4 still in production. Then, in December 2013, as a part of the fiscal year 2014 National Defense Authorization Act, Congress directed DOD to transfer the remaining 14 C-27J aircraft to the Secretary of Homeland Security. These aircraft—13 from AMARG storage and 1 aircraft still owned by L-3 Communications—have been or will be transferred to the Coast Guard. Congress also required Homeland Security to transfer 7 HC-130Hs to the Air Force after certain modifications; the Air Force was required to transfer these aircraft to the Secretary of Agriculture for use by the U.S. Forest Service. These HC-130H aircraft are to be supplied by the Coast Guard. Figure 2 illustrates the transfer of aircraft among these agencies. The Coast Guard’s fixed-wing aviation fleet comprises several assets with different capabilities. Table 1 illustrates the capability differences between the Coast Guard’s long-range (HC-130H and HC-130J) and medium-range (HC-144 and C-27J) airframes in terms of payload, range, endurance, and speed. For each asset, table 1 also includes the number of annual planned flight hours, the designed service life, and cost per flight hour. The HC-144 and C-27J have a lower cost per flight hour than the HC-130.
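As an aside on the savings figures quoted above, the relationship between the $799 million and $837 million estimates can be checked with simple compounding arithmetic. The sketch below assumes (our assumption, not the Coast Guard's stated basis) that the $799 million is expressed in fiscal year 2012 dollars, like the earlier $826 million estimate, and backs out the implied annual price escalation between the two figures.

```python
# Hedged consistency check on the savings estimates quoted above.
# Assumption (ours): the $799 million figure is in fiscal year 2012 dollars,
# so the $837 million fiscal year 2015 figure reflects three years of
# compound price escalation.
SAVINGS_FY2012_DOLLARS = 799e6
SAVINGS_FY2015_DOLLARS = 837e6
YEARS = 2015 - 2012

total_escalation = SAVINGS_FY2015_DOLLARS / SAVINGS_FY2012_DOLLARS - 1
annual_rate = (1 + total_escalation) ** (1 / YEARS) - 1

print(f"total escalation over {YEARS} years: {total_escalation:.1%}")  # ~4.8%
print(f"implied annual escalation rate:      {annual_rate:.2%}")       # ~1.56%
```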
Each aircraft is planned to have the same surveillance and communication capabilities, and, according to program officials, the extent to which the endurance, speed, range, and other attributes increase performance largely depends upon the mission the aircraft is performing. For example, all of these aircraft can conduct maritime security and search and rescue missions in accordance with Coast Guard needs; however, the HC-130J, with greater payload and endurance, is better suited for responding to humanitarian disasters or mass casualty incidents either domestically or overseas. The C-27J is larger than the HC-144, but generally fits between the HC-144 and the HC-130J in terms of the capability of the airframe. In 2005, the Coast Guard developed a mission needs statement that factored in new requirements for its fixed-wing assets following the September 11, 2001, terrorist attacks. In particular, this document emphasized the importance of persistent surveillance in accomplishing Coast Guard missions—generated by flying more hours with surveillance-capable aircraft. Based on this analysis, the Coast Guard determined that it needed 52,400 fixed-wing flight hours per year to meet its missions while moving toward a presence-based approach to enforcement, rather than its conventional response-based approach. For example, according to the mission needs statement, this presence-based approach will lead to operations that detect and interdict threats as far from the United States as possible. In 2007, based on the 2005 mission needs statement, the Coast Guard published a baseline for all of its major acquisition programs, which became a single program of record. The fixed-wing portion of this program of record included 22 HC-130Js and 36 HC-144s, which were planned to meet the annual goal of 52,400 flight hours. The receipt of 14 C-27Js represents a significant change, in terms of fleet composition, to the Coast Guard’s 2007 program of record. As of January 2015, the Coast Guard has transferred 2 of 14 C-27Js to its aircraft maintenance facility, after returning them to flyable status, and is in the process of developing a detailed plan for fielding all 14 aircraft by 2022. The Coast Guard, based on DHS direction, has restructured its HC-144 acquisition program to also encompass the transfer of the C-27Js. This combined acquisition, termed the Medium Range Surveillance Aircraft program, is considered a new, major program within DHS. The C-27J transfer process is not simple, as significant work remains to achieve three major milestones before the aircraft are fully operational: (1) induct the aircraft (prepare for use), (2) establish operational units (bases), and (3) add surveillance and advanced communication capabilities. In addition, complicating these efforts are areas of risk that need to be addressed before the Coast Guard can field fully operational C-27Js. These three risk areas are (1) purchasing spare parts, (2) accessing technical data, and (3) understanding the condition of the aircraft. Figure 4 illustrates the milestones and risk areas the Coast Guard must address before it can field a fully capable aircraft.
The Coast Guard’s 2012 business case estimated that it would cost about $600 million in acquisition costs to transform the 14 C-27Js into fully functioning Coast Guard assets, which includes purchasing sets of initial spare parts (estimated at $150 million), flight trainers/maintenance equipment/engineering costs (estimated at $150 million), and installing surveillance capabilities (estimated at $300 million). However, these costs are notional since the Coast Guard is in the process of developing a cost, schedule, and performance baseline for the aircraft as part of its Medium Range Surveillance Aircraft program. These costs are based on the Coast Guard’s experience with other fixed-wing aircraft and do not account for risks specific to the C-27J. The Coast Guard has awarded or plans to award several contracts to assist with these steps, in some cases partnering with USASOC. Coast Guard project officials recognize each of these risk areas and are confident they can work through them given the Coast Guard’s experience with foreign-manufactured aircraft, such as the HC-144. DHS oversees Coast Guard major acquisitions and in October 2014 issued an acquisition decision memorandum that outlined how the C-27J would be incorporated into the DHS acquisition review process. In this regard, DHS directed the Coast Guard to restructure its HC-144 program to accommodate the addition of the 14 C-27J aircraft. These aircraft, together, are now termed the Medium Range Surveillance Aircraft program, which is a major DHS acquisition program. DHS directed the Coast Guard to pursue the Medium Range Surveillance Aircraft program in two phases. The first phase is the acquisition of the HC-144, which is currently paused following the purchase of 18 out of 36 planned aircraft. The Coast Guard previously estimated that it would complete the purchase of 36 missionized HC-144s by 2025. DHS has instructed the Coast Guard to develop a plan to close out this phase. The second phase focuses on the acquisition of the C-27J aircraft and has two segments. Segment one is the induction and employment of the unmodified C-27J aircraft that can perform missions, but without surveillance capabilities. Segment two is modification of the 14 aircraft, which principally involves the addition of the surveillance and advanced communication capabilities and operationally testing the asset. The Coast Guard was directed to develop all acquisition documents for the second segment of the C-27J acquisition, including a life-cycle cost estimate and acquisition program baseline. DHS further directed the Coast Guard to operationally test a prototype of the C-27J with the new surveillance capabilities before it modifies all 14 aircraft. The acquisition decision memorandum also directed the Coast Guard to complete the following action items:
1. Provide an acquisition program baseline, life-cycle cost estimate, and logistics plan for the first segment of the C-27J phase of this program.
2. Include the funding requirements for the C-27J in the Coast Guard’s forthcoming 5-year funding plan (fiscal years 2016 to 2020).
3. Submit a detailed schedule that includes C-27J engineering and testing activities.
4. Present operational test strategies to DHS’s Director of Operational Test and Evaluation for the HC-144 and the C-27J.
5. Provide a revised execution plan including a summary of funds obligated and a spend plan for the fiscal year 2014 and 2015 C-27J funding.
6. Provide a business case analysis comparing Coast Guard operational and depot maintenance to contracted maintenance for the C-27J.
7. Provide a report on the process of adding a mission system to the C-27J.
DHS also provided the Coast Guard with the authority to continue to incorporate the C-27J aircraft into its fleet while it develops these areas of knowledge and provides these action items to DHS. Three major milestones need to be accomplished before the Coast Guard can field fully capable C-27Js:
1. Induct the aircraft—The Coast Guard plans to continue removing the aircraft from storage, establish a maintenance regimen, recondition the aircraft, and develop manuals and user guides, among other tasks. Currently, the Coast Guard is assessing the first two planes and establishing the key steps necessary to finish inducting these aircraft.
2. Establish operational units—The Coast Guard plans to establish the first C-27J operational unit (base) in California in fiscal year 2016 and a second unit at an as-yet undetermined location in fiscal year 2018. Training aircrew is among the key tasks completed as part of this milestone.
3. Add surveillance and advanced communication capabilities—The Coast Guard plans to convert the C-27J from a cargo aircraft to a multi-mission aircraft with both cargo and surveillance capabilities to fully meet Coast Guard mission requirements.
The Coast Guard is in the process of developing a baseline induction process for the C-27J; however, Coast Guard program officials stated that until they understand the condition of each aircraft, they cannot estimate how long it will take to induct each plane. The first part of induction entails removing the aircraft from the AMARG storage facility, which involves removing a protective compound, conducting system checks and basic maintenance, and successfully completing a flight test—among other steps. Figure 5 shows one of the Coast Guard's C-27Js in preserved status at AMARG. Once the first part of the induction process is complete, the aircraft is flown to the Coast Guard's aviation maintenance center, called the Aviation Logistics Center, where the Coast Guard has established a project office responsible for completing the induction process and fielding the C-27Js. Currently, this project office is assessing the first two planes and establishing the key steps necessary to fully induct the aircraft, such as incorporating the aircraft into the Coast Guard maintenance system, building up the Coast Guard's knowledge of the aircraft by conducting training and test flights, repairing physical damage (if any), replacing missing parts, and creating Coast Guard operational and maintenance documents and procedures. When completing these steps, complications can arise. For example, program officials stated that the C-27J uses a liquid-oxygen-based aircrew breathing system similar to the HC-130J's. All other Coast Guard fixed-wing aircraft use oxygen in gas form; thus, the Coast Guard has to write new policies and train users on how to work with this material on the C-27J. The Coast Guard expects it will take about 9 months to induct the first two aircraft and, therefore, plans to have them ready for operations by fiscal year 2016. According to Coast Guard officials, the amount of time required to induct subsequent C-27J aircraft should decrease after the Coast Guard develops logistics, maintenance, and training systems with the first two aircraft. In addition, in February 2015, the Coast Guard signed a sole-source contract with Alenia Aermacchi for on-the-ground troubleshooting support, which should also help to speed up the induction process.
Further, because a significant part of the induction process involves integrating the C-27J into the Coast Guard's maintenance system, the time needed to induct future planes should also decrease as maintenance and training procedures, among others, are developed and documented. Thus far, to augment and develop its in-house maintenance capabilities, the Coast Guard spent $3.2 million for 1 year of contractor logistics support under a pre-existing USASOC contract with Lockheed Martin set to expire in August 2015. The Coast Guard has decided to exercise an option for this contract for an additional year while it builds the capacity necessary to maintain the plane using Coast Guard personnel and procedures. The Coast Guard plans to establish the first C-27J operational unit in fiscal year 2016 in California with four fully inducted C-27Js; a second, as-yet undetermined location (likely on the East Coast) is tentatively scheduled to begin initial operations in fiscal year 2018. Coast Guard officials added that plans for additional bases, if necessary, are not yet finalized. According to Coast Guard aviation program managers, the C-27J fleet will be based where it is most needed to help the Coast Guard fulfill its drug and migrant interdiction, disaster response, and search and rescue missions, among the Coast Guard's other missions. To establish the first base, pilots, aircrew, and aircraft maintenance personnel all need training to effectively and safely operate and maintain the C-27J. The Coast Guard plans to train an initial cadre of approximately 20 pilots and 80 aircrew and maintenance personnel to stand up the base in California. In addition, the Coast Guard plans to build a $12 million maintenance training facility for the HC-144 and the C-27J. In May 2014, the Coast Guard signed a $434,000 sole-source contract with Alenia Aermacchi to train two Coast Guard pilots and two aircrew members in Pisa, Italy. These personnel completed training in September 2014. In addition, the Coast Guard is in the process of contracting for the use of a C-27J flight simulator. To meet its requirements, the Coast Guard has to convert the C-27J from a cargo aircraft to a multi-mission aircraft with both cargo and surveillance capabilities—a $300 million effort, according to initial Coast Guard estimates. These capabilities are enabled by a mission system that primarily consists of a surface-search radar, an electro-optical infrared camera, and advanced communication capabilities to process and distribute data gathered by these sensors. The process of adding a mission system is called missionization. The Coast Guard plans to use a mission system known as Minotaur—already in use by the Navy for aviation surveillance activities—for all of its medium- and long-range surveillance aircraft, including the C-27J. U.S. Customs and Border Protection also uses this system for its surveillance aircraft, potentially increasing communication and data sharing across DHS. The Coast Guard is in the early stages of replacing obsolete and poorly performing mission systems on its existing fixed-wing fleet of HC-130Js and HC-144s with the Minotaur system. In November 2014, the Coast Guard completed a preliminary design review for the HC-130J system and plans to begin installation of a prototype mission system this summer. Prototyping and testing a system for the HC-144 are dependent on the schedule for the HC-130J but are currently planned to be completed in fiscal year 2016.
The C-27J will be the last to have the mission system incorporated. To begin this effort, the Coast Guard entered into a $1 million agreement with the Navy's Naval Air Warfare Center in November 2014 to evaluate mission system options on the C-27J. The Coast Guard tentatively estimates that it will have a C-27J mission system prototype by fiscal year 2017 and, following testing, plans to install and integrate this equipment on additional aircraft beginning in fiscal year 2018. Notionally, all 14 C-27Js are to have mission systems incorporated by fiscal year 2022. The successful and cost-effective fielding of the C-27J is contingent on the Coast Guard's ability to address three risk areas, related to the following:
Purchasing spare parts—The Coast Guard had to develop its own list of spare parts for the aircraft, as existing lists are not available. Further, the Air Force and USASOC have had difficulties obtaining some spare parts.
Accessing technical data—The Coast Guard does not have full access to the technical data for the C-27J, which are required to maintain the aircraft over the long term and make modifications to the aircraft's structure—for example, to install sensors.
Understanding the condition of the aircraft—The condition of each of the Coast Guard's C-27Js is not fully known; for example, the Coast Guard found an undocumented dent on one aircraft taken out of AMARG, and its 14th plane—still owned by L-3 Communications—was not properly preserved.
We have identified these three areas as risks because they represent knowledge gaps for the Coast Guard and will likely require the largest amount of resources during the process of fielding 14 fully capable aircraft. For example, accessing technical data is key to installing surveillance capabilities—an effort planned to cost $300 million. In addition, these are areas in which the Air Force and USASOC have experienced difficulties while operating the C-27J. In combination, these risk areas could reduce the number of hours per year that the C-27J is fully capable of accomplishing its missions and add costs to these efforts. To mitigate some of these risks, the Coast Guard is formulating a partnership with USASOC to collaboratively maintain the C-27Js owned by both services. For example, USASOC purchased technical manuals, which it is sharing with the Coast Guard, while the Coast Guard is contracting for a field service representative to work with both fleets. In addition, the Coast Guard began participating in a user group with the other countries that operate the C-27J, including Italy and Greece, though this user base is limited since only 76 C-27Js have been sold by Alenia Aermacchi (including the 21 C-27Js that belong to the United States government). In general, Coast Guard officials have characterized these risks as "good problems to have" because they are receiving 14 aircraft without reimbursement and they see the C-27J as a valuable addition to the Coast Guard's fixed-wing fleet. Spare parts are essential to keeping aircraft operational, but USASOC and the Air Force have experienced a number of setbacks related to acquiring necessary parts, particularly from Alenia Aermacchi. While the Air Force was operating the C-27J, it encountered issues keeping its fleet in operable condition due to difficulties obtaining spare parts. For example, at the time the Air Force canceled the C-27J program, 11 of 14 planes were missing parts.
Further, to support the deployment of two C-27Js to Afghanistan in July 2011, the Air Force built a large pool of spare parts by purchasing $65 million worth of spares from L-3 Communications. Due to delivery delays from Alenia Aermacchi, parts were taken from a C-27J that was still in production at L-3 Communications. With these parts, the C-27Js were in operable condition 83 percent of the time from July 2011 to June 2012. However, creating such a large and expensive pool of spares, relative to the number of aircraft, is not a sustainable approach that the Coast Guard can apply to maintaining its planes. Further, the Air Force did not obtain access to key data such as spare parts demand information and ordering history and, therefore, could not provide these data to the Coast Guard. Since it began operating C-27Js in fall 2013, USASOC has made some progress purchasing spare parts, though purchasing directly from Alenia Aermacchi continues to present significant challenges due to parts pricing and delivery delays. Air Force and L-3 Communications officials estimate that Alenia Aermacchi controlled up to 90 percent of the spare parts for the C-27J. Through the efforts of its own logistics support contractor, however, as of November 2014, USASOC has had to order only 24 percent of the parts it needs directly from Alenia Aermacchi, with the remainder coming directly from the original parts manufacturers or U.S. government-approved suppliers. However, the 24 percent of parts that USASOC ordered from Alenia Aermacchi comprised 40 percent of USASOC's total spending on spare parts; thus, Alenia Aermacchi remains a significant stakeholder. Further, USASOC has had issues with pricing and delayed deliveries. Alenia Aermacchi has increased the price of parts on two USASOC purchases; for example, USASOC purchased a refueling valve that previously cost $10,998, but it now must pay $15,121 for the part—a 37 percent increase. In addition, USASOC officials have been frustrated by the length of time Alenia Aermacchi has taken to provide parts. For example, as of November 2014, USASOC had received 70 percent of the parts ordered from non-Alenia Aermacchi suppliers but only 3 percent of the parts ordered from Alenia Aermacchi. While Alenia Aermacchi officials recognize that there have been issues with spare parts in the past, they told us that the company has recently changed its logistics model and has the capability to fully support the Coast Guard's C-27Js. Further, according to USASOC contracting officials, difficulties with Italian export controls have slowed the spare parts and parts repair processes. Alenia Aermacchi is in the process of applying for Italian export control licenses for USASOC and the Coast Guard. Once approved, these licenses will reduce delays attributable solely to export issues—usually around 30 to 90 days—according to Alenia Aermacchi officials. The Coast Guard has already encountered some challenges with purchasing spare parts. For example, when removing aircraft from AMARG, it is standard procedure to replace the aircraft's filters. In doing so, the Coast Guard was able to purchase only half of the filters through U.S. government-approved suppliers. The remaining filters had to be purchased directly from Alenia Aermacchi because they were unavailable from other sources. To mitigate export control delays, Coast Guard officials said they used Alenia Aermacchi filters from USASOC's warehouse and paid to replenish USASOC's inventory.
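As a worked check of the price increase cited above, using only the figures stated:

\[
\frac{\$15{,}121 - \$10{,}998}{\$10{,}998} = \frac{\$4{,}123}{\$10{,}998} \approx 0.37 \quad \text{(the 37 percent increase)}
\]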
Developing an initial set of spares for the C-27J, estimated to cost $8 million per aircraft, is another significant area of risk. The Coast Guard received data from other countries' air forces that fly the C-27J and, using these data, has developed its own initial list of spares. The Coast Guard must now find suppliers for these parts and then determine what portion of these parts is economical to keep on hand in the Coast Guard's supply chain. Further, according to Coast Guard project officials, significant learning will occur as the Coast Guard inducts the C-27J and, once it starts flying the aircraft, generates its own data on the failure rate of the aircraft's parts. These data will allow the Coast Guard to fine-tune the list of spare parts before the aircraft are fielded. Lastly, the Coast Guard plans to fly its aircraft at a more aggressive operating pace—up to 1,000 hours per year compared to about 500 hours per year for the Air Force and USASOC—and in a maritime environment, which is generally more demanding on assets due to increased corrosion from salt water. Thus, its spare parts needs are likely to be much higher than those of USASOC and the Air Force. The Coast Guard does not yet have sufficient access to technical data to fully support, maintain, and operate the C-27J. Alenia Aermacchi is the sole owner of the full technical data associated with the C-27J aircraft. Coast Guard project officials said they have approached the company about acquiring a technical data licensing agreement. The Coast Guard plans to meet immediate technical data requirements by utilizing resources under the Alenia Aermacchi field service representative contract. The Coast Guard continues to explore various options for accessing this key information. These options can vary depending upon the three basic types of technical data for aircraft:
1. Flight and maintenance manuals–These manuals provide all of the information required to safely and effectively fly and maintain the aircraft—including detailed guidance on maintenance and flight procedures—and function as the foundational documents necessary for properly operating the aircraft. In 2014, the Air Force supplied the Coast Guard with two sets of flight and maintenance manuals. The first set, produced by L-3 Communications, translated the Alenia Aermacchi manuals into Air Force-specific language and also covered the modifications made to the plane by L-3 Communications. These manuals were updated through March 2013 but never fully completed. The second set, produced by Alenia Aermacchi, pertained only to the aircraft in its originally manufactured condition prior to modifications. This set continued to be updated by Alenia Aermacchi until the Air Force transferred the manuals to the Coast Guard in February 2014. USASOC has since purchased flight and maintenance manuals from Alenia Aermacchi, according to Coast Guard project officials, and provided the Coast Guard access to them. The Coast Guard is in the process of contracting for Coast Guard-specific instructions, based on these manuals, for how to conduct basic aircraft maintenance.
2. Depot level maintenance data–During periodic depot level maintenance, the Coast Guard conducts major airframe inspections and completes required repairs, which allows the service to accomplish its missions with fewer resources over the long run. These depot maintenance periods typically include removing large portions of the aircraft to address core corrosion and rebuild key parts, including engine work.
At the time the Air Force canceled its C-27J program, it was in the process of pursuing the data required for depot maintenance from L-3 Communications, the lead contractor. However, since Alenia Aermacchi owned the data, this process would have been complex and, according to Air Force officials, was never successfully resolved. The extent to which the Coast Guard can develop depot level maintenance procedures and conduct its own engineering activities on the C-27J with only the basic flight and maintenance manuals is unknown at present. The Coast Guard has significant experience conducting depot maintenance on aviation assets, which, according to project office officials, increases its ability to overcome knowledge gaps associated with the lack of technical data.
3. Design and manufacturing data–These data contain information such as the expected fatigue life—how long components last—of pieces of the airframe and information required to manufacture key parts. Access to design and manufacturing data is needed when modifications are required to the aircraft, such as to add capability, and manufacturing data are useful for competitively purchasing parts or determining the cause of parts failures. Alenia Aermacchi has sole ownership of and access to these data. According to project officials, the Coast Guard could purchase access to these data from Alenia Aermacchi, either on an as-needed basis or in bulk. Alternatively, Coast Guard officials added that they can learn about the aircraft through testing, reverse engineering, and/or experimentation. Coast Guard project officials stated that these options would require significant resources and that they will likely use a combination of these approaches.
Without access to technical data for the C-27J, the Coast Guard faces risks related to controlling costs, addressing maintenance issues, and installing surveillance capabilities. Technical data can enable the government to complete maintenance work in-house, as well as to competitively award acquisition and sustainment contracts. In July 2010, we reported that for service contracts pertaining to DOD weapons programs, the lack of access to proprietary technical data and a heavy reliance on specific contractors for expertise limit or preclude the possibility of competition. Further, in May 2011, we reported that access to technical data is needed to help control costs and maintain flexibility in the acquisition and sustainment of DOD weapon systems. In addition, the lack of technical data could ground the aircraft for longer periods than necessary. For example, Air Force program officials told us about a severe 2012 mishap with one of its C-27Js that grounded its fleet for several months while Alenia Aermacchi investigated the incident. As Alenia Aermacchi had sole access to the technical data, the Air Force was reliant on the company to conduct the investigation. After several months, Alenia Aermacchi determined that the issue was the result of improper manufacturing of an aircraft component at one of its suppliers' facilities. The Air Force then had to wait approximately 10 months while the supplier remade and delivered the parts. Alenia Aermacchi officials have expressed interest in serving as the sole maintenance provider for the Coast Guard's and USASOC's aircraft, as the company does for many of its international customers.
Alenia Aermacchi officials note that they are also accustomed to supporting agencies doing their own maintenance by providing field service representatives, engineering support, and technical publications. The process of installing surveillance capabilities on the C-27J will be shaped by the extent to which the Coast Guard can access design and manufacturing data for the C-27J. The first step in this process is purchasing the main sensor systems, a surface-search radar and an electro-optical infrared camera, and installing them on the aircraft. However, this task is risky and could reduce capability if the Coast Guard does not gain access to the C-27J's technical data. For example, on the HC-130J, the Coast Guard mounts the surface-search radar on the aircraft's fuselage. However, such a modification to an aircraft requires technical data to determine the impact on its structural integrity. Without access to the necessary data, the Coast Guard would be reliant on Alenia Aermacchi to perform the engineering required to mount the radar. However, the Coast Guard is looking into alternatives that would require access to less technical data. According to Coast Guard and USASOC officials, two possibilities for the C-27J that would require a limited amount of technical data include (1) mounting the electro-optical infrared camera on one of the aircraft's doors and (2) modifying the surface-search radar to fit on the nose of the aircraft—replacing the existing weather radar, which may no longer be necessary because the new radar would also perform weather-related tasks. While possible, such an approach could require performance trade-offs—such as limiting the coverage of the radar and the camera. An additional risk for the Coast Guard is addressing the physical condition of the 14 aircraft. USASOC has experienced a number of premature part failures and unexpected maintenance issues with its C-27Js that could also be an issue for the Coast Guard. During the 700 hours flown by the 6 operational USASOC aircraft between January 2014 and November 2014, the aircraft have had the following problems:
Fuel leaks were found in the wings of the aircraft, which are designed to hold fuel, as on many commercial aircraft.
The seams and joints of three of USASOC's seven aircraft were poorly sealed upon delivery, requiring significant repairs.
The landing gear on one aircraft extended during landing without pilot instruction due to a landing gear component deficiency, a safety issue that grounded the aircraft.
Wheel assemblies on multiple aircraft were improperly constructed.
A cracked bracket required an aircraft to be grounded for 58 days.
A crack was discovered in a structural piece surrounding the left wheel on four aircraft.
Problems were found with four different types of valves, including fuel and de-icing valves, on multiple aircraft.
Oxygen systems leaked due to manufacturing errors on multiple aircraft.
Some of these issues are related to major manufacturing problems and are consistent with findings of the Defense Contract Management Agency, which oversaw the C-27J manufacturing process on behalf of the Air Force. For example, in March 2013, the Defense Contract Management Agency issued a corrective action request directing L-3 Communications, as the lead U.S. partner, to correct poor practices at Alenia Aermacchi that, if left uncorrected, would seriously compromise the reliability and safety of the C-27J.
This corrective action request has been closed, but the extent to which these problems extend through the Coast Guard’s 14 C-27Js, built prior to these changes, is unknown. While USASOC experienced the problems noted above on brand new aircraft, the Coast Guard is receiving aircraft that the Air Force previously used. When removing the first two planes from storage at AMARG, the Coast Guard discovered numerous, though relatively minor, issues that delayed delivery of the planes to the Coast Guard’s Aviation Logistics Center by a few weeks. For example, the Coast Guard discovered a dent on the underside of one C-27J that was not properly documented and also found some corrosion, particularly with bolts on the wings of the aircraft, which it replaced on both aircraft. Coast Guard officials stated that the manufacturer may have installed the wrong bolts on the aircraft. Coast Guard and Air Force officials determined that the first two aircraft that they removed from AMARG are likely in the best condition. Two of the most heavily used aircraft destined for the Coast Guard supported the contingency operation in Afghanistan for 11 months. The Coast Guard will continue to assess the condition of the other 11 planes as they are removed from storage, which officials have identified as an area of concern. Apart from the 13 aircraft that have been stored at the AMARG, the 14th plane destined for the Coast Guard is also missing parts and has been stored outdoors since 2011 without being preserved by L-3 Communications (which still owns the aircraft). L-3 Communications officials told us that they did not properly maintain the aircraft’s engines and propellers because parts required to run the engines, necessary for proper maintenance, were used for other C-27Js and not replaced in a timely manner. However, L-3 Communications, at its expense, recently sent the engines and propellers to be serviced by the original manufacturer and these items are now properly stored. In October 2014, we observed the aircraft at L-3 Communications’ facility in Waco, Texas. The aircraft’s engines and propellers were not installed but were stored in a nearby hangar consistent with original equipment manufacturer direction, according to L-3 officials. However, the cockpit was missing several components related to communications and operations functions, and the body of the aircraft showed some corrosion—particularly under each wing. L-3 Communications is now in the process of replacing 11 key missing parts taken from the aircraft to support the Afghanistan deployment and other C-27Js. At the time of our visit, L-3 Communications officials were optimistic that the aircraft would be delivered to the Coast Guard in working condition by February 2015, pending the delivery of the missing parts. However, as of March 2015, Alenia Aermacchi had yet to deliver these parts to L-3 Communications. L-3 Communications is now planning to deliver the aircraft to the U.S. Government in June 2015, pending the delivery of parts expected by late March 2015. Given that the airplane was not stored in accordance with Air Force procedures and has been used for spare parts, there will likely be some maintenance issues that L-3 Communications will have to address before it can deliver the aircraft to the U.S. Government. The C-27J will improve the affordability of the Coast Guard’s fixed-wing fleet, but the current fleet of aircraft that the Coast Guard is pursuing is not optimal in terms of cost and flight hour capability. 
We estimate that the Coast Guard's current plan should save $795 million over the next 30 years, compared to the 2013 estimate of $837 million. However, the source of these savings has shifted. A significant portion of the savings now results from a drop in the number of flight hours the fleet will achieve due to a reduction in the planned quantity of aircraft. For example, the 2013 plan achieves the Coast Guard's stated goal of 52,400 flight hours per year, while the current plan achieves 43,200 flight hours per year—an 18 percent reduction. This reflects a shift from a fleet of 58 planes primarily composed of less-expensive HC-144s to a fleet of 54 planes with a higher proportion of larger, more expensive HC-130Js. A fleet that relies more heavily on HC-130Js has a higher average cost per flight hour. The Coast Guard is in the process of examining, in several stages, its mission needs, including whether the current flight hour goal is still sound, but the results will not be used to inform budgets prior to fiscal year 2019. In the meantime, DHS and the Coast Guard have paused the HC-144 acquisition program, but historically the Coast Guard has received C-130J aircraft without budgeting for them. The Coast Guard already owns 20 aircraft that are not yet operational, including 14 C-27Js and 6 HC-130Js, which are planned to be outfitted with surveillance capabilities in the coming years. If the Coast Guard continues to receive additional aircraft before the results of the study are known, options for optimizing its fleet mix may be limited. To determine the potential impact of the C-27J on the cost and flight hour capability of the Coast Guard's fixed-wing fleet, we compared three scenarios:
1. the 2007 program of record (without the C-27J), to which we applied updated assumptions and the data in the Coast Guard's business case;
2. the Coast Guard's C-27J business case as presented to Congress in 2013; and
3. the Coast Guard's current plan, to which we applied updated assumptions and the data in the Coast Guard's business case.
Table 2 shows the total planned number of aircraft in each fleet we compared, the total cost to fly the aircraft for the next 30 years, total flight hours over the next 30 years, and the total estimated savings of each fleet compared to the program of record. We found that the Coast Guard's current plan should save $795 million over the next 30 years compared to the program of record fleet. While the amount of savings is similar to the $837 million estimated by the Coast Guard in 2013, the source of these savings has shifted, as shown in figure 6. The Coast Guard's savings in the initial plan were largely due to acquisition cost savings. In the current plan, however, the savings are largely due to reduced operating expenses resulting from the Coast Guard's planned reduction in flight hours. In its August 2013 letter to Congress, the Coast Guard stated that receiving 14 C-27Js would save money without reducing planned flight hours below its goal of 52,400 hours per year, set forth in the Coast Guard's 2005 mission needs statement. To do this, the Coast Guard planned to replace three HC-130Js with three C-27Js, gaining 600 flight hours per year at a lower cost per flight hour, and to decommission its HC-130Hs sooner than originally planned. However, the Coast Guard has since changed its planned fleet composition and is now on a path to replace 14 HC-144s with 14 C-27Js and buy all 22 of the HC-130Js as planned in the program of record.
This change results in 9,200 fewer flight hours per year (an 18 percent reduction) once the currently planned fleet is fully operational. Also contributing to this reduction of flight hours is the current plan to purchase 4 fewer medium-range aircraft (HC-144s and C-27Js) and reduce the HC-144 flight hours from 1,200 to 1,000 hours per year—due primarily to the high cost of maintaining the aircraft while flying at the higher pace. Table 3 shows (1) the aircraft that comprise each fleet plan, (2) the planned annual flight hours once each fleet is built, and (3) the difference in flight hours, if any, based on the planned flight hours per year. We calculated this difference using the Coast Guard's goal of 52,400 annual flight hours as a baseline. The table also includes the actual quantity and flight hour performance of the fleet, as of 2014, as a basis for comparison. In all, the Coast Guard's current plan is still an improvement over the flight hours recorded in fiscal year 2014, when the Coast Guard flew 38 percent fewer hours compared to its stated needs. As reflected in the table, however, the current plan also would result in a flight hour shortfall compared to the program of record. In addition to the reduction in its planned flight hours, the Coast Guard also faces a shortfall in its surveillance capability. To fully meet its needs, the Coast Guard must fly 52,400 hours per year with assets capable of conducting surveillance missions with advanced communication capabilities, such as sharing data. The Coast Guard's 2005 mission needs statement called for the Coast Guard's fixed-wing fleet to be composed of assets with improved surveillance capabilities, which would allow the Coast Guard to become more proactive through increased presence and surveillance rather than responding to events as they occur. The Coast Guard has not been able to build up its flight hours as quickly as planned in 2007. The HC-144 and HC-130J, the two fixed-wing assets in the program of record planned to be outfitted with improved surveillance capabilities, were expected to conduct surveillance consistent with this goal. The C-27J, once missionized, is also planned to have improved surveillance capabilities. However, in 2014, the 16 HC-144s and the 5 HC-130Js that are currently missionized and operational flew only 16,381 hours, about 31 percent of the overall need. The remaining flight hours in 2014 were flown by HC-130Hs and other legacy aircraft that do not have surveillance capabilities consistent with the Coast Guard's needs. The result, in fiscal year 2014, was a 69 percent shortfall in these capabilities relative to the Coast Guard's 2005 mission needs statement. Further, the surveillance shortage in today's fixed-wing fleet is likely larger than 69 percent because the mission systems on the HC-144 and HC-130J are not yet fully effective. For example, the Navy had to use non-Coast Guard software to assess the capabilities of the HC-130J's radar after determining that the software the Coast Guard uses does not work well with the aircraft's sensors. Moreover, we found in June 2014 that the HC-144 did not meet key performance parameters related to surveillance during operational testing. Replacement of the current mission system, already underway, is planned to address the majority of deficiencies.
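The percentages above follow directly from the flight hour totals cited in this report. As a worked check of the arithmetic, using only figures stated above:

\[
\frac{52{,}400 - 43{,}200}{52{,}400} = \frac{9{,}200}{52{,}400} \approx 0.18 \quad \text{(the 18 percent flight hour reduction)}
\]

\[
\frac{16{,}381}{52{,}400} \approx 0.31, \qquad 1 - 0.31 = 0.69 \quad \text{(the 69 percent surveillance shortfall in fiscal year 2014)}
\]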
Once missionized, the HC-144, C-27J, and HC-130J will reduce the current surveillance shortage since these aircraft will comprise increasingly large proportions of the fleet over the next decade. While the fixed-wing fleet that the Coast Guard is currently pursuing should save $795 million over 30 years, it will have a higher average cost per hour of flight. The Coast Guard's long-term approach, as reflected in the current plan, is to replace HC-144s with C-27Js that cost approximately $1,000 more per flight hour. This is a shift from the plan as presented to Congress in 2013, which proposed replacing some of the HC-130Js, which are more expensive to operate, with C-27Js. Both plans propose replacing the HC-130H sooner than planned in the program of record, which will improve the overall cost per hour of flight. As table 4 illustrates, the current proposed mix of aircraft will cost $11,059 per hour of flight, which is greater than that of the program of record and of the fleet described to Congress in 2013. While the Coast Guard's current plan preserves much of the anticipated savings from receiving the 14 C-27Js and increases the number of flight hours compared to the fleet the Coast Guard is operating today, it results in fewer flight hours for the dollar. The Coast Guard is currently conducting a fleet-wide analysis, including surface, aviation, and information technology, intended to be a fundamental reassessment of the capabilities and mix of assets the Coast Guard needs to fulfill its missions. The Coast Guard is undertaking this effort consistent with direction from Congress. The Howard Coble Coast Guard and Maritime Transportation Act of 2014 directed the Commandant of the Coast Guard to submit an integrated major acquisition mission needs statement that, among other things, identifies current and projected capability gaps using mission hour targets. This mission needs statement is to be completed concurrent with the President's fiscal year 2019 budget submission to Congress. Specifically, the Coast Guard plans first to rewrite its mission needs statement and concept of operations by 2016. Then, it will use a complex model to develop the full fleet mix study, which will include a reassessment of the fixed-wing flying hour goals. Based on this, the Coast Guard plans to recommend a set of assets that best meets these needs in terms of capability and cost. The Coast Guard plans to complete the full study in time to inform the fiscal year 2019 budget, though specific dates for these events have not been set. The Coast Guard and DHS have undertaken several studies, starting in 2008, to reassess the mix of assets the Coast Guard needs. However, in 2011, we reported that it was unclear how DHS and the Coast Guard would reconcile and use these multiple studies to make trade-off decisions or changes to the program of record. To date, the Coast Guard has made no changes to its program of record based on these analyses. The upcoming mission needs statement and subsequent fleet mix analysis will be important to inform decisions about the mix of fixed-wing assets the Coast Guard needs and can afford. For example, our calculations and the Coast Guard's 2012 business case demonstrate that replacing HC-130Js with medium-range aircraft (such as the HC-144 and C-27J) adds flight hours and reduces costs. Specifically, the savings the Coast Guard presented to Congress in its 2013 letter were predicated on replacing three HC-130Js with three C-27Js.
According to the Coast Guard, this action would add 600 flight hours per year and save $322 million over the next 30 years. Further, the Coast Guard's analysis showed that replacing nine HC-130Js would add 1,800 flight hours per year and save nearly $1 billion. According to Coast Guard officials, the fleet mix analysis will examine these cost savings while also accounting for the level of performance provided by the HC-130J compared to the other fixed-wing assets. Because the results of the fleet mix study will not be available for several years, decisions that are made in the interim will not be informed by the Coast Guard's analysis. To illustrate, if this fleet mix analysis were to establish needed flight hours at a lower number than the current 52,400 goal, the Coast Guard could end up with excess capacity. Further, if the analysis were to demonstrate that the optimal fleet mix comprises more medium-range aircraft and fewer long-range aircraft, then the Coast Guard is currently on a path to end up with a more expensive fleet than necessary, and it would be too late to opt for a fleet with a greater number of flight hours for the dollar. Coast Guard budget and programming officials recognize the aviation fleet may change based on the flight hour goals in the new mission needs statement and the overall fleet mix analysis. They therefore have not included any additional fixed-wing asset purchases in the Coast Guard's five-year budget plan. For example, DHS and the Coast Guard have formally paused the HC-144 acquisition program at 18 aircraft for the time being. In addition, the Coast Guard already owns 20 aircraft, received since fiscal year 2009, composed of 14 C-27Js and 6 HC-130Js that are not yet fully operational. These aircraft are planned to be outfitted with surveillance capabilities in the coming years. In total, since 2000, the Coast Guard has received 12 HC-130Js, currently valued at approximately $100 million each, without including them in its budget requests. The Coast Guard's Major Systems Acquisition Manual provides that the Coast Guard must manage its portfolio of assets to ensure that public resources are wisely invested and that capital programming is an integrated process for managing a component's portfolio of capital assets to achieve its strategic goals and objectives at the lowest life-cycle cost and least risk. Continuing to receive these aircraft in the coming years, while the Coast Guard revisits its fixed-wing mission needs, will diminish the Coast Guard's flexibility to optimize its fleet. Further, the Coast Guard may end up with aircraft it ultimately does not need. The Coast Guard is in the process of revisiting its fixed-wing fleet needs while also addressing several unknowns regarding its newest asset, the C-27J. While the transfer of the C-27Js to the Coast Guard may save acquisition funds, the Coast Guard is still a long way from being able to operate these aircraft efficiently and effectively. Overcoming the issues we have highlighted is feasible, but it will take time and resources to ensure that the C-27J will be able to function as a Coast Guard medium-range surveillance asset, particularly in terms of adding surveillance capabilities and achieving 1,000 flight hours per year. If the Coast Guard uses the C-27J to replace some HC-144s, as is the current plan, the Coast Guard will fall short of its flight hour goals over the next 30 years, but if the C-27J replaces some HC-130Js, the Coast Guard can achieve more flight hours at a lower cost.
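The replacement figures cited above scale roughly linearly, which offers a simple consistency check on the Coast Guard's analysis. As a worked check, using only figures stated in this report:

\[
\frac{600\ \text{hours per year}}{3\ \text{aircraft}} = 200\ \text{hours per year per replacement}, \qquad 9 \times 200 = 1{,}800\ \text{hours per year}
\]

\[
9\ \text{replacements} \times \frac{\$322\ \text{million}}{3\ \text{replacements}} \approx \$966\ \text{million} \approx \text{nearly }\$1\ \text{billion over 30 years}
\]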
The Coast Guard has an opportunity to address these issues now, within the context of its ongoing effort to assess its overall fleet of fixed-wing assets. For example, the fleet mix analysis will aid the Coast Guard in determining the right mix of assets between the HC-130J—which the Coast Guard views as a highly capable aircraft—and the greater number of lower-cost flight hours provided by the HC-144 or similar aircraft. However, the study results are years away. In the meantime, although the Coast Guard has exercised prudence in pausing the HC-144 program, it may continue to receive HC-130Js before it knows that it needs these aircraft and before it has determined the capabilities of its C-27J fleet. As a result, if the Coast Guard continues to receive HC-130Js while it revisits its needs, the capability and cost of the Coast Guard's fixed-wing fleet run the risk of being dictated by the assets the Coast Guard already owns rather than what it needs. Until the fleet mix study is concluded, the Coast Guard does not know the quantities of each aircraft that optimally balance the capability and presence of its fixed-wing fleet. Because the Coast Guard already has HC-130J aircraft in the pipeline awaiting the addition of surveillance capabilities and sensors, any impact of halting the provision of these aircraft in the interim, prior to completion of the fleet mix study, would be mitigated. We recommend that the Secretary of Homeland Security and the Commandant of the Coast Guard inform Congress of the time frames and key milestones for completing the fleet mix study, including the specific date when the Coast Guard will publish its revised annual flight hour needs and when it plans to inform Congress of the corresponding changes to the composition of its fixed-wing fleet to meet these needs. We also recommend that the Commandant of the Coast Guard advise Congress to modify the provision of any additional HC-130Js, as appropriate, pending the findings of the fleet mix study. We provided a draft of this report to DHS for review and formal comment. In its comments, DHS concurred with our first recommendation but did not concur with our second recommendation. DHS's written comments are reprinted in appendix II. We also provided a full draft of this report to DOD and draft sections of this report to Alenia Aermacchi and L-3 Communications, which provided us with technical comments that we incorporated as appropriate. In its letter, DHS stated that it disagreed with our analysis of cost and flight hours because it contains updated assumptions that are not carried through the entire report. During our review, the Coast Guard agreed that changing these assumptions would provide a more accurate understanding of the Coast Guard's current fixed-wing fleet costs and flight hours. DHS and Coast Guard officials stated that they are not planning to conduct an analysis of the Coast Guard's current fixed-wing aircraft plan. To assess this plan accurately, as discussed in the objectives, scope, and methodology of this report, we changed two key assumptions from the Coast Guard's 2013 letter to Congress. First, the HC-144 is now planned to fly only 1,000 hours per year compared to the original plan of 1,200 hours per year. Second, the original analysis assumed all of the Coast Guard's aircraft have a 30-year service life. In reality, each aircraft type is projected to have a different service life: 40 years for the HC-144, 30 years for the HC-130J, and 25 years for the C-27J.
We applied these assumptions to all of our calculations and then compared our results with what the Coast Guard presented to Congress to determine what, if any, differences exist. We believe this analysis is necessary to understand changes the Coast Guard has made and how they compare to the total savings presented to Congress. Regarding the first recommendation, on informing Congress of the time frames and milestones for completing the fleet mix study, DHS concurred with our recommendation but did not provide specific time frames for doing so. Based upon a project schedule we received in fall 2014, DHS is currently working toward completing its full fleet mix analysis effort, including providing a revised statement of annual flight hour needs. The Coast Guard plans to complete its initial mission needs statement and concept of operations by 2016, but these documents will not identify the exact mix of assets the Coast Guard needs to meet its missions. Once these documents are complete, the Coast Guard will conduct further analysis to produce the fleet mix study. Based on the study, the Coast Guard plans to recommend a fleet of assets that best meets its needs and, according to officials, will take fiscal constraints into account. The time frame for this second effort is unclear, but officials told us that they plan for it to inform the fiscal year 2019 budget. We believe it is crucial for Congress and other stakeholders to understand when this information will be available so that key decisions can be made with accurate and up-to-date data. Further, Congress needs to know that the mix of the Coast Guard's fixed-wing fleet assets will likely change based upon the results of this study. DHS did not agree with our second recommendation, that the Commandant of the Coast Guard advise Congress to modify the provision of any additional HC-130Js pending the results of the fleet mix study. DHS stated that it would be inappropriate for the Coast Guard to provide additional guidance, beyond the President's budget, to the United States Congress on how to appropriate funds. Our recommendation, however, reflects the fact that Congress draws on many information sources beyond the President's budget when deciding how to appropriate funds, such as information provided through agency briefings and reports, input from congressional agencies, and other sources. The Coast Guard has initiated an assessment that it states will provide a definitive flight hour goal for its fixed-wing assets—and subsequently the number and type of aircraft to meet this need. Without knowing the outcome of that assessment, Congress risks providing aircraft that may be in excess of the Coast Guard's needs and that could result in an additional $1 billion in costs to the Coast Guard. In the meantime, as several C-130Js are already in the pipeline and 14 C-27Js have recently been received, the Coast Guard has prudently decided to pause the HC-144 program while it reassesses its needs. If receiving more C-130Js could complicate or even obviate the fleet mix analysis, now is the time to so advise Congress. DHS and the Coast Guard also provided technical comments that we incorporated into the report as appropriate. We are sending copies of this report to the Secretary of the Department of Homeland Security, the Commandant of the Coast Guard, and the Secretary of the Department of Defense. In addition, the report is available on our website at http://www.gao.gov.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to your offices. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report were to determine (1) the status of the transfer of the C-27Js from the Air Force to the Coast Guard, including cost and schedule estimates, plans for testing, and establishing a maintenance program, as well as any obstacles the Coast Guard faces to field the transferred aircraft, and (2) to what extent the acquisition will affect the overall cost and performance of the Coast Guard's fixed-wing aviation fleet. To determine the status of the transfer of the C-27Js from the Air Force to the Coast Guard, including cost and schedule estimates, plans for testing, and establishing a maintenance program, as well as any obstacles the Coast Guard faces to field the aircraft, we examined the Coast Guard's C-27J Implementation Plan as well as other key acquisition documents, including life-cycle cost estimates, acquisition program baselines, and logistics studies. To develop a list of major steps in the transfer process, we analyzed the Coast Guard's initial C-27J Implementation Plan and compared the steps in this plan to the Coast Guard's Major Systems Acquisition Manual as well as the most recent C-27J Acquisition Decision Memorandum to identify what needed to be done and when. To gain a better sense of the history of the aircraft and its past performance and issues, we reviewed program documents on costs and maintenance history from both the Air Force and the Army. We spoke to members of the Air Force's C-27J program office to identify how much knowledge the Air Force gained through its acquisition process and to gain an understanding of the successes and challenges it experienced. We also reviewed interagency contracting agreements the Coast Guard has with the Army and Navy as well as C-27J contract documents. We interviewed Coast Guard officials from the requirements and acquisitions directorates to identify challenges for the transfer and sustainment of the aircraft, as well as officials from the Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance Project regarding the acquisition and implementation of the mission system. We developed a list of risk areas based on Coast Guard documentation and what is needed to develop sufficient knowledge about the program. We interviewed U.S. Naval Air Systems Command officials on airworthiness standards as well as the mission system in development by the Navy. We visited the Air Force 309th Aerospace Maintenance and Regeneration Group (AMARG), located on Davis-Monthan Air Force Base, to view the 13 C-27Js in storage, interview Air Force officials in charge of the C-27Js on site, and collect flight and maintenance logs regarding past C-27J operation. We met with U.S. Army Special Operations Command officials at Fort Bragg, interviewed contracted logistics support personnel for the C-27J, toured the parts warehouse, and viewed some of the C-27Js on site.
We also interviewed program office officials and contractor representatives from the Coast Guard's C-27J Asset Project Office to gain a better sense of operational challenges and how they are being addressed, and we toured their respective facilities to discuss issues related to the fielding of the aircraft. We visited L-3 Communications in Waco, Texas, to discuss the progress of the 14th plane, view the plane in its current condition, and interview Defense Contract Management Agency officials. To assess the costs of operating and fully equipping the C-27J for Coast Guard missions compared to the program of record, we used the Coast Guard's May 2012 business case for information regarding all of the costs associated with acquiring and operating this asset for the next 30 years—2013 through 2042. This estimate, derived in May 2012 by the Survivability/Vulnerability Information Analysis Center at Wright-Patterson Air Force Base in Ohio, used the costs for the HC-144 and the HC-130J to estimate the costs of the C-27J using the relative weights of each aircraft. We also used the same perspective as this analysis, in that we looked at the costs to the Coast Guard over the next 30 years given the options for fleet composition. We assumed—similar to the 2012 business case—that the Coast Guard has to purchase the remaining HC-130Js with acquisition funds even though these aircraft have been added by Congress to the Coast Guard's budget in prior years. We changed three assumptions underlying the analysis to better reflect the Coast Guard's actual data:
1. Flight Hours: The business case assumed that the C-27J and HC-144 would fly 1,200 hours per year, but the Coast Guard plans to fly each aircraft for 1,000 hours per year. Our analysis used the 1,000-hour figure because it is the actual planned amount.
2. Designed Service Life: The Coast Guard's business case assumed that all three of its fixed-wing assets have the same designed service life. However, the HC-144 has a 40-year designed service life, the C-27J has a 25-year designed service life, and the HC-130J is designed for a 30-year life. While the Coast Guard may be able to extend the service life of the C-27J, it could also do so for the HC-144 and HC-130J. We accounted for this by dividing each asset's full acquisition cost by its designed service life and multiplying by the 30-year span of the analysis.
3. Spare Parts: The Coast Guard is not going to receive $42 million in spare parts from the Air Force, which the original business case factored in but our analysis did not.
We also used the business case to generate purchase and employment schedules for each fixed-wing aircraft for the next 30 years. To assess the Coast Guard's current plan, we received the planned flight hours for the next 10 years from the Coast Guard's planning directorate and, similar to the business case, extrapolated these numbers over the next 30 years. To convert all information into fiscal year 2015 dollars, we used the deflators for procurement, fuel, operations and maintenance, and military pay, as appropriate, from the Office of the Under Secretary of Defense's National Budget Estimates for FY 2015, known as the green book. The efficiency of each planned fleet was derived by dividing the fleet's total acquisition and operating costs by its total planned flight hours.
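To make this methodology concrete, the following is a minimal sketch, in Python, of the efficiency calculation described above. The designed service lives, the 30-year analysis window, and the 1,000-hour utilization figures for the HC-144 and C-27J come from this report, and the aircraft quantities reflect the current plan (22 HC-130Js, 14 C-27Js, and 18 HC-144s). The per-aircraft acquisition costs (other than the approximately $100 million HC-130J figure cited earlier), the operating costs per flight hour, and the HC-130J utilization rate are hypothetical placeholders, not Coast Guard data.

# Minimal sketch of the fleet efficiency methodology described above.
# Values marked "placeholder" are illustrative assumptions, not report data.

ANALYSIS_YEARS = 30  # the 30-year analysis window, 2013 through 2042

# quantity, acquisition cost per aircraft ($ millions), designed service life
# (years), and planned flight hours per aircraft per year
fleet = {
    # HC-130J cost from the report; its utilization rate is a placeholder.
    "HC-130J": {"qty": 22, "unit_cost_m": 100.0, "life": 30, "hrs_per_yr": 700},
    # C-27J cost approximates the ~$600 million operationalization estimate
    # spread across 14 aircraft; utilization is the report's 1,000 hours.
    "C-27J": {"qty": 14, "unit_cost_m": 43.0, "life": 25, "hrs_per_yr": 1000},
    # HC-144 cost is a placeholder; utilization is the report's 1,000 hours.
    "HC-144": {"qty": 18, "unit_cost_m": 40.0, "life": 40, "hrs_per_yr": 1000},
}

# placeholder operating costs per flight hour, in dollars
op_cost_per_hr = {"HC-130J": 14_000, "C-27J": 9_000, "HC-144": 8_000}

def prorated_acq_cost_m(unit_cost_m, life):
    """Charge the 30-year window its share of an asset's acquisition cost,
    spread over the asset's designed service life (the report's adjustment
    for differing service lives)."""
    return unit_cost_m / life * ANALYSIS_YEARS

total_cost_m = 0.0
total_hours = 0.0
for name, a in fleet.items():
    hours = a["qty"] * a["hrs_per_yr"] * ANALYSIS_YEARS
    acq_m = a["qty"] * prorated_acq_cost_m(a["unit_cost_m"], a["life"])
    ops_m = hours * op_cost_per_hr[name] / 1e6  # convert dollars to $ millions
    total_cost_m += acq_m + ops_m
    total_hours += hours

# The report's efficiency metric: total acquisition and operating costs
# divided by total planned flight hours.
print(f"cost per flight hour: ${total_cost_m * 1e6 / total_hours:,.0f}")

The printed figure is illustrative only; the $11,059-per-flight-hour estimate for the current plan rests on the Coast Guard's actual cost data and schedules, which the placeholders above do not reproduce.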
Lastly, we met with Coast Guard officials who put together the Coast Guard’s estimate and assessed the estimate from the Survivability/Vulnerability Information Analysis Center. In addition, we met with Coast Guard officials who are working on the Coast Guard’s Fleet Mix Analysis who provided schedules and briefings to us describing this ongoing assessment. In addition to the contact above, Katherine Trimble, Assistant Director; Laurier R. Fish; Marie Ahearn; Peter W. Anderson; Ozzy Trevino; and Melissa Wohlgemuth all made key contributions to this report. Jonathan Mulcare also made contributions to this report. Coast Guard Acquisitions: Better Information on Performance and Funding Needed to Address Shortfalls. GAO-14-450. Washington, D.C.: June 5, 2014. Coast Guard: Portfolio Management Approach Needed to Improve Major Acquisition Outcomes. GAO-12-918. Washington, D.C.: September 20, 2012. Observations on the Coast Guard’s and the Department of Homeland Security’s Fleet Studies. GAO-12-751R. Washington, D.C.: May 31, 2012. Coast Guard: Action Needed as Approved Deepwater Program Remains Unachievable. GAO-11-743. Washington, D.C.: July 28, 2011. Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained. GAO-10-790. Washington, D.C.: July 27, 2010.
The Air Force is transferring 14 C-27J aircraft to the Coast Guard. Once modified into surveillance aircraft, the C-27Js will be a part of the Coast Guard's fixed-wing aircraft fleet. In 2007, the Coast Guard established a baseline of aircraft quantities and costs known as the program of record. This baseline established the cost and quantity of aircraft necessary to achieve its goal of 52,400 flight hours per year. The Coast Guard's aircraft, including the HC-144 and HC-130J/H, are integral to its missions, such as counterdrug and search and rescue. GAO was asked to review the transfer of the C-27J to the Coast Guard. This report assesses (1) the status of the transfer and risks the Coast Guard faces in fielding the transferred aircraft; and (2) the extent to which acquiring the C-27J affects the overall cost and performance of the Coast Guard's fixed-wing aviation fleet. GAO analyzed program documents and maintenance records for the C-27J. GAO interviewed Coast Guard and Air Force officials and private contractors. GAO also analyzed the Coast Guard's C-27J business case. As of January 2015, the Coast Guard had transferred 2 of the 14 C-27J aircraft it is receiving from the Air Force to its aircraft maintenance facility, with plans to field 14 fully operational C-27Js by 2022. According to initial Coast Guard estimates, while the aircraft come at no cost, the Coast Guard needs about $600 million to fully operationalize them. This process is complex and significant work and risk remain. For example, the Coast Guard must establish its needs and purchase a set of spare parts for each aircraft, but faces hurdles due to potential pricing issues and delivery delays from the manufacturer. Also, the Coast Guard does not have access to the manufacturer's technical data that are required for modifications to the aircraft's structure to, for example, incorporate radar. These and other risks may inhibit the Coast Guard's ability to operate the aircraft as planned. However, the Coast Guard is working to mitigate these risks. The C-27J will improve the affordability of the Coast Guard's fixed-wing fleet, but the fleet as currently planned may not be optimal in terms of cost and flight hour capability. The Coast Guard submitted a business case to Congress in 2013 that determined the C-27J would save $837 million over 30 years, compared to the program of record, without reducing fleet performance. GAO estimates that the fleet the Coast Guard is currently pursuing achieves nearly all of these savings. However, the source of these savings has shifted. A significant portion of the savings now results from an 18 percent drop in flight hours due to a change in the mix of aircraft the Coast Guard intends to pursue. GAO used updated information in conducting its analysis, such as the expected service life of each aircraft type. Consistent with congressional direction, the Coast Guard is conducting a multi-phased analysis of its mission needs—including its flight hour goals and fleet of fixed-wing assets—but will not present the full results prior to its 2019 budget request. In the meantime, the Coast Guard has prudently paused its existing HC-144 acquisition program. However, since 2000, the Coast Guard has received 12 HC-130Js without budgeting for them and it may continue to receive these aircraft while it studies its fixed-wing fleet needs. 
If the Coast Guard continues to receive these aircraft in the near term, the capability and cost of the Coast Guard's fixed-wing fleet run the risk of being dictated by the assets the Coast Guard already owns rather than by what it determines it needs. The Department of Homeland Security (DHS) and the Coast Guard should advise Congress of the time frames for the Coast Guard's fleet analysis and work with Congress, as appropriate, to modify the provision of additional HC-130Js in the interim. DHS agreed with the first recommendation, but did not agree with the second recommendation. If the Coast Guard accepts additional HC-130Js before completing the fleet mix study, the aircraft may be in excess of the Coast Guard's need.
Federal agencies, including DOD, can choose among numerous contract types to acquire products and services. One of the characteristics that varies across contract types is the amount and nature of the fee that agencies offer to the contractor for achieving or exceeding specified objectives or goals. Of all the contract types available, only award- and incentive-fee contracts allow an agency to adjust the amount of fee paid to contractors based on the contractor's performance. Typically, award-fee contracts emphasize multiple aspects of contractor performance in a wide variety of areas, such as quality, timeliness, technical ingenuity, and cost-effective management. Incentive-fee contracts usually focus on cost control, although they can also be used to motivate contractors to achieve specific delivery targets or performance goals in areas such as missile range, aircraft speed, engine thrust, or vehicle maneuverability. Regardless of differences between award- and incentive-fee contracts, federal acquisition regulations state that these contracts should be used to achieve specific acquisition objectives, such as delivering products and services on time or within cost goals and with the promised capabilities. For award-fee contracts, the assumption underlying the regulation is that the likelihood of meeting these acquisition objectives will be enhanced by using a contract that effectively motivates the contractor toward exceptional performance.

The reason or basis for selecting an award- or incentive-fee contract can vary, depending on the type of work a contractor is expected to perform. The acquisition environment, including the knowledge DOD has prior to starting an acquisition program, the adequacy of resources, and the soundness of acquisition practices, can also be a critical factor that affects how well contractor performance translates into acquisition outcomes.

The development and administration of award-fee contracts involve substantially more effort over the life of a contract than incentive-fee contracts. For award-fee contracts, DOD personnel (usually members of an award-fee evaluation board) conduct periodic—typically semiannual—evaluations of the contractor's performance against specified criteria in an award-fee plan and recommend the amount of fee to be paid. Because award fees are intended to motivate contractor performance in areas that are susceptible to judgmental and qualitative measurement and evaluation (e.g., technical, logistics support, cost, and schedule), these criteria and evaluations tend to be subjective. After receiving the recommendation of the award-fee evaluation board, a fee-determining official makes the final decision about the amount of fee the contractor will receive. The fee-determining official can also decide to move unearned award fee from one evaluation period to a subsequent evaluation period or periods, thus providing the contractor an additional opportunity to earn previously unearned fee—a practice called rollover. Table 1 provides a general look at the process for evaluating and determining award fee amounts.

Incentive-fee contracts use what is considered to be an objective evaluation of the contractor's performance to adjust the fee paid. DOD's evaluation usually involves the application of a fee-determination formula that is specified in the contract. Evaluations occur at the end of the contract or, in the case of a performance or delivery incentive, at program milestones.
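To illustrate the formula-based approach, the sketch below applies a typical cost-incentive fee-adjustment formula of the kind used in cost-plus-incentive-fee contracts; the target cost, target fee, share ratio, and fee limits are hypothetical and not drawn from any contract in our sample.

# Illustrative cost-plus-incentive-fee adjustment; all values are hypothetical.
def cpif_fee(actual_cost, target_cost, target_fee, contractor_share, min_fee, max_fee):
    # The contractor keeps its share of any underrun (or absorbs its share of
    # any overrun) through the fee, bounded by the contract's fee limits.
    adjusted = target_fee + contractor_share * (target_cost - actual_cost)
    return max(min_fee, min(max_fee, adjusted))

# Hypothetical contract: $100M target cost, $7M target fee, 80/20
# government/contractor share ratio, $2M minimum fee, $10M maximum fee.
for actual in (90e6, 100e6, 120e6, 150e6):
    fee = cpif_fee(actual, 100e6, 7e6, 0.20, 2e6, 10e6)
    print(f"Actual cost ${actual / 1e6:.0f}M -> fee ${fee / 1e6:.1f}M")

Note that once an overrun drives the formula to the minimum fee, further overruns produce no additional fee adjustment and the government continues to reimburse allowable costs, a dynamic discussed later in this report.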
The evaluations do not require an extensive evaluation process or the participation of a large number of contracting or program personnel. Table 2 provides a general look at the process for evaluating and determining the amount of incentive fee paid for a contract with a cost incentive.

For this report, we examined fixed-price and cost-reimbursable award- and incentive-fee contracts, as well as contracts that combined aspects of both of these contract types. (See app. III for an explanation of various contract types.) Our probability sample of 93 contracts was drawn from a total of 597 DOD award- and incentive-fee contracts that were active from fiscal years 1999 through 2003 and had at least one contract action coded as cost-plus-award-fee, cost-plus-incentive-fee, fixed-price-award-fee, or fixed-price incentive valued at $10 million or more during that time. Among the sample, 52 contracts contained only award-fee provisions, 27 contracts contained only incentive-fee provisions, and 14 contracts included both. (App. I contains additional information on our scope and methodology.)

From fiscal year 1999 through fiscal year 2003, award- and incentive-fee contract actions accounted for 4.6 percent of all DOD contract actions over $25,000. However, when dollars obligated are taken into account, award- and incentive-fee contract actions accounted for 20.6 percent of the dollars obligated on actions over $25,000, or over $157 billion, as shown in figure 1. Our sample of 93 contracts includes $51.6 billion, or almost one-third, of those obligated award- and incentive-fee contract dollars. DOD utilized the contracts in our sample for a number of purposes. For example, research and development contracts accounted for 51 percent (or $26.4 billion) of the dollars obligated against contracts in our sample from fiscal years 1999 through 2003, while non-research-and-development services accounted for the highest number of contracts in our sample. Contract actions include any action related to the purchasing, renting, or leasing of supplies, services, or construction, including definitive contracts; letter contracts; purchase orders; orders made under existing contracts or agreements; and contract modifications, which would include the payment of award and incentive fees. Table 3 shows the dollars obligated and the types of contracts by product and service. Appendix IV contains a breakdown of the contracts in our sample by contract type and military service.

DOD has the flexibility to mix and match characteristics from different contract types. The risks for both DOD and the contractor vary depending on the exact combination chosen, which, according to the Federal Acquisition Regulation, should reflect the uncertainties involved in contract performance. Based on the results from our sample, about half of the contracts in our study population were cost-plus-award-fee contracts. The theory behind these contracts is that although the government assumes most of the cost risk, it retains control over most or all of the contractor's potential fee as leverage. On cost-plus-award-fee contracts, the award fee is often the only source of potential fee for the contractor.
According to defense acquisition regulations, these contracts can include a base fee—a fixed fee for performance paid to the contractor—of anywhere from 0 to 3 percent of the value of the contract; however, based on our sample results, we estimate that about 60 percent of the cost-plus-award-fee contracts in our study population included zero base fee. Tables 4 and 5 show the estimated percentage of DOD award-fee contracts that had a particular percentage of the value of the contract available in award fees and base fees.

Based on the results from our sample, an estimated 16 percent of the contracts in our study population were fixed-price incentive contracts, and an estimated 13 percent were cost-plus-incentive-fee contracts. In both of these cases, the government and the contractor share the cost risks. However, on fixed-price incentive contracts, the contractor usually assumes more risk because if the contract reaches its ceiling price, the contractor absorbs the loss. Under a cost-plus-incentive-fee contract, when costs increase to the point where the contractor will only earn the minimum fee, no further fee adjustments occur and the government continues to pay the contractor's reimbursable costs.

When discussing award- and incentive-fee contracts, it is important to acknowledge the acquisition environment in which they are used. For instance, based on our sample results, we estimate that most of the contracts and most of the dollars in our study population are related to the acquisition of weapon systems. Since 1990, GAO has designated DOD weapon system acquisition as a high-risk area. Although U.S. weapons are the best in the world, DOD's acquisition process for weapon programs consistently yields undesirable consequences—cost increases, late deliveries to the warfighter, and performance shortfalls. These problems occur because DOD's weapon programs do not capture early on the requisite knowledge that is needed to efficiently and effectively manage program risks. For example, programs move forward with unrealistic program cost and schedule estimates, lack clearly defined and stable requirements, use immature technologies in launching product development, and fail to solidify design and manufacturing processes at appropriate junctures in development. As a result, wants are not always distinguished from needs, problems often surface late in the development process, and fixes tend to be more costly than if made earlier. When programs require more resources than planned, the buying power of the defense dollar is reduced, and funds are not available for other competing needs.

The persistence of these problems reflects the fact that the design, development, and production of major weapon systems are extremely complex technical processes that must operate within equally complex budget and political processes. A program that is not well conceived, planned, managed, funded, and supported may easily be subject to such problems as cost growth, schedule delays, and performance shortfalls. Even properly run programs can experience problems that arise from unknowns, such as technical obstacles and changes in circumstances. In short, it takes a myriad of things to go right for a program to be successful but only a few things to go wrong to cause major problems.

DOD has not structured and implemented award-fee contracts in a way that effectively motivates contractors to improve performance and achieve acquisition outcomes.
DOD practices—such as routinely paying its contractors nearly all of the available award fee, amounting to billions of dollars, regardless of whether the acquisition outcomes fell short of, met, or exceeded expectations; rolling an estimated $669 million in unearned or withheld award fees to future evaluation periods; and paying a significant portion of the available fee for what award-fee plans describe as "acceptable, average, expected, good, or satisfactory" performance—all lessen the motivation for the contractor to strive for excellent performance. In addition, DOD award-fee plans have not been structured to focus the contractor's attention on achieving desired acquisition outcomes. DOD generally does not evaluate contractors on criteria that are directly related to acquisition outcomes, and the link between the elements of contractor performance that are included in award-fee criteria and acquisition outcomes is not always clear. While incentive-fee contracts are more directly linked to select acquisition outcomes, DOD has not fared well at using these types of contracts to improve cost control behavior or meet program goals. However, when contractor performance does not result in the desired acquisition outcome under an incentive-fee contract, the reduction of fees is usually automatic and based on the application of a predetermined formula. Figure 2 summarizes our findings within the general framework of issues surrounding DOD's use of award and incentive fees.

DOD's practice of routinely paying its contractors nearly all of the available award fee puts DOD at risk of creating an environment in which programs pay and contractors expect to receive most of the available fee, regardless of acquisition outcomes. Based on our sample, we estimate that for DOD award-fee contracts, the median percentage of available award fee paid to date (adjusted for rollover) was 90 percent, representing an estimated $8 billion in award fees for contracts active from fiscal years 1999 through 2003. The lowest percentage of available fee paid to date for contracts in our sample was 36 percent, and the highest was 100 percent. Figure 3 shows the percentage of available fee earned for the 63 award-fee contracts in our sample and the lack of variation, especially across the contracts in the middle of the distribution.

The pattern of consistently high award-fee payouts is also present in DOD's fee decisions from evaluation period to evaluation period. This pattern is evidence of reluctance among DOD programs to deny contractors significant amounts of fee, even in the short term. We estimate that the median percentage of award fee earned for each evaluation period was 93 percent and the level of variation across the evaluation periods in our sample was similar to the trend shown in figure 3. On DOD award-fee contracts, we estimate that the contractor received 70 percent or less of the available fee in only 9 percent of the evaluation periods and none of the available fee in only 1 percent of the evaluation periods. Figure 4 shows the percentage of available fee earned by evaluation period for the award-fee contracts in our sample. There were 572 evaluation periods overall for these contracts.

In addition to consistently awarding most of the available award fee on an evaluation period-by-evaluation period basis, the use of "rollover" is another indication of DOD's reluctance to withhold fees.
Rollover is the process of moving unearned available award fee from one evaluation period to a subsequent evaluation period, thereby providing the contractor an additional opportunity to earn that unearned award-fee amount. DOD and program officials view rollover as an important mechanism for maintaining leverage with contractors; however, award-fee guidance issued by the Air Force, Army, and Navy in the last 3 years states that this practice should rarely be used in order to avoid compromising the integrity of the award-fee evaluation process. We estimate that 52 percent of DOD award-fee contracts rolled over unearned fees into subsequent evaluation periods. We estimate that unearned fees were rolled over in 42 percent of evaluation periods of contracts that used this practice. Further, we estimate that the mean percentage of unearned fees that were rolled over in these periods was 86 percent, and in 52 percent of these periods at least 99 percent of the unearned fee was rolled over. Consequently, in many evaluation periods when rollover was used, the contractor still had the chance to earn almost all of the unearned fee, even in instances when the program was experiencing problems. Across all the evaluation periods for the 32 contracts in our sample that used this practice, the amount rolled over was almost $500 million, or an average of 51 percent of the total unearned fees. (See fig. 5 for a depiction of DOD's use of rollover on the contracts in our sample.) Overall, for DOD award-fee contracts active from fiscal years 1999 through 2003, we estimate that the total dollars rolled over across all evaluation periods that had been conducted by the time of our review was $669 million.

Several of the contracts in our sample routinely rolled over 100 percent of a contractor's unearned award fee into fee pools for use later in the programs. For example, the Joint Strike Fighter program has rolled over 100 percent of the unearned award fee for its development contracts into a reserve award-fee pool that the program uses to target areas not covered in the award-fee plan, such as encouraging the contractor to track awards to small businesses and improving communications with countries that are partners in the development program. However, the program has also used the reserve award-fee pool to provide additional money to motivate cost control, even though this area is already a focus of the award-fee plan. If the contractor does not earn the fee in the targeted area, the program keeps rolling the unearned fee back into the reserve pool. The practical effect of this is that the Joint Strike Fighter program's prime contractors still have the ability to earn the maximum award fee despite the cost and technical issues the program has experienced.

DOD may also be diluting the motivational effectiveness of award fees by paying significant amounts of fee for satisfactory performance. Although DOD guidance and federal acquisition regulations state that award fees should be used to motivate excellent contractor performance in key areas, most DOD award-fee contracts pay a significant portion of the available fee from one evaluation period to the next for what award-fee plans describe as "acceptable, average, expected, good, or satisfactory" performance.
Figure 6 shows the maximum percentage of award fee paid for "acceptable, average, expected, good, or satisfactory" performance and the estimated percentage of DOD award-fee contracts active from fiscal years 1999 through 2003 that paid these percentages. Some plans for contracts in our sample did not require the contractor to meet all of the minimum standards or requirements of the contract to receive one of these ratings. Some DOD award-fee contracts in our sample also allowed for a portion of the available award fee to be paid for marginal performance—a rating lower than satisfactory. Even fixed-price-award-fee contracts, which already include a normal level of profit in the price, paid out award fees for satisfactory performance. Six of the eight fixed-price contracts with award-fee provisions in our sample paid out 50 percent or more of the available award fee for satisfactory performance.

The amount of award fee being paid for performance at or below the minimum standards or requirements of the contract appears to be inconsistent not only with the intent of award fees (as explained in DOD guidance and federal acquisition regulations) but also with the reasons contracting and program officials cited on our questionnaire for their use. According to responses to our questionnaire, rewarding satisfactory performance was one reason that award or incentive fees were used on an estimated 29 percent of DOD award- and incentive-fee contracts. However, rewarding better than satisfactory performance was one reason that these fees were used on an estimated 77 percent of these contracts.

The responses provided to our questionnaire also seem to rule out the administration of award fees as one of the reasons for their general lack of effectiveness. Several key elements related to the development and administration of award-fee contracts were present on almost all contracts. Specifically, contracting and program officials' questionnaire responses showed that the appropriate people were involved in the development and administration of award-fee contracts, and there was adequate guidance and training in place. We estimate that for 91 percent of DOD award-fee contracts, there were designated performance monitors responsible for evaluating specific areas described in the award-fee plan. On an estimated 88 percent of DOD award-fee contracts, award-fee evaluation board members received training on their roles and responsibilities. We further estimate that on 85 percent of DOD award-fee contracts, performance monitors also received training. Evaluation boards were held as planned for an estimated 86 percent of DOD award-fee contracts, and some programs conducted interim assessments of contractor performance to support the end-of-period evaluations. Based on questionnaire responses from contracting and program officials, an estimated 95 percent of DOD award-fee contracts had rating category descriptions that provided enough detail to distinguish between categories. An estimated 79 percent of the contracting officers responsible for developing and administering award-fee contracts and an estimated 80 percent of the contracting officers responsible for incentive-fee contracts believed the training was adequate. Finally, the contracting and program officials on an estimated 94 percent of DOD award- and incentive-fee contracts felt that the guidance they used to develop and administer the contract was adequate.
DOD programs do not structure award fees in a way that motivates contractors to achieve or holds contractors accountable for achieving desired acquisition outcomes. In several contracts we evaluated, DOD established award-fee criteria that were focused on broad areas, such as how well the contractor was managing the program. This can result in award-fee plans and criteria that seemingly have little to do with acquisition outcomes, such as meeting cost and schedule goals and delivering desired capabilities. For example, on a Navy ship construction contract, 50 percent of the award-fee money, or $28 million, was based on management criteria, including how responsive the contractor was to the government customers, the quality and accuracy of contract proposals, and the timeliness of contract data requirements. Elements of the award-fee process, such as the frequency of evaluations, may also limit DOD's ability to effectively evaluate the contractor's progress toward acquisition outcomes. For instance, while holding award-fee evaluations every quarter was successful for three Pentagon Renovation Management construction contracts because the contractor's short-term progress could easily be assessed, a similar strategy might not be effective for a long-term development effort because quarterly or even semiannual evaluations may not generate meaningful information about progress.

High award-fee payouts on programs that have fallen or are falling well short of meeting their stated goals are also indicative of DOD's failure to implement award fees in a way that promotes accountability. Several major development programs—accounting for 52 percent of the available award-fee dollars in our sample and 46 percent of the award-fee dollars paid to date—are not achieving or have not achieved their desired acquisition outcomes, yet contractors received most of the available award fee. The Comanche helicopter, the F/A-22 and Joint Strike Fighter aircraft, and the Space-Based Infrared System High satellite system have experienced significant cost increases, technical problems, and development delays, but the prime systems contractors have respectively received 85, 91, 100, and 74 percent of the award fee made available to date (adjusted for rollover), totaling $1.7 billion (see table 6).

DOD can ensure that fee payments are more representative of program results by developing fee criteria that focus on its desired acquisition outcomes. We found two notable examples in which DOD's Missile Defense Agency attempted to hold contractors accountable for program outcomes. In the case of the Airborne Laser program, DOD revised the award-fee plan in June 2002 as part of a program and contract restructuring. The award-fee plan was changed to focus on achieving a successful system demonstration by December 2004. Prior to the restructuring, the contractor had received 95 percent of the available award fee, even though the program had experienced a series of cost increases and schedule delays. The contractor did not receive any of the $73.6 million award fee available under the revised plan because it did not achieve the key program outcome—successful system demonstration. Similarly, the development contract for the Terminal High Altitude Area Defense program, a ground-based missile defense system, contains a portion of the award fee tied specifically to desired program outcomes—conducting successful flight tests, including intercepts of incoming missiles.
This $50 million special award-fee pool is separate from and in addition to the subjective award-fee portion of the contract, which is worth more than $524 million (of which $275 million has already been paid). If one of the first two test flights is successful, the contractor will receive $25 million. If the missile misses the target, the contractor provides DOD with a cost credit of $15 million. The first of these flight tests is scheduled to occur before the end of calendar year 2005.

Other programs have utilized different fee strategies, namely conditional fees and linked incentives, to focus the contractor's attention on specific acquisition outcomes. However, contracting officials have stated that there are few mechanisms to share lessons learned and innovative practices beyond the local level. Conditional fees stipulate that certain requirements must be met for a contractor to earn and keep fees. For example, we reviewed an Intercontinental Ballistic Missile program award-fee plan that included an "After Discovered Performance Deficiencies" provision to ensure that award-fee payouts were consistent with program outcomes. This provision allowed the program to retrieve funds paid during prior award-fee periods if the program experienced overruns or if performance deficiencies were discovered after the award fee had been paid. Linked incentives evaluate cooperation across multiple contracts and contractors. For example, after initial interoperability problems, the Cooperative Engagement Capability program added award-fee criteria to evaluate how well the system integrated with the Aegis destroyer.

Contracts with incentive fees have also not fared well at motivating cost-control behavior or meeting program targets; however, fee payments are more consistent with acquisition outcomes. According to DOD contracting and program officials, contractors overran or were expected to overrun the target price on 52 percent of the 27 incentive-fee contracts in our sample. In these cases, the contractor does not earn the target fee but may earn a minimum fee, if one is specified in the contract. For example, the Navy's cost-plus-incentive-fee contract for the LPD 17, an amphibious transport dock ship, is projected to overrun the target price of $644 million by at least 139 percent; the Army's Brilliant Anti-Armor Submunition program, a fixed-price incentive contract for test hardware, overran the $75 million target cost by 27 percent ($20 million); and the fixed-price incentive contract for the Navy's P-3C Sustained Readiness Program initially called for 50 kits to be produced, but only 13 were delivered before contract funding was exhausted.

Incentive-fee contracts that also included performance and delivery incentives similarly have not met those key objectives, as shown in the examples below. Even though the system received approval from the Navy in June 2005 for low-rate initial production, the contracting officer and program manager stated that the cost, delivery, and technical incentives in the Airborne Laser Mine Detection System program did not improve contractor performance. During the course of the effort, the contractor experienced several cost overruns, as well as technical performance shortfalls. In addition, because of government delays, program officials decided to eliminate the delivery incentive included in the initial contract.
According to the contracting and program officials responsible for administering and managing one of the Army's chemical demilitarization contracts, performance milestones with incentive fees were an important part of the Army's effort to accelerate the destruction of chemical weapons stockpiles after the events of September 11, 2001. However, these incentives did not keep the contract on schedule. The contractor missed the target completion date for the third of its four performance incentive milestones, and the program was delayed by over a year. According to DOD, the failure to meet this milestone was due to unforeseen technical difficulties and ultimately could not have been influenced by any type of contractual language.

In contrast, the successful use of fee is supported by the level of product knowledge attained by officials and their ability to leverage this knowledge. For example, DOD contracting officials for the Patriot Advanced Capability-3 missile had a well-developed knowledge of the acquisition's cost risks and were able to reduce costs by $42 million for the low-rate initial production contract. Contracting officials stated that the favorable outcome was due to the use of a cost model that was developed and matured on the previous production contract.

Unlike award-fee contracts, incentive-fee contracts are based on formula-like mechanisms that determine the amount of fee earned. When a contractor misses a target in an incentive-fee contract, the reduction of fees is usually automatic and based on the application of a predetermined formula. The nature of the fee criteria in these contracts also eliminates most of the subjectivity in the evaluation process. Cost, schedule or delivery, and performance incentives are all based on targets that can be evaluated against actual costs, actual dates, and actual performance. In addition, negative incentives allow for fee reductions if the contractor does not meet certain criteria. For example, on one of the Navy's carrier refueling and overhaul contracts, the contractor's fee could be reduced if its overhead rate exceeded a certain target. Since incentive fees, especially those related to cost, are primarily evaluated at the conclusion of the contract, the officials applying the evaluation criteria or fee formula have a clear sense of the contractor's performance.

DOD's use of monetary incentives is based on the assumption that such incentives can improve contractor performance and acquisition outcomes; however, past studies have challenged the validity of this assumption. Research on incentive fees going back to the 1960s has concluded that these fees are not effective in controlling costs. Studies conducted by GAO, Harvard University, and the RAND Corporation, among others, have concluded that these incentives do not motivate cost efficiency, in part because profit is not the contractor's only motivation. Other considerations, such as securing future contracts with the government, can be stronger motivators than earning additional profit. More recently, research on award fees revealed that while these fees are an intuitively appealing way to improve contractor performance, they do not always operate that way in practice. Contractor respondents in one study stated that award fees motivate performance to some extent; however, the consensus was that they do not in and of themselves increase performance significantly.
Research has also pointed to recurring disconnects between the intent and the administration of award-fee contracts. Award-fee criteria were not applied as intended, and many award-fee board members and fee-determining officials approached the process with the assumption that the contractors should earn the full amount unless there were specific instances of poor performance that warranted deductions, instead of starting at zero and considering the actions the contractor had taken to earn the available fee. Finally, the lack of explicit rationale and documentation in support of performance ratings has led some researchers to conclude that fees were being paid without adequate justification.

Despite these findings and the concerns raised by senior DOD officials about the amounts of award fee paid to contractors on acquisitions that were not performing to their established baselines, very little effort has gone into determining whether DOD's current use of monetary incentives is effective. Over the past few years, officials including the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Assistant Secretary of the Air Force for Acquisition expressed concerns that contractors routinely earn high percentages of fee while programs have experienced performance problems, schedule slips, and cost growth. In 1999, following a report by a DOD-led integrated process team addressing contractor incentives, the Under Secretary of Defense also issued a memorandum for all service secretaries specifically noting that contractors do not always have an incentive to focus their attention on the government's desired outcomes and offered several principles for structuring future contract incentives. However, according to the lead of the integrated process team from the Office of the Secretary of Defense, the effort did not result in any new policy directives, changes in guidance, or new training. In addition, DOD did not assess the results of the study.

In contrast to the concerns expressed by DOD's senior acquisition leadership, we gathered testimonial evidence that indicates DOD contracting and program officials believe that these monetary incentives are effective for improving contractor performance. Based on responses to our questionnaire, an estimated 77 percent of DOD award- and incentive-fee contracts had improved performance because of the incentive provisions, in the opinion of contracting and program officials. On award-fee contracts, officials pointed to increased responsiveness or attention from the contractor at the management level as evidence of this improvement, even if this increased responsiveness did not result in overall desired program outcomes being achieved.

One of the potential reasons for this disconnect between statements at the policy level and the opinions of practitioners is the lack of a DOD-wide system for compiling and aggregating award- and incentive-fee information and for identifying resulting trends and outcomes. DOD has not compiled information, conducted evaluations, or used performance measures to judge how well award and incentive fees are improving or can improve contractor performance and acquisition outcomes. The lack of data is exemplified by the fact that DOD does not track such basic information as how much it pays in award and incentive fees. Such information collection across DOD is possible.
For instance, DOD is implementing the Defense Acquisition Management Information Retrieval system to collect data on acquisition costs and variances, schedules, and program baseline breaches on major acquisition systems. This system provides DOD policymakers with readily available information they can use to oversee program performance across the department. If DOD does not begin to collect similar information on award and incentive fee payments, it may not be able to measure progress toward meeting one of the goals listed in its fiscal year 2004 performance and accountability report, that is, invigorating the fiscal well-being of the defense industry by rewarding good performance.

The existence or application of a well-developed and well-implemented monetary incentive alone does not determine the overall success or failure of an acquisition. DOD acquisition programs operate in an environment with underlying pressures and incentives that drive both program and contractor behavior. Competition for funding and contracts leads to situations, especially in major system acquisitions, in which costs are underestimated and capabilities are overpromised. Resulting problems require additional time and money to address. At the same time, DOD customers are tolerant of cost overruns and delays in order to get a high-performance weapon system. DOD's current approach toward monetary incentives reflects these realities and has resulted in a failure to hold contractors accountable for delivering and supporting fielded capabilities within cost and schedule baselines. While DOD and contractors share the responsibility for program success, award and incentive fees, to be effective, need to be realigned with acquisition outcomes. Awarding large amounts of fee for satisfactory or lesser performance and offering contractors multiple chances to earn previously withheld fees has fostered an environment in which DOD expects to pay and contractors expect to receive most of the available award fee regardless of outcomes. In addition, DOD's lack of information on how well award and incentive fees are achieving their intended purpose leaves the department vulnerable to millions of dollars of potential waste. Successes do exist at the individual contract level, but DOD will need to leverage this knowledge if it hopes to identify proven incentive strategies across a wide variety of DOD acquisitions.

To strengthen the link between monetary incentives and acquisition outcomes and, by extension, increase the accountability of DOD programs for fees paid and of contractors for results achieved, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following seven actions. DOD can immediately improve its use of award fees on all new contracts by (1) instructing the military services to move toward more outcome-based award-fee criteria that are both achievable and promote accountability for acquisition outcomes; (2) ensuring that award-fee structures are motivating excellent contractor performance by only paying award fees for above satisfactory performance; and (3) requiring the appropriate approving officials to review new contracts to make sure these actions are being taken. DOD can improve its use of award fees on all existing contracts by (4) issuing DOD guidance on when rollover is appropriate.
In the longer term, DOD can improve its use of award and incentive fees by (5) developing a mechanism for capturing award- and incentive-fee data within existing data systems, such as the Defense Acquisition Management Information Retrieval system; (6) developing performance measures to evaluate the effectiveness of award and incentive fees as a tool for improving contractor performance and achieving desired program outcomes; and (7) developing a mechanism to share proven incentive strategies for the acquisition of different types of products and services with contracting and program officials across DOD.

DOD's Office of Defense Procurement and Acquisition Policy provided written comments on a draft of this report. These comments are reprinted in appendix II. DOD also provided separate technical comments, which we have incorporated as appropriate. DOD concurred with three of our seven recommendations—moving toward more outcome-based award-fee criteria, issuing guidance on rollover, and developing a mechanism to share proven incentive strategies. The department indicated that it would implement these recommendations by issuing a policy memorandum on award fees and completing a communications plan for sharing incentive strategies on March 31, 2006.

DOD partially concurred with four of our seven recommendations. Concerning three of the four recommendations—requiring the appropriate officials to make sure these recommendations are implemented in new contracts, collecting award and incentive fee data, and developing performance measures to evaluate the effectiveness of award and incentive fees in improving acquisition outcomes—DOD indicated that the Director of the Office of Defense Procurement and Acquisition Policy, in collaboration with the military departments and defense agencies, would conduct a study to determine the appropriate actions to address them. The office plans to complete the study by June 1, 2006. While this study may provide additional insights, we encourage DOD to use it as a mechanism for identifying the specific steps the department will take to fully address our recommendations, not to determine whether the department will take action. For instance, in its response to our recommendation on developing a mechanism for capturing award and incentive fee data, DOD raises the issue of cost. We agree that the potential cost of implementing this recommendation should be considered when deciding on an appropriate course of action. However, given that the department paid out an estimated $8 billion in award fees on the contracts in our study population regardless of outcomes, we believe that a reasonable investment in ensuring that these funds are well spent in the future is warranted. Collecting these data is also necessary to support the development of meaningful performance measures, which can be used to evaluate the costs and benefits of continuing to use these contract types and determine if they are achieving their goal of improving contractor performance and acquisition outcomes. Further, without data and performance measures, DOD will not be in a position to measure the effectiveness of any actions it takes to address the issues identified in this report.

DOD also partially concurred with our recommendation related to only paying award fees for above satisfactory performance. Specifically, the department stated that it is fair and reasonable to allow the contractor to earn a portion of the award fee for satisfactory performance.
However, we believe that this use of award fee should be the exception, not the rule. Fixed-price-award-fee contracts already include a normal level of profit in the price, which is paid for satisfactory performance. In addition, the inclusion of base fee in a cost-plus-award-fee contract may be a more appropriate mechanism for providing fee for satisfactory performance. According to the Army Contracting Agency's Handbook for Award Fee Contracts, base fee (not exceeding 3 percent of the estimated contract cost) can be paid to the contractor for acceptable performance and is designed to compensate the contractor for factors such as risk assumption, investment, and the nature of the work. DOD also stated that award fee arrangements should be structured to encourage the contractor to earn the preponderance of fee by providing excellent performance. According to its comments, DOD plans to address this issue in the March 2006 policy memorandum on award fees. While DOD may conclude that it needs the flexibility to pay a portion of the award fee for satisfactory performance, especially for high-risk efforts, current practice on most award-fee contracts is to pay a significant portion of the available fee for "acceptable, average, expected, good, or satisfactory" performance. We would encourage DOD to consider limiting the maximum percentage of fee available for this level of performance in order to, consistent with its comments, keep the preponderance of fee available for excellent performance.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please call me at (202) 512-4841 ([email protected]). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to this report were Thomas J. Denomme, Assistant Director; Robert Ackley; Heather Barker; Lily J. Chin; Aftab Hossain; Julia Kennon; John Krump; Jerry Sandau; Sidney Schwartz; Ron Schwenn; Najeema Davis Washington; and E. Chris Woodard.

Our objective was to determine whether award and incentive fees are an effective management tool for achieving the Department of Defense's (DOD) desired outcomes. To conduct our work, we selected a sample of 93 award- and incentive-fee contracts, interviewed contracting and program officials, analyzed contract documentation related to incentive provisions, collected and analyzed data on award-fee payments, reviewed DOD and military service guidance on award and incentive fees, and examined the results of initiatives related to improving the use of these fees. Our sample for this review was based on contract data from the Federal Procurement Data System. We extracted information from this database on all DOD contracts active from fiscal years 1999 through 2003 that had at least one contract action coded as cost-plus-award-fee, cost-plus-incentive-fee, fixed-price-award-fee, or fixed-price incentive valued at $10 million or more during that time. These criteria gave us a study population of 597 unique contracts, which were associated with 2,474 award- and incentive-fee contract actions.
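As a minimal sketch of this selection step, the code below filters a flat extract of contract actions down to the study population; the field names are hypothetical and do not reflect the Federal Procurement Data System's actual schema.

# Sketch of the study-population selection logic. Field names are
# hypothetical; they do not reflect the actual FPDS schema.
QUALIFYING_TYPES = {"cost-plus-award-fee", "cost-plus-incentive-fee",
                    "fixed-price-award-fee", "fixed-price incentive"}

def study_population(actions):
    # Keep contracts with at least one qualifying action of $10 million or
    # more during fiscal years 1999 through 2003.
    contracts = set()
    for action in actions:  # each action is a dict for one contract action
        if (action["contract_type"] in QUALIFYING_TYPES
                and 1999 <= action["fiscal_year"] <= 2003
                and action["action_value"] >= 10_000_000):
            contracts.add(action["contract_id"])
    return contracts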
To ensure the validity of the database from which we drew our sample, we tested the reliability of the contract type field in the Federal Procurement Data System. We selected a sample of 30 contracts from the population of DOD contracts active from fiscal years 1999 through 2003 and asked DOD and the military services to provide data on the contract type(s) for each one using data sources other than the Federal Procurement Data System or Individual Contracting Action Reports (DD Form 350). We also requested that DOD and the military services verify that at least one contract action from fiscal years 1999 through 2003 was valued at over $10 million. Of the 30 contracts, DOD and the military services reported that 10 were either incorrectly coded or omitted information on a relevant contract type in the Federal Procurement Data System. Of these 10 errors, only 3 would have caused a contract to be mistakenly included in or excluded from the population from which our sample was selected. Based upon these responses and the exclusion of only one contract from our sample because of miscoding in the Federal Procurement Data System, we determined that the data were sufficiently reliable for the purposes of this report.

To select the sample for this review, we stratified the population of 597 contracts based on the total dollar value of award- and incentive-fee contract actions associated with the contract during this period. We included all 12 contracts in the sample for which the total value of the award- and incentive-fee contract actions during this period exceeded $2 billion. We used probability sampling techniques to select 85 contracts from the remaining 585 contracts in the population, ensuring that the numbers of contracts from the Navy, Army, Air Force, and all other defense agencies and organizations combined were proportional to their representation among the 585. During our work, we discovered that 2 of the 85 contracts we sampled from the stratum of 585 contracts were outside of the scope of this review. These contracts were removed from the sample. We also discovered that for 2 other in-scope contracts in this stratum, the officials involved in developing and administering the contract and the contract documentation were not available. We excluded these contracts from our analysis. We randomly selected a total of 4 additional contracts from the same stratum to include in our analysis.

Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as 95 percent confidence intervals (for example, plus or minus 7 percentage points). These are the intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from our review have margins of error (that is, confidence interval widths) not exceeding plus or minus 10 percentage points, unless otherwise noted. All numerical estimates other than percentages (such as totals and ratios) have margins of error not exceeding plus or minus 25 percent of the value of those estimates.
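For readers interested in the mechanics, the sketch below computes a 95 percent confidence interval for an estimated percentage; it is a simplified illustration that assumes a simple random sample and omits the stratification and finite-population adjustments reflected in our actual estimates.

# Simplified 95 percent confidence interval for an estimated percentage.
# Assumes a simple random sample; the actual analysis also accounted for
# stratification and the finite population. Illustration only.
import math

def confidence_interval_95(successes, sample_size):
    p = successes / sample_size
    half_width = 1.96 * math.sqrt(p * (1 - p) / sample_size)  # normal approx.
    return p - half_width, p + half_width

# Hypothetical example: 47 of 85 sampled contracts share some attribute.
low, high = confidence_interval_95(47, 85)
print(f"Estimate {47 / 85:.0%}, 95 percent CI {low:.0%} to {high:.0%}")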
Our analysis also tested the extent to which statistically significant relationships existed between factors such as contract type, the reasons an incentive contract was chosen, the types of officials involved in developing the incentive structure, the use of rollover, training, and guidance, and whether contracting and program officials cited improved contractor performance because of the use of incentives.

To determine whether award and incentive fees are an effective management tool, we conducted structured interviews with contracting and program officials about the development, implementation, and effectiveness of the incentive structure for 92 of the 93 award- and incentive-fee contracts in our sample; analyzed contract documentation related to incentive provisions; and collected and analyzed data on award-fee payments for 63 of the 66 contracts with award-fee provisions in our sample. For one contract, the office responsible for administering the contract could not identify any contracting or program personnel who could address our interview topics, and all questions were coded as "no response." For three contracts, the office responsible for administering the contract could not provide complete documentation on award-fee payments.

To conduct our structured interviews on the development, implementation, and effectiveness of the incentive structure, we used a questionnaire that was a combination of open- and close-ended questions. When possible, these interviews were held in person. We visited the Defense Threat Reduction Agency Headquarters; Joint Strike Fighter Program Office; Los Angeles Air Force Base; Missile Defense Agency (Navy Annex); Patuxent Naval Air Station; Pentagon Renovation and Construction Program Office; Redstone Arsenal; U.S. Army Contracting Agency's Information Technology, E-Commerce and Commercial Contracting Center; U.S. Navy's Strategic System Program Office; Warner Robins Air Force Base; Washington Navy Yard; and Wright-Patterson Air Force Base for this purpose. The remaining interviews were held by video teleconference or by telephone. All interviews were conducted between October 2004 and April 2005.

We also reviewed contract documentation related to the development and implementation of the contracts' incentives, including the basic contract, statement of work, acquisition planning documents, modifications related to the incentive structure, award-fee plan, documentation describing fee criteria for specific evaluation periods, contractor self-assessments, award-fee board evaluation reports, and fee-determination documents. We used this information to corroborate and supplement the information provided in the structured interviews, determine the extent to which linkages exist between fee criteria and the desired program outcomes identified by contracting and program officials, and examine fee payments in the context of program performance. When possible, we evaluated program and contract performance using GAO's body of work on DOD systems acquisitions, including the annual assessment of selected major weapon programs and annual status report on the ballistic missile defense program.
For each of the 66 award-fee contracts in our sample, we collected and analyzed data on the base fee and maximum award fee, expressed as a percentage of the estimated cost, exclusive of the cost of money; the award fee available and paid for each evaluation period; the amount of unearned fee rolled over into subsequent evaluation periods; the total award-fee pool; and the remaining award-fee pool, which included any rolled-over fee still remaining to be potentially earned. In most cases, contracting and program officials submitted the data on a standard template we provided. In cases where the program did not submit data in the requested format, we gathered this information from fee-determination letters and contract modifications. We also used these documents to verify the reliability of the data that were submitted by contracting and program officials.

From these data, we calculated the percentage of the available fee that was awarded for individual evaluation periods, entire contracts to date, and the overall sample. We included rollover amounts available and earned in our calculations of fee awarded for individual evaluation periods. When calculating the percentage of fee earned for entire contracts, we excluded rolled-over fees from the available fee pool when those fees were still available to be earned in future evaluation periods. We also calculated the percentage of unearned fee that was made available to the contractor as rollover for individual evaluation periods, entire contracts, and the overall sample. Estimates of total award fees earned and total award fees that were rolled over are based on all evaluation periods held from the inception of our sample contracts through our data collection phase, not just those from fiscal years 1999 through 2003. We did not analyze incentive-fee payments because most fee determinations are related to cost and are not complete until the contract is closed out.

We interviewed officials from the Defense Acquisition University, Office of the Director of Defense Procurement and Acquisition Policy, Office of the Deputy Assistant Secretary of the Air Force for Contracting (Policy and Implementation), Office of the Deputy Assistant Secretary of the Army (Policy and Procurement), Office of the Deputy Assistant Secretary of the Navy for Acquisition Management, Office of the Air Force Inspector General, and the U.S. Army Audit Agency, as well as government contracting experts, on recent initiatives and current trends in incentive contracting. We reviewed previous audit and inspection reports from the Air Force, Army, and Navy. We analyzed current award- and incentive-fee guidance provided in the Federal Acquisition Regulation, Defense Federal Acquisition Regulation Supplement, U.S. Army Audit Agency's report on Best Practices for Using Award Fees, Air Force Award Fee Guide, Air Force Materiel Command Award Fee Guide, and other service-specific policies, as well as the National Aeronautics and Space Administration's Award Fee Guide.
We identified and reviewed DOD and military service policy memos and initiatives, including DOD’s Contractor Incentives Integrated Process Team; the Assistant Secretary of the Navy for Research, Development, and Acquisition’s policy memo on Contract Incentives, Profits and Fees; the Deputy Assistant Secretary of the Army for Procurement’s report on Innovation in Contractual Incentives; and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics’ “quick look” at DOD Profit Policy and Defense Industry Profitability. We identified innovative monetary incentives used on contracts within our sample and the mechanisms available to share those across DOD. We performed our review from February 2004 to November 2005 in accordance with generally accepted government auditing standards.

Award fee: An amount of money that is added to a contract, that a contractor may earn in whole or in part during performance, and that is sufficient to provide motivation for excellence in areas such as quality, schedule, technical performance, and cost management.

Base fee: An award-fee contract mechanism that is an amount of money over the estimated costs (typically in the range of 0 to 3 percent of the contract value), which is fixed at the inception of the contract and paid to the contractor for performance in a cost-plus-award-fee contract. A base fee is similar to the fixed fee paid to a contractor under a cost-plus-fixed-fee contract in that it does not vary with performance.

Ceiling price: A prenegotiated maximum price that may be paid to the contractor.

Cost contract: A cost-reimbursement contract in which the contractor receives no fee. A cost contract may be appropriate for research and development work, particularly with nonprofit educational institutions or other nonprofit organizations, and for facilities contracts.

Cost-plus-award-fee contract: A cost-reimbursement contract that provides for a fee consisting of a base amount (which may be zero) fixed at inception of the contract and an award amount, based upon a judgmental evaluation by the government, sufficient to provide motivation for excellence in contract performance.

Cost-plus-incentive-fee contract: A cost-reimbursement contract that provides for an initially negotiated fee to be adjusted by a formula based on the relationship of total allowable costs to total target costs (see the worked example below).

Cost-reimbursable contract: A contract that provides for payment of the contractor’s allowable costs, to the extent prescribed in the contract, not to exceed a ceiling.

Delivery incentive: A monetary incentive used to motivate the contractor to meet a particular product or service delivery objective.

Fixed-price contract: A contract that provides for a price that is either fixed or subject to adjustment, obligating the contractor to complete the work according to the contract’s terms and the government to pay the specified price regardless of the contractor’s cost of performance.

Fixed-price-award-fee contract: A variation of the fixed-price contract in which the contractor is paid the fixed price and may be paid a subjectively determined award fee based on periodic evaluation of the contractor’s performance.

Fixed-price incentive contract: A fixed-price contract that provides for adjusting profit and establishing the final contract price by application of a formula based on the relationship of total final negotiated cost to total target cost.
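To make the formula-based fee adjustment in the two incentive-fee definitions above concrete, here is a minimal worked example; the target cost, target fee, 20 percent contractor share, and fee limits are hypothetical values, not drawn from the contracts GAO reviewed.

```python
# Hypothetical worked example of the share-ratio fee adjustment used in
# cost-plus-incentive-fee (and, analogously, fixed-price incentive)
# contracts. An 80/20 share ratio means the government absorbs 80 percent
# of any cost variance and the contractor's fee moves by the other 20.
def cpif_fee(actual_cost, target_cost=10_000_000, target_fee=700_000,
             contractor_share=0.20, min_fee=300_000, max_fee=1_100_000):
    """Adjust the target fee by the contractor's share of any cost
    underrun (fee rises) or overrun (fee falls), within fee limits."""
    adjusted = target_fee + contractor_share * (target_cost - actual_cost)
    return max(min_fee, min(max_fee, adjusted))

print(cpif_fee(9_000_000))   # $1M underrun -> fee rises to $900,000
print(cpif_fee(12_000_000))  # $2M overrun  -> fee floors at $300,000
```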
Incentive contract: A contract used to motivate a contractor to provide supplies or services at lower costs and, in certain instances, with improved delivery or technical performance, by relating the amount of fee to contractor performance.

Linked incentives: Incentives tied to performance in areas that span multiple contracts and contractors and used to motivate contractors to cooperate.

Negative incentives: A method used by the government to allow for fee reductions if the contractor does not meet certain criteria.

Rollover: The process of moving unearned award fee from one evaluation period to a subsequent period or periods, thus allowing the contractor an additional opportunity to earn that unearned award fee.

Share ratio: A fee-adjustment formula written as a ratio of the cost risk shared between the government and the contractor.

Target cost: The preestablished cost of the contracted goods or services that is a reasonable prediction of final incurred costs.

GAO’s sample of 93 award- and incentive-fee contracts comprises the following contract types: 48 cost-plus-award-fee contracts, 4 fixed-price-award-fee contracts, 12 cost-plus-incentive-fee contracts, 14 fixed-price incentive contracts, 1 combined cost-plus-incentive-fee/fixed-price incentive contract, and 14 contracts that are combinations of award- and incentive-fee contract types. The sample contracts included the following breakdown by military service and DOD agency or organization: 37 Navy contracts, 30 Air Force contracts, 18 Army contracts, 3 Missile Defense Agency contracts, 3 Pentagon Renovation Management Office contracts, 1 Marine Corps contract, and 1 Defense Threat Reduction Agency contract.

Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.

Defense Acquisitions: DOD’s Revised Policy Emphasizes Best Practices, but More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003.

Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.

Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.

Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.

Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.

Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.

Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.

Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999.

Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.

Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.

Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 18, 1998.

Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Major Acquisitions: Significant Changes Underway in DOD’s Earned Value Management Process. GAO/NSIAD-97-108. Washington, D.C.: May 5, 1997.

Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
Collectively, the Department of Defense (DOD) gives its contractors the opportunity to earn billions of dollars through monetary incentives--known as award fees and incentive fees. These fees are intended to motivate excellent contractor performance in areas deemed critical to an acquisition program's success; award fees are appropriate when contracting and program officials cannot devise objective incentive-fee targets related to cost, technical performance, or schedule. GAO was asked to determine whether award and incentive fees have been used effectively as a tool for achieving DOD's desired acquisition outcomes. To do this, GAO selected a probability sample of 93 contracts from the study population of 597 DOD award- and incentive-fee contracts that were active and had at least one contract action valued at $10 million or more from fiscal year 1999 through 2003.

The power of monetary incentives to motivate excellent contractor performance and improve acquisition outcomes is diluted by the way DOD structures and implements incentives. While there were two examples in our sample in which the Missile Defense Agency attempted to link award fees directly to desired acquisition outcomes, such as demonstrating a capability within an established schedule, award fees generally are not linked to acquisition outcomes. As a result, DOD has paid out an estimated $8 billion in award fees to date on the contracts in our study population, regardless of outcomes. When DOD programs did not pay all of the available award fee, DOD gave contractors on an estimated 52 percent of award-fee contracts at least a second opportunity to earn an estimated $669 million in initially unearned or deferred fees. GAO believes these practices, along with paying significant amounts of fee for "acceptable, average, expected, good, or satisfactory" performance, undermine the effectiveness of fees as a motivational tool, marginalize their use in holding contractors accountable for acquisition outcomes, and waste taxpayer funds. Incentive fees provide a clearer link to acquisition outcomes; however, a majority of the 27 contracts with cost incentives that GAO reviewed failed, or are projected to fail, to complete the acquisition at or below the target price. Despite paying billions in fees, DOD has little evidence to support its belief that these fees improve contractor performance and acquisition outcomes. The department has not compiled data, conducted analyses, or developed performance measures to evaluate the effectiveness of award and incentive fees. In addition, when contracts have utilized different fee strategies to focus the contractor's attention on specific acquisition outcomes, contracting officials stated that DOD has few mechanisms to share lessons learned and innovative practices beyond the local level.
Research and innovation play an important role in addressing issues associated with building, maintaining, operating, and using the U.S. highway system. Highway research is an essential national investment because it helps address broad issues related to highway planning, safety, traffic operations, pavement durability, maintenance, and the impact of the highway system on the environment. In addition, research helps transportation professionals to (1) understand how the highway transportation system functions and (2) anticipate future demands. Past research has yielded many advances and innovations that have saved money, improved performance, added capacity, reduced fatalities and injuries, and minimized the impact of the highway system on the environment. For example, in the late 1950s, the American Association of State Highway Officials sponsored research, called the AASHO Road Test, to study how traffic contributes to the deterioration of highway pavements. This research, which contributed to the creation of nationwide design standards for the new Interstate highway system, was designed to complement existing highway research programs and is credited with critical advances related to the structural design and performance of pavements, and to understanding the effects of various climates on pavements. While highway research has resulted in transportation advances, implementing research results can be difficult because of the number of stakeholders involved. The network of highway transportation stakeholders is large and complex, consisting of federal and state transportation agencies, universities, industry associations, and private organizations. In total, more than 35,000 highly decentralized public agencies manage the U.S. highway system, and thousands of private contractors, materials suppliers, and other organizations provide support services. The federal government supports highway research through FHWA, whose mission, in part, is to deploy and implement technology and promote the use of innovative approaches to address highway challenges. For example, to enhance mobility on U.S. highways, FHWA conducts and funds research on current and emerging nationwide transportation issues to, among other matters, enhance the transportation system’s overall performance; reduce traffic congestion; improve safety; and maintain infrastructure integrity. However, according to a report issued by TRB in 2001, the majority of FHWA’s highway research focuses on short-term, incremental transportation-related improvements. Although transportation agencies are generally responsive to implementing small innovations with the promise of short-term benefits, according to this report, it takes considerably longer to implement changes that realize large, long-term benefits. Although the establishment of a national strategic highway research program, like SHRP 2, has been rare, it is not unprecedented. Specifically, in 1987, Congress established the first Strategic Highway Research Program (SHRP) to achieve large-scale, accelerated, and innovative highway research on topics not adequately addressed by prior or existing research programs. SHRP focused on a few critical infrastructure and operational problems faced by state transportation agencies, such as the quality of asphalt used in highway construction, the integrity and longevity of road pavements, and the deterioration of concrete bridge decks and other components. 
The program, concluded in 1991, was considered ambitious because of its limited duration and its concentration on previously neglected research areas related to asphalt pavements, structural concrete, and winter maintenance. Two of the better known and more widely implemented results of SHRP are (1) the Superpave materials selection and design system, which resulted in more durable asphalt pavements, and (2) a collection of methods and technologies that significantly improved approaches for controlling snow and ice on roadways. The success of SHRP prompted Congress and others to take several key steps that, ultimately, led to the establishment of SHRP 2. Table 1 provides a timeline of key events related to SHRP 2. Special Report 260 recommended that the program address the following four research goals:

safety—to prevent or reduce the severity of highway crashes through more accurate knowledge of driver behavior and other crash factors;

renewal—to develop a consistent and systematic approach to performing highway rehabilitation that is rapid, causes minimum disruption, and produces long-lived (durable) transportation facilities, such as roadways and bridges;

reliability—to provide highway users with improved travel time reliability (more consistent travel times between locations) by preventing and reducing the impact of relatively unpredictable events, such as traffic accidents, work zones, special events, and weather; and

capacity—to develop approaches and tools for systematically integrating environmental, economic, and community requirements into the decision-making processes for planning and designing projects to increase highway capacity.

While Special Report 260 provided strategic direction and a general framework for developing SHRP 2, additional planning had to be conducted before the research program could begin. Therefore, in January 2002, TRB assembled five panels—an oversight panel and four technical panels of experts—to provide leadership and technical guidance for the development of detailed research plans for each of the four research areas. The panels consisted of a wide range of highway transportation experts, including representatives from state departments of transportation, FHWA, the National Highway Traffic Safety Administration, universities, industry associations, and private companies. The planning effort, completed in September 2003, resulted in detailed research plans for each of the four research areas, which identified, among other matters, the objectives, scope, and anticipated projects and budgets for each of the four areas. Each technical panel of experts prioritized the research projects identified in its area after considering, among other matters, the (1) probability of each project’s success and (2) likelihood that each project would improve transportation practices. In total, the four plans identified 106 projects—15 for safety, 38 for renewal, 33 for reliability, and 20 for capacity—designed to achieve the overall research goals specified in Special Report 260. SAFETEA-LU, enacted in 2005, established several requirements for carrying out SHRP 2. For example, Congress required that the program (1) address the four research areas described in Special Report 260 as well as the detailed research plans completed in 2003 and (2) involve state transportation officials and other stakeholders in the governance of the research program.
SHRP 2 began in December 2005, when FHWA, AASHTO, and the National Research Council formed a partnership through a memorandum of understanding to carry out the program. In doing so, these entities specified that TRB should manage the program’s daily operations and budget and establish a structure for carrying out the program. Similar to the 2003 detailed planning effort, TRB established the following organizational structure, composed of experts at all levels, to carry out SHRP 2:

an oversight committee to approve annual work plans, budgets, and contractor awards, among other activities;

a technical coordinating committee (TCC) for each of the four research areas to develop annual research plans and monitor the progress of contracts, among other matters; and

numerous expert task groups, as needed, to provide technical input to each of the four research areas, develop the requests for project proposals, recommend contractor selections, and monitor research projects.

According to SHRP 2 staff, the extensive involvement of experts to define, prioritize, and oversee research in each of the four areas was intended to maximize the usefulness of the research results. Special Report 260, which was requested by Congress, recommended that SHRP 2 receive $450 million over 6 fiscal years, with 9 years to complete the research. In 2005, SAFETEA-LU authorized $205 million for SHRP 2 over 4 fiscal years (fiscal years 2006 through 2009). SHRP 2 was officially inaugurated in March 2006, when FHWA, through a cooperative agreement with the National Research Council, provided about $36 million to TRB to initiate the program, with 7 years to complete the research (i.e., by 2013). However, the initial amount provided for fiscal year 2006 constituted less than one-half of the annual recommended amount in Special Report 260 ($75 million) and about $15 million less than the annual amount authorized in SAFETEA-LU ($51.25 million). SAFETEA-LU contained other funding limitations, which ultimately reduced SHRP 2’s funding below its authorized amount. The 2008 SAFETEA-LU Technical Corrections Act provided additional obligation authority for the program, which resulted in about $20 million in additional funds. TRB currently expects about $171 million in total SHRP 2 funding. Table 2 provides a comparison of the (1) funding and duration for SHRP 2 as recommended in Special Report 260, (2) program funding authorized in SAFETEA-LU, and (3) amount actually funded. The SHRP 2 oversight committee funded research projects for the program based on the recommendations of its TCCs, which considered the input of other experts and factors such as available program funds and time frames. These experts included highway transportation personnel from federal, state, and local government; private sector firms; academia; AASHTO liaisons; and other stakeholder organizations within the U.S. and international highway community. While the 2003 detailed research plans constituted the starting point for decisions about project selections, the 106 projects identified in these plans had to be significantly modified on two occasions because of program funding and time frames. The first major modification occurred in 2006, when, as discussed, considerably less funding and time were provided for the program’s completion than had been assumed by the parties involved in the development of the detailed research plans in 2003.
The second major modification occurred in 2008, when about $20 million in additional program funding became available because of the passage of the SAFETEA-LU Technical Corrections Act. On both occasions, the SHRP 2 oversight committee relied on the input of experts to select projects for funding. Given less funding and time than had been assumed for completing the program, in 2006, the oversight committee requested that the parties involved in the 2003 planning effort reevaluate these plans for the purpose of rescoping the program and prioritizing projects for funding. In doing so, these parties assigned lower priority to projects that (1) were duplicative or similar to other research efforts, (2) could not be accomplished within SHRP 2’s budget or time frame, or (3) could be deferred. In addition, they rescoped other projects under consideration for funding. After the four TCCs were formed later in 2006, the oversight committee requested them to review the revised research plans. As a result of this effort, the TCCs developed recommendations for project funding in each of the four research areas, which were approved by the oversight committee in November 2006. When more funds became available, in 2008, the oversight committee asked the TCCs to prepare prioritized lists of additional projects for funding. In doing so, the oversight committee requested the TCCs to assign higher funding priority to (1) ongoing projects that addressed gaps in existing research, (2) projects that were demonstrating the most promising results, and (3) potential projects that advanced SHRP 2’s strategic goals. This effort resulted in recommendations for several new projects and additional funding for some existing projects, which were approved by the oversight committee in November 2008. As a result of the reprioritization process, 56 of the 106 projects identified in 2003 either evolved into, or were partially merged with, one or more of the currently funded SHRP 2 projects, while 50 of the projects were eliminated entirely. Table 3 provides information on the number of projects identified in the 2003 detailed research plans (1) for each research area; (2) that either evolved into, or were partially merged with, one or more SHRP 2 funded projects; and (3) that were eliminated entirely from funding. Appendixes II through V provide more detailed information, by research area, on how specific projects identified in 2003 were reprioritized for funding. As of December 31, 2009, the SHRP 2 oversight committee had allocated approximately $123 million (about 72 percent) of the roughly $171 million available to fund projects related to highway safety, renewal, reliability, and capacity. Of the 85 projects selected for funding, 11 were completed, 52 were ongoing, and 22 were expected to begin in the future. SHRP 2 staff expect all of the projects will be completed by 2013. The outcomes of the projects are expected to vary, ranging from the (1) production of data sets and related analyses to (2) development of improved technologies, procedures, guidelines, and techniques for advancing the goals of each of the four research areas. The oversight committee allocated the remaining $48 million to fund administrative expenses, publication of research reports, and contingencies that may arise. Figure 1 illustrates how SHRP 2 funding was allocated as of December 31, 2009.
Special Report 260 recommended different percentages of funding for each of the four research areas, ranging from 15 percent to 40 percent of available funding. As shown in table 4, the oversight committee closely followed the relative funding distributions recommended in this report. Table 4 compares the recommended funding levels and percentages in Special Report 260 with the actual funding levels and percentages. As of December 31, 2009, the SHRP 2 oversight committee had allocated about $49 million to fund 5 completed, 7 ongoing, and 4 future safety projects, for a total of 16 projects. The goal of the safety research is “to prevent or reduce the severity of highway crashes through more accurate knowledge of crash factors and of the cost-effectiveness of selected countermeasures in addressing these factors.” The SHRP 2 safety TCC expects that the collection of safety research projects will (1) provide objective and reliable information on driver performance and behavior and (2) help assess the risks associated with related crash factors. The 16 safety projects are part of two overall studies that are expected to produce a variety of data on driver behavior: the in-vehicle driving study and the site-based risk study. Most of these projects (15 of 16) and funding ($48 million of $49 million allocated) relate to the in-vehicle driving study, also referred to as the SHRP 2 naturalistic driving study. This study involves the use of cameras, radar, and other sensors installed in the vehicles of about 3,000 volunteer drivers in six locations for 1 to 2 years. Collectively, the devices are expected to record (1) real-time video from multiple angles of each volunteer while driving (e.g., the driver’s face and interior views of the vehicle) and the driving environment (e.g., road characteristics and traffic) and (2) information about the vehicle (e.g., the vehicle’s speed and information on whether the seat belt is being used). In addition, researchers will record information on roadway conditions, as well as demographic data and data on other factors that may affect the drivers’ behavior. Overall, SHRP 2 staff expect this study will result in objective information on driver behavior that, for the first time, will allow researchers to determine the relative risk associated with various factors and circumstances related to the analysis of accidents, near collisions, and uneventful driving experiences. The oversight committee allocated the remaining $1 million for a project related to a site-based risk study. This project includes (1) a study to develop a portable, semi-automated video system and (2) a pilot field study, using multiple overhead video cameras, to record the relative position of traffic moving through selected locations to advance the understanding of driver behavior. While the intent of the naturalistic driving study is to passively observe individual drivers, the site-based study will allow researchers to observe multiple drivers at selected locations. SHRP 2 staff expect the project will allow researchers to observe how drivers resolve traffic conflicts; react to traffic controls, such as road signs and stoplights; and adjust to changing environmental conditions, such as light, weather, and pavement quality. Figure 2 provides the projected budget and timeline, by research category, for the SHRP 2 safety projects. 
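To suggest what the "objective information" produced by such instrumentation might look like, the sketch below outlines one hypothetical time-stamped record; the field names are illustrative only and are not the SHRP 2 data schema.

```python
# Hypothetical sketch of one time-stamped record in a naturalistic
# driving data set; field names are illustrative, not the SHRP 2 schema.
from dataclasses import dataclass

@dataclass
class DrivingRecord:
    driver_id: str          # anonymized volunteer identifier
    timestamp: float        # seconds since trip start
    speed_mph: float        # vehicle speed from the data acquisition unit
    seat_belt_on: bool      # seat belt status
    forward_range_m: float  # radar range to the lead vehicle
    video_frame: str        # reference to synchronized multi-angle video
    roadway_type: str       # e.g., "freeway" or "arterial"

record = DrivingRecord("D0421", 12.5, 54.0, True, 31.2,
                       "frame_000375", "freeway")
print(record)
```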
According to SHRP 2 staff, the naturalistic driving study is expected to produce the largest and most comprehensive database on driver behavior available to date because, unlike most previous studies, which generally relied on simulations and subjective post-accident observations, the naturalistic driving study is expected to provide objective information on driver behavior in real-world circumstances. These data are expected to help transportation officials (1) better understand risk factors, such as driver distractions, associated with different crash factors and, ultimately, (2) develop practical measures to effectively reduce collisions or otherwise improve highway safety. SHRP 2 staff stated that while some data analysis is planned (about $5 million), significantly more analytic work will be needed after the conclusion of SHRP 2 to fully realize the benefits of these data. According to these staff, future analyses of these data likely will lead to significant improvements in highway safety, particularly related to accidents that occur when vehicles run off the road—a major cause of highway fatalities. In addition, the safety TCC expects the results of the site-based project likely will lead to similar future studies that may provide more comprehensive information on, for example, accidents resulting from collisions at intersections, where many accidents occur. See appendix II for additional information on how these projects were reprioritized for funding and selected information about the currently funded safety projects.

As of December 31, 2009, the SHRP 2 oversight committee had allocated about $32 million to fund 3 completed, 24 ongoing, and 1 future project, for a total of 28 renewal projects. The goal of the renewal research is “to develop a consistent, systematic approach to performing highway renewal that is (1) rapid, (2) causes minimum disruption, and (3) produces long-lived facilities.” The SHRP 2 renewal TCC expects that the collection of renewal projects will promote a systematic approach to highway rehabilitation and reconstruction (i.e., highway renewal) and result in quicker, more efficient, and improved repairs because the projects are designed to, among other matters, minimize travel disruptions and produce long-lived (i.e., more durable) facilities. Nineteen of the 28 funded projects focus on developing rapid approaches to highway renewal and are expected to reduce the time involved in preparing and executing construction projects. In total, the oversight committee allocated about $21.5 million (about 67 percent of total renewal funding) for these 19 projects. In addition, the oversight committee allocated about $2.5 million to fund 4 projects to minimize disruptions to travelers, communities, or utilities while renewal construction is under way, and about $8 million to fund 5 projects for producing more durable facilities needed to minimize the frequency of highway-related repairs. Figure 3 provides the projected budget and timeline, by research category, for the SHRP 2 renewal projects. The renewal TCC expects research in this area will promote rapid and durable highway rehabilitation and reconstruction and result in the production and implementation of various tools (i.e., hardware or technology) and techniques (i.e., strategies, procedures, recommendations, guidelines, or specifications).
Overall, the renewal TCC expects 19 of the 28 projects will primarily develop tools, while the remaining 9 will primarily develop techniques for promoting rapid highway renewal. Specifically:

To advance rapid approaches to highway renewal, 15 projects are expected to primarily develop tools, while 4 projects are expected to primarily develop techniques. For example, regarding tools, some of the 15 projects are expected to produce technologies for efficiently locating and characterizing underground utilities. This is necessary because studies show that locating utilities, such as water mains and electrical and gas lines, is the most significant source of delay in highway renewal work. Regarding techniques, one of the 4 projects is expected to produce best practices and recommendations for addressing worker fatigue, which, according to SHRP 2 staff, can (1) negatively affect performance and the quality of work performed and (2) increase the potential for time-consuming and costly mistakes, accidents, and injuries among workers who often are required to work for extended periods of time.

To minimize disruptions during renewal work, each of the 4 funded projects is expected to produce techniques for foreseeing and avoiding or mitigating travel disruptions. For example, 1 project is expected to establish cooperative strategies that help transportation agencies and utility companies effectively manage utilities throughout the renewal efforts, thereby minimizing disruptions to highway users and utility users in surrounding communities.

To produce durable highway facilities, 4 of the 5 projects are expected to primarily develop tools, such as technologies for designing and constructing bridges that increase their service life, while the other is expected to primarily develop techniques for preserving pavements to promote a longer service life.

See appendix III for additional information on how these projects were reprioritized for funding and selected information about the currently funded renewal projects.

As of December 31, 2009, the SHRP 2 oversight committee had allocated about $20 million to fund 1 completed, 10 ongoing, and 10 future projects, for a total of 21 reliability research projects. The goal of the reliability research is “to provide highway users with reliable travel times by preventing and reducing the impact of nonrecurring incidents.” Thus, projects in the reliability area are designed to address highway congestion caused by nonrecurring (i.e., relatively unpredictable) events—such as traffic accidents, work zones, special events, and weather. The SHRP 2 reliability TCC expects these research results will help transportation practitioners provide highway users with reliable travel times by, for example, helping to ensure that an individual’s commute to work is consistently the same and minimally affected by congestion caused by relatively unpredictable events. The reliability TCC divided research in this area into four principal categories addressing different aspects of travel time reliability.
The oversight committee allocated most of the funds, $11.6 million (about 57 percent of total reliability funding), to 14 projects in two of the four reliability research categories—“data and analysis” and “institutional and human components.” Collectively, the 14 projects are expected to (1) develop data, analytical tools, and procedures for monitoring travel time reliability; (2) develop performance measures and models to evaluate the effectiveness of actions to control and mitigate the impact of relatively unpredictable events that cause congestion; and (3) identify how the institutional behaviors of transportation and public safety agencies and the human behaviors of travelers contribute to unpredictable events that affect congestion. The oversight committee allocated the remaining funds—about $8.6 million (approximately 43 percent of total reliability funding)—to the two remaining research categories and a cross-cutting framework project. Specifically, the committee allocated about $5.3 million to 4 projects for “incorporating reliability into planning, programming, and design” of highways. Further, the oversight committee allocated about $1.5 million to 2 projects to encourage the development of innovative ideas related to “future needs and opportunities to improve travel time reliability.” Finally, in November 2008, the oversight committee allocated about $1.8 million for a project to produce a framework for integrating the results of the reliability research, potentially providing transportation decision makers and practitioners with a guide to (1) understand travel time reliability and (2) incorporate reliability strategies into their project planning and design. Figure 4 provides the projected budget and timeline, by research category, for the SHRP 2 reliability projects. Overall, the reliability TCC expects this research will develop and promote programs and strategies that monitor and improve travel time reliability. For example, one project focuses on developing guidance for establishing programs to monitor travel time reliability. Additionally, some projects are expected to use data collected from the SHRP 2 safety projects to understand how driver behavior is affected by relatively unpredictable events that cause congestion. Other projects are expected to develop measures for understanding the effectiveness of strategies used by transportation agencies, while some focus more on the managerial aspects of agencies, such as the identification of the optimal organizational structure to monitor travel time reliability. Moreover, the oversight committee funded 2 projects to incorporate some of the reliability research results into two widely used reference manuals for highway designers—TRB’s Highway Capacity Manual and AASHTO’s A Policy on Geometric Design of Highways and Streets. According to SHRP 2 staff, the inclusion of some of the research results in these reference manuals, such as research on cost-effective highway design features that can reduce the effects of relatively unpredictable events, represents a significant step toward the systematic implementation of the reliability research findings. SHRP 2 staff noted that the incorporation of travel time reliability into highway design, construction, and management is a relatively new concept for the transportation community. The staff said that they are hopeful that research in this area will result in innovative methods for reducing congestion.
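Travel time reliability is often summarized with simple distribution-based measures. The sketch below is our own illustration, not a SHRP 2 product; it computes the 95th-percentile travel time and two commonly used derived measures (the planning time index and the buffer index) from hypothetical corridor travel times.

```python
# Illustrative computation of common travel time reliability measures;
# the corridor travel times (in minutes) are hypothetical.
import statistics

travel_times = [22, 24, 23, 25, 31, 22, 28, 45, 23, 26, 24, 38]
free_flow = 20  # minutes of uncongested travel time

travel_times.sort()
# 95th-percentile travel time (simple nearest-rank method).
p95 = travel_times[int(0.95 * (len(travel_times) - 1))]

mean = statistics.mean(travel_times)
planning_time_index = p95 / free_flow  # near-worst-case vs. free flow
buffer_index = (p95 - mean) / mean     # extra time travelers must budget

print(f"95th percentile: {p95} min")
print(f"Planning time index: {planning_time_index:.2f}")
print(f"Buffer index: {buffer_index:.0%}")
```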
See appendix IV for additional information on how these projects were reprioritized for funding and selected information about the currently funded reliability projects.

As of December 31, 2009, the SHRP 2 oversight committee had allocated about $21 million to fund 2 completed, 11 ongoing, and 7 future projects, for a total of 20 capacity research projects. The goal of the capacity research is “to develop approaches and tools for systematically integrating environmental, economic, and community requirements into the analysis, planning, and design of new highway capacity.” The SHRP 2 capacity TCC expects this research will promote a holistic approach to addressing highway capacity issues. The capacity TCC divided the capacity projects into two categories: the (1) development of a “collaborative decision-making framework” to establish a decision-making process that accounts for the environmental, economic, and social impacts of highway capacity efforts and (2) “improvement in methods” to address common issues that arise during the design, planning, and execution of capacity-enhancing efforts. The oversight committee allocated most of the funds, $13.9 million (about 66 percent of total capacity funding), to 13 projects in the first category and $7.2 million to 7 projects in the second category. Figure 5 provides the projected budget and timeline, by research category, for the SHRP 2 capacity projects. The capacity TCC expects the 13 projects in the first category to produce a framework for improving collaboration among transportation agencies, community and government stakeholders, and the general public, which could result in more comprehensive, efficient, and informed decision making. Specifically, the collaborative decision-making framework is expected to (1) provide guidance to agencies at key decision points and (2) help transportation stakeholders consider a variety of issues throughout the decision-making process. The following issues are included in the framework:

community issues (e.g., comparative assessments of how alternative capacity efforts affect communities);

environmental issues (e.g., analyses of how capacity-enhancing projects affect greenhouse gas emissions and the effective protection of wetlands);

economic issues (e.g., assessments of matters such as the expected increase in employment and tax revenue that highway capacity projects bring to the local economy); and

travel time reliability issues (e.g., the effective loss of capacity because of relatively unpredictable events that cause congestion).

In addition, the capacity TCC expects that the outcomes of the remaining 7 projects will provide better methods for improving capacity efforts, such as the models and analyses needed to assess the consequences of capacity-related enhancements. For example, one project is expected to establish partnerships with local transportation agencies and develop and operationalize an innovative travel demand model for analyzing the effects of capacity management strategies. The capacity TCC expects that this project will help transportation agencies better understand how their management strategies affect highway capacity, such as how their decisions about speed limits or the use of reversible travel lanes affect congestion. Another project in this category is expected to help transportation practitioners understand the impact of highway tolls and other pricing strategies on highway congestion.
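As one illustration of how a travel demand or congestion model can link capacity decisions to travel time, the sketch below applies the standard Bureau of Public Roads (BPR) volume-delay function; this is not the SHRP 2 model, and the corridor volumes and capacities are hypothetical.

```python
# Illustrative use of the standard BPR volume-delay function to show how
# added capacity affects travel time; not a SHRP 2 model, and the
# corridor numbers are hypothetical.
def bpr_travel_time(volume, capacity, free_flow_min=10.0,
                    alpha=0.15, beta=4.0):
    """Bureau of Public Roads formula: congested travel time grows with
    the volume-to-capacity ratio."""
    return free_flow_min * (1 + alpha * (volume / capacity) ** beta)

volume = 3_600  # vehicles per hour on a hypothetical corridor
for capacity in (3_000, 4_000):  # e.g., before and after adding a lane
    t = bpr_travel_time(volume, capacity)
    print(f"capacity {capacity} veh/h -> {t:.1f} min travel time")
```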
See appendix V for additional information on how these projects were reprioritized for funding and selected information about the currently funded capacity projects. As a result of SHRP 2’s reprioritization process, 50 of the 106 projects identified in 2003 were eliminated entirely, and many of the remaining 56 projects that either evolved into, or were merged with, one or more SHRP 2 projects had one or more aspects of their research eliminated from funding. As discussed, the reprioritization process was needed to adjust to funding and time constraints that had not been anticipated when the program’s detailed project plans were developed in 2003. According to SHRP 2 staff, in the end, the oversight committee typically funded applied research to develop products critical to transportation agencies and other stakeholders—rather than many of the implementation-related activities, such as testing the research results in real-world settings. Thus, the eliminated research typically was for, among other activities, (1) translating research results into products (i.e., research applications), (2) training and dissemination of the research findings (i.e., technology transfer), and (3) providing technical support for implementing research products and technologies and for demonstrating new technologies (i.e., research implementation). According to DOT and AASHTO officials and SHRP 2 staff, early results of the SHRP 2 research have been promising but likely would be enhanced with additional funding to restore some of the eliminated research. DOT officials and SHRP 2 staff explained that initial research results often require additional research and development in real-world trials before a usable product is ready for implementation. Thus, in their collective view, to fully achieve the original expectations for SHRP 2, it will be important to eventually fund some of the research that had to be eliminated because of funding and time constraints. SHRP 2 staff further explained that the sooner new research findings are implemented, the earlier the performance and economic benefits of the research will begin to accrue. Similarly, in June 2008, the Chief Deputy Director of the California Department of Transportation (and AASHTO representative) testified before the House Subcommittee on Technology and Innovation that the ultimate success of SHRP 2 research will depend on widespread deployment. According to SHRP 2 staff, they are hopeful that other researchers will develop projects for implementing some of SHRP 2’s research after the program’s completion. The following sections provide information on some of the eliminated research. Of the 15 safety projects identified in 2003, 6 projects were eliminated entirely, including 2 of the 3 projects related to the site-based risk study. As discussed, this study was expected to use multiple overhead video cameras to record the relative position and motion of each vehicle passing through selected locations under different traffic conditions or with different signal phases (e.g., left turns and yellow lights) to evaluate the effect on the traffic. To complete the study, the SHRP 2 safety TCC originally anticipated that 3 projects would be funded to (1) develop technology and methods for data collection and conduct a pilot test, (2) implement the study in field tests, and (3) analyze the resulting data and assess the implications of these data.
However, because of funding and time constraints, the oversight committee funded only 1 of the 3 projects and, thus, SHRP 2 will not, according to the safety TCC, result in a comprehensive assessment of the risk of collision associated with driver behavior. In addition, the 2 projects identified in 2003 for evaluating countermeasures were not funded. Overall, this research was intended to (1) address the effectiveness of existing countermeasures through rigorous, retrospective studies of accidents under different conditions and (2) support the development of new countermeasures. The first of the 2 eliminated projects was expected to identify and prioritize countermeasure issues for subsequent evaluations, while the second project would have evaluated the identified countermeasure issues to determine the associated benefits and costs based on retrospective crash data. A key requirement for both of these projects was the use of expected data from the site-based risk and naturalistic driving studies. However, because designing field studies requires substantial resources and time, neither of these projects was funded. According to DOT officials and SHRP 2 staff, the 2 site-based and 2 countermeasures evaluation projects were dropped, in part because these officials expected more promising outcomes from the naturalistic driving study. AASHTO representatives agreed and told us that it would not have been helpful to reduce funding for the naturalistic driving study to, instead, fund other projects because a larger, more comprehensive data set on driver behavior is needed for developing new and improved countermeasures. Thus, given limited funding, the SHRP 2 safety TCC decided to allocate most of the safety funding toward the development of this data set. Finally, while the oversight committee funded all but 2 of the naturalistic driving study projects identified in 2003, that research also was affected by funding realities. Specifically, the study originally was intended to collect 3 years of data from about 4,000 volunteer drivers. However, 1 year and about 1,000 volunteers had to be eliminated from the planned study because of the shorter time frame for carrying out SHRP 2. According to SHRP 2 staff, an additional year of research would have yielded about 50 percent more data at little additional cost, since the equipment for the vehicles already would have been purchased. See appendix II for additional information on how the safety projects identified in 2003 were reprioritized for funding and the currently funded safety projects. Of the 38 renewal projects identified in 2003, 17 projects were eliminated entirely. According to DOT officials and SHRP 2 staff, the renewal area probably was most affected by the reprioritization process because many of the projects identified in 2003 were daisy-chained together and thus dependent on the completion or initiation of other related projects. Many of the 17 projects were eliminated for this reason, while others were eliminated because they were similar to other recent, current, or planned research. Additionally, given less funding and time than originally anticipated, the SHRP 2 renewal TCC decided that many of the 17 projects, including several projects for developing technologies and techniques to (1) continuously monitor the health and performance of bridges and (2) improve their maintenance with minimum disruptions to users, should be eliminated from funding consideration because they were of lower priority than other research projects.
Further, while not entirely eliminated, some of the renewal projects selected for funding were reduced in scope, and implementation activities related to the research were not funded. For example, all of the renewal projects identified in 2003 that focused on innovative methods to locate and characterize underground utilities were scaled down because they depended on the outcomes of projects that had not been funded. In other cases, laboratory evaluations, field case studies, and demonstrations of proposed systems for improving pavements and bridges were eliminated because related pilot projects for implementing the research were not funded. See appendix III for additional information on how the renewal projects identified in 2003 were reprioritized for funding and the currently funded renewal projects. Of the 33 reliability projects identified in 2003, 20 projects were eliminated entirely. As with the other areas, SHRP 2 staff told us that the reliability projects identified in 2003 needed to be reevaluated to fund as many high-priority projects as possible given available funding and time frames. According to the staff, reprioritization was most challenging in this area and, consequently, required the assistance of a facilitator in the decision-making process. Because research for reducing the impact of relatively unpredictable causes of congestion and improving travel time reliability is new, the collection of SHRP 2 projects identified in 2003 was expected to provide a comprehensive approach to collecting real-time information for use in assessing travel time reliability. However, given less funding and time than had been expected, the SHRP 2 reliability TCC decided to focus on high-priority projects needed to collect and analyze fundamental data for improving travel times for travelers. In addition, some of the 20 eliminated projects were designed to improve agencies’ response to relatively unpredictable events through the use of new technologies to (1) monitor traffic and roadway conditions, (2) instantaneously communicate information about incidents and work zones to highway users, and (3) provide information about transporting hazardous materials to better prepare agencies that respond to accidents. Furthermore, several of the eliminated projects were designed to study the effect of various weather and pavement conditions on travel time reliability. According to SHRP 2 staff, these and other reliability projects identified in 2003 had to be eliminated because of funding and time constraints for conducting follow-on projects needed to apply the research results and transfer the technology developed to highway practitioners and other users. Thus, according to the staff, field tests to demonstrate the usefulness of the research to practitioners, provide additional insights into how the results can be implemented by agencies and other users, and create more usable future products will be needed following the completion of SHRP 2. See appendix IV for additional information on how the reliability projects identified in 2003 were reprioritized for funding and the currently funded reliability projects. Of the 20 capacity projects identified in 2003, 7 projects were eliminated entirely. According to FHWA officials and SHRP 2 staff, the philosophy underlying this research area had to be completely reevaluated, largely because the research planned in 2003 envisioned a much larger and broader scale of research.
Specifically, many of the 2003 projects related to the development of a “virtual workspace” for highway planning and development intended to visually illustrate the effects of alternative planning approaches. According to SHRP 2 staff, the virtual workspace, once developed, would have facilitated simultaneous data transfer between highway practitioners at each step of the highway planning process. However, the SHRP 2 capacity TCC scaled down or eliminated most of the projects for the advanced data gathering, access, and computerized display elements that would be required for the virtual workspace and, instead, decided to focus on research needed to produce the collaborative decision-making framework for highway planning and development. SHRP 2 staff told us that most of the scaled-down or eliminated projects were for research application and implementation, such as technology transfer. Specifically, regarding the application of research results, many of the eliminated projects were expected to (1) enhance public and stakeholder support for capacity-enhancing projects and (2) develop partnerships to provide training and implement the research. Collectively, these projects were intended to result in the systematic integration of environmental, economic, and community requirements into the analysis, planning, and design for enhancing highway capacity. In addition, while the currently funded capacity research projects are expected to result in the development of (1) a Web-based tool for using the collaborative decision-making framework and (2) manuals and tools to help transportation agencies make more comprehensive and informed decisions, according to SHRP 2 staff, additional implementation work, including technology transfer, will be needed to help ensure that the research results are widely implemented. See appendix V for additional information on how the capacity projects identified in 2003 were reprioritized for funding and the currently funded capacity projects.

We provided a draft of this report to DOT and TRB for review and comment. DOT and TRB provided technical clarifications, which we incorporated, as appropriate. We are sending copies of this report to other interested congressional committees and members, DOT, TRB, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

To address our three reporting objectives, we reviewed the legislative requirements, goals, and objectives for the Second Strategic Highway Research Program (SHRP 2), including the Transportation Equity Act for the 21st Century; the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU); and the SAFETEA-LU Technical Corrections Act of 2008. We also reviewed the Department of Transportation’s strategic plan for fiscal years 2006-2011, and the Federal Highway Administration’s October 2008 Strategic Plan and its Corporate Master Plan for Research and Deployment of Technology and Innovation. In addition, we reviewed and analyzed literature, studies, and reports related to the research program.
Our review included reports by GAO and the Congressional Research Service that provided background information on the first Strategic Highway Research Program, SHRP 2, and the Federal Highway Administration’s research and technology program, including its federal-aid highway program. We also reviewed the Transportation Research Board’s (TRB) Special Report 260: Strategic Highway Research: Saving Lives, Reducing Congestion, Improving Quality of Life; Special Report 261: The Federal Role in Highway Research and Technology; Special Report 296: Implementing the Results of the Second Strategic Highway Research Program: Saving Lives, Reducing Congestion, Improving Quality of Life; and the National Cooperative Highway Research Program’s Report 510: Interim Planning for a Future Strategic Highway Research Program. Finally, we reviewed quarterly, semiannual, and annual SHRP 2 reports; annual research plans for the four SHRP 2 research areas; and report summaries of the funded SHRP 2 projects. To address our first two objectives (i.e., determining the process for selecting research projects for funding and the status of those projects), we reviewed the statutory requirements for SHRP 2 and reviewed available agency and program documentation. We also determined how the program is monitored and the program’s reporting requirements. In addition, we obtained and analyzed agency and program documentation on projects that were either funded or identified for potential funding in the 2003 detailed research plans, as well as the revised plans for reprioritizing projects for funding. We also reviewed this documentation to identify how TRB plans to evaluate the research and how the outcomes of the research are expected to address highway challenges. To address our third objective (i.e., determining what, if any, planned research was eliminated from the program), we compared program documentation related to the currently funded projects with the four research areas identified in Special Report 260 and the projects identified in the 2003 research plans. We also determined how actual funding for the four research areas compared with the funding levels recommended in Special Report 260. Because of time constraints, we did not assess the appropriateness of funding decisions or projects selected for SHRP 2 funding. To address all three objectives, we also interviewed agency officials from the Department of Transportation (DOT), the Federal Highway Administration, and the National Highway Traffic Safety Administration, as well as representatives of the National Research Council and TRB, SHRP 2 staff, and representatives of the American Association of State Highway and Transportation Officials (AASHTO). We conducted this performance audit from June 2009 through February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The SHRP 2 oversight committee funded many of the safety projects identified in the 2003 detailed research plans based on the recommendations of the SHRP 2 safety technical coordinating committee. As a result, 9 of the 15 safety projects identified in 2003 either evolved into, or were partially merged with, the currently funded safety projects, and 6 were eliminated.
Table 6 provides information on the safety projects identified in 2003 and how they were reprioritized for funding. Table 7 provides information on the 16 currently funded SHRP 2 safety projects.

The SHRP 2 oversight committee funded many of the renewal projects identified in the 2003 detailed research plans based on the recommendations of the SHRP 2 renewal technical coordinating committee. As a result, 21 of the 38 renewal projects identified in 2003 either evolved or were partially merged into the currently funded renewal projects and 17 were eliminated. Table 8 provides information on the renewal projects identified in 2003 and how they were reprioritized for funding. Table 9 provides information on the 28 currently funded SHRP 2 renewal projects.

The SHRP 2 oversight committee funded many of the reliability projects identified in the 2003 detailed research plans based on the recommendations of the SHRP 2 reliability technical coordinating committee. As a result, 13 of the 33 reliability projects identified in 2003 either evolved or were partially merged into the currently funded reliability projects and 20 were eliminated. Table 10 provides information on the reliability projects identified in 2003 and how they were reprioritized for funding. In addition, 4 funded projects, which had not been identified in 2003, were developed to fill research gaps or provide more affordable research alternatives. Table 11 provides information on the 21 currently funded SHRP 2 reliability projects.

The SHRP 2 oversight committee funded many of the capacity projects identified in the 2003 detailed research plans based on the recommendations of the SHRP 2 capacity technical coordinating committee. As a result, 13 of the 20 capacity projects identified in 2003 either evolved or were partially merged into the currently funded capacity projects and 7 were eliminated. Table 12 provides information on the capacity projects identified in 2003 and how they were reprioritized for funding. In addition, 2 funded projects, which had not been identified in 2003, were developed to fill research gaps or provide more affordable research alternatives. Table 13 provides information on the 20 currently funded SHRP 2 capacity projects.

In addition to the contact named above, Kathleen Turner, Assistant Director; Vashun Cole; Silvia Arbelaez-Ellis; Dana Hopings; and Amy Rosewarne made important contributions to this report.
The 2005 Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users authorized the Department of Transportation to establish a highway research program to address future challenges facing the U.S. highway system. In 2006, the Second Strategic Highway Research Program was established to conduct research in four areas: safety, renewal, reliability, and capacity. The Transportation Research Board manages this program in cooperation with the Federal Highway Administration and others. The legislation also required GAO to review the program no later than 3 years after the first research contracts were awarded. This report provides information about the process for selecting the program's projects for funding, the projects' status, and what, if any, research was eliminated because of funding and time constraints. To address these objectives, GAO reviewed the program's authorizing legislation, analyzed studies and reports related to the program and its projects, and interviewed officials from relevant transportation agencies and organizations. GAO is not making recommendations in this report. The Department of Transportation and the Transportation Research Board reviewed a draft of this report and provided technical clarifications, which GAO incorporated as appropriate.

The program's oversight committee funded research projects based on the recommendations of its four technical coordinating committees of experts (one for each of the four research areas), which considered the input of other experts as well as factors such as available program funds and time frames. Prior to the program's establishment, panels of experts developed detailed research plans in 2003 that identified 106 possible research projects. However, these research plans were significantly modified on two occasions: in 2006, when less funding and time were provided for completing the program than had been assumed in 2003, and in 2008, when about $20 million in additional program funding became available. On both occasions, the program's oversight committee relied on experts to prioritize and recommend projects for funding. As a result of this process, 56 of the 106 projects either evolved into, or were partially merged with, one or more of the currently funded projects, while 50 projects were eliminated entirely.

As of December 31, 2009, the program's oversight committee had allocated about $123 million of the approximately $171 million available to fund 85 projects in the four research areas of highway safety (40 percent), renewal (26 percent), reliability (16 percent), and capacity (17 percent). These funding allocations closely followed the overall funding percentages recommended by the Transportation Research Board in 2001. Of the 85 funded projects, 11 were completed, 52 were ongoing, and 22 were anticipated; all of the projects were expected to be completed by 2013. The outcomes are expected to vary by research area, ranging from useful data sets and related analyses to improved technologies, guidelines, and techniques for advancing the goals of each research area.
Among other outcomes, the program staff expects the following: (1) the safety research will produce the largest, most comprehensive database on driver behavior available to date and, thus, provide the foundation for significant improvements in highway safety; (2) the renewal research will produce a variety of tools and techniques to promote rapid and durable highway renewal; (3) the reliability research will develop methods to provide highway users with more consistent travel times between locations; and (4) the capacity research will provide strategies for better decision making in highway planning processes to increase the capacity of U.S. highways.

Because of funding and time constraints, 50 of the 106 research projects identified in 2003 were eliminated entirely from funding, while many of the remaining 56 projects had one or more portions of their planned research eliminated. Overall, most of the funded projects are for applied research, but many of the implementation-related activities identified in 2003 were eliminated. While activities to (1) translate research results into products, (2) provide training and disseminate research findings, and (3) provide technical support for implementing the research are often needed to widely implement research results, program staff is hopeful that other researchers will initiate some of the eliminated research activities after the program's completion.
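The project tallies reported in this summary and in appendix V reconcile arithmetically. The short Python sketch below uses only figures stated in this report to check that the per-area counts sum to the program-wide totals.

```python
# Per-research-area project counts from appendix V; totals from the summary above.
identified_2003 = {"safety": 15, "renewal": 38, "reliability": 33, "capacity": 20}
evolved_or_merged = {"safety": 9, "renewal": 21, "reliability": 13, "capacity": 13}
eliminated = {"safety": 6, "renewal": 17, "reliability": 20, "capacity": 7}
currently_funded = {"safety": 16, "renewal": 28, "reliability": 21, "capacity": 20}

assert sum(identified_2003.values()) == 106   # projects identified in 2003
assert sum(evolved_or_merged.values()) == 56  # evolved into or merged with funded projects
assert sum(eliminated.values()) == 50         # eliminated entirely
assert sum(currently_funded.values()) == 85   # currently funded projects

# Within each area, every 2003 project either evolved/merged or was eliminated.
for area in identified_2003:
    assert evolved_or_merged[area] + eliminated[area] == identified_2003[area]
```

Note that the funded counts need not match the evolved-or-merged counts for an area: the mapping from 2003 projects to funded projects is not one-to-one, and some funded projects (4 in reliability and 2 in capacity) were newly developed to fill research gaps or provide more affordable alternatives.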
The Asset Forfeiture Program has three primary goals: (1) to punish and deter criminal activity by depriving criminals of property used or acquired through illegal activities; (2) to enhance cooperation among foreign, federal, state, and local law enforcement agencies through the equitable sharing of assets recovered through this program; and, as a by-product, (3) to produce revenues in support of future law enforcement investigations and related forfeiture activities. A number of federal law enforcement organizations participate in the AFF, including USMS, which serves as the primary custodian of seized and forfeited property for the program. See figure 1 for the Asset Forfeiture Program participants.

DOJ's Asset Forfeiture Management Staff (AFMS) is part of DOJ's Justice Management Division and is responsible for managing and overseeing all financial aspects of the AFF, as well as for the review and evaluation of asset forfeiture program activities, internal controls and audit functions, information systems, and other administrative functions related to the fund.

The Asset Forfeiture and Money Laundering Section (AFMLS) is part of DOJ's Criminal Division and is responsible for legal aspects of the program, including civil and criminal litigation and providing legal advice to the U.S. Attorneys' Offices. AFMLS is responsible for establishing the Asset Forfeiture Program's policies and procedures, coordinating multidistrict asset seizures, acting on petitions for remission in judicial forfeiture cases, and coordinating international forfeiture and sharing. AFMLS also oversees the AFF's equitable sharing program.

United States Attorneys' Offices (USAO) are responsible for the prosecution of both criminal and civil actions against property used or acquired during illegal activity.

USMS serves as the primary custodian of seized property for the Asset Forfeiture Program. USMS manages and disposes of the majority of the valued property seized for forfeiture. In this custodial role, USMS manages all valued assets that are not considered evidence, contraband, or targeted for use by individual law enforcement agencies.

ATF enforces the federal laws and regulations relating to alcohol, tobacco, firearms, explosives, and arson by working directly and in cooperation with other federal, state, and local law enforcement agencies. While USMS is the primary custodian of valued assets, ATF maintains custody of assets seized under its authority, including firearms, ammunition, explosives, alcohol, and tobacco.

DEA implements major investigative strategies against drug networks and cartels. DEA maintains custody of narcotics and other seized contraband.

The FBI investigates a broad range of criminal violations, integrating the use of asset forfeiture into its overall strategy to eliminate targeted criminal enterprises.

Several agencies outside the Department of Justice also participate in the DOJ Asset Forfeiture Program. Non-DOJ participants include the United States Postal Inspection Service, the Food and Drug Administration's Office of Criminal Investigations, the United States Department of Agriculture's Office of the Inspector General, the Department of State's Bureau of Diplomatic Security, and the Department of Defense Criminal Investigative Service.
There are two types of forfeiture, administrative and judicial, and they differ in a number of ways, including (1) the point in the proceeding at which the property generally may be seized; (2) the burden of proof necessary to forfeit the property; and (3) in some cases, the type of property interests that can be forfeited.

Administrative forfeiture allows for property to be forfeited without judicial involvement. Although property may be seized without any judicial involvement, seizures performed by federal agencies must be based on probable cause. In administrative forfeitures, the government initiates a forfeiture action and will take ownership of the property provided that no one steps forward to contest the forfeiture. Specifically, the administrative forfeiture procedure requires that those with an interest in the property be notified and given an opportunity to request judicial forfeiture proceedings. See below for an example of an administrative forfeiture.

Example of Administrative Forfeiture
DEA initiated a task force investigation into a drug-trafficking organization. Task force officers received information from a confidential source that the drug-trafficking organization was using a van with hidden compartments to transport methamphetamine and drug proceeds, and a drug detection dog gave a positive alert to the presence of drugs in the van. Officers obtained and executed a search warrant on the vehicle, which resulted in the discovery and seizure of 149 kilograms of cocaine and $1,229,785 in U.S. currency. Because no party filed a claim contesting the forfeiture, the currency was administratively forfeited by DEA pursuant to 19 U.S.C. § 1609.

Judicial forfeiture, both civil and criminal, is the process by which property may be forfeited to the United States by filing a forfeiture action in federal court. In civil forfeiture, the action is against the property and thus does not require that the owner of the property be charged with a federal offense. The government need only prove a connection between the property and the crime. By contrast, criminal forfeiture requires a conviction of the defendant before property is subject to forfeiture.

Example of Civil Forfeiture
After obtaining a search warrant, agents searched a residence and the adjoining land on a 50-acre farm. Agents found firearms and ammunition, along with 60 pounds of processed marijuana. Agents also found approximately 4,000 marijuana plants growing outside in the adjacent field, along with approximately 2,500 plants being processed. While the owner of the farm will be subject to prosecution, a separate civil forfeiture action was filed against the property because the land was used for illegal activities. The farm where the marijuana plants were located was seized and will be forfeited under civil forfeiture proceedings.

Example of Criminal Forfeiture
According to the United States Attorney, two Philadelphia-based corporations operated an Internet enterprise that facilitated interstate prostitution activities. The defendants allegedly developed and operated an Internet website and created an online network for prostitutes, escort services, and others to advertise their illegal activities to consumers and users of those services. The case was investigated by state police, the FBI, and the Internal Revenue Service Criminal Investigations Division. The investigation found that the defendants received fees in the form of money orders, checks, credit card payments, and wire transfers from users of the website.
The funds the defendants allegedly received were the proceeds of violations of federal laws prohibiting interstate travel in aid of racketeering enterprises, specifically prostitution, and aiding and abetting such travel. The money-laundering conspiracy charge alleges that the defendants engaged in monetary transactions in property of a value greater than $10,000 derived from those unlawful activities. The defendants entered guilty pleas to the money-laundering conspiracy charge and agreed to serve a probation term of 18 months and to pay a $1,500,000 fine. In addition, under the terms of the plea agreement, the defendants agreed to the criminal forfeiture of $4.9 million in cash derived from the unlawful activity, as well as forfeiture of the domain name, all of which represent property used to facilitate the commission of the offenses.

The asset forfeiture process involves a number of key steps, including necessary planning in advance of the seizure, seizing and taking custody of the asset, notifying interested parties, addressing any claims and petitions, and equitable sharing with state and local law enforcement agencies. According to DOJ, enhancing cooperation among federal, state, and local law enforcement agencies is one goal of the equitable sharing program. For more information on how agencies qualify for equitable sharing, see appendix I.

From fiscal years 2003 through 2011, AFF revenues and expenditures increased, with annual revenues doubling in fiscal year 2006, due in part to an increase in forfeitures resulting from fraud and financial crimes investigations. DOJ estimates anticipated revenues and expenditures based on prior years' trends and then carries over funds to help cover operational expenses and other liabilities in the next fiscal year, including reserves needed for pending equitable sharing and third-party payments. However, the transparency of DOJ's process for carrying over these funds could be enhanced. Once all expenses have been accounted for and unobligated funds deemed necessary for next year's expenses have been carried over to the next fiscal year, DOJ then reserves funds to cover annual rescissions.

In the 9-year period from fiscal years 2003 through 2011, AFF revenues totaled $11 billion, growing from $500 million in fiscal year 2003 to $1.8 billion in fiscal year 2011. Since 2006, an increase in the prosecution of fraud and financial crime cases has led to substantial increases in AFF revenue. For example, a money laundering case in fiscal year 2007 involved the misappropriation of funds by the founder of a television cable company, Adelphia Communications, and resulted in over $700 million in forfeited assets. As a result of the increase in forfeitures resulting from money laundering and financial crimes investigations, in 2006, revenues doubled those of previous years, and for the first time in the AFF's history, total annual revenues grew above $1 billion to approximately $1.2 billion. Since 2006, the AFF's annual revenues have remained above $1 billion, with the highest revenues of $1.8 billion reported in 2011. Figure 2 shows the fund's revenue growth over time from fiscal years 2003 through 2011. Moreover, according to DOJ officials, in addition to an increase in the prosecution of fraud and financial crime cases, the increase in revenues can also be attributed to an overall increase in the number of forfeiture cases together with higher-value forfeitures.
Across all fiscal years, forfeited cash income constituted 76 percent or more of the AFF's revenue sources. Forfeited cash income includes cash/currency, as well as financial instruments such as money orders, bank accounts, brokerage accounts, and shares of stock. The second, and much smaller, source of revenue is the sale of forfeited property, including automobiles, boats, airplanes, jewelry, and real estate, among others. In fiscal year 2011, revenues from forfeited cash income and the sale of forfeited property together accounted for over 84 percent of the total revenues. Other sources of income may include transfers from the Treasury Forfeiture Fund (TFF) and transfers from other federal agencies. Additionally, since fiscal year 2006, when the AFF's revenues from fraud and financial crime cases increased, large-case deposits (forfeitures greater than $25 million) of forfeited cash income have contributed an average of 37 percent to total revenues. For example, in 2007, DOJ reported a total of six large deposits that totaled $842 million, or slightly over 50 percent of the AFF's total revenues in that fiscal year. These forfeitures of assets greater than $25 million involved investigations of misappropriation of funds, including corporate fraud and the illegal sale of pharmaceutical drugs. The types of assets that were seized in these investigations were primarily forfeited cash income.

From fiscal years 2003 through 2011, AFF expenditures totaled $8.3 billion. As revenues have increased, there has been a corresponding increase in expenditures in support of asset forfeiture activities. Specifically, expenditures increased from $458 million in fiscal year 2003 to $1.3 billion in fiscal year 2011. Figure 3 shows the expenditures from fiscal year 2003 through 2011, including the large growth in expenditures beginning in 2007.

Revenues resulting from forfeitures are used to pay the forfeiture program's expenditures in three major categories:
1. payments to third parties, including payments to satisfy interested parties such as lienholders, as well as the return of funds to victims of large-scale fraud;
2. equitable sharing payments to state and local law enforcement agencies that participated in law enforcement efforts resulting in the forfeitures; and
3. all other program operations expenses, which comprise a total of 13 expenditure categories such as asset management and disposal, the storage and destruction of drugs, and investigative expenses leading to a seizure.

Table 1 shows the AFF's expenditures across all fiscal years, including payments to third parties, equitable sharing, and all other program operations expenses. Equitable sharing payments to state and local law enforcement agencies have generally increased since fiscal year 2003; in fiscal year 2003, equitable sharing payments totaled $218 million, and in fiscal year 2011, equitable sharing totaled $445 million. Moreover, when compared with DOJ grant programs, equitable sharing is one of the largest DOJ programs providing funds to recipients in order to support state and local law enforcement activities. For example, in fiscal year 2010, the Victims of Crime Assistance (VOCA) Program was DOJ's largest grant program; DOJ distributed approximately $412 million in funds through the VOCA program. By way of comparison, equitable sharing in fiscal year 2010 provided a total of $388 million in equitable sharing payments to state and local law enforcement agencies.
According to state and local law enforcement officials we met with, because most of their departmental budgets go toward personnel costs, the equitable sharing program is extremely important: it helps fund equipment, training, and other programs that they may otherwise not be able to afford. For example, one local law enforcement agency stated that salaries make up 96 percent of its annual budget; as a result, equitable sharing dollars allow it to purchase equipment it could not otherwise buy with its limited annual budget. See appendix I for the total equitable sharing payments made to each state in fiscal year 2011.

Equitable sharing has generally increased from 2003 through 2011; however, as a percentage of total expenditures, equitable sharing has decreased from 48 percent of total expenditures in 2003 to 34 percent in 2011. This percentage decrease began in fiscal year 2006, when another expenditure category, payments to third parties including lienholders and victims, increased from 10 to 44 percent of total expenditures. DOJ officials attribute the shift among these major expense categories in part to the increase in the prosecution of fraud cases with significant numbers of victims. Moreover, because large-case deposits are generally the result of fraud and financial crime cases, they typically have a greater proportion of payments to victims than equitable sharing, a fact that may also contribute to the overall percentage decrease in equitable sharing. For example, in fiscal year 2007, as a result of a non-prosecution agreement with Adelphia Communications, over $700 million in cash and stocks was forfeited and liquidated. In fiscal year 2012, the net proceeds from these forfeitures, which totaled approximately $728 million, were returned to victims.

In addition to equitable sharing and third-party payments to victims and lienholders, the AFF is used to pay for a variety of program operations expenses. According to DOJ, the primary purpose of the AFF is to provide a stable source of resources to cover the costs of the Asset Forfeiture Program, including the costs of seizing, evaluating, inventorying, maintaining, protecting, advertising, forfeiting, and disposing of property seized for forfeiture. Among the program operations expenses covered by the AFF are costs associated with storing, maintaining, and disposing of forfeited assets. The AFF also funds case-related expenses, including the costs of managing paperwork, costs associated with the prosecution of forfeiture cases, costs associated with the execution of forfeiture judgments, and the costs of advertising. The AFF also funds a variety of investigative expenses associated with forfeiture, including payments to reimburse any federal agency participating in the AFF for investigative costs leading to seizures. Other investigative expenses may include awards for information, purchase of evidence, and costs to fund joint task force operations. For additional details regarding expenditure categories, see appendix II.

At the end of each fiscal year, DOJ carries over funds in order to help ensure it has sufficient resources to cover all AFF expenses that may not be covered by the next year's revenues; however, the process DOJ uses to determine how much to carry over each year is not documented or outlined in its Congressional Budget Justifications.
While DOJ officials stated that they cannot predict how much revenue will result from forfeitures in any given year, they attempt to estimate anticipated revenues based on prior years' trends. They then carry over funds needed to cover anticipated expenses for the coming year, including funds needed to cover the costs of pending equitable sharing and third-party payments, as well as funds needed to ensure the Asset Forfeiture Program's solvency, including the anticipated costs associated with continuing forfeiture activities, at the start of the next fiscal year. Similar to the growth in revenues and expenditures, the funds DOJ carries over to cover these authorized expenses at the end of each fiscal year have grown since 2003. For example, at the end of fiscal year 2003, DOJ carried over approximately $365 million both to maintain solvency and to cover anticipated equitable sharing and third-party payments in fiscal year 2004. In comparison, in fiscal year 2011, DOJ carried over a total of $844 million to cover these expenditures. Additionally, DOJ officials emphasized that because revenues from fraud and financial crime cases have increased, the funds needed to make third-party payments, including payments to victims, have also increased.

The flow of funds into and out of the AFF is complex and involves an interaction among revenues, expenditures, and funds carried over to manage the AFF. The following illustrates how DOJ used revenues, expenditures, and carryover funds to manage the AFF in fiscal year 2010:

At the start of fiscal year 2010, DOJ carried over a total of $634 million in funds from fiscal year 2009 to maintain the program's solvency and for pending equitable sharing and third-party payments. These funds were used at the start of fiscal year 2010 to continue operations, such as paying expenses for asset storage, and to cover pending equitable sharing and third-party payments. In addition to the $634 million, $207 million was reserved to cover DOJ's fiscal year 2010 rescission. This rescission was proposed in the President's budget and later passed by Congress and enacted into law. As a result, at the start of fiscal year 2010, DOJ carried over a total of $841 million in funds from fiscal year 2009, as shown in table 2 below.

In the course of fiscal year 2010, a total of approximately $1.58 billion was deposited into the AFF, including revenues received from forfeitures. Based on the total of $841 million carried over from fiscal year 2009 plus the $1.58 billion deposited into the AFF in fiscal year 2010, DOJ then had approximately $2.42 billion in total available resources in fiscal year 2010. Of these resources, DOJ obligated $1.45 billion in fiscal year 2010 and carried over $975 million into fiscal year 2011 to maintain solvency and reserves and to cover the proposed fiscal year 2011 rescission.

While DOJ had obligated $1.45 billion for the three main expenditure categories (equitable sharing, third-party interests, and all other program operations expenses), DOJ's actual expenditures in fiscal year 2010 totaled $1.29 billion. The difference of $0.16 billion in fiscal year 2010 represents funds that had been obligated but not yet spent. According to DOJ officials, there may be a lag between the funds obligated in a fiscal year and the actual expenditures, and therefore it is not uncommon for total obligations to be higher than expenditures in a given fiscal year. Table 2 shows the total funds available for use in fiscal year 2010.
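Because the interaction of carryover, deposits, obligations, and expenditures can be hard to follow in prose, the sketch below restates the fiscal year 2010 figures from the discussion above as a simple balance calculation. All amounts are in millions of dollars and are the rounded figures reported in the text, so the results agree only approximately.

```python
# AFF fund flow for fiscal year 2010, using the rounded figures reported above
# (all amounts in millions of dollars).
carryover_solvency_and_pending = 634   # from FY2009, for solvency and pending payments
rescission_reserve = 207               # reserved to cover the FY2010 rescission
total_carryover = carryover_solvency_and_pending + rescission_reserve
assert total_carryover == 841          # total carried over from FY2009

deposits_fy2010 = 1_580                # revenues and other deposits during FY2010
total_available = total_carryover + deposits_fy2010
print(f"Total available resources: ${total_available}M")  # $2,421M ("approximately $2.42 billion")

obligated = 1_450                      # obligated in FY2010
carried_into_fy2011 = total_available - obligated
print(f"Carried into FY2011: ~${carried_into_fy2011}M")   # ~$971M; the text reports $975M,
                                                          # a gap reflecting the rounding above

expenditures = 1_290                   # actual outlays in FY2010
obligated_not_yet_spent = obligated - expenditures
assert obligated_not_yet_spent == 160  # the $0.16 billion obligated but not yet spent
```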
In order to identify the funds that will need to be carried over to cover anticipated expenses for the coming year, DOJ officials stated that they use reports generated from the department's asset-tracking system to identify pending equitable sharing and third-party payments. These reports provide DOJ with the information needed to determine the carryover funds required for the disbursements that must be paid in the next fiscal year. In addition, DOJ carries over funds needed to ensure the Asset Forfeiture Program's solvency at the start of the next fiscal year. According to DOJ officials, they consider a number of factors when calculating the funds needed to maintain solvency, such as historical data, including information on the costs of past contracts, salary costs, and other expenses; known future expenses, including salaries and contracts; and the costs of any potential new expenditures.

DOJ officials explained the general factors they consider when carrying over funds needed to cover anticipated expenditures in the next fiscal year, but they do not specify in the AFF's Congressional Budget Justifications how they determine the total amounts carried over each year. Specifically, the Congressional Budget Justifications do not include information on how DOJ calculated the amounts carried over, nor do they explain the significant variations from one year to the next in the amount of funds carried over for solvency. For example, in fiscal year 2007, DOJ carried over $188 million based on its estimates of what it needed to cover solvency. The amount carried over to cover solvency then increased to $402 million in fiscal year 2009 and decreased to $169 million by fiscal year 2011. Figure 4 shows the variation in carryover funds retained in the AFF at the end of each fiscal year to cover solvency, equitable sharing, and third-party payments from fiscal years 2003 through 2011.

DOJ officials stated that a number of cost drivers may change the funds needed for solvency from year to year. These cost drivers include salaries for government employees, information systems costs, asset management and disposal contracts, and contracts for administrative support staff, among other things. According to DOJ, these categories comprise recurring operational costs of the Asset Forfeiture Program. While these expenses are generally funded by AFF revenues, DOJ carries over funds to ensure it has sufficient resources should they not be covered by the next year's revenues. Moreover, additional funds may need to be carried over to account for any number of program uncertainties. For example, the AFF could be responsible for making payments related to pending judicial actions in the event that DOJ were to lose a forfeiture case in court. Therefore, DOJ may carry over more funds from one fiscal year to the next in order to cover these types of liabilities. DOJ officials stated that they estimate needed carryover funds by reviewing the cost drivers, as well as by assessing the risk that revenues may be less than projected. DOJ officials further noted that planning for AFF carryover and the actual carryover can differ due to the unpredictable dynamics of the fund. According to DOJ officials, there is no documented process used to determine the amount of funds that are carried over at the end of each fiscal year.

Our prior work has emphasized the importance of transparency in federal agencies' budget presentations to help provide Congress the necessary information to make appropriation decisions and conduct oversight. The
department provides a yearly budget justification to Congress that details the estimated revenues, expenses, and carryover requirements for the upcoming fiscal year as well as AFF-related performance information. Officials further noted that the Congressional Justification includes discussions of the various categories of fund expenses, but does not include a detailed discussion of the process used to estimate the amounts carried over. Without a clearly documented and transparent process that demonstrates how DOJ determines the amounts that will be carried over each year, it is difficult to determine whether DOJ’s conclusions regarding the amounts that need to be carried over each year are well founded. Providing more transparent information as part of the AFF’s annual budget process would better inform Congress’ oversight of the AFF, by making it easier to evaluate whether the funds carried over to maintain Asset Forfeiture Program solvency and cover pending equitable sharing and third-party payments adequately reflect the AFF’s needed resources. After revenues needed to cover expenses in the current and upcoming fiscal years have been carried over, DOJ reserves funds to cover rescissions. After these funds have been reserved, any funds determined to be in excess of these requirements (excess unobligated balances) may be declared as Super Surplus. While these Super Surplus balances may be used at DOJ’s discretion for a variety of purposes, in recent years, these balances have been used as a means to supplement the funds reserved to cover yearly rescissions proposed in the President’s budget, and later passed by Congress and enacted into law. Figure 5 provides a description of the process for identifying Super Surplus balances in any given fiscal year. Rescissions are legislative actions to reduce an agency’s budgetary resources. For example, in fiscal year 2010, $387 million was rescinded from the AFF, and in fiscal year 2011, the enacted rescission totaled $495 million. Rescinded funds are generally taken from an agency and returned to the Treasury before they are obligated. However, per Office of Management and Budget (OMB) guidance, rescinded funds from the AFF have not been returned to the Treasury. Instead, DOJ has treated the funds as unavailable for obligation for the remainder of the fiscal year for which the rescission was enacted. At the beginning of each new fiscal year, DOJ would have made the rescinded funds available for obligation again, also in response to OMB guidance, had a new rescission not been enacted. With the enactment of a new rescission for the subsequent fiscal year, however, DOJ has continued to treat the rescinded funds as unavailable for obligation. For example, the $387 million that was rescinded from the AFF in fiscal year 2010 was treated as unavailable for obligation in fiscal year 2010, and was then used again to cover part of the enacted $495 million rescission in fiscal year 2011. To make up the difference needed to meet the $495 million rescission in fiscal year 2011, DOJ used unobligated balances in the amount of $233 million. Table 3 shows the enacted rescissions for each fiscal year, as well as the unobligated balances used by DOJ in order to meet the rescissions. One effect of these rescissions is to reduce the department’s discretionary spending in the year in which the rescission was enacted. 
This could ultimately decrease the size of the federal deficit, provided the decreased spending from the rescission is not offset by increased spending elsewhere. For example, in fiscal year 2012, DOJ's discretionary budget authority was reduced to $27.4 billion, due in part to the $675 million enacted AFF rescission.

DOJ has established guidelines and oversight mechanisms for the equitable sharing program, but additional controls could enhance the consistency and transparency of the program. Moreover, DOJ has recently started conducting reviews of state and local law enforcement agencies that participate in the equitable sharing program to determine the extent to which they are complying with program policies as well as bookkeeping, accounting, and reporting requirements.

DOJ has established written guidelines governing how state and local law enforcement agencies should apply for equitable sharing. Specifically, according to the guidelines, state and local law enforcement agencies must submit an application for equitable sharing in which they outline identifying information for their agency, information on the asset that was forfeited, how they intend to use the asset (or the proceeds of the asset), and the number of work hours their agency contributed to the investigation. In addition, DOJ has established mechanisms governing how DOJ agencies should make equitable sharing determinations. Specifically, the field office for the DOJ agency that served as the lead federal agency in the investigation is responsible for making an initial recommendation regarding the percentage of the proceeds of the forfeited asset that each participating agency should receive. According to forfeiture statutes governing the transfer of forfeited property to state and local law enforcement agencies, equitable sharing determinations must bear a reasonable relationship to the degree of direct participation of the requesting agency in the total law enforcement effort leading to the forfeiture (see 21 U.S.C. § 881(e)(3); 18 U.S.C. § 981(e)). Sharing determinations are to be based on a comparison of the work hours that each federal, state, and local law enforcement agency contributed to the investigation. However, according to DOJ guidelines, further adjustments to sharing percentages may be made when work hours alone do not reflect the relative value of an agency's participation in an investigation. For example, if a state or local law enforcement agency contributed additional resources or equipment to an investigation, its sharing percentage might be adjusted upward from what it would be based on work hours alone.

The field office's initial recommendation thus reflects the relative contributions of the agencies that participated in the investigation. The field office is then required to forward both the application forms completed by state and local law enforcement agencies and the sharing recommendations to investigative agency headquarters officials for review. The review process differs depending on the amount and type of the forfeiture, as follows:

For administrative forfeitures less than $1 million, agency headquarters officials are responsible for reviewing and approving the final sharing determination.

For judicial forfeitures less than $1 million, agency headquarters officials are to forward the recommendation to the USAO for final approval.
In any administrative or judicial forfeiture where the total appraised value of all forfeited assets is $1 million or more, in multidistrict cases, and in cases involving the equitable transfer of real property, agency headquarters officials forward the recommendation to the USAO for review, and it is then submitted to AFMLS officials for review.

o Where the investigative agency, the USAO, and AFMLS concur in a sharing recommendation, the Assistant Attorney General makes the final equitable sharing determination.

o Where the investigative agency, the USAO, and AFMLS do not all concur in a sharing recommendation, the Deputy Attorney General (DAG) determines the appropriate share.

Figure 6 shows the steps involved in making equitable sharing determinations.

While DOJ has established guidance indicating that adjustments to sharing percentages may be made when a state or local law enforcement agency's work hours alone do not reflect the value of its participation in an investigation, DOJ has not developed guidance regarding how to apply the qualitative factors that may warrant departures from work-hour-based sharing percentages. DOJ agencies currently make adjustments to sharing percentages based on a number of qualitative factors regarding the additional assistance or contributions state or local law enforcement agencies may have made during an investigation. According to DOJ's written guidelines, DOJ agencies must take these factors into account when determining whether to adjust an equitable sharing percentage beyond a strict work hour allocation. For example, according to DOJ guidelines, the deciding authority should consider such factors as the inherent importance of the activity, the length of the investigation, whether an agency originated the information leading to the seizure, or whether an agency provided unique and indispensable assistance, among others. In addition, DOJ's Equitable Sharing Guidelines state that each agency may use judgment when determining how these qualitative factors should be used to adjust sharing percentages.

In the course of our review, DOJ officials provided examples of these qualitative factors. For example, if a state or local law enforcement agency provided a helicopter, a drug-sniffing dog, or a criminal informant to an investigation, DOJ would consider these contributions to be unique or indispensable assistance. In one case we reviewed, a local law enforcement agency that participated in a joint investigation with federal agents would have received 7.4 percent in equitable sharing based on the work hours it contributed to the investigation. However, the agency also provided information obtained from a confidential source that led to the seizure and provided a helicopter for aerial surveillance. As a result, its final sharing determination was adjusted upward from 7.4 percent to 12 percent. If the net proceeds of the forfeiture are $1.6 million once all investigative and forfeiture-related expenses have been paid, the resulting equitable sharing payment made to the law enforcement agency will increase from $118,400 to $192,000. Standards for Internal Control in the Federal Government calls for significant events to be clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended.
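The guidelines' work-hour calculation and the adjustment in the case described above can be restated arithmetically. The following minimal Python sketch is illustrative only (the function names are ours, not DOJ's); it reproduces the figures from that case.

```python
def base_sharing_percentage(agency_hours: float, total_hours: float) -> float:
    """Base share: an agency's work hours divided by all agencies' combined hours."""
    return agency_hours / total_hours

def sharing_payment(net_proceeds: float, sharing_pct: float) -> float:
    """Payment to an agency once all investigative and forfeiture-related
    expenses have been paid out of the forfeiture proceeds."""
    return net_proceeds * sharing_pct

# Figures from the case above: work hours alone supported a 7.4 percent share,
# adjusted upward to 12 percent for unique assistance (a confidential source
# and aerial surveillance).
net_proceeds = 1_600_000
print(round(sharing_payment(net_proceeds, 0.074)))  # 118400 -- work hours alone
print(round(sharing_payment(net_proceeds, 0.12)))   # 192000 -- after the qualitative adjustment
```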
AFMLS officials report that they have established "rules of thumb" based on historical knowledge or precedent when applying these qualitative factors to equitable sharing adjustments that are subject to their review, but they have not issued guidance to the DOJ agencies. Further, headquarters officials for each of the DOJ agencies emphasized that they follow the guidelines issued by DOJ when making adjustments to sharing percentages. However, as previously discussed, these guidelines outline the qualitative factors that may be considered when making adjustments to sharing percentages, but they do not include any additional information regarding how qualitative factors should be used to adjust sharing percentages. As a result, agency headquarters officials stated that field office staff use their own judgment when determining how qualitative factors should be used to adjust sharing percentages. AFMLS officials state that adjustments to equitable sharing percentages based on qualitative factors should be made on a case-by-case basis because each investigation is unique and the facts and circumstances of each case must be considered in totality before making adjustments to sharing determinations. While we recognize the inherently subjective nature of evaluating each agency's unique contributions to a case based on facts and circumstances, additional guidance regarding how to apply the qualitative factors could help to improve transparency and better ensure consistency in how these qualitative factors are applied over time or across cases. This is particularly important given that these determinations represent DOJ's overall assessment of each agency's unique contributions to an investigation and are a key component of how DOJ makes decisions about how much to award each agency.

DOJ's written guidance also requires the DOJ agencies that are responsible for making equitable sharing determinations to use work hours as the primary basis for calculating sharing percentages; however, agencies do not consistently document the work hours each agency contributed to the investigation. According to DOJ officials, the work hours contributed by each of the local, state, and federal law enforcement agencies involved in the investigation should be added together by the DOJ agency leading the investigation to arrive at a total. Each law enforcement agency's individual work hours are then divided by the total in order to determine each agency's equitable sharing percentage. DOJ's guidance states that every agency participating in the investigation should report work hours either on the application for equitable sharing or on the equitable sharing decision form. While state and local law enforcement agencies record their work hours on their applications for equitable sharing, we found that the DOJ agencies did not consistently record their own hours or the total hours contributed by all participating agencies. Of the 25 equitable sharing determinations we reviewed, 5 included supplemental memos provided by the DOJ agencies detailing the work hours provided by all of the agencies involved in the investigation. However, these memos are not required under existing DOJ guidance and were provided only in those investigations subject to AFMLS review. For the remaining 20 determinations, DOJ agencies did not document this information.
Specifically, although work hours serve as the primary basis for calculating equitable sharing determinations, in 20 of the 25 determinations we reviewed, neither the work hours contributed by DOJ agencies nor the total number of work hours contributed by all of the agencies involved in the investigation was recorded in the documents provided to agency headquarters officials for review. According to DOJ agency headquarters officials responsible for reviewing and approving equitable sharing determinations, they rely on agents in the field to calculate sharing percentages and, as a result, they do not verify the work hours contributed by each agency involved in the investigation. In the absence of documented work hours, it is unclear how deciding authorities could verify whether equitable sharing determinations involving millions of dollars in assets were calculated in accordance with established guidance.

DOJ's guidance does not explicitly require DOJ agencies to record the rationale for making adjustments to sharing percentages when work hours alone do not reflect the value of an agency's participation in the investigation. In the 25 equitable sharing determinations we reviewed, state and local law enforcement agencies often reported basic information regarding their agency's role in a particular investigation in their applications for equitable sharing, but DOJ's rationale for making adjustments to sharing percentages was not consistently documented in each investigation. Specifically, agencies did not consistently document whether they believed the state or local law enforcement agency made additional contributions that warranted departures from standard sharing percentages. Of the 25 determinations we reviewed, 5 included supplemental memos provided by the DOJ agencies indicating whether adjustments from standard sharing percentages were warranted. In 3 of these 5 determinations, which were subject to AFMLS review, adjustments to sharing percentages were made based on the additional contributions of the state and local law enforcement agencies involved in the investigation, and the memos detailed the rationale for making the adjustment in each case. However, these memos are not required under existing DOJ guidance and were provided only in those investigations subject to AFMLS review. For the remaining 20 investigations, DOJ did not document this information. Moreover, because work hours were not documented in these cases, it was not possible to determine whether further adjustments were made based on additional contributions made by each of the agencies involved in the investigation.

According to DOJ agency headquarters officials responsible for reviewing and approving equitable sharing determinations, they rely on agents in the field to calculate sharing percentages and, as a result, they do not attempt to verify the adjustments that are made based on each agency's additional contributions to the investigation. Specifically, agency headquarters officials reported that the field is responsible for confirming state and local law enforcement's contributions to a case through a variety of means, including face-to-face meetings, telephone conversations, and e-mails. For example, one agency official noted that although the rationale for making adjustments to sharing percentages is not included in the documents provided to headquarters for review and approval, the field office is most familiar with the investigation and the contributions that each state and local law enforcement agency may have made in a given case.
Therefore, headquarters considers the field office to be the best source of information for how qualitative factors should be taken into account when adjusting sharing percentages. Agency headquarters officials further noted that it is rare for them to question equitable sharing recommendations made by the field or to ask for more information regarding the rationale for adjustments to sharing percentages. While the field office may have firsthand knowledge of the contributions of state and local law enforcement agencies in a given investigation, when the rationale for adjustments to sharing percentages is not documented, there is limited transparency over how and why agencies make adjustments to sharing determinations when work hours alone do not accurately represent an agency's contribution to an investigation.

Standards for Internal Control in the Federal Government states that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire process or life cycle of a transaction or event, from the initiation and authorization through its final classification in summary records. In addition, control activities help to ensure that all transactions are completely and accurately recorded. When work hours and the rationale for making adjustments to sharing percentages are not consistently documented, it is unclear how the equitable sharing deciding authorities could evaluate the nature and value of the contributions of each of the agencies involved in the investigation. Establishing a mechanism to ensure that this information is documented by all DOJ agencies responsible for making equitable sharing determinations could enhance the transparency of equitable sharing decisions.

Without documented work hours or a documented rationale for adjustments to sharing percentages, deciding authorities also have limited means to verify the basis for equitable sharing decisions. Agency headquarters officials responsible for reviewing and approving equitable sharing determinations report that they review equitable sharing applications and decision forms to ensure that they are complete and that sharing determinations appear reasonable. However, headquarters officials for each of the DOJ agencies reported that they rely on field office staff to ensure that equitable sharing percentages were calculated correctly based on the work hours and the qualitative factors that each federal, state, and local law enforcement agency contributed to the investigation. Because the information that serves as the basis for equitable sharing recommendations, including work hours and the qualitative factors used to make adjustments to sharing percentages, is not subject to review by agency headquarters officials, DOJ does not have reasonable assurance that equitable sharing determinations are made in accordance with the established guidance.

According to Standards for Internal Control in the Federal Government, controls should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations. Such monitoring should be performed continually and be ingrained in the agency's operations. This could include regular management and supervisory activities, comparisons, reconciliations, or other actions.
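As an illustration of what such ongoing monitoring might look like in practice, the hypothetical sketch below spot-checks a sample of determinations by recomputing the work-hour-based share and flagging approved percentages that depart from it without a documented rationale. The record layout, field names, and tolerance are our assumptions, not an existing DOJ system.

```python
import random

# Hypothetical determination records; the layout is illustrative, not DOJ's.
determinations = [
    {"case": "A", "agency_hours": 740, "total_hours": 10_000,
     "approved_pct": 0.120, "rationale": "confidential source; aerial surveillance"},
    {"case": "B", "agency_hours": 300, "total_hours": 1_200,
     "approved_pct": 0.300, "rationale": None},
]

def spot_check(records, sample_size=2, tolerance=0.001):
    """Flag sampled determinations whose approved share departs from the
    work-hour-based share without a documented rationale for the adjustment."""
    flagged = []
    for rec in random.sample(records, min(sample_size, len(records))):
        base_share = rec["agency_hours"] / rec["total_hours"]
        departs = abs(rec["approved_pct"] - base_share) > tolerance
        if departs and not rec["rationale"]:
            flagged.append(rec["case"])
    return flagged

# Case A's upward adjustment is documented; case B's is not, so B is flagged.
print(spot_check(determinations))  # ['B']
```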
Developing a mechanism along these lines to verify the work hours and qualitative factors that serve as the basis for equitable sharing determinations could improve DOJ's visibility over equitable sharing determinations and help promote confidence in the integrity of the equitable sharing program. Agency headquarters officials have reported that, altogether, DEA, ATF, and FBI reviewed a total of 52,034 equitable sharing requests in fiscal year 2011, and 113 of these requests went to AFMLS for review and approval. As a result, agency headquarters officials note that they have limited resources to verify the basis for each and every equitable sharing determination. We recognize that in the face of these limited resources, it may not be practical for agency headquarters officials to review all of the information used in support of all equitable sharing determinations. However, a spot-check approach would help address resource constraints while allowing headquarters officials to assess the extent to which equitable sharing decisions are made in accordance with established guidelines.

DOJ has established requirements governing the permissible uses of equitable sharing funds. Specifically, DOJ's guidelines state that equitably shared funds or assets put into official use shall be used by law enforcement agencies for law enforcement purposes only. Some of the permissible uses of equitable sharing funds include training, facilities, equipment, and travel and transportation in support of law enforcement activities, as well as paying for the costs of asset accounting and auditing functions. Examples of impermissible uses of equitable sharing funds include payments to cover the costs of salaries or benefits and non-law enforcement education and training. DOJ's guidelines also state that agencies should use federal sharing monies prudently and in such a manner as to avoid any appearance of extravagance, waste, or impropriety. For example, tickets to social events, hospitality suites at conferences, and meals outside of the per diem are all considered impermissible uses of shared funds.

DOJ's guidelines further state that equitable sharing funds must be used to increase or supplement the resources of the receiving state or local law enforcement agency and should not be used to replace or supplant the appropriated resources of the recipient. This means that equitable sharing funds must serve only to supplement the funds an agency would normally receive and must not be used as a substitute for funds or equipment that would otherwise be provided through the agency's appropriated resources. For example, if city officials were to cut the police department's budget by $100,000 as a result of the police department's receiving $100,000 in equitable sharing funds, DOJ would consider this to be an example of improper supplantation, which is not an allowable use of equitable sharing funds.

In addition to establishing requirements governing the permissible uses of equitably shared funds and property, DOJ has also established bookkeeping, internal control, reporting, and audit requirements that state and local law enforcement agencies must follow in order to participate in the equitable sharing program. For example, state and local law enforcement agencies must establish mechanisms to track equitably shared funds and property, implement proper bookkeeping and accounting procedures, maintain compliance with internal control standards, and meet defined reporting standards.
Among other things, DOJ’s equitable sharing guidance calls for participating agencies to avoid commingling DOJ equitable sharing funds with funds from any other source, maintain a record of all equitable sharing expenditures, and complete annual reports known as Equitable Sharing Agreement and Certification Forms. These Equitable Sharing Agreement and Certification Forms require agencies participating in the equitable sharing program to report annually on the actual amounts and uses of equitably shared funds and property. Among other things, agencies must detail the beginning and ending equitable sharing fund balance, and the totals spent on specific law enforcement activities (e.g., training, computers, weapons, and surveillance equipment). In submitting the form each year, agencies must certify that they will be complying with the guidelines and statutes governing the equitable sharing program. Office of Management and Budget, “Audits of States, Local Governments and Non-Profit Organizations,” A-133, June 27, 2003. requirements, the substantial majority of equitable sharing participants are required to comply with the Single Audit Act. Under a Single Audit, an auditor must provide his or her opinion on the presentation of the entity’s financial statements and schedule of federal expenditures, and on compliance with applicable laws, regulations, and provisions of contracts or grant agreements that could have a direct and material effect on the financial statements. AFMLS officials reported that pilot testing of the compliance review process was started in December 2010, but the compliance review team did not start on a full-scale basis until April 2011. beforehand either through news reports or referrals from the U.S. Attorneys’ Offices. AFMLS has established guidelines for conducting compliance reviews of equitable sharing participants in order to determine the extent to which agencies are following established equitable sharing guidelines. Among other things, they select a sample of the agency’s expenditures in order to substantiate the agency’s records and to confirm that the expenditure was consistent with established DOJ guidelines. AFMLS also determines whether the agency has established an appropriate system of internal controls for tracking and recording equitable sharing receipts and expenditures. Further, AFMLS determines whether the agency was subject to Single Audit requirements and if so, whether the Single Audit including reporting on equitable sharing funds was completed as required. As of December 2011, AFMLS had completed a total of 11 onsite audits of approximately 9,200 state and local law enforcement agencies that participate in the equitable sharing program.has limited staff (eight total) and resources to conduct compliance reviews of equitable sharing participants. As a result, AFMLS reported conducting risk assessments in order to select agencies for compliance reviews. In addition to monitoring news briefs regarding the potential misuse of funds among equitable sharing participants, some of the issues that AFMLS considers as part of these risk assessments include the amount of each agency’s equitable sharing expenditures, whether a state or local law enforcement agency has reported spending a significant amount of money in a sensitive area, and whether a small law enforcement agency that may be unfamiliar with the equitable sharing program suddenly received a large equitable sharing payment. 
The 11 compliance reviews completed in 2011 revealed that participants do not consistently follow requirements to properly account for equitable sharing receipts and expenditures, do not consistently comply with the allowable uses of equitable sharing funds, and do not consistently complete Single Audits as required. AFMLS identified one or more areas for corrective action in 9 of the 11 compliance reviews; the other two state and local law enforcement agencies were determined to be in full compliance with all of the equitable sharing requirements. In May 2012, AFMLS officials reported that all of the agencies had fully addressed the corrective actions identified by AFMLS. See appendix III for the results of the 11 compliance reviews AFMLS had completed as of December 2011. AFMLS has established a mechanism to systematically track and analyze the results of these reviews. Specifically, the findings from each compliance review are entered into a tracking report, and follow-up with each agency is completed to ensure that corrective actions are taken. AFMLS officials noted that they may follow up with an agency multiple times to ensure that items identified for corrective action are addressed. According to AFMLS, tracking the frequency of findings and the trends identified in the course of compliance reviews is an important tool in risk evaluations, both for future audit selections and for return audits of participants with particularly troublesome problems. Further, AFMLS officials stated that they plan to use the results of compliance reviews to identify larger trends that may need to be addressed across all equitable sharing participants. For example, AFMLS has found through these reviews that equitable sharing recipients are not consistently reporting equitable sharing expenditures on Single Audits; AFMLS has reported that it is working with both equitable sharing recipients and the auditor community to address this issue. AFMLS's approach to conducting compliance reviews of equitable sharing participants is consistent with standards for internal control, which state that monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. With more than $1 billion in forfeited assets deposited into the AFF every year since 2006, the Asset Forfeiture Program generates substantial revenue for the Department of Justice. These funds are used to cover annual operating expenses, to compensate crime victims, and to share with state and local law enforcement agencies that participate in investigations resulting in forfeiture. The significant amounts of money involved, as well as the sensitive nature of asset forfeiture, make it imperative to be vigilant in maintaining the transparency of the program. Because the Asset Forfeiture Program's operations are supported by annual revenues, DOJ faces the challenging task of estimating future revenues and expenditures each year. The AFF's annual revenues have consistently exceeded annual expenditures, allowing DOJ to cover annual rescissions and to reserve funds for the next fiscal year. This allows DOJ to ensure that the AFF has sufficient resources at the start of each fiscal year to remain solvent and to cover pending equitable sharing and third-party payments. However, the AFF's Congressional Budget Justification does not clearly outline the factors that DOJ considers when determining the total amounts that need to be carried over each fiscal year.
As part of the AFF’s annual budget process, documenting how DOJ determines the funds that need to be carried over at the end of each year and providing additional details on that determination to Congress would provide more transparency over the process and would help Congress make more informed appropriations decisions. In addition, the authorization to share federal forfeiture proceeds with cooperating state and local law enforcement agencies is one of the most important provisions of asset forfeiture. DOJ has established guidelines stating that adjustments to equitable sharing percentages should be based on qualitative factors; however, additional guidance regarding how to apply these factors could help to improve the transparency and better ensure consistency with which adjustments to sharing percentages are made over time or across cases. Additionally, there are gaps in the extent to which key information that serves as the basis for equitable sharing decisions is documented. In the absence of documenting the work hours used to calculate initial equitable sharing percentages—the primary means to determine each agency’s share of forfeiture proceeds—it is unclear how equitable sharing deciding authorities could verify the relative degree of participation of each of the agencies involved in the case. Similarly, documenting information on DOJ’s rationale for making adjustments to sharing percentages could help to improve transparency over whether equitable sharing decisions are being made in accordance with DOJ’s guidance. Additionally, establishing a mechanism to verify that equitable sharing determinations are made in accordance with established guidance would provide DOJ with greater assurance that there are effective management practices in place to help promote confidence in the integrity of the equitable sharing program. We are making four recommendations to the Attorney General. To help improve transparency over the AFF’s use of funds, we recommend that the Attorney General provide more detailed information to Congress as part of the AFF’s annual budget process, clearly documenting how DOJ determines the amount of funds to be carried over into the next fiscal year. To help improve management controls over the equitable sharing program, we recommend that the Attorney General direct AFMLS to take the following three actions: Develop and implement additional guidance on how DOJ agencies should apply qualitative factors when making adjustments to equitable sharing percentages. Establish a mechanism to ensure that the basis for equitable sharing determinations—including the work hours contributed by all participating agencies and the rationale for any adjustments to sharing percentages—are recorded in the documents provided to agency headquarters officials for review and approval. Develop a risk-based mechanism to monitor whether key information in support of equitable sharing determinations is recorded and the extent to which sharing determinations are made in accordance with established guidance. We provided a draft of this report to DOJ for its review and comment. DOJ did not provide official written comments to include in our report. However, in an e-mail received on June 21, 2012, the DOJ liaison stated that the department appreciated the opportunity to review the draft report and that DOJ concurred with our recommendations. DOJ further noted that the department will develop a plan of corrective action in order to address the recommendations. 
DOJ also provided us written technical comments, which we incorporated as appropriate. We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. This report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. State and local law enforcement agencies typically qualify for equitable sharing by participating directly with Department of Justice (DOJ) agencies in joint investigations leading to the seizure and forfeiture of property. Agencies may either receive a portion of the proceeds resulting from the sale of the forfeited asset or may request that a forfeited asset such as a vehicle be put into official use. Any property other than contraband or firearms may be transferred to a state or local agency for official use provided that it is used for law enforcement purposes. State and local law enforcement can receive equitable sharing payments after working on a joint case with one or more federal law enforcement partners or after participating in a case carried out by a federal law enforcement task force. Approximately 83 percent of all equitable sharing determinations are the result of joint investigations. State and local law enforcement agencies can also qualify for equitable sharing by requesting that federal partners adopt a case initiated at the state or local level. An adoptive forfeiture occurs when local police officials effectively hand a case over to federal law enforcement officials, provided that the property in question is forfeitable under federal law. According to DOJ officials, many state and local law enforcement agencies will make seizures pursuant to their state laws. However, they may ask federal law enforcement agencies to adopt a forfeiture if they do not have a state or local statute that allows them to carry out the forfeiture themselves. For example, in a particular case, there may be large amounts of cash involved but no drugs found or seized. Federal statute allows for the forfeiture of assets based on illegal activity even if there are no drugs seized, whereas the state or local statute might not allow for this type of forfeiture. Alternatively, state and local law enforcement agencies may request that DOJ adopt a forfeiture in those cases where federal coordination or expertise is needed. Our analysis shows a slight decrease in adoptive relative to non-adoptive equitable sharing payments since 2003: in 2003, adoptions made up about 23 percent of all equitable sharing payments, while in 2010 they made up about 17 percent. According to DOJ, as more states have established their own forfeiture laws, they may rely less on DOJ to adopt forfeiture cases and may instead pursue forfeitures under state law when appropriate. Figure 7 shows the equitable sharing payments made to each state in fiscal year 2011. Our analysis shows a strong positive association between the equitable sharing payments made to each state and the state's total population.
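As a rough illustration of these comparisons (including the per capita analysis discussed immediately below), Pearson correlations can be computed directly from state-level totals. The arrays here are synthetic stand-ins, not the actual fiscal year 2011 payment, population, or arrest data.

```python
import numpy as np

# Synthetic stand-ins for state-level data (not actual FY2011 figures).
payments   = np.array([12.4, 55.1, 8.9, 30.2, 101.7])   # $ millions per state
population = np.array([ 2.9, 19.4, 1.3,  6.7,  37.2])   # millions of residents
arrests    = np.array([95.0, 88.0, 110.0, 92.0, 85.0])  # arrests per 10,000

# Raw payments track population closely...
r_pop = np.corrcoef(payments, population)[0, 1]

# ...but the report found no correlation once payments are
# normalized per capita and compared with arrest rates.
per_capita = payments / population
r_arrest = np.corrcoef(per_capita, arrests)[0, 1]
print(f"payments vs. population: r = {r_pop:.2f}")
print(f"per capita payments vs. arrest rate: r = {r_arrest:.2f}")
```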
However, our analysis found no correlation between per capita equitable sharing payments and arrest rates, once we corrected for population size. It is important to note that a number of other factors may influence the amount of equitable sharing payments a state receives in a given year. For example, if a state or local law enforcement agency participated in a joint investigation that resulted in a very large forfeiture, the agency might receive a significant amount of equitable sharing dollars even if no arrests were made in conjunction with the case. Alternatively, an agency may work several cases that generate multiple arrests but no forfeitures, so no equitable sharing payments would be made. Finally, differences in equitable sharing between states may be influenced by whether state and local law enforcement agencies decide to pursue forfeitures under their own state laws or instead pursue cases where federal involvement may be warranted.
1. Third-Party Payments: Third-party payments are payments to satisfy third-party interests, including lienholders and other innocent parties, pursuant to 28 U.S.C. § 524(c)(1)(D), and payments in connection with the remission and mitigation of forfeitures, pursuant to 28 U.S.C. § 524(c)(1)(E).
2. Equitable Sharing Payments: These funds are reserved until the receipt of the final forfeiture orders that result in distributions to the participants. Equitable sharing payments represent the transfer of portions of federally forfeited cash and proceeds from the sale of forfeited property to state and local law enforcement agencies and foreign governments that directly assisted in targeting or seizing the property. Most task force cases, for example, result in property forfeitures whose proceeds are shared among the participating agencies.
3. Asset Management and Disposal: According to DOJ, the primary purpose of the Assets Forfeiture Fund (AFF) is to ensure an adequate and appropriate source of funding for the management and disposal of forfeited assets. Funding is also required for the assessment, containment, removal, and destruction of hazardous materials seized for forfeiture and of hazardous waste contaminated property seized for forfeiture. The United States Marshals Service (USMS) has primary responsibility for the storage and maintenance of assets, while the Drug Enforcement Administration (DEA) and the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) are responsible for the disposal of toxic and hazardous substances.
4. Case-Related Expenses: Case-related expenses are expenses associated with the prosecution of a forfeiture case or execution of a forfeiture judgment, such as advertising, travel and subsistence, court and deposition reporting, courtroom exhibit services, and expert witness costs. In appropriate cases, the services of foreign counsel may be necessary.
5. Special Contract Services: The AFF uses contract personnel to manage the paperwork associated with forfeiture, including data entry, data analysis, word processing, file control, file review, quality control, case file preparation, and other process support functions.
6. Investigative Expenses Leading to Seizure: Investigative expenses are those normally incurred in the identification, location, and seizure of property subject to forfeiture. These include payments to reimburse any federal agency participating in the AFF for investigative costs leading to seizures.
7. Contracts to Identify Assets: Investigative agencies use these funds for subscription services to nationwide public record data systems and for acquisition of specialized assistance, such as reconstruction of seized financial records. According to DOJ, these resources are used to identify assets during the investigative stage of the case, where such research will enhance effective use of the asset forfeiture sanction. DOJ officials note that if the government can improve upon the identification of ill-gotten assets, the nature of the criminal wrongdoing can be better demonstrated and reinforced before the jury. Such evidence results in greater penalties for criminals who may have avoided such penalties in the past by successfully concealing such assets.
8. Awards for Information Leading to a Forfeiture: The Omnibus Consolidated Appropriations Act, 1997, amended the Justice Fund statute to treat payments of awards based on the amount of the forfeiture the same as other costs of forfeiture. Therefore, the amount available each year for expenses for awards no longer had to be specified in annual appropriations acts.
9. Automated Data Processing: Recurring costs include telecommunications support, system and equipment maintenance, user support and help desk, software maintenance, user training, equipment, and data center charges in support of the Consolidated Asset Tracking System (CATS). All asset forfeiture activity for each asset is recorded in CATS. According to DOJ, CATS enables more than 1,000 locations to access a central database to perform full asset forfeiture life-cycle tasks more efficiently. The system provides current information to field personnel on the status of cases, integrates financial analysis capabilities into the inventory management process, provides the estimation of program income and expenses, and provides the capability for agency and department managers to review and assess program activity.
10. Training and Printing: This category funds expenses for training personnel on aspects of the federal forfeiture program as well as other training necessary to maintain the competency of federal and contractor personnel dedicated to performing federal forfeiture functions. Printing costs reflect the continuing need to provide current legal advice and support. Expenses include updating and distributing manuals and pamphlets directly related to forfeiture issues, policies, and procedures.
11. Other Program Management: This category includes several types of expenses in support of the overall management of the Asset Forfeiture Program, including management analysis, performance assessment, problem analysis, requirements analyses, policy development, and other special projects designed to improve program performance. This funding provides travel and per diem funds for temporary duty assignments needed to correct program deficiencies. Other activities funded under this heading include the annual audit of the financial statements of the Assets Forfeiture Fund and the Seized Asset Deposit Fund by an independent accounting firm and special assessments and reviews. This category also finances the Asset Forfeiture Money Laundering Section (AFMLS), Asset Forfeiture Management Staff (AFMS), and, since 2001, USMS headquarters administrative personnel and non-personnel costs associated with the forfeiture program. In addition, the AFF funds Deputy U.S.
Marshal salaries to enhance the legal and fiduciary responsibilities that are inherent in the seizure of personal and real property during the pendency of a forfeiture action.
12. Storage, Protection, and Destruction of Controlled Substances: These expenses are incurred to store, protect, or destroy controlled substances.
13. Joint Federal, State, and Local Law Enforcement Operations: Under 28 U.S.C. § 524(c)(1)(I), the AFF has authority to pay for "overtime, travel, fuel, training, equipment, and other similar costs of state or local law enforcement officers that are incurred in a joint law enforcement operation with a federal law enforcement agency participating in the Fund."
14. Awards for Information and Purchase of Evidence: Awards payable from the AFF directly support law enforcement efforts by encouraging the cooperation and assistance of informants. The AFF may also be used to purchase evidence of violations of drug laws, Racketeer Influenced and Corrupt Organizations (RICO) statutes, and criminal money laundering laws. According to DOJ, payment of awards to sources of information creates motivation for individuals to assist the government in the investigation of criminal activity and the seizure of assets.
15. Equipping of Conveyances: This category provides funding to equip vehicles, vessels, or aircraft for law enforcement functions, but not to acquire them. Purchased equipment must be affixed to and used integrally with the conveyance. This funding is used for emergency and communications equipment, voice privacy and surveillance equipment, armoring, engine upgrades, and avionic equipment for aircraft. According to DOJ, it is only through AFF resources that many of these surveillance vehicles are available to the field districts that need them. DEA uses various surveillance techniques, including stationary and mobile platforms, to conduct surveillance and gather intelligence, the cornerstone of cases against most major drug violators. In addition, evidence obtained through the use of such surveillance often provides the audio and video documentation necessary for conviction.
DOJ's Asset Forfeiture and Money Laundering Section completed a total of 11 compliance reviews of equitable sharing participants in 2011. Table 4 shows the results of the 11 compliance reviews. In addition to the contact named above, Sandra Burrell and Dawn Locke (Assistant Directors), Sylvia Bascope, Samantha Carter, Raymond Griffith, Mike Harmond, Shirley Hwang, Valerie Kasindi, and Jeremy Manion made key contributions to the report. Also contributing to this report were Lydia Araya, Benjamin Bolitzer, Frances Cook, Katherine Davis, Richard Eiserman, Janet Temko, Mitchell Karpman, Linda Miller, Jan Montgomery, Bintou Njie, Robert Lowthian, Cynthia Saunders, and Jerry Seigler.
Every year, federal law enforcement agencies seize millions of dollars in assets in the course of investigations. The AFF was established to receive the proceeds of forfeiture and holds more than $1 billion in assets. DOJ uses the proceeds from forfeitures primarily to cover the costs of forfeiture activities. DOJ also shares forfeiture proceeds with state and local agencies that participate in joint investigations through its equitable sharing program. GAO was asked to review (1) AFF’s revenues and expenditures from fiscal years 2003 through 2011 and DOJ’s processes for carrying over funds for the next fiscal year, and (2) the extent to which DOJ has established controls to help ensure that the equitable sharing program is implemented in accordance with established guidance. GAO analyzed data on AFF revenues, expenditures, and balances; interviewed DOJ officials; and analyzed a sample of 25 equitable sharing determinations, which included 5 determinations from each relevant DOJ agency. GAO’s analysis of the samples was not generalizable, but provided insight into DOJ’s decisions. Annual revenues into the Assets Forfeiture Fund (AFF) from forfeited assets increased from $500 million in 2003 to $1.8 billion in 2011, in part due to an increase in prosecutions of fraud and financial crimes cases. Expenditures in support of forfeiture activities such as equitable sharing payments to state and local law enforcement agencies and payments to victims also increased over the same 9-year period, growing from $458 million in 2003 to $1.3 billion in 2011. The Department of Justice (DOJ) uses the difference between revenues and expenditures in any year to help cover anticipated expenses in the next fiscal year. Because the AFF uses fund revenues to pay for the expenses associated with forfeiture activities, DOJ carries over funds at the end of each fiscal year to ensure it has sufficient resources to cover expenses that may not be covered by the next year’s revenues. When determining the amounts to carry over, DOJ reviews historical data on past program expenditures, analyzes known future expenses such as salaries and contracts, and estimates the costs of any potential new expenditures. However, DOJ has not documented the process for determining the amount of funds needed to cover anticipated expenditures in the next fiscal year in its annual budget justifications. Providing more transparent information as part of the AFF’s annual budget process would better inform Congress’ oversight of the AFF. Further, after DOJ obligates funds needed to cover program expenses, any remaining AFF funds identified at the end of a fiscal year may be declared an excess unobligated balance. DOJ has the authority to use these balances for any of the department’s authorized purposes. Per Office of Management and Budget guidance, in recent years, DOJ used these excess unobligated balances to help cover rescissions. Rescissions cancel the availability of DOJ’s previously enacted budget authority, making the funds involved no longer available for obligation. For example, in fiscal year 2011, DOJ used excess unobligated balances to help cover a $495 million AFF program rescission. DOJ has established guidelines for making equitable sharing determinations, but controls to ensure consistency and transparency could be improved. 
For example, DOJ agencies responsible for making equitable sharing determinations may make adjustments to sharing percentages when work hours alone do not reflect the relative value of an agency’s contribution to an investigation. If a state or local law enforcement agency contributed a helicopter or a drug-sniffing dog to an investigation, its sharing percentage might be adjusted upward from what it would be based on work hours alone. However, DOJ’s guidance does not include information regarding how decisions about these adjustments to sharing determinations should be made. This is particularly important given that these determinations represent DOJ’s overall assessment of each agency’s unique contributions and are a key component of how DOJ determines how much to award to each agency. Furthermore, key information that serves as the basis for equitable sharing determinations—such as the work hours contributed by each of the participating agencies in an investigation—is not subject to review by approving authorities. Developing guidance regarding how these decisions are to be made, documenting the basis for these decisions, and subjecting them to review and approval would help ensure the consistency and transparency of equitable sharing determinations. GAO recommends that, among other things, DOJ clearly document how it determines the amount of funds that will need to be carried over for the next fiscal year, develop guidance on how components should make adjustments to equitable sharing determinations, and ensure that the basis for equitable sharing determinations is documented and subjected to review and approval. DOJ concurred with GAO’s recommendations.
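To make the mechanics concrete, the sketch below computes initial sharing percentages in proportion to documented work hours and then applies an offsetting qualitative adjustment of the kind described above. The agencies, hours, and adjustment values are hypothetical, and DOJ's deciding authorities apply judgment rather than a fixed formula.

```python
def sharing_percentages(hours, adjustments=None):
    """Initial shares are proportional to documented work hours;
    qualitative adjustments (e.g., contributing a helicopter or a
    drug-sniffing dog) shift percentage points between agencies.
    Values are illustrative, not DOJ's actual method."""
    total = sum(hours.values())
    shares = {agency: 100.0 * h / total for agency, h in hours.items()}
    for agency, delta in (adjustments or {}).items():
        shares[agency] += delta
    # Adjustments should net to zero so shares still sum to 100.
    assert abs(sum(shares.values()) - 100.0) < 1e-6
    return shares

# A hypothetical joint case: 600 federal hours, 400 local hours, with a
# 5-point upward adjustment for the local agency's unique contribution.
print(sharing_percentages({"FBI": 600, "City PD": 400},
                          adjustments={"City PD": +5, "FBI": -5}))
```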
Boeing and TRW disclosed the key results and limitations of Integrated Flight Test 1A in written reports released between August 13, 1997, and April 1, 1998. The contractors explained in a report issued 60 days after the June 1997 test that the test achieved its primary objectives, but that some sensor abnormalities were noted. For example, while the report explained that the sensor detected the deployed targets and collected some usable target signals, the report also stated that some sensor components did not operate as desired and the sensor often detected targets where there were none. In December 1997, the contractors documented other test anomalies. According to briefing charts prepared for a December meeting, the Boeing sensor tested in Integrated Flight Test 1A had a low probability of detection; the sensor's software was not always confident that it had correctly identified some target objects; the software significantly increased the rank of one target object toward the end of the flight; and in-flight calibration of the sensor was inconsistent. Additionally, on April 1, 1998, the contractors submitted an addendum to an earlier report that noted two more problems. In this addendum, the contractors disclosed that their claim that TRW's software successfully distinguished a mock warhead from decoys during a post-flight analysis was based on tests of the software using about one-third of the target signals collected during Integrated Flight Test 1A. The contractors also noted that TRW reduced the software's reference data so that it would correspond to the collected target signals being analyzed. Project office and Nichols Research officials said that in late August 1997, the contractors orally communicated to them all problems and limitations that were subsequently described in the December 1997 briefing and the April 1998 addendum. However, neither project officials nor contractors could provide us with documentation of these communications. Although the contractors reported the test's key results and limitations, they described the results using some terms that were not defined. For example, one written report characterized the test as a "success" and the sensor's performance as "excellent." We found that the information in the contractors' reports, in total, enabled officials in the Ground Based Interceptor Project Management Office and Nichols Research to understand the key results and limitations of the test. However, because such terms are qualitative and subjective rather than quantitative and objective, their use increased the likelihood that test results would be interpreted in different ways and might even be misunderstood. As part of our ongoing review of missile defense testing, we are examining the need for improvements in test reporting. Appendix I provides details on the test and the information disclosed. The Ground Based Interceptor Project Management Office relied on an on-site engineer and Nichols Research Corporation to provide insight into Boeing's work. The project office also relied on Boeing to oversee the performance of its subcontractor, TRW. Oversight was limited by the ongoing competition between Boeing and another contractor competing for the exoatmospheric kill vehicle contract because the Ground Based Interceptor Project Management Office and its support contractors had to be careful not to affect competition by assisting one contractor more than another.
Project officials said that they relied on "insight" into the contractors' work rather than oversight of that work. Nichols gained program insight by attending technical meetings, assessing test reports, and sometimes evaluating technologies proposed by Boeing and TRW. For more information on how the project office exercised oversight over its contractors' technical performance, see appendix II. Boeing and TRW reported that post-flight testing and analysis of data collected during Integrated Flight Test 1A showed that deployed target objects displayed distinguishable features when observed by an infrared sensor. The contractors reported the test also showed that Boeing's exoatmospheric kill vehicle sensor could collect target signals from which TRW's software could extract distinguishable features and that the software could identify the mock warhead from other objects by comparing the extracted features to the features that it had been told to expect each object to display. However, there has been no independent verification of these claims. We talked with Dr. Mike Munn, who was, during the 1980s, the Chief Scientist for missile defense programs at Lockheed Missiles and Space Company. He agreed that a warhead and decoys deployed in the exoatmosphere likely display distinguishable differences in the infrared spectrum. However, the differences may not be fully understood, or there may not presently be methods to predict the differences. Dr. Munn added that the key was in the ability to make both accurate and precise measurements and also to predict signatures accurately. He emphasized that robust discrimination depends on the ability to predict signatures and then to match in-space measurements with those predictions. The Phase One Engineering Team and Nichols Research Corporation have noted that TRW's software used prior knowledge of warhead and decoy differences, to the maximum extent available, to discriminate one object from the other, and cautioned that such knowledge may not always be available in the real world. National Missile Defense program officials said that after considerable debate among themselves and contractors, the program manager reduced the number of decoys planned for intercept flight tests in response to a recommendation by an independent panel, known as the Welch Panel. The panel, established to reduce risk in ballistic missile defense flight test programs, viewed a successful hit-to-kill engagement as a difficult task that should not be further complicated in early tests by the addition of decoys. After contemplating the advice of the Welch panel and considering the opinions of program officials and contractors who disagreed over the number and complexity of decoys that should be deployed in future tests, the program manager decided that early tests should include only one decoy, a large balloon. See appendix III for more information on the reduction of decoys in later tests. The Phase One Engineering Team was tasked by the National Missile Defense Joint Program Office to assess the performance of TRW's software and to complete the assessment within 2 months using available data. The team's methodology included determining if TRW's software was based on sound mathematical, engineering, and scientific principles and testing the software's critical modules using data from Integrated Flight Test 1A.
The team reported that although the software had weaknesses, it was well designed and worked properly, with only some changes needed to increase the robustness of the discrimination function. Further, the team reported that its test of the software using Integrated Flight Test 1A data produced essentially the same results as those reported by TRW. Based on its analysis, team members predicted that the software would perform successfully in a future intercept test if target objects deployed as expected. Because the Phase One Engineering Team did not process the raw data from Integrated Flight Test 1A or develop its own reference data, the team cannot be said to have definitively proved or disproved TRW's claim that its software successfully discriminated the mock warhead from decoys using data collected from Integrated Flight Test 1A. A team member told us the team's use of Boeing- and TRW-provided data was appropriate because the former TRW employee had not alleged that the contractors tampered with the raw test data or used inappropriate reference data. Appendix IV provides additional details on the Phase One Engineering Team evaluation. In commenting on a draft of this report, the Department of Defense concurred with our findings. It also suggested technical changes, which we incorporated as appropriate. The Department's comments are reprinted in appendix VII. We conducted our review from August 2000 through February 2002 in accordance with generally accepted government auditing standards. Appendix VI provides details on our scope and methodology. The National Missile Defense Joint Program Office's process for releasing documents significantly slowed our work. For example, the program office took approximately 4 months to release key documents such as the Phase One Engineering Team's response to the professor's allegations. We requested these and other documents on September 14, 2000, and received them on January 9, 2001. As arranged with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we plan to provide copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Senate Committee on Appropriations, Subcommittee on Defense; the House Committee on Armed Services; and the House Committee on Appropriations, Subcommittee on Defense; the Secretary of Defense; and the Director, Missile Defense Agency. We will make copies available to others upon request. If you or your staff have any questions concerning this report, please contact Bob Levin, Director, Acquisition and Sourcing Management, on (202) 512-4841; Jack Brock, Managing Director, on (202) 512-4841; or Keith Rhodes, Chief Technologist, on (202) 512-6412. Major contributors to this report are listed in appendix VIII. Boeing and TRW disclosed the key results and limitations of an early sensor flight test, known as Integrated Flight Test 1A, to the Ground Based Interceptor Project Management Office. The contractors included some key results and limitations in written reports submitted soon after the June 1997 test, but others were not included in written reports until December 1997 or April 1998. However, according to project office and Nichols officials, all problems and limitations included in the written reports were communicated orally to the project management office in late August 1997.
The deputy project office manager said his office did not report these verbal communications to others within the Program Office or the Department of Defense because the project office was the office within the Department responsible for the Boeing contract. One problem that was included in initial reports to program officials was a malfunctioning cooling mechanism that did not lower the sensor's temperature to the desired level. Boeing characterized the mechanism's performance as somewhat below expectations but functioning well enough for the sensor's operation. We hired experts to determine the extent to which the problem could affect the sensor's performance. The experts found that the cooling problem degraded the sensor's performance in a number of ways, but would not likely result in extreme performance degradation. The experts studied only how increased noise affected the sensor's performance in terms of the comparative strengths of the target signals and the noise (the signal-to-noise ratio). The experts did not evaluate discrimination performance, which is dependent on the measurement accuracy of the collected infrared signals. The experts' findings are discussed in more detail later in this appendix. Integrated Flight Test 1A, conducted in June 1997, was a test of the Boeing sensor: a highly sensitive, compact infrared device, consisting of an array of silicon detectors, that is normally mounted on the exoatmospheric kill vehicle. However, in this test, a surrogate launch vehicle carried the sensor above the earth's atmosphere to view a cluster of target objects that included a mock warhead and various decoys. When the sensor detected the target cluster, its silicon detectors began to make precise measurements of the infrared radiation emitted by the target objects. Over the tens of seconds that the target objects were within its field of view, the sensor continuously converted the infrared radiation into an electrical current, or signal, proportional to the amount of energy collected by the detectors. The sensor then digitized the signal (converted the signals into numerical values), completed a preliminary part of the planned signal processing, and formatted the signal so that it could be transmitted via a data link to a recorder on the ground. After the test, Boeing processed the signals further and formatted them so that TRW could input the signals into its discrimination software to assess its capability to distinguish the mock warhead from decoys. In post-flight ground testing, the software analyzed the processed data and identified the key characteristics, or features, of each signal. The software then compared the features it extracted to the expected features of various types of target objects. Based on this comparison, the software ranked each item according to its likelihood of being the mock warhead. TRW reported that the highest-ranked object was the mock warhead. The primary objective of Integrated Flight Test 1A was to reduce risk in future flight tests. Specifically, the test was designed to determine if the sensor could operate in space; to examine the extent to which the sensor could detect small differences in infrared emissions; to determine if the sensor was accurately calibrated; and to collect target signature data for post-mission discrimination analysis. In addition, Boeing established quantitative requirements for the test. For example, the sensor was expected to acquire the target objects at a specified distance.
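The post-flight discrimination sequence described above (extract features from each tracked object's signal, compare them with the reference features each object type is expected to display, and rank the objects) can be sketched in simplified form. The features, reference values, and distance metric below are invented for illustration and do not represent TRW's actual algorithm.

```python
import math

# Hypothetical reference data: features each object type is expected
# to display (here, mean intensity and a fluctuation measure).
REFERENCE = {
    "mock warhead": (4.2, 0.30),
    "large balloon": (9.1, 0.80),
    "small decoy": (1.5, 0.55),
}

def extract_features(signal):
    """Reduce a time series of intensity samples to two features:
    mean intensity and root-mean-square fluctuation about the mean."""
    mean = sum(signal) / len(signal)
    rms = math.sqrt(sum((s - mean) ** 2 for s in signal) / len(signal))
    return mean, rms

def rank_objects(tracks):
    """Score each tracked object by its feature distance from the
    warhead reference; the smallest distance ranks first."""
    ref = REFERENCE["mock warhead"]
    scores = {}
    for name, signal in tracks.items():
        f = extract_features(signal)
        scores[name] = math.dist(f, ref)  # Euclidean distance in feature space
    return sorted(scores, key=scores.get)

tracks = {"object A": [4.0, 4.3, 4.5, 4.1], "object B": [9.0, 8.7, 9.4, 9.2]}
print(rank_objects(tracks))  # object most like the warhead reference first
```

This also makes concrete why noisy signals matter: noise shifts the extracted features, which can move a decoy closer to the warhead reference than the warhead itself.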
According to a Nichols’ engineer, Boeing established these requirements to ensure that its exoatmospheric kill vehicle, when fully developed, could destroy a warhead with the single shot precision (expressed as a probability) required by the Ground Based Interceptor Project Management Office. The engineer said that in Integrated Flight Test 1A, Boeing planned to measure its sensor’s performance against these lower-level requirements so that Boeing engineers could determine which sensor elements, including the software, required further refinement. However, the engineer told us that because of the various sensor problems, of which the contractor and project office were aware, Boeing determined before the test that it would not use most of these requirements to judge the sensor’s performance. (Although Boeing did not judge the performance of its sensor against the requirements as it originally planned, Boeing did, in some cases, report the sensor’s performance in terms of these requirements. For a summary of selected test requirements and the sensor’s performance as reported by Boeing and TRW in their August 22, 1997, report, see app. V.) Table 1 provides details on the key results and limitations of Integrated Flight Test 1A that contractors disclosed in various written reports and briefing charts. Although the contractors disclosed the key results and limitations of the flight test in written reports and in discussions, the written reports described the results using some terms that were not defined. For example, in their August 22, 1997, report, Boeing and TRW described Integrated Flight Test 1A as a “success” and the performance of the Boeing sensor as “excellent.” We asked the contractors to explain their use of these terms. We asked Boeing, for example, why it characterized its sensor’s performance as “excellent” when the sensor’s silicon detector array did not cool to the desired temperature, the sensor’s power supply created excess noise, and the sensor detected numerous false targets. Boeing said that even though the silicon detector array operated at temperatures 20 to 30 percent higher than desired, the sensor produced useful data. Officials said they knew of no other sensor that would be capable of producing any useful data under those conditions. Boeing officials went on to say that the sensor continuously produced usable, and, much of the time, excellent data in “real-time” during flight. In addition, officials said the sensor component responsible for suppressing background noise in the silicon detector array performed perfectly in space and the silicon detectors collected data in more than one wave band. Boeing concluded that the sensor’s performance allowed the test to meet all mission objectives. Based on our review of the reports and discussions with officials in the Ground Based Interceptor Project Management Office and Nichols Research, we found that the contractors’ reports, in total, contained information for those officials to understand the key results and limitations of the test. However, because terms such as “success” and “excellent” are qualitative and subjective rather than quantitative and objective, we believe their use increases the likelihood that test results would be interpreted in different ways and could even be misunderstood. As part of our ongoing review of missile defense testing, we are examining the need for improvements in test reporting. This report, sometimes referred to as the 45-day report, was a series of briefing charts. 
In it, contractors reported that Integrated Flight Test 1A achieved its principal objectives of reducing risks for subsequent flight tests, demonstrating the performance of the exoatmospheric kill vehicle's sensor, and collecting target signature data. In addition, the report stated that TRW's software successfully distinguished a mock warhead from accompanying decoys. The August 22 report, known as the 60-day report, was a lengthy document that disclosed much more than the August 13 report. As discussed in more detail below, the report explained that some sensor abnormalities were observed during the test, that some signals collected from the target objects were degraded, that the launch vehicle carrying the sensor into space adversely affected the sensor's ability to collect target signals, and that the sensor sometimes detected targets where there were none. These problems were all noted in the body of the report, but the report summary stated that review and analysis subsequent to the test confirmed the "excellent" performance and nominal operation of all sensor subsystems. Boeing disclosed in the report that sensor abnormalities were observed during the test and that the sensor experienced a higher than expected false alarm rate. These abnormalities were (1) a cooling mechanism that did not bring the sensor's silicon detectors to the intended operating temperature, (2) a power supply unit that created excess noise, and (3) software that did not function as designed because of the slow turnaround of the surrogate launch vehicle. In the report's summary, Boeing characterized the cooling mechanism's performance as somewhat below expectations but functioning well enough for the sensor's operation. In the body of the report, Boeing said that the fluctuations in temperature could lead to an apparent decrease in sensor performance. Additionally, Boeing engineers told us that the cooling mechanism's failure to bring the silicon detector array to the required temperature caused the detectors to be noisy. Because the discrimination software identifies objects as a warhead or a decoy by comparing the features of a target's signal with those it expects a warhead or decoy to display, a noisy signal may confuse the software. Boeing and TRW engineers said that they and program office officials were aware that there was a problem with the sensor's cooling mechanism before the test was conducted. However, Boeing believed that the sensor would perform adequately at higher temperatures. According to contractor documents, the sensor did not perform as well as expected, and some target signals were degraded more than anticipated. The report also referred to a problem with the sensor's power supply unit and its effect on target signals. An expert we hired to evaluate the sensor's performance at higher than expected temperatures found that the power supply, rather than the temperature, was the primary cause of excess noise early in the sensor's flight. Boeing engineers told us that they were aware that the power supply was noisy before the test, but, as shown by the test, it was worse than expected. The report explained that, as expected before the flight, the slow turnaround of the massive launch vehicle on which the sensor was mounted in Integrated Flight Test 1A caused the loss of some target signals. Engineers explained to us that the sensor would eventually be mounted on the lighter, more agile exoatmospheric kill vehicle, which would move back and forth to detect objects that did not initially appear in the sensor's field of view. The engineers said that Boeing designed software that takes into account the kill vehicle's normal motion to remove the background noise, but the software's effectiveness depended on the fast movement of the kill vehicle. Boeing engineers told us that, because of the slow turnaround of the launch vehicle used in the test, the target signals detected during the turnaround were particularly noisy and the software sometimes removed not only the noise but the entire signal as well. The report mentioned that the sensor experienced more false alarms than expected. A false alarm is a detection of a target that is not there. According to the experts we hired, during Integrated Flight Test 1A, the Boeing sensor often mistakenly identified noise produced by the power supply as signals from actual target objects. In a fully automated discrimination software program, a high false alarm rate could overwhelm the tracking software. Because the post-flight processing tools were not fully developed at the time of the August 13 and August 22, 1997, reports, Boeing did not rely upon a fully automated tracking system when it processed the Integrated Flight Test 1A data. Instead, a Boeing engineer manually tracked the target objects. The contractors realized, and reported to the Ground Based Interceptor Project Management Office, that numerous false alarms could cause problems in future flight tests, and they identified software changes to reduce their occurrence. On December 11, 1997, Boeing and TRW briefed officials from the Ground Based Interceptor Project Management Office and one of its support contractors on various anomalies observed during Integrated Flight Test 1A.
The contractors’ briefing charts explained the effect the anomalies could have on Integrated Flight Test 3, the first planned intercept test for the Boeing exoatmospheric kill vehicle, identified potential causes of the anomalies, and summarized the solutions to mitigate their effect. While some of the anomalies included in the December 11 briefing charts were referred to in the August 13 and August 22 reports, others were being reported in writing for the first time. The anomalies referenced in the briefing charts included the sensor’s high false alarm rate, the silicon detector array’s higher-than-expected temperature, the software’s low confidence factor that it had correctly identified two target objects correctly, the sensor’s lower than expected probability of detection, and the software’s elevation in rank of one target object toward the end of the test. In addition, the charts showed that an in-flight attempt to calibrate the sensor was inconsistent. According to the charts, actions to prevent similar anomalies from occurring or impacting Integrated Flight Test 3 had in most cases already been implemented or were under way. The contractors again recognized that a large number of false alarms occurred during Integrated Flight Test 1A. According to the briefing charts, false alarms occurred during the slow turnarounds of the surrogate launch vehicle. Additionally, the contractors hypothesized that some false alarms resulted from space-ionizing events. By December 11, engineers had identified solutions to reduce the number of false alarms in future tests. As they had in the August 22, 1997, report, the contractors recognized that the silicon detector array did not cool properly during Integrated Flight Test 1A. The contractors reported that higher silicon detector array temperatures could cause noisy signals that would adversely impact the detector array’s ability to estimate the infrared intensity of observed objects. Efforts to eliminate the impact of the higher temperatures, should they occur in future tests, were on-going at the time of the briefing. Contractors observed that the confidence factor produced by the software was small for two target objects. The software equation that makes a determination as to how confident the software should be to identify a target object correctly, did not work properly for the large balloon or multiple-service launch vehicle. Corrections to the equation had been made by the time of the briefing. The charts state that the Integrated Flight Test 1A sensor had a lower than anticipated probability of detection and a high false alarm rate. Because a part of the tracking, fusion, and discrimination software was designed for a sensor with a high probability of detection and a low false alarm rate, the software did not function optimally and needed revision. Changes to prevent this from happening in future flight tests were under way. The briefing charts showed that TRW’s software significantly increased the rank of one target object just before target objects began to leave the sensor’s field of view. Although a later Integrated Flight Test 1A report stated the mock warhead was consistently ranked as the most likely target, the charts show that if in Integrated Flight Test 3 the same object’s rank began to increase, the software could select the object as the intercept target. In the briefing charts, the contractors reported that TRW made a software change in the model that is used to generate reference data. 
When reference data was generated with the software change, the importance of the mock warhead increased, and it was selected as the target. Tests of the software change were in progress as of December 11. The Boeing sensor measures the infrared emissions of target objects by converting the collected signals into intensity with the help of calibration data obtained from the sensor prior to flight. However, the sensor was not calibrated at the higher temperature range that was experienced during Integrated Flight Test 1A. To remedy the problem, the sensor viewed a star with known infrared emissions. The measurement of the star's intensity was to have helped fill the gaps in the calibration data that was essential to making accurate measurements of the target object signals. Boeing disclosed that the corrections based on the star calibration were inconsistent and did not improve the match of calculated and measured target signatures. Boeing subsequently told us that the star calibration corrections were effective for one of the wavelength bands but not for another, and that the inconsistency referred to in the briefing charts was in how these bands behaved at temperatures above the intended operating range. Efforts to find and implement solutions were in progress. On April 1, 1998, Boeing submitted a revised addendum to replace an addendum that had accompanied the August 22, 1997, report. This revised addendum was prepared in response to comments and questions submitted by officials from the Ground Based Interceptor Project Management Office, Nichols Research Corporation, and the Defense Criminal Investigative Service concerning the August 22 report. In this addendum, the contractors referred in writing to three problems and limitations that had not been addressed in earlier written test reports or the December 11 briefing. Contractors noted that a gap-filling module, which was designed to replace noisy or missing signals, did not operate as designed. They also disclosed that TRW's analysis of its discrimination software used target signals collected during a selected portion of the flight timeline and used a portion of the Integrated Flight Test 1A reference data that corresponded to this same timeline. The April 1 addendum reported that the gap-filling module, which was designed to replace portions of noisy or missing target signals with expected signal values, did not operate as designed. TRW officials told us that the module's replacement values were too conservative and resulted in a poor match between collected signals and the signals the software expected the target objects to display. The April 1, 1998, addendum also disclosed that the August 13 and August 22 reports, in which TRW conveyed that its software successfully distinguished the mock warhead from decoys, were based on tests of the software using about one-third of the target signals collected during Integrated Flight Test 1A. We talked to TRW officials who told us that Boeing provided several data sets to TRW, including the full data set. The officials said that Boeing provided target signals from the entire timeline to a TRW office that was developing a prototype version of the exoatmospheric kill vehicle's tracking, fusion, and discrimination software, which was not yet operational. However, TRW representatives said that the test bed version of the software, which TRW was using so that it could submit its analysis within 60 days of Integrated Flight Test 1A, could not process the full data set.
The officials said that shortly before the August 22 report was issued, the prototype version of the tracking, fusion, and discrimination software became functional and engineers were able to use the software to assess the expanded set of target signals. According to the officials, this assessment also resulted in the software's selecting the mock warhead as the most likely target. In our review of the August 22 report, we found no analysis of the expanded set of target signals. The April 1, 1998, report did include an analysis of a few additional seconds of data collected near the end of Integrated Flight Test 1A, but did not include an analysis of target signals collected at the beginning of the flight. Most of the signals that were excluded from TRW's discrimination analysis were collected during the early part of the flight, when the sensor's temperature was fluctuating. TRW told us that its software was designed to drop a target object's track if the tracking portion of the software received no data updates for a defined period. This design feature was meant to reduce false tracks that the software might establish if the sensor detected targets where there were none. In Integrated Flight Test 1A, the fluctuation of the sensor's temperature caused the loss of target signals. TRW engineers said that Boeing recognized that this interruption would cause TRW's software to stop tracking all target objects and restart the discrimination process. Therefore, Boeing focused its efforts on processing those target signals that were collected after the sensor's temperature stabilized and signals were collected continuously. Some signals collected during the last seconds of the sensor's flight were also excluded. The former TRW employee alleged that these latter signals were excluded because during this time a decoy was selected as the target. The Phase One Engineering Team cited one explanation for the exclusion of the signals. The team said that TRW stopped using data when objects began leaving the sensor's field of view. Our review did not confirm this explanation. We reviewed the target intensities derived from the infrared frames covering that period and found that several seconds of data were excluded before objects began to leave the field of view. Boeing officials gave us another explanation. They said that target signals collected during the last few seconds of the flight were streaking, or blurring, because the sensor was viewing the target objects as it flew by them. Boeing told us that streaking would not occur in an intercept flight because the kill vehicle would have continued to approach the target objects. We could not confirm that the test of TRW's discrimination software, as explained in the August 22, 1997, report, included all target signals that did not streak. We noted that the April 1, 1998, addendum shows that TRW analyzed several more seconds of target signals than is shown in the August 22, 1997, report. It was in these additional seconds that the software began to increase the rank of one decoy as it assessed which target object was most likely the mock warhead. However, the April 1, 1998, addendum also shows that even though the decoy's rank increased, the software continued to rank the mock warhead as the most likely target.
However, because the April 1 addendum did not present the full Integrated Flight Test 1A timeline, we could not determine whether any portion of the excluded timeline contained useful data or whether, with additional seconds of useful data, a target object other than the mock warhead might have been ranked as the most likely target. The April 1 addendum also documented that portions of the reference data developed for Integrated Flight Test 1A were excluded from the discrimination analysis. Nichols and project office officials told us the software identifies the various target objects by comparing the target signals collected from each object at a given point in their flight to the target signals it expects each object to display at that same point in the flight. Therefore, when target signals collected during a portion of the flight timeline are excluded, reference data developed for the same portion of the timeline must also be excluded. Officials in the National Missile Defense Joint Program Office’s Ground Based Interceptor Project Management Office and Nichols Research told us that soon after Integrated Flight Test 1A the contractors orally disclosed all of the problems and limitations cited in the December 11, 1997, briefing and the April 1, 1998, addendum. Contractors made these disclosures to project office and Nichols Research officials during meetings that were held to review Integrated Flight Test 1A results sometime in late August 1997. The project office and contractors could not, however, provide us with documentation of these disclosures. The current Ground Based Interceptor Project Management Office deputy manager said that the problems that contractors discussed with his office were not specifically communicated to others within the Department of Defense because his office was responsible within the department for the Boeing contract. The project office’s assessment was that these problems did not compromise the reported success of the mission, were similar in nature to problems normally found in initial developmental tests, and could be easily corrected. Because we questioned whether Boeing’s sensor could collect any usable target signals if the silicon detector array was not cooled to the desired temperature, we hired sensor experts at Utah State University’s Space Dynamics Laboratory to determine the extent to which the sub-optimal cooling degraded the sensor’s performance. These experts concluded that the higher temperature of the silicon detectors degraded the sensor’s performance in a number of ways but did not result in extreme degradation. For example, the experts said the higher temperature reduced by approximately 7 percent the distance at which the sensor could detect targets. The experts also said that the rapid temperature fluctuation at the beginning and at the end of data acquisition contributed to the number of times that the sensor detected a false target. However, the experts said the major cause of the false alarms was power supply noise that contaminated the electrical signals generated by the sensor in response to the infrared energy. When the sensor signals were processed after Integrated Flight Test 1A, the noise appeared as objects, but these detections were actually false alarms.
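The experts’ 7 percent figure is consistent with a first-order model of infrared detection: a point source’s irradiance at the sensor falls off with the square of range, so the maximum detection range scales as the square root of target intensity divided by the sensor’s noise floor. The sketch below works through that relationship; the intensity, noise, and threshold values are hypothetical, chosen only to show how a modest rise in the noise floor produces a roughly 7 percent loss of detection range.

```python
import math

def max_detection_range(target_intensity, noise_floor, snr_required):
    """Range at which a point source still exceeds the detection threshold.

    Detection requires irradiance >= noise_floor * snr_required, and
    irradiance = target_intensity / range**2, so the maximum range is
    sqrt(target_intensity / (noise_floor * snr_required)).
    Units are consistent but arbitrary; all values here are hypothetical.
    """
    return math.sqrt(target_intensity / (noise_floor * snr_required))

nominal = max_detection_range(1.0, 1.00e-13, 6.0)
# A roughly 16 percent rise in the effective noise floor, from warmer
# detectors or power supply noise, shortens detection range by about
# 7 percent, in line with the degradation the experts described.
degraded = max_detection_range(1.0, 1.16e-13, 6.0)
print(f"range loss: {100 * (1 - degraded / nominal):.1f}%")  # ~7.2%
```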
Additionally, the experts said that the precision with which the sensor could estimate the infrared energy emanating from an object, based on the electrical signal the energy produced, was especially degraded in one of the sensor’s two infrared wave bands. In their report, the experts said that the Massachusetts Institute of Technology’s Lincoln Laboratory analyzed the precision with which the Boeing sensor could measure infrared radiation and found large errors in measurement accuracy. The Utah State experts said that their determination that the sensor’s measurement capability was degraded in one infrared wave band might partially explain the errors found by Lincoln Laboratory. Although Boeing’s sensor did not cool to the desired temperature during Integrated Flight Test 1A, the experts found that an obstruction in gas flow, rather than the sensor’s design, was at fault. These experts said the sensor’s cooling mechanism was properly designed and that Boeing’s sensor design was sound. The Ground Based Interceptor Project Management Office used several sources to monitor the contractors’ technical performance, but oversight activities were limited by the ongoing exoatmospheric kill vehicle contract competition between Boeing and Raytheon. Specifically, the project office relied on an engineer and a System Engineering and Technical Analysis contractor, Nichols Research Corporation, to provide insight into Boeing’s work. The project office also relied on Boeing to oversee TRW’s performance. The deputy manager of the Ground Based Interceptor Project Management Office told us that competition between Boeing and Raytheon limited oversight to some extent. He said that because of the ongoing competition, the project office monitored the two contractors’ progress but was careful not to affect the competition by assisting one contractor more than the other. The project office primarily ensured that the contractors abided by their contractual requirements. The project office deputy manager told us that his office relied on “insight” into the contractors’ work rather than oversight of that work. The project office gained insight by placing an engineer on-site at Boeing and by tasking Nichols Research Corporation to attend technical meetings, assess test reports, and, in some cases, evaluate Boeing’s and TRW’s technologies. The on-site engineer was responsible for observing the performance of Boeing and TRW and relaying any problems back to the project office. He did not have authority to provide technical direction to the contractors. According to the Ground Based Interceptor Project Management Office deputy manager, Nichols essentially “looked over the shoulder” of Boeing and TRW. We observed evidence of Nichols’ insight in memorandums that Nichols’ engineers submitted to the project office suggesting questions that should be asked of the contractors, in memorandums documenting engineers’ comments on various contractor reports, and in trip reports recorded by the engineers after various technical meetings. Boeing said its oversight of TRW’s work complied with contract requirements.
The contract between the Department of Defense and Boeing required Boeing to declare that “to the best of its knowledge and belief, the technical data delivered is complete, accurate, and complies with all requirements of the contract.” With regard to Integrated Flight Test 1A, Boeing officials said that they complied with this provision by selecting a qualified subcontractor, TRW, to develop the discrimination concepts, software, and system design in support of the flight tests, and by holding weekly team meetings with subcontractor and project office officials. Boeing officials stated that they were not required to verify the validity of their subcontractor’s flight test analyses; rather, they were only required to verify that the analyses seemed reasonable. According to Boeing officials, both they and the project office shared the belief that TRW possessed the necessary technical expertise in threat phenomenology modeling, discrimination, and target tracking, and both relied on TRW’s expertise. National Missile Defense Joint Program Office officials said that they reduced the number of decoys planned for intercept flight tests in response to a recommendation by an independent panel, known as the Welch Panel. The panel, established to reduce risk in ballistic missile defense flight test programs, viewed a successful hit-to-kill engagement as a difficult task that should not be further complicated in early tests by the addition of decoys. In weighing the panel’s advice, the program manager discussed various target options with other program officials and the contractors competing to develop and produce the system’s exoatmospheric kill vehicle. The officials disagreed on the number of decoys that should be deployed in the first intercept flight tests. Some recommended using the same target set deployed in Integrated Flight Tests 1A and 2, while others wanted to eliminate some decoys. After considering the differing viewpoints, the program manager decided to deploy only one decoy—a large balloon—in early intercept tests. As flight tests began in 1997, the National Missile Defense Joint Program Office was planning two sensor tests—Integrated Flight Tests 1A and 2—and 19 intercept tests. The primary objective of the sensor flight tests was to reduce risk in future flight tests. Specifically, the tests were designed to determine whether the sensor could operate in space, to examine the extent to which the sensor could detect small differences in infrared emissions, to determine whether the sensor was accurately calibrated, and to collect target signature data for post-mission discrimination analysis. Initially, the next two flight tests were to demonstrate the ability of the competing kill vehicles to intercept a mock warhead: Integrated Flight Test 3 was to test the Boeing kill vehicle, and Integrated Flight Test 4 was to test the Raytheon kill vehicle. Table 1 shows the number of target objects deployed in the two sensor tests, the number of objects originally planned to be deployed in the first two intercept attempts, and the number of objects actually deployed in the intercept attempts. By the time Integrated Flight Tests 3 and 4 were actually conducted, Boeing had become the National Missile Defense Lead System Integrator and had selected Raytheon’s exoatmospheric kill vehicle for use in the National Missile Defense system. Boeing conducted Integrated Flight Test 3 (in October 1999) and Integrated Flight Test 4 (in January 2000) with the Raytheon kill vehicle.
However, both of these flight tests used only the mock warhead and one large balloon, rather than the nine objects originally planned. Integrated Flight Test 5 (flown in July 2000) also used only the mock warhead and one large balloon. Program officials told us that the National Missile Defense Program Manager decided to reduce the number of decoys used in Integrated Flight Tests 3, 4, and 5 based on the findings of an expert panel. This panel, known as the Welch Panel, reviewed the flight test programs of several Ballistic Missile Defense Organization programs, including the National Missile Defense program. The resulting report, which was released shortly after Integrated Flight Test 2, found that U.S. ballistic missile defense programs, including the National Missile Defense program, had not yet demonstrated that they could reliably intercept a ballistic missile warhead using the technology known as “hit-to-kill.” Numerous failures had occurred in several of these programs, and the Welch Panel concluded that the National Missile Defense program (as well as other programs using “hit-to-kill” technology) needed to demonstrate that it could reliably intercept simple targets before it attempted to demonstrate that it could hit a target accompanied by decoys. The panel reported again 1 month after Integrated Flight Test 3 and came to the same conclusion. The Director of the Ballistic Missile Defense Organization testified at a congressional hearing that the Welch Panel advocated removing all decoys from the initial flight tests, but that the Ballistic Missile Defense Organization opted to include a limited discrimination requirement with the use of one decoy. Nevertheless, he said that the primary purpose of the tests was to demonstrate the system’s “hit-to-kill” capability. Program officials said there was disagreement within the Joint Program Office and among the key contractors as to how many targets to use in the early intercept flight tests. Raytheon and one high-ranking program official wanted Integrated Flight Tests 3, 4, and 5 to include target objects identical to those deployed in the sensor flight tests. Boeing and other program officials wanted to deploy fewer target objects. After considering all options, the Joint Program Office decided to deploy a mock warhead and one decoy—a large balloon. Raytheon officials told us that they discussed the number of objects to be deployed in Integrated Flight Tests 3, 4, and 5 with program officials and recommended using the same target set as deployed in Integrated Flight Tests 1A and 2. Raytheon believed that this approach would be less risky because it would not require revisions to the kill vehicle’s software. Raytheon and program officials told us that Raytheon was confident that it could successfully identify and intercept the mock warhead even with this larger target set. One high-ranking program official said that she objected to reducing the number of decoys used in Integrated Flight Test 3 because there was a need to test the system more completely. However, other program officials lobbied for a smaller target set. One program official said that his position was based on the Welch Panel’s findings and on the fact that the program office was not concerned at that time about discrimination capability.
He added that the National Missile Defense program was responding to the threat of “nations of concern,” which could develop only simple targets, rather than major nuclear powers, which were more likely to be able to deploy decoys. The Boeing/TRW team also wanted to reduce the number of decoys used in the first intercept tests. In a December 1997 study, the companies recommended that Integrated Flight Test 3 be conducted with a total of four objects—the mock warhead, the two small balloons, and the large balloon. (The multi-service launch system was not counted as one of the objects.) The study cited concerns about the inclusion of decoys that were not part of the initially expected threat and about the need to reduce risk. Boeing said that if the target objects did not deploy from the test missile as expected, the risk increased significantly that the exoatmospheric kill vehicle would not intercept the mock warhead. According to Boeing/TRW, as the types and number of target objects increased, the potential risk that the target objects would be different in some way from what was expected also increased. Specifically, the December 1997 study noted that the medium balloons had been in inventory for some time and had not deployed as expected in other tests, including Integrated Flight Test 1A. In that test, one medium balloon only partially inflated and was not positioned within the target cluster as expected. The study also found that the medium rigid light replicas were the easiest to misdeploy and that the small canisterized light replica moved differently than expected during Integrated Flight Test 1A. In 1998, the National Missile Defense Joint Program Office asked the Phase One Engineering Team to conduct an assessment, using available data, of TRW’s discrimination software even though Nichols Research Corporation had already concluded that the software met the requirements established by Boeing. The program office asked for the second evaluation because the Defense Criminal Investigative Service lead investigator was concerned about the ability of Nichols to provide a truly objective evaluation. The Phase One Engineering Team developed a methodology to (1) determine whether TRW’s software was consistent with scientific, mathematical, and engineering principles; (2) determine whether TRW accurately reported that its software successfully discriminated a mock warhead from decoys using data collected during Integrated Flight Test 1A; and (3) predict the performance of TRW’s basic discrimination software against Integrated Flight Test 3 scenarios. The key results of the team’s evaluation were that the software was well designed, that the contractors accurately reported the results of Integrated Flight Test 1A, and that the software would likely perform successfully in Integrated Flight Test 3. The primary limitation was that the team used Boeing- and TRW-processed target data and TRW-developed reference data in determining the accuracy of TRW’s reports for Integrated Flight Test 1A. The team began its work by assuring itself that TRW’s discrimination software was based on sound scientific, engineering, and mathematical principles and that those principles had been correctly implemented. It did this primarily by studying technical documents provided by the contractors and the program office. Next, the team began to look at the software’s performance using Integrated Flight Test 1A data.
The team studied TRW’s August 13 and August 22, 1997, test reports to learn more about discrepancies that the Defense Criminal Investigative Service said it found in these reports. Team members also received briefings from the Defense Criminal Investigative Service, Boeing, TRW, and Nichols Research Corporation. Team members told us that they did not replicate TRW’s software in total. Instead, the team emulated critical functions of TRW’s discrimination software and tested those functions using data collected during Integrated Flight Test 1A. To test the ability of TRW’s software to extract the features of each target object’s signal, the team designed a software routine that mirrored TRW’s feature-extraction design. The team received Integrated Flight Test 1A target signals that had been processed by Boeing and then further processed by TRW. These signals represented about one-third of the collected signals. Team members input the TRW-supplied target signals into the team’s feature-extraction software routine and extracted two features from each target signal. The team then compared the extracted features to TRW’s reports on these same features and concluded that TRW’s feature-extraction process worked as reported by TRW. Next, the team acquired the results of 200 of the 1,000 simulations that TRW had run to determine the features that target objects deployed in Integrated Flight Test 1A would likely display. Using these results, team members developed reference data that the software could compare to the features extracted from Integrated Flight Test 1A target signals. Finally, the team wrote software that ranked the different observed target objects in terms of the probability that each was the mock warhead. The results produced by the team’s software were then compared to TRW’s reported results. The team did not perform any additional analysis to predict the performance of the Boeing sensor and its software in Integrated Flight Test 3. Instead, the team used the knowledge that it gained from its assessment of the software’s performance using Integrated Flight Test 1A data to estimate the software’s performance in the third flight test. In its report, published on January 25, 1999, the Phase One Engineering Team reported that even though it noted some weaknesses, TRW’s discrimination software was well designed and worked properly, with only some refinement or redesign needed to increase the robustness of the discrimination function. In addition, the team reported that its test of the software using data from Integrated Flight Test 1A produced essentially the same results as those reported by TRW. The team also predicted that the Boeing sensor and its software would perform well in Integrated Flight Test 3 if target objects deployed as expected. The team’s assessment identified some software weaknesses. First, the team reported that TRW’s use of a software module to replace missing or noisy target signals was not effective and could actually hurt rather than help the performance of the discrimination software. Second, the Phase One Engineering Team pointed out that while TRW proposed extracting several features from each target-object signal, only a few of the features could be used. The Phase One Engineering Team also reported that it found TRW’s software to be fragile because the software was unlikely to operate effectively if the reference data—or expected target signals—did not closely match the signals that the sensor collected from deployed target objects.
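The team’s final step, ranking the observed objects by the probability that each was the mock warhead, can be illustrated with a simple likelihood comparison. The sketch below assumes independent Gaussian feature statistics and invented numbers; it is not TRW’s or the team’s actual algorithm, only a minimal rendering of the comparison the passage describes, with reference statistics standing in for the data the team built from 200 of TRW’s simulation runs.

```python
import numpy as np

# Minimal, hypothetical sketch of ranking objects against warhead
# reference data; feature names, statistics, and values are invented.

def rank_objects(extracted, reference_mean, reference_sigma):
    """Order object ids from most to least warhead-like.

    extracted: {object_id: feature vector measured in flight}
    reference_mean, reference_sigma: per-feature statistics for the mock
    warhead, built from pre-flight simulations.
    """
    scores = {}
    for obj, features in extracted.items():
        # Log-likelihood (up to a constant) that these features came
        # from the warhead model, assuming independent Gaussian features.
        z = (np.asarray(features) - reference_mean) / reference_sigma
        scores[obj] = -0.5 * float(np.sum(z ** 2))
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical two-feature example: object 3 best matches the reference.
reference_mean = np.array([0.8, 1.2])
reference_sigma = np.array([0.1, 0.2])
extracted = {1: [0.5, 1.9], 2: [1.1, 0.7], 3: [0.82, 1.25]}
print(rank_objects(extracted, reference_mean, reference_sigma))  # [3, 2, 1]
```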
The team warned that the software’s performance could degrade significantly if incorrect reference data were loaded into the software. Because developing good reference data depends upon having correct information about target characteristics, sensor-to-target geometry, and engagement timelines, unexpected targets might challenge the software. The team suggested that very good knowledge about all of these parameters might not always be available. The Phase One Engineering Team reported that the results of its evaluation using Integrated Flight Test 1A data supported TRW’s claim that in post-flight analysis its software accurately distinguished a mock warhead from decoys. The report stated that TRW explained why there were differences between the discrimination analysis included in the August 13, 1997, Integrated Flight Test 1A test report and that included in the August 22, 1997, report. According to the report, one difference was that TRW mislabeled a chart in the August 22 report. Another difference was that the August 22 discrimination analysis was based on target signals collected over a shorter period of time (see app. I for more information regarding TRW’s explanation of report differences). Team members said that they found TRW’s explanations reasonable. The Phase One Engineering Team predicted that if the targets deployed in Integrated Flight Test 3 performed as expected, TRW’s discrimination software would successfully identify the warhead as the target. The team observed that the targets proposed for the flight test had been viewed by Boeing’s sensor in Integrated Flight Test 1A and that target-object features collected by the sensor would be extremely useful in constructing reference data for the third flight test. The team concluded that given this prior knowledge, TRW’s discrimination software would successfully select the correct target even in the most stressing Integrated Flight Test 3 scenario being considered, if all target objects deployed as expected. However, the team expressed concern about the software’s capabilities if objects deployed differently, as had happened in previous flight tests. The Phase One Engineering Team’s conclusion that TRW’s software successfully discriminated was based on the assumption that Boeing’s and TRW’s input data were accurate. The team did not process the raw data collected by the sensor’s silicon detector array during Integrated Flight Test 1A or develop its own reference data by running hundreds of simulations. Instead, the team used target signature data extracted by Boeing and TRW and developed reference data from a portion of the simulations that TRW ran for its own post-flight analysis. Because it did not process the raw data from Integrated Flight Test 1A or develop its own reference data, the team cannot be said to have definitively proved or disproved TRW’s claim that its software successfully discriminated the mock warhead from decoys using data collected from Integrated Flight Test 1A. A team member told us that the team’s use of Boeing- and TRW-provided data was appropriate because the former TRW employee had not alleged that the contractors tampered with the raw test data or used inappropriate reference data. The table below includes selected requirements that Boeing established before the flight test to evaluate sensor performance and the actual sensor performance characteristics that Boeing and TRW discussed in the August 22 report.
We determined whether Boeing and TRW disclosed key results and limitations of Integrated Flight Test 1A to the National Missile Defense Joint Program Office by examining test reports submitted to the program office on August 13, 1997, August 22, 1997, and April 1, 1998, and by examining the December 11, 1997, briefing charts. We also held discussions with and examined various reports and documents prepared by Boeing North American, Anaheim, California; TRW Inc., Redondo Beach, California; the Raytheon Company, Tucson, Arizona; Nichols Research Corporation, Huntsville, Alabama; the Phase One Engineering Team, Washington, D.C.; the Massachusetts Institute of Technology/Lincoln Laboratory, Lexington, Massachusetts; the National Missile Defense Joint Program Office, Arlington, Virginia, and Huntsville, Alabama; the Office of the Director, Operational Test and Evaluation, Washington, D.C.; the U.S. Army Space and Missile Defense Command, Huntsville, Alabama; the Defense Criminal Investigative Service, Mission Viejo, California, and Arlington, Virginia; and the Institute for Defense Analyses, Alexandria, Virginia. We held discussions with and examined documents prepared by Dr. Theodore Postol, Massachusetts Institute of Technology, Cambridge, Massachusetts; Dr. Nira Schwartz, Torrance, California; Mr. Roy Danchick, Santa Monica, California; and Dr. Michael Munn, Benson, Arizona. In addition, we hired the Utah State University Space Dynamics Laboratory, Logan, Utah, to examine the performance of the Boeing sensor because we needed to determine the effect the higher operating temperature had on the sensor’s performance. We did not replicate TRW’s assessment of its software using target signals that the Boeing sensor collected during the test. Doing so would have required us to make engineers and computers available to verify TRW’s software, format raw target signals for input into the software, develop reference data, and run the data through the software. We did not have these resources available and therefore cannot attest to the accuracy of TRW’s discrimination claims. We also examined the methodologies, findings, and limitations of the review conducted by the Phase One Engineering Team of TRW’s discrimination software. To accomplish this task, we analyzed the Phase One Engineering Team’s “Independent Review of TRW EKV Discrimination Techniques,” dated January 1999. In addition, we held discussions with Phase One Engineering Team members, officials from the National Missile Defense Joint Program Office, and contractor officials. We did not replicate the evaluations conducted by the Phase One Engineering Team and cannot attest to the accuracy of its reports. We reviewed the decision by the National Missile Defense Joint Program Office to reduce the complexity of later flight tests by comparing actual flight test information with information in prior plans and by discussing these differences with program and contractor officials. We held discussions with and examined documents prepared by the National Missile Defense Joint Program Office, the Institute for Defense Analyses, Boeing North American, and the Raytheon Company. Our work was conducted from August 2000 through February 2002 in accordance with generally accepted government auditing standards. The length of time the National Missile Defense Joint Program Office required to release documents to us significantly slowed our review.
For example, the program office took approximately 4 months to release key documents, such as the Phase One Engineering Team’s response to the professor’s allegations. We requested these and other documents on September 14, 2000, and received them on January 9, 2001.
The Department of Defense (DOD) awarded contracts to three companies in 1990 to develop and test exoatmospheric kill vehicles. One of the contractors, Boeing North American, subcontracted with TRW to develop software for the kill vehicle. In 1998, Boeing became the Lead System Integrator for the National Missile Defense program and chose Raytheon as the primary kill vehicle developer. Boeing and TRW reported that the June 1997 flight test achieved its primary objectives but that some sensor abnormalities were detected. The project office relied on Boeing to oversee the performance of TRW. Boeing and TRW reported that deployed target objects displayed distinguishable features when observed by an infrared sensor. After considerable debate, the program manager reduced the number of decoys planned for intercept flight tests in response to a recommendation by an independent panel. The Phase One Engineering Team, which was responsible for completing an assessment of TRW’s software performance within 2 months using available data, found that although the software had weaknesses, it was well designed and worked properly, with only some changes needed to increase the robustness of the discrimination function. On the basis of that analysis, team members predicted that the software would perform successfully in a future intercept test if target objects deployed as expected.
Although definitions vary, including definitions used by federal agencies, many experts generally agree that bullying involves intent to cause harm, repetition, and an imbalance of power. The pioneering research of Dr. Dan Olweus in Norway has defined being bullied or victimized as when a student “is exposed, repeatedly and over time, to negative actions on the part of one or more other youths” with an intent to harm. Notably, bullying is distinct from general conflict or aggression, which can occur absent an imbalance of power or repetition. For example, a single fight between two youths of roughly equal power is a form of aggression, but may not be bullying. When bullying occurs, it may take many forms that can also be associated with conflict or aggression, including physical harm, such as hitting, shoving, or locking inside a school locker; verbal name calling, taunts, or threats; relational attacks, such as spreading rumors or isolating victims from their peers; and the use of computers or cell phones to convey harmful words or images, also referred to as cyberbullying. Often bullying occurs without apparent provocation and may be based on the victim’s personal characteristics. For example, youth may be bullied based on the way they look, dress, speak, or act. There are several federal efforts under way to bring together federal resources that can be used to identify and address bullying. In particular, given their focus on education, health, and safety issues, Education, HHS, and Justice, along with other federal agencies, have been involved in efforts to help coordinate federal resources to identify and address bullying. Additionally, several bills have been introduced in the 112th Congress that relate to bullying. Among the various issues addressed in these bills are bullying policies, the collection and reporting of bullying data, and the prohibition of discrimination on the basis of sexual orientation or gender identity. Some of the bills would authorize federal grants to states and school districts for antibullying-related purposes. Although there is not presently a federal law directly targeted to address school bullying, several federal civil rights laws that prohibit discrimination based on protected characteristics of individuals may, under certain circumstances, be used to address particular incidents of bullying. With respect to states’ efforts to address bullying, Education commissioned a two-part study that examines the elements of state bullying laws and the manner in which school districts are implementing the laws. The first part of Education’s study, issued in December 2011, included a review of all state bullying laws and model policies in effect as of April 2011, including those of the eight states we reviewed, as well as policies from 20 large school districts. The second part of Education’s study is scheduled for completion during fall 2012. It will include case studies of how 24 schools, selected from four states, implement their states’ bullying laws. Being bullied is a serious problem, as evidenced by four federally sponsored nationally representative surveys conducted from 2005 to 2009. Estimates of the national prevalence of bullying ranged from approximately 20 to 28 percent of youth reporting they had been bullied during the survey periods, which ranged from a couple of months to a year. However, differences in definitions and survey methods make it difficult to draw definitive conclusions regarding trends and affected demographic groups.
Our analysis and similar work from HHS’s Centers for Disease Control and Prevention (CDC), which sponsored two of the surveys, showed that the surveys vary in the way they pose questions about being bullied and how bullying is defined, if at all. Education, the sponsor of one of the surveys, and HHS also told us that different survey questions and definitions of bullying lead to different results in estimates of prevalence. While it is clear that bullying is a serious problem, it is unclear from the surveys the extent to which bullying affects certain groups of youths relative to other groups. Specifically, the surveys collected information on the percentage of youths bullied based on gender and race. However, the information showed varying results. For example, two surveys found no significant difference in the percentage of boys and girls that reported being bullied, while a third noted that girls were bullied at a higher percentage. In two of the three surveys, white youths reported being bullied at a higher percentage than African-American youths, while the other survey found no significant difference. In addition, the four national surveys we identified did not consistently collect information about other demographic characteristics, making it impossible to determine percentages of bullying for these groups. For example, none of the surveys collected demographic information for youths by sexual orientation or gender identity. Researchers noted various challenges to obtaining such information; for example, some schools may not permit questions on sexual orientation or gender identity status, potentially resulting in a sample that would not be nationally representative. Also, questions about sexual orientation or gender identity may be sensitive for youth respondents to complete, and researchers noted that such questions may not yield accurate information. Additionally, the surveys varied in whether or not they collected demographic information to allow for analysis based on religion, disability, or socioeconomic status, and two of the surveys did not include any questions asking specifically if youths had been bullied based on specific demographic characteristics. (See table 1.) While federal agencies have not collected information on some demographic groups, other researchers have attempted to fill the void. For example, the Gay, Lesbian and Straight Education Network (GLSEN) conducted a survey in the 2008-2009 school year and received responses from more than 7,000 students between the ages of 13 and 21 who self-reported as not heterosexual. Although not nationally representative, the results found, among other things, that 85 percent of students who responded to the survey said they were called names or threatened at some point in the past school year based on their sexual orientation, and 64 percent based on their gender expression (for example, for not acting “masculine enough” or “feminine enough”). Forty percent of students who responded said they were pushed or shoved based on their sexual orientation, and 27 percent based on their gender expression. In addition to the fact that there are voids in information about demographic groups, Education and HHS officials said that researchers need a uniform definition to measure bullying. To better understand the prevalence of bullying, and given the different definitions used by bullying research instruments, CDC is leading an interdepartmental project to develop a uniform definition of bullying for research purposes.
According to CDC officials, a report is expected to be issued in 2012 that contains a uniform definition along with information on other data elements to measure bullying, such as the frequency or types of bullying behavior. According to CDC, the project on the uniform definition is still under review, but the definition may contain data elements for a number of demographic characteristics, including sex, race, ethnicity, disability status, religion, and sexual orientation. Research spanning more than a decade has demonstrated that bullying is associated with a variety of negative outcomes for victims, including psychological, physical, academic, and behavioral issues. For example, a 2000 analysis of 23 bullying research studies found that youth who were bullied experienced more depression, loneliness, and anxiety and lower self-esteem than their peers who had not been bullied. Similarly, a 2010 analysis of 18 research studies found that being bullied was linked to increased psychological issues later in life. A third analysis of 20 studies, published in 2011, found that being bullied was associated with a greater likelihood of being depressed later in life. A 2009 analysis of 11 research studies found that bullying victims had a higher risk for such physical health outcomes as headaches, backaches, sleeping problems, and poor appetite, as compared with their peers who had not been bullied. Additionally, a 2010 analysis of 33 research studies on bullying and academic achievement found that bullying is related to concurrent academic difficulties for victims. Academic achievement was assessed based on such measures as grade point averages, standardized test scores, or teacher ratings of academic achievement. Researchers have also linked bullying to increases in behavioral problems for victims over time, such as aggression, delinquency, and truancy. While researchers point out that the causes of suicide and violence are varied and complex, bullying has been identified as one risk factor associated with violent actions against oneself and others. For example, one 2011 analysis of 18 studies found that gay, lesbian, and bisexual youth were more likely to be verbally harassed and teased or physically and sexually victimized than heterosexual youth, and more likely to experience detrimental outcomes, such as suicidal thoughts and attempts. According to a federally sponsored website on bullying, specific groups have an increased risk of suicide, including American Indian and Alaskan Native, Asian-American, lesbian, gay, bisexual, and transgender youths. Their risk of suicide can be increased further by bullying. Bullying has also been linked to acts of violence against others. For example, a 2002 study by Education and the Secret Service reviewed 37 incidents of school attacks and shootings occurring between 1974 and the end of the 2000 school year and reported 10 key findings that could be used to develop strategies to address targeted school violence. One of those 10 findings was that nearly three-quarters of attackers were bullied, persecuted, or injured by others prior to the attack, and that in several cases the bullying was severe and long-standing. According to Education, 49 states had school bullying laws as of April 2012, including the 8 states that we reviewed. These 8 states’ laws vary in several ways, including who is covered and the requirements placed on state agencies and school districts.
For example, the 8 states’ laws that we reviewed vary in whether and the extent to which they cover specific demographic groups, referred to as protected classes. Five states—Arkansas, Illinois, Iowa, New Mexico, and Vermont—identify race, color, sex or gender, national origin or nationality, disability, sexual orientation, gender identity, and religion as protected classes. California includes all of these groups except for color. Some states also prohibit bullying of other protected classes. For example, Illinois also includes as protected classes ancestry, age, and marital status. Virginia and Massachusetts do not include protected classes in their state bullying laws. According to Massachusetts officials, protected classes were intentionally omitted from the state’s law to ensure that all youths were equally protected. Within Massachusetts’ state educational agency (SEA), a specific office is designated to receive complaints, including from youths who have been bullied for any reason, such as obesity or socioeconomic status. Additionally, four of the states that identify protected classes—Arkansas, Illinois, Iowa, and New Mexico—provide that the list of classes is not exhaustive, so protection can be afforded to youths with characteristics not explicitly listed. For example, Iowa prohibits bullying “based on any actual or perceived trait or characteristic of the student.” In contrast, California’s bullying law is more exclusive and limits protection to only those groups that are listed in the law. We also found that state laws impose various requirements on SEAs. For example, laws in California, Massachusetts, Vermont, and Virginia require that SEAs develop model bullying policies as a resource for school districts. Also, we found that while SEAs in Arkansas, California, and Illinois are required by law to review or monitor school districts’ bullying policies, the approach taken to do so differs from state to state. For example, officials in Arkansas reported that as part of a broader effort to ensure that school districts’ policies align with federal and state laws, they conduct on-site reviews every 4 years and require school districts to forward information to the Department of Education for review every year, including information about discipline and bullying policies. Conversely, an Illinois official reported that little meaningful oversight is occurring, in part because of resource constraints. In each of the states we reviewed, the laws require school districts to adopt bullying policies or plans, but the states differed in the specific requirements of what must be included in these policies or plans. For example, of the 8 states’ laws we reviewed, 6 states require school districts to set forth a process for receiving and investigating complaints, and 2 do not. Similarly, we found that 6 states’ laws require district policies to identify the consequences for bullies, while 2 do not. Table 2 provides information about commonly required school district provisions in state bullying laws. States are also making changes to their bullying laws, as evidenced by 4 of our 8 selected states amending or enacting bullying laws since we began our study in the spring of 2011. For example, one of these states, among other changes, amended its law to include protected classes based on actual or perceived characteristics. Vermont amended its law to include protections against cyberbullying and incidents that do not occur during the school day, on school property, or at school-sponsored events.
The six school districts we reviewed in New Mexico, Virginia, and Vermont have all adopted policies, plans, or rules and implemented a range of approaches to combat bullying. Among other components of the bullying policies and rules, each prohibits bullying and describes potential consequences for the behavior. Also, the school districts in New Mexico and Vermont developed policies and procedures covering the reporting and investigation of bullying behavior. School district officials explained that they have developed several approaches to prevent and respond to bullying. For example, in five of the six school districts we visited, central administrators or principals said they conduct student surveys that include questions about bullying to determine the prevalence of the behavior, and two administrators said the surveys are used to develop strategies to address the behavior. Also, officials from four of the six school districts said that several or all of their schools utilize the prevention-oriented framework Positive Behavioral Interventions and Supports (PBIS) to improve overall behavior in schools (see text box). Additionally, several school districts and schools use curricula that help youths develop interpersonal skills and manage their emotions, such as Second Step, a classroom-based social skills program for youths 4 to 14 years of age, and Steps to Respect, a bullying prevention program developed for grades three through six. Several central administrators and principals mentioned that antibullying-focused events have been held at their schools, such as Rachel’s Challenge and Ryan’s Story. Rachel’s Challenge is a program that seeks to create a positive culture change in schools and communities and begins with video/audio footage of Rachel Scott, the first person killed during the 1999 Columbine High School incident. Ryan’s Story is a presentation that recounts the factors that led to the 2003 suicide of Ryan Halligan, a victim of both bullying and cyberbullying. The Positive Behavioral Interventions and Supports framework utilizes evidence-based, prevention-oriented practices and systems to promote positive and effective classroom and school social cultures. According to Education’s Office of Special Education Programs, PBIS steps for addressing bullying behavior at school include the following: examining discipline data to determine, for example, the frequency, location, and timing of specific bullying behaviors; examining the extent to which staff members have, for example, actively and positively supervised all students across all school settings, had high rates of positive interactions and contact with all students, and arranged their instruction so all students are actively engaged, successful, and challenged; and teaching students and staff common strategies for preventing and responding to bullying behavior, such as intervening and responding early and quickly to interrupt bullying behavior, removing what triggers and maintains bullying behavior, and reporting and recording when a bullying incident occurs. Students whose bullying behavior does not improve are considered for additional supports. For example, on the basis of the function of a student’s behavior, students would (1) begin the day with a check-in or reminder about the daily expectations; (2) be more overtly and actively supervised; (3) receive more frequent, regular, and positive performance feedback each day; and (4) conclude each day with a checkout or debriefing with an adult.
In addition to mentioning efforts focused on youths, several central administrators and principals said that teachers receive some bullying prevention guidance or training. Information about bullying prevention is also shared with parents during workshops and forums. For example, one official mentioned that Rachel’s Challenge includes a session with parents and community leaders. A parent said that his school district hosted a national speaker to share information with parents about bullying. Both state and local officials expressed concerns about various issues associated with implementing state bullying laws, regulations, and local policies and codes of conduct. For example, administrators and principals reported that determining how to respond to out-of-school incidents, such as cyberbullying, is challenging. Administrators and principals said that sometimes they are not informed of incidents in a timely manner, resulting in a delayed response. Additionally, some parents discourage school officials’ involvement in out-of-school incidents. However, administrators and principals agreed that when out-of-school incidents affect school climate, the behavior has to be addressed. Another issue of concern for both state and local officials is that parents and youths can confuse conflict with bullying. According to the state and local officials that we spoke with, they spend considerable time on nonbullying behavior, and more could be done to educate parents and youths on the distinction between bullying behavior and other forms of conflict. On a related matter, state and local officials said that it is important to train teachers and staff to prevent, identify, and respond to bullying behavior. However, according to these officials, because of state budget cuts and the elimination of some federal funding that could be used for bullying prevention activities, there is little funding available for training. State officials specifically cited the loss of funding from Title IV, Part A of the Elementary and Secondary Education Act of 1965, as amended, which among other things could be used to prevent violence in and around schools. According to federal officials, funding for this program was eliminated in 2009. When bullying rises to the level of discrimination, federal civil rights laws may be used to provide redress to individuals in legally protected groups. Federal civil rights laws protect against discrimination based on sex, race, color, national origin, religion, or disability. However, federal agencies generally lack jurisdiction to address discrimination based on classifications not protected under federal civil rights statutes. For example, federal agencies lack authority to pursue discrimination cases based solely on sexual orientation. Additionally, federal civil rights laws do not cover all youths in all educational settings, and as a result, where a student goes to school could affect the student’s ability to file a claim of discrimination with the federal government. For example, Title IV of the Civil Rights Act of 1964 (Title IV) prohibits discrimination in public schools and institutions of higher learning. Since Title IV is the only federal civil rights law addressing religious discrimination in educational settings, only youths at public schools and public institutions of higher learning, where Title IV applies, could file such a claim.
Youths who attend public schools or other schools receiving federal education funding and who belong to other federally protected classes may have the option to file a complaint with Education, Justice, or both departments, depending on which agency has enforcement authority over the relevant laws and protected classes. According to OCR’s case processing manual, a complaint must be filed within 180 calendar days of the date of the alleged discrimination, unless the time for filing is extended by Education’s Office for Civil Rights for good cause shown under certain circumstances. Education’s ability to investigate all of the complaints it receives is partly due to its greater staff resources: Education’s Office for Civil Rights has roughly 400 staff, and Justice’s Civil Rights Division, Educational Opportunities Section, has about 20 attorneys. According to departmental officials, Education investigates all complaints it receives for which it has jurisdiction. Conversely, Justice selects a limited number of complaints to review based on such factors as the severity of the complaint and whether the federal government has a special interest in the case. Additionally, officials from Education and Justice told us that they collaborate closely. Generally, Justice and Education share information about complaints because they may have overlapping jurisdiction, and they try to coordinate efforts where feasible. Education and Justice do not currently have a systematic approach for tracking information about the number of cases related to various demographic groups that they lack jurisdiction to address. The U.S. Commission on Civil Rights, in a 2011 report on the protections of federal anti-discrimination laws relating to school bullying, recommended that Justice and Education, among other things, track dismissed civil rights claims by various demographic characteristics. However, Education and Justice officials told us that as part of their complaint review processes, they focus on collecting information to establish federal jurisdiction, and as a result neither department collects information in a way that would allow it to routinely assess the demographic characteristics of cases where it lacks jurisdiction. Thus, they do not plan to address the commission’s recommendation. Additionally, according to officials from both departments, attempting to track such information would be problematic because of difficulties in ascertaining demographic information. They also believe the information could be misleading. According to Justice officials, they dedicate significant resources to outreach designed to educate communities on their jurisdiction, and this may affect the number of complaints they receive from demographic groups that fall outside of their jurisdiction. We found that some states’ civil rights laws extend beyond the protections afforded at the federal level, but information about the possibility of pursuing claims at the state level was not always provided to federal complainants. For all eight states we reviewed, state anti-discrimination laws, like federal civil rights laws, provide protections for individuals who are discriminated against on the basis of sex, race, national origin, religion, and disability, and, in all but Arkansas, color. Thus, in these eight states, for these protected classes, legal action can generally be taken at the federal level, the state level, or both.
The majority of the eight states that we reviewed include in their anti-discrimination laws protections for various groups of people who are not explicitly covered at the federal level. For example, six of the eight states we reviewed (California, Illinois, Iowa, Massachusetts, New Mexico, and Vermont) prohibit discrimination on the basis of sexual orientation, and five of the eight states prohibit discrimination on the basis of gender identity. Beyond these protected classes, most states we reviewed also prohibit discrimination on the basis of other personal characteristics, such as marital status. California is unique among the states in our review in that its anti-discrimination laws explicitly protect individuals on the basis of citizenship and gender-related appearance and behavior, as well as individuals who are associated with a person who has (or is perceived to have) a protected characteristic. However, because some characteristics are not explicitly protected under anti-discrimination laws at the federal level or in the states we reviewed, youths in these states who are bullied on the basis of one of these characteristics would have no recourse under civil rights law at either level. For example, state education and civil rights officials mentioned that anti-discrimination laws generally do not apply to youths who were bullied based on their socioeconomic status or obesity. According to Education officials, a complainant who withdraws his or her complaint may be informed in a phone discussion about legal options at the state level. Also, officials said that if a complaint reaches the stage of a dismissal, Education’s letter to the complainant sometimes suggests that the claimant might have a claim under state civil rights law, along with the name and address of the relevant state agency. However, according to Education officials, when the agency lacks jurisdiction, it does not presently notify complainants about the availability of possible recourse under state law on a routine basis. As a result, individuals who file complaints with Education may not be fully aware of their legal options. On the other hand, according to Justice officials, department officials routinely share with complainants that they may have legal options available to them through their state’s civil rights laws. While the notification is general rather than specific to particular states and their laws, Justice provides it in letters to complainants whose complaints it does not pursue. Education, HHS, and Justice have established coordinated efforts to carry out research and broadly disseminate information on bullying. Education has also provided key information about how federal civil rights laws can be used to address bullying and is conducting a study of state bullying laws and how selected school districts are addressing bullying. Three federal efforts, in particular—formation of a coordinating committee, establishment of a central website, and an informational campaign—have provided the public with a range of information about bullying, through a variety of media. The Federal Partners in Bullying Prevention Steering Committee serves as a forum for federal agencies to develop and share information with each other and the public. The committee was created in 2009 and is composed of the Departments of Education, HHS, Justice, Agriculture, Defense, and Interior, along with the Federal Trade Commission, the National Council on Disability, and the White House Initiative on Asian Americans and Pacific Islanders.
Among other activities, the coordinating committee helped to plan a conference on bullying in March 2011 hosted by the White House, as well as annual conferences of the coordinating committee in August 2010 and September 2011. Following each annual conference, the committee has developed priorities and formed subcommittees to address those priorities. For example, after identifying a need for better coordination of bullying research, a research subcommittee was created after the August 2010 conference. Following the September 2011 conference, this subcommittee’s activities in the upcoming year will also include identifying best practices for training teachers as well as drawing attention to programs that could help youths develop interpersonal skills and manage their emotions. The three federal departments, along with the White House, established a central federal website (www.stopbullying.gov, last accessed May 22, 2012), launched in March 2011 at the White House conference on bullying. The central website sought to consolidate the content of different federal sites into one location to provide free materials for the public. Hosted by HHS, with content and technical support from the Health Resources and Services Administration (HRSA), the website aims to present a consistent federal message and features content arranged by target audience, such as teens, along with sections on special topics such as cyberbullying. HHS, through HRSA, launched the informational campaign called Stop Bullying Now! in 2004. Federal departments outside HHS that assist with the campaign include the Departments of Education, Justice, Agriculture, Defense, and Interior. The campaign is designed for youth and adults to raise awareness, foster partnerships, and disseminate evidence-based findings to help prevent and intervene in instances of bullying. The informational campaign offers a variety of free materials, including a DVD with 14 cartoon episodes, 30 tip sheets based on research and evidence-based practices, public service announcements, posters, brochures, comic books, and kits for youth leaders and adults. According to data from HRSA as of August 2011, recipients of materials in mass mailings included, among others, all 66,000 public elementary and middle schools in the country, 17,000 libraries, relevant state health and education agencies, 4,000 Boys and Girls Clubs, schools on military bases worldwide, and offices serving American Indian and military youth. (See app. V for more information on the campaign.) However, according to HHS officials, the campaign and its online content are currently in a period of transition, as they adapt to the new interdepartmental website and its governance. While these efforts are still evolving, we found that they are consistent with key practices that we have determined can help enhance and sustain coordination efforts across federal agencies. Specifically, we found that, in each of these three efforts, the key agencies reached agreement on roles and responsibilities. For example, the roles and responsibilities of the federal agencies responsible for stopbullying.gov are spelled out in a governance document, and HHS, the lead agency for the website, has executed agreements to provide funding for its maintenance and operation. Similarly, we found that these agencies worked to establish compatible policies and procedures and to develop mechanisms to monitor progress for these coordinated efforts.
Appendix V provides more information on federal coordination efforts on bullying.

In addition to these collaborative agency efforts to share information about bullying, Education has disseminated information about federal civil rights laws that can be used to address bullying and about key components of state bullying laws. In October 2010, Education sent a letter to state and local education officials outlining how federal civil rights laws can be applied to bullying. The letter stated that student misconduct may trigger school responsibilities under federal civil rights laws and provided examples of behavior that may meet the threshold for violating the laws. In December 2010, the department issued another letter that summarized several key components of state bullying laws, such as specifying prohibited behavior, developing and implementing school district policies, and providing training and preventive education. As previously discussed, following up on this letter, the department commissioned a study of state bullying laws to determine the extent to which states and school districts incorporate the key components into their laws and policies. In December 2011, Education issued the first part of this two-part study on state bullying laws.

While Education, HHS, and Justice have initiated several efforts to better inform the public about how to use federal, state, and other resources to address bullying, none of these efforts includes an assessment of state civil rights laws and procedures for filing complaints. Since some states' civil rights laws provide protection for groups not named in applicable federal civil rights laws, collecting and disseminating such information could lead to a better understanding of how these laws vary in coverage and in the procedures states have in place for filing complaints. For example, five states in our review (California, Illinois, Iowa, Massachusetts, and Vermont) have established processes and procedures for resolving civil rights complaints and have empowered a statewide organization with the authority to hold schools and school districts accountable when discrimination is found, according to state officials.

Specifically, according to a state official, California's Uniform Complaint Process empowers its Department of Education's Office of Equal Opportunity to ensure compliance with state and federal civil rights laws, and the state code requires uniform complaint procedures that each school district must follow when addressing complaints of discrimination against protected groups. The complaint process allows up to 60 days for an investigation and decision to be rendered at the district level, unless a child is directly in harm's way and the school district is unresponsive, in which case a complaint can be filed directly with the state. In Vermont, the state's Human Rights Commission acts as an independent agency focused solely on the protection of civil rights; if its investigation determines that unlawful discrimination occurred, the agency assists the parties in negotiating a settlement. Human Rights Commission officials told us that if a settlement cannot be reached, the agency may choose to take the case to court. However, they said that this usually does not happen because cases are generally settled.
The Massachusetts Department of Elementary and Secondary Education has a formal process, called the Problem Resolution System, that handles complaints alleging that a school or district is not meeting legal requirements for education, including complaints of discrimination. In each of the five states with established processes and procedures for resolving civil rights complaints, the SEAs include information on their websites about the civil rights complaint process, including where to file, required information, and time frames.

According to their respective state officials, Arkansas and New Mexico offer only limited legal options for protected classes with complaints of discrimination based on school bullying because they lack a state entity with the authority to investigate such complaints and hold school districts accountable. Although Arkansas has an Equity Assistance Center within its Department of Education that can serve as an intermediary between the complainant and the school district, the center lacks the authority to discipline a school district, according to state officials. New Mexico has a human rights commission that receives and investigates complaints of discrimination based on protected classes, but the commission is focused on employment issues and does not address discrimination complaints related to education. As a result, the state lacks formal processes and procedures to address complaints of discrimination stemming from instances of bullying, according to state officials. Therefore, according to state officials from these two states, if an individual cannot afford an attorney to file a private right of action related to complaints of discrimination because of school bullying, the individual's only legal option is to file a federal complaint.

By not incorporating an assessment of state civil rights laws and procedures into their various bullying prevention efforts, federal agencies are overlooking a potentially important source of information. Building on information from Education's study of state bullying laws and the letters it issued on federal civil rights laws, information on state civil rights laws and procedures would provide a broader and more complete perspective on the overall coverage of federal and state efforts to prevent and address bullying.

Students who are bullied may seek recourse through a number of avenues: local and state educational policies, state bullying laws, state civil rights laws, or federal civil rights laws. However, the nature and extent of protections available to them depend on the laws and policies of where they live or go to school. Education and Justice have taken important steps in assessing how federal civil rights laws can be used to help combat certain instances of bullying of protected classes of youth for which they have jurisdiction. In addition, Education has completed a study of state bullying laws and is conducting another study looking at how school districts are implementing these laws. However, neither Education nor Justice has assessed state civil rights laws and procedures as they may relate to bullying. Many of the state civil rights laws we reviewed extend protections to classes of individuals beyond the groups protected at the federal level, but states vary in the groups that are explicitly protected; therefore, whether bullying victims have any recourse through civil rights laws can depend on the state in which they live or go to school.
Also, states vary in their procedures for pursuing civil rights claims, which could affect a victim's ability to pursue a bullying-related discrimination claim. State civil rights laws, just like federal civil rights laws and state bullying laws, can play an important role in addressing this issue. More information about state civil rights laws and procedures is a key missing link; administration officials and decision makers alike need it to understand the overall legal protections potentially available to students who have been bullied. Federal claimants would also benefit from knowing that options may be available to them at the state level. This is particularly important when cases are dismissed at the federal level because of a lack of jurisdiction. When complaints are dismissed for lack of jurisdiction, Justice routinely informs individuals of possible recourse under their state civil rights laws; Education does not. Routinely making this basic information available would be another key step in helping ensure that bullying victims are aware of the legal options available to them.

Multiple efforts to collect information about bullying have been under way for several years; however, the prevalence of bullying of youths in certain vulnerable demographic groups is not known. A greater effort by key federal agencies to develop more information about the extent to which a broader range of demographic groups are subject to bullying and bullying-related discrimination would better inform federal efforts to prevent and remedy bullying. Understanding the prevalence of bullying by demographic group would help administration officials develop additional actions targeted at the greatest areas of need. This information, together with an assessment of federal and state legal protections, could also aid policymakers in determining whether additional actions are needed to protect vulnerable groups of youths who are subjected to bullying.

To allow for a more comprehensive assessment of federal and state efforts to prevent and address bullying, we recommend that the Secretary of Education, in consultation with the Attorney General, as appropriate, compile information in a one-time study, similar to Education's study of state bullying laws, about state civil rights laws and procedures as they may pertain to bullying.

In order to better ensure that individuals are aware of their options to seek legal redress, especially in cases where their complaints to Education are not pursued because of a lack of jurisdiction, we recommend that the Secretary of Education develop procedures to routinely inform individuals who file complaints of discrimination stemming from bullying about the potential availability of legal options under their state's anti-discrimination laws.

To address gaps in knowledge about targets of bullying and discrimination, we recommend that the Secretaries of Education and HHS and the Attorney General work together to develop information in their future surveys of youths' health and safety issues on the extent to which youths in various vulnerable demographic groups are bullied.
To aid policymakers and program administrators at the federal and state levels in understanding more comprehensively what is being done to address bullying and discrimination, we recommend that the Secretaries of Education and HHS and the Attorney General, in conjunction with the Federal Partners in Bullying Prevention Steering Committee, assess the extent to which legal protections against bullying exist for vulnerable demographic groups. To be comprehensive, such an assessment should make use of the information federal agencies have already compiled on state bullying laws and federal civil rights laws, together with the information called for in our recommendations above: the compilation of state civil rights laws and the additional data on demographic groups from federal surveys of youth health and safety issues.

We provided Education, HHS, and Justice an opportunity to comment on a draft of this report. Education and HHS provided written responses, which appear in appendixes VII and VIII, respectively. Each of the agencies provided technical comments, which we incorporated as appropriate. Justice chose not to provide a written response.

Education disagreed with our recommendation that it compile information about state civil rights laws and procedures as they pertain to bullying. Specifically, Education noted that it has neither jurisdiction over state civil rights laws nor the appropriate expertise to interpret and advise on these laws. The department stressed that its previous analysis of state bullying laws was limited to compiling a list of statutes or regulations and identifying their key components. Further, Education suggested that compiling information about state civil rights laws and procedures would only be useful if kept current, and that undertaking such a time-intensive and costly survey and review of states' civil rights laws would not be an appropriate use of the department's limited resources. We continue to believe that a one-time compilation of state civil rights laws and procedures would be beneficial and would provide a basis, along with other information, for analyzing the overall legal protections that are available for vulnerable demographic groups. Such an assessment would help determine the extent to which states are positioned to respond to these types of civil rights complaints and would identify those instances where certain students are left with little recourse to pursue discrimination claims simply because of the state in which they reside or go to school. While we appreciate the work involved in any analysis of state laws, we believe that Education can develop a methodological approach that would limit the scope of its work and home in on those aspects of civil rights laws that come into play when bullying leads to allegations of discrimination. For example, this review could be limited to compiling basic information about state civil rights laws, such as which protected classes are included and whether the laws apply in educational settings, and may not require an extensive analysis of state case law. In implementing a study of this type, Education may consider approaches similar to those it used in its previous work on state bullying laws. Alternatively, Education officials could choose to rely on the knowledge and expertise of cognizant state officials by conducting a survey or otherwise soliciting pertinent information, rather than undertaking the bulk of this work themselves.
We acknowledge Education’s concerns regarding keeping the information on state civil rights laws updated and have modified language in the report and our recommendation to clarify that this is meant to be a onetime effort. Regarding our second recommendation, Education indicated that they are considering whether to develop procedures that would inform complainants whose complaints are dismissed for lack of jurisdiction that they may have possible recourse under state or local laws. We encourage Education to review the language that Justice currently includes in similar notification letters. As Education suggested, more detailed guidance regarding rights and procedures for seeking redress may then be provided by state and local agencies. Both HHS and Education agreed with our recommendation that they develop additional information in their surveys about youths in various vulnerable groups who are bullied. In response to our recommendation that Education, HHS, and the Attorney General assess the extent to which protections exist for various demographic groups likely to be the target of bullies, HHS agreed with the recommendation and Education cited many of its ongoing efforts to this end. We commend Education on its current efforts as well as other efforts we have discussed in our report. However, as we point out in our previous recommendations, more information is needed on state civil rights laws as well as about how various demographic groups are affected by bullying. Utilizing all of the information at their disposal, including information we recommend be collected, Education, HHS, and Justice could work together to assess how well the available laws and resources address areas of need and identify measures that could be taken to help prevent bullying. We believe that it is an important step to assimilate information on resources and laws with research about areas of need in order to assist federal policy makers and agency officials in their efforts to address this important issue. Based on questions we received during discussions with Justice on our report, we modified this recommendation to clarify that such an assessment should make use of information from our previous recommendations in this report, as well as information that federal agencies have already gathered, and that the three agencies in our review could work through the Federal Partners in Bullying Prevention Steering Committee to conduct such an assessment. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Education, and Health and Human Services, and the Attorney General; relevant congressional committees; and other interested parties. In addition, the report will be available on GAO’s website at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (206) 287-4809 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IX. To obtain information on the prevalence of school bullying of victims in the United States, we primarily compared estimates and methodologies of available data on being bullied in four nationally representative surveys by federal statistical agencies conducted from 2005 to 2009. 
Specifically, we compared data on being victims of bullying from the Youth Risk Behavior Survey, the School Crime Supplement to the National Crime Victimization Survey, the Health Behavior in School-aged Children Survey, and the National Survey of Children's Exposure to Violence (see table 4). We selected these surveys based on interviews with officials at the Departments of Education (Education), Health and Human Services (HHS), and Justice (Justice), as well as on similar work by the Centers for Disease Control and Prevention (CDC) comparing the four surveys. We evaluated these federal surveys for methodological rigor and to determine the extent to which the data could be used to offer a national perspective on bullying in schools. This included interviews with researchers, as appropriate. We determined that the data were sufficiently reliable for our purposes.

Because the survey data were collected using generalizable probability samples, each sample is only one of a large number of samples that might have been selected. Since each sample could provide different estimates, we have used 95 percent confidence intervals to show the precision of our results. All percentage estimates used in this report have 95 percent confidence intervals of within plus or minus 2.1 percentage points, unless otherwise noted. In addition to sampling error, surveys are subject to nonsampling error, such as how respondents interpret questions, including any bias or tendency to provide desirable or false answers. Although respondents self-reported being bullied in the surveys, this approach to measuring the prevalence of bullying is viewed as valid and robust, according to some previous research on bullying. We also reviewed certain other relevant research as appropriate. Finally, we conducted interviews with officials at Education and HHS to obtain information about how different surveys and research define bullying and about their efforts to develop a uniform definition of bullying for research purposes.

To describe the effects of school bullying on victims, we conducted a literature review. To identify studies on the effects of bullying on victims, we searched numerous databases, including MEDLINE, Embase, Education Resources Information Center (ERIC), ProQuest, PsycINFO, Sociological Abstracts, Social Services Abstracts, and WorldCat. We also consulted with officials at Education, HHS, and Justice to identify relevant studies. Because of the extensive available literature, we limited our review to meta-analyses, which analyze other studies and synthesize their findings. Additionally, we limited our review to articles published in peer-reviewed journals. Our literature search covered studies published from 2001 through July 2011. Subsequently, new meta-analyses were brought to our attention by agency officials, and we reviewed them to the extent they were consistent with our search criteria. We identified seven relevant studies. We reviewed the methodologies of these studies to ensure that they were sound and determined that they were sufficiently reliable. The meta-analyses synthesized the findings of studies of school-aged children in a variety of countries, including the United States. They were not designed to establish causal relationships, nor are the results of the meta-analyses generalizable.
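As a point of reference for the confidence intervals discussed above, the standard textbook large-sample 95 percent confidence interval for an estimated proportion $\hat{p}$ from a simple random sample of size $n$ is

$$\hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}.$$

This formula is shown only for illustration and is not drawn from the surveys' documentation; the federal surveys we reviewed generally employ complex sample designs, so their published confidence intervals, such as the plus or minus 2.1 percentage point bound noted above, also reflect survey weights and design effects.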
To describe approaches that selected states and local school districts are taking, we reviewed relevant state bullying laws and regulations, as well as guidance and other documents from eight selected states, and conducted interviews with state education officials. We selected eight states (Arkansas, California, Illinois, Iowa, Massachusetts, New Mexico, Vermont, and Virginia) based on the following criteria: each has bullying laws or regulations, and they vary with respect to bullying definitions, enumeration of protected classes, geographic location, and student enrollment. Further, we selected three of these states (New Mexico, Vermont, and Virginia), which vary on the characteristics listed above, to review policies and guidance of local school districts and conduct interviews with school officials. We selected a total of six school districts, two in each state: Albuquerque Public Schools, Rio Rancho Public Schools, Fairfax County Public Schools, Warren County Public Schools, Windham Southeast Supervisory Union, and Windham Southwest Supervisory Union. The six school districts were selected from the National Center for Education Statistics (NCES) Common Core of Data Public Elementary/Secondary School Universe Survey: School Year 2008–09. The Common Core of Data (CCD) nonfiscal surveys consist of data submitted annually to NCES by state educational agencies (SEA). School districts and schools were selected to reflect a range of size and urbanicity (urban, suburban, or rural), as well as racial and socioeconomic diversity. Participation in the National School Lunch Program was used as a proxy for socioeconomic status. We held interviews with central administrators, principals, school staff, and parents. In several instances, multiple individuals attended an interview; for example, six parents attended one parent interview. During the interviews, we asked about measures taken to prevent bullying, school officials' responses to bullying behavior, and lessons learned. We analyzed narrative responses thematically.

To identify legal options that federal and selected state governments have in place when bullying leads to allegations of discrimination, we reviewed relevant federal and state anti-discrimination laws and regulations and selected federal court decisions, as well as guidance and other documents of the federal government and the eight states selected for review. We also conducted interviews with federal officials in the Department of Education's Office for Civil Rights (OCR) and the Department of Justice's Civil Rights Division (CRT), Educational Opportunities Section, as well as with state officials. State officials were from various departments, including state educational agencies and human rights or civil rights commissions or departments. During the interviews with federal and state officials, we asked about legal provisions, discrimination complaint processes, complaint resolutions, and legal mechanisms available to individuals who are not members of a protected class.

To address how key federal agencies are coordinating their efforts to combat school bullying, we interviewed officials from Education, HHS, and Justice and reviewed relevant documents. These departments were represented by officials from many component agencies. For Education, we spoke to officials from the Office of Safe and Healthy Students (formerly the Office of Safe and Drug-Free Schools), OCR, and the Office of Special Education Programs.
For HHS, we spoke to officials from the Office of the Assistant Secretary for Public Affairs, the Office of the Assistant Secretary for Planning and Evaluation, CDC, the Health Resources and Services Administration (HRSA), the National Institutes of Health, and the Substance Abuse and Mental Health Services Administration (SAMHSA). For Justice, we spoke to officials from the Office of Community Oriented Policing Services, CRT, and the Office of Justice Programs. We focused on these three departments, given their leadership roles on an interdepartmental coordinating committee and website (www.stopbullying.gov, last accessed May 22, 2012) on bullying. We analyzed coordination of efforts based on key practices that GAO has previously identified as effective coordination practices. For example, in our interviews and analysis, we asked questions about such effective coordination practices as agreeing on roles and responsibilities or establishing compatible policies, procedures, or other means to operate across agency boundaries. We focused on these practices, among those GAO has identified, based on our professional judgment and their relevance for the coordinated federal efforts regarding bullying. Related documents that we reviewed included plans, meeting agendas, conference materials, interagency agreements, and educational materials provided to the public. We also attended the second annual bullying prevention conference of the interdepartmental coordinating committee. In addition, we conducted interviews with Education, HHS, and Justice officials about efforts within their departments to combat bullying. We also reviewed relevant documents and agency websites.

We conducted this performance audit from April 2011 through May 2012 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Table 5 compares how four nationally representative surveys define and measure bullying.

This appendix provides estimates of the overall prevalence of youth who reported being bullied, by sex and by race/ethnicity. Three of the four federal surveys that we reviewed present an estimate of the overall prevalence of being bullied, and for these three surveys, the results are shown separately by sex and by race/ethnicity. Unless otherwise noted, all estimates in these tables have 95 percent confidence intervals of within plus or minus 2.1 percentage points. The difference between boys and girls reporting that they were bullied was statistically significant in one survey (YRBS), with girls reporting a higher percentage of bullying, but was not statistically significant in the other two surveys (SCS and HBSC). See table 6.

White youth reported being bullied at higher percentages than African-American youth in two of the three surveys (YRBS and HBSC), while the other survey found no difference. In two of the three surveys (YRBS and HBSC), differences between the overall prevalence for white compared with Hispanic youth and for African-American youth compared with Hispanic youth were not statistically significant. In the other survey, Hispanics reported a lower percentage of bullying than whites or African-Americans.
Asian-American youths reported a lower percentage of bullying in the one survey (NCVS) that captured information for that demographic group. See table 7.

This appendix provides estimates of the prevalence of being bullied for certain types of bullying behaviors. Three of the four federal surveys that we reviewed provide estimates of the prevalence of being bullied for certain types of behaviors, and the results of each are shown separately. Unless otherwise noted, all estimates in these tables have 95 percent confidence intervals of within plus or minus 2.1 percentage points. These surveys also found that boys may be subject to somewhat different types of bullying than girls. For example, estimates from SCS and HBSC showed that a higher percentage of boys than girls were bullied physically, while girls were more commonly bullied than boys through rumors or social exclusion, which are examples of relational bullying, or bullying using interpersonal relationships.

Table 11 presents selected coordination practices that we have previously found help to enhance and sustain coordination across federal agencies, as well as the ways that key interdepartmental activities against bullying reflect those practices.

In addition to supporting antibullying activities, the Departments of Education, HHS, and Justice support more broadly focused services and programs that may be used for bullying prevention. Generally, bullying prevention represents one of many allowable activities within these services and programs. Within each agency, officials identified a range of services and programs, including technical assistance, funding opportunities, information sharing, and research, that may include bullying prevention. For example, HHS provides funding for the Children's Safety Network (CSN), a national resource center for the prevention of childhood injuries and violence. See table 12. While these programs and services generally support a broader range of activities than just bullying prevention, several of them have been used to directly address bullying. For example, several grantees have used Safe Schools/Healthy Students funding to implement bullying prevention programs. Also, in fiscal year 2010, 2 of the 11 SEAs that were awarded Safe and Supportive Schools grants devoted resources to bullying prevention. While these services and programs do not always focus exclusively on bullying prevention, officials across the three federal departments (Education, HHS, and Justice) agreed that the programs' emphasis on violence reduction and healthy behaviors can help prevent and reduce bullying behavior, even if the funds are not used specifically to address bullying.

Bryon Gordon (Assistant Director), Ramona L. Burton (Analyst-in-Charge), Susannah Compton, Alex Galuten, Avani Locke, Ashley McCall, Sheila McCoy, Jean McSween, Mimi Nguyen, Karen O'Conor, Kate O'Dea, Michael Pahr, Rebecca Rose, Regina Santucci, Matthew Saradjian, Ronni Schwartz, and John Townes made significant contributions to all aspects of this report.
Millions of youths are estimated to be subject to bullying in U.S. schools. GAO was asked to address (1) what is known about the prevalence of school bullying and its effects on victims, (2) approaches selected states and local school districts are taking to combat school bullying, (3) legal options federal and selected state governments have in place when bullying leads to allegations of discrimination, and (4) key federal agencies' coordination efforts to combat school bullying. GAO reviewed research on the prevalence of bullying and its effects on victims; analyzed state bullying laws and school district bullying policies; and interviewed officials in 8 states and 6 school districts. States were selected based on various characteristics, including student enrollment and their definitions of bullying. Also, GAO reviewed selected relevant federal and state civil rights laws and interviewed officials from Education, HHS, and Justice.

School bullying is a serious problem, and research shows that it can have detrimental outcomes for victims, including adverse psychological and behavioral outcomes. According to four nationally representative surveys conducted from 2005 to 2009, an estimated 20 to 28 percent of youths, primarily middle and high school-aged youths, reported that they had been bullied during the survey periods. However, differences in definitions and in the questions posed to youth respondents make it difficult to discern trends and affected groups. For example, the surveys did not collect demographic information by sexual orientation or gender identity. The Departments of Education (Education) and Health and Human Services (HHS) are partially addressing the issue of inconsistent definitions by collaborating with other federal departments and subject matter experts to develop a uniform definition of bullying that can be used for research purposes. However, gaps in knowledge about the extent of bullying of youths in key demographic groups remain.

According to Education, as of April 2012, 49 states have adopted school bullying laws. The laws in the 8 states that GAO reviewed vary in who is covered and in the requirements placed on state agencies and school districts. For example, 6 of the states cover a mix of different demographic groups, referred to as protected classes, such as race and sex or gender, in their bullying laws, while 2 states do not include any protected classes. With respect to school districts, each of the 6 districts GAO studied adopted policies that, among other things, prohibit bullying and describe the potential consequences for engaging in the behavior. Also, school district officials told GAO that they developed approaches to prevent and respond to bullying. For example, several school officials said they implemented a prevention-oriented framework to promote positive school cultures. Both state and local officials expressed concerns about various issues, including how best to address incidents that occur outside of school.

Federal civil rights laws can be used to provide protections against bullying in certain circumstances, but certain vulnerable groups are not covered and therefore have no recourse at the federal level. For example, federal agencies lack jurisdiction under civil rights statutes to pursue discrimination cases based solely on socioeconomic status or sexual orientation.
While some state civil rights laws provide protections to victims of bullying that go beyond federal law, federal complainants whose cases are dismissed for lack of jurisdiction are not always informed about the possibility of pursuing claims at the state level.

Three federal departments, Education, HHS, and the Department of Justice (Justice), have established coordinated efforts to carry out research and broadly disseminate information on bullying to the public, including establishment of a central website and an informational campaign to raise awareness about bullying. In addition to these efforts, Education has issued information about how federal civil rights laws can be used to address bullying of protected classes of youths and is conducting a comprehensive study of state bullying laws and how selected school districts are implementing them. However, no similar information is being gathered on state civil rights laws and procedures that could be helpful in assessing the adequacy of legal protections against school bullying.

GAO recommends that Education compile information about state civil rights laws and procedures that relate to bullying and inform complainants about state legal options, and that Education, HHS, and Justice develop information about bullied demographic groups in their surveys and assess whether legal protections for these groups are adequate. Education disagreed with the first recommendation, and we clarified it to address some of the department's concerns. Education is considering the second recommendation, agreed with the third, and provided information on efforts related to the last. HHS agreed with our recommendations. Justice did not provide a written response.
VA’s efforts to assist Gulf War veterans began in 1992 with the implementation of the Persian Gulf Registry Health Examination Program. In 1993 and 1997, respectively, the Congress passed legislation giving Gulf War veterans special eligibility (priority care) for VA health care and allowing VA expanded authority to treat veterans for health problems that may have resulted from their Gulf War service. In addition to assisting Gulf War veterans in gaining entry into the continuum of VA health care services and providing them with a free physical examination, the Registry database provides a communications link with Gulf War veterans, a mechanism to catalogue prominent symptoms at the time of their examination, and a way to report exposures and diagnoses. In 1995, VA modified the Registry program by implementing the Uniform Case Assessment Protocol, designed in conjunction with DOD and the National Institutes of Health, to help guide physicians in the diagnosis of symptoms reported by veterans who had served in the Gulf War. VA requires medical facilities having a Gulf War program to designate a Registry physician to be responsible for implementing the protocol. The Registry physician is expected to follow VA’s Uniform Case Assessment Protocol, which prescribes a two-phase examination. Phase I requires Registry physicians to (1) obtain a detailed medical history from the veteran, which includes collecting information on exposure to environmental and biochemical hazards; (2) conduct a physical examination; and (3) order basic laboratory tests. Phase II, which is to be undertaken if veterans still have debilitating symptoms that are undiagnosed after phase I, includes additional laboratory tests, medical consultations, and symptom-specific tests. If followed as written, the protocol gives the Registry physician very little flexibility in deciding what tests should be performed. At the completion of these examinations, veterans are to receive personal counseling about their examination results and need for additional care. In addition, the Registry physician is charged with preparing and signing a follow-up letter explaining the results of the Registry examination. Veterans with continuing medical problems who do not receive a diagnosis after phase II may be sent to one of VA’s four Persian Gulf Referral Centers for additional testing and evaluation. Registry physicians are also responsible for clinically managing the treatment of Gulf War veterans and serving as their primary health care provider unless another physician has been assigned. VA’s implementing guidance acknowledges that the veterans’ Registry physician, or designee, plays a significant role in determining the perceptions veterans have concerning the quality of VA health care services and of their treatment by VA health care providers. VA’s Environmental Agents Service is responsible for overseeing the operation and implementation of the Registry program. The program is currently available to Gulf War veterans at 162 VA medical centers and 50 outpatient clinics nationwide, including Guam, the Philippines, and Puerto Rico. While it is widely accepted that almost 700,000 U.S. service members took part in the Gulf War from August 2, 1990, to July 31, 1991, estimating how many of these veterans suffer from illnesses related to their service in the Gulf region is much more problematic. 
Although there are certain symptoms associated with Gulf War veterans who are ill, there are currently no case definitions for Gulf War illnesses in use by VA. Veterans may have multiple symptoms or only a few, with no particular pattern of association. Past data collection efforts have been too limited to provide a case definition. In addition, federally supported research projects and Gulf War Registry programs have generally failed to study the conjunction of multiple symptoms in individual veterans. Further, VA's Under Secretary for Health stated that while the Registry's record of veterans' symptoms, diagnoses, and exposures makes it valuable for health surveillance purposes, the voluntary, self-selected nature of the database means that the exposures, illnesses, and health profiles of those in the Registry cannot be generalized to represent those of all Gulf War veterans. Consequently, only a rough estimate of those potentially suffering from Gulf-related illnesses is possible on the basis of data that report numbers of Gulf War veterans who received services for health complaints of any type.

To obtain a general sense of how many veterans may have suffered adverse health effects as a result of their Gulf War service, we requested information from several VA and DOD health care program databases. We found, however, that while these databases did report on the number of Gulf War veterans receiving certain health care services, they did not indicate whether these services were provided for Gulf War-related conditions. For example, VA reports that over 68,000 Gulf War veterans have participated in its Persian Gulf War Registry program by receiving the Registry examination and being included in the Registry database. However, about 12 percent of these veterans reported no adverse health problems as a result of their Gulf War service. According to the Under Secretary for Health, these veterans wished to participate in the examination only because they were concerned that their future health might be affected as a consequence of their service in the Gulf War. VA also reports that more than 22,000 Gulf War veterans have been hospitalized, about 221,000 veterans have made outpatient visits to VA facilities, and approximately 83,000 veterans have been counseled in Vet Centers since the war. Like VA's Registry data, however, these figures give no indication of how many of these veterans suffer from illnesses that actually resulted from their Gulf War experience. DOD reports that about 33,000 service members have participated in its Registry examination program but, like VA, does not have information that would definitively link the service members' exposure histories to their health problems. Combined, VA and DOD report that over 100,000 Gulf War veterans have requested a Registry examination.

Although VA has a program in place to help guide physicians in the diagnosis and treatment of Gulf War veterans, this program has not been fully developed and implemented to effectively meet their health care needs. Specifically, VA's diagnostic protocol is not being consistently implemented, and VA referral centers are being underutilized. As a result, some veterans may not be receiving a clearly defined diagnosis for their symptoms. Communication between physicians and veterans has also been less than satisfactory. Mandated personal counseling of veterans often does not occur, and the form letters sent regarding examination results are not always clear and understandable.
Health care that incorporates diagnosis, treatment, and follow-up is also not well coordinated for Gulf War veterans. Instead, Gulf War veterans are typically referred to one of several primary care teams or physicians who are not always familiar with the symptoms commonly reported by Gulf War veterans. Moreover, VA does not effectively monitor the clinical progress of Gulf War veterans and thus has no way of knowing whether these veterans are getting better as a result of the care provided.

Our reviews of Gulf War veterans' medical records, observation of program operations during site visits, and discussions with program officials, including physicians, showed that VA's Registry examination protocol is not being consistently implemented in the field. For example, our review of veterans' medical records revealed that at two of the six locations we visited, the Registry physicians often did not review the results of the examination performed by the physician assistants or nurse practitioners, as required by the Registry protocol. Moreover, while the protocol mandates that disabled veterans without a clearly defined diagnosis are to receive additional baseline laboratory tests and consultations, these tests and consultations were not typically provided in the facilities we visited. Our review of 110 veterans' medical records indicated that, in 45 cases, veterans received no, or minimal, symptom-specific testing for unresolved complaints or undiagnosed symptoms.

Furthermore, veterans suffering from undiagnosed illnesses were rarely evaluated in VA's referral centers. Of the approximately 12,500 cases of veterans reported as having health complaints but no medical diagnosis, only about 500 have been evaluated at a referral center. Of the 110 medical records we reviewed, including those for veterans with symptoms for whom no diagnosis was provided (24) and those with undiagnosed or unexplained illnesses (30), only 1 record indicated that the veteran was sent to a referral center for evaluation. While VA central office officials told us that some medical centers are now capable of conducting the more detailed diagnostic tests and analyses typically offered at the referral centers, we found little evidence at the sites we visited that this is taking place. For example, at one full-service medical center we visited, 14 of the 20 cases we reviewed received no diagnosis, and 17 received very little, if any, testing. Veterans we spoke with who received care from this facility indicated that they were extremely frustrated and believed that they were not getting adequate testing for their ailments. Some veterans told us that the examination they received seemed too superficial to fully evaluate the complex symptoms they were experiencing. According to a VA program official, health care providers reported that they spend, on average, about 1 hour performing each Registry examination. In addition, 24 percent of the records we reviewed (26 of 110) indicated that the diagnoses reached were essentially restatements of the veterans' symptoms. Of these 26, only 11 received symptom-specific treatment or follow-up and referral.

Several of the physicians we interviewed believed they should have the flexibility to use their own clinical judgment in determining which tests are necessary to establish a diagnosis and treatment plan. One VA facility official stated that some physicians do not know that phase II tests are required.
One physician stated that a good physician should, in most cases, be able to diagnose a veteran's symptoms without using the more complex battery of tests mandated by the protocol. We were told that some of the phase II symptom-specific tests are invasive procedures that could have serious side effects and that, unless the tests are specifically needed, they should not be given routinely just because a veteran has symptoms. Other physicians resisted prescribing some phase II tests because of the associated costs. Furthermore, some physicians told us that they believe there are no physical bases for the symptoms Gulf War veterans are experiencing and that these symptoms are often psychologically based and not very serious. According to the Assistant Chief Medical Director responsible for the Registry program, Registry physicians are expected to follow the diagnostic protocol as laid out in program guidance. She added that program guidance is designed to direct physicians' behaviors, not necessarily their attitudes. She told us, however, that the unsympathetic attitudes displayed by some physicians toward Gulf War veterans are inexcusable and cannot be tolerated.

Physicians and veterans in two of the six facilities we visited were often frustrated with the process they were required to follow in obtaining certain tests and consultations. Physicians told us that the lack of specialists in these facilities forced them to refer patients to other VA medical facilities for needed services, even though this often resulted in increased travel for the veteran, delays in scheduling appointments, and increased waiting times to have consultations and receive test results. Officials at both facilities told us that coordination between VA medical facilities affects not only Gulf War veterans but the entire veteran population.

According to VA guidance, counseling veterans about their examination results is one of the key responsibilities of the Registry physician. While VA's guidance provides some criteria on what information should be shared during counseling, the American Medical Association's Physicians' Current Procedural Terminology indicates that counseling discussions with a patient and/or family may concern one or more of the following areas: (1) diagnostic results, impressions, and/or recommended studies; (2) prognosis; (3) risks and benefits of management (treatment) options; (4) instructions for treatment or follow-up; (5) importance of compliance with chosen treatment; (6) risk-factor reduction; and (7) patient and family education.

We found that personal counseling between veterans and their physicians often does not take place. For example, veterans we spoke with indicated that personal counseling is generally not provided on the results of the Registry examination. This is true for veterans who receive a diagnosis as well as for those who do not. Our review of 110 veterans' medical records revealed that only 39 records, or 35 percent, contained physician documentation of one-to-one counseling about examination results and a discussion of a proposed plan of care. All 39 records were from one facility. VA medical staff, as well as veterans we talked with, stated that feedback on examination results is typically provided through a form letter. The letter generally states the results of laboratory tests and provides a diagnosis if one was reached. Some of the form letters sent at the completion of the examination generated considerable anger among Gulf War veterans.
These veterans interpreted the letters to mean that since their test results came back normal, the physicians believed that either there was nothing medically wrong with them or their conditions were not related to their service in the Gulf. Furthermore, at one of the facilities we visited, we were told that counseling letters for more than half of the cases we reviewed were sent to the veterans without incorporating the results of all diagnostic tests.

“Gulf War veterans with complex medical conditions may require frequent medical follow-up by their primary care teams and various other health care providers. Utilizing case management techniques to coordinate health care services for Gulf War veterans with complex and difficult to manage conditions will improve both treatment effectiveness and patient satisfaction.”

In September 1997, VA released an educational video on the use of case management as a tool to improve quality of care in medical centers throughout the VA system. The video cited the Birmingham VA Medical Center's program of case management, which offers continuing and coordinated care for Persian Gulf veterans, as a noteworthy model. In response to a congressional mandate, VA has also recently initiated demonstration projects to test health care models that incorporate approaches such as case managers and specialized clinics.

Based on our work, we found that continuous coordinated care was provided at two of the six facilities we visited through the efforts of an individual Registry physician and clinical staff members serving Gulf War veterans. For example, at one facility, veterans have the option of receiving treatment at a Persian Gulf Special Program Clinic. Although it operates only on Tuesdays and Fridays, the clinic allows veterans to receive primary care from medical staff experienced with Gulf War veterans and their concerns. Veterans are still referred to hospital specialists as necessary, but responsibility for tracking patients' overall medical care is assigned to the Persian Gulf clinic's case manager, who is supervised by the Persian Gulf Registry physician. The case manager is a registered nurse who serves as an advocate for veterans and facilitates communications among patients, their families, and the medical staff. The clinic staff also interacts regularly with the Persian Gulf Advisory Board, a local group of Persian Gulf veterans who meet weekly at the VA medical center to discuss specific concerns. Veterans we spoke with were pleased with the clinic and supported its continued operation. They believed that it reflects a VA commitment to take seriously the health complaints of Gulf War veterans. They also believed that the clinic gives veterans access to physicians who understand and care about the special needs of Gulf War veterans and their families. In addition, veterans we talked with who use this facility indicated a high level of satisfaction with the care they received.

At the second facility, the Registry physician served as the veterans' primary care physician. This physician ordered all necessary consults and scheduled follow-up visits for Gulf War patients. He also tracked veterans' visits and documented their environmental exposure histories. Veterans at this facility had a clear point of contact whenever they had questions or concerns about their treatment. Veterans we spoke with told us that they were very satisfied with the treatment they received and were extremely complimentary of the care and concern shown by the Registry physician.
In contrast, at four of the six facilities we visited, we observed very little clinical continuity or coordination among medical professionals during the diagnostic and treatment phases of care provided to Gulf War veterans. Specifically, at these four facilities we found that veterans with symptoms were not always sent for treatment and follow-up care, and when they did get treatment, they were assigned to primary care teams that treat the general hospital population. Furthermore, some physicians told us that clinical information obtained during the Registry examination is not always forwarded to or used by primary care physicians. As a result, the physicians treating these veterans may not be aware of, or responsive to, their unique experiences and symptoms. Many of the veterans we spoke with who were treated for their symptoms at these four facilities told us that they believed their treatment was ineffective. In fact, several veterans believed their medication made them feel worse and stopped using it. Primary care physicians we spoke with acknowledged that greater continuity between the diagnostic and treatment process would benefit both the physician and the veteran.

In February 1998, VA's Under Secretary for Health said in testimony before the House Committee on Veterans' Affairs that a case management approach intended to improve services to Persian Gulf veterans with complex medical problems had been implemented in 20 of VA's 162 medical centers that have a Persian Gulf Registry Health Examination Program. To determine the specific focus and nature of the case management approaches being used, we contacted each of the 20 facilities identified by VA. Based on our work, we found that continuous coordinated care for Persian Gulf veterans was in place at 8, or 40 percent, of the 20 facilities. Specifically, these eight facilities provided Gulf War veterans with coordinated and continuing clinical care through (1) a single Registry physician who conducts the examination and provides follow-up treatment, (2) a primary care team dedicated to diagnosing and treating Persian Gulf veterans, or (3) a coordinated effort between the Registry physician who performs the examination and a Persian Gulf primary care team that provides treatment. Although each facility's approach is slightly different, all eight provide links between the diagnostic and treatment phases of care and are focused on the special needs of Gulf War veterans. The remaining 12 facilities generally do not provide focused, coordinated, or continuing care programs for Gulf War veterans beyond the care available to all veterans. Two of these facilities cited lack of staff as the reason for not attempting or continuing Gulf War dedicated care. For example, one of the two had a dedicated program but recently lost physician staff through budget cuts and has not been able to restart its program.

Increased continuity and coordination between the diagnosis and treatment of Gulf War veterans offers several advantages. It validates veterans' concerns: by having physicians clearly identified as responsible for the care and treatment of Gulf War veterans, these veterans are more confident that VA takes their complaints seriously. It enhances opportunities for veterans to receive follow-up care: after completing the Registry examination, veterans have an immediate point of contact should they have questions about their condition or require follow-up care.
It allows for increased awareness of VA's referral centers: one of the primary care doctors we spoke with was not aware of the availability of VA referral centers for veterans with undiagnosed conditions or who do not respond to treatment. If designated physicians were responsible for the treatment of Gulf War veterans, greater awareness and use of the referral centers would likely take place. Finally, it allows for a better treatment focus: if designated physicians see the majority of Gulf War veterans, there is an increased likelihood of recognizing symptomatic and diagnostic patterns and developing an effective treatment program. This approach may also lead to greater understanding of the nature and origin of Gulf War illnesses.

Periodic reevaluation and management of patient symptoms, diagnoses, and treatment is part of continuous and coordinated care. This is important for Persian Gulf veterans because of the need to ensure that their diagnoses are correct, assess their progress, check for new symptoms, and determine how they are responding to their treatment plans. Although VA officials contend that Gulf War veterans are generally being treated appropriately for the symptoms they display, they also recognize the need to evaluate health outcomes and treatment efficacy. In February 5, 1998, testimony before the House Committee on Veterans' Affairs, VA's Under Secretary for Health acknowledged the need to establish mechanisms to evaluate Gulf War veterans' clinical progress and identify effective treatment outcomes. He stated that VA and DOD have jointly asked the National Academy of Sciences' Institute of Medicine (IOM) to provide advice and recommendations on how best to develop and implement a methodology to collect and analyze this type of information. IOM is expected to issue its final report by June 1999.

Gulf War veterans are generally dissatisfied with the diagnostic care and treatment they receive from VA for Gulf War-related symptoms. This sentiment was expressed in conversations and communications we had with individuals and groups of Gulf War veterans, in the results of our nationwide survey of veterans who received the Persian Gulf Registry health examination in calendar years 1996 and 1997, and in findings from VA's satisfaction survey of Gulf War veterans who received outpatient care from fiscal year 1992 through 1997.

In both individual and group discussions and in correspondence, Gulf War veterans indicated that while they greatly appreciated the efforts of some individual doctors, they were often dissatisfied with the overall health care they received from VA. They cited delays in getting the Registry examination; superficial examinations, particularly when they were experiencing complex health problems; and attitudes among health care professionals that implied veterans' physical problems were “all in their heads.” Veterans voiced displeasure with the lack of personal counseling and the use of form letters to explain the results of their examinations. They added that these form letters generated considerable anger because they were often interpreted to mean that VA physicians did not believe that veterans were suffering from any physical illness. Gulf War veterans also indicated that they clearly preferred the use of specific physicians to treat their conditions. Veterans noted that designated physicians tended to be genuinely concerned about their patients and more likely to take their health problems seriously.
Recognizing that those who initially communicated with us might be more dissatisfied than the typical Gulf War veteran who receives care, we designed and administered a mail-out questionnaire that we sent to an adjusted random sample of 452 Gulf War veterans. Our sample was selected from 8,106 veterans who received VA’s Registry examination nationwide during calendar years 1996 and 1997. Our survey population was limited to 1996 and 1997 Registry participants because this group received the examination after VA’s most recent update to the protocol, which was implemented as of January 1, 1996. The questionnaire collected information on veterans’ satisfaction with (1) the Persian Gulf Registry Health Examination, (2) the treatment VA provided, and (3) sources of health care other than VA. Sixty-three percent, or 283, of the 452 veterans surveyed responded. Analyses of the characteristics of nonrespondents showed them to be similar to those of respondents, thus increasing our confidence that our survey results are representative of the views of the sampled population. Based on our survey results, we estimate that the median age of veterans in our survey was 33. Seventy-six percent of them were no longer active in the military service, while 12 percent were active in a Reserve Unit, 10 percent were members of the National Guard, and 2 percent were active duty members of the U.S. Armed Services. Because the Persian Gulf Registry examination was first offered in 1992, we asked the veterans to indicate the reasons why they did not receive the examination until 1996 or 1997. One half reported that they did not know that VA offered the examination. Some also reported that they waited to take the examination because they tried to ignore their symptoms at first (40 percent), they believed their problem would go away on its own (33 percent), or their symptoms developed several years after the war was over (19 percent). Fourteen percent were treated by non-VA providers before they requested VA health care. Almost 60 percent of the veterans rated their current health as either poor or fair, while only about 10 percent rated their health as excellent or very good. In addition, over 80 percent indicated that compared to their health before going to the Gulf, their health now was worse. About three-fourths of the veterans reported experiencing health problems that they believed were caused by their service in the Persian Gulf. Table 1 shows the extent to which various problems were reported by these veterans. Based on our survey results, we estimate that about half of the veterans who received the Registry examination in 1996 and 1997 were dissatisfied with that examination. These veterans often expressed dissatisfaction with specific aspects of VA’s examination process. For example, they indicated that VA health providers are generally not very good at communicating with their patients. Specifically, about half of these veterans indicated that they were dissatisfied with their physicians’ ability to diagnose their symptoms or explain their diagnosis once one was reached. Moreover, 42 percent were dissatisfied with the explanations provided regarding the need for specific tests, and about 50 percent were not satisfied with the explanations given on the results of these tests. Forty percent were dissatisfied with the thoroughness of the examination. 
We estimate that about 45 percent of the veterans who received the examination in 1996 and 1997 and who had health problems they believed were caused by their Gulf War service received treatment from VA. However, about 41 percent of the veterans in our survey who received treatment reported that, overall, they were dissatisfied with the VA treatment services. Forty-eight percent of the veterans who received treatment told us that VA provided little or only some of the treatment they believed they needed. They also indicated that they did not receive treatment they felt was necessary because VA health providers did not believe they needed it (42 percent), treatment was never scheduled (28 percent), or VA providers determined that the veterans’ health problems were not related to the Gulf War (22 percent). Even when treatment was provided, veterans were often not satisfied. About 50 percent of respondents who received treatment indicated that they were dissatisfied with their treatment outcomes. While many veterans we surveyed were dissatisfied with the overall service they received from VA, they were satisfied with certain aspects of the care that VA provided. For example, over half of the veterans we surveyed reported that they were satisfied with the attention (52 percent) and respect (62 percent) paid to them by individual VA physicians. Almost one half of the veterans in our survey indicated that they sought health care from physicians and medical professionals outside VA for problems they believe were caused by their service in the Persian Gulf. These veterans indicated that they sought care from non-VA health providers because they did not realize that their symptoms were related to their Gulf War service (36 percent), were unaware that they were eligible for the VA services they needed (29 percent), had to wait too long for a VA appointment (26 percent), or found the VA facility too far away (20 percent). Sixty-four percent of the respondents also submitted written comments with their surveys. These comments revealed that veterans who receive the examination continue to question VA’s willingness to provide them with an adequate diagnosis and treatment for the ailments they are experiencing. For example, some veterans felt that the Registry examination represented little more than a token effort on the part of VA to pacify Gulf War veterans and that the examination did not provide any meaningful answers to their health problems. Other veterans noted that VA in general, and some health care providers in particular, failed to express genuine concern for the needs of Gulf War veterans. Specifically, these veterans reported that some VA health professionals did not take their problems seriously; questioned their motives in requesting health care services; treated them with disrespect and a lack of sensitivity; and failed to provide adequate explanations of test results, treatment, and follow-up care. In describing his experience with VA, one Gulf War veteran noted that the doctor who examined him laughed at the problems associated with his medical condition.
“He made me feel very embarrassed and humiliated,” the veteran stated, adding, “I feel his attitude was anything but professional.” The same veteran wrote that he felt the person who examined him had already made up his mind that “there was nothing to Persian Gulf Syndrome and that we (veterans) are either just looking for compensation for nothing, or have just convinced ourselves we’re sick when we’re not.” This veteran also mentioned that he did not believe that the physician took the Registry examination seriously, performed it thoroughly, or provided adequate treatment for the health problems that were identified. A second veteran described his examination this way: “When I arrived I was given a list of questions. I filled out the questionnaire and then was taken back to see the doctor. I gave him the questionnaire; he looked it over and left the room. I was then told by a nurse that I could go. The doctor never asked me one question about my health or my problems. I believe that the doctor could not have cared about my health.” A third veteran noted that after receiving the examination, he was not notified of its results nor provided with a treatment plan to address his health problems. Another veteran wrote of similar frustrations when trying to receive a diagnosis for his ailments. “. . . easier to live with,” he said, “than trying to get someone [in VA] to find out what [is] wrong.” A fifth veteran indicated that, after receiving an examination, he expected to be given treatment for his continuing health problems but was told by VA personnel that his visit was “just Registry.” Other comments we received revealed that veterans are greatly concerned about the impact their Gulf War service has had on the health of their family members. Specific health concerns they noted include miscarriages, Down syndrome, spina bifida, immune system deficiencies, and the premature deaths of young children. Although the majority of comments we received were critical, several veterans reported satisfaction with the care they received from VA. Some veterans attributed their satisfaction to the efforts and concerns displayed by individual physicians. For example, one veteran stated, “I have been treated very well at the VA center. . . . The doctor I see always answers my questions and always asks what problems I’m having.” VA’s National Customer Feedback Center administered a survey in 1997 to over 41,000 Gulf War veterans who had received care in a VA outpatient facility during fiscal years 1992 through 1997. Forty percent of the veterans surveyed responded. The survey found that Gulf War era veterans are not satisfied with the continuity and overall coordination of the care they received. The VA survey also showed that Gulf War veterans, as a group, are generally more dissatisfied with VA care than VA’s general outpatient population that responded to a similar satisfaction survey at an earlier date. For example, while 62 percent of the general patient population responded that the overall quality of care provided by VA was excellent or very good, only 38 percent of Gulf War veterans responded in this way. Twenty-nine percent of the Gulf War veterans rated the quality of VA’s care as fair to poor. Furthermore, while 54 percent of the general population reported they would definitely choose to come to the same VA facility again, only 24 percent of Gulf War veterans reported that they would.
In September 1996, VA requested that the IOM conduct an assessment of the adequacy of its Uniform Case Assessment Protocol to address the wide-ranging medical needs of Gulf War veterans and to review the implementation of the protocol. IOM’s final report, issued in early 1998, represents another evaluation of VA’s Gulf War program and discusses several inconsistencies in the implementation of its protocol. For example, IOM reports that the diagnostic process followed in some VA facilities does not adhere to the written protocol. While stating that it is encouraging that practitioners exercise their clinical judgment to determine what consultations and tests are best for an individual patient, IOM noted that such deviation introduces inconsistency in evaluations across facilities and variations in data recording and reporting. These work against achieving one of the purposes for which the system was developed—to identify previously unrecognized diagnostic entities that could explain the symptoms commonly reported by Gulf War veterans with unexplained illnesses. The IOM report recognizes that while a great deal of time and effort was expended to develop and implement VA’s diagnostic program for Gulf War veterans, new information and experiences are now available that can be used to improve VA’s protocol and its implementation. IOM concluded that the goal of implementing a uniform approach to the diagnosis of Gulf War veterans’ health problems is admirable and should be encouraged but recommended that a more flexible diagnostic process be adopted and that the protocol’s phase I and phase II designations be eliminated. It also recommended that each VA facility adopt and implement a process that would provide Gulf War veterans with an initial evaluation; symptom-specific tests, as needed; and referral for treatment when a diagnosis is reached. If a clear diagnosis cannot be reached, the patient would receive additional evaluation and testing or be sent to a center for special evaluation. Gulf War patients who receive a diagnosis and are referred for treatment would also receive follow-up evaluations under IOM’s proposal. IOM stated that a defined approach, consisting of periodic reevaluation, treatment, or referral to a special center, must be established for those who remain undiagnosed or whose major symptoms have not been accounted for. The IOM report also noted that some patients could have diseases that cannot be diagnosed at present because of limitations in scientific understanding and diagnostic testing. IOM’s report stated that this group of undiagnosed patients, some of whom are designated as having an “unexplained illness,” will contain a diversity of individuals who will require monitoring and periodic reassessment. IOM specifically recommended that VA plan for and include periodic reevaluations of these undiagnosed patients’ needs. VA currently has efforts under way to evaluate the IOM recommendations and to develop plans to implement them, where feasible. Although VA has made progress at some of its locations, it has not fully implemented an integrated diagnostic and treatment program to meet the health care needs of Gulf War veterans. While VA has developed a Registry protocol that provides an approach for evaluating and diagnosing Gulf War veterans, that process is not being consistently implemented in the field.
As a result, some veterans may not receive a clearly defined diagnosis for their symptoms, and others may be confused by the diagnostic process, thus causing frustration and dissatisfaction. Furthermore, while VA recognizes that continuous and coordinated patient care will improve both treatment effectiveness and patient satisfaction, many VA facilities have not implemented such an approach for Gulf War veterans. An integrated process should focus services on the needs of Gulf War veterans and should provide a case management approach to the diagnosis, treatment, and periodic reevaluation of their symptoms. Such a focused and integrated process is particularly important for Gulf War veterans because baseline health and postdeployment status information is often not available for this group of veterans. An integrated health care process that provides continuous and coordinated services for Gulf War veterans would not only improve patient satisfaction but also could assist VA health care providers in recognizing symptomatic and diagnostic trends and help identify appropriate and effective treatment options. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to uniformly implement a health care process for Gulf War veterans that provides for the coordination of diagnoses of illnesses, treatment of symptoms and illnesses, evaluation of treatment effectiveness, and periodic reevaluation of those veterans whose illnesses remain undiagnosed. In commenting on a draft of this report, VA expressed general agreement with our findings and conclusions and concurred with our recommendation that it implement a more uniform, coordinated health care process for Gulf War veterans. VA further detailed its program improvement strategies, which it believes will significantly enhance program responsiveness to the needs of Gulf War veterans and ensure a more integrated treatment process at all organizational levels. VA also mentioned that the timing of our review precluded observation of improvements resulting from these program improvement strategies. We believe that we have appropriately recognized relevant initiatives in the body of our report and have noted that many of the initiatives are preliminary or in the planning stage. In two instances, VA took issue with information contained in our draft report. First, VA asserted that our report concludes that “specialized Gulf War clinics are the only effective means to provide coordinated, quality health care.” We disagree with this characterization. Our conclusions focus on the need for an integrated health care process that “provides continuous and coordinated services for Gulf War veterans” and do not identify Gulf War clinics as our preferred model of care. One of the examples of coordinated care cited in our report resulted from the efforts of an individual Registry physician who did not provide care through a specialized Gulf War clinic. As demonstrated by our discussion of the six facilities we visited, we believe that coordinated, quality care can be provided in a variety of settings and through various approaches. Second, VA said that it believes our report misinterprets the guidance provided for implementation of the phase II Registry examination. VA states that the phase II protocol should be used to “evaluate veterans with debilitating unexplained illnesses, and not for unexplained symptoms, as GAO states” in the background section of the report.
We have made adjustments to the report as appropriate to clarify VA’s criteria for initiation of phase II evaluations. The full text of VA’s comments is included in appendix II. Copies of this report are being sent to the Secretary of Veterans Affairs, other congressional committees, and interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-7101 if you or your staff have any questions or need additional assistance. Major contributors to this report included George Poindexter, Stuart Fleishman, Patricia Jones, Jon Chasson, and Steve Morris. Our review consisted primarily of four data collection efforts: (1) reviews of existing databases showing the number of Gulf War veterans that VA and DOD report as potentially suffering from related illnesses, (2) work performed at VA’s central office and one Veterans Integrated Service Network (VISN) office, (3) case studies at six VA medical facilities, including discussions with groups of Gulf War veterans, and (4) implementation of a questionnaire sent to a nationwide sample of veterans who received the Persian Gulf Registry health examination. We collected data on the number of veterans who received either some type of VA health care service or who participated in either VA’s or DOD’s Registry examination program. With the exception of VA’s Persian Gulf Registry database, however, we did not address the accuracy or reliability of either agency’s databases. Data on VA medical center inpatient and outpatient services were taken from data collected and reported by VA’s Gulf War Information System, which, according to VA officials, is the most reliable information available on those services. We also met with officials from VA’s Systems Division in Austin, Texas, to discuss the validity of the Persian Gulf Registry Health Examination Program database. Our work in VA’s central office in Washington, D.C., and VISN 7 in Atlanta, Georgia, involved primarily the collection of program descriptive material and summary data. We interviewed officials from the Veterans Health Administration (VHA), its Division of Environmental Medicine and Public Health, the Environmental Agents Service, and the VISN 7 office. We collected and reviewed studies, reports, program information, and data from these offices and compared that information with observations made during visits to VA medical facilities and information provided by the Gulf War veterans who communicated with us. We also reviewed testimony, legislation, and reports by others, including the Presidential Advisory Committee on Gulf War Veterans’ Illnesses and the National Academy of Sciences’ Institute of Medicine (IOM). We conducted case study site visits to VA medical facilities in six locations—Albuquerque, New Mexico; Atlanta, Georgia; Birmingham, Alabama; El Paso, Texas; Manchester, New Hampshire; and Washington, D.C. We also visited VA Persian Gulf referral centers in Birmingham, Alabama, and Washington, D.C. We selected these sites judgmentally to include VA facilities that (1) were in different geographical locations, (2) were varied in size and workload, (3) differed in terms of having an onsite referral center, and (4) implemented their Persian Gulf Registry Health Examination Program using different approaches. During our site visits, we interviewed Registry program officials on various aspects of program operations, reviewed samples of case files, and discussed specific cases with program physicians.
At each VA medical facility we visited, we randomly selected 10 to 40 medical records/case files of program participants who had received a Registry examination after January 1, 1996. We reviewed a total of 110 medical records. While these cases were selected randomly, they are not a representative sample of each facility’s total number of Registry program participants. Through our case study file reviews and discussions with program officials, we obtained detailed information on the types of diagnostic and treatment services provided to Gulf War veterans at each facility. In addition, through our review of medical records, we attempted to identify all efforts to provide continued, coordinated care to veterans who suffer from complex medical problems at the facilities we visited. We met with groups of Gulf War veterans served by each of the six VA facilities we visited to collect information on their Gulf War experiences, their past and present health status, and the types of health care services they received from VA. We inquired specifically about their satisfaction with VA’s Persian Gulf Registry examination and the treatment they received for their symptoms. In addition, we asked them to fill out a questionnaire; however, their responses were not part of our random nationwide survey. We also contacted the 20 VA medical centers that VA identified as using case management to improve services to Gulf War veterans. One of the 20 centers was also one of our case study locations, and there we discussed program issues with physicians and program personnel. At the 19 sites we did not visit, we talked with physicians and program administrators by telephone to determine the extent to which case management had been implemented and had contributed to continuous and coordinated care for Gulf War veterans. Gulf War veterans with whom we initially spoke often indicated that they believed VA facilities failed to provide them with needed care or that they were dissatisfied with the care provided by VA. Recognizing that those who were most unhappy might be the most likely to contact us or to be critical when we talked with them, we designed and administered a mail-out questionnaire. We sent the questionnaire to a nationwide random sample of Gulf War veterans who received VA’s Registry examination during 1996 and 1997. These 2 years were chosen because VA’s most recent update to its protocol, which was intended to make the examination more uniform across all VA facilities, was implemented on January 1, 1996. The questionnaire collected information on the respondents’ (1) satisfaction with the Persian Gulf Registry examination, (2) satisfaction with treatment VA provided, and (3) sources of health care outside of VA. We selected a sample of 477 veterans from a universe of 8,106 veterans who received the Registry examination in 1996 and 1997. To these veterans we mailed (1) a predelivery notification postcard about 2 weeks before questionnaires were mailed and (2) an initial mailing of the questionnaire with a cover letter describing the nature of our survey effort. Of the initial 477 questionnaires mailed, about 100 were returned as nondeliverable. In most cases we were able to mail the questionnaire to a second address by using forwarding addresses provided by the Post Office or addresses provided by a secondary source. Ultimately, 23 veterans in our sample did not receive a questionnaire because of inadequate or incorrect address information. 
In addition, two questionnaires were returned by family members who reported that the veterans were deceased. Therefore, our adjusted random sample mailing size was 452. Other efforts used to improve the response rate included sending a postcard reminder, 1 week after the initial questionnaire mailing, to all veterans sampled and sending a second questionnaire to all nonrespondents about 5 weeks after the initial mailing. Two hundred eighty-three usable questionnaires were returned. Consequently, the response rate for this survey (defined as the number of usable questionnaires returned divided by the number of questionnaires delivered) was 63 percent. Our survey sample allowed us to estimate population proportions with sampling errors that do not exceed plus or minus 9 percentage points. Since failure to obtain a response from a sampled veteran could affect the representativeness of the survey data, we conducted analyses to assess the impact of nonresponse. Using information available in VA’s Persian Gulf Registry database, we compared respondents and nonrespondents using a variety of demographic and medical characteristics, including whether or not the veteran reported symptoms at the time the examination was administered and self-reported assessments of functional impairments and general health. We found no relationship between any of these characteristics and whether or not the veteran responded to our questionnaire. On this basis, we believe that respondents did not differ significantly from nonrespondents and, therefore, are representative of the population sampled. Throughout our review, veterans voluntarily contacted us by telephone, e-mail, and letter to discuss their experiences with illnesses they believe are related to their Gulf War service and the health care they have received from VA. We documented these contacts and used the veterans’ comments in our report where appropriate.
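To make the survey arithmetic above concrete, the following is a minimal worked sketch of the adjusted sample size, the response rate, and a maximum sampling error. It assumes simple random sampling, a two-sided 95 percent confidence level, and the conservative proportion p = 0.5; the report does not state the exact computation used, so this is an illustration rather than a reconstruction of the actual method.

```latex
% Illustrative survey arithmetic (assumptions stated in the text above).
% Adjusted sample: 477 questionnaires mailed, minus 23 that could not be
% delivered and 2 returned because the veterans were deceased.
\[
n_{\text{adj}} = 477 - 23 - 2 = 452, \qquad
\text{response rate} = \frac{283}{452} \approx 0.626 \approx 63\%.
\]
% Maximum margin of error for an estimated proportion, with a finite
% population correction for the universe of N = 8{,}106 Registry
% participants who received the examination in 1996 and 1997:
\[
\text{MOE} = z\,\sqrt{\frac{p(1-p)}{n}}\,\sqrt{\frac{N-n}{N-1}}
           = 1.96\,\sqrt{\frac{0.25}{283}}\,\sqrt{\frac{8106-283}{8106-1}}
           \approx 0.057 .
\]
% A full-sample estimate thus carries roughly a 6-percentage-point margin
% of error under these assumptions; the plus or minus 9-point ceiling
% quoted above is consistent with estimates based on smaller subgroups of
% the 283 respondents (for example, n = 117 gives about 0.091).
```

Under these assumptions, the 9-percentage-point figure reads as a worst case across all of the estimates presented, including those computed on subsets of respondents, rather than as the precision of any single full-sample proportion.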
Pursuant to a congressional request, GAO provided information on the Department of Veterans Affairs' (VA) provision of health care services to Gulf War veterans, focusing on: (1) the number of veterans VA and the Department of Defense (DOD) report as suffering from Gulf War-related illnesses and the criteria used to identify these illnesses; (2) how VA diagnoses, counsels, treats, and monitors Gulf War veterans for the health problems they report; and (3) Gulf War veterans' satisfaction with the health care they receive from VA. GAO noted that: (1) while the number of Persian Gulf War veterans who participated in the military operations known as Desert Shield and Desert Storm is well established at almost 700,000, the number who actually suffer, or believe they suffer, from illnesses related to their Gulf War service remains uncertain 7 years after the war; (2) the primary difficulty in assessing the impact of such illnesses lies in the fact that the link between the veterans' symptoms and the causes of those symptoms has not yet been identified scientifically; (3) thus, while some data on Gulf War veterans' symptoms have been collected and categorized, it is not yet known whether the symptoms reported are the direct result of the veterans' Gulf War service; (4) combined, VA and DOD report, however, that about 100,000 Gulf War veterans have requested Persian Gulf Registry examinations because of war-related health concerns; (5) in response to a variety of symptoms and illnesses reported by Gulf War veterans, VA implemented a program in 1992 to help them receive VA health care; (6) this free diagnostic and referral process has two stages: (a) an initial medical history and a physical examination with basic laboratory testing; and (b) if needed, further evaluation through specialist consultation and additional symptom-specific testing; (7) 212 VA facilities offer the Registry program to Gulf War veterans; (8) however, VA's guidance regarding the evaluation and diagnosis of Gulf War veterans is not being consistently implemented in some of its medical facilities; (9) while VA records show that thousands of veterans remain undiagnosed, only about 500 veterans have been sent to referral centers for additional evaluations, as recommended by the Registry guidance; (10) mandated personal counseling of veterans often does not occur, and the form letters sent to veterans at the completion of the Registry examination do not always sufficiently explain test results or diagnoses, often leaving veterans frustrated and confused; (11) VA's guidance provides that Registry physicians are responsible for giving veterans medical examinations and necessary treatment; (12) VA has not fully developed and implemented an integrated diagnostic and treatment program to meet the health care needs of Gulf War veterans; (13) VA's diagnostic and treatment implementation problems are reflected by Gulf War veterans' general dissatisfaction with their health care; and (14) based on GAO's nationwide survey, over one half of the veterans who received the Registry examination in 1996 and 1997 were dissatisfied with the examination they received.
Health Center Program grantees are private, nonprofit community-based organizations or, less commonly, public organizations such as public health department clinics. Health centers funded through HRSA’s Health Center Program are typically managed by an executive director, a financial officer, and a clinical director, and provide comprehensive primary care services, including enabling services, such as translation and transportation, that help facilitate access to health care. HRSA identified 19 program requirements that health center grantees must meet to continue receiving grant funding; the agency indicated that these requirements are based on Section 330 of the Public Health Service Act and its implementing regulations. HRSA groups these 19 program requirements into four broad categories: patient need, the provision of services, management and finance, and governance. Table 1 provides a summary of the 19 requirements. HRSA uses a competitive process to award Health Center Program grants. This process applies both to health centers receiving a grant for the first time—known as new starts—and to existing health center grantees that must compete periodically for grants. In either case, prospective or existing grantees are required to submit the applicable grant application to HRSA and, if approved, receive grants to provide services to individuals located in a specified area, known as their service area. HRSA approves funding for health centers for a specified time period, known as a project period. Currently, HRSA approves new start grantees’ funding for a 2-year project period, and existing grantees’ funding for project periods of 1, 3, or 5 years. The length of the project period for existing grantees is determined, in part, based on how well grantees are complying with the 19 program requirements. Each year of a project period is referred to as a budget period. After the competitive award of a grant for the first year, or budget period, HRSA awards noncompetitive continuation grants for each remaining budget period if funds are available and the grantee demonstrates satisfactory progress in providing its services. A grantee demonstrates satisfactory progress by submitting a budget period progress report for HRSA’s review. In both the competitive grant application and the budget period progress report, a grantee is, among other things, required to describe the services offered, provide a listing of its key management staff, and include a detailed narrative description of the current status of, and any changes in, its operations related to the 19 program requirements. In addition to maintaining compliance with the 19 program requirements and submitting annual budget period progress reports, health center grantees are required to periodically submit other information to HRSA. For example, grantees are required to submit to HRSA an annual independent financial audit in accordance with federal audit requirements. In addition, in the first quarter of every year, grantees must submit a variety of information to HRSA’s Uniform Data System (UDS); UDS tracks a variety of information on Health Center Program grantees, including information on their patient demographics (e.g., race/ethnicity, insurance status, income level); revenues; expenses; quality of care measures; and health center staffing and patient utilization patterns. HRSA’s BPHC has primary responsibility for overseeing health center grantees’ compliance with program requirements.
This includes monitoring grantees to determine if they are in compliance with the 19 program requirements and addressing cases of grantee noncompliance. BPHC has four operating divisions, each containing five branches; the branches correspond to specified geographic locations. Within each branch there are project officers who are responsible for the ongoing oversight of an assigned portfolio of grantees. As of March 2012, HRSA had 111 project officers, whose portfolios ranged from 4 to 17 health center grantees; the average portfolio size was 10 grantees per project officer. Each project officer reports to a supervisor, known as a branch chief. HRSA project officers use an on-line electronic system, called the Electronic Handbook, to document their oversight activities, as well as to correspond and exchange documents with health center grantees. The system contains several different modules within which project officers record such information. To help them conduct their oversight, project officers have a variety of internal and external resources. For example, officials from BPHC’s Office of Policy and Program Development can assist project officers in interpreting program guidance and Health Center Program requirements. In addition, project officers have access to consultants through an over $30-million, 4-year contract with Management Solutions Consulting Group, a nationwide management consulting firm that provides HRSA access to approximately 300 to 350 consultants. The consultants are to provide a range of services, including conducting site visits and helping assess the results of health center grantees’ annual financial audits. HRSA primarily relies on three methods to oversee grantees’ compliance with Health Center Program requirements: annual compliance reviews, site visits, and routine communications. Additionally, when HRSA identifies noncompliance with these requirements, the agency has a recently revised process for addressing it with grantees. To oversee health center grantees’ compliance with the 19 program requirements, HRSA requires project officers to conduct an annual compliance review for each of the grantees in their assigned portfolios. During this review, project officers are responsible for determining whether a health center grantee is in compliance with each of the 19 program requirements. The annual compliance review process begins when a health center grantee submits an application for a competitive grant or submits a budget period progress report to HRSA. When conducting a compliance review, HRSA project officers are responsible for reviewing information contained in the grantee’s submission, such as information on the grantee’s policies and a narrative explaining how the grantee believes it meets, or plans to meet, the 19 program requirements. HRSA also expects project officers to review other available information about the grantee, such as results from the grantee’s annual financial audit and UDS information. Project officers generally have the option to contact the grantee during their annual review if they need clarification about the information in a grantee’s application or budget period progress report. HRSA provides guidance to project officers for determining whether grantees are meeting each of the 19 program requirements.
In particular, HRSA provides project officers with a list of key factors and questions related to the 19 program requirements to consider when making their assessment of compliance. Table 2 includes examples of the factors and questions provided to project officers for the 6 program requirements we selected for more in-depth review. To conduct and document their compliance review, project officers use an electronic evaluation tool that is contained in the Electronic Handbook. The evaluation tool lists each of the 19 program requirements and, among other things, asks project officers to indicate whether the grantee is in or out of compliance. If, after reviewing available information, the project officer remains uncertain whether or not the grantee has demonstrated compliance with a requirement, then, according to HRSA’s guidance, the project officer should indicate that the grantee is in compliance until noncompliance is clearly determined. In such cases, HRSA’s guidance instructs project officers to document their concerns about compliance by writing a comment in a text field of the evaluation tool. In addition, as part of the review, a project officer may decide to designate a performance improvement area. According to HRSA, performance improvement areas are actions or other measures that project officers recommend to help grantees improve their delivery of services and, ultimately, patient outcomes. Performance improvement areas are intended to promote continuous improvement for grantees above and beyond compliance with the 19 program requirements; they are not intended to address findings of noncompliance with these requirements. Once project officers complete their review, branch chiefs are responsible for reviewing and approving project officers’ assessments, including their determinations regarding compliance and the identification of performance improvement areas. According to HRSA officials, branch chiefs are responsible for providing leadership and guidance in areas such as program evaluation and monitoring, which establishes an important quality control for the annual compliance reviews. While HRSA has conducted annual reviews of grantees’ compliance for several years, the process for conducting these reviews has changed. To improve the oversight process, HRSA officials revised the annual compliance evaluation tool in 2008 to link the annual compliance reviews to each of the Health Center Program requirements. As a result of this change, project officers now make an assessment of whether grantees are in compliance with each requirement, rather than just an overall assessment of compliance. In addition, HRSA officials indicated that they continually assess the annual review process and have recently made changes, such as requiring grantees to submit more detailed narrative descriptions and an updated sliding fee discount schedule for the fiscal year 2012 reviews. HRSA’s process for identifying noncompliance is insufficient: annual compliance reviews do not identify all instances of noncompliance, and the extent to which HRSA uses site visits to assess compliance is unclear but appears to be limited. Moreover, HRSA’s project officers do not consistently identify and document grantee noncompliance. Finally, HRSA’s ability to address noncompliance is unclear because the agency’s process for doing so has recently changed. HRSA’s annual compliance reviews do not identify all instances of health center grantee noncompliance that other methods, such as site visits, have identified.
Among the eight grantees included in our review, we identified 10 instances where the project officer determined that a grantee was in compliance with a program requirement during the annual compliance review, but a site visit a short time later found the grantee to be noncompliant with the same requirement. For example, in April 2010, a project officer completed an annual compliance review and found that a grantee was in compliance with 16 of the 19 program requirements. However, in July 2010, just 3 months later, a HRSA consultant completed an operational assessment site visit and found that the grantee was not in compliance with 10 of the 19 requirements; this included 7 requirements for which the project officer had previously concluded the grantee was in compliance. During the annual compliance review, for instance, the project officer determined that the grantee was in compliance with both the board composition and board authority program requirements. However, the site visit found, among other things, that the board had less than the minimum number of required members, did not meet monthly as required, and was not fulfilling its required duties and responsibilities to oversee the operations of the center—key aspects of these 2 program requirements. HRSA officials could not definitively explain why the site visit identified these issues of noncompliance when the annual compliance review had failed to do so. HRSA officials speculated that because this grantee was having management problems, its performance may have rapidly deteriorated after the annual review was completed. Although the grantee may have been experiencing management problems, the consultant’s site visit report indicates that the grantee did not fall out of compliance with all 7 of these requirements in the intervening 3 months. Rather, the report indicated that several of these noncompliance issues were ongoing, including one that had existed for several years. Additionally, none of the 10 annual compliance review decisions included an indication that the project officer was uncertain about whether or not the grantee demonstrated compliance. Thus, it does not appear that the affirmative compliance decisions were due to project officers indicating that a grantee is in compliance until noncompliance is clearly determined. In addition to finding instances where the annual compliance review failed to identify grantee noncompliance, our review of HRSA’s oversight documentation of selected grantees revealed that project officers frequently determined a grantee was in compliance with selected program requirements without having sufficient information to make such decisions. Our analysis of 48 compliance decisions that project officers made during their fiscal year 2011 annual compliance reviews for our eight selected grantees found that in 43 cases (90 percent) project officers determined grantees were in compliance with requirements. However, in 23 of these 43 cases (53 percent), we were unable to find sufficient information to support the project officer’s compliance decision, and the project officers did not indicate that they were unable to clearly determine compliance, which is what HRSA guidance instructs them to do if they are uncertain about whether or not a grantee has demonstrated compliance. For example: Project officers determined that all eight selected health center grantees were in compliance with the after hours coverage requirement.
However, it appears that six of the eight project officers had insufficient information when making their assessments. Our review of HRSA’s oversight documentation found that information grantees provided ranged from a sentence or two in their budget period progress report narrative stating they had a 24-hour answering service that will arrange for contact with an on-call clinician, to no mention of how they were meeting the after hours coverage requirement. In contrast, we found the other two project officers had information from recent site visits to assess compliance with this requirement. Project officers determined that six of the eight selected health center grantees were in compliance with the sliding fee discounts requirement. However, we found that four of six project officers who made this determination did not, at the time, have a current, updated version of their grantees’ sliding fee discount schedule to review. These project officers made their compliance decisions based on limited information, including grantee assertions that they had a current and up-to-date schedule. According to HRSA officials, beginning with the fiscal year 2012 annual compliance reviews, grantees will be required to submit an updated sliding fee discount schedule. While HRSA requires project officers to document their basis for finding a grantee out of compliance with a requirement, it does not require project officers to document their basis for finding a grantee in compliance. Therefore, there were often no records documenting how or why a project officer determined a health center grantee was in compliance with the requirements. In 26 of the 43 compliance decisions (60 percent) we reviewed in which project officers determined grantees were in compliance with selected program requirements, the project officers had not documented the basis for their decisions. The lack of documentation is not consistent with internal control standards for the federal government, which indicate “that all transactions and other significant events need to be clearly documented” and stress the importance of “the creation and maintenance of related records which provide evidence of execution of these activities as well as appropriate documentation.” The absence of such documentation may limit HRSA’s ability to ensure that project officers have identified all cases of grantee noncompliance during the annual compliance review and make it more difficult for HRSA to keep track of issues affecting grantee compliance especially when oversight responsibilities transfer among staff. For example, without such documentation, it is difficult for supervisors to appropriately assess the basis for project officers’ decisions. Further, according to HRSA, about 40 percent of grantees have had a change in their assigned project officer and branch chief over the past few years due in part to HRSA’s hiring of a significant number of new project officers to meet the expected increase in the number of health center grantees. While HRSA officials indicated they have a process to ensure a smooth transition between oversight staff, we found the absence of documentation can present challenges. For example, each of the eight project officers we interviewed had been assigned to their grantee for 2 years or less, and some of the project officers were unable to answer questions about why previous project officers determined their grantees were in compliance with specific requirements. 
Additionally, when project officers are uncertain about compliance, HRSA instructs project officers to consider grantees in compliance. As noted earlier, HRSA’s guidance indicates that project officers are to document instances when compliance is unclear by writing a comment in a text field of the evaluation tool, but HRSA has no centralized or automated mechanism to ensure this occurs. The lack of such a mechanism, coupled with the lack of documentation of project officers’ basis for finding a grantee in compliance, limits HRSA’s ability to determine whether a project officer decided a grantee was in compliance with a requirement because the file contained evidence demonstrating compliance, or because the project officer was unsure about compliance and simply defaulted to an affirmative compliance decision without including documentation of his or her concerns. Data limitations make it difficult to determine the extent to which HRSA uses site visits to assess compliance; however, our analysis of these data suggests that the number of compliance-related site visits is limited. HRSA does not have aggregate, readily available data on site visits conducted prior to January 2011. Consequently, to determine which health center grantees had compliance-related site visits prior to January 2011, HRSA officials would have to manually compile a list by accessing each site visit report located in each individual grantee’s file. To help the agency in planning site visits, HRSA began requiring in January 2011 that all site visits be recorded in its on-line Electronic Handbook. However, the reliability of at least some of the data elements, including the type of site visit, is uncertain. After a site visit record is created in the Electronic Handbook, which is the first step in documenting a planned site visit, the system prevents project officers from editing certain fields, including the field for the type of site visit conducted. As a result, if the site visit type changes after project officers create the site visit record, the record will be inaccurate. Further, project officers are not required to update certain other fields, such as the site visit start and end dates, which increases the potential for data inaccuracies. While HRSA officials indicated that the type of site visit does not frequently change, when we compared the site visit data to information contained in site visit reports, we found that the type of site visit had changed for one of the five visits that took place at our selected grantees since January 2011. After discussing this with HRSA officials, the officials indicated that they would alter their electronic system to allow project officers to revise the site visit type; however, they have yet to do so. In addition, HRSA officials indicated the electronic system does not have a mechanism to ensure that a cancelled site visit is properly recorded. Therefore, when a planned site visit is cancelled, the record is removed only if a project officer proactively takes action to remove it. If the project officer fails to remove the record, the database will contain inaccurate information. From the programwide site visit data we received, we determined that the data included at least one site visit that had been cancelled but not removed from the database. However, there may be other instances that we were unable to identify based on the available data.
As noted earlier, HRSA considers site visits an important tool for assessing and assuring grantee compliance with Health Center Program requirements. According to our analysis, site visits were conducted at 417 (37 percent) of the 1,128 health center grantees between January 1, 2011, and October 27, 2011. A total of 472 site visits were conducted during this period because some grantees had multiple visits. Although HRSA’s data on the type of site visit conducted have inaccuracies, these data suggest that only a small portion of grantees had compliance-related visits. HRSA’s data indicate that 58 grantees, or 5 percent of all health center grantees, had site visits to review compliance with all 19 program requirements during this time period. Another 6 percent of grantees had a site visit that may have assessed compliance with some of the 19 program requirements. The remaining grantees either did not have a site visit during the period or had a site visit that was not intended to assess compliance with the 19 program requirements. Although HRSA’s standard operating procedures do not currently specify how frequently compliance-related site visits should be conducted, HRSA officials indicated that, beginning in 2012, the agency is requiring that project officers schedule an operational assessment—a site visit intended to assess compliance with all 19 program requirements—for each grantee at least every 5 years. At their current rate of roughly 70 such visits per year (58 operational assessments were conducted during the roughly 10-month period we examined), and assuming the number of grantees remains the same, it would take HRSA over 15 years (1,128 divided by 70 is about 16) to conduct an operational assessment visit at each of the over 1,100 health center grantees. HRSA officials recognized that in order to meet this goal, they will have to increase the number of operational assessment site visits conducted annually. Along those lines, officials indicated that HRSA increased the amount of funding and the planned number of operational assessment site visits to be provided through its current nationwide contract for conducting site visits. HRSA’s project officers do not consistently identify noncompliance and document it through the placement of conditions. For three of the six program requirements we reviewed, the HRSA project officers we interviewed did not have consistent interpretations of what constitutes compliance and what should therefore result in the placement of a condition on a health center’s grant, raising concerns about the adequacy of HRSA’s guidance and training for project officers. The project officers we spoke with had different interpretations regarding the board composition, after hours coverage, and key management requirements. Health center grantees are required, by statute and regulations, to have a governing board, the majority of whose members are patients of the center and who demographically represent the population served by the grantee. However, some project officers we spoke with indicated that the lack of an appropriately representative board would not result in a condition; these project officers did not consider the lack of an appropriately representative board an issue of noncompliance.
While HRSA’s guidance for project officers indicates that, at a minimum, a grantee’s after hours coverage system should ensure that patients have telephone access to a clinician who can assess whether they need emergency medical care, some of the project officers we spoke with indicated that they would consider using a performance improvement area, not a condition, if a health center had only an answering machine directing patients to the emergency room. Other project officers stated that a grantee with only such an answering machine would not be in compliance with this requirement. HRSA guidance instructs project officers to assess whether a health center grantee maintains a fully staffed management team as appropriate for the size and needs of the health center. When asked about the criteria they use for determining whether grantees are in or out of compliance with the key management staff requirement, two project officers told us that they base their compliance decision on whether the grantee’s management staff includes a Chief Executive, Financial, and Medical Officer. In contrast, the other six project officers said that a grantee did not necessarily need to have all of these positions staffed. We also found one instance where HRSA’s guidance on what constitutes compliance is inconsistent with Health Center Program requirements, and thus project officers may not be making correct decisions regarding grantee compliance and appropriately addressing noncompliance. In this particular instance, HRSA guidance instructs project officers to use a performance improvement area, not a condition, if a grantee has not used the most recent federal poverty guidelines for developing its sliding fee discounts; the guidance therefore indicates that grantees are to be considered in compliance with the requirement even if their sliding fee discount schedule is outdated. Health Center Program regulations, however, require a grantee’s sliding fee discounts to be based on the most recent guidelines. As a result, a grantee that has not used the correct federal poverty guidelines should be deemed noncompliant with this program requirement, and a condition should be placed on its grant. When we raised this issue with HRSA officials, they acknowledged that the guidance was not consistent with requirements and that it would be revised. They also confirmed that if a grantee has not used the correct federal poverty guidelines in its sliding fee discount schedule, a project officer should deem the grantee noncompliant and a condition should be issued. HRSA officials further indicated they are developing a policy notice on the sliding fee discounts program requirement, and the guidance will specify that a grantee’s sliding fee discounts must be revised annually to reflect updates to the federal poverty guidelines. Finally, we found instances where grantee noncompliance was identified through site visits, but HRSA failed to place conditions on the grants. According to HRSA’s standard operating procedures, when a site visit determines that a grantee is noncompliant with at least one of the 19 program requirements, a project officer must place a condition on the health center’s grant.
However, as part of our review of the eight selected grantees, we identified five site visits from 2009 through August 2011 that clearly identified findings of noncompliance with some of the 19 program requirements, but HRSA did not issue conditions to grantees for the majority of these findings. For example, one site visit found that a grantee was not in compliance with 16 of the 19 requirements, but HRSA did not issue any conditions to the grantee. At the time of the site visit, this grantee had been receiving HRSA funding for about 15 months and had been experiencing compliance issues for at least 12 months. Despite this, HRSA officials told us that because it was a new grantee that was receptive to technical assistance, HRSA wanted to give the grantee more time to address its compliance issues before placing numerous conditions on it. Another site visit found that a grantee was not in compliance with the board authority and conflict-of-interest policy requirements, but HRSA did not issue any conditions to the grantee as a result of this site visit. Instead, HRSA arranged for a consultant to provide the grantee with technical assistance to revise and update its bylaws to address these issues.

The extent to which HRSA’s revised process—the progressive action process—is adequately resolving conditions or terminating grantee funding is unclear because HRSA’s experience with this revised process is too recent to make any overall assessment. The progressive action process, which was implemented in April 2010, can potentially take over a year to move through all of the phases. Completing the first three phases of the progressive action process can take up to 9 months, while grantees with conditions that allow for a 120-day implementation phase can take up to 19 months to fully complete the process. Thus, HRSA has limited experience with the process to date, and does not have sufficient data to assess the extent to which the process is effective in bringing grantees into compliance or in addressing those grantees that have failed to achieve compliance by the end of the final phase. During the first 18 months that the progressive action process has been in place—from April 9, 2010, through October 7, 2011—HRSA issued 1,017 conditions for grantee noncompliance to a total of 417 different grantees (approximately 37 percent of all grantees), with some grantees having multiple conditions. Over half of the conditions were for grantee noncompliance with requirements in the management and finance category. (See app. II for additional information about the conditions placed during this time period.) As of November 10, 2011, 775 conditions (76 percent) were resolved and 240 conditions (24 percent) were still in process. The remaining 2 conditions, which belonged to the same grantee, were not resolved in the allotted time; thus, HRSA officials indicated that the agency is in the process of terminating the grantee’s funding.

HRSA’s Health Center Program provides access to health care for people who are uninsured or who face other barriers to receiving needed care. Over the past decade the program has expanded and, given the additional funding appropriated by PPACA, will likely continue to do so over the next few years. As such, it will play an increasingly greater role as a health care safety net for vulnerable populations.
Particularly in light of the growing federal investment in health centers, it is important for HRSA to ensure that health centers are operating effectively and in compliance with Health Center Program requirements. HRSA has taken steps to improve its oversight of health center grantees over the past few years, such as by standardizing its process for addressing grantee noncompliance. Despite these efforts, however, HRSA’s oversight is insufficient to ensure that it consistently identifies all instances of grantee noncompliance with Health Center Program requirements. Although HRSA has devoted substantial resources to overseeing grantees—including having over 100 project officers to perform annual compliance reviews and having a more than $30 million contract for consultants who conduct site visits and provide other assistance—limitations in HRSA’s oversight methods have affected the agency’s performance in identifying issues of noncompliance. The annual compliance reviews place too little emphasis on documenting project officers’ basis for making their compliance decisions, and HRSA’s guidance instructs project officers to indicate that a grantee is in compliance with Health Center Program requirements even if the project officer is uncertain about the grantee’s compliance. Further, HRSA does not have a systematic process for tracking and following up on instances when project officers are uncertain about a grantee’s compliance to ensure that compliance is ultimately demonstrated. The lack of such a process, coupled with the lack of documentation of project officers’ basis for finding a grantee in compliance, limits HRSA’s ability to assess whether project officers accurately determined that grantees were actually in compliance with a requirement or whether they were simply unsure about compliance. This is especially problematic because project officers we interviewed had different interpretations of what constitutes compliance with certain requirements and, therefore, when they should place a condition on a health center’s grant.

Additionally, while HRSA officials indicated, and we found, that site visits are an important tool for overseeing grantees and verifying compliance with Health Center Program requirements, the agency’s use of compliance-related site visits appears to be limited. HRSA has a goal of having an operational assessment visit to each grantee at least once every 5 years. The agency’s ability to effectively meet this goal, however, is challenged by a lack of comprehensive and reliable data on which grantees have had various types of site visits. To the extent HRSA is able to develop and analyze accurate data on site visits, it will be in a better position to target its resources to those grantees that may be in greater need of such visits. Furthermore, HRSA needs to ensure that when site visits are conducted, the information obtained is appropriately used, for example, by ensuring that instances of noncompliance identified during a site visit result in the placement of a condition on a health center’s grant.

Finally, HRSA’s recently revised process for addressing grantee noncompliance with the 19 program requirements seems to provide both the agency and grantees with a uniform structure for addressing compliance deficiencies. However, given the length of time the progressive action process provides grantees to address noncompliance, HRSA has had limited experience with the process, and thus it is too early to tell whether this revised process is effective.
As HRSA gains more experience with the process, it will be important for the agency to assess whether the process is functioning as intended and whether any changes are needed to make the process more effective.

To improve HRSA’s ability to identify and address noncompliance with Health Center Program requirements, the Administrator of HRSA should take the following six actions:

Develop and implement a mechanism for recording, tracking, and following up on instances when project officers are unable to determine compliance during the annual compliance review process.

Require that when completing annual compliance reviews, project officers clearly document their basis for determining that grantees are in compliance with program requirements.

Clarify agency guidance and provide training, as needed, to better ensure that project officers are accurately and consistently assessing grantees’ compliance with program requirements.

Ensure that site visit data contained in HRSA’s electronic system are complete, reliable, and accurate to better target the use of available resources and to help ensure that all grantees have compliance-related site visits at regular and timely intervals.

Develop and implement procedures to ensure that instances of noncompliance with program requirements consistently result in the placement of a condition on a health center’s grant.

Periodically assess whether its new progressive action process for addressing grantee noncompliance, including the time frames allotted for grantees to respond, is working as intended and make any needed improvements to the process.

We provided a draft of this report to HHS for its review, and HHS provided written comments (see app. III). HHS concurred with all six of our recommendations and indicated that while resource availability may affect the extent of certain actions, HRSA is already in the process of planning and implementing many of the recommendations. For example, HHS indicated that HRSA is in the process of enhancing the electronic evaluation tool, known as the Program Analysis and Recommendations tool, which project officers use to conduct and document annual compliance reviews. HRSA is also working on issuing additional policies, procedures, and guidance documents to better ensure that project officers are consistently assessing grantee compliance and documenting noncompliance.

While HHS concurred with our recommendations and indicated that the report’s findings were helpful in informing ongoing efforts to improve oversight of the Health Center Program, it did not concur with what it characterized as some of the central conclusions drawn from the report’s findings. First, HHS indicated that it did not concur with what it characterized as our conclusion that HRSA’s process for identifying noncompliance is insufficient because annual compliance reviews do not identify all instances of noncompliance. HHS indicated that HRSA’s active monitoring of grantees is not limited to the project officer’s annual compliance review, but is accomplished through a variety of available resources including, but not limited to, the review of grantee data reports, independent annual audit reports, quarterly conference calls, site visits, and correspondence from the grant recipient. We agree with HHS’s statement, and our report reflects that HRSA uses multiple methods to oversee grantees. However, we believe that HHS mischaracterized the nature of our conclusion.
Our conclusion that HRSA’s oversight of health center grantees is insufficient was not based solely on our assessment of HRSA’s annual compliance reviews, but rather was based on our assessment of several key oversight methods described throughout our report, including HRSA’s use of site visits, the consistency of project officers’ oversight, and the use of programwide data to aid oversight across grantees.

HHS also did not concur with what it characterized as our conclusion that HRSA’s process for identifying noncompliance is insufficient because HRSA’s project officers do not consistently identify and document grantee noncompliance. In explaining its concerns, HHS focused on instances where project officers cannot definitively determine whether or not grantees are complying with program requirements. For example, HHS noted that when project officers are uncertain about compliance, HRSA’s standard operating procedures require project officers to record these areas of uncertainty for follow-up action. However, our findings about the lack of consistency in the identification and documentation of grantee noncompliance are not limited to instances when project officers are uncertain about compliance. Rather, as the report indicates, we found that project officers we interviewed did not have consistent interpretations of the criteria for assessing compliance and what should therefore result in the placement of a condition on a health center’s grant. Furthermore, we found one instance where HRSA’s guidance on what constitutes compliance is inconsistent with Health Center Program requirements and found several instances where identified noncompliance did not result in the placement of a condition on a health center’s grant. As the report notes, in cases when project officers may be uncertain about compliance, we found that HRSA did not have a centralized mechanism to ensure that project officers are recording such instances. Additionally, despite HHS’s comment stating that HRSA’s procedures provide for such follow-up, HHS agreed with our recommendation that HRSA should develop a mechanism for ensuring that such instances are recorded, tracked, and followed up on.

Finally, HHS did not concur with our finding that the lack of documentation in the annual compliance review is not consistent with internal control standards for the federal government. HHS indicated that HRSA established its annual compliance review tool to record documented findings of noncompliance and utilizes a standard progressive action process to resolve these areas consistent with its overall internal control procedures. While we agree that HRSA’s process provides for both documenting areas of identified noncompliance and a standard process for resolving these issues, our findings were not limited to an assessment of what HRSA has included in its oversight process, but also took into account what HRSA did not include in this process. Thus, our findings take into account the fact that HRSA does not require project officers to document their basis for finding a grantee in compliance. Therefore, as stated in the report, we found there were often no records documenting how or why a project officer determined a health center grantee was in compliance with the requirements. The lack of such documentation makes it difficult for managers to assess the accuracy of project officers’ decisions and assure that grantees are in compliance with applicable laws and regulations, which is a key purpose of effective internal controls.
Thus, we continue to believe that this lack of documentation is not consistent with internal control standards for the federal government, which indicate “that all transactions and other significant events need to be clearly documented” and stress the importance of the creation and maintenance of related records, which provide evidence of execution of these activities as well as appropriate documentation. As noted above, our conclusion that HRSA’s oversight of health center grantees is insufficient was based on our overall assessment of HRSA’s key oversight methods. In addition to finding limitations with HRSA’s annual compliance reviews and a lack of consistency among HRSA project officers, we also found that HRSA’s use of site visits to assess compliance has been limited. Thus, we stand by our conclusion that HRSA’s process for identifying noncompliance is insufficient. We are pleased that HRSA is already taking steps to implement our recommendations and encourage the agency to continue to take actions to help improve its oversight of health center grantees. HHS also provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrator of HRSA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

As part of our assessment of the extent to which the Health Resources and Services Administration’s (HRSA) process identifies and addresses noncompliance with Health Center Program requirements, we reviewed HRSA’s oversight of eight selected health center grantees. The grantees were selected to provide variation in size, as determined by the number of delivery sites; length of time as a Health Center Program grantee; and compliance experience, as determined by the number of findings of noncompliance—referred to as conditions—that HRSA had cited for each grantee that were unresolved as of July 11, 2011. (See table 4.)

During the first 18 months of HRSA’s progressive action process, from April 9, 2010, through October 7, 2011, HRSA issued 1,017 conditions to 417 health center grantees, with some grantees having multiple conditions during this time period. Specifically, the number of conditions HRSA issued to the 417 grantees ranged between 1 and 17 conditions per grantee, with HRSA issuing between 1 and 3 conditions to most of these grantees. (See fig. 2.) HRSA issued conditions for each of the 19 program requirements, with the greatest numbers issued for the financial management and control policies, program and data system reporting, and board composition requirements. (See fig. 3.) Grantees can have multiple and simultaneous conditions associated with the same program requirement, with each condition being related to a different component of the requirement. For example, in fiscal year 2011, there were 3 possible conditions related to the financial management and control policy requirement.

In addition to the contact named above, key contributors to this report were Michelle B. Rosenberg, Assistant Director; Krister P. Friday; David Lichtenfeld; Lillian Shields; and Jennifer M. Whitworth.
Under the Health Center Program, HRSA provides grants to eligible health centers. HRSA is responsible for overseeing over 1,100 health center grantees to ensure their compliance with Health Center Program requirements. GAO was asked to examine HRSA’s oversight. This report (1) describes HRSA’s oversight process and (2) assesses the extent to which the process identifies and addresses noncompliance with what HRSA refers to as the 19 key program requirements. GAO reviewed and analyzed HRSA’s policies and procedures and available programwide data related to HRSA’s oversight of health centers, interviewed HRSA officials, and reviewed documentation of HRSA’s oversight from 8 selected grantees that varied in their compliance experience, as well as other factors.

The Department of Health and Human Services’ (HHS) Health Resources and Services Administration (HRSA) relies on three main methods to oversee grantees’ compliance with the 19 key program requirements.

Annual compliance reviews. HRSA project officers review available information, including that submitted by grantees, to determine whether the grantee is in compliance with each of the 19 program requirements.

Site visits. HRSA and its consultants visit grantees to review documentation, meet with officials, and tour the health center. Some of these visits are intended to assess compliance with some or all program requirements.

Routine communications. Project officers communicate with grantees via phone and e-mail to learn about issues that may affect their compliance.

When HRSA identifies noncompliance with program requirements, it uses a process, implemented in April 2010, to address this with a grantee. This process provides a grantee with defined time frames for addressing any identified noncompliance. If a grantee is unable to correct the compliance issue by the end of the process, HRSA’s policy is to terminate the health center’s grant.

HRSA’s ability to identify grantees’ noncompliance with Health Center Program requirements is insufficient. HRSA does not require project officers to document their basis for determining that a grantee is in compliance with a requirement. When project officers are uncertain about compliance, HRSA instructs them to consider a grantee in compliance and to note the lack of certainty in a text field of their evaluation tool. However, HRSA has no centralized mechanism to ensure this occurs. Thus, it is unclear whether project officers’ decisions that a grantee is in compliance with a requirement reflect sufficient evidence demonstrating compliance or a failure to document that compliance was uncertain. The number of compliance-related visits conducted may be limited. HRSA’s available data indicate that only 11 percent of grantees had a compliance-related site visit from January through October 2011, and fewer than half of those grantees had a visit that assessed compliance with all 19 program requirements. HRSA’s project officers do not consistently identify and document grantee noncompliance. Project officers GAO interviewed had different interpretations of what constitutes compliance with some program requirements and, therefore, when they should cite a grantee for noncompliance. HRSA’s process for addressing grantee noncompliance with program requirements seems to provide both the agency and grantees with a uniform structure for addressing noncompliance.
However, the extent to which this process is adequately resolving grantee noncompliance or terminating grantee funding is unclear because HRSA’s experience with this process is too recent for GAO to make an overall assessment. GAO recommends that, among other things, HRSA improve its documentation of compliance decisions, strengthen its ability to consistently identify and cite grantee noncompliance, and periodically assess whether its new process for addressing grantee noncompliance is working as intended. HHS concurred with all of GAO’s recommendations, and stated that HRSA has already begun implementing many of them. HHS, however, did not concur with what it characterized as certain conclusions drawn from the findings. HHS based its comments on only some of the evidence. GAO’s analysis of all the evidence and HRSA’s planned implementation of the recommendations confirm the validity of the findings and conclusions.
In the 124 years since the first national park, Yellowstone, was created, the national park system has grown to include 369 park units. In all, these units cover more than 80 million acres of land, an area larger than the state of Colorado. The mix of park units is highly diverse and includes more than 20 types; these range from natural resource preserves encompassing vast tracts of wilderness to historic sites and buildings in large urban areas. The Park Service’s mission is twofold: to provide for the public’s enjoyment of these parks and to protect the resources so that they will remain unimpaired for the enjoyment of future generations.

The Park Service’s 1980 survey of threats found not only that the parks’ resources were being harmed but also that improvements were needed in determining what cultural and natural resources existed in each park, what their condition was, and how and to what extent they were being threatened. In response, the Park Service called for the development of resource management plans to identify the condition of each park’s resources and the problems with managing them, including significant threats. Three times since 1987, we have reported that the Park Service has made limited progress in meeting the information and monitoring needs it had identified in 1980. Our findings included incomplete, out-of-date, or missing resource management plans and an incomplete inventory of threats, their sources, or mitigating actions. In 1994, after examining the external threats to the parks, we recommended that the Park Service revise its resource management planning system to identify, inventory, categorize, and assign priorities to these threats; describe the actions that could be taken to mitigate them; and monitor the status of the actions that had been taken. Such an inventory has not been implemented, according to Park Service headquarters officials, because of funding and hiring freezes that have prevented the completion of needed changes to the planning system’s guidelines and software. In commenting on a draft of this report, the Park Service said that implementing this recommendation is no longer appropriate. The Park Service’s comments and our evaluation are presented in the agency comments section of this report.

For internal threats, as for external ones, the Park Service has limited systemwide information. It does not have a national inventory of internal threats that integrates information it already has, and many of its individual units do not have a readily available database on the extent and severity of the threats arising within their borders. However, in commenting on this report, Park Service officials told us that headquarters has the systemwide information it needs to make decisions and that many decisions are made at the park level, where the superintendents decide what information is needed. They added that rather than developing a database of threats to resources, they need better data on the condition of resources to allow park managers to identify those that are the most threatened. According to headquarters officials, the Park Service has developed systems focused on particular categories of resources. Park managers and headquarters staff use these systems to identify, track, or assess problems, resource conditions, or threats.
An overview of these systems follows:

The Museum Collections Preservation and Protection Program requires parks to complete a checklist every 4 years on the deficiencies in the preservation, protection, and documentation of their cultural and natural resource collections. An automated system is being developed to collect these data. The data are used to make funding decisions.

Another system for monitoring the condition of a cultural resource is the List of Classified Structures, which inventories and gives general information on historic structures in the parks. Headquarters officials said that the list is not complete because of insufficient funding.

Headquarters rangers report that automated systems are in place to track illegal activities in parks, such as looting, poaching, and vandalism, that affect cultural and natural resources.

Headquarters officials report that the inventory and information on the condition of archeological resources, ethnographic resources, and cultural landscapes are poor at present but that there are plans to develop improved systems, if staffing and funding allow.

Although the Park Service’s guidance requires the parks to develop resource management plans, it does not require the plans to include specific information on the internal and external threats facing the parks. Such information would assist managers of the national park system in identifying the major threats facing parks on a systemwide basis, and it would give the managers of individual parks an objective basis for management decisions.

At the eight parks studied, the managers identified 127 internal threats that directly affected natural and cultural resources. Most of these threats fell into one of five broad categories: the impact of private inholdings or commercial development within the parks, the results of encroachment by nonnative wildlife or plants, the damage caused by illegal activities, the adverse effects of normal visits to the parks, and the unintended adverse effects of the agency’s or park managers’ actions (see fig. 1). The majority of the threats affected natural resources, such as plants and wildlife, while the remainder threatened cultural resources, such as artifacts, historic sites, or historic buildings. (See app. I for a summary of the threats in each category at each of the eight parks.)

Overall, the park managers we visited said that the most serious threats facing the parks were shortages in staffing, funding, and resource knowledge. The managers identified 48 additional threats in these categories. We classified these as indirect threats to cultural and natural resources because, according to the managers, the shortages in these areas were responsible for many of the conditions that directly threaten park resources. (See app. II for a list of these threats at the eight parks.) In addition, the managers identified other threats in such categories as laws or regulations, agency policies, and park boundaries. After reviewing the information about these threats provided by park managers in documents and interviews, we decided that the threats were indirect and should not be listed among the direct threats.

In gathering data for each park, we also identified threats to services for visitors. Our analysis showed that many of these threats also appeared as threats to cultural and natural resources. We did not compile a list of threats to services for visitors because this report focuses on cultural and natural resources.
Private inholdings and commercial development within park boundaries accounted for the largest number of specific threats. The managers of seven of the eight parks we reviewed identified at least one threat in this category. For example, at Olympic National Park in Washington State, the managers said that the homes situated on inholdings along two of the park’s largest lakes threatened groundwater systems and the lakes’ water quality. At Lake Meredith National Recreation Area in Texas, the managers were concerned about the impact of the frequent repair and production problems at about 170 active oil and gas sites (see fig. 2) and the development of additional sites. Minute Man National Historical Park, a long, linear park, is bisected by roads serving approximately 20,000 cars per day. The traffic affects cultural resources, such as nearby historic structures; natural resources, such as populations of small terrestrial vertebrates (e.g., the spotted salamander and spotted turtle); and visitors’ enjoyment of the park (see fig. 3).

Encroachment by nonnative wildlife and plants—such as mountain goats, trout introduced into parks’ lakes and streams, and nonnative grasses and other plants—accounted for the second largest number of reported threats. The managers at all of the parks we reviewed identified at least one threat in this category. At Arches National Park in Utah, for example, the managers cited the invasion of some riverbanks and natural spring areas by a plant called tamarisk. In its prime growing season, a mature tamarisk plant consumes about 200 gallons of water a day and chokes out native vegetation. At Olympic National Park, nonnative mountain goats introduced decades ago have caused significant damage to the park’s native vegetation. The goats’ activity eliminated or threatened the survival of many rare plant species, including some found nowhere else. Controlling the goat population reduced the damage over 5 years, as the contrast between figures 4a and 4b shows.

Illegal activities, such as poaching, constituted the third main category of threats. The managers at the eight parks reported that such activities threatened resources. For example, at Crater Lake National Park in Oregon, the managers believe that poaching is a serious threat to the park’s wildlife. Species known to be taken include elk, deer, and black bear. At both Crater Lake and Olympic national parks, mushrooms are harvested illegally, according to the managers. The commercial sale of mushrooms has increased significantly, according to a park manager. He expressed concern that this multimillion-dollar, largely unregulated industry could damage forest ecosystems through extensive raking or other disruption of the natural ground cover to harvest mushrooms. Similar concern was expressed about the illegal harvesting of other plant species, such as moss and small berry shrubs called salal (see fig. 5).

About 30 percent of the internal threats identified by park managers fell into two categories—the adverse effects of (1) people’s visits to the parks and (2) the Park Service’s own management actions. The number of recreational visits to the Park Service’s 369 units rose by about 5 percent over the past 5 years to about 270 million visits in 1995. Park managers cited the effects of visitation, such as traffic congestion, the deterioration of vegetation off established trails, and trail erosion.
The threats created unintentionally by the Park Service’s own management decisions at the national or the park level included poor coordination among park operations, policies calling for the suppression of naturally caused fires that do not threaten human life or property, and changes in funding or funding priorities that do not allow certain internal threats to parks’ resources to be addressed. For example, at Gettysburg National Military Park, none of the park’s 105 historic buildings have internal fire suppression systems or access to external hydrants because of higher-priority funding needs.

Park managers estimated that about 82 percent of the direct threats they identified in the eight parks we reviewed have caused more than minor damage to the parks’ resources. We found evidence of such damage at each of the eight parks. According to the managers, permanent damage to cultural resources has occurred, for example, at Indiana Dunes National Lakeshore in Indiana and at Arches National Park in Utah. Such damage has included looting at archeological sites, bullets fired at historic rock art, the deterioration of historic structures, and vandalism at historic cemeteries. (See figs. 6 and 7.) At both of these parks, the managers also cited damage to natural resources, including damage to vegetation and highly fragile desert soil from visitors venturing off established trails and damage to native plants from the illegal use of off-road vehicles. At Gettysburg National Military Park, the damage included the deterioration of historic structures and cultural landscapes, looting of Civil War era archeological sites, destruction of native plants, and deterioration of park documents estimated to be about 100 years old, which contain information on the early administrative history of the park. Figure 8 shows these documents, which are improperly stored in the park historian’s office.

Nearly one-fourth of the identified direct threats had caused irreversible damage, according to park managers (see fig. 9). Slightly more than one-fourth of the threats had caused extensive but repairable damage. About half of the threats had caused less extensive damage. The damage to cultural resources was more likely to be permanent than the damage to natural resources, according to park managers (see fig. 10). Over 25 percent of the threats to cultural resources had caused irreversible damage, whereas 20 percent of the threats to natural resources had produced permanent effects. A Park Service manager explained that cultural resources—such as rock art, prehistoric sites and structures, or other historic properties—are more susceptible to permanent damage than natural resources because they are nonrenewable. Natural resources, such as native wildlife, can in some cases be reintroduced in an area where they have been destroyed.

Generally, park managers said they based their assessments of the severity of damage on observation and professional judgment rather than on scientific study or research. In most cases, scientific information about the extent of the damage was not available. For some types of damage, such as the defacement of archeological sites, observation and judgment may provide ample information to substantiate the extent of the damage. But observation alone does not usually provide enough information to substantiate the damage from an internal threat.
Scientific research will generally provide more concrete evidence identifying the number and types of threats, the types and relative severity of damage, and any trends in the severity of the threat. Scientific research also generally provides a more reliable guide for mitigating threats. In their comments on this report, Park Service officials agreed, stating that there is a need for scientific inventorying and monitoring of resource conditions to help park managers identify the resources most threatened.

At all eight parks, internal threats are more of a problem than they were 10 years ago, according to the park managers. They believed that about 61 percent of the threats had worsened during the past decade, 27 percent were about the same, and only 11 percent had grown less severe (see fig. 11). At seven of the eight parks, the managers emphasized that one of the trends that concerned them most was the increase in visitation. They said the increasing numbers of visitors, combined with the increased concentration of visitors in certain areas of many parks, had resulted in increased off-trail hiking, severe wear at campgrounds, and more law enforcement problems. At Arches National Park, for example, where visitation has increased more than 130 percent since 1985, greater wear and tear poses particular problems for the cryptobiotic soil. This soil may take as long as 250 years to recover after being trampled by hikers straying off established trails, according to park managers. Another increasing threat noted by managers from parks having large natural areas (such as Crater Lake, Olympic, and Lake Meredith) is the possibility that undergrowth, which has built up under the Park Service’s protection, may cause more serious fires. According to the managers, the Park Service’s long-standing policy of suppressing all park fires—rather than allowing naturally occurring fires to burn—has been the cause of this threat.

Although the park managers believed that most threats were increasing in severity, they acknowledged that a lack of specific information hindered their ability to assess trends reliably. The lack of baseline data on resource conditions is a common and significant problem limiting park managers’ ability to document and assess trends. They said that such data are needed to monitor trends in resource conditions as well as threats to those resources.

Park managers said that they believed some action had been taken in response to about 82 percent of the direct threats identified (see fig. 12). However, the Park Service does not monitor the parks’ progress in mitigating internal threats. Various actions had been taken, but many were limited to studying what might be done. Only two actions to mitigate identified threats had been completed in the eight parks, according to the managers. However, they noted that in many cases, steps have been taken toward mitigation, but completing these steps was often hampered by insufficient funding and staffing. At Arches National Park, actions ranged from taking steps to remediate some threats to studying how to deal with others. To reduce erosion and other damage to sensitive soils, park managers installed rails and ropes along some hiking trails and erected signs along others explaining what damage would result from off-trail walking. Managers are also studying ways to establish a “carrying capacity” for some of the frequently visited attractions.
This initiative by the Park Service stemmed from visitors’ comments about the need to preserve the relative solitude at the Delicate Arch (see fig. 13). According to park managers, about 600 visitors each day take the 1-1/2-mile trail to reach the arch. At Lake Meredith, to reduce the impact of vandalism, park managers are now replacing wooden picnic tables and benches with solid plastic ones. Although initially more expensive, the plastic ones last longer and cost less over time because they are more resistant to fire or other forms of vandalism. Lake Meredith has also closed certain areas for 9 months of the year to minimize the looting of archeological sites. At Saguaro National Park, the park managers closed many trails passing through archeological sites and revoked the permits of two horseback tour operators for refusing to keep horses on designated trails.

The natural and cultural resources of our national parks are being threatened not only by sources external to the parks but also by activities originating within the parks’ borders. Without systemwide data on these threats to the parks’ resources, the Park Service is not fully equipped to meet its mission of preserving and protecting these resources. In times of austere budgets and multibillion-dollar needs, it is critical for the agency to have this information in order to identify and inventory the threats and set priorities for mitigating them so that the greatest threats can be addressed first. In our 1994 report on external threats to the parks’ resources, we recommended that the National Park Service revise its resource management planning system to (1) identify the number, types, and sources of the external threats; establish an inventory of threats; and set priorities for mitigating the threats; (2) prepare a project statement for each external threat describing the actions that can be taken to mitigate it; and (3) monitor the status of actions and revise them as needed. If the Park Service fully implements the spirit of our 1994 recommendations, it should improve its management of the parks’ internal threats. We therefore encourage the Park Service to complete this work. Not until this effort is completed will the Park Service be able to systematically identify, mitigate, and monitor internal threats to the parks’ resources.

We provided a draft of this report to the Department of the Interior for its review and comment. We met with Park Service officials—including the Associate Director for Budget and Administration, the Deputy Associate Director for Natural Resources Stewardship and Science, and the Chief Archeologist—to obtain their comments. The officials generally agreed with the factual content of the report and provided several technical corrections to it, which have been incorporated as appropriate. The Park Service stated that it would not implement the recommendations cited from our 1994 report. However, we continue to believe that this information, or data similar to it, is necessary on a systemwide level to meet the Park Service’s mission of preserving and protecting resources. Park Service officials stated that obtaining an inventory of and information on the condition of the parks’ resources was a greater priority for the agency than tracking the number and types of threats to the parks’ resources, as our previous report recommended.
They said that headquarters has the necessary systemwide information to make decisions but added that better data on the condition of resources are needed to allow the park managers to better identify the most threatened resources. They stated that the Park Service is trying to develop a better inventory and monitor the condition of resources as staffing and funding allow.

Park Service officials also cited a number of reasons why implementing our past recommendations to improve the resource management planning system’s information on threats is no longer appropriate. Their reasons included the implementation of the Government Performance and Results Act, which requires a new mechanism for setting priorities and evaluating progress; the Park Service-wide budget database that is used to allocate funds to the parks; the existing databases that provide information on resources and workload; and the decentralization of the Park Service, which delegates authority to the park superintendents to determine what information is needed to manage their parks.

We continue to believe that information on threats to resources, gathered on a systemwide basis, would be helpful to set priorities so that the greatest threats can be addressed first. The Park Service’s guidelines for resource management plans emphasize the need to know about the condition of resources as well as threats to their preservation. This knowledge includes the nature, severity, and sources of the major threats to the parks’ resources. We believe that knowing more about both internal and external threats is necessary for any park having significant cultural and natural resources and is important in any systemwide planning or allocation of funds to investigate or mitigate such threats. We agree that the number and types of threats are not the only information needed for decision-making and have added statements to the report to describe the Park Service’s efforts to gather data on the condition of resources.

In addition, the Park Service commented that a mere count and compilation of threats to resources would not be useful. However, our suggestion is intended to go beyond a surface-level count and to use the resource management plan (or other vehicle) to delineate the types, sources, priorities, and mitigation actions needed to address the threats on a national basis. We believe that the Park Service’s comment that it needs a more complete resource inventory and more complete data on resources’ condition is consistent with our suggestion.

As agreed with your office, we conducted case studies of eight parks because we had determined at Park Service headquarters that no database of internal threats existed centrally or at individual parks. At each park, we interviewed the managers, asking them to identify the types of internal threats to the park’s natural and cultural resources and indicate how well these threats were documented. We also asked the managers to assess the extent of the damage caused by the threats, identify trends in the threats, and indicate what actions were being taken to mitigate the threats. Whenever possible, we obtained copies of any studies or other documentation on which their answers were based.

Given an open-ended opportunity to identify threats, a number of managers listed limitations on funding, staffing, and resource knowledge among the top threats to their parks.
For example, the park managers we visited indicated that insufficient funds for annual personnel cost increases diminished their ability to address threats to resources. Although we did not minimize the importance of funding and staffing limitations in developing this report, we did not consider them direct threats to the resources described in appendix I. These indirect threats are listed in appendix II.

We performed our review from August 1995 through July 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and Members of Congress; the Secretary of the Interior; the Director, National Park Service; and other interested parties. We will make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix III.

On the basis of our analysis of the data, we determined that the following threats affect cultural and natural resources directly. Threats in the three other categories of staffing, funding, and resource knowledge are listed for the eight parks in appendix II.

In addition to the direct threats to natural and cultural resources listed in appendix I, park managers of these resources also cited the following indirect threats that, in their opinion, significantly affected their ability to identify, assess, and mitigate direct threats to resources.

Major contributors to this report: Brent L. Hutchison, Paul E. Staley, Jr., and Stanley G. Stenersen.
Pursuant to a congressional request, GAO reviewed internal threats to the national parks' resources, focusing on the: (1) National Park Service's (NPS) information on the number and types of internal threats; (2) damage these threats have caused; (3) change in the severity of these threats over the past decade; and (4) NPS actions to mitigate these threats. GAO found that: (1) because NPS does not have a national inventory of internal threats to the park system, it is not fully equipped to meet its mission of preserving and protecting park resources; (2) park managers at the eight parks studied have identified 127 internal threats to their parks' natural and cultural resources; (3) most of these threats are due to the impact of private inholdings or commercial development within the parks, the impact of nonnative wildlife or plants, damage caused by illegal activities, increased visitation, and unintended adverse effects of management actions; (4) park managers believe the parks' most serious threats are caused by shortages in staffing, funding, and resource knowledge; (5) 82 percent of the internal threats have already caused more than minor damage, and cultural or archeological resources have suffered more permanent damage than natural resources in many parks; (6) 61 percent of internal threats, particularly those from increased visitation and serious fires, have worsened over the past decade, 27 percent have stayed about the same, and 11 percent have diminished; (7) park managers lack baseline data needed to judge trends in the severity of internal threats; and (8) some parks are closing trails to reduce erosion, installing more rugged equipment to reduce vandalism, revoking uncooperative operators' permits, and posting signs to inform visitors of the damage from their inappropriate activities.
Medicare, the federal health insurance program that serves the nation’s elderly, certain disabled individuals, and individuals with end-stage renal disease, had total program expenditures of $565 billion in 2011, making it one of the largest federal programs. The Medicare program is administered by CMS and consists of four parts: A, B, C, and D. Medicare parts A and B are also referred to as fee-for-service programs. Part A covers hospital and other inpatient stays, hospice, and home health services; Part B covers hospital outpatient, physician, and other services. The Medicare card is used as proof of eligibility for both of these programs. Part C is Medicare Advantage, under which beneficiaries receive benefits through private health plans. Part D is the Medicare outpatient prescription drug benefit. CMS requires that cards issued by Part C and Part D health plans not display an SSN.

For most individuals, SSA determines eligibility for Medicare and assigns the individual’s HICN. However, for the approximately 550,000 Railroad Retirement beneficiaries and their dependents, the RRB determines Medicare eligibility and assigns this number. CMS or RRB mails paper cards to all beneficiaries, which display the individual’s full name, gender, eligibility status (Part A and/or Part B), effective date of eligibility, and SSN-based HICN, referred to on the card as the Medicare Claim Number. (See fig. 1.) The HICN is constructed using the 9-digit SSN of the primary wage earner whose work history qualifies an individual for Medicare, followed by a 1- or 2-character code, referred to as the beneficiary identification code, that specifies the relationship of the card holder to the individual who makes the beneficiary eligible for benefits. In most cases, the SSN on the card is the card holder’s own; however, approximately 14 percent of Medicare beneficiaries have cards that contain the SSN of the family member whose work history makes the beneficiary eligible for Medicare benefits.

A unique identifier is an essential component for administering health insurance. Such an identifier is used by providers to identify beneficiaries and submit claims for payment. As Medicare’s primary unique identifier, the HICN is used by beneficiaries, providers, and CMS and its contractors. State Medicaid programs, which are jointly funded federal-state health care programs that cover certain low-income individuals, use the HICN to coordinate payments for dual-eligible beneficiaries—individuals who are enrolled in both Medicare and Medicaid. (See table 1 for examples of various interactions that require the HICN.) Beneficiaries must use their HICN when interacting with CMS, such as when they log into the Medicare website or call 1-800-MEDICARE for assistance. Using their issued card, beneficiaries also provide this information to providers at the time of service, and providers use this information to confirm eligibility and submit claims to receive payment for services. CMS and its contractors operate approximately 50 information technology (IT) systems, many of which are interdependent, that use this information in some manner to process beneficiary services and claims and conduct a number of other activities related to payment and program-integrity efforts. These IT systems vary considerably in terms of age and interoperability, making them difficult to change.

In its November 2011 report, CMS proposed three options for removing SSNs from Medicare cards.
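As context for the options discussed next, the identifier format itself can be made concrete. The following is a minimal sketch, assuming a plain layout of a 9-digit SSN followed by a 1- or 2-character suffix, as described above; the sample value, regular expression, and function names are illustrative assumptions rather than CMS's actual validation rules. The sketch also shows the truncated display form (first five digits masked) that figures in the first of CMS's options, described below.

```python
import re

# Illustrative sketch only: a HICN, as described above, is the 9-digit SSN of
# the primary wage earner followed by a 1- or 2-character beneficiary
# identification code (BIC). The pattern and sample value are hypothetical.
HICN_PATTERN = re.compile(r"^(\d{9})([A-Z0-9]{1,2})$")

def parse_hicn(hicn: str) -> tuple[str, str]:
    """Split a HICN into its SSN and beneficiary identification code."""
    match = HICN_PATTERN.match(hicn.replace("-", "").upper())
    if match is None:
        raise ValueError("expected a 9-digit SSN followed by a 1- or 2-character code")
    ssn, bic = match.groups()
    return ssn, bic

def truncated_display(ssn: str) -> str:
    """Render the truncated display form, in which the first five digits are
    replaced with 'X's (e.g., XXX-XX-1234)."""
    return f"XXX-XX-{ssn[-4:]}"

ssn, bic = parse_hicn("123-45-6789A")  # hypothetical HICN
print(ssn, bic)                         # 123456789 A
print(truncated_display(ssn))           # XXX-XX-6789
```

Note that under the truncation option only the card's display would change; as the report explains, the full SSN would still drive Medicare's business processes, which is why the sketch keeps the parsed SSN intact.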
One option would involve altering the display of the SSN through truncation, and the other two options would involve the development of a new identifier. All three options would vary with regard to the type of identifier displayed on the card and the actions providers and beneficiaries would need to take in order to use the identifier for needed services. CMS officials told us that they limited their options to those retaining the basic format of the current paper card, and did not consider other options that they believed were outside the scope of the congressional request. For example, CMS did not consider using machine-readable technologies, such as bar codes or magnetic stripes.

Option 1: Truncating the SSN: Under this option, the first five digits of the SSN would be replaced with ‘X’s (e.g., XXX-XX-1234) for display on the card. However, the full SSN would continue to be used for all Medicare business processes. As a result, when interacting with CMS, beneficiaries would need to recall the full SSN or provide additional personally identifiable information in order for CMS to match the beneficiary with his or her records. To interact with CMS, providers would also need to obtain the complete SSN using an existing resource. This would involve querying an existing database, calling a CMS help line, or asking the beneficiary for the complete SSN or other personally identifiable information.

Option 2: Developing a New Identifier for Beneficiary Use: Under this option, the SSN would be replaced by a new identifier not based on the SSN, which would be displayed on the card, similar to private health insurance cards. CMS refers to this new identifier as the Medicare Beneficiary Identifier (MBI). This number would be used by beneficiaries when interacting with CMS. Providers, however, would be required to continue to use the SSN when interacting with CMS and conducting their business processes. To obtain this information, providers would be expected to electronically request it from CMS using the new identifier. CMS said it would need to create a new database for this purpose.

Option 3: Developing a New Identifier for Beneficiary and Provider Use: Under this option, the SSN would be replaced by a new identifier not based on the SSN, which would be displayed on the card. As in option 2, CMS referred to this number as the MBI. In contrast to option 2, however, this new number would be used by both beneficiaries and providers for all interactions with CMS. Under this option, the SSN would no longer be used by beneficiaries or providers when interacting with CMS, which could eliminate the need for providers to collect or keep the SSN on file. CMS and its contractors would continue to use the SSN for internal data purposes, such as claims processing.

Table 2 summarizes the characteristics of the CMS options. CMS, SSA, and RRB reported that all three options would generally require similar efforts, including coordinating with stakeholders; converting IT systems; conducting provider and beneficiary outreach and education; conducting training of business partners; and issuing new cards. However, the level and type of modifications required to IT systems vary under each option. These systems are responsible for various business functions that perform claims processing, eligibility verification, health plan enrollment, coordination of benefits, program integrity, and research efforts. According to CMS, between 40 and 48 of its IT systems would require modifications, depending on the option selected.
The truncated SSN option would require modifications to 40 systems; the option that uses a new identifier for beneficiary use would require modifications to 44 systems; and the option that uses a new identifier for beneficiary and provider use would require modifications to 48 systems. In its 2011 report, CMS estimated that any of the three proposed options would likely take up to 4 years to implement. During the first 3 years, CMS would coordinate with stakeholders; complete necessary IT system conversions; conduct provider and beneficiary outreach and education; and conduct training of business partners. In the fourth year, CMS would issue new Medicare cards to all beneficiaries over a 12-month period. CMS officials stated that the agency could not implement any of the options without additional funding from Congress. In its report, CMS noted that the actual time needed for implementation could vary due to changing resources or program requirements. As with its 2006 report, CMS has not taken the actions needed to implement any of the options for removing the SSN presented in its report. The Department of Defense (DOD) has taken steps to remove the SSN from display on the approximately 9.6 million military identification cards that are used by active-duty and retired military personnel and their dependents to access health care services. DOD is replacing the SSNs previously displayed on these cards with two different unique identifiers not based on the SSN. In 2008, DOD began its SSN removal effort by removing dependents’ SSNs from display on their military identification cards, but retained the sponsor’s SSN and left SSNs embedded in the cards’ bar codes. The dependents’ cards did not display any unique identifier. On June 1, 2011, DOD discontinued issuing any military identification card that displayed an SSN and began issuing cards that displayed two different unique identifiers; however, SSNs continued to be embedded in the cards’ bar codes. Starting December 1, 2012, DOD will discontinue embedding the SSN in the cards’ bar codes. With the exception of cards issued to retired military personnel, DOD anticipates that the SSNs will be completely removed from all military identification cards by December 2016. DOD officials reported that because retirees’ cards may still contain the SSN as an identifier, and because some contractors providing health care services may continue to use the SSN for eligibility purposes and processing claims, DOD’s IT systems will continue to support multiple identifiers, including the SSN, until such time as all SSNs have been replaced with the two new unique identifiers. DOD cards issued to active-duty military personnel also contain a smart chip, which is used for accessing facilities and IT systems, and may be used to access health care services in some facilities. Cardholders’ SSNs are concealed in the smart chip. The Department of Veterans Affairs (VA) has also taken steps to remove the SSN from display on its identification and health care cards. The Veterans Identification Card (VIC) is issued by VA to enrollees and can be used by veterans to access health care services from VA facilities and private providers. In 2011, 8.6 million veterans were eligible to receive health care services and, according to VA officials, about 363,000 dependents of veterans were eligible to receive care through VA’s dependent-care programs. VA began removing SSNs from display on the VIC in 2004, but the SSN continues to be embedded in the cards’ magnetic stripes and bar codes.
Since that time, VA officials report that the department has issued approximately 7.7 million VICs. VA officials also stated that, in the first quarter of fiscal year 2013, VA will start issuing new VICs that will display a new unique identifier for the veteran and embed the new identifier in the card’s magnetic stripe and bar code, replacing the SSN. VA also removed SSNs from display on the cards issued to beneficiaries in VA dependent-care programs without replacing them with a new identifier, and beneficiaries in these programs now provide their SSN verbally at the time of service. Representatives from a national organization representing private health insurers told us that, to their knowledge, all private health insurers have removed the SSN from display on insurance cards and replaced it with a unique identifier not based on the SSN. Private insurers use these new identifiers for all beneficiary and provider interactions, including determining eligibility and processing claims. According to these officials, private health insurers took those steps to comply with state laws and protect beneficiaries from identity theft. Consistent with this, representatives from the private health insurers we interviewed reported removing SSNs from their cards’ display and issuing beneficiaries new identifiers not based on the SSN, which are now used in all beneficiary and provider interactions. Officials we interviewed from DOD, VA, and private health insurers all reported that the process of removing the SSN from cards and replacing it with a different unique identifier took, or is taking, several years and required considerable planning. During their transition periods, DOD, VA, and private health insurers reported that they made modifications to IT systems; collaborated with providers and contractors; and educated providers and beneficiaries about the change. One private health insurer we interviewed reported that it allowed for a transition period during which providers could verify eligibility or submit claims using either the SSN or the new unique identifier. This health insurer noted that this allowance, along with the education and outreach it provided to both beneficiaries and providers, resulted in a successful transition. Another health insurer reported that it is providing IT support for both the SSN and the new unique identifier indefinitely in case providers mistakenly use the SSN when submitting claims. Replacing the SSN with a new identifier for use by beneficiaries and providers offers beneficiaries the greatest protection against identity theft relative to the other options CMS presented in its report. (See fig. 2.) Under this option, only the new identifier would be used by beneficiaries and providers. This option would lessen beneficiaries’ risk of identity theft in the event that their card was lost or stolen, as the SSN would no longer be printed on the card. Additionally, because providers would not need to collect a beneficiary’s SSN or maintain that information in their files, beneficiaries’ vulnerability to identity theft would be reduced in the event of a provider data breach. The other two options CMS presented in its 2011 report provide less protection against identity theft. For example, replacing the SSN with a new number just for beneficiary use would offer some protection against identity theft for beneficiaries because no portion of the SSN would be visible on the Medicare card.
This would reduce the likelihood of identity theft with the SSN if a card is lost or stolen. However, providers would still need to collect and store the SSN, leaving beneficiaries vulnerable to identity theft in the event of a provider data breach. CMS’s truncated SSN option would provide even less protection against identity theft. This option would eliminate full visibility of the SSN on the Medicare card, making it more difficult to use for identity theft. However, we have previously reported that the lack of standards for truncation means that identity thieves can still construct a full SSN fairly easily using truncated SSNs from various electronic and hard copy records. In addition, under this option, providers would still store the SSN in their files, thereby making beneficiaries vulnerable to identity theft in the event of a provider data breach. We found that CMS’s option to replace the SSN with a new identifier for use by beneficiaries and providers presents fewer burdens for beneficiaries and providers relative to the other options presented in CMS’s 2011 report. (See fig. 3.) Under this option, the new identifier would be printed on the card, and beneficiaries would use this identifier when interacting with CMS, eliminating the need for beneficiaries to memorize their SSN or store it elsewhere as they might do under other options. This option may also present fewer burdens for providers, as they would not have to query databases or make phone calls to obtain a beneficiary’s information to submit claims. Private health insurers we interviewed all reported using a similar approach to remove SSNs from their insurance cards. Representatives from these insurers reported that while there was some initial confusion and issues with claims submission during the transition period, proactive outreach efforts to educate providers about this change, as well as having a grace period during which the SSN or new identifier could be used by providers to submit claims, minimized issues and resulted in a relatively smooth transition. The other two options CMS presented in its 2011 report would create additional burdens for beneficiaries and providers. Beneficiaries may experience difficulties under the truncated SSN option, as they may need to recall their SSN, which could be their own SSN or that of a family member. CMS officials stated that the age of Medicare beneficiaries and the fact that their current identification number may be based on another family member’s SSN could make it difficult for beneficiaries to remember the number. In addition, about 31 percent of Medicare beneficiaries residing in the community have a known cognitive or mental impairment, potentially making recalling their number from memory difficult. Under both of these remaining options, providers would need to perform additional tasks, such as querying a CMS database or calling CMS, to obtain the full SSN to verify eligibility and submit claims. Regardless of option, the burdens experienced by CMS would likely be similar because the agency would need to conduct many of the same activities and would incur many of the same costs. For example, it would need to reissue Medicare cards to current beneficiaries; conduct outreach and education to beneficiaries and providers; and conduct training for business partners. CMS would also likely see increased call volume to its 1-800-MEDICARE line with questions about the changes.
In addition, there would likely be costs associated with changes to state Medicaid IT systems. However, according to CMS officials, the option that calls for replacing the SSN with a new identifier to be used by beneficiaries and providers would have additional burdens because of the more extensive changes required to CMS’s IT systems compared to the other options. This option, however, would also potentially provide an additional benefit to CMS, as the agency would be able to completely “turn off” the identification number and replace it with a new one in the event that a beneficiary’s number is compromised, something that is not possible with the SSN. CMS did not consider in its 2011 report how machine-readable technologies—such as bar codes, magnetic stripes, or smart chips—could assist in the effort to remove SSNs from Medicare cards. Machine-readable technologies have been implemented to varying degrees by DOD and VA. According to DOD and VA officials, DOD is using a smart chip and bar code to store the cardholder’s personally identifiable information, and VA is issuing cards in which such information and other identifiers are stored in magnetic stripes and bar codes. Machine-readable technologies may provide additional benefits, such as increased efficiency for providers and beneficiaries. Furthermore, machine-readable technologies provide some additional protection against identity theft, but officials we spoke with stated that the widespread availability of devices to read magnetic stripes and bar codes has made these technologies less secure. Because of this, both DOD and VA have plans to remove SSNs that are stored in these technologies on their cards. If CMS were to use machine-readable technologies, such technologies could present significant challenges to providers. For example, providers could experience difficulties due to the lack of standardization across these technologies. Representatives from one private health insurer we interviewed stated that while the use of cards with magnetic stripes worked well within a small region where they have large market penetration, implementing such an effort in regions where providers contract with multiple insurers would be more difficult due to this lack of standardization. In addition, use of machine-readable cards would likely require providers to purchase additional equipment and could be problematic for providers that lack the necessary infrastructure, such as high-speed internet connections, to make machine-readable technologies feasible. According to CMS officials, implementing machine-readable technologies may also require cards that cost more than the paper Medicare card currently in use. Removing the SSN from the Medicare card and not replacing it with a new identifier, an option also not considered in CMS’s report to Congress, could reduce beneficiaries’ vulnerability to identity theft, but would create burdens for beneficiaries, providers, and CMS. Complete removal of the SSN from the Medicare card would protect beneficiaries from identity theft in the event that a card is lost or stolen. However, like the truncation option, beneficiaries may have difficulty recalling their SSN at the time of service or when interacting with CMS. This could also be difficult because the SSN needed to show eligibility may not be the beneficiary’s own.
In addition, providers would likely need to change their administrative processes to obtain the needed information either by querying a database, calling CMS, or obtaining it directly from the beneficiary. Finally, because providers would still need to collect and store the SSN for eligibility verification and claims submission, beneficiaries would remain vulnerable to identity theft in the event of a provider data breach. VA used this approach to remove SSNs from the approximately 363,000 dependent-care program cards, and officials stated that it requires providers to obtain the SSN at the time of service. However, Medicare covers over 48 million beneficiaries who receive services from 1.4 million providers, making such a change more burdensome. In addition, CMS would still encounter burdens similar to those under the options presented in its 2011 report to Congress, including the need to educate beneficiaries and providers and issue new cards, though the extent of the necessary changes to CMS IT systems under such an option is unknown. In its 2011 report to Congress, CMS, in conjunction with SSA and RRB, developed cost estimates for the three options to alter the display of the SSN on Medicare cards or replace the SSN with a different unique identifier. CMS projected that altering or removing the SSN would cost between $803 million and $845 million. CMS’s costs represent the majority of these costs (approximately 85 percent), while SSA’s and RRB’s costs represent approximately 12 percent and 0.2 percent, respectively. (See table 3.) Approximately two-thirds of the total estimated costs (between $512 million and $554 million depending on the option) are associated with modifications to existing state Medicaid IT systems and CMS’s IT system conversions. While modifications to existing state Medicaid IT systems and related costs are projected to cost the same across all three options, the estimated costs for CMS’s IT system conversions vary. This variation is due to the differences in the number of systems affected and the costs for modifying affected systems for the different options. CMS would incur costs related to modifying 40 IT systems under the truncated SSN option, 44 systems under the new identifier for beneficiary use option, and 48 systems under the new identifier for beneficiary and provider use option. In addition, the cost associated with changes to specific systems varied depending on the option. CMS’s estimates for all non-IT related cost areas are constant across the options. Other significant cost areas for CMS include reissuing the Medicare card, conducting outreach and education to beneficiaries about the change to the identifier, and responding to beneficiary inquiries related to the new card. Both SSA and RRB would also incur costs under each of the options described in CMS’s 2011 report. SSA estimated that implementing any of the three options presented in the 2011 report would cost the agency $95 million. SSA’s primary costs included $62 million for responding to inquiries and requests for new Medicare cards from beneficiaries and $28 million for processing new cards mailed by CMS that are returned as undeliverable. SSA officials told us that even though CMS would be responsible for distributing new Medicare cards, SSA anticipated that about 13 percent of the beneficiary population would contact SSA with questions. RRB’s costs totaled between $1.1 million and $1.3 million.
Between 21 and 34 percent of RRB’s total costs were related to IT system updates and changes, depending on the option. The rest of RRB’s costs were related to business functions, such as printing and mailing new cards; user costs related to system and procedure changes; and education and outreach. The cost estimates included in CMS’s 2011 report were as much as 2.5 times higher than those estimated in its 2006 report to Congress. CMS attributed these increases to the inclusion of costs not included in the 2006 report, such as those associated with changes to state Medicaid systems and changes to its IT systems related to Part D, as well as a more thorough accounting of costs associated with many of the other cost areas, including SSA costs. In addition, CMS said in its 2006 report that phasing in a new identifier for beneficiaries over a 5- to 10-year period would reduce costs. However, in its 2011 report, CMS stated that such an option would be cost prohibitive because it would require running two parallel IT systems for an extended period of time. There are several key concerns regarding the methods and assumptions CMS used to develop its cost estimates that raise questions about the reliability of its overall cost estimates. First, CMS did not use any cost-estimating guidance when developing its estimates. GAO’s Cost Estimating and Assessment Guide identifies a number of best practices designed to ensure a cost estimate is reliable. However, CMS officials acknowledged that the agency did not rely on any specific cost-estimating guidance, such as GAO’s cost-estimating guidance, during the development of the cost estimates presented in the agency’s report to Congress. The agency also did not conduct a complete life-cycle cost estimate on relevant costs, such as those associated with IT system conversions. CMS officials told us they did not conduct a full life-cycle cost estimate for each option because this was a hypothetical analysis, and doing so would have been too resource intensive for the purpose of addressing policy options. Second, the procedures used to develop estimates for the two largest cost categories—changes to existing state Medicaid IT systems and CMS’s IT system conversions—are questionable and not well documented. For each of CMS’s options, the agency estimated that Medicaid IT changes would cost $290 million. Given the size of this cost category, we have concerns about the age of the data, the number of states used to generalize these estimates, and the completeness of the information CMS collected. For example, CMS’s estimates for costs associated with its proposed changes were based on data collected in 2008, at which time the agency had not developed all of the options presented in its 2011 report. In addition, while CMS asked for cost data from all states in 2008, it received data from only five states—Minnesota, Montana, Oklahoma, Rhode Island, and Texas—and we were unable to determine whether these states are representative of the IT system changes required by all states. CMS extrapolated national cost estimates based on the size of these states, determined by the number of Medicare-eligible beneficiaries in them. However, the cost of IT modifications to Medicaid systems would likely depend more on the specific IT systems and their configurations in use by the state than on the number of Medicare beneficiaries in the state.
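To illustrate the kind of extrapolation CMS described, the sketch below scales a handful of state responses to a national total on a per-beneficiary basis. All state figures are hypothetical placeholders (CMS's underlying survey data were not available), and the sketch is meant only to show why the method is sensitive to the assumption that IT costs scale with beneficiary counts.

# Minimal sketch of per-beneficiary extrapolation; all state figures
# below are hypothetical placeholders, not CMS survey data.
reported = {
    # state: (reported Medicaid IT cost, Medicare-eligible beneficiaries)
    "State A": (4_000_000, 800_000),
    "State B": (1_500_000, 250_000),
}
NATIONAL_BENEFICIARIES = 47_000_000  # approximate count CMS used

total_cost = sum(cost for cost, _ in reported.values())
total_benes = sum(benes for _, benes in reported.values())
national_estimate = total_cost / total_benes * NATIONAL_BENEFICIARIES
print(f"national estimate: ${national_estimate:,.0f}")

# The weakness GAO notes: this assumes costs scale with beneficiary
# counts, whereas actual costs likely depend on each state's
# particular Medicaid systems and configurations.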
CMS was unable to provide documentation about the data it requested from states related to its cost projections, or documentation of the responses it received from states on the specific modifications to Medicaid IT systems that would be required. CMS officials also acknowledged that each state is different and their IT systems would require different modifications. For the CMS IT-system conversion costs, officials told us that CMS derived its estimates by asking its IT system owners for the costs associated with changes to the systems affected under each of the three options. However, CMS provided us with limited documentation related to the information it supplied to its system owners when collecting cost data to develop its estimates, and no supporting documentation for the data it received from system owners. The documentation CMS provided asked system owners to provide the basis for their estimates (including, for example, costs related to labor and hardware, and software changes and additions), and laid out general assumptions for system owners to consider. However, because CMS asked for estimates for broad cost categories, the data it received were general in nature and not a detailed accounting of specific projected costs. CMS officials also told us that system requirements changed over the course of their work; however, they provided no documentation related to how these changes were communicated to system owners. In addition, CMS officials told us that they generally did not attempt to verify estimates submitted by system owners. CMS could not explain how or why a number of the systems the agency believed would require modifications would be affected under its three options, or the variance in the costs to modify these systems across the options. Moreover, CMS’s estimates for IT-related costs in its 2011 report were approximately three times higher than the estimate in the agency’s 2006 report. That report stated that the majority of changes necessary to replace the existing number with a non-SSN-based identifier would affect only two systems; however, the agency estimated in its 2011 report that up to 48 systems would require modification, depending on the option selected. Furthermore, CMS’s 2006 report stated that the 2 primary IT systems affected—the Medicare Beneficiary Database and the Enrollment Database—accounted for $70 million, or 85 percent, of the IT-related costs. However, in the 2011 report, these 2 systems accounted for 5 percent or less of the IT-related costs, depending on the option implemented. CMS officials we interviewed were unable to explain the differences in the number of systems affected, or the costs of required modifications to IT systems, between the 2006 and 2011 reports. Third, there are inconsistencies in some assumptions used by CMS and SSA in the development of the estimates. For example, CMS and SSA used different assumptions regarding the number of Medicare beneficiaries that would require new Medicare cards. According to CMS officials, the agency based its cost estimates on the number of Medicare beneficiaries at the time the report was prepared (47 million), whereas SSA officials told us the agency based its estimates on the expected number of beneficiaries in 2015 (55 million), the year they estimated the new card would likely be issued. In addition, nearly 30 percent of SSA’s costs were related to processing newly issued Medicare cards that are returned as undeliverable.
However, SSA officials told us that they were not aware that CMS’s cost estimates included plans to conduct an address-verification mailing at a cost of over $45 million prior to issuing new cards. Such a mailing could reduce the number of cards returned as undeliverable, and thus SSA’s costs associated with processing such cards. Finally, CMS did not take into account other factors when developing its cost estimates, including related IT modernization efforts or potential savings from removing the SSN from Medicare cards. In developing its estimates, CMS did not consider ways to integrate IT requirements for removing the SSN from Medicare cards with those necessitated by other IT modernization plans to realize possible efficiencies. DOD and a private health insurer we interviewed reported that when removing SSNs from their cards, they updated their systems to accommodate this change in conjunction with other unrelated system upgrades. CMS officials told us that because many of the agency’s other IT modernization plans are unfunded, the agency does not know when or if these efforts will be undertaken. As a result, the agency is unable to coordinate the SSN removal effort or to estimate savings from combining such efforts. In its report, CMS also acknowledged that if the agency switched to a new identifier used by both beneficiaries and providers, there would likely be some savings due to improved program integrity and a reduced need to monitor SSNs that may be stolen and used fraudulently. However, in developing its estimates, CMS did not include any potential savings the agency might accrue as a result of removing the SSN from Medicare cards. Nearly six years have passed since CMS first issued a report to Congress that explored options to remove the SSN from the Medicare card, and five years have elapsed since the Office of Management and Budget directed federal agencies to reduce the unnecessary use of the SSN. While CMS has identified various options for removing the SSN from Medicare cards, it has not committed to a plan to remove them. The agency lags behind other federal agencies and the private sector in reducing the use of the SSN. DOD, VA, and private health insurers have taken significant steps to eliminate the SSN from display on identification and health insurance cards, and reduce its role in operations. Of the options presented by CMS, the option that calls for developing a new identifier for use by beneficiaries and providers offers the best protection against identity theft and presents fewer burdens for beneficiaries and providers than the other two. Consistent with the approach taken by private health insurers, this option would eliminate the use and display of the SSN for Medicare processes conducted by beneficiaries and providers. While CMS reported that this option is somewhat more costly than the other options, the methods and assumptions CMS used to develop its estimates do not provide enough certainty that those estimates are credible. Moreover, because CMS did not have well-documented cost estimates, the reliability of its estimates cannot be assessed. Use of standard cost-estimating procedures, such as GAO’s estimating guidance, would help ensure that CMS cost estimates are comprehensive, well documented, accurate, and credible. Moving forward, CMS could also explore whether the use of magnetic stripes, bar codes, or smart chips could offer other benefits such as increased efficiencies.
Absent a reliable cost estimate, however, Congress and CMS cannot know the costs associated with this option and how to prioritize it relative to other CMS initiatives. Lack of action on this key initiative leaves Medicare beneficiaries exposed to the possibility of identity theft. In order for CMS to implement an option for removing SSNs from Medicare cards, we recommend that the Administrator of CMS select an approach for removing the SSN from the Medicare card that best protects beneficiaries from identity theft and minimizes burdens for providers, beneficiaries, and CMS, and develop an accurate, well-documented cost estimate for such an option using standard cost-estimating procedures. We provided a draft of this report to CMS, DOD, RRB, SSA, and VA for review and comment. CMS and RRB provided written comments, which are reproduced in appendixes II and III. DOD, SSA, and VA provided comments by e-mail. CMS concurred with our first recommendation to select an approach for removing the SSN from Medicare cards that best protects beneficiaries from identity theft and minimizes burdens for providers, beneficiaries, and CMS. The agency noted that such an approach could protect beneficiaries from identity theft resulting from loss or theft of the card and would give CMS a useful tool in combating Medicare fraud and medical identity theft. CMS also concurred with our second recommendation that CMS develop an accurate, well-documented cost estimate using standard cost-estimating procedures for an option that best protects beneficiaries from identity theft and minimizes burdens for providers, beneficiaries, and CMS. CMS noted that a more rigorous and detailed analysis of a selected option would be necessary in order for Congress to appropriate funding sufficient for implementation, and that it will utilize our suggestions to strengthen its estimating methodology for such an estimate. DOD had no comments on the report or its recommendations. RRB stated that the report accurately reflected its input and had no additional comment. SSA provided only one technical comment, which we incorporated as appropriate, but did not comment on the report’s recommendations. VA concurred with our findings, but provided no additional comments. We are sending copies to the Secretaries of HHS, DOD, and VA, the Administrator of CMS, the Commissioner of SSA, the Chairman of RRB, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, you may contact us at: Kathleen King, (202) 512-7114 or [email protected] or Daniel Bertoni, (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Burdens of CMS’s Proposed Options for Removal of SSN from Medicare Card (Accessible Text)

New identifier (beneficiary and provider use): While any change to the beneficiary identifier could cause initial confusion for beneficiaries, this option creates no additional burden for the beneficiary because the number on the card would be used to receive services and interact with CMS.
New identifier (beneficiary use only): While any change to the beneficiary identifier could cause initial confusion for beneficiaries, this option likewise creates no additional burdens for the beneficiary because the number on the card would be used to receive services and interact with CMS.

New identifier (beneficiary and provider use): While any change to the beneficiary identifier could cause initial confusion among providers, this option would not create additional burdens for the provider, as the provider would be able to obtain the number from the card provided by the beneficiary.

Kathleen King, (202) 512-7114 or [email protected] or Daniel Bertoni, (202) 512-7215 or [email protected]. In addition to the contacts named above, the following individuals made key contributions to this report: Lori Rectanus, Assistant Director; Thomas Walke, Assistant Director; David Barish; James Bennett; Carrie Davidson; Sarah Harvey; Drew Long; and Andrea E. Richardson.
More than 48 million Medicare cards display the SSN, which increases Medicare beneficiaries’ vulnerability to identity theft. GAO was asked to review the options and associated costs for removing SSNs from the Medicare card. This report (1) describes the various options for removing the SSN from Medicare cards; (2) examines the potential benefits and burdens associated with different options; and (3) examines CMS’s cost estimates for removing SSNs from Medicare cards. To do this work, GAO reviewed CMS’s report, cost estimates, and relevant supporting documentation. GAO also interviewed officials from CMS and other agencies that perform Medicare-related activities (the Social Security Administration and Railroad Retirement Board), as well as officials from DOD and VA, which have undertaken SSN removal efforts. GAO also interviewed private health insurance companies and relevant stakeholder groups. The Centers for Medicare & Medicaid Services’ (CMS) 2011 report to Congress proposed three options for removing Social Security numbers (SSN) from Medicare cards. One option would truncate the SSN displayed on the card, but beneficiaries and providers would continue to rely on the SSN. The other two options would replace the SSN with a new identifier that would be displayed on the card and be used either only by beneficiaries or by both beneficiaries and those who provide Medicare services. CMS, however, has not selected or committed to implementing any of these options. The Departments of Defense (DOD) and Veterans Affairs (VA), and private insurers have already removed or taken steps to remove SSNs from display on their identification or health insurance cards. CMS’s option to replace the SSN with a new identifier for use by both beneficiaries and providers offers the greatest protection against identity theft. Beneficiaries’ vulnerability to identity theft would be reduced because the card would no longer display the SSN and providers would not need the SSN to provide services or submit claims (negating the need for providers to store the SSN). This option would also pose fewer burdens than the other two options because beneficiaries would not have to remember an SSN to receive services or to interact with CMS. Providers also would not need to conduct additional activities, such as querying a CMS database, to obtain the SSN. The burdens for CMS would generally be similar across all the options, but CMS reported that this option would require more information technology (IT) system modifications. CMS reported that each of the three options would cost over $800 million to implement, and that the option to replace the SSN with a new identifier for use by both beneficiaries and providers would be somewhat more expensive, largely because of the IT modifications. However, the methodology and assumptions CMS used to develop its estimates raise questions about their reliability. For example, CMS did not use appropriate guidance, such as GAO’s cost-estimating guidance, when preparing the estimates to ensure their reliability. Additionally, CMS could provide only limited documentation related to how it developed the estimates for the two largest cost areas, both of which involve modifications to IT systems. GAO recommends that CMS (1) select an approach for removing SSNs from Medicare cards that best protects beneficiaries from identity theft and minimizes burdens for providers, beneficiaries, and CMS and (2) develop an accurate, well-documented cost estimate for such an option.
CMS concurred with our recommendations. VA, DOD, and RRB had no substantive comments. SSA had a technical comment.
The two largest federal school meals programs, the National School Lunch Program, established in 1946, and the School Breakfast Program, permanently established in 1975, aim to address problems of hunger, food insecurity, and poor nutrition by providing nutritious meals to children in schools. Although federal requirements for the content of school meals have existed since the programs began, as research has documented the increasing incidence of children who are overweight and obese in the United States, the federal government has taken steps to improve the nutritional content of meals. The Healthy, Hunger-Free Kids Act of 2010 required USDA to update federal requirements for school lunches and breakfasts and to establish standards for competitive foods—foods sold to children in schools other than through the school meals programs. USDA issued final regulations that made changes to many of the meal content and nutrition requirements in January 2012, and many of the new lunch requirements were required to be implemented beginning in school year 2012-2013, with changes to breakfast generally beginning in school year 2013-2014. USDA issued interim final regulations that established new nutrition standards for competitive foods in June 2013 and required them to be implemented beginning in school year 2014-2015. For lunches, USDA regulations implementing the Healthy, Hunger-Free Kids Act of 2010 made changes to meal components and nutrition standards. Regarding meal components—fruits, vegetables, meats, grains, and milk—lunches must now include fat-free or low-fat milk and both fruit and vegetable choices. While students may be allowed to decline two of the five lunch components they are offered, they must select at least one half cup of fruits or vegetables as part of their meal in order for it to be reimbursable. (See fig. 1.) For nutrition standards, the regulations include minimum and maximum calorie levels for lunches and require that lunches do not include trans fat and contain reduced amounts of sodium and saturated fat. USDA regulations also phased in some of the new lunch requirements. For example, in school year 2012-2013, USDA initially required that at least 50 percent of grain products offered in lunches be whole grain-rich—containing at least 50 percent whole grains—during the school week. In school year 2014-2015, the whole grain-rich requirement was increased to 100 percent of grain products, although school food authorities (SFAs) may request temporary exemptions from this requirement from their states. USDA regulations also phase in requirements for sodium reductions in lunches. The Target 1 sodium limits became effective in school year 2014-2015, and future sodium reductions are set for school years 2017-2018 (Target 2) and 2022-2023 (Target 3). (See table 1.) However, USDA cannot implement these future reductions until the latest scientific research establishes that they are beneficial to children. For breakfast, USDA’s regulations establish three meal components—fruit, grains, and milk—and require that breakfasts include whole grain-rich foods and only fat-free or low-fat milk. (See fig. 2.) Starting in school year 2014-2015, schools must offer one cup of fruit with each breakfast each day and may offer vegetables in place of fruit. If a school chooses to offer four or more food items, a child must take at least three, including at least one half cup of fruit or vegetable substitute, in order to have a reimbursable meal.
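These offer-versus-serve rules amount to a simple check at the point of service. The sketch below encodes the lunch rule described above (five components offered, at least three taken, including at least one half cup of fruits or vegetables); it is a hypothetical illustration, not actual USDA or point-of-sale software.

# Hypothetical sketch of the lunch offer-versus-serve rule described
# above; not actual USDA or point-of-sale software.
def lunch_is_reimbursable(components_taken: set[str],
                          cups_fruit_veg: float) -> bool:
    """A lunch tray is reimbursable if the student takes at least
    three of the five offered components (fruits, vegetables, meats,
    grains, milk) and at least one half cup of fruits or vegetables."""
    offered = {"fruits", "vegetables", "meats", "grains", "milk"}
    taken = components_taken & offered
    return len(taken) >= 3 and cups_fruit_veg >= 0.5

print(lunch_is_reimbursable({"grains", "meats", "milk"}, 0.0))   # False: no fruit/vegetable
print(lunch_is_reimbursable({"grains", "fruits", "milk"}, 0.5))  # True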
In addition, the new nutrition standards include minimum and maximum calorie levels for breakfasts and require that breakfasts include no trans fat and reduced amounts of sodium and saturated fat. Similar to lunch, the whole grain-rich requirement was phased in, with exemptions available, and the first sodium target was effective in school year 2014-2015, with further reductions in future years. (See table 2.) For competitive foods, USDA regulations establish nutrition standards for foods or beverages, other than school meals, sold to children in schools during the school day, by the SFA or school groups. Competitive food sales may take place at fundraisers on the school campus, as well as at specific venues in schools, such as vending machines, school stores, and a la carte lines in the cafeteria, through which the SFA sells individually priced food and beverage items. The new federal competitive food requirements include limits on calories, sugars, total and saturated fat, trans fat, and sodium; establish new standards for beverages; and add whole grain-rich requirements. Competitive foods sold in schools generally must meet these requirements during the school day, which the regulations define as beginning at midnight and ending 30 minutes after the end of the school day. However, USDA has provided flexibility to states to grant exemptions from the competitive food requirements for infrequent fundraisers held during the school day, allowing states to set the number of fundraisers schools can hold in which the food items being sold do not have to meet competitive food requirements. Outside of the school day, food sales also do not have to comply with the requirements. Before the Healthy, Hunger-Free Kids Act of 2010, competitive foods were largely unregulated at the federal level, with only minimal restrictions prohibiting the sale of certain competitive foods, known as foods of minimal nutritional value, during meal periods in school cafeterias and other food service areas. Nationwide, participation in the National School Lunch Program has declined in recent years after having increased steadily for more than a decade. In our January 2014 report, we found that total student participation in the National School Lunch Program—the total number of students who ate school lunches—dropped by a cumulative 1.2 million students (or 3.7 percent) from school years 2010-2011 through 2012-2013, with most of the decrease occurring during school year 2012-2013. According to our recent analysis of USDA data, school lunch participation continued to decline during school year 2013-2014, reaching a cumulative decline of 1.4 million students (or 4.5 percent) since school year 2010-2011. (See fig. 3.) The participation rate, which measures the proportion of all students in schools that take part in the National School Lunch Program who ate school lunches, also declined during this period, falling from 62 percent in school year 2010-2011 to 58 percent in school year 2013-2014. The decrease in the total number of students eating school lunches during the last three school years was driven primarily by a decrease in students paying full price for meals. Based on family income, children who participate in school meals programs either pay full price or qualify to receive free or reduced-price meals.
The number of students paying full price for meals declined by two million, a decrease that could be caused by students choosing to no longer purchase lunch at school or by students becoming eligible for free or reduced-price meals. The decline in students paying full price for lunch exceeded the increase in the number of students eating free lunches during the same time period. (See fig. 4.) State and SFA officials told us that several factors likely influenced decreases in participation in the school lunch program, though the extent to which each factor affected participation is unclear. For example, officials from all the SFAs we interviewed during school year 2012-2013 reported that student acceptance of the lunch content changes was a challenge, which we heard again from officials we interviewed during school year 2014-2015. Further, officials from seven of the eight states we interviewed in school year 2014-2015 reported that the decreases in lunch participation were influenced by student acceptance of the changes made to comply with the new lunch content and nutrition standards. According to officials from four states, another factor that may have led to lower participation among students paying full price for lunch is the federally required increase in the prices of paid lunches in certain districts—also known as paid lunch equity. These federally required price increases may have resulted in students who previously purchased school lunches deciding not to purchase lunch at the higher price. The increase in lunch participation by students who qualify for free lunches may be influenced by several factors. USDA has reported that the introduction of the Community Eligibility Provision, which allows qualifying, high-poverty schools to provide free meals to all students, is intended to remove the stigma of receiving free meals and reduce administrative barriers to student participation. Other factors that may have influenced the increase in participation by students who receive free school lunch include the economic downturn that began in 2007 and adjustments made to how student eligibility for free and reduced-price meals is determined. As we previously found in our January 2014 report, the impact of other factors at the local level on lunch participation may vary. Specifically, changes to district and school policies that affect school lunch may increase or decrease lunch participation. For example, in 2014, officials in three of the eight districts we visited noted that the time allotted for lunch periods may affect participation. In addition, USDA officials told us that school closures, mergers, moves, consolidation due to economic conditions, and issues with food service management companies may affect school lunch participation. Participation in the School Breakfast Program continued its upward trend during school year 2013-2014, continuing more than a decade of steady increases. According to our analysis of USDA data, participation in the School Breakfast Program grew by 1.4 million students (or 12 percent) from school year 2010-2011 through school year 2013-2014, to a total of 13.5 million students. (See fig. 5.) The participation rate also increased during this period, growing from 26 percent in school year 2010-2011 to 28 percent in school year 2013-2014. The number of students participating in school breakfast, however, was less than half the number participating in school lunch in school year 2013-2014.
According to USDA data, the increases in School Breakfast Program participation can be explained, in part, by program expansion. Specifically, since school year 2010-2011, the program has expanded to more than 1,500 additional schools, while also increasing the proportion of enrolled students eating breakfast. That growth has been driven largely by increases in the number of free breakfasts served, which accounted for 1.3 million of the 1.4 million additional average daily breakfasts served from school years 2010-2011 through 2013-2014. (See fig. 6.) Breakfast has also benefited from efforts intended to increase participation using alternative formats, such as providing students with breakfast in the classroom and breakfast after the school day has started. USDA found in 2009 that the probability of student participation in school breakfast increases when breakfast is served in the classroom rather than in the cafeteria, and that the more time students have to eat breakfast, the more student participation increases. SFAs and states reported that some of the challenges they have experienced meeting the new school meals requirements have persisted since school year 2012-2013, such as increased plate waste. Plate waste occurs when students take food required for a school meal, but then choose not to eat it. In our January 2014 report, we found that 48 states reported that plate waste—particularly for fruits and vegetables—was a challenge for their SFAs in school year 2012-2013. Further, in 7 of the 17 schools we visited in school year 2012-2013, we saw many students throw away some or all of their fruits and vegetables at lunch. During school year 2014-2015, directors and staff from five of the eight SFAs we reviewed indicated that this issue has persisted as a challenge, though staff from three SFAs reported that plate waste was not a problem at their schools. Our lunch observations suggest that plate waste may be beginning to decrease as students adjust to school meals that meet the new requirements. Specifically, the plate waste we observed when visiting schools in school year 2014-2015 was generally limited to a small number of students throwing away some of their fruits and vegetables in 7 of the 14 schools. SFA food preparation changes and student acceptance of fruits and vegetables may be helping to reduce fruit and vegetable waste in some districts. For example, officials from three SFAs noted that it is sometimes difficult for students to eat whole fruit during a meal, and one SFA has responded by giving pre-cut, rather than whole, fruit to elementary and middle school students. In addition, when we asked students what they like about school lunch, students we spoke with in 13 of the 14 schools generally reported liking fruit and vegetable options, and those at 5 schools highlighted fruits or vegetables as their favorite breakfast or lunch items. Although there has been some progress, poor student acceptance of certain foods is another longstanding challenge that SFAs continued to report. For example, in our prior work, we found that SFAs were challenged by student acceptance of some whole grain-rich products in school year 2012-2013. With the requirement increasing from half of the grains served in meals having to meet the whole grain-rich definition beginning in school year 2012-2013 to all grains in school year 2014-2015, we found that these acceptance challenges have continued for most SFAs we reviewed.
Specifically, directors and staff from seven of eight SFAs told us that students do not like certain whole grain-rich foods, so getting them to take and eat them continues to be a challenge. Representatives from five of these SFAs highlighted whole grain pasta as being particularly challenging to serve, with one noting, for example, that whole grain-rich pasta loses structural integrity soon after being served, becoming unappealing to students. While none of the SFAs we visited had applied for the temporary pasta exemption or grain product exemption made available by USDA, we found that two had nevertheless been serving pasta that was not in compliance with the whole grain-rich requirement. For example, one SFA director told us that she was not serving compliant lasagna noodles because she was unable to find whole grain-rich noodles that would work well in the SFA’s recipe. In addition, SFA directors and staff mentioned whole grain-rich bread and crackers as examples of other items that have been challenging to get some students to accept. In our June 2013 testimony, we also found poor student acceptance of vegetables in the beans and peas (legumes) and red/orange vegetable subgroups, and we found during our recent SFA visits that this challenge has persisted for five of the eight SFAs. For example, two SFA directors reported that they have tried to replace regular potatoes with sweet potatoes in fries or tater tots, but students have not embraced the change. Also, one SFA director noted that even when staff prepare a small amount of legumes, they end up throwing some of it away because children do not take it. Despite their persistence, the challenges around student acceptance that we previously reported may be improving over time as students adjust to the lunch changes and SFAs find more acceptable products and recipes. Specifically, directors and staff from four SFAs we visited reported some success in addressing challenges in obtaining student acceptance of whole grains, including three that used white whole wheat flour and one that mixed whole grains—such as rice—in with other foods rather than serving them on their own. Another SFA director indicated that the SFA’s early adoption of whole grain-enriched foods helped ease the transition to meeting the federal standards, and student acceptance has improved over time. The opportunities for SFAs to receive temporary exemptions from whole grain requirements—both specifically for pasta and generally for any grain product—were designed to help ease these challenges in some SFAs across the country. Further, food industry representatives we spoke with reported that they are taking steps to help schools improve preparation of these products, which may also help improve student acceptance. For example, representatives of three companies, including one that produces pasta, said that they are currently focused on educating SFAs and schools on preparation techniques that maximize palatability of whole grain-rich products. In addition, directors from two SFAs noted that they have found ways to incorporate legumes and red/orange vegetables into dishes that students will eat. For example, one said the SFA was able to incorporate red/orange vegetables into popular harvest cake, pumpkin bars, and a sweet potato and apple side dish on the elementary school menus in the fall, and another successfully included black and refried beans in tacos. In our prior work, we also found that managing food costs was a significant challenge in school year 2012-2013. 
During our current review, we found that managing food costs has persisted as a challenge for several of the SFAs we visited. Specifically, in our prior work, 47 states reported that food costs were a challenge for their SFAs in school year 2012-2013, and all eight directors of SFAs we visited reported that fruit and vegetable expenditures increased substantially from school year 2011-2012 to school year 2012-2013. During our recent visits to SFAs, we found that four of the eight SFA directors continue to report increased food costs due to the new requirements for school meals, which three attributed, in part, to increased costs for fruits and vegetables. In addition, two SFAs reported a net financial loss from school years 2012-2013 to 2013-2014—a trend they expected to continue for school year 2014-2015—which they said reduced their SFAs’ fund balances. In addition to increasing food costs, several SFA and state officials highlighted increasing employee wages and benefits as another major driver of SFAs’ increasing costs. The 2014-2015 school year marked the first school year for which SFAs were required to comply with sodium reductions as part of a planned phase-in of new sodium limits for breakfast and lunch, and while the eight SFAs we reviewed reported meeting the first sodium targets, officials reported difficulties in doing so. For example, to meet the first sodium target, three SFAs altered popular items to comply with the requirements, and three SFAs removed popular items altogether—including certain cereals, biscuits and gravy, and chili. Staff from another SFA said they had replaced all added salt with pepper, which resulted in a strong pepper flavor for many foods, and other SFA staff reported switching to low-sodium gravy and removing pickles from the condiment station. Some students we spoke with in 6 of 14 schools made comments about the lack of flavor in school meals that suggested they noticed the changes SFAs made to meet the first targets. Further, we found that students in two of the SFAs had attempted ways to add sodium to their school meals, including bringing salt and pepper shakers to lunch in the cafeteria (see fig. 7) and asking school faculty and administrators to obtain additional condiment packets for them from the cafeteria. The dietitian in the latter SFA noted that under the current sodium restrictions, the SFA no longer allows students to take unlimited quantities of condiments. SFA directors, state officials, and industry representatives we interviewed expressed concerns about the future sodium targets for school meals. Directors and staff from three SFAs indicated that they made changes to food that are within their control in order to meet the first sodium requirements, and those from two noted that they are doubtful it will be feasible to meet the future targets without changes made by the food industry. Similarly, officials from three of eight states noted that SFAs’ success with meeting future sodium targets depends on industry’s ability to manufacture compliant foods that students will eat. Representatives from four of the eight food companies we interviewed said they anticipated problems with developing foods that would meet future targets. Three of the representatives noted that sodium is a necessary component of certain products, including breads, meat, and cheese, so reducing sodium content further in those products will be difficult. For example, one company representative said that reduced sodium could shorten products’ shelf lives.
Representatives from two of the companies that are already reformulating their food products to meet the future targets said they have encountered challenges with respect to palatability when testing these reformulated foods. In addition, we found that uncertainty about when and if future targets will be implemented may be delaying some industry progress toward developing compliant products. For example, a representative from one company said that the company is waiting to see if USDA will maintain the first target as the ultimate sodium limit, in part because changing formulations is expensive and time-intensive. Consistent with this, officials from two states said they believe some food manufacturers may be taking a wait-and-see approach to the sodium targets due to uncertainty about the implementation of future targets, which could affect the availability of low-sodium food products in the future. In April 2015, USDA officials told us that, consistent with the statutory requirements, they have been examining the science around the health effects of sodium, and they do not expect to make any policy decisions on future sodium targets until the Dietary Guidelines for Americans recommendations are released later this year. In 2010, the National Academies’ Institute of Medicine reported that several broad barriers precluded rapid and large reductions in the sodium content of school meals, noting that most schoolchildren prefer salty food, and that this preference should be expected to persist as long as children are exposed to salty foods at home or elsewhere. Pointing to studies showing that lower-sodium diets are more readily accepted when reductions are carried out gradually, the Institute of Medicine recommended that USDA gradually phase in the sodium reductions in school meals over 10 years. However, the Institute of Medicine said that following a phased-in approach may not remove all related challenges to student acceptance, and SFAs will not be able to accomplish these reductions on their own. The report noted that the food industry’s partnership is essential because the current sodium content of many commercially prepared foods available for school meals is moderately high or high, and the food industry is responsible for the diversity and quality of these foods. It also acknowledged the correlation between palatable foods at school and school meals participation and concluded that it is unlikely that children will be easily motivated to continue to eat foods they find unappealing. Further, the report also noted that any loss of revenue based on decreased participation presents a threat to the financial stability of SFAs’ programs. Recognizing the challenges related to implementation of the sodium targets, USDA officials told us they have taken steps intended to assist SFAs as they modify foods in preparation for the future targets. For example, officials told us they began planning an initiative in the summer of 2014 called “What’s Shaking? Creative Ways to Boost Flavor with Less Sodium” with the goal of developing and sharing strategies for reducing sodium levels in school meals. Officials reported that the initiative has 36 partners, including organizations focused on improving public health and other federal agencies such as the Department of Health and Human Services’ Centers for Disease Control and Prevention; the partners began meeting in April 2015.
USDA officials reported that they have also held roundtable discussions and are planning future listening sessions with districts to discuss and receive feedback on specific challenges SFAs are encountering as they try to plan menus with reduced sodium. An additional effort that may assist SFAs with the sodium reductions in the future is USDA’s training and peer-to-peer mentoring program, “Team Up For School Nutrition Success,” which aims to leverage successes from some SFAs by sharing best practices with others that are struggling with specific aspects of implementing the new nutrition requirements. The department has also made a number of resources on reducing sodium in school meals available for SFAs on its website, including recipes, fact sheets, presentations, and webinars. USDA is also beginning to gather information from food producers about their progress toward developing school meals that meet the future sodium targets. In June 2015, the department requested proposals for a study examining the market availability of foods that meet both the current and future targets, as well as successes and challenges experienced by the food industry and SFAs as they take steps to reduce sodium in school meals. Among other things, the study aims to identify barriers the food industry faces and expects to face in providing schools with lower-sodium foods, as well as to gauge progress made toward future sodium targets. In addition, officials reported that they have conducted some outreach to food industry representatives on the future sodium targets, for example, by visiting a spice manufacturing company to discuss alternative food preparation techniques that require little or no added salt. USDA officials also said that they are gathering feedback on trends related to new technologies and tactics industry is employing to lower sodium levels in food. In July 2015, USDA officials noted that the food industry representatives they have talked to have committed to working toward the future targets but have also emphasized that research and development efforts take time, and they do not want to compromise quality or taste in producing compliant foods. USDA officials noted that while they believe the sodium reductions in school meals can be achieved, doing so will take time and energy and will require both industry innovation and cooperation among SFAs, the food industry, USDA, and other partners that promote good child nutrition. While many states and school districts had pre-existing policies on nutrition standards for competitive food sales when the new federal requirements were established, SFAs and school groups in seven of the eight districts we reviewed reported that they had to make changes to the competitive foods they offered for sale in school year 2014-2015 in order to comply. USDA reported that, by 2012, at least half of states had competitive food standards for foods sold in schools through a la carte sales in the cafeteria, vending machines, school stores, and snack bars, and almost half had nutrition standards for foods sold through bake sales. Six of the eight school districts we reviewed were in states that had pre-existing competitive food policies. However, pre-existing policies did not eliminate the need for most SFAs to make changes in school year 2014-2015, as officials from seven of the eight SFAs we reviewed said that they discontinued some products and added others to comply with the new federal standards.
For example, some products, such as candy, were discontinued entirely. Further, two of eight SFA directors said that they stopped selling some items a la carte that were served as part of the school lunch, such as a side salad with dressing, peach cobbler, and fried potatoes, because the individual items did not meet the competitive food requirements. At one school, because some meal components met the competitive food requirements and others did not, the SFA discontinued all a la carte sales of meal components to avoid confusing students. SFA directors, athletic directors, and a school store manager said that non-SFA groups also had to change the products they sold, for example, discontinuing sales of candy, donuts, chicken sandwiches from a restaurant, and full-calorie soda. SFAs and school groups also added various compliant food products to their competitive food sales. For example, some added products that had been reformulated to comply with the new requirements, such as flavored chips, puffed grain snacks, ice cream, and marshmallow cereal treats. Sports drinks continued to be offered at some schools, but in lower-calorie or no-calorie versions and in small bottles. Carbonated drinks, which previously were generally prohibited by federal regulations from being sold in school food service areas at mealtimes, were also added, in low-calorie and no-calorie forms, to competitive food sales in high school cafeterias at three of the eight SFAs we contacted. Six of eight SFAs told us they had difficulty, particularly at the beginning of school year 2014-2015, finding compliant competitive food products and obtaining sufficient quantities of those products to meet student demand. For example, two SFAs said they had a difficult time obtaining products like compliant chips and marshmallow cereal treats. Food industry representatives we spoke with reported that these shortages were the result of initial difficulty estimating demand for the new reformulated products. However, SFAs reported that these supply challenges diminished as the year progressed. SFAs reported concerns about revenue losses from switching to sales of competitive foods that comply with the new federal nutrition requirements. Of the five SFAs that provided us with information on their competitive food sales from the beginning of school year 2014-2015 up to the time of our visits, four told us that their competitive food sales had decreased in comparison to the previous school year, while one SFA reported increased competitive food sales. Reasons the SFAs gave for reduced revenues included lost sales of popular discontinued foods and students buying less of the new foods offered for sale. SFAs also reported that compliant items cost more than the noncompliant items sold previously, but they did not feel that they could increase prices to the same degree, which resulted in lower profits. The SFA director who reported increased sales said that her a la carte sales were slightly higher partway through the 2014-2015 school year, possibly because she had not raised prices and offered a wide variety of snacks and beverages. We also heard of mixed effects on school group fundraising revenues.
For example, an athletic director in one district we reviewed and a school store manager in another said that they had experienced reduced revenues from fundraising, which resulted in less money to subsidize athletic facilities, equipment, uniforms, field trips, and travel for competition at regional and national events. However, some groups also mentioned that changing from food to non-food sales can sometimes increase fundraiser revenues. For example, an athletic director in another district and a representative of a national association said that other types of fundraising, such as craft fairs or sales of cards that provide discounts at local restaurants and other businesses, have raised more money than candy sales in some districts. The concerns we heard about competitive food revenues are generally consistent with those raised by school districts in the past and discussed by USDA in its interim final regulations. For example, school districts we visited in 2005 that had taken steps to substitute healthy competitive foods for less nutritious items expressed strong concerns about potential revenue losses. However, at that time, the limited data available on competitive foods revenue from the schools and districts we visited suggested that districts experienced mixed revenue effects from changes made to competitive food sales. In its interim final regulations, USDA acknowledged that there was considerable variation among schools in the share of their revenue from competitive foods, and that some schools might see substantial reductions in competitive food revenues after implementation of the federal requirements, at least in the short term. At the same time, revenue effects and other challenges related to implementing the new federal competitive food standards have likely been mitigated for school groups that received exemptions from the new requirements for certain fundraisers. School groups are subject to the same federal competitive food requirements as SFAs, unless exempted; USDA permits states to determine the allowed frequency of fundraising exemptions without federal review, as long as those sales do not take place in the food service area during meal times. The eight states we reviewed varied in the number of fundraising exemptions allowed in school year 2014-2015, with no fundraiser exemptions allowed in four states and varied policies on the number exempted and their duration in the remaining four. For example, during school year 2014-2015, one state allowed 10 exempted fundraisers per year, per school, with each lasting no longer than 4 consecutive days, while another state allowed unlimited fundraisers on 36 designated days during the school year. (See table 3.) Fundraiser exemptions remove challenges some school groups may face in complying with the federal requirements, but they potentially raise other challenges as well. For example, in 2013 we found that some SFAs were concerned that these exemptions put the SFA at a competitive disadvantage relative to other food sales within the school. Further, some commenters on the interim final regulation raised concerns that exempt fundraisers threaten the rule’s public health goals and student participation in the meals programs. Even without a fundraiser exemption, school groups may sell food products that do not comply with the competitive food requirements on the school campus beginning 30 minutes after the school day ends or in non-school locations.
In three of eight districts we reviewed, some vending machines were not turned on until after school. At two schools, we observed sales of food and beverage products 30 minutes after the end of the school day that did not meet the competitive food requirements, such as candy, large muffins, pizza, and full-calorie carbonated drinks. We were told that school groups also continued to operate concession stands at after-school sporting events and other school events, selling products that were not subject to the competitive food requirements. Groups also continued to sell items like candy bars as fundraisers, providing boxes of candy bars to students to sell to family, neighbors, and friends outside of school. According to states and SFAs, the involvement of a variety of groups in competitive food sales at schools has made oversight of these sales challenging. According to USDA, SFAs are responsible for the compliance of the sales they operate, and school districts are responsible for the compliance of foods sold in areas outside the control of the SFA. However, while principals at all of the schools we contacted were aware of competitive food sales at their schools, our conversations with them suggested that their involvement in overseeing these sales varied. For example, 6 of 16 principals said that they or the school administration must approve fundraisers. Further, 4 school principals told us that they rely on the SFA director for information about the competitive food requirements. This is consistent with our conversations with SFA directors, as five of the eight SFA directors said that they provided information and assistance regarding the new competitive food requirements to school administration or school groups and helped determine product compliance. In one school district, competitive foods had to be approved by the SFA’s dietitian, and in two other school districts, the SFA director provided an approved list of items that could be sold. At the same time, however, three of eight SFAs expressed concerns that not all school groups were in compliance with the new requirements and that the consequences for noncompliance were unclear. For example, one SFA director said that when she observed noncompliant snacks being sold at schools, she reported it to the principal and superintendent, but that typically did not result in the noncompliant sales being shut down permanently. Beginning in school year 2014-2015, USDA requires that periodic state reviews of schools identify the entities responsible for selling food and beverages and ensure that such food meets the competitive food requirements, which should help address such situations. During our visits to schools and discussions with SFAs and students, we found that schools in six of the eight SFAs we reviewed had some competitive foods for sale during the school day that were not compliant with the new federal requirements. In some cases, particular items were out of compliance, such as a sports drink sold a la carte by an SFA that exceeded the size and calorie requirements, trail mix sold at a school store that exceeded the calorie and fat requirements, and pizza slices sold by a school store during lunch that we were told did not meet whole grain-rich requirements.
In other cases, we observed a bake sale in a school cafeteria during the lunch period, and students told us about another bake sale held by a teacher in a classroom during the school day, as well as sales of candy in the library during the school day. All states we contacted indicated there were challenges in local-level oversight of competitive foods. Officials from three of eight states said that there is a lack of clarity at the local level about who is responsible for compliance. Officials from two states said that neither the SFAs nor school administrators want to be the “food police,” responsible for overseeing compliance at the local level. Although officials from three states said that ultimately the superintendent is responsible for the compliance of school groups’ competitive food sales, not all districts have worked out how this operates at the local level or defined the roles of the SFA director and others. In its interim final regulations, USDA foresaw issues with ensuring compliance at the local level and said it envisioned that a school district designee, such as a local school wellness coordinator, might need to take the lead in this area. USDA also suggested that sorting out who is responsible for monitoring competitive foods would initially require planning and cooperation, but that if all parties (i.e., school wellness coordinator, SFA, and school groups) worked together, such as by sharing information on allowable foods, implementation in future years would be greatly streamlined. The expansion of state oversight responsibility to include reviews of competitive foods could help ensure compliance with the nutrition standards, though some states reported that initial oversight of this area has been challenging. Specifically, beginning in school year 2014-2015, states are responsible for overseeing local compliance with both federal school meals and competitive food requirements, including providing technical assistance and developing corrective action plans for SFAs and schools when noncompliance is found. However, officials from two of eight states told us that their initial oversight of this area suggests that not all school districts take competitive food compliance seriously, and some try to find ways around the requirements. An official from another state said that there is sometimes resistance from principals to modifying these sales, especially when competitive food sales may be the only source of discretionary money for school activities and equipment. Officials from three of eight states also expressed concerns that they currently do not have enforcement tools for competitive food sales, and officials in one state said that they have heard from SFA directors who do not think that school groups will take compliance with the requirements seriously unless there are financial consequences. USDA officials also said that they will be reviewing all aspects of the first year’s implementation of the competitive food requirements and will consider any necessary changes before finalizing the competitive food regulations. When issues of competitive foods noncompliance arise during state oversight reviews, USDA has said that it sees technical assistance and training as the first step to address them.
However, the competitive foods interim final regulations also indicate that USDA will issue a proposed rule to address a number of integrity issues related to administration of the school meal programs and competitive foods, which will provide states with options for imposing fines against any school or SFA failing to comply with program regulations. Officials from several states and SFAs that we interviewed during school year 2014-2015 indicated that USDA’s assistance on the new school meals and competitive food requirements was helpful or has recently improved; at the same time, some found the amount of USDA guidance to be overwhelming. In our January 2014 report, we found that all states considered USDA’s guidance and training useful as the new school lunch requirements were implemented during school year 2012-2013. As implementation of the school meals and competitive food changes continued during school years 2013-2014 and 2014-2015, officials from five states and two SFAs whom we interviewed in school year 2014-2015 similarly noted that USDA assistance has been helpful. For example, officials from one state noted that frequent webinars, covering such topics as menu planning using USDA foods, have been useful. In addition, two SFAs reported that USDA guidance on the changes has improved over time; however, a state official also told us that USDA has issued too much guidance, and it has overwhelmed SFAs. Officials from three SFAs we spoke with also told us that the amount of guidance they have received from USDA has been challenging, especially given the complex nature of the guidance, and one SFA official noted there is not enough time to read all of it. This difficulty keeping up with the extensive amount of USDA guidance is consistent with what some states and SFAs reported to us during school year 2012-2013. In our 2014 report, we found that, in the first 33 months after the Healthy, Hunger-Free Kids Act of 2010 was enacted (from January 2011 through September 2013), USDA issued about 90 memos to provide guidance to states and SFAs on the new requirements for the content of school lunches and paid lunch equity. In the 19 months since then (from October 2013 through April 2015), we found that USDA issued 51 additional memos related to these changes. In total, USDA has issued nearly 4,700 pages of guidance on the lunch, breakfast, and competitive food requirements since the final rule on changes to the lunch and breakfast content and nutrition standards was issued in January 2012. With the bulk of the changes to school meals and competitive foods nutrition standards already required to be implemented, the need for USDA to issue additional guidance on these changes should diminish in future years, according to officials. In our 2014 report, we found that several SFAs reported challenges with the high volume of USDA guidance issued at the same time that they were implementing significant changes to the lunch program. Similarly, during the 2013-2014 and 2014-2015 school years, SFAs were implementing substantial program changes—related to breakfast, competitive foods, and whole grains and sodium for lunch—concurrently with USDA’s issuance of guidance on those topics. Since the only future planned changes are additional sodium reductions for school meals and minor changes to competitive foods, it is likely the amount of USDA guidance will decrease.
Some state and SFA officials we spoke with also found that USDA’s assistance in response to questions about school meals and competitive food requirements was not always timely or clear. While several state officials we spoke with said they appreciated USDA’s assistance or had good working relationships with USDA regional offices, officials from five of the eight states we spoke with were frustrated by USDA’s response times. Specifically, officials from two of those states said that USDA sometimes did not respond to questions for more than a month. Officials from two states also expressed frustration with the timing of USDA’s release of guidance surrounding state compliance reviews, as changes were made after the review cycle began. An official from one of the two states told us that, as a result of the timing, the state was not able to effectively redesign its electronic records system, which had already been finalized. In addition, officials from two states and four SFAs also told us that some of USDA’s guidance was unclear or inconsistent. For example, one SFA official said that guidance on whether certain a la carte items meet the competitive foods requirements was difficult to interpret. Two SFA officials also said that the guidance issued on juice—including how much can be offered and how frequently—has been confusing. In addition, a state official described frustration with the multiple modifications of USDA’s guidance surrounding smoothies, which substantially changed how smoothie components were to be credited toward the meal pattern. However, not all SFAs we spoke with highlighted such challenges related to USDA guidance. For example, one SFA director noted that while one part of the competitive foods guidance was unclear, the competitive foods requirements are new, and she expected that USDA would provide clarity. She added that she thought USDA provided sufficient communications on the new requirements. In recognition of the challenges SFAs have faced while implementing the new requirements, USDA has provided other types of assistance intended to help clarify the regulations and guidance. For example, USDA partnered with the Institute of Child Nutrition to create the “Team Up For School Nutrition Success” training and mentoring program, which is designed for SFAs to share best practices. Also under this program, experienced SFAs provide targeted technical assistance to those struggling with certain aspects of implementation to clear up lingering confusion. In addition, the initiative offers monthly webinars for states and SFAs on a wide variety of topics, including menu planning and sodium; the webinars are also made available online for later viewing. Although USDA has taken some steps toward addressing three of the four recommendations included in our report and testimony on the school lunch nutrition changes, the department has not yet acted on an issue we reported in our 2013 testimony; addressing it would help SFAs meet the school lunch calorie requirements. Specifically, we found that the gap in the calorie ranges for the 6-8 and 9-12 grade groups—600-700 and 750-850 calories, respectively—was problematic for districts we visited that included schools with students in both groups. In guidance, USDA acknowledged that the lack of overlap in the calorie ranges for the two grade groups can be challenging and suggested that districts serve a menu appropriate for the lower grade level and add a few additional foods for students in the upper grade level.
However, as part of that report we also found that this may not be a feasible solution for some schools because, for example, students in different grade ranges may use the same serving lines during a shared lunch period. In such schools, cashiers at the point of sale may not know each student’s grade level and therefore may not be able to accurately identify whether lunches comply with requirements. Thus, in that report we recommended that USDA provide flexibility to help SFAs with schools that serve students in both grade groups comply with the defined calorie ranges. While USDA generally agreed with the recommendation, indicating that it recognizes the need to address the challenges posed by the lack of overlap in the calorie ranges, the department has not yet taken action to address the issue. In the absence of additional USDA assistance in this area, state and SFA officials we spoke with in school year 2014-2015 described varied approaches they have taken to address this issue, all of which are inconsistent with USDA regulations and guidance. For example, one state official told us that for schools serving meals to students in both grade ranges, the state recommended serving meals that met the calorie range associated with the predominant grade group served at each lunch period. An SFA from another state took a different approach, planning its menus for these schools to offer lunches with maximum calorie counts midway between the middle school maximum and the high school minimum (that is, 725 calories). Officials from another state said that while USDA requirements do not provide flexibility in this area, and schools with both 6-8 and 9-12 grade groups are supposed to be considered out of compliance if they are not meeting both calorie ranges, the state has instead chosen to use common sense when reviewing the menus in such schools. Because the absence of USDA guidance is leading states to make varied decisions about menu compliance in these schools, we continue to believe that our June 2013 recommendation to USDA (to provide some flexibility to SFAs with such schools) has merit and should be fully implemented. We provided a draft of this report to the U.S. Department of Agriculture for review and comment. On September 1, 2015, the FNS Director of the Program Monitoring and Operational Support Division, Child Nutrition Programs, and other FNS officials provided us with their oral comments. The officials stated that they generally agreed with the report findings. However, we discussed our previous recommendation that USDA provide school districts with flexibility to help them comply with the calorie requirements, given the lack of overlap in the calorie ranges for lunches served to students in grades 6-8 and 9-12 in schools with both grade groups. As stated in this report, we again found that some SFAs and states with schools that include students from both grade groups were taking varied approaches to address the lack of overlap in the calorie ranges for school lunch, approaches that were inconsistent with USDA requirements. Officials noted that USDA believes it is important to maintain the scientifically based, age-appropriate calorie ranges and is unlikely to change the calorie requirements. However, they noted that our findings show an inconsistency in state oversight of districts with such schools and indicate a continued need for clarification. The officials said that they intend to continue providing menu planning guidance and technical assistance to states and districts to help them comply with these requirements.
We continue to believe that additional flexibility from USDA would assist school district efforts to comply and is consistent with the Institute of Medicine’s suggestion that USDA and states work together to find a solution in these schools, though we appreciate that additional technical assistance from USDA may also help achieve this goal. FNS also provided technical comments on the draft report, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. To assess trends in school lunch and breakfast participation, we analyzed USDA’s national data on meals served in the National School Lunch and School Breakfast Programs from school year 2000-2001 through school year 2013-2014. We used the same methodology to assess trends in participation as we had in our prior report on the initial implementation of school lunch changes. Each month, states report to USDA on the FNS-10 form the number of lunches and breakfasts served by category of student—free, reduced-price, and paid—as well as average daily lunches and breakfasts served to all students. These data are used to determine federal reimbursement payments to states. Additionally, in October of each school year, states report to USDA the total number of students enrolled in schools with the National School Lunch and School Breakfast Programs. Although USDA does not collect additional data on the number of students participating in the programs each month, the department uses the lunch and breakfast data it collects to estimate the number of students participating in the programs. Specifically, USDA adjusts the data on average daily lunches and breakfasts served each month upward to account for students who participated in the programs on fewer than all days in the month. To make this adjustment, USDA uses an estimate of the proportion of students nationwide who attend school daily. To analyze participation in the National School Lunch and School Breakfast Programs, we reviewed USDA’s data on meals served and students enrolled, as well as the department’s methodology for determining student participation, and determined the data and the method to be sufficiently reliable for the purposes of this report. Specifically, we interviewed USDA officials to gather information on the processes they use to ensure the completeness and accuracy of the school lunch and breakfast data, reviewed related documentation, and compared the data we received from the department to its published data. To determine school year participation from these data, both overall and by free, reduced-price, and paid categories, we relied on 9 months of data—September through May—for each year.
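To make the adjustment described above concrete, the following is a minimal sketch in Python of the calculation USDA's method implies. The attendance factor and meal counts shown are hypothetical values chosen for illustration; they are not USDA's actual figures, and the function name is ours, not part of any USDA system.

# Minimal sketch of the participation estimate described above.
# ATTENDANCE_FACTOR is a hypothetical stand-in for USDA's estimate
# of the share of enrolled students who attend school on a given day.

ATTENDANCE_FACTOR = 0.93  # hypothetical nationwide daily attendance rate

def estimated_participants(avg_daily_meals_served: float) -> float:
    """Adjust average daily meals served upward to approximate the
    number of students participating in the program that month."""
    return avg_daily_meals_served / ATTENDANCE_FACTOR

# Example with a hypothetical month of 28.3 million average daily
# lunches served: 28,300,000 / 0.93 is roughly 30.4 million students.
print(f"{estimated_participants(28_300_000):,.0f}")

Because the attendance factor is less than 1, dividing by it raises the meals-served figure, which is the upward adjustment the methodology describes.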
To understand the scale and scope of assistance USDA has provided to states and SFAs, we analyzed guidance memos USDA issued from October 2013 through May 2015. This period allows for a seamless continuation of the analysis we conducted for the previous study on implementation of the new nutrition requirements for school lunch, which reviewed guidance issued from January 2011 through September 2013. Our intent was to continue our longitudinal review of USDA-issued guidance addressing implementation of the updated nutrition requirements, as well as paid lunch equity, and we used the same methodology that we used for the prior study. Specifically, we reviewed all guidance memos issued to states during this time period and further analyzed those that addressed the new requirements for the content of school meals and competitive foods, including related issues such as food procurement and state review of SFA compliance with the requirements, as well as those addressing the paid lunch equity requirements. These memos included the department’s policy and technical assistance memos, as well as other relevant guidance memos that were not designated in either of those categories. For guidance memos that were released in multiple versions, we considered each version to be a separate piece of guidance. We counted the number of digital pages in each guidance document, including attachments. In the case of spreadsheet files, we counted each worksheet within the file as a single page. We did not conduct an independent legal analysis of these guidance memos. To gather information from the local level on implementation of the new nutrition requirements for school meals and competitive foods, we contacted the same eight school districts across the country that we had visited as part of our prior study of implementation of the new school lunch requirements in school year 2012-2013. From December 2014 through March 2015, we visited Carlisle Area School District (PA), Chicago Public Schools (IL), Coeur d’Alene School District (ID), Fairfax County Public Schools (VA), Irving Independent School District (TX), Mukwonago Area School District (WI), and Spokane Public Schools (WA). For the eighth district—Caddo Parish Public Schools (LA)—we gathered information over the phone from the SFA director and officials from two schools, as our on-site visit to the district was prevented by weather-related school closings. We selected these school districts because they provided variation across geographic location, district size, and certain characteristics of the student population and district food services. For example, the proportion of students eligible for free and reduced-price lunches and the racial and ethnic characteristics of the student population varied across the districts selected. Further, we selected districts with different food service approaches, including some that generally prepared school meals in one central kitchen before delivering them to schools, some that prepared meals in kitchens on-site in each school, and others that used alternative approaches for food preparation. Six of the school districts we contacted managed their own food service operations, while two districts contracted with food service management companies. We relied on the U.S. Department of Education’s Common Core of Data, which provides information on public schools, to ensure selected districts met several of our criteria.
As a result, all of the districts we selected for site visits were public, although non-profit private elementary and secondary schools, as well as residential child care institutions, also participate in the National School Lunch Program and the School Breakfast Program. In each of the districts we visited, to gather information on local-level implementation of the new nutrition requirements, we interviewed the SFA director, as well as other key district-level SFA staff and food service staff in two schools. During these interviews, we collected information about lunch and breakfast participation trends; challenges, if any, implementing the new meal content requirements; challenges, if any, implementing the new nutrition standards for competitive foods; and USDA and state assistance with the changes. To select the schools we visited in each district, we worked with the SFA director to ensure the schools included students of differing grade levels. This allowed us to observe any relevant differences in students’ reactions to the new meal and competitive food requirements. In each school we visited, we observed breakfast and lunch service, as well as competitive food sales—including students’ food selections, consumption, and plate waste—and, when feasible, interviewed students and school staff to obtain their thoughts on the changes. We also interviewed the eight state child nutrition program directors overseeing these districts to gather information on statewide lunch and breakfast participation trends; SFA challenges, if any; and USDA and state assistance with implementation of the changes. Contacting these districts and states a second time provided a short-term longitudinal perspective on challenges related to implementation of the phased-in changes. However, we cannot generalize our findings from SFAs, districts, and states beyond those that we contacted. To gather additional information, we interviewed various school nutrition stakeholder groups, including subject matter experts, professional organizations, and industry representatives. We selected groups that were involved with school meals, had an interest in children’s nutrition, or were involved with competitive food snacks and beverages or school fundraisers. These included the School Nutrition Association (SNA), the National PTA, the Center for Science in the Public Interest, and other members of the National Alliance for Nutrition and Activity. We also spoke with the American Beverage Association, the Snack Food Association, and a group of industry officials who are also members of SNA. In addition to the contact named above, Rachel Frisk (Assistant Director), Dan Meyer (Analyst-in-Charge), Luke Baron, Sara Pelton, and Christine San made key contributions to this report. Also contributing to this report were Divya Bali, James Bennett, Jessica Botsford, David Chrisinger, Aimee Elivert, Kathy Leslie, Theresa Lo, Jean McSween, Steve D. Morris, Lorin Obler, and Lindsay Read.
The Healthy, Hunger-Free Kids Act of 2010 required USDA to update nutrition standards for school lunches and breakfasts and add standards for other food sold in schools, known as competitive foods. In response, USDA set new nutrition requirements, including limits on calories, sodium, and fats. Previously, GAO reported on the implementation of changes to school lunches in school year 2012-2013. Since then, additional requirements for lunches have taken effect, as well as new requirements for breakfasts and competitive foods. GAO was asked to review implementation of the nutrition changes to school food. GAO reviewed (1) recent trends in school meals participation, (2) challenges SFAs faced in implementing the new requirements for school meals, (3) challenges SFAs and districts faced in implementing new requirements for competitive foods, and (4) USDA assistance in implementing the changes. GAO reviewed relevant federal laws, regulations, and guidance; analyzed federal school meals participation data from school years 2000-2001 through 2013-2014; reviewed implementation in the same eight school districts visited for the report on school year 2012-2013 lunch changes, selected to provide variation in geographic location and certain district and food service characteristics; and interviewed USDA and state officials, as well as food industry and stakeholder groups. Nationwide, participation in the National School Lunch Program declined by 1.4 million children (or 4.5 percent) from school year 2010-2011 through school year 2013-2014, to 30.4 million children. The participation rate of enrolled students also declined, from 62 to 58 percent. Seven of eight states that GAO interviewed reported that challenges with student acceptance of changes made to comply with new federal nutrition requirements contributed to the decrease. Also, four of eight states noted that recent required increases in the price of lunch may have decreased participation among some students. At the same time, nationwide participation in the breakfast program continued its trend of steady increases, which can be explained, in part, by program expansion into more schools. The U.S. Department of Agriculture (USDA), states, and the eight School Food Authorities (SFAs) GAO reviewed, which administer meal programs in school districts, reported some ongoing challenges with meal requirements; however, some SFAs noted success in certain areas. For example, five of eight SFAs described continuing challenges with plate waste, that is, students taking required foods and then not eating them. However, comments from officials in the other three SFAs, as well as GAO's mealtime observations across the two school years, suggest that plate waste may be decreasing in some SFAs. Also, five of the SFAs reported difficulty serving certain required food items in ways that appeal to students, though others reported some success. Regarding sodium, SFA, state, and food company officials expressed concerns about meeting future targets, which USDA plans to phase in over the next 8 years. To address these concerns, USDA is gathering information from SFAs and the food industry on progress toward reducing sodium levels in school meals. New requirements for competitive foods—foods sold to students in schools other than through the school meals programs—also challenged SFAs and schools during school year 2014-2015. Six of eight SFAs reported difficulty procuring items that met the new requirements, particularly at the beginning of the school year.
Also, four SFAs and two school groups selling competitive foods in the eight districts GAO reviewed reported decreased revenues due to lower student demand for products that comply with the requirements. In addition, SFA and state officials reported issues with ensuring compliance and providing oversight of these sales. To identify and help address such issues, USDA recently required states to begin including competitive foods in their periodic reviews of SFAs. Officials from five states and four SFAs reported that USDA's assistance in implementing these changes has been helpful or improving over time; however, some SFAs noted problems with the amount or clarity of the guidance. USDA has initiated efforts to assist SFAs, such as by conducting webinars on a variety of topics, including menu planning. At the same time, officials from three of eight SFAs said USDA guidance on the new requirements—comprising nearly 4,700 pages issued from January 2012 through April 2015—has been challenging to keep up with. However, according to USDA, the substantial changes to nutrition standards have already occurred, and therefore, the need for additional guidance should decrease in future years. Moreover, USDA has provided other types of assistance that help clarify the guidance, including initiatives that facilitate the sharing of best practices and provide peer mentoring. GAO is not making any recommendations.
Over the past several years, we have reported that serious breakdowns in management processes, systems, and controls have resulted in substantial waste and inefficiency in DOD’s excess property reutilization program. Our June 2002 testimony and our November 2003 report documented instances in which DOD sold to the public items such as Joint Service Lightweight Integrated Suit Technology (JSLIST) and other chemical and biological protective suits and related gear that should have been restricted to DOD use only. Our November 2003 report also identified several examples showing that, at the same time DOD declared biological equipment items in good or excellent condition excess and sold many of them to the public for pennies on the dollar, it was purchasing the same or similar items. Our May 2005 report stated that DOD reported $466 million in lost, damaged, and missing excess property from fiscal years 2002 through 2004, including property with demilitarization restrictions, such as chemical and biological protective suits, body armor, and guided missile warheads. Some of the restricted items had been sold to the public. We also reported that during fiscal years 2002 and 2003, the military services purchased at least $400 million of identical items instead of using available excess items in new and unused condition. At the time of our May 2005 report, waste and inefficiency occurred because excess property was assigned condition codes that incorrectly identified it as unusable, and DOD lacked adequate systems and processes for ensuring that excess items in A-condition were reused to avoid unnecessary purchases. We also found that DOD lacked adequate security over excess items requiring demilitarization, resulting in losses reported by DRMOs of nearly 150 chemical and biological protective suits, over 70 units of body armor, and 5 guided missile warheads. Losses reported by DLA supply depots included thousands of sensitive military items, such as weapons system components and aircraft parts. Our undercover investigators purchased several sensitive excess military equipment items that were improperly sold to the public at DOD liquidation sales. These items included 3 ceramic body armor inserts identified as small arms protective inserts (SAPI), which are the ceramic inserts currently in demand by soldiers in Iraq and Afghanistan; a time selector unit used to ensure the accuracy of computer-based equipment, such as global positioning systems and system-level clocks; 12 digital microcircuits used in F-14 Tomcat fighter aircraft; guided missile radar test sets used to check the operation of the data link antenna on the Navy’s Walleye (AGM-62) air-to-ground guided missile; and numerous other electronic items. In instances where DOD required an EUC as a condition of sale, our undercover investigator was able to defeat the screening process by submitting bogus documentation and providing plausible explanations for discrepancies in his documentation. We identified at least 79 buyers in 216 sales transactions involving 2,669 sensitive military items that DOD’s liquidation contractor sold to the public between November 2005 and June 2006. We are referring information on these sales to the appropriate federal law enforcement agencies for further investigation.
Posing as DOD contractor employees, our investigators also entered DRMOs in two east coast states and obtained about $1.1 million in excess military items that required demilitarization, as well as several other items that are currently in use by the military services. DRMO personnel even helped us load the items into our van. These items included 2 launcher mounts for shoulder-fired guided missiles, an all-band antenna used to track aircraft, 16 body armor vests, body armor throat and groin protectors, 6 circuit card assemblies used in computerized Navy systems, and 2 Palm V personal data assistant (PDA) organizers. Using a fictitious identity as a private citizen, our undercover investigator applied for and received an account with DOD’s liquidation sales contractor. The undercover investigator was then able to purchase several sensitive excess military items that were being improperly sold to the public. During our undercover purchases, our investigator engaged in numerous conversations with liquidation sales contractor staff during warehouse inspections of items advertised for sale and with DRMS and DLA Criminal Investigative Activity (DCIA) staff during the processing of our EUCs. On one occasion, our undercover investigator was told by a DCIA official that information provided on his EUC application did not match official data and that he had no credit history. Our investigator responded with a plausible story and submitted a bogus utility bill to confirm his mailing address. Following these screening procedures, the EUC was approved by DCIA, and our undercover investigator was able to purchase targeted excess military items. Once our initial EUC was approved, our subsequent EUC applications were approved based on the information on file. The following discussion presents the case study details of our undercover purchases of sensitive excess military items that should have been destroyed when no longer needed by DOD and should not have been sold to the public. Although these items had a reported acquisition cost of $461,427, we paid a liquidation sales price of $914 for them—less than a penny on the dollar. Small arms protective insert. In March 2006, our undercover investigator purchased 3 ceramic body armor inserts identified as small arms protective inserts (SAPI), which are the ceramic inserts currently in demand by soldiers in Iraq and Afghanistan. SAPI are designed to slide into pockets sewn into the front and back of military vests in order to protect the warfighter’s chest and back from small arms fire. The SAPI had been improperly included in a batch lot of items that did not require demilitarization. The batch lot reportedly contained 609 items, including shelter half-tents, canteens and canteen covers, small tools, first aid pouches, insect nets, barracks bags and waterproof bags, small arms cases, miscellaneous field gear, and the SAPI. We paid $129 for the batch lot, which had a reported acquisition cost of $1,471. The SAPI have a demilitarization code of D, which requires them to be destroyed when no longer needed by DOD rather than sold to the public. Figure 1 shows a photograph of one of the SAPI that we purchased. Time selector unit. In March 2006, our undercover investigator purchased an excess DOD time selector unit used to ensure the accuracy of computer-based equipment, such as global positioning systems and system-level clocks.
According to our Chief Technologist, this technology is important because it prevents users in the battlefield from exposing their position in order to get timing signals from outside sources. We paid $65 for the time selector unit, which had an original acquisition cost of $343,695. Also, although the unit was listed as being in F7 condition (unserviceable, reparable), it appeared to be in working order. The time selector unit had a demilitarization code of D, which required it to be destroyed when no longer needed by DOD. The unit also had a FedLog controlled inventory item code (CIIC) of 7, which indicates it is a classified item that requires protection in the interest of national security, in accordance with DOD 5200.1-R, Information Security Program. Although the link on the national stock number (NSN) included on DOD’s liquidation contractor’s Internet sale Web site showed this item was assigned a demilitarization code of D, it was sold to the public as a trade security controlled item—demilitarization code B. As such, we were required to complete an application and obtain an approved EUC. Our undercover investigator submitted bogus information on his EUC application. A DCIA official contacted our undercover investigator and told him that the information on his application did not match official data and that he had no credit history. After our investigator responded with a plausible story and submitted a bogus utility bill to document our mailing address, our EUC for the time selector unit was approved in April 2006. Figure 2 shows a photograph of the excess DOD time selector unit we purchased. Digital microcircuits. Our undercover investigator purchased a total of 82 excess DOD digital microcircuits, including 12 microcircuits used on the F-14 Tomcat fighter aircraft. Because of their sensitive technology, the microcircuits had a demilitarization code of D, which requires their total destruction when they are no longer needed by DOD. The 12 microcircuits also had a CIIC of 7, which indicates they are classified items that require protection in the interest of national security, in accordance with DOD 5200.1-R. In violation of DOD demilitarization policy for D-coded items, the microcircuits were improperly included in a batch lot with several other electronic items that did not require demilitarization. Further, only 12 of the 82 demilitarization code D microcircuits that we purchased were listed on the liquidation sale advertisement. We paid approximately $58 for the entire batch lot, which included a total of 591 items with a reported acquisition cost of $112,700. Because several items in the batch lot had demilitarization codes that designated them as trade security control items restricted by the U.S. Munitions List or the Commerce Control List of the U.S. Department of Commerce, an EUC was required for approval of our purchase. Our EUC for the digital microcircuits was approved in May 2006 based on our bogus information already on file. Figure 3 shows an enlarged photograph of one of the microcircuits that was improperly sold to our undercover investigator. Guided weapon radar test sets. Two guided weapon radar test sets were included in the batch lot with the digital microcircuits that our undercover investigator purchased from DOD’s liquidation sales contractor in April 2006.
The test sets, which were advertised for sale as radar test sets, are used to check the operation of the data link antenna on the Navy’s Walleye (AGM-62) air-to-ground guided missile delivered by the F/A-18 Hornet fighter aircraft. The Walleye is designed to deliver a self-guided high-explosive weapon from an attack aircraft to a surface target. Because of their sensitive technology, the test sets have a demilitarization code of B, which requires an EUC for trade security purposes. Figure 4 shows a photograph of the guided weapon test sets that we purchased and obtained using bogus EUC documentation. Universal frequency counter. The new, unused universal frequency counter purchased by our undercover investigator was manufactured (initially calibrated) in February 2003. DOD awarded a contract to Fluke Corporation in 2002 for 67 of these items, which are designed to count the speed at which an electrical system fluctuates. According to a manufacturer official, this item’s military application is to ensure the frequency of communication gear is running at the expected rate. The universal frequency counter has a demilitarization code of B, which requires trade security control under the U.S. Munitions List. We paid a total of $475 for this item, which had a reported acquisition cost of $1,685. In April 2006, when we purchased the universal frequency counter, DOD’s liquidation sales contractor sold a total of 15 of these items for $5,506, or about $367 per unit. The 15 items had a reported total acquisition value of $25,275, or $1,685 per unit. The bogus paperwork that we submitted with our EUC application was approved by DCIA in May 2006. Figure 5 shows a photograph of the unit that we purchased. Directional coupler. In March 2006, our undercover investigator purchased an excess military item advertised as a directional coupler from DOD’s liquidation sales contractor. We paid $186 for the sales lot, which contained a total of 8 electronic equipment and supply items with a listed acquisition cost of $1,200. According to FedLog, the directional coupler advertised had an actual acquisition cost of $1,876. This directional coupler is used in the F-14 Tomcat fighter aircraft to monitor, measure, isolate, or combine electronic signals. Because of its technology, this directional coupler has a demilitarization code of D, which required it to be destroyed when no longer needed by DOD. The directional coupler also had a CIIC of 7, which indicates it is a classified item that requires protection in the interest of national security, in accordance with DOD 5200.1-R. However, after receiving the item, we discovered that it was not the item identified by the national stock number in the sales advertisement. As a result, it appears that DOD not only lost accountability over the actual item identified in its excess property inventory, but also advertised and recorded a public sale of a sensitive military item on the U.S. Munitions List, which was required to be disposed of by destruction in accordance with DOD demilitarization policy. We observed numerous sales of additional excess sensitive military items that were improperly advertised for sale or sold to the public, including fire control components for weapon systems, body armor, and weapon system components. The demilitarization codes for these items required either key point or total destruction rather than disposal through public sale. Although we placed bids to purchase some of these items, we lost to higher bidders.
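The disposition rules attached to the demilitarization codes discussed in these case studies lend themselves to a simple automated check. The following is a minimal sketch in Python of such a check, based only on the code meanings described in this report; the rule table, field names, and function are hypothetical illustrations, not part of any actual DOD or contractor system.

# Minimal sketch of a disposition check using the demilitarization
# code meanings described in this report. All names are hypothetical.

# What each code requires when an item is no longer needed by DOD:
DEMIL_RULES = {
    "D": "total destruction; public sale prohibited",
    "C": "removal/demilitarization of key points or lethal parts",
    "E": "key point or total destruction",
    "B": "trade security control (U.S. Munitions List); EUC required",
    "Q": "trade security control (Commerce Control List)",
}

def improper_public_sale(demil_code: str, euc_approved: bool = False) -> bool:
    """Return True if selling the item to the public would violate
    the disposition rule associated with its demilitarization code."""
    if demil_code in ("D", "C", "E"):
        return True                  # destruction or demil required first
    if demil_code in ("B", "Q"):
        return not euc_approved      # sale allowed only under trade controls
    return False                     # other codes: no restriction modeled here

# Examples: a code D item offered at public auction is always improper,
# while a code B item is improper only if sold without an approved EUC.
print(improper_public_sale("D"))           # True
print(improper_public_sale("B", True))     # False

Under these assumptions, each of the code D items described above, such as the SAPI and the directional coupler, would have been flagged before any public sale.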
We identified at least 79 buyers in 216 public liquidation sales transactions involving 2,669 sensitive military items. We are referring these sales to federal law enforcement agencies for further investigation and recovery of the sensitive military equipment. The following discussion highlights the details of sales of sensitive military equipment items that we observed or targeted for purchase but did not obtain because we were outbid during the respective sales auctions. Optical fire control items. Our investigative team identified a January 2006 sale of excess U.S. Army Armament Command optical instrument prisms and optical lenses. DOD data showed that these optical instruments are components of the fire control sighting mechanism used in the M-901A Improved Armored Anti-tank vehicle. The M-901A fires the TOW 2 series missiles. Our Chief Technologist advised us that both the prisms and lenses are high-quality optical sighting equipment used in the fire control system of the M-901A. We made an undercover visit to one of DOD’s liquidation contractor sales facilities to inspect the prisms in January 2006. Our inspection of the items listed for sale disclosed that the property label on the boxes listed 11 optical instrument prisms with an acquisition cost of $93,093. Although the demilitarization code of Q listed on the property label for the prisms identified them as requiring trade security control as an item on the Commerce Control List, the NSN listed for the prisms in fact corresponded to a demilitarization code of D, which required their total destruction when no longer needed by DOD. Upon further inspection, we found that the items labeled as prisms were in sealed manufacturer packages that listed them as optical instrument lenses, not prisms. The NSN associated with the 11 lenses indicated that they had a total acquisition cost of $1,859 and a demilitarization code of D, requiring their total destruction rather than disposal by public sale. The mislabeling of these items indicates that DOD may have lost accountability over both the prisms and the lenses. Both the prisms and the lenses have a CIIC of 7, which indicates they are classified items that require protection in the interest of national security, in accordance with DOD 5200.1-R. We bid $550 for the lenses and lost to a higher bidder, who paid $909 for them. Figure 6 is a photograph of one of the boxes labeled as containing prisms that actually contained lenses. Body armor. Our investigative team also identified a March 2006 liquidation sale of body armor fragmentation vests. Upon our visit to the sales warehouse, we identified a total of four body armor fragmentation protective vests in two separate sales lots. According to the NSN, all of the items sold had a demilitarization code of E, which required either key point or total destruction of the items when no longer needed by DOD. We did not bid on this sale, but have included it in our referrals to federal law enforcement agencies for follow-up investigations. Figure 7 shows a photograph of the actual body armor vest that we observed for sale in March 2006. During our undercover operations, we also noted 13 advertised sales events involving 179 items that were subject to demilitarization controls in which the items were not sold. In 5 of these sales, involving 113 sensitive military parts, it appears that DOD or its liquidation sales contractor caught the error in demilitarization codes and pulled the items from sale.
One of these instances involved an F-14 fin panel assembly that we had targeted for an undercover purchase. During our undercover inspection of this item prior to sale, a contractor official told our investigator that the government was in the process of changing demilitarization codes on all F-14 parts and it was likely that the fin panel assembly would be removed from sale. Of the remaining 8 sales lots containing 66 sensitive military parts, we could not determine whether the items were not sold because DOD or its contractor caught the demilitarization coding errors or because minimum bids were not received during the respective sales events. Our investigators used publicly available information to develop fictitious identities as DOD contractor personnel and enter DRMO warehouses (referred to as DRMO A and DRMO B) in two east coast states on separate occasions in June 2006 to requisition excess sensitive military parts and equipment valued at about $1.1 million. Our investigators were able to search for and identify excess items without supervision. In addition, DRMO personnel assisted our investigators in locating other targeted items in the warehouse and loading these items into our van. At no point during either visit did DRMO personnel attempt to verify with the actual contractor that our investigators were, in fact, contractor employees. During the undercover penetration, our investigators obtained numerous sensitive military items that were required to be destroyed when no longer needed by DOD to prevent them from falling into the wrong hands. These items included two guided missile launcher mounts for shoulder-fired missiles, several types of body armor, an all-band antenna used to track aircraft, six circuit card assemblies used in Navy computerized systems, a digital signal converter used in naval electronic surveillance, and two Palm V personal digital assistants (PDAs) that were certified as having their hard drives removed. Shortly after leaving the second DRMO, our investigators received a call from a contractor official whose employees they had impersonated. The official had been monitoring his company's requisitions of excess DOD property and noticed transactions that did not appear to represent activity by his company. He contacted personnel at DRMO A, obtained the phone number on our excess property screening letter, and called us. Upon receiving the call from the contractor official, our lead investigative agent explained that he was with GAO and that we had been conducting a government test. The following discussion presents the details of our case study requisitions of sensitive military items we obtained during our penetration of the first east coast DRMO. Guided missile launcher mounts. Posing as DOD contractor employees, our undercover investigators entered DRMO A in June 2006 and requisitioned two excess DOD shoulder-fired guided missile launcher mounts with a total reported acquisition cost of $6,246. The missile launcher mounts provide the electrical connection between the round and the tracker and contain a remote firing mechanism for the wire-guided Dragon missiles. Although the Dragon has been replaced by newer technology missiles, it is a man-portable, shoulder-fired, medium antitank weapon system that can defeat armored vehicles, fortified bunkers, concrete gun emplacements, and other hardened targets.
Under department demilitarization policy, missile launcher mounts have a demilitarization code of C, which requires removal and/or demilitarization of installed key point(s) or lethal parts, components, and accessories to prevent them from falling into the wrong hands. The missile launcher mounts also have a CIIC of 7, which indicates they are classified items that require protection in the interest of national security, in accordance with DOD 5200.1-R. Figure 8 shows a photograph of one of the guided missile launcher mounts obtained by GAO. Kevlar body armor fragmentation vests. Our undercover investigators obtained six Kevlar body armor fragmentation vests with a total reported acquisition cost of $2,049 from DRMO A during our June 2006 security penetration. This body armor has a woodland camouflage pattern and was designed for use by ground troops and parachutists. Although the Kevlar fragmentation vest has been replaced by newer technology, it is still considered a sensitive military item and has a demilitarization code of E, which identifies it as critical items/materiel determined to require demilitarization, either key point or total destruction. The Kevlar fragmentation vests also have a CIIC of 7, which indicates they are classified items that require protection in the interest of national security, in accordance with DOD 5200.1-R. Figure 9 shows a photograph of one of the fragmentation vests obtained during our undercover penetration. Digital signal converter. During the undercover penetration at DRMO A, our investigators also obtained a DOD digital signal converter with a reported acquisition cost of $882,586. The digital signal converter is used as part of a larger surveillance system on the Navy's E2C Hawkeye early warning and control aircraft. Under department demilitarization policy, this digital signal converter has a demilitarization code of D, which requires it to be destroyed when no longer needed by DOD. This signal converter also has a CIIC of 7, which indicates it is a classified item that requires protection in the interest of national security, in accordance with DOD 5200.1-R. Figure 10 shows a photograph of the digital signal converter our investigators obtained from DRMO A. All-band antenna. Our undercover investigators identified and requisitioned a new, unused all-band antenna during their June 2006 security penetration at DRMO A. According to manufacturer information, the antenna is a high-powered portable unit that is used by the Air Force to track aircraft. The antenna can be tripod-mounted or mounted on a portable shelter. The new, unused all-band antenna, which was purchased by DOD in 2003, had a reported acquisition cost of $120,000. A manufacturer representative told our investigator that this antenna is currently in production. Under department demilitarization policy, this all-band antenna has a demilitarization code of D, which requires it to be destroyed when no longer needed by DOD. This antenna also has a CIIC of 7, which indicates it is a classified item that requires protection in the interest of national security, in accordance with DOD 5200.1-R. Figure 11 shows a photograph of the all-band antenna obtained during our undercover security penetration at DRMO A. Posing as employees of the same DOD contractor whose identity we used during our June 2006 penetration at DRMO A, our investigators entered DRMO B a day later for the purpose of testing security controls at that location.
DRMO officials appeared to be unaware of our security penetration at DRMO A the previous day. During the DRMO B undercover penetration, our investigators obtained the following items, most of which had demilitarization requirements. Body armor fragmentation vests. Our undercover investigators obtained 10 body armor fragmentation vests with a total reported acquisition cost of $290 from DRMO B. Although the protective capability of this body armor has been superseded by newer technology, it would still provide firearm protection to terrorists or criminals. These fragmentation vests have a demilitarization code of E, which identifies them as critical items/materiel determined to require demilitarization, either key point or total destruction. Figure 12 shows a photograph of one of the 10 fragmentation vests obtained during our undercover penetration. Throat and groin protection armor. Our undercover investigators also obtained a Kevlar throat protector and a groin protector related to the camouflage body armor. The throat protector had a reported acquisition cost of $3.35 and a demilitarization code of D, which requires it to be destroyed when no longer needed by DOD. The groin protector, which is designed to hold a ceramic insert, had a reported acquisition cost of $37.85 and a demilitarization code of D. Figure 13 shows a photograph of the throat and groin protection armor obtained during our undercover penetration at DRMO B. Circuit card assemblies. Our undercover investigators obtained six circuit card assemblies with a reported acquisition cost of $77,011 from DRMO B. The circuit card assemblies, which were turned in by the Naval Air Warfare Center, had a demilitarization code of D, which requires them to be destroyed when no longer needed by DOD. A Lockheed Martin representative, who confirmed that his company manufactured the circuit cards we obtained, told our investigator that the circuit card assemblies are used in a variety of computerized Navy systems. The circuit cards also have a CIIC of 7, which indicates they are classified items that require protection in the interest of national security, in accordance with DOD 5200.1-R. Figure 14 shows a photograph of the circuit card assemblies obtained during our undercover penetration at DRMO B. Palm V Organizer PDAs. During our undercover security penetration at DRMO B in June 2006, our investigators noticed two Palm V Organizer PDAs and accessories. The Palm PDAs had tags affixed to them that read "Certificate of Hard Drive Disposition/This certified hard drive was removed from CPU" and "Computer Casing Empty." Because PDAs do not have hard drives, after successfully requisitioning the devices, we asked our information technology (IT) security expert to test them to confirm that all sensitive information had been properly removed. Our IT expert used National Institute of Standards and Technology (NIST) utilities recommended for forensic analysis to run the tests. Based on the tests, our IT expert determined that the RAM on both devices had been wiped clean of any trace of residual data, leaving only the normal information that a user would expect to find on an unused Palm V PDA. Figure 15 shows a photograph of one of the Palm V PDAs and related accessories obtained from DRMO B.
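For readers curious what a residual-data check involves, the following is a minimal sketch of the general idea, not the NIST-recommended forensic utilities our IT expert actually used: scan a raw device image for runs of printable characters that could indicate leftover user data. The file path and length threshold are illustrative.

```python
# Naive residual-data scan over a raw device image. Illustrative only; not
# the NIST-recommended forensic utilities used in our tests.
import re
import sys

def residual_strings(image_path, min_len=6):
    """Yield (offset, text) for printable ASCII runs of at least min_len bytes."""
    with open(image_path, "rb") as f:
        data = f.read()
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
        yield match.start(), match.group().decode("ascii")

if __name__ == "__main__":
    hits = list(residual_strings(sys.argv[1]))  # e.g., python scan.py pda.img
    if hits:
        print(f"{len(hits)} candidate strings found; image may hold residual data")
    else:
        print("No residual printable data found.")
```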
Because significant numbers of new, unused A-condition excess items that are still being purchased or used by the military services are disposed of through liquidation sales, it was easy for our undercover investigator to pose as a liquidation sales customer and purchase several of these items for a fraction of what the military services pay to obtain the same items from DLA supply depots. For example, we paid $1,146 for several wet-weather and cold-weather parkas, a portable field x-ray enclosure, high-security locks, a gasoline engine that can be used as part of a generator system or as a compressor, and a refrigerant recovery system used to service air conditioning systems on automobiles. The military services would have paid a total acquisition cost of $16,300 for these items if ordered from supply inventory, plus a charge for processing their order. It was easy for us to purchase new, unused items that are in demand by the military services because of the limited scope of DOD's actions to address this problem. Our undercover investigator used a fictitious identity to obtain a DOD liquidation sales customer account and purchase several new, unused excess DOD items that the military services are continuing to order from supply inventory or use in operations. The following discussion describes examples of the new, unused excess DOD items that we purchased. Wet-weather parkas. In March 2006, our undercover investigator purchased 10 new, unused excess DOD wet-weather parkas with the manufacturer's tags still attached from DOD's liquidation sales contractor. Although Army combat units have begun using an upgraded version of the parkas, the parkas are a nondeteriorative item, and Army training units and other military services are continuing to use them in military operations. However, after the New Jersey Army National Guard turned in the unused items as excess to its needs, the parkas were transferred to DOD's liquidation contractor for sale instead of being returned to supply inventory for reissue. We paid $87 for the 10 wet-weather parkas, which had a total reported acquisition cost of $359. Figure 16 shows a photograph of one of the wet-weather parkas our undercover investigator purchased at the public liquidation sale. Cold-weather parkas. In May 2006, our undercover investigator purchased 10 excess DOD cold-weather desert camouflage parkas from DOD's liquidation sales contractor. Although the parkas were listed as being in H condition (unserviceable, condemned condition), they were advertised as new. We paid a total of $373 for these 10 parkas, which had a total reported acquisition cost of $1,468. After receiving the parkas, we noted that all of them appeared to be unused and 7 of them still had the manufacturer's tags attached. According to a Defense Supply Center, Philadelphia official, these cold-weather parkas are nondeteriorative and are currently stocked and issued to the military services. The cold-weather parkas, which were ordered in support of Operation Enduring Freedom, were turned in as excess by Al Udeid Air Base in Qatar. Instead of being returned to inventory for reissue, the new, unused parkas were transferred to DOD's liquidation sales contractor. Figure 17 shows a photograph of one of the excess new, unused parkas that we purchased. Portable field x-ray processing enclosure. In April 2006, our undercover investigator purchased a portable field x-ray processing enclosure with a reported acquisition cost of $7,235. We paid $87 for this item.
We received the x-ray enclosure in May 2006, after approval of our bogus Food and Drug Administration (FDA) certificate. DOD's liquidation sales contractor requires buyers of medical and laboratory equipment items that are subject to federal regulation to submit FDA certificates as a condition of sale. On the FDA certificate, the buyer certifies that he or she is a licensed medical practitioner or person regularly and lawfully engaged in the manufacture or refurbishing of the medical device listed and agrees to assure that items resold will not be adulterated or misbranded within the meaning of those terms in the Federal Food, Drug, and Cosmetic Act (codified at 21 U.S.C. Ch. 9). A manufacturer official told our undercover investigator that the x-ray enclosure that we purchased is manufactured and sold to DOD on an as-needed basis. The official stated that there is no shelf-life issue associated with this product. In addition, a Defense Supply Center, Philadelphia official assigned to the X-ray Equipment and Supplies/Biomedical Systems Office of the Technical, Quality, and Packaging Staff responsible for x-ray equipment and supply items advised us that the x-ray enclosure is currently used by the military services, and the Army is the primary user. The supply center official noted that the enclosure is a depot-stocked item. However, after checking the inventory system, the official told us that there were currently none of these items in stock. The supply center official confirmed that the enclosure has no shelf-life issues. At the time we purchased the x-ray enclosure, 40 identical x-ray enclosures with a reported acquisition cost of $289,400 were sold for a total liquidation sales price of $2,914. Figure 18 is a photograph of the excess DOD portable x-ray enclosure that we purchased over the Internet. The enclosure is stored in an oversized foot-locker-type container approximately 5 feet in length. High-security locks. Our undercover investigator purchased 20 new, unused high-security locks from the DOD liquidation sales contractor in April 2006. The locks, which were in the original manufacturer's boxes, had a total reported acquisition cost of $1,675, and we paid a total of $59 for them. We contacted the manufacturer, whose representative told us that his company sold DLA 100 of these locks in September 2005. The representative explained that the locks are used to secure the back bay of logistics trucks. He said that his company was not aware of any problems with the locks. A U.S. Marine Corps unit in Albany, Georgia, turned the locks in as excess, and they were not returned to inventory for reissue. At the time we purchased the 20 locks, DOD's liquidation sales contractor had advertised a total of 19 lots consisting of 480 locks for sale. Six of the 19 lots, with a reported total acquisition cost of $18,423, sold for $365. Figure 19 shows a photograph of one of the excess DOD high-security locks that we purchased in April 2006. Gasoline engine. Our undercover investigator purchased a new, unused Teledyne 4-cylinder gasoline engine in March 2006. The engine, which was manufactured in the 1990s, is part of a generator unit. It can also be used with a compressor. According to FedLog data, the engines are required to be issued until current supplies are exhausted. The item manager for this engine told our undercover investigator that DLA currently has about 1,500 of these engines in stock and they are still being issued, primarily to Army National Guard and Reserve units.
He said that the Air Force and the Marine Corps also use them. He noted that the Marine Corps ordered 4 of these engines in June 2006. We paid $355 for the gasoline engine, which had a reported acquisition cost of $3,119—the amount the Marine Corps paid for each item, plus a service charge. At the time we purchased this unit, a total of 20 identical gasoline engines with a total reported acquisition cost of $62,380 were sold for a total liquidation sales price of $6,221. Figure 20 shows a photograph of the gasoline engine that we purchased. Refrigerant recovery system. In April 2006, our undercover investigator purchased a new, unused excess DOD refrigerant recovery system, Model ST-100A. This is a portable system designed to recover and recycle R-12, R-22, R-500, and R-502 refrigerants at the rate of 2 to 3 pounds per minute. According to a manufacturer representative, the unit that we purchased is designed to recover refrigerants from small systems, such as those in automotive vehicles. We paid a total of $185 for the new, unused refrigerant recovery system, which had a reported acquisition cost of $2,445. According to a Refrigerant Recovery Systems, Inc., representative, this item is still being purchased and used by DOD. The refrigerant recovery system that we purchased was likely turned in as excess by the Army Risk Assessment Modeling System (ARAMS) Project Office located in Chesapeake, Virginia. ARAMS turned in nine identical excess recovery systems in January 2006 that appeared to have been sold during the liquidation sales event at which we made our undercover purchase. These nine refrigerant recovery systems, which had a listed acquisition cost of $22,004, sold for a total liquidation sale price of $1,140. When our undercover investigator went to pick up the refrigerant recovery system that we purchased, he found that it was stored outside and exposed to weather. As a result, the box the unit was stored in had become wet and the filters included with the unit had become soaked. Figure 21 is a photograph of the excess DOD refrigerant recovery system that we purchased. Although DLA and DRMS implemented several initiatives to improve the overall reutilization rate for excess A-condition items, our analysis of DRMS data found that the reported reutilization rate as of June 30, 2006, remained the same as we had previously reported—about 12 percent. This is primarily because DLA reutilization initiatives are limited to using available excess A-condition items to fill customer orders and to maintain established supply inventory retention levels. As a result, excess A-condition items that are not needed to fill orders or replenish supply inventory are disposed of outside of DOD through transfers, donations, and public sales, which made it easy for us to purchase excess new, unused DOD items. The disposal of items that exceed customer orders and inventory retention levels is an indication that DOD bought more items than it needed. In addition, several of the items we purchased at liquidation sales events were being ordered from supply inventory by military units at or near the time of our purchase, and for one supply-depot-stocked item—the portable field x-ray enclosure—no items were in stock at the time we made our undercover purchase, indicating continued waste and inefficiency. DLA and DRMS initiatives resulted in a reported $38.1 million in excess property reutilization savings through June 2006.
According to DLA data as of June 30, 2006, interim supply system initiatives using the Automated Asset Recoupment Program, which is part of an old legacy system, achieved reutilization savings of nearly $2.3 million since July 2005, while Business System Modernization supply system initiatives, implemented in January 2006 as promised at the June 2005 hearing, have resulted in reutilization savings of nearly $1.1 million. In addition, DRMS reported that excess property marketing initiatives implemented in late March 2006 have resulted in reutilization savings of a little over $34.8 million through June 2006. These initiatives include marketing techniques using Web photographs of high-dollar items and e-mail notices to repeat customers about the availability of A-condition items that they had previously selected for reutilization. On June 28, 2006, we briefed DOD, DLA, DRMS, and military service management on the results of our investigations. We discussed the causes of the control breakdowns we identified with regard to security of sensitive excess military equipment and provided our perspectives on ways to address the following problems. Some military units and DLA supply depots assigned incorrect demilitarization codes to excess military property items and in some cases improperly included these items in batch lots before sending them to DRMOs. DRMO personnel failed to verify the recorded demilitarization codes when they processed receipts of excess military property. The limited scope of DLA and DRMS compliance reviews is not sufficient to detect problems with incorrect demilitarization codes. DOD's excess property liquidation sales contractor failed to verify the demilitarization codes of items received and to return items requiring mutilation or destruction to the DRMO for proper disposal. The managers told us that they shared our concern about the breakdowns in security controls that allowed sensitive military items requiring demilitarization to be sold to the public. They asked us for pertinent documentation obtained during our investigations to support their follow-up inquiries and corrective action plans. We have provided this information. In addition, the managers told us that the DRMOs rely on access controls executed by the DOD installations at which the DRMOs are located to preclude access by unauthorized parties. During our briefing, we also pointed out that because the reutilization and marketing program permits public access to DRMOs and liquidation sales locations, it is most important to confirm the identities and requisitioning authority of the individuals who enter the DRMOs to screen and requisition excess property. With regard to reutilization program economy and efficiency issues, the DOD managers maintained that forecasting the correct inventory level is difficult and that some amount of excess purchasing is necessary to assure that inventory is available when needed. They also stated that there is a cost associated with retaining excess inventory for extended periods of time. We provided DOD documentation showing that the excess A-condition items that we purchased were continuing to be ordered and used by the military services at the time of our undercover purchases. Our security tests clearly show that sensitive military equipment items are still being improperly released by DOD and sold to the public, thus posing a national security risk. The sensitive nature of these items requires particularly stringent internal security controls.
Our tests, which were performed over a short duration, were limited to our observations, meaning that the problem is likely more significant than what we identified. Although we have referred the sales of items identified during our investigation to federal law enforcement agencies for follow-up, the solution to this problem is to enforce controls for preventing improper release of these items outside DOD. Further, liquidation sales of items that military units are continuing to purchase at full cost from supply inventory demonstrate continuing waste and inefficiency in DOD's excess property reutilization program. We provided a draft of our report to DOD for comment on July 10, 2006. The Deputy Under Secretary of Defense for Logistics and Materiel Readiness responded that given the time allotted to comment, the Department was not able to do a detailed review and had no comments at that time. However, the Deputy Under Secretary also stated that the department continues to implement changes to its procedures based on recommendations in our May 13, 2005, report. We are sending copies of this letter to interested congressional committees, the Secretary of Defense, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness, the Under Secretary of Defense (Comptroller), the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, the Director of the Defense Logistics Agency, the Director of the Defense Reutilization and Marketing Service, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7455 or [email protected] if you or your staffs have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix IV.

Department of Defense (DOD) property is assigned a demilitarization code to identify the required disposition of items when they are no longer needed by DOD. Demilitarization codes are contained in the Defense Demilitarization Manual, DOD 4160.21-M-1 (1995), which implements DOD policy to apply appropriate controls (e.g., restricting use to authorized parties, requiring destruction when no longer needed by DOD) over items that have a significant military technology application to prevent improper use or release of these items outside of DOD. These items include materiel that the Secretary of Defense has designated as requiring demilitarization, articles on the U.S. Munitions List (22 C.F.R. pt. 121), and certain articles subject to export controls because they are on the Commerce Control List established by the U.S. Department of Commerce (15 C.F.R. § 774, Supp. 1). Appendix 3 of the Manual provides the demilitarization codes to be assigned to federal supply items and coding guidance. The codes indicate whether property is available for reuse without restriction or whether specific restrictions apply, such as removal of classified components, destruction of sensitive military technology, or trade security control. The table below defines the DOD demilitarization codes. The Department of Defense's (DOD) condition code is a two-position alphanumeric code used to denote the condition of excess property from the supply and the disposal perspectives.
The DOD supply condition code, an alpha character in the first position, shows the condition of property in Defense Logistics Agency supply depot inventory or is assigned by the unit turning in the excess property. The General Services Administration (GSA) disposal condition code, in the second position, shows whether the property is new, used, repairable, salvageable, or scrap (see the illustrative sketch below).

Staff making key contributions to this report include Mario Artesiano, Donald L. Bumgardner, Matthew S. Brown, Paul R. Desaulniers, Stephen P. Donahue, Lauren S. Fassler, Gayle L. Fischer, Cinnimon Glozer, Jason Kelly, John Ledford, Barbara C. Lewis, Richard C. Newbold, John P. Ryan, Lori B. Ryza, Lisa Warde, and Emily C. Wold. Technical expertise was provided by Keith A. Rhodes, Chief Technologist, and Harold Lewis, Assistant Director, Information Technology Security, Applied Research and Methods.
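To make the two-position condition code described in the appendix above concrete, here is a minimal parsing sketch. Only a few code meanings are shown, paraphrased from this report (A-condition and H-condition items both appear in our case studies); the disposal-code values shown are our assumptions, and the full authoritative lists are in DOD and GSA disposal guidance.

```python
# Parse DOD's two-position condition code: supply condition letter plus
# GSA disposal condition character. Partial, paraphrased tables only; the
# authoritative definitions live in DOD and GSA disposal guidance.
SUPPLY_CONDITION = {
    "A": "serviceable, issuable without qualification",
    "H": "unserviceable, condemned",
}
DISPOSAL_CONDITION = {  # illustrative subset
    "1": "new",
    "4": "usable",
    "7": "repairable",
    "X": "salvage",
    "S": "scrap",
}

def parse_condition_code(code):
    """Split a code like 'A1' into its supply and disposal meanings."""
    supply, disposal = code[0].upper(), code[1].upper()
    return (SUPPLY_CONDITION.get(supply, "unknown supply condition"),
            DISPOSAL_CONDITION.get(disposal, "unknown disposal condition"))

print(parse_condition_code("A1"))  # a new, unused A-condition item
print(parse_condition_code("H1"))  # condemned by supply, yet new to disposal
```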
GAO's previous work found problems in security controls over sensitive excess military equipment that resulted in lost and stolen items, some of which were sold to the public, and significant waste and inefficiency in the Department of Defense (DOD) excess property reutilization program. GAO was asked to perform follow-up investigations to determine whether (1) unauthorized parties could obtain sensitive excess military equipment that requires demilitarization (destruction) when no longer needed by DOD and (2) system and process improvements are adequate to prevent sales of new, unused excess items that DOD continues to buy or that are in demand by the military services. GAO investigators, posing as private citizens to disguise their identities, purchased several sensitive military equipment items from DOD's liquidation sales contractor, indicating that DOD has not enforced security controls for preventing sensitive excess military equipment from being released to the public. GAO investigators at liquidation sales purchased ceramic body armor inserts currently used by deployed troops, a cesium technology timing unit with global positioning capabilities, a universal frequency counter, 2 guided missile radar test sets, 12 digital microcircuits used in F-14 fighter aircraft, and numerous other items. GAO was able to purchase these items because controls broke down at virtually every step in the excess property turn-in and disposal process. GAO determined that thousands of military items that should have been demilitarized (destroyed) were sold to the public. Further, in June 2006, GAO undercover investigators posing as DOD contractor employees entered two excess property warehouses and obtained about $1.1 million in sensitive military equipment items, including 2 launcher mounts for shoulder-fired guided missiles, several types of body armor, a digital signal converter used in naval surveillance, an all-band antenna used to track aircraft, and 6 circuit cards used in computerized Navy systems. At no point during GAO's warehouse security penetrations were its investigators challenged on their identity and authority to obtain DOD military property. The table below shows examples of sensitive military equipment obtained during GAO's undercover operations. GAO investigators posing as private citizens also bought from DOD's excess property liquidation sales contractor several new, unused items currently being purchased or in demand by the military services. Although military units paid full price for these items when they ordered them from supply inventory, GAO paid a fraction of this cost to purchase the same items, demonstrating continuing waste and inefficiency.
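The "fraction" above can be made concrete with simple arithmetic over the purchase figures reported in the case studies. A sketch follows; the line items and dollar amounts come straight from this report, and the totals reproduce the report's figures of $1,146 paid against about $16,300 in acquisition cost.

```python
# Recovery rates implied by the undercover purchases reported above:
# (price paid at liquidation, reported acquisition cost).
purchases = {
    "wet-weather parkas (10)":     (87.00,   359.00),
    "cold-weather parkas (10)":    (373.00, 1468.00),
    "portable x-ray enclosure":    (87.00,  7235.00),
    "high-security locks (20)":    (59.00,  1675.00),
    "gasoline engine":             (355.00, 3119.00),
    "refrigerant recovery system": (185.00, 2445.00),
}
for item, (paid, acquisition) in purchases.items():
    print(f"{item:30s} ${paid:8,.2f} of ${acquisition:9,.2f} "
          f"({paid / acquisition:.1%} of acquisition cost)")

total_paid = sum(paid for paid, _ in purchases.values())
total_acq = sum(acq for _, acq in purchases.values())
print(f"{'overall':30s} ${total_paid:8,.2f} of ${total_acq:9,.2f} "
      f"({total_paid / total_acq:.1%})")
```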
According to FAA officials, FAA's medical certification requirement was established to prevent or mitigate the effect of various medical conditions that present an undue risk to the safety of pilots, passengers, or others. While most general aviation accidents are attributed to pilot error involving a loss of aircraft control, according to information provided by NTSB, medical causes were a factor in approximately 2.5 percent of the accidents from 2008 through 2012. By ensuring that applicants meet medical standards, FAA aims to reduce the likelihood of incapacitation of a pilot due to a medical cause. Federal regulations establish three classes of medical certification that correspond to the types of operations that pilots perform. Airline transport pilots who serve as pilots in command of scheduled air-carrier operations must hold first-class medical certificates. Pilots who fly for compensation or hire generally hold second-class medical certificates. Private pilots hold third-class medical certificates. (See table 1.) Depending on their age and the class of medical certificate sought, pilots must renew their medical certificates periodically, from every 6 months to every 5 years (e.g., commercial pilots—generally those needing a first- or second-class medical certificate—must have their medical certificates updated more frequently than private pilots). After obtaining a medical certificate, and between renewal periods, pilots are prohibited from performing pilot operations when they know or have reason to know of a medical deficiency that would make them unable to perform those operations. In the fiscal year 2014 budget submission, FAA estimated that its Office of Aerospace Medicine would need about $56.1 million in funding—about 4.7 percent of the total Aviation Safety budget—to carry out its mission. To assist in the nearly 400,000 medical evaluations of pilots and new applicants each year, FAA designates medical certification authority to approximately 3,300 private physicians, or Aviation Medical Examiners (AMEs). The AMEs review applicants' medical histories and perform physical examinations to ensure that applicants meet FAA's medical standards and are medically fit to operate an aircraft at the time of their medical exam. Although AMEs are not FAA employees, they are trained in aviation medicine by FAA and entrusted to make medical eligibility determinations on behalf of FAA for the majority of applicants. To become an AME authorized to administer medical exams, a physician must complete online courses in clinical aerospace physiology and medical certification standards and procedures before attending a one-week basic AME seminar. AMEs must also complete at least 10 pilot medical exams each year and a refresher course every 3 years. All applicants for medical certificates and renewals follow a similar process. Applicants begin the medical certification process by completing Form 8500-8, Application for Airman Medical Certificate or Airman Medical & Student Pilot Certificate (medical application form), in MedXPress (online application system). For applicants with disqualifying medical conditions or those who do not meet FAA's medical standards, the AME must defer the applicant to FAA to authorize a special issuance. The special issuance process may require additional medical information and evaluations from, for example, a primary care physician or medical specialist.
Also, a special issuance may be subject to operational limitations for safety reasons or may be valid for a shorter time period than an unrestricted medical certificate. As a provision of the special issuance, FAA may authorize AMEs to make future medical determinations for the applicant—separate from the centralized special issuance process—under the AME Assisted Special Issuance (AASI) process. Alternatively, if FAA determines that an applicant's medical condition is static and non-progressive and has found the applicant capable of performing pilot duties without endangering public safety, FAA may grant a Statement of Demonstrated Ability (SODA) to the applicant, which does not expire and authorizes AMEs to make future medical determinations for the applicant without requiring the applicant to go through the special issuance review process. According to FAA officials, pilot medical standards were developed to help manage safety risk. FAA's current medical standards have been codified in federal regulation since March 19, 1996. The regulations set out 15 medical conditions that are specifically disqualifying. According to FAA, medical conditions identified during an evaluation that are not specifically listed as disqualifying, but that do not meet the general medical standard regarding safe performance of duties and exercise of privileges, are also disqualifying. (See app. II for a summary of selected FAA medical standards.) According to FAA officials, the standards and the medical certification process were developed to manage the risk of an aircraft accident or incident by identifying applicants with medical conditions that could potentially incapacitate them in the flight environment or during critical take-off and landing periods. FAA takes steps designed to ensure that its medical policies and procedures are consistent with current medical and aeromedical practice, and these steps result in periodic updates to its medical policies. The Federal Air Surgeon establishes medical policies and medical certification procedures that are published in internal guidance for FAA's Office of Aerospace Medicine and for AMEs in the Guide for Aviation Medical Examiners (AME Guide). The agency uses several techniques to update policies: First, the Aeromedical Standards and Policies Branch develops policy recommendations for the Federal Air Surgeon, which address medical conditions, medication use, and medical procedures. According to FAA officials, medical policy review is a continuous process influenced by several factors, which include (1) announcements of significant new developments in the medical literature; (2) medical appeals to the Federal Air Surgeon; (3) announcements and alerts by the Food and Drug Administration; (4) inquiries by aviation stakeholder groups and pilot advocacy groups; (5) aircraft accidents or events; (6) inquiries by Office of Aerospace Medicine personnel and AMEs; and (7) communications with international aviation authorities and medical advocacy groups, among other things. Second, according to FAA officials, the agency refers dozens of individual cases annually for independent review by experts in a wide variety of medical specialties, such as cardiology, psychology, and neuropsychology. FAA officials stated that implicit in the process of reviewing each case is consideration of changes to current policy based on current medical practice.
FAA also periodically uses independent medical experts to evaluate its medical policies, particularly with regard to cardiovascular conditions, which were present in more than one-third of the applicants who received special issuances in 2012. In January 2013, for example, FAA hosted a cardiology roundtable to review FAA's policies with regard to cardiovascular conditions and to suggest updates to the policies, if necessary. The roundtable's suggested policy changes were presented to the Federal Air Surgeon, who approved several of them. However, FAA officials have said that they do not convene such roundtables frequently due to time and cost constraints. Third, the results of CAMI's aerospace medical and human factors research have been used to inform changes to FAA guidance and policies. In particular, CAMI's aerospace medical research focuses on the biomedical aspects of flight, including studies on aviation safety associated with biomedical, pharmacological, and toxicological issues. For example, CAMI's research on sedating medication influenced guidance in this area. According to FAA officials, a review of accident investigation data showed that many pilots involved in accidents were using over-the-counter and prescription sedative medications. As a result, FAA, in coordination with the aviation industry, issued guidance extending the length of time a pilot should wait after using these medications before operating an aircraft. A letter jointly signed by FAA and all major aviation advocacy groups was sent to all pilots and published on the FAA website and in various public and private publications advising pilots to comply with the new guidance. Fourth, CAMI's library allows research staff to collect and review academic journals on aviation medical issues, general medical research, engineering, management, and other general topics. CAMI researchers have also published approximately 1,200 aerospace medicine technical reports on topics including, for example, pilot age, alcohol and substance abuse, fatigue, psychology, and vision (available at http://www.faa.gov/data_research/research/med_humanfacs/oamtechreports/). FAA's policy branch periodically reviews this and other medical literature, which FAA officials say can also result in a possible policy revision. In addition, FAA has recently begun analyzing aviation accident information to develop a predictive model based on historic data of medical conditions that have been identified as contributing factors to aircraft accidents. The officials stated that they plan to use the model as a data-driven guide to help inform how they determine the relative risk of various medical conditions. FAA officials noted that the agency has begun this work as part of a broader Safety Management Systems (SMS) initiative that seeks to further enhance safety by shifting to a data-driven, risk-based oversight approach. All aerospace medical experts we interviewed generally agreed that FAA's medical standards were appropriate, and most (16 of 20) said that the standards should be applied to both commercial and private pilots. Some of these experts said that standards should apply equally to private pilots because they share airspace with commercial pilots or because private pilots typically do not fly with a copilot—an important safety feature for commercial flight operations.
In addition, although some of the experts (7 of 20) suggested no changes to FAA's policies, many of the experts (13 of 20) identified at least one medical standard for which they considered FAA's policies to be either too restrictive or too permissive. A restrictive policy might lead FAA to deny certification to an applicant who may be sufficiently healthy to safely fly a plane, or may result in FAA requiring a more thorough medical evaluation than the experts considered necessary. A permissive policy, on the other hand, might lead FAA to certify an applicant with health issues that could impair his or her ability to safely fly a plane, or may result in FAA not completing as thorough a medical evaluation as the experts considered necessary. Although expert opinions varied regarding which standards were too permissive or restrictive, neurological issues were most commonly discussed by some (9 of 20) of the experts. For example, some experts noted that the FAA medical certification requirements for applicants who use antidepressants, including selective serotonin reuptake inhibitors (SSRIs), are restrictive and onerous and may require an applicant not to fly for an extended period of time. A medical representative from the Aircraft Owners and Pilots Association (AOPA) said that FAA's policies may require a pilot using antidepressants to undergo costly cognitive studies that were viewed as medically unnecessary for milder cases of depression. Alternatively, some medical experts said that policies regarding cognitive functioning in aging pilots, traumatic head or brain injuries, and attention deficit disorders may be too permissive. An FAA official stated that the area of neurology is complex and has been somewhat difficult for AMCD due, in part, to variation in opinion as to how to assess cognitive function and when testing should be done. The agency hosted a neurology summit in 2010 that convened neurology experts to review FAA policies on neurological issues—including traumatic brain injury, migraine headaches, and neurocognitive testing—and resulted in recommendations that the Federal Air Surgeon adopted regarding migraine treatments, among other neurological conditions. Also, the Division Manager of AMCD said that the division consults with neurologists, as needed, to review the application of certification policies regarding individual applicant cases. To a lesser extent, some (5 of 20) experts had mixed views on the policies for diabetes and medical conditions related to endocrine function. Of those, three experts thought that FAA's current policies on diabetes might be too restrictive, for example, because FAA has not kept pace with medical advances and treatment options currently available to pilots. One expert noted that some commercial pilots with insulin treated diabetes mellitus (ITDM) may be medically fit to fly a plane with a special issuance if they can demonstrate that their condition is stable, just as private pilots are allowed to do. In addition, representatives from the American Diabetes Association and a member of the Regional Airline Association stated that FAA's policies for commercial pilots with ITDM have not kept current with advancements in the medical treatment of ITDM, particularly given the redundancy of having a copilot and crew in commercial aircraft to reduce the risk associated with commercial pilots with ITDM.
Conversely, two experts thought that FAA may be too permissive with regard to diabetes, citing, for example, concerns about the increase in diabetes among Americans, in general, and the potential for undiagnosed cases. FAA officials agreed that there have been improvements in the clinical care for diabetes, and the Office of Aerospace Medicine has studied the safety and efficacy of new diabetes treatments over the past several years, including the risks associated with new medications and insulin formulations. However, according to FAA officials, independent consultants—including endocrinologists and diabetes experts—have told FAA that the risk of incapacitation related to hypoglycemia has not changed regardless of advancements in treatment. All of the experts suggested ways FAA could ensure its medical standards are current, many of which were consistent with approaches FAA is already taking. For example, some of the experts (9 of 20) said FAA could review its medical standards at regular time intervals or as medical advances occur, and some (8 of 20) of the experts said FAA could review its medical standards based on evidence of the likelihood of each condition causing an accident. Some experts (5 of 20) specifically suggested FAA should convene a panel on neurology and mental health issues. FAA convened a panel on neurological issues in 2010. As previously mentioned, FAA is currently undertaking an agency-wide initiative—SMS—that seeks to further enhance safety by shifting to a data-driven, risk-based safety oversight approach. As part of this approach, FAA implemented the Conditions an AME Can Issue, or CACI, program in April 2013. The CACI program authorizes AMEs to issue medical certificates to applicants with relatively low-risk medical conditions that had previously required a special issuance from FAA. FAA developed the program by identifying medical conditions that, in most cases, did not pose a safety risk, based on FAA analysis of historic medical and accident data. Agency officials expect the program to allow more applicants to be certified at the time of their AME visit while freeing resources at FAA to focus on medically complex applicants with multiple conditions or medical conditions that may pose a greater risk to flight safety, such as applicants who have had coronary artery bypass surgery. Based on information provided by FAA, as of December 31, 2011, approximately 19 percent of all pilots reported medical conditions that may now be evaluated by their AME as a result of the CACI program. Of those pilots, about one-third—or nearly 39,000 pilots—reported no additional medical conditions, making it more likely that in the future they may be certified at the time of their AME visit, rather than through the special issuance process. Other medical conditions have been proposed for the CACI program but have not yet been approved by FAA officials. Most medical experts (18 of 20) we interviewed approved of the CACI program, and some (8 of 20) believed that FAA should continue to expand it to include additional medical conditions. Representatives of an industry association agreed and noted that by authorizing AMEs to make a greater number of medical certification decisions, AMCD officials could speed up the application process for more applicants. Medical conditions that were proposed but not yet approved for CACI include, for example, carotid stenosis, bladder cancer, leukemia, and lymphoma.
Some experts also identified medical conditions currently requiring FAA authorization for a special issuance that they believe should be considered under the CACI program. Their suggestions included, for example, non-insulin-treated diabetes, which was a factor in about 17 percent of the special issuances in 2012; sleep apnea and other sleep disorders, which were a factor in about 11 percent of the special issuances in 2012; and various forms of cancer, which were a factor in about 10 percent of special issuances in 2012. FAA officials have begun to allow AMEs to make medical determinations for applicants with certain types of cancer under the CACI program and have said that they will evaluate other medical conditions to include in the CACI program in the future. Although neurological conditions (including migraines, head trauma, stroke, and seizures) accounted for approximately 4 percent of special issuances in 2012, some experts (5 of 20) thought, as mentioned above, that FAA should convene an expert panel to re-evaluate its policies in this area. Half of the experts we interviewed also said that FAA could evaluate its medical standards based on the relative risk of incapacitation associated with various medical conditions, assessed through greater use of data. That is, with a better understanding of the likelihood of each medical condition causing a suddenly incapacitating event in flight—based on historic data of accidents and incidents—FAA could modify its risk threshold for various medical standards and policies to manage risk. As previously mentioned, FAA has begun to collect and analyze data that will help it develop a proactive approach to managing aviation medical risk; however, FAA officials told us that data from historic accidents and incidents can be difficult to obtain and link to medical causes. The officials also said that they would need to change how they code, or classify, the medical information they collect—and re-code medical information they already have—to more accurately classify medical conditions of applicants and, therefore, improve the reliability of their predictive model. Without more granular data collection on health conditions, officials said it is difficult for FAA to accurately determine the level of risk associated with various medical conditions. In addition, officials at FAA and NTSB noted that data on medical causes of accidents and incidents are likely to be incomplete because not all accidents are investigated in the same way and medical causation can be difficult to prove in light of other contributing factors. For example, an official from NTSB explained that there are different levels of medical investigations performed after accidents, depending on factors like whether or not the pilot has survived, the condition of the aircraft or severity of the crash, and the number of people impacted. On February 14, 2013, NTSB and FAA agreed to a memorandum of understanding (MOU) that will facilitate NTSB's sharing of data and matching of records on aircraft accidents and incidents with CAMI. Although most medical certification determinations are made by one of the approximately 3,300 FAA-designated AMEs at the time of an applicant's medical exam, approximately 10 percent of applications—or nearly 40,000 annually—are deferred to FAA for further medical evaluation if the applicant does not meet FAA's medical standards or has a disqualifying medical condition.
According to FAA officials, the 10 percent of applicants who are deferred require a significant amount of resources from FAA's medical certification division, which, in recent years, has experienced a backlog of special issuance applications in need of review. As of February 2014, an FAA official estimated this backlog at about 17,500 applications. FAA has not met its internal goals for responding to individuals whose applications have been deferred. Specifically, FAA has set an internal goal of 30 working days to make a medical determination or to respond to an applicant with a request for further information. However, according to FAA data, the average time it takes FAA officials to make a medical determination or request further information from an applicant has increased over the past 6 fiscal years, taking an average of approximately 45 working days—or about 9 weeks—in fiscal year 2013, and more than 62 working days in December 2013. If FAA makes multiple requests for further information from an applicant, the special issuance process can take several months or longer. Officials from AOPA stated that some applicants for private pilot medical certificates discontinue the application process after an initial denial from FAA because the applicants decide that the cost of extra medical evaluations and added time is too great to support what the applicant views as a recreational activity. However, an official from FAA noted that delays can also occur as a result of applicants who may take a long time to respond to an FAA request for further evaluation. According to AOPA, having information upfront would speed up the process by helping applicants understand FAA's additional medical requirements for a special issuance. FAA has increasingly encouraged its Regional Flight Surgeons to become more actively involved in making medical determinations for applicants seeking a special issuance. FAA officials at AMCD stated that there are several reasons for the increased processing time for applicants requiring special issuances. For example, AMCD has faced a technical issue deploying the Document Imaging Workflow System (DIWS), a web-based computer system used by AMCD to process, prioritize, and track all medical certification applications. One AMCD official noted that delays in deployment of the system have decreased productivity of the AMCD to as low as just 25 percent of normal levels. In addition, officials cited multiple backlogs throughout the division, such as in the electrocardiogram (ECG) unit, which receives up to 400 ECGs each day, and the pathology-coding unit, which may require manual coding of medical conditions to feed information into DIWS. Part of the challenge, identified in FAA's National Airspace Capital Investment Plan, is that the current medical certification systems are based on obsolete technology from the 1990s. Accordingly, technical working groups at AMCD have identified more than 50 problems and potential technological solutions to enhance their systems, including the special issuance processes, of which about 20 have been identified as high priority, including improvements to the online application system, AMCS, DIWS, and the ECG transmittal and review process.
For example, officials stated that updating DIWS to import and read electronic files would reduce the need to manually scan paper documents, and providing AMEs or applicants limited access to DIWS so they can check the status of an application could reduce the number of calls AMCD receives at its call center. As of February 2014, FAA officials stated they had received the funding they requested in June 2013 to upgrade the ECG system from analog to digital—a process that they estimate will take about 11 months to complete. In addition, FAA has not established a timeline for implementing its broader set of technology enhancements, some of which may be less contingent on resource constraints. A timeline to guide the implementation of the highest-priority enhancements would help the agency take another step toward reducing the delays and bottlenecks in the special issuance process related to FAA's technology issues. In addition to the proposed enhancements, the Office of Aerospace Medicine collaborated with the Volpe National Transportation Systems Center (Volpe Center) in 2013 to define broader challenges of the current medical certification process and develop a strategy to reengineer the existing business processes, including the online medical-certification system and its supporting information-technology infrastructure. Officials from the Office of Aerospace Medicine have said that their effort with the Volpe Center will ultimately inform their plan to replace FAA's current medical information systems with the Aerospace Medicine Safety Information System (AMSIS), which the agency plans to begin developing in fiscal year 2015. FAA officials stated that they envision several long-term positive changes that may result from AMSIS—including redesigning the online application system and form, providing applicants with information on actions to complete before they meet with their AME, and making the special issuance process more transparent, with the capacity for applicants to check the status of their applications. However, FAA officials have also identified several challenges to implementing AMSIS, including working within the confines of legal and regulatory requirements, protecting sensitive information, and obtaining the estimated $50 million needed to fund the system. One of FAA's main tools to communicate its medical standards directly to applicants, and to solicit medical information from them, is its online medical application system. While FAA also offers training and produces pamphlets, videos, and other educational material for AMEs and pilots, the online medical application system is used by all applicants to apply for a medical certificate. (See app. III for FAA's training programs and other communication tools for AMEs and pilots.) The system includes information such as the online medical-application form and instructions used by applicants to submit medical information to their AME and to FAA, and a link to the AME Guide, which contains pertinent information and guidance regarding regulations, examination procedures, and protocols needed to perform the duties and responsibilities of an AME.
We compared the online application system with select guidelines related to content, navigation, and design that are considered good practices by Usability.gov. Based on our evaluation and discussion with experts, we identified areas in which FAA might enhance the usability of the online application system by (1) providing useful information directly to applicants and (2) using links to improve how applicants navigate through the application system. Providing Additional Useful Information Directly to Applicants: According to Usability.gov, a good practice in website design includes providing useful and relevant information that is easy to access and use. Some experts (7 of 20), including four who were also AMEs, said that applicants may be unsure about medical requirements and documentation. Representatives of two aviation medical associations also said a lack of clarity can lead to delays in processing the medical certification if applicants learn during their medical examination that they must obtain additional test results or documentation from their primary care physician. Some medical experts (4 of 20) said that technological improvements would be helpful. For example, FAA could develop a Web page on its website or within the online application system with more information for applicants. In addition, two pilot associations stated that a specific Web page or website for applicants—with links to information on various medical conditions, their risks to flight safety, and additional medical evaluations that might be needed for applicants with those conditions—would be helpful. The online application system currently contains a link to the AME Guide; however, applicants may find the 334-page AME Guide—written for aviation medical examiners—difficult to navigate and understand and, therefore, may be unable to find information about specific documentation and additional medical evaluations they may need. FAA officials in the medical certification division said that providing documentation requirements to applicants could reduce certification delays, AME errors, and the number of phone calls to AMCD's medical certification call center because applicants would know what additional evaluations or documents they should get from their primary care physician before they visit their AME for a medical exam. Similarly, the FAA officials noted that applicants may not recall information they had previously reported in prior medical certificate evaluations or may not disclose their complete medical history when they see a new AME. NTSB officials stated that the AME cannot see information about any previous applications and knows only what the pilot has reported on the current application. This means that applicants must recall their entire past medical history each time they apply for a medical certificate. Additionally, according to the NTSB officials, it would be useful for pilots to access previously reported information and update only what has changed since their previous exam. As part of the more than 50 technological solutions discussed earlier that FAA has identified to enhance the special issuance process, the agency has proposed providing applicants and AMEs with access to worksheets that specify required medical documentation and with access to previously reported medical data. FAA officials stated that these changes, if made, would facilitate the flow of information among the applicant, the AME, and FAA and allow AMCD officials to do their work more efficiently.
Additionally, some experts (9 of 20) said that it would be helpful to applicants and treating physicians if FAA posted a list of banned medications. In a couple of experts' view, without a public list of banned medications, applicants may not disclose their medical treatment regimen to FAA out of fear of losing or not receiving their certification. NTSB recommended in 2000 that DOT develop a list of approved medications and/or classes of medications that may be safely used when operating a vehicle; however, DOT—including FAA—did not implement the recommendation because, in DOT's view, a list of approved medications would be difficult to maintain and would be a liability for the transportation industry if the Department approved a medication that later caused an accident. Officials from AOPA told us that the association provides an unofficial list of approved and banned medications to its members but believes that this information should be made public and provided by FAA. However, FAA states in its AME Guide that maintaining a published list of approved medications would not contribute to aviation safety because it does not address the underlying medical condition being treated. Instead, FAA's current policy prohibits AMEs from issuing medical certificates to applicants using medications that have not been on the market for at least one year after approval by the Food and Drug Administration (FDA), and FAA has recently updated its AME Guide to include a "Do Not Issue—Do Not Fly" list of several general classes of medication and some specific pharmaceuticals and therapeutic medications (available at http://www.faa.gov/about/office_org/headquarters_offices/avs/offices/aam/ame/guide/pharm/dni_dnf/). The "Do Not Issue" list names medications that are banned—meaning the AME should not issue a medical certificate without clearance from FAA—and the "Do Not Fly" list names medications that the pilot should not use for a specified period of time before or during flight, including sleep aids and some allergy medications. FAA officials said that the "Do Not Issue—Do Not Fly" list is intended to be a "living document" that they will revisit periodically. NTSB officials suggested that it would be helpful if medications that an applicant discloses on the medical application form could be automatically checked against the "Do Not Issue—Do Not Fly" list to notify the AME of the applicant's use of a medication on the list.
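NTSB's suggestion amounts to a lookup at the time the form is submitted. The sketch below illustrates one way such a check could work; the list entries, function names, and matching rules are illustrative assumptions and are not drawn from FAA's actual list or systems.

```python
# Illustrative sketch only: checks disclosed medications against a
# hypothetical "Do Not Issue--Do Not Fly" list, as NTSB suggested.
# The entries below are placeholders, not FAA's actual list.

DO_NOT_ISSUE = {"warfarin", "methadone"}        # assumed examples: AME must defer to FAA
DO_NOT_FLY = {"diphenhydramine", "zolpidem"}    # assumed examples: no flying for a set period

def screen_medications(disclosed_meds):
    """Return (do_not_issue_hits, do_not_fly_hits) for an application."""
    meds = {m.strip().lower() for m in disclosed_meds}
    return sorted(meds & DO_NOT_ISSUE), sorted(meds & DO_NOT_FLY)

issue_hits, fly_hits = screen_medications(["Zolpidem", "lisinopril"])
if issue_hits:
    print("Flag for AME: do not issue without FAA clearance:", issue_hits)
if fly_hits:
    print("Flag for AME: 'Do Not Fly' medication disclosed:", fly_hits)
```

A production version would also have to map brand names to active ingredients and account for required wash-out periods, which is one reason the report describes this only as a suggestion.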
Easier Website Navigation: Navigation is the means by which users get from page to page on a website to find and access information effectively and efficiently. According to Usability.gov, a good practice related to navigability is to use a clickable list of contents, thus minimizing scrolling. The Pilot's Bill of Rights Notification and Terms of Service Agreement—which contains a statement advising the applicant that responses may be used as evidence against the applicant, a liability disclaimer, a statement of privacy, and a Paperwork Reduction Act statement, among other statements—requires the user to scroll through what equates to nearly 10 pages of text (2,441 words over 417 lines of text), viewable through a small window that shows approximately 10 to 12 words across and four lines down at a time (see fig. 2). FAA might enhance the visibility of this information and help applicants better understand what they are agreeing to if it created a larger window with hyperlinks to help the reader navigate through the various sections of the notification and agreement. Similarly, the question-and-answer page for applicants could be enhanced by including clickable links between the questions and answers to allow readers to more easily find answers of interest to them. Another good practice, according to Usability.gov, is to design websites for popular operating systems and common browsers while also accounting for differences. According to a notification on the online application system's log-in screen, applicants are advised to use only Internet Explorer to access the system. The system functions inconsistently across other browsers such as Google Chrome, Mozilla Firefox, and Apple Safari. For example, links from the medical application form to its instructions do not work in Firefox or Google Chrome; instead, they lead the applicant back to the log-in page, causing any unsaved information to be lost. As described in the previous section, FAA officials at the medical certification division identified technological problems and potential solutions to enhance the online application system, but as of April 2014, no changes had been made. For example, the officials observed that some applicants enter the date in the wrong format, switching the order of day and month (DD/MM/YYYY, as opposed to MM/DD/YYYY), which can lead to problems when the AME imports the application. As a result, FAA officials proposed using drop-down boxes—with the name or abbreviation of each month, followed by the day and the year—to collect date information. This proposed solution is consistent with a good practice highlighted by Usability.gov: anticipating typical user errors. Additionally, the officials noted that it is not uncommon for an applicant to be logged out of a session due to inactivity, resulting in a loss of data entered during the session. To address this, FAA proposed that the online application system incorporate an auto-save feature that would be activated before the session expires—consistent with Usability.gov guidelines, which recommend warning users that a Web session may expire after inactivity—to prevent users from losing information they entered into the online application system.
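FAA's proposed drop-down boxes would prevent the day/month swap by construction, and server-side validation can catch what the form does not. The following is a minimal sketch of that validation idea; the function and field names are illustrative assumptions, not part of FAA's actual system.

```python
# Illustrative date validation that anticipates the day/month swap FAA
# officials described. Names and messages are examples, not part of
# FAA's actual system.

from datetime import date

def parse_exam_date(month, day, year):
    """Accept the month as a name or abbreviation (as a drop-down would
    supply it) and reject impossible dates rather than silently swapping
    fields."""
    months = ["january", "february", "march", "april", "may", "june", "july",
              "august", "september", "october", "november", "december"]
    names = {m[:3]: i + 1 for i, m in enumerate(months)}
    names.update({m: i + 1 for i, m in enumerate(months)})
    m = names.get(str(month).strip().lower())
    if m is None:
        raise ValueError("month must be a month name, e.g. 'Mar' or 'March'")
    # raises ValueError for impossible dates such as day 31 in February
    return date(int(year), m, int(day))

print(parse_exam_date("Mar", 14, 2014))   # 2014-03-14, unambiguous
```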
In addition to these enhancements, FAA collects some information from applicants and AMEs regarding their experience with the application process. For example, FAA operates a 24-hour call center to answer technical questions that applicants and AMEs may have about the online application system throughout the application process. FAA also has surveyed AMEs and pilots to collect information about their experience with the medical certification process. The Plain Writing Act of 2010 requires federal agencies, including FAA, to write specified types of new or substantially revised publications, forms, and publicly distributed documents in a "clear, concise, well-organized" manner. Several years before the Plain Writing Act of 2010—in 2003—FAA issued Writing Standards to improve the clarity of its communication. The Writing Standards include guidance for anyone who writes or reviews FAA documents intended for internal or external distribution. FAA has continued to make efforts in recent years to improve its employees' understanding of plain language and how to incorporate it in written documents. FAA's Plain Language Program in the Office of Communications trains employees and supports Plainlanguage.gov, a website devoted to improving communication from the federal government to the public. Although plain writing is required only for new or substantially changed government documents, and is therefore not required for the current medical application form, the goal of plain writing is to help readers find the information they need, understand what they find, and use it to meet their needs. In regard to the medical certification process, this would include helping applicants understand each question and more accurately complete the application form in the way that FAA intended. In addition, stakeholders from two pilot associations were concerned that unclear questions on the medical application form could lead to incomplete or inaccurate responses, which they said could also lead to applicants' being accused of misrepresenting themselves or falsifying information on the application form—an offense punishable by a fine of up to $250,000 and imprisonment of up to 5 years that may also result in the suspension or revocation of all pilot and medical certificates. FAA's Writing Standards recommend using active voice and active verbs to emphasize the doer of the action. Our analysis of FAA's medical application form and instructions showed that, in some cases, FAA used passive voice where active voice would make the statements clearer. According to FAA's Writing Standards, because the active voice emphasizes the doer of an action, it is usually briefer, clearer, and more emphatic than the passive voice. For example, the current statement on the medical application form, "Intentional falsification may result in federal criminal prosecution," may be clearer to the applicant if stated, "If you intentionally falsify your responses, you may be prosecuted for a federal crime," or in a similar, more direct way of notifying the applicant. However, FAA officials noted that any rewording of legal warnings or disclaimers must be approved by legal counsel. We also asked the medical experts to review the online application form. In response, many medical experts (12 of 20) we interviewed stated that certain questions can be confusing or too broad. For example, some experts said that terms like "frequent," "abnormal," or "medication" are not clearly defined and that, therefore, certain questions could generate inaccurate responses. For example, many experts (15 of 20) said that question 17a, on medication use, was unclear because, among other reasons, the reader may not know whether supplements or herbal medicines should be included. Some medical experts (7 of 20) also suggested adding items to question 18, about medical history, for areas such as cancer and sleep apnea. In 2009, NTSB recommended that FAA modify its medical application form to elicit specific information about risk factors or any previous diagnosis of obstructive sleep apnea. (See app. IV for a copy of the medical application form.) Many of the medical experts we consulted (13 of 20) further suggested simplifying the question on the form pertaining to an applicant's arrests or convictions; this question has also been examined by FAA officials. FAA's writing guidance suggests shortening sentence length to present information clearly and using bullets or active voice. In addition, FAA officials from the medical certification division used a computer program to analyze the readability of the question and found that an applicant would need more than 20 years of education to understand it.
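The report does not identify the readability program FAA used. The Flesch-Kincaid grade-level formula is one common measure and illustrates the approach; in the sketch below, the sample question is paraphrased for illustration and the syllable counter is a rough heuristic, not a production implementation.

```python
# Illustrative readability check using the Flesch-Kincaid grade-level
# formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.

import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# A long, single-sentence legalistic question scores far higher than a
# short, direct one.
print(round(fk_grade("Have you ever been convicted of any offense involving "
                     "driving while intoxicated by, while impaired by, or "
                     "while under the influence of alcohol or a drug?"), 1))
print(round(fk_grade("Do you take any medication? List each one."), 1))
```

On this scale, a score above 16 corresponds to postgraduate-level reading, consistent with FAA's finding that the arrest-and-conviction question would require more than 20 years of education.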
According to FAA officials, the agency can make changes to the medical application form for various reasons, including, for example, in response to findings or recommendations made in a report by NTSB or by the Department of Transportation Inspector General, or because of a change in medical practices resulting from advancements in medicine. Since 1990, FAA has revised the application form several times to add or remove questions, change time frames related to the questions, or clarify the questions, among other types of changes. When FAA announced in the Federal Register that it would replace its paper application form with an online application system, the agency said that the online application system would allow it to make and implement any needed or mandated changes to the application form in a timelier manner, resulting in a more dynamic form. However, agency officials noted that while they maintain a list of questions on the application form that pose problems for applicants, they do not make frequent changes, in part because of the time and resources needed to complete the lengthy public comment and Office of Management and Budget (OMB) approval processes, which, they say, can take up to 2 years. FAA officials also said that the Office of Aerospace Medicine must balance "plain language" with the requirements levied by FAA's General Counsel to make sure that the wording is legally correct and enforceable. While it will take time and resources to improve the clarity of FAA's medical application form, if the form is left unchanged, the accuracy and completeness of the medical information provided by applicants may not improve. Aerospace medical experts we interviewed generally agreed that FAA's current medical standards are appropriate, and they supported FAA's recent effort to authorize its AMEs to certify a greater number of applicants by using a data-driven approach to assessing risk through the CACI program. Expanding the CACI program, as some experts suggested, could reduce the time it takes for applicants with lower-risk conditions to become medically certified and, more importantly, allow FAA to prioritize the use of its constrained resources for medical determinations for applicants with the highest-risk medical conditions. FAA has identified approximately 50 potential technological enhancements to its computer systems that support its certification process, including adding new functionality to facilitate the process and provide applicants with more information about medical requirements. According to FAA officials, these enhancements would potentially reduce the workload at the medical certification division. Although FAA intends to eventually replace its current medical-certification computer systems with a new Aerospace Medicine Safety Information System (AMSIS), interim enhancements are expected to help FAA reduce the delays and bottlenecks currently posing challenges to the agency. FAA has not established a timeline for implementing its broader set of 50 proposed technological enhancements, some of which may be less expensive than others. A timeline to guide the implementation of the highest-priority enhancements would help the agency take another step toward reducing the delays and bottlenecks related to FAA's technology limitations. The online-application system and form that FAA uses to communicate directly to applicants contain confusing questions and instructions that do not meet FAA's own plain language guidance.
In addition, broken links and other navigability issues make the website difficult to follow. Efforts to provide applicants with useful and relevant information and to improve the clarity of the questions and instructions contained in the online application system and form could allow FAA to more clearly communicate medical requirements to applicants. These improvements could not only aid an applicant's understanding of the medical standards and requirements but also may result in more accurate and complete information provided by applicants to better inform FAA's certification decisions. To improve applicants' understanding of the medical standards and the information required to complete FAA's medical certification process, the Secretary of Transportation should direct the Administrator of FAA to (1) develop a timeline for implementing the highest-priority technological improvements to the internal computer systems that support the medical-certification process and (2) enhance the online medical-application system by clarifying instructions and questions on the medical application form and providing useful information to applicants. We provided the Department of Transportation with a draft of this report for review and comment. DOT provided technical comments, which we incorporated into the report as appropriate, and DOT agreed to consider the recommendations. We are sending copies of this report to the Department of Transportation, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix V. The objectives of this report are to provide information on (1) FAA's medical standards, policies, and certification processes, along with medical experts' views on them, and (2) steps that could be taken to promote private pilot applicants' understanding of FAA's medical requirements, including potential revisions to the medical application form. To meet these objectives, we reviewed pertinent statutes, regulations, and FAA documents regarding FAA's pilot medical certification process, standards, and application form. We also reviewed academic, trade, and industry articles; government reports; and other relevant literature. We interviewed officials from FAA and the National Transportation Safety Board (NTSB) and other stakeholders in the pilot medical certification process, including officials representing government advocacy, medical, and legal issues within the Aircraft Owners and Pilots Association (AOPA) and the Experimental Aircraft Association (EAA); the Aeromedical Advisor to the Air Line Pilots Association (ALPA); attorneys who assist pilots through the medical certification process; and representatives from the American Diabetes Association. We also received responses from the President and representatives of three member airlines of the Regional Airline Association, the Executive Director of the Aerospace Medical Association (AsMA), and the President and physician members of the Civil Aviation Medical Association (CAMA).
We also visited the Civil Aerospace Medical Institute (CAMI) in Oklahoma City to interview representatives of FAA's Aerospace Medical Certification Division (AMCD), and we attended a training seminar for Aviation Medical Examiners (AME). To obtain expert opinions on FAA's medical standards, we collaborated with the National Academies' Institute of Medicine to identify aviation medical experts. We provided the Institute of Medicine with criteria and considerations for identifying experts, including (1) type and depth of experience, including recognition in the aerospace medicine professional community and relevance of any published work; (2) employment history and professional affiliations, including any potential conflicts of interest; and (3) other relevant experts' recommendations. We also contacted the American College of Cardiology and the American Academy of Neurology to solicit their views, but they did not respond to our requests for an interview. From the list of 24 experts identified by the National Academies, we added 3 experts recommended to us and omitted 7 because of their unavailability, their concern that they might not have the expertise to respond to our questions, or their stated conflicts of interest. We ended up with a total of 20 aviation medical experts who represented private, public, and academic institutions. Fourteen of the experts are board certified by at least one of the American Board of Medical Specialties member boards, including 9 who are board certified in aerospace medicine. Eight of the 20 medical experts we interviewed are AMEs for FAA, and 16 are pilots or have had pilot experience in the past. Two experts are from aviation authorities in Australia and New Zealand, and a third is from the United Kingdom. Each expert verified that he or she had no conflicts of interest in participating in our study. We conducted semi-structured interviews by telephone with the experts in August and September 2013 to solicit their views on FAA's medical standards and qualification policies, the medical application form, and FAA's communication with AMEs and pilot applicants. We also asked general questions about aviation medical policies, followed by specific questions about private pilots, where applicable. We provided all medical experts with relevant background information prior to our interviews, and we offered them the option to bypass questions if they believed they were unqualified to respond in a professional capacity. Prior to conducting the interviews, we pretested the interview questions with three aviation medical experts (two were AMEs, and one was also a pilot). We conducted pretests to make sure that the questions were clear and unbiased and that they did not place an undue burden on respondents. We made appropriate revisions to the content and format of the questionnaire after the pretests. Each of the 20 interviews was administered by one analyst while notes were taken by another. Those interview summaries were then evaluated to identify similar responses among the experts and to develop our findings. The analysis was conducted in two steps. In the first step, two analysts developed a code book to guide how they would analyze the expert responses. In the second step, one analyst coded each transcript of expert responses, and then a second analyst verified those codes. Any coding discrepancies were resolved by the two analysts agreeing on what the codes should be.
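This two-step coding process lends itself to a simple automated consistency check. The sketch below shows the idea with invented transcript identifiers and codes; it is not GAO's actual tooling or code book.

```python
# Illustrative check of the two-analyst coding step: one analyst's codes
# are verified against the second's, and disagreements are flagged for
# joint resolution. Transcript IDs and codes are invented.

from collections import Counter

coder_a = {"expert01": {"standards_appropriate", "form_confusing"},
           "expert02": {"standards_appropriate"}}
coder_b = {"expert01": {"standards_appropriate", "form_confusing"},
           "expert02": {"form_confusing"}}

def find_discrepancies(a, b):
    """Return transcript IDs where the two analysts' code sets differ."""
    return sorted(t for t in a if a[t] != b.get(t, set()))

print(find_discrepancies(coder_a, coder_b))  # ['expert02'] -> resolve jointly

# Once codes are reconciled, tallying how many experts received a code
# yields the counts that the report summarizes with indefinite quantifiers.
tally = Counter(code for codes in coder_a.values() for code in codes)
print(tally["standards_appropriate"])  # 2 of 2 in this toy example
```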
We examined responses to determine whether there were systematic differences in responses between experts who were and were not pilots and between experts who were and were not AMEs. Because we found no significant differences between the pilot and AME groups, we reported the results for the experts as a whole rather than by the pilot or AME subgroups. We used indefinite quantifiers throughout the report—"few" (2-3 experts), "some" (4-9 experts), "half" (10 experts), "many" (11-15 experts), and "most" (16-19 experts)—to inform the reader of the approximate number of medical experts who agreed with a particular statement. We reported only on issues raised by at least two experts. We interviewed individuals with broad aerospace-medicine expertise to provide their expert opinions on FAA's medical standards and qualification policies. While the experts provided their opinions on some specific standards, we do not believe that these opinions alone provide sufficient evidence to recommend any specific changes to FAA medical standards and policies. Rather, the information from these interviews provides us with an overall expert assessment of FAA's medical standards, policies, and practices. The results of our interviews represent opinions among the experts we interviewed but cannot be generalized to the larger population of aviation medical experts. See table 2, below, for a list of the medical experts we interviewed. In addition to asking medical experts and other stakeholders about their views of FAA's communication of its medical certification requirements, we reviewed MedXPress.faa.gov, the online application system used by pilots to obtain a medical certificate. We reviewed the Pilot's Bill of Rights Notification and Terms of Service Agreement, Form 8500-8 (the medical application form) and its instructions, and links within the online application system, evaluating that information against federal government website-usability guidelines and against FAA's plain language guidelines. We evaluated the online application system based on the following criteria: (1) content—whether the website contained relevant and appropriate information users need—and (2) navigation—how easily users can find and access information on the site and move from one webpage to another, focusing on, for example, the clickable links within a website and limited reliance on scrolling. In addition, we reviewed various other website usability resources and criteria, including Usability.gov, to understand the key practices for making websites easy to use and helpful. We evaluated the medical application form and its instructions based on criteria established by FAA's Office of Communications, including its Plain Language Tool Kit and its Writing Standards. These criteria include (1) writing principles—for example, whether the document is appropriate for its audience, its content is well organized, and it uses active voice, clear pronouns, and short sentences and paragraphs—and (2) formatting principles—for example, whether the document layout and use of headers and blank space conform with best practices to clearly present information to the reader. We conducted this performance audit from January 2013 through April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: FAA Medical Standards by Class of Medical Certificate

Frequency of examination:
- First-Class (Airline Transport Pilot): every 6 months if over 40 years old; every year if under 40.
- Second-Class (Commercial Pilot): every year, regardless of age.
- Third-Class (Private Pilot): every 2 years if over 40 years old; every 5 years if under 40.

Distant vision: 20/20 or better in each eye separately, with or without correction (first and second class); 20/40 or better in each eye separately, with or without correction (third class).

Near vision: 20/40 or better in each eye separately (Snellen equivalent), with or without correction, as measured at 16 inches.

Intermediate vision: 20/40 or better in each eye separately (Snellen equivalent), with or without correction at age 50 and over, as measured at 32 inches (first and second class); no requirement (third class).

Color vision: ability to perceive those colors necessary for safe performance of airman duties.

Hearing: demonstrate hearing of an average conversational voice in a quiet room, using both ears, at 6 feet, with the back turned to the examiner, or pass one of the following audiometric tests: an audiometric speech discrimination test (score at least 70 percent reception in one ear) or a pure tone audiometric test (unaided, with thresholds no worse than the specified values).

Ear, nose, and throat: no ear disease or condition manifested by, or that may reasonably be expected to be maintained by, vertigo or a disturbance of speech or equilibrium.

Pulse: not disqualifying per se; used to determine cardiac system status and responsiveness.

Blood pressure: no specified values stated in the standards; the current guideline maximum value is 155/95.

Electrocardiogram (ECG): not routinely required.

Mental health: no diagnosis of psychosis, bipolar disorder, or severe personality disorders. A diagnosis or medical history of "substance dependence" is disqualifying unless there is established clinical evidence, satisfactory to the Federal Air Surgeon, of recovery, including sustained total abstinence from the substance(s) for not less than the preceding 2 years. A history of "substance abuse" within the preceding 2 years is disqualifying. "Substance" includes alcohol and other drugs (i.e., PCP, sedatives and hypnotics, anxiolytics, marijuana, cocaine, opioids, amphetamines, hallucinogens, and other psychoactive drugs or chemicals).

Disqualifying conditions: unless otherwise directed by FAA, the examiner must deny or defer if the applicant has a history of (1) diabetes mellitus requiring hypoglycemic medication; (2) angina pectoris; (3) coronary heart disease that has been treated or, if untreated, that has been symptomatic or clinically significant; (4) myocardial infarction; (5) cardiac valve replacement; (6) permanent cardiac pacemaker; (7) heart replacement; (8) psychosis; (9) bipolar disorder; (10) personality disorder that is severe enough to have repeatedly manifested itself by overt acts; (11) substance dependence; (12) substance abuse; (13) epilepsy; (14) disturbance of consciousness without satisfactory explanation of cause; and (15) transient loss of control of nervous system function(s) without satisfactory explanation of cause.

Appendix III: FAA Training Programs and Communication Tools for AMEs and Pilots

- Clinical Aerospace Physiology Review for Aviation Medical Examiners (CAPAME) course and Medical Certification Standards and Procedures Training (MCSPT): prospective AMEs must complete these online courses as a prerequisite to becoming an AME.
- Basic AME seminar: prospective AMEs generally must attend this one-week seminar to be designated as an AME.
- Refresher training: practicing AMEs must complete refresher training every three years to maintain their designation as an AME.
AMEs generally fulfill this requirement by either attending an AME Refresher Seminar or completing the online MAMERC course in lieu of attending an AME theme seminar. This course can be used as a substitute for a theme seminar on alternate 3-year cycles, which extends the time between theme seminar attendance to six years. In addition to the AME training and continued professional refresher courses, AMEs generally must maintain a proficiency requirement of at least 10 exams per year. According to the Federal Air Surgeon, FAA policies go into effect when they are updated in the Guide for Aviation Medical Examiners, available online.

- Federal Air Surgeon's Medical Bulletin: published quarterly for aviation medical examiners and others interested in aviation safety and aviation medicine. The Bulletin is prepared by FAA's Civil Aerospace Medical Institute, with policy guidance and support from the Office of Aerospace Medicine.
- Aerospace Medical Certification Subsystem (AMCS): e-mail notifications are sent to AMEs and their staff through AMCS. AMCS support is available by phone, (405) 954-3238, or e-mail, [email protected].
- FAA TV, http://www.faa.gov/tv, is a central repository for FAA videos related to pilot medical requirements, among other topics. For example, FAA has produced two MedXPress videos: http://www.faa.gov/tv/?mediaId=554 and http://www.faa.gov/tv/?mediaId=634. FAA also posts videos on its YouTube page, http://www.youtube.com/user/FAAnews/videos, and uses Facebook and Twitter to communicate directly with pilots and others who choose to follow FAA through social media.
- Bimonthly publications promote aviation safety by discussing current technical, regulatory, and procedural aspects affecting the safe operation and maintenance of aircraft.
- FAA pilot safety brochures provide essential information to pilots regarding potential physiological challenges of the aviation environment so pilots may manage the challenges to ensure flight safety. Brochure topics include Alcohol and Flying, Medications, Spatial Disorientation, Hearing and Noise, Hypoxia, Pilot Vision, Seat Belts and Shoulder Harnesses, Sleep Apnea, Smoke, Sunglasses for Pilots, Deep Vein Thrombosis and Travel, and Carbon Monoxide, among other topics.
- MedXPress support is available for pilots by phone, (877) 287-6731, or e-mail, [email protected], 24 hours each day.

Appendix IV: FAA Form 8500-8 (Medical Application Form)

In addition to the contact named above, the following individuals also made important contributions to this report: Susan Zimmerman, Assistant Director; Colin Fallon; Geoffrey Hamilton; John Healey; Linda Kohn; Jill Lacey; Maren McAvoy; and Sara Ann Moessbauer.
FAA developed its medical standards and pilot medical-certification process to identify pilot applicants with medical conditions that may pose a risk to flight safety. The Pilot's Bill of Rights (P.L. 112-153) mandated that GAO assess FAA's medical certification standards, process, and forms. This report addresses (1) FAA's medical standards, policies, and certification processes, along with medical experts' views on them, and (2) steps that FAA could take to promote private pilots' understanding of its medical requirements. GAO reviewed statutes, regulations, and FAA documents and interviewed officials from FAA, NTSB, and pilot associations, as well as 20 aviation medical experts primarily identified by the National Academies' Institute of Medicine. Experts were selected based on their type and depth of experience, including recognition in the aerospace-medicine professional community. GAO also interviewed FAA's medical certification division and evaluated the usability of FAA's online application system and the clarity of its application form against federal writing guidelines and best practices in website usability. Aerospace medical experts GAO interviewed generally agreed that the Federal Aviation Administration's (FAA) medical standards are appropriate and supported FAA's recent data-driven efforts to improve its pilot medical-certification process. Each year, about 400,000 candidates apply for a pilot's medical certificate and complete a medical exam to determine whether they meet FAA's medical standards. From 2008 through 2012, on average, about 90 percent of applicants were medically certified by an FAA-designated aviation medical examiner (AME) at the time of their medical exam or by a Regional Flight Surgeon. Of the remaining applicants, about 8.5 percent received a special issuance medical certificate (special issuance) after providing additional medical information to FAA, and approximately 1.2 percent were not medically certified to fly. According to an industry association, the special issuance process adds time and costs to the application process, in part because applicants might not understand what additional medical information they need to provide to FAA. Officials from FAA's medical certification division have said that technological problems with the aging computer systems that support the medical certification process have contributed to delays in the special issuance process. FAA's medical certification division has identified about 50 potential technological enhancements to its internal computer systems that support the medical certification process, of which about 20 have been identified as high priority, but the division has not yet implemented them or developed a timeline to do so. By developing a timeline to implement the highest-priority enhancements, FAA would take another step toward expediting the certification process for many applicants hoping to obtain a special issuance. FAA recently established a data-driven process using historic medical and accident data that authorizes AMEs to certify a greater number of applicants with medical conditions who had previously required a special issuance. Officials expect this effort to allow more applicants to be certified at the time of their AME visit and to free resources at FAA to focus on applicants with higher-risk medical conditions.
GAO's analysis and medical experts' opinions indicate that FAA could improve its communication with applicants by making its online application system—part of FAA's internal computer systems discussed above—more user-friendly and by improving the clarity of the medical application form. Specifically, GAO found that the online application system requires applicants to scroll through a lengthy terms-of-service agreement and does not provide clear instructions, and that the application form contains unclear questions and terms that could be misinterpreted by the applicant. FAA could enhance its online application system by using links to improve navigability of the system and by providing information that is more useful to applicants—for example, links to information about the risks that specific medical conditions pose to flight safety and any additional medical information applicants with those conditions would need to provide to FAA. FAA could also improve the clarity of its medical application form by incorporating guidelines established in FAA's Writing Standards, including shorter sentences and paragraphs, active voice, and clear terms and questions. These clarifications could not only aid an applicant's understanding of the medical standards and requirements but also may result in more accurate and complete information provided by applicants to better inform FAA's certification decisions. GAO recommends that FAA (1) develop a timeline for implementing high-priority technological improvements to the internal computer systems that support the medical certification process and (2) enhance the online medical-application system by clarifying instructions and questions on the form and providing useful information. The Department of Transportation agreed to consider the recommendations.
The federal Food Stamp Program is intended to help low-income individuals and families obtain a more nutritious diet by supplementing their income with benefits to purchase nutritious food such as meat, dairy, fruits, and vegetables, but not items such as soap, tobacco, or alcohol. The Food and Nutrition Service (FNS) pays the full cost of food stamp benefits and shares the states’ administrative costs—with FNS usually paying approximately 50 percent—and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states administer the program by determining whether households meet the program’s income and asset requirements, calculating monthly benefits for qualified households, and issuing benefits to participants on an electronic benefits card. In fiscal year 2005, the Food Stamp Program issued almost $28.6 billion in benefits to about 25.7 million individuals participating in the program, and the maximum monthly food stamp benefit for a household of four living in the continental United States was $506. As shown in figure 1, the increase in the average monthly participation of food stamp recipients in 2005 continues a recent upward trend in the number of people receiving benefits. Retailers are the front line for determining which goods can be purchased and for ensuring the integrity of the food stamp transaction. FNS operates 44 field offices throughout the country, and they have the primary responsibility for authorizing retailers to participate in the Food Stamp Program. To become an authorized retailer, a store must offer on a continuing basis a variety of foods in each of the four staple food categories—meats, poultry or fish; breads or cereals; vegetables or fruits; and dairy products—or 50 percent of its sales must be in a staple group such as meat or bakery items. However, the regulations do not specify how many food items retailers should stock. The store owner submits an application and includes forms of identification such as copies of the owner’s Social Security card, driver’s license, business license, liquor license, and alien resident card. The FNS field office program specialist then checks the applicant’s Social Security number against FNS’s database of retailers, the Store Tracking and Redemption System, to see if the applicant has previously been sanctioned in the Food Stamp Program. The application also collects information on the type of business, store hours, number of employees, number of cash registers, the types of staple foods offered, and the estimated annual amount of gross sales and eligible food stamp sales. If the application is complete, most field offices will forward a request to the private contractor employed by FNS to conduct on-site inspections that verify the information in the application and provide additional information for the approval process. The contractor visits the store and submits a map of the store layout, the inspection form, and photographs of the outside and inside of the store and its inventory. The contractor reports information on the type of store and its location, access to parking, the number of cash registers and EBT point-of-sale devices, whether shopping carts or baskets are available, and the availability of nonfood stock and services offered, such as liquor, tobacco, gasoline, check cashing, and lottery tickets. 
As part of the inspection, the contractor also evaluates the general store conditions and notes problems—such as empty coolers and shelves, dusty cans, and expired or outdated foods—that could indicate that the store may not be a viable grocery operation. Upon receiving favorable information from the contractor, the FNS program specialist authorizes the store to participate in the Food Stamp Program for 5 years. Unless a problem arises with the store, it typically would not be re-inspected until it applies for reauthorization. At the end of fiscal year 2005, more than 160,000 retailers were authorized to accept food stamp benefits. During the fiscal year, almost 24,000 new stores were authorized, 30,000 were reauthorized, and almost 17,000 left the program, most for voluntary reasons. As shown in table 1, supermarkets account for only about 22 percent of the authorized stores but redeem the lion's share of food stamp benefits. FNS defines a supermarket as a store with $2 million of gross sales and three or more cash registers that is coded as a supermarket on its food stamp application. Historically, food stamp benefits were issued as paper coupons that recipients could exchange for allowable foods. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), however, required each state agency to implement an EBT system to electronically distribute food stamp benefits, and the last state completed its implementation in fiscal year 2004. Under the EBT system, food stamp recipients receive an EBT card imprinted with their name and a personal account number, and food stamp benefits are automatically credited to the recipients' accounts once a month. As shown on the left in figure 2, in a legitimate food stamp transaction, recipients run their EBT card, which works much like a debit card, through an electronic point-of-sale machine at the grocery checkout counter and enter their secret personal identification number to access their food stamp accounts and to authorize the transfer of food stamp benefits from a federal account to the retailer's account to pay for the eligible food items. The legitimate transaction contrasts with a trafficking transaction, portrayed on the right, in which recipients swipe their EBT card, but instead of buying groceries, they receive a discounted amount of cash and the retailer pockets the difference. In addition to approving retailers to participate in the program, FNS has the primary responsibility for monitoring their compliance with requirements and administratively disqualifying those who are found to have trafficked food stamp benefits. FNS headquarters officials collect and monitor EBT transaction data to detect suspicious patterns of transactions by retailers. They then send any leads to FNS program specialists in the field office, who either work the cases themselves or refer them to undercover investigators in the Retailer Investigations Branch to pursue by attempting to traffic food stamps for cash. FNS notifies the USDA's Office of the Inspector General (OIG) before the field office specialist or undercover investigator develops a case, and the OIG may choose to open its own investigation of the case for possible criminal prosecution. The OIG may also work with the U.S. Secret Service, the Federal Bureau of Investigation, or other agencies to investigate retailers for criminal prosecution.
Secret Service officials told us they have a memorandum of understanding with the USDA that allows them to initiate food-stamp-trafficking investigations on their own, provided they notify the OIG of all the investigations in which an authorized retailer is targeted. When trafficking is proved, FNS penalizes the store owners, usually by permanent program disqualification, but in limited circumstances owners may receive civil money penalties instead. Store owners who sell ineligible goods but do not traffic are generally subject to a 1-year temporary program disqualification. If a field office specialist finds that a retailer has trafficked, the specialist sends a letter to the retailer detailing the charges and the intended penalty. If the Retailer Investigations Branch succeeds in trafficking food stamps with a retailer, it first refers the case to the OIG, which then decides whether it will investigate the case further for possible prosecution by the U.S. Attorney's office or by state and local prosecutors or refer the case back to the FNS field office to complete the disqualification action. The retailer may attempt to rebut the charges, but if the retailer does not respond or cannot provide a reasonable explanation for the specific charges, then a letter is sent executing the program disqualification. The retailer may appeal the decision, first to the Administrative Review Branch at FNS headquarters and later to the appropriate federal district court. In addition to administering the day-to-day operation of the Food Stamp Program, states also have the primary responsibility for monitoring recipients' compliance with the program's requirements and investigating any case of alleged intentional program violation. This includes cases of ineligible persons attempting to obtain food stamps or applicants deliberately providing false information in an attempt to receive more benefits than they should, as well as cases in which recipients traffic their food stamp benefits. States must ensure that appropriate cases are acted upon, either through administrative disqualification hearings or referral to a court of appropriate jurisdiction, in accordance with the procedures outlined in the Food Stamp Program regulations. FNS estimates that the rate of food stamp trafficking was 1.0 cent on the dollar for calendar years 2002 to 2005. Overall, the estimated rate of trafficking at small stores is much higher than the estimated rate for supermarkets and large groceries, which redeem most food stamp benefits. Furthermore, the implementation of EBT largely eliminated the role of the middleman by requiring a personal identification number each time the EBT card is used. FNS's most recent estimate suggests that the food-stamp-trafficking rate was 1.0 cent on the dollar for calendar years 2002 to 2005 and that this rate and the total estimated benefits trafficked have declined in recent years. FNS's first trafficking study, in 1995, estimated that about 3.8 cents of every dollar of food stamp benefits issued was trafficked in 1993. As shown in table 2, subsequent FNS studies estimated that this trafficking rate continued to decline. The trafficking exchange rate that retailers offer for food stamp benefits can vary from place to place. While retailers generally offer recipients about 50 cents for each dollar of benefits, in New York City an FNS undercover investigator told us that the exchange rate is about 70 cents, and in a few locations, some retailers will exchange one dollar of cash for one dollar of benefits as an accommodation to the food stamp recipient.
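The economics of a trafficking transaction follow directly from the discount rate: the retailer is reimbursed the full amount swiped and keeps whatever it did not pay out in cash. A brief worked example (the dollar amount is hypothetical; the rates are drawn from the ranges described above):

```python
# Illustrative arithmetic for a trafficking transaction: the recipient
# swipes benefits, receives discounted cash, and the retailer keeps the
# difference once FNS reimburses the full swipe. Rates are examples
# from the ranges described in this report, not FNS data.

def trafficking_split(benefits_swiped, exchange_rate):
    """Return (cash to recipient, retailer profit) for one transaction."""
    cash_to_recipient = benefits_swiped * exchange_rate
    retailer_profit = benefits_swiped - cash_to_recipient  # FNS reimburses the full swipe
    return cash_to_recipient, retailer_profit

for rate in (0.50, 0.70, 1.00):  # typical, New York City, dollar-for-dollar
    cash, profit = trafficking_split(100.00, rate)
    print(f"rate {rate:.2f}: recipient gets ${cash:.2f}, retailer keeps ${profit:.2f}")
```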
FNS studies suggest that small convenience and grocery stores continue to be the most common sites for trafficking. Small stores—including small grocery, convenience, specialty, and gas/grocery stores—have an estimated trafficking rate of 7.6 cents per dollar. In contrast, supermarkets and large grocery stores have an estimated rate of 0.2 cents per dollar. However, because supermarkets account for the lion's share of food stamp benefit redemptions, even at this lower rate, over $49 million of benefits may have been trafficked in supermarkets and large grocery stores in fiscal year 2005. Most FNS field officials we interviewed told us these findings reflected their experience. They characterized a typical trafficking case at their field office as occurring at a convenience, small grocery, or gas/grocery store located in an urban area where the store owner traffics with familiar neighborhood food stamp recipients. The nationwide implementation of EBT has changed the way some food stamp benefits are trafficked. Previously, in addition to trafficking conducted directly between store owners and recipients, middlemen could wait around public assistance offices or subsidized housing complexes to purchase large numbers of food stamp coupons at a discounted rate directly from recipients. The coupons might also change hands among multiple middlemen, with each taking a cut, before ultimately being exchanged for cash by a willing retailer. Field office officials told us that EBT has largely eliminated the middleman because retailers must now have the recipient's EBT card and personal identification number to conduct a trafficking transaction. As a result, some recipients have adapted their trafficking behavior to the new EBT environment. For example, one field office official told us that some food stamp recipients now stand outside stores offering to lend their EBT cards to shoppers entering the store. In this situation, the shopper would purchase groceries using the card and return it, along with a discounted amount of cash, to the recipient upon leaving the store. During our field office visit to Tallahassee, a GAO analyst was approached in his hotel parking lot by a would-be trafficker offering such a transaction. FNS has taken advantage of new technology to improve its monitoring and sanctioning of food stamp retailers, but other federal agencies have been investigating and prosecuting fewer traffickers. With the implementation of EBT, FNS has supplemented its traditional undercover investigations by the Retailer Investigations Branch with cases developed by analyzing EBT transaction data. These EBT cases now account for more than half of the permanent disqualifications by FNS (see fig. 3 below). Although the number of trafficking disqualifications based on undercover investigations has declined, these investigations continue to play a key role in combating trafficking. However, as FNS's ability to detect trafficking has improved, the number of suspected traffickers investigated by other federal entities, such as the USDA Inspector General and the U.S. Secret Service, has declined. These entities have focused more on a smaller number of high-impact investigations.
As a result, retailers who traffic are less likely to face severe penalties or prosecution. The nationwide implementation of EBT has given FNS powerful new tools to supplement its traditional undercover investigations of retailers suspected of trafficking food stamp benefits. FNS traditionally sent its investigators into stores numerous times over a period of months to attempt to traffic benefits. However, PRWORA gave FNS the authority to charge retailers with trafficking in cases based solely on EBT transaction evidence, called "paper cases." A major advantage of paper cases is that they can be prepared relatively quickly and without multiple store visits. These paper cases accounted for the majority of FNS's 841 trafficking disqualifications in fiscal year 2005. As part of the monitoring process, FNS collects each month's food stamp transaction data from the states' EBT processors and adds the data to its EBT transaction database for analysis. Six months' worth of EBT transactions—about 500 million—are available online. Information on the amount of each transaction is reported, but information on the items purchased is not available through EBT. The system scans these data to flag transactions or sets of transactions that fit a certain set of criteria defined by established patterns of fraudulent activity. The system then generates a monthly "Watch List" of retailers with suspicious transaction patterns incongruent with a store's particular type of retail operation. The Watch List is sent to the responsible FNS field office for follow-up.
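FNS's actual screening criteria are not described in this report. The sketch below shows the general shape of such a scan over transaction data—flagging stores whose transactions look out of scale for their store type—with thresholds and rules invented purely for illustration.

```python
# Illustrative sketch of a watch-list scan over EBT transactions.
# Thresholds and rules are invented for illustration; FNS's actual
# screening criteria are not public.

from collections import defaultdict

# assumed ceiling on a plausible single purchase, by store type
TYPICAL_MAX = {"convenience": 50.00, "small grocery": 100.00, "supermarket": 400.00}

def build_watch_list(transactions, min_hits=10):
    """transactions: iterable of (store_id, store_type, amount).
    Flags stores with repeated transactions far above the norm for
    their store type or in suspiciously even dollar amounts."""
    hits = defaultdict(int)
    for store_id, store_type, amount in transactions:
        too_large = amount > TYPICAL_MAX.get(store_type, 200.00)
        even_dollar = amount == int(amount) and amount >= 40  # e.g., exactly $80.00
        if too_large or even_dollar:
            hits[store_id] += 1
    return sorted(s for s, n in hits.items() if n >= min_hits)

sample = [("S1", "convenience", 80.00)] * 12 + [("S2", "supermarket", 63.17)] * 12
print(build_watch_list(sample))  # ['S1']
```

In practice a scan like this only generates leads; as described below, field office specialists must still rule out innocent explanations before a paper case is opened.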
In the field offices, program specialists begin their work on paper cases by reviewing the Watch List and leads from other sources, such as the state food stamp agency, the state EBT processors, and law enforcement agencies. Using their experience with the retailers in the area, program specialists may determine that suspicious transactions for some retailers are explainable. In such cases, the specialist may take no further action or schedule a later review of the store's transactions. In cases for which they cannot explain the suspicious transactions, program specialists determine which retailers they will pursue as paper cases. If the program specialist is unable to develop a paper case, the case may be referred to the Retailer Investigations Branch for an undercover investigation. After deciding to open a paper case, FNS obtains clearance from the OIG to pursue the case, and then the program specialist uses FNS data and a variety of other resources to gather evidence. Program specialists generally use 3 months of EBT data to show suspicious patterns. In the case files we reviewed, charge letters typically contained hundreds of examples of suspicious transactions, although FNS guidance does not specify the number of transactions necessary to support a case. Specialists also review FNS historical data on retailers to check for such things as prior program violations. In addition, these specialists obtain more current transaction data, as well as information on recipients suspected of trafficking with the retailer, through state Food Stamp Program databases. Many specialists supplement these data with online resources, such as mapping software to identify suspicious shopping patterns. Program specialists can also consult the photos taken at the time of authorization to assess whether conditions in the store support the volume of food stamp redemptions claimed. Figure 4 shows the limited counter space and the single cash register of a store that claimed food stamp redemptions of almost $200,000 per month and was later disqualified for trafficking. Such information enables the program specialists to corroborate conclusions they have drawn based on patterns in the EBT transaction data. In addition, most program specialists in the offices we visited told us they also visit the store once before charging a retailer with trafficking. Some store visits allow the program specialist to check for possible explanations for the suspicious transaction patterns, while others corroborate the suspicion that the stores are in business to traffic. For example, during one store visit, program specialists found cans of food on the shelves with thick layers of dust, many items that had passed their expiration dates, and jars of spaghetti sauce so old that the contents had separated. The store owner may attempt to rebut the charges. For example, a store owner may claim to have extended credit to recipients so they could purchase food until they received their next month's food stamp benefits and that the high-dollar transactions were repayment of the credit. Although extending credit is also a violation of program rules, it carries a lesser penalty—temporary disqualification—than trafficking. If the owner is unable to rebut the charges and the program specialist disqualifies the store, the store owner may appeal to the Administrative Review Branch. In 2005, about 6 percent of the permanent disqualifications were modified or reversed by the branch. The length of time between a new store's authorization and its first disqualification has decreased over the last 10 years. Stores that received a temporary or permanent disqualification in 1996 had been open an average of about 8.7 years, but by 2005, that average had dropped to 6.3 years. Two factors may have contributed to this 28 percent decrease in the length of time between authorization and disqualification: improved FNS monitoring of the program through the use of EBT transaction data, or an increase in the number of store owners who begin to traffic food stamps soon after authorization. The officer-in-charge of the Chicago field office believes that in her area an increasing number of store owners are trafficking immediately after authorization. We analyzed FNS's authorized retailer data for stores in the Chicago area and found that the average time between authorization and a store's first temporary or permanent disqualification dropped by nearly half. In 1996, it took a Chicago store about 5 years to receive a temporary or permanent disqualification; in 2005, it was just 2.6 years. The number of Retailer Investigations Branch undercover trafficking investigations has declined, but these investigations are often used in cases where EBT data alone are not enough to prove a retailer is trafficking. The investigators initiate cases based on requests from FNS field offices, their own review of the Watch List, or leads from state or local law enforcement agencies. As with the paper case process, FNS consults with the OIG before opening an undercover case. To build a case, the investigators make undercover visits to the store to determine whether the retailer is selling ineligible goods or trafficking food stamps.
If a retailer sells the investigator ineligible goods but does not traffic, the resulting temporary disqualification from the program for selling ineligibles can create a deterrent effect on the disqualified store owner, other store owners, and trafficking recipients, because such penalties often become known in the community. Personal safety can be a concern for investigators. One investigator told us that there are some stores, especially in urban areas, where it would be dangerous to attempt an undercover investigation. Although cases in which the Retailer Investigations Branch finds trafficking are routinely referred to the OIG for possible prosecution, in most cases the OIG returns the case to the field office for administrative disqualification. As with paper cases, the field office sends a charge letter detailing the dates on which the retailer sold ineligibles or trafficked food stamp benefits, and the retailer may attempt to rebut the charges. Once disqualified, the retailer can appeal the penalty to the Administrative Review Branch. If no violation is found, the Retailer Investigations Branch refers the case back to the field office to determine whether to continue investigating. In recent years, the USDA OIG has opened a decreasing number of food-stamp-trafficking investigations and has focused on high-impact investigations. In 2000, the OIG opened 179 trafficking investigations, while in 2005 it opened 77. According to the OIG, this has occurred both because of a lack of resources—the number of OIG investigators has dropped by 28 percent since 1997—and because, since September 11, 2001, the OIG has focused its resources on high-impact investigations, such as those involving large-scale trafficking, other criminal activity, or possible terrorist connections. In addition, OIG officials told us that it can take up to 5 years to investigate and prosecute a store owner, and the process of developing an investigation for prosecution further strains limited resources. Other federal agencies are also conducting fewer retailer food stamp trafficking investigations. The U.S. Secret Service used to take on investigations when large amounts of food stamp coupons were being trafficked. However, its involvement in retailer trafficking investigations is rare because the Secret Service finds that large trafficking investigations are less common since the implementation of EBT. EBT cards typically carry only a few hundred dollars of benefits each month, so it takes many transactions for a dishonest store owner to traffic a large amount of money. However, in large trafficking investigations or those where a retailer is believed to be diverting profits from trafficking to terrorist causes, the Secret Service or the FBI might work with the OIG and other agencies on a sting operation or a joint task force. For example, the OIG and FBI worked jointly with state and local law enforcement authorities in Florida on an investigation involving store owners who were ordered to pay $2.6 million in restitution to the USDA and went to prison after pleading guilty to trafficking over $3 million in food stamp benefits. OIG officials told us they were actively conducting task force investigations with other federal, state, and local law enforcement authorities. If an investigation is accepted and developed for prosecution by a law enforcement entity, there is still no guarantee that the trafficker will be prosecuted.
Most U.S. Attorneys' offices will not prosecute a retailer unless a great deal of money is involved, although the threshold varies from one region to another, according to federal law enforcement officials. Thus, prosecuting the store owners is a challenge. Figure 5 shows a decline in recent years in the number of investigations deemed serious enough to be referred by the OIG to the U.S. Attorney for prosecution, down from 202 in fiscal year 2001 to 21 in 2005. These data illustrate the relatively small number of store owners who have faced prosecution for trafficking in recent years, particularly in light of the 841 owners who were disqualified in fiscal year 2005. These data also show that the proportion of investigations accepted by the U.S. Attorney for prosecution has been increasing in recent years. OIG officials told us they believe they are better targeting investigations for referral. With fewer retailers prosecuted, the number of convictions has also declined. Because of the length of time it takes to prosecute a case, there is a lag between the time when a trafficking investigation is accepted by the U.S. Attorney for prosecution and the time when a retailer is convicted. Thus, it is not possible to compare the figures for investigations accepted for prosecution and those resulting in convictions in the same year. However, as shown in figure 6, the number of convictions resulting from investigations by the OIG declined from 260 in 2000 to 94 in 2005. Despite the declining FNS estimates of retailer trafficking, retailers can still enter the program intending to traffic and do so, often without fear of severe criminal penalties. Minimal food stock requirements for authorization and a lack of FNS oversight of contractor inspections may allow dishonest retailers into the program, and delays in access to transaction data may allow retailers to traffic large amounts for several months undetected. In addition, some retailers have adapted their trafficking behaviors to avoid detection, while others have found new ways to exploit the EBT technology. FNS does not yet have an overall strategy to target its monitoring resources to high-risk areas. Moreover, the available FNS penalties for trafficking may not be sufficient to deter retailers from trafficking, and the states' lack of focus on recipient trafficking can also facilitate trafficking. Minimal food stock requirements may allow corrupt retailers to enter the program, yet their stocks will likely not be checked for 5 years absent an indication of a problem. FNS field office officials told us their first priority is getting stores into the program to ensure needy people have access to food. In part because large grocery stores are sometimes scarce in urban, low-income areas, officials may allow stores with minimal food stock that meet the minimum FNS requirements to become authorized food stamp retailers. Officials told us that when a retailer stocks only small quantities of eligible food items, such as just a few cans of one kind of vegetable, it is often an indication of the intent to traffic. However, FNS regulations do not specify the amount of food items that would constitute sufficient stock. The officer-in-charge of a large urban field office expressed frustration with this lack of specificity. Many authorized stores in her area are gas-and-grocery combinations or convenience stores, and some of these stores stock only one item from each required food group.
However, she said the field office cannot deny these stores authorization based upon minimal food stock because, in her experience, the denial would be overturned if appealed. Another official at an FNS regional office told us about a store that was denied authorization in that region. According to this official, the denial was overturned by the Administrative Review Branch when the reviewing officer determined that a single can of corn sufficed as one of the three different products required in the fruit or vegetable food group. In addition, Secret Service officials said that some merchants quickly learn that they do not need to restock their stores to continue to redeem food stamps because stores are not routinely checked for 5 years unless there is some indication of a problem with the store. Staff in one of the 10 FNS field offices we visited told us that they have to authorize some retailers who seem suspicious, but they perform post-authorization visits of these stores to ensure they are legitimate. During the authorization process, FNS field offices rely on contractors to inspect stores to ensure they meet program requirements, but FNS does not independently verify the inspectors' reports. The inspector provides the final check that a store exists, that it has food in each of the required food groups, and that the information provided on the application for authorization to become a food stamp retailer is correct. However, at one field office, a contract inspector was submitting false reports, allowing dishonest retailers into the program. Oversight of retailers' entry into the program and early operations is important because newly authorized retailers can quickly ramp up the amount of food stamps they traffic, and there is no limit on the value of food stamps a retailer can redeem in 1 month. At one field office location where retailers are often innovative in their trafficking schemes, FNS officials noticed that some retailers quickly escalated their trafficking within 2 to 3 months after their initial authorization. As shown in figure 7, one disqualified retailer's case file we reviewed at that field office showed the store went from $500 in monthly food stamp redemptions to almost $200,000 within 6 months. Redemption activity dropped precipitously after the trafficking charge letter was sent to the retailer in late October. In its application for food stamp authorization, this retailer estimated he would have $180,000 in total annual food sales, yet the retailer was redeeming more than that each month in food stamp benefits before being caught in a Retailer Investigations Branch investigation. Although EBT implementation provides FNS with valuable transaction data to identify potential trafficking, an FNS headquarters official said monitoring and identification of traffickers will be improved once program specialists have faster access to transaction data to detect suspicious ramp-up activity. Currently, FNS receives each state's EBT transaction data monthly on disk from the states' EBT contractors. Under this process, the program specialists would not become aware of a retailer's rapid ramp-up activity until they had 2 months' worth of transaction data, in the third month after the retailer's authorization. Then, following the normal case development process, a charge letter would not be sent to the store until the fourth month, leading to possible disqualification in the fifth month.
According to this official, as retailers learned that FNS would eventually discover them by analyzing their EBT transactions, they responded by ramping up their trafficking activity more quickly to make a quick profit before FNS could take action. FNS officials told us they believe that the solution to combating rapid ramp-up trafficking is for FNS to receive EBT transaction data daily. FNS systems could then monitor the data more quickly and produce daily reports of rapidly increasing amounts of retailer transactions, called "spike reports." To receive this much data on a daily basis, FNS is working on building large data pipelines from the states' EBT processors and on developing its ability to manage that data before the end of this year. In the interim, FNS is piloting the use of spike reports using monthly data. As some retailers have become familiar with FNS's monitoring techniques, they have adapted their trafficking patterns to avoid detection. Unlike those who quickly ramp up their trafficking behavior for quick profit before detection through FNS monitoring, other retailers have adjusted to EBT monitoring by manipulating trafficking transactions to prevent detection by FNS analysis of transaction patterns. One field official said that there is a large network of trafficking retailers in her field office area that dissects the charge letters sent to traffickers to determine what analyses FNS conducts and to teach other retailers how to elude detection. Secret Service officials confirmed the existence of fraud networks in this area and said that one ringleader will recruit, encourage, and reward an entire family and the friends of that family for trafficking food stamp benefits. Some retailers have also found new ways to exploit the EBT technology and continue to traffic. In her July 2003 testimony, the USDA Inspector General reported that her office had recently identified a fraudulent scheme that, while rare, appeared to be growing in the Food Stamp Program. The OIG noticed that some authorized retailers were moving their point-of-sale terminals to an unauthorized location, such as an unauthorized store or apartment, for trafficking purposes. In its Semiannual Report to Congress for the first half of fiscal year 2004, the OIG reported that four individuals moved their authorized terminals to different locations in Chicago so they could exchange cash for food stamp benefits away from the authorized stores and possible detection. This allowed them to conduct a large number of transactions one after another. These individuals were sentenced to serve from 15 to 57 months in prison and ordered to pay $29.1 million in restitution for defrauding the Food Stamp Program in this way from the fall of 1997 through August 2001. OIG headquarters officials told us that moving authorized and unauthorized terminals remains a significant area of concern because of the large volume of money that can be redeemed quickly. FNS has not taken steps to ensure that it identifies those areas or stores that are at highest risk for trafficking so that it can allocate its resources accordingly. FNS has made good use of EBT transaction data to produce its Watch List to identify suspicious transaction patterns and target certain stores. It has also established task forces of undercover investigators when it identifies geographic areas needing additional coverage.
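The spike reports FNS is piloting lend themselves to a simple illustration. The following Python sketch is hypothetical (period lengths, thresholds, and field names are assumptions, not FNS's actual design) and flags stores whose latest redemption total jumps sharply from their baseline.

# Illustrative "spike report" for rapid ramp-up activity (all parameters assumed).
def spike_report(redemptions, ratio=3.0, floor=10000):
    """redemptions: maps store_id to a chronological list of redemption
    totals per period (monthly under the interim pilot; daily once FNS
    receives daily feeds). Flags stores whose latest total is both large
    in absolute terms and a multiple of the prior period's total."""
    flagged = []
    for store_id, totals in redemptions.items():
        if len(totals) < 2:
            continue  # newly authorized store; no baseline yet
        prior, latest = totals[-2], totals[-1]
        if latest >= floor and latest >= ratio * max(prior, 1):
            flagged.append((store_id, prior, latest))
    # Largest spikes first, so specialists review the biggest jumps first.
    return sorted(flagged, key=lambda f: f[2], reverse=True)

Under assumptions like these, the $500-to-$200,000 ramp-up described earlier would be flagged in the first period after the jump; the value of daily data is that the detection period shrinks from a month to a day.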
However, FNS is now at a point where it can begin to formulate more sophisticated analyses to identify high-risk areas and target its resources. For example, certain states have a disproportionate share of the disqualified stores compared with the number of food stamp recipients in those states, yet it is not clear whether these numbers indicate that trafficking is more common in those states or whether FNS program specialists and investigators have engaged in more intensive pursuit of traffickers in those areas. Our analysis of FNS's database of retailers showed that of the 9,808 stores permanently disqualified from the Food Stamp Program, about 35 percent were in just 4 states (New York, Illinois, Texas, and Florida), yet only about 26 percent of food stamp recipients lived in those states. However, FNS headquarters officials did not know the number of program specialists in the field offices in these states who devote a portion of their time to monitoring food stamp transactions and initiating paper cases. Moreover, FNS officials believe there are probably other areas of the country where trafficking is occurring that may warrant further attention or additional resources, such as California, where fewer than 5 percent of all permanent store disqualifications occurred and about 8 percent of food stamp recipients live. However, FNS officials have not yet developed a clear strategy or criteria to systematically identify those areas and reallocate resources in response. In addition, some retailers and store locations have a history of program violations leading up to permanent disqualification, but FNS did not have a system in place to ensure these stores were quickly targeted for heightened attention. Our analysis showed that, of the 9,808 stores that had been permanently disqualified from the program, about 90 percent were disqualified for their first detected offense. However, 9.4 percent of the disqualified retailers had shown early indications of problems before being disqualified: about 4.3 percent had received a civil money penalty, 4.3 percent had received a warning letter for program violations, and 0.8 percent had received a temporary disqualification. Most of these stores were small and may present a higher risk of future trafficking than others, yet FNS does not necessarily target them for speedy attention. Further, some store locations may be at risk of trafficking because a series of different owners had trafficked there. After an owner was disqualified, field office officials told us, the store would reopen under new owners who continued to traffic with the store's clientele. One field office official would like to be able to bar these repeat store locations, while another suggested a 90-day waiting period before a new owner of a disqualified store location could qualify as an authorized food stamp retailer. As table 3 shows, our analysis of FNS's database of retailers found that 174 store addresses, about 1.8 percent, had a series of different owners over time who were each permanently disqualified for trafficking at that same location, for a total of 369 separate disqualifications. In one case, a store in the District of Columbia had 10 different owners who were each disqualified for trafficking, consuming FNS's limited compliance-monitoring resources. Our analysis of the data on these stores with multiple disqualified owners indicates that FNS officials found this type of trafficking in a handful of cities and states.
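The address-level analysis just described reduces to a simple grouping operation. A minimal Python sketch, assuming a flat table of disqualification records with already-normalized addresses (the field names are illustrative):

# Illustrative repeat-location analysis (field names assumed).
from collections import defaultdict

def repeat_locations(disqualifications):
    """disqualifications: dicts with a normalized store address and an
    owner identifier, one record per permanent trafficking disqualification."""
    owners_at = defaultdict(set)
    for d in disqualifications:
        owners_at[d["address"]].add(d["owner_id"])
    # Addresses where two or more distinct owners were each disqualified
    # are candidates for early monitoring when the store changes hands.
    return {addr: owners for addr, owners in owners_at.items()
            if len(owners) >= 2}

Applied to the retailer database, a screen of this shape is what surfaces the 174 repeat addresses noted above.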
Almost 60 percent of repeat store locations were in six states, and 44 percent were in eight cities, often concentrated in small areas. For example, as figure 8 shows, 14 repeat store locations were clustered in downtown areas of both Brooklyn and Baltimore. However, it is not clear whether these data indicate heightened efforts of compliance staff or whether trafficking is more common in these areas. Regardless, early monitoring of high-risk locations when stores change hands could be an efficient use of resources. Efficient use of resources is particularly important because available compliance-monitoring resources have decreased in recent years. As the importance of paper cases has grown, the compliance-monitoring workload has gradually shifted to field office program specialists at a time when overall program resources have dwindled. Officials said the number of field investigators and field staff nationwide, which includes program specialists, has declined over the last 10 years. FNS penalties alone may not be sufficient to deter traffickers. The most severe FNS penalty that most traffickers face is disqualification from the program, and FNS must rely on other entities to conduct investigations that could lead to prosecution. For example, in the food-stamp-trafficking ramp-up case previously cited, the retailer redeemed almost $650,000 in food stamps over the course of 9 months before being disqualified from the program in November 2004. As of August 2006, there was no active investigation of this retailer. Because of the time it takes to develop an investigation for prosecution and the costs associated with doing so, a natural tension exists between the goal of disqualifying a retailer as quickly as possible to prevent further trafficking and seeking prosecution of the retailer to recover losses and deter other traffickers. One FNS field office official said it can take months or even years to investigate a case for prosecution, and in the meantime the store continues to traffic. FNS can disqualify a retailer relatively quickly—thereby saving federal dollars from misuse—compared with the time the OIG needs to investigate a case for referral for prosecution. However, if prosecution is successful, a retailer's assets and profits from trafficking can be seized, providing a potential deterrent to others considering trafficking. Paper cases often identify recipients suspected of having trafficked their food stamp benefits with a dishonest retailer, and some FNS field offices send a list of those recipients to the appropriate state. In response, some states actively pursue and disqualify these recipients. For example, Illinois has used these lists to disqualify more than 3,000 of the almost 20,000 suspected recipients referred to the state since 1999 through FNS retailer investigations. In addition to pursuing recipients who are suspected of trafficking, officials from one state told us the state uses some recipients charged with trafficking to gather evidence against retailers. However, FNS field offices do not always send lists of suspected individual traffickers to states or counties administering the program, and not all states investigate the individuals on these lists. Officials from four FNS field offices we visited said they do not send the list of recipients suspected of trafficking to the states or counties administering the program.
Other field office officials said they send the lists to their states, but the lists are not acted upon because states do not have the resources to conduct investigations into recipients who may be trafficking. FNS headquarters officials also believe that not many states are acting on the lists they receive because it is difficult and potentially costly to prove individual cases of recipient trafficking. One field office official said that store owners represent only half of the problem and that states could do more to address trafficking: if states could reduce recipients' trafficking, it would curb retailer trafficking as well. Instead of focusing on food stamp recipients who traffic their benefits, states are using their resources to focus on recipients who improperly collect benefits, according to FNS officials. The current incentive structure for the states includes performance bonuses to reward states for correcting payment errors and reducing error rates. In addition, states are penalized financially if their error rates reach a specific threshold for 2 years in a row. States that do investigate recipient traffickers can keep 35 percent of any monies they recover; however, it may be difficult to recover the funds, and the amount recovered may be minimal. When a state proves a recipient has trafficked, the recipient can no longer receive benefits, but other members of the family can. States can try to recover some of the trafficked benefits by deducting a set amount from the family's benefits each month. However, pursuing recipients who traffic can be costly and time-consuming. Taken together, these factors can result in states choosing to focus on improper benefit payments rather than recipient trafficking. This inaction by some states allows recipients suspected of trafficking to continue the practice, and it also leaves a pool of recipients ready and willing to traffic their benefits as soon as a disqualified store reopens under new management. In fact, California field office staff have begun to track suspected trafficking recipients from a disqualified store to a new store, where they begin exhibiting the same patterns. In the Food Stamp Program, stores are the front line for ensuring that recipients use food stamps to purchase appropriate food items, and these stores operate with no day-to-day oversight. Although the vast majority of stores do not traffic food stamp benefits, each year millions of dollars in program benefits that were awarded to provide food to needy individuals and families are trafficked. FNS, using EBT data, has made significant progress in taking advantage of new opportunities to monitor and disqualify traffickers. However, because store owners can begin trafficking as soon as they are authorized to participate in the program, pocketing large sums of cash for months before FNS can detect potentially suspicious transaction patterns, early monitoring and detection are critical to curbing larger losses to the program. FNS has at its fingertips a wealth of information that could help it develop additional criteria to target certain stores or geographic areas for early or more heightened monitoring, including the presence of low food stocks, the location of repeat offender stores, areas of recipient trafficking, and areas with evidence of organized fraudulent activity.
FNS’s loss of monitoring staff in recent years magnifies the need to ensure that compliance-monitoring resources are focused on those stores and geographic areas at greatest risk of trafficking. A more focused effort to target and disqualify these stores could help FNS meet its continuing challenge of ensuring that stores are available and operating in areas of high need while still maintaining program integrity. Yet, as EBT has limited the amount of benefits that can be trafficked at one time, there is less chance the retailer or the recipient will be prosecuted. There is no easy solution to this lack of deterrence. Law enforcement agencies are making decisions to efficiently use their resources by targeting larger or more critical cases. And FNS currently does not have authority to impose stiffer penalties on retailers other than program disqualification or in limited situations, civil money penalties in lieu of disqualification. Food stamp trafficking will continue to be lucrative for retailers as long as the potential rewards outweigh the penalties and there are recipients willing to exchange their benefits for cash and resources are not used for investigations and penalizing recipients. We recommend that the Secretary of the Department of Agriculture direct FNS to take the following five actions. To help ensure that its limited compliance-monitoring resources are used efficiently, FNS should develop additional criteria to help identify stores most likely to traffic and their locations; conduct risk assessments, using compliance and other data, to systematically identify stores and areas that meet these criteria; and allocate resources accordingly, and provide more targeted and early oversight of stores that meet these criteria, such as conducting early monitoring or follow-up inspections. To provide further deterrence for trafficking, FNS should develop a strategy to increase the penalties for trafficking, working with the OIG as needed. If these penalties entail additional authority, consider developing legislative proposals for program reauthorization in 2007. To promote state efforts to pursue recipients suspected of trafficking and thereby reduce the pool of recipient traffickers, FNS should: ensure that FNS field offices report to states those recipients who are suspected of trafficking with disqualified retailers, and revisit the incentive structure to incorporate additional provisions to encourage states to investigate and take action against recipients who traffic. We provided a draft of this report to the U.S. Department of Agriculture and the U.S. Secret Service for review and comment. On September 5, 2006, FNS officials provided us with their oral comments. The officials generally agreed with our findings, conclusions, and recommendations. However, FNS officials raised a concern regarding our recommendations on more efficient use of their compliance-monitoring resources. They stated they believe they do have a strategy for targeting resources through their use of the Watch List, which helps them identify suspicious transaction patterns and target certain stores, combined with their ability to establish task forces of investigators when they identify geographic areas needing additional coverage. We believe that FNS has made good progress in its use of EBT transaction data; however, it is now at a point where it can begin to formulate more sophisticated analyses. 
For example, these analyses could combine EBT transaction data with other available data, such as information on stores with minimal inventory and stores with a past history of trafficking, to develop criteria to better and more quickly identify stores at risk of trafficking. In addition, FNS could also take advantage of more sophisticated analysis tools, such as certain mapping programs, to better identify those areas where trafficking is more prevalent. Finally, to increase the likelihood of success, FNS will need to combine the expertise of its field investigators and its program specialists and then allocate these resources to monitor those stores at the greatest risk of trafficking. FNS and OIG officials also provided technical comments, which we incorporated where appropriate. The U.S. Secret Service did not provide us with formal comments but told us it concurred with the findings in our report and that it agreed with our recommendation that additional work needs to be done to increase existing penalties for trafficking. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix I. In addition to the contact named above, Kay Brown, Assistant Director; Gloria Hernandezsaunders; Kevin Jackson; Kevin Kumanga, Analyst-in-Charge; Crystal Lazcano; Jesus Moreno; Phil Reiff; Ramon Rodriguez; Eden Savino; Dan Schwimer; Vanessa Taylor; Rachael Valliere; and Jill Yost made key contributions to this report.

Improper Payments: Federal and State Coordination Needed to Report National Improper Payment Estimates on Federal Programs. GAO-06-347. Washington, D.C.: April 14, 2006.
Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005.
Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004.
Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004.
Financial Management: Coordinated Approach Needed to Address the Government's Improper Payments Problems. GAO-02-749. Washington, D.C.: August 9, 2002.
Food Stamp Program: States' Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002.
Executive Guide: Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001.
Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001.
Food Assistance: Reducing the Trafficking of Food Stamp Benefits. GAO/T-RCED-00-250. Washington, D.C.: July 19, 2000.
Food Stamp Program: Better Use of Electronic Data Could Result in Disqualifying More Recipients Who Traffick Benefits. GAO/RCED-00-61. Washington, D.C.: March 7, 2000.
Food Stamp Program: Information on Trafficking Food Stamp Benefits. GAO/RCED-98-77. Washington, D.C.: March 26, 1998.
Every year, food stamp recipients exchange hundreds of millions of dollars in benefits for cash instead of food with retailers across the country, a practice known as trafficking. From 2000 to 2005, the Food Stamp Program grew from $15 billion to $29 billion in benefits. During this period, the U.S. Department of Agriculture's (USDA) Food and Nutrition Service (FNS) replaced paper food stamp coupons with electronic benefit transfer (EBT) cards that work much like a debit card at the grocery checkout counter. Given these program changes and continuing retailer fraud, GAO was asked to provide information on (1) what is known about the extent and nature of retailer food stamp trafficking, (2) the efforts of federal agencies to combat such trafficking, and (3) program vulnerabilities. To do this, GAO interviewed agency officials, visited 10 field offices, conducted case file reviews, and analyzed data from the FNS retailer database. FNS's estimates suggest trafficking declined between 1995 and 2005 from 3.8 cents per dollar of benefits redeemed to 1.0 cent, resulting in an estimated $241 million in food stamps trafficked in 2005. The rate of trafficking in small grocery and convenience stores is 7.6 cents per dollar, significantly higher than the rate for large stores, where it is estimated to be 0.2 cents per dollar. In addition, the use of EBT cards has changed the way some benefits are trafficked, for example, by eliminating middlemen who used to collect and redeem large amounts of paper coupons from program participants willing to sell them. FNS has taken advantage of EBT data to improve its ability to detect and disqualify trafficking retailers, while law enforcement agencies have conducted a decreasing number of investigations. Cases using only EBT transaction data now account for more than half of trafficking disqualifications, supplementing traditional, but more time-consuming, undercover investigations. Other federal entities, such as the USDA's Inspector General and the U.S. Secret Service, have reduced the number of traffickers they pursue in recent years and focused their efforts on high-impact cases. This has resulted in fewer cases referred for federal prosecution and fewer federal convictions for retailer trafficking. Despite FNS progress, the program remains vulnerable because retailers can enter the program intending to traffic, often without fear of severe criminal penalties. FNS authorizes some stores with limited food supplies so that low-income participants in areas with few supermarkets have access to food, but it may not inspect these stores again for 5 years unless there is some indication of a problem. Oversight of early operations is important because newly authorized retailers can quickly ramp up the amount of benefits they traffic; one location that FNS disqualified for trafficking redeemed almost $650,000 in 9 months. In addition, FNS has not conducted analyses to identify high-risk areas and to target its limited compliance-monitoring resources. Furthermore, disqualification, FNS's most severe penalty, may not be a sufficient deterrent, and FNS must rely upon others for prosecution. Finally, states' failure to pursue trafficking recipients leaves a pool of recipients willing to traffic when a disqualified store reopens.
Medicaid enrollees across various eligibility categories may have access to private health insurance for a number of reasons. For example, some adults may be covered by employer-sponsored private health insurance even though they also qualify for Medicaid. Children similarly may be eligible for Medicaid while also being covered as a dependent on a parent's private health plan. Individuals age 65 and older may receive private coverage from a former employer or purchase such coverage to supplement their Medicare coverage. Medicaid benefits and costs may vary depending on an enrollee's eligibility category. CMS requires states to provide for the identification of Medicaid enrollees' other sources of health coverage, verification of the extent of the other sources' liability for services, avoidance of payment for services in most circumstances where the state believes a third party is liable, and recovery of reimbursement from liable third parties after Medicaid payment, if the state can reasonably expect to recover more than it spends in seeking recovery. Specifically, states must provide for the following steps:

1. Coverage identification. To identify enrollees with third-party health coverage, states are required to request coverage information from potential Medicaid enrollees at the time of any determination or redetermination of eligibility for Medicaid. States are also required to obtain and use information pertaining to third-party liability, for example by conducting data matches with state wage information agencies, Social Security Administration wage and earning files, state motor vehicle accident report files, or state workers' compensation files.

2. Coverage verification. When other health coverage is identified, states need to verify the information, including the services covered through the other insurance and the dates of eligibility.

3. Cost avoidance. Cost avoidance occurs when states do not pay providers for services until any other coverage has paid to the extent of its liability, rather than paying up front and recovering costs later. After a state has verified other coverage, it must generally seek to ensure that health care providers' claims are directed to the responsible party. The cost-avoidance process accounts for the bulk of the cost savings associated with third-party liability.

4. Payment recovery. When states have already paid providers for submitted claims for which a third party is liable, they must seek reimbursement from the third party, if it is cost effective to do so.

States have flexibility in determining specific approaches to achieve these ends. For example, states are increasingly contracting with managed care plans to deliver services to Medicaid enrollees (such plans are hereafter referred to as Medicaid managed care plans), and may delegate TPL responsibilities to such plans. Both states and Medicaid managed care plans may obtain the services of a contractor to identify third-party coverage by conducting electronic data matches and to conduct other TPL responsibilities, such as payment recovery. Ensuring compliance with Medicaid TPL requirements has long been challenging for states. The McCarran-Ferguson Act affirms the authority of states to regulate the business of insurance in the state, without interference from federal regulation, unless federal law specifically provides otherwise. Thus, states generally regulate private health insurers operating in the state.
However, states may not have authority over private insurers that are not licensed to do business in the state but still provide coverage to state residents. For example, some individuals work and receive health insurance through employment in one state but live in a neighboring state. In addition, states are preempted by the Employee Retirement Income Security Act of 1974 (ERISA) from regulating employer-sponsored health benefit plans that self-insure coverage rather than purchase coverage from an insurer. Due to the bifurcated nature of private health insurance regulation, both federal and state legislation has been required to allow states to enforce TPL requirements. For example, the Omnibus Budget Reconciliation Act of 1993 required all states to enact laws prohibiting insurers from taking Medicaid status into account in enrollment or payment for benefits and to enact laws giving the state rights to payments by liable third parties. In addition, the Deficit Reduction Act of 2005 (DRA) contained provisions affecting state authority to verify coverage and recoup payments from liable health insurers. Under the DRA, states must attest that they have laws in place to require health insurers to, among other requirements, provide information necessary to identify Medicaid enrollees with third-party coverage and, within specified time limits, respond to inquiries from the state regarding claims, as well as to agree not to deny claims solely because of the date the claim was submitted, the form that was used, or the failure to properly document coverage at the point of service. The 2013 HHS OIG report on TPL cost savings and challenges concluded that the DRA provisions likely had a positive effect on states' ability to avoid costs and recover payments from private health insurers, in part through improvements in states' identification of enrollees with insurance. States also credited process improvements, such as online verification of coverage and electronic data matching agreements with private insurers, as well as contractor assistance. However, the study reported that states continue to face key challenges working with private insurers, including the following:

- 96 percent of states reported challenges with insurers denying claims for procedural reasons.

- 90 percent of states reported challenges with insurer willingness to release coverage information to states.

- 86 percent of states reported challenges with insurers providing incomplete or confusing information in response to attempts to verify coverage.

- 84 percent of states reported problems with pharmacy benefit managers—entities that administer pharmacy benefits on behalf of insurers or employers—such as pharmacy benefit managers not providing coverage information or claiming a lack of authority to pay claims to Medicaid agencies.

Based on responses to the U.S. Census Bureau's ACS, we estimate that 7.6 million Medicaid enrollees—13.4 percent—also had a private source of health insurance in 2012. However, the prevalence of private health insurance varied among the four Medicaid eligibility categories that we analyzed—children, adults, disabled, and aged. For example, according to our estimates, 34.6 percent of aged Medicaid enrollees also had private health insurance, compared to 12.4 percent of adult Medicaid enrollees and 8.4 percent of children. (See fig. 1 and app. II, table 1, for more detailed estimates.)
The number of Medicaid enrollees who also have private health insurance is expected to increase beyond the estimated 7.6 million with the expansion of Medicaid; however, the extent of the increase is uncertain. The Congressional Budget Office projected that approximately 7 million nonelderly individuals would enroll in Medicaid in 2014 as a result of the Medicaid expansion and other PPACA provisions. While some newly Medicaid-eligible individuals can be expected to have access to private sources of health insurance, the extent to which they will participate in Medicaid, or maintain private insurance once enrolled in Medicaid, is unknown. If these individuals' rates of private insurance are similar to the 12.4 percent of adult Medicaid enrollees whom we estimated had private insurance in 2012, about 868,000 of the projected 7 million new enrollees in 2014 would be expected to have private insurance. States face multiple challenges in ensuring that Medicaid is the payer of last resort for enrollees who have private health insurance. Selected states and CMS have taken various steps to address some of these challenges; however, selected states and stakeholders suggested that further CMS guidance and efforts to facilitate information sharing among states could improve TPL efforts nationwide. Because the identification of Medicaid enrollees with private health insurance is a critical first step for achieving TPL cost savings, many states nationwide conduct electronic data matches of Medicaid enrollment files with insurer files, either themselves or through a contract with a vendor that conducts matches on the state's behalf. While not required, such state efforts to independently identify enrollees with private insurance can lead to significant cost savings. For example, Minnesota officials reported that by contracting with a vendor for electronic data matching, the state nearly doubled identified cases of TPL in a 5-year period, saving the state an estimated $50 million over this period. Despite such efforts, states included in our review reported experiencing the following challenges to their coverage identification efforts:

- Challenges obtaining out-of-state coverage data. Medicaid enrollees in one state may have coverage from a health insurer that is licensed in a different state; for example, some enrollees work and participate in employer-sponsored insurance in one state while living and enrolling in Medicaid in a neighboring state. State laws requiring insurers to provide coverage data may not apply if insurers are not licensed in the state, and officials from two of the states we reviewed noted that insurers sometimes refuse to provide coverage data to Medicaid agencies outside the state in which they are licensed. HMS representatives reported that, while HMS advocates that insurers provide coverage data to Medicaid agencies outside the state in which the insurers are licensed, many insurers refuse to do so. According to CMS, a significant amount of third-party coverage derives from insurers licensed in a different state from the one in which the Medicaid enrollee resides.

- Challenges with insurers conducting data matches. State and HMS representatives reported that, rather than providing coverage data to the state (or its contractor, as applicable), some insurers request the Medicaid data and perform the data match themselves.
HMS representatives reported that, in such cases, states only have access to matches identified by the insurer, which may understate the number of individuals with overlapping coverage. One state estimated that insurers missed the identification of about 7 percent of the individuals with private insurance when insurers conducted the match instead of the state's contractor.

- Challenges with obtaining key data elements. Insurers may not maintain key data elements, such as Social Security numbers, or may not provide states or their contractors access to them, and not having access to these data can reduce the efficiency or usefulness of data matches, according to officials in several states we reviewed. For example, officials from two selected states noted that data matches are more difficult and error-prone when Social Security numbers are not available. Similarly, officials from two other states we reviewed reported that their ability to verify identified coverage would be assisted if employer identification numbers were included in insurer coverage data.

- Challenges with timeliness of data matches. Most selected states reported that there is a time lag, typically 15 to 30 days, between an individual's enrollment in Medicaid and when the individual is included in a data match with private insurers. As a result, states may not be able to identify other coverage until after enrollees have already begun using services. States would generally then seek reimbursement for paid claims.

States in our review reported taking various steps to address these and other coverage identification challenges. Four of the eight selected states reported initiatives underway or completed to improve data-matching strategies to identify private coverage, some of which focused on nationally coordinated approaches. For example, Minnesota officials reported that Minnesota law allows the state Medicaid agency and Medicaid managed care plans to participate in a national coverage data registry, launched in late 2013 by CAQH, an association of health plans and trade associations. The data registry allows participating insurers and states to submit coverage data files for comparison with files of other participants in order to identify individuals with overlapping coverage. Minnesota officials commented that the registry was at an early stage but expected that participation of private insurers would increase over time because of the benefits to private insurers of coordinating with one another. Table 1 describes a variety of initiatives underway or completed to improve coverage data in selected states. In addition, at least two of the eight states had laws that addressed challenges with obtaining private insurer compliance with TPL requirements, including requirements to provide coverage data. For example, Michigan law authorizes the state to collect coverage data from insurers to determine TPL and to assess penalties on insurers for noncompliance. Michigan officials reported that the state was successful in obtaining national coverage data from insurers. In addition, Minnesota law requires that all insurers that cover state Medicaid enrollees comply with TPL requirements irrespective of where they are licensed.
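A minimal Python sketch of the electronic data match at the center of these efforts, assuming hypothetical field names (production matches use many more identifiers plus probabilistic matching rules):

# Illustrative coverage data match (field names and rules assumed).
def match_coverage(enrollees, carrier_records):
    """enrollees: Medicaid enrollment records; carrier_records: insurer
    coverage records. Match on Social Security number where available,
    falling back to last name plus date of birth; the fallback is why
    missing SSNs make matches more difficult and error-prone."""
    by_ssn = {c["ssn"]: c for c in carrier_records if c.get("ssn")}
    by_name_dob = {(c["last_name"].lower(), c["dob"]): c
                   for c in carrier_records}
    matches = []
    for e in enrollees:
        c = by_ssn.get(e.get("ssn"))
        if c is None:
            c = by_name_dob.get((e["last_name"].lower(), e["dob"]))
        if c is not None:
            matches.append((e["medicaid_id"], c["policy_id"]))
    return matches

Who runs this comparison matters: a state (or its contractor) running the match sees every hit, while an insurer-run match returns only the hits the insurer reports back, which is the understatement risk described above.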
Selected states have taken various actions that support or increase oversight of Medicaid managed care plan TPL activities, as applicable. For example, in five of the eight states in our review, individuals with third-party coverage may be eligible to enroll in Medicaid managed care plans, and certain TPL responsibilities are delegated to these plans. The laws of two selected states—Ohio and Minnesota—specifically authorize Medicaid managed care plans to recover TPL payments on the state's behalf. Ohio officials in particular credited the legislation as effective in improving insurer cooperation with the state's Medicaid managed care plans. While the DRA required states to have laws in effect compelling insurers to provide states with access to data and to recognize the state's right to recoup payments, it did not provide that those laws specifically require insurers to similarly cooperate with Medicaid managed care plans conducting such work on behalf of states. CMS provided guidance that, when states delegate TPL responsibilities to a Medicaid managed care plan, third parties should treat the plan as if it were the state. HMS representatives reported that this guidance has been effective in garnering cooperation from insurers that previously refused to provide coverage data or pay claims to Medicaid managed care plans in various states without legislation specifically requiring them to do so. However, a few insurers continue to refuse to cooperate with such plans despite this guidance, according to information provided by representatives of HMS and Medicaid Health Plans of America (MHPA), an association of Medicaid managed care plans. In addition, Minnesota sought to improve its oversight of Medicaid managed care TPL activities by initiating a program to allow the state to review Medicaid managed care plan TPL payment recoveries and to arrange for supplemental recoveries when the plans had not recouped payment within a set time. However, according to a representative of the National Association of Medicaid Directors, it can be difficult for states to work with Medicaid managed care plans and insurers as needed to strengthen state oversight. The other states included in our review that delegate TPL work to Medicaid managed care plans did not report conducting this type of oversight, which is consistent with information provided by MHPA in which plans indicated that some states that contract with Medicaid managed care plans to perform TPL activities do not specifically review these activities. We have previously found that some Medicaid managed care plans may have a conflict of interest in conducting payment recoveries. Specifically, Medicaid managed care plans may not have appropriate incentives to identify and recover improper payments—which include payments made for treatments or services that were not covered by program rules, that were not medically necessary, or that were billed for but never provided—because doing so could reduce future capitation rates. Most selected states reported challenges with denials from private insurers for procedural reasons, such as for not obtaining prior authorization before receiving services or not using in-network providers. HMS representatives estimated that in 2013, insurers had denied about $120 million in claims for failure to obtain prior authorization, and about $30 million for failure to use an in-network provider, for states and for Medicaid managed care plans with which HMS contracted. Selected states reported various methods to reduce such denials:

- Ohio and Missouri laws explicitly prohibit denials due solely to a lack of prior authorization for services.
- Massachusetts, Georgia, and New York officials reported that they contest denials due solely to a lack of prior authorization for services based on general state legislation passed in accordance with the DRA, which requires states to prohibit insurers from denying claims based solely on the date the claim was submitted, the form that was used, or the failure to properly document coverage at the point of service.

- Michigan and Minnesota, through their Medicaid provider manuals, require providers to check for third-party coverage and specify that providers are not to be paid by Medicaid for services provided to enrollees if the rules of the third-party coverage were not followed. For example, Michigan's Medicaid provider manual states that Medicaid will not cover charges incurred when enrollees elect to go out of their third-party insurer's preferred provider network. Michigan and Minnesota officials reported that these types of denials were generally not problems for their states. (See Michigan Medicaid Provider Manual, Coordination of Benefits, §§ 1.3, 2.1 (October 2014); and Minnesota Medicaid Provider Manual, Billing Policy (Overview), Section on Coordination of Services (September 2014), and Medicare and Other Insurance, Section on Third-Party Liability (TPL) (December 2013).)

CMS has taken steps, including issuing additional guidance, to address certain challenges that states face in ensuring that Medicaid is the payer of last resort. For example, CMS published a set of frequently asked questions (FAQ) in September 2014 that clarified the parameters under which health insurers are permitted to release coverage information to states in light of Health Insurance Portability and Accountability Act of 1996 privacy restrictions, and emphasized the role of state legislation in specifying the scope of information required to be submitted by health insurers. The guidance also reiterated previously published information, such as clarifying that when states delegate TPL responsibilities to a Medicaid managed care plan, third parties are required to treat the plan as if it were the state. CMS officials also noted that the agency is available to provide technical assistance relating to TPL at the request of states or other entities. In addition, CMS has taken steps to foster collaboration among states. For example, CMS solicited effective TPL practices that had been implemented as of 2013 from states and published the responses. On a related note, CMS officials highlighted the role of the Coordination of Benefits (COB)-TPL Technical Advisory Group (TAG) in providing states with opportunities to coordinate and share information on TPL challenges and effective practices. Specifically, CMS officials said that COB-TPL TAG representatives are responsible for canvassing states about problems that may be occurring and reporting these back to CMS. However, officials from one state suggested that COB-TPL TAG representatives need to do more to proactively survey states and share information about problems that states not directly represented on the COB-TPL TAG are experiencing. While acknowledging CMS's efforts, stakeholders and officials from selected states suggested a need for additional federal action, commenting on how, for example, additional or clarified guidance could facilitate state efforts to conduct certain TPL activities.
The National Association of Medicaid Directors recommended, given the growth in states' use of managed care, that CMS require states to share available insurance coverage information with Medicaid managed care plans and provide an approved approach for conducting oversight of such plans' TPL activities. According to a representative of this association, several states indicated that explicit CMS guidance in this area would provide states leverage to strengthen their Medicaid managed care plan contracts and oversight related to TPL. HMS representatives recommended that CMS strengthen its statements encouraging insurers to share coverage information with out-of-state Medicaid agencies, and further clarify through regulations existing CMS guidance regarding insurer cooperation with Medicaid managed care plans that conduct TPL activities on behalf of states. State officials suggested that CMS could provide information to ensure all states are aware of promising available data-matching strategies. CMS, however, may have incomplete information to inform such guidance because, according to CMS, the agency does not actively track all states' coverage-identification strategies on an ongoing basis and, in some cases, may not be aware of promising state initiatives. While the effective state practices CMS solicited and shared with states included information on initiatives implemented as of 2013, other state initiatives underway were not included. For example, Minnesota officials said they had submitted information about the CAQH data registry; however, the state's submission did not meet the criteria for inclusion in the effective practices document because the state had not yet implemented the registry. In addition, while CMS suggests that states should oversee Medicaid managed care plan TPL activities, as applicable, the agency does not track which states delegate TPL responsibilities to Medicaid managed care plans, nor does it track problems with, or state oversight of, Medicaid managed care plan TPL activities in the states that do. Officials from selected states also emphasized efficiencies and other benefits that could be gained from state collaboration and information sharing, which CMS could support. For example, Michigan officials noted that the state wanted to explore sharing the national coverage data it obtained from insurers, as well as the TPL tracking and billing system it developed, with other states, noting the cost-effectiveness of states using its system and data rather than each developing their own. In addition, officials in multiple states noted the value of CMS-facilitated national TPL conferences that provide states with opportunities to discuss emerging problems and share expertise regarding solutions. CMS officials indicated that the last conference occurred when there were significant changes under the DRA and that CMS has no specific plans to facilitate future TPL conferences, but officials noted that discussions were underway regarding additional conferences or other training opportunities. National survey data suggest that a substantial number of Medicaid enrollees—7.6 million—had private health insurance in 2012 and that many of these enrollees were in eligibility groups that traditionally incur higher medical costs. Furthermore, this number is expected to increase because of the Medicaid expansion.
States have front-line responsibility for ensuring that Medicaid is the payer of last resort and are required to take steps to identify individuals with other health insurance and ensure that other insurance pays to the extent of its liability. Substantial increases in TPL cost savings in recent years demonstrate that improvements to TPL efforts, such as heightened attention to coverage identification, can markedly improve TPL cost avoidance and recoveries. The scale of the cost savings to Medicaid at both the federal and state levels from identifying coverage through, and obtaining payment of services by, private health insurance—reportedly nearly $14 billion in 2011—underscores the potentially significant return on investment from continued TPL improvement efforts and attention to resolving remaining gaps in state access to available coverage data. Selected states have taken a variety of steps to further improve TPL efforts, and other states may also be implementing initiatives to address the persistent challenges states report in ensuring that Medicaid pays after other liable third parties. The various initiatives that selected states have undertaken—such as initiatives to improve identification of enrollees with private health insurance through data matches or to ensure that TPL efforts are maintained in an increasingly managed care environment—highlight options that other states could consider to improve their respective TPL savings. Other states may also have initiatives that could be adopted more broadly. CMS has taken steps to support states and publicize effective state practices. However, as new strategies emerge over time, a robust ongoing effort to collect and share information about state initiatives would help ensure that states—particularly any states that may not conduct data matches with private insurers—are aware of available data-matching strategies and of solutions to challenges states or Medicaid managed care plans may face in conducting TPL activities. Given the significant federal Medicaid outlays, which are increasing as Medicaid expands under PPACA, the federal government has a vested financial interest in further increasing states' TPL cost savings, and CMS should play a more active leadership role in monitoring, understanding, supporting, and promoting state TPL efforts.

In light of the federal interest in ensuring that Medicaid pays only after other liable third parties; state initiatives to improve TPL efforts, such as coverage identification strategies; and states' increasing use of managed care, we recommend that the Secretary of Health and Human Services direct CMS to take the following two additional actions to oversee and support state TPL efforts: (1) routinely monitor and share across all states information regarding key TPL efforts and challenges, and (2) provide guidance to states on their oversight of TPL efforts conducted by Medicaid managed care plans.

We provided a draft of this report to HHS for comment. In its written comments—reproduced in appendix III—HHS concurred with our recommendations. HHS stated that it will continue to look at ways to provide guidance to states to allow for sharing of effective practices and to increase awareness of initiatives under development in states. HHS also stated that it will explore the need for additional guidance regarding state oversight of TPL efforts conducted by Medicaid managed care plans.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

To assess the extent to which Medicaid enrollees have private health insurance, we utilized the ACS, an annual survey conducted by the U.S. Census Bureau. The ACS includes representative samples of households from each state and also includes individuals residing in institutions such as nursing homes. The ACS collects self-reported information, such as the type of health insurance coverage as of the date of the survey (if any), disability status, age, and state of residence. We analyzed data from the most recent ACS Public Use Microdata Sample (PUMS) that was available at the time we conducted our work, which covered calendar year 2012. Medicare is a federal health insurance program for individuals aged 65 and older or with certain disabilities and individuals with end-stage renal disease. TRICARE is a federal health program generally for active-duty military personnel and their dependents, and retirees and their dependents and survivors. In the ACS, Medicaid coverage was assigned to foster children, certain individuals receiving Supplemental Security Income or Public Assistance, and the spouses and children of certain Medicaid beneficiaries; Medicare coverage was assigned to individuals aged 65 and older who received Social Security or Medicaid benefits; and TRICARE was assigned to active-duty military personnel and their spouses and children. We determined that the ACS PUMS data were sufficiently reliable for the purposes of our engagement.

From the available ACS PUMS data, we constructed the following variables for our analysis:

Medicaid coverage and eligibility category. We defined individuals as having Medicaid if they reported health coverage through Medicaid, medical assistance, or any kind of government assistance plan for individuals with low incomes or a disability. These sources of coverage are combined in one question in the ACS PUMS. For purposes of the report, we refer to these individuals collectively as Medicaid enrollees. We further categorized Medicaid enrollees into four broad Medicaid eligibility categories—children, adults, disabled, and aged. We defined the child eligibility category as individuals aged 0 through 18 who did not report a disability; the adult eligibility category as individuals aged 19 through 64 who did not report a disability; the disabled eligibility category as individuals aged 0 through 64 who reported one or more of the six disability indicators included in the ACS data; and the aged eligibility category as individuals aged 65 and older.

Third-party private and public health coverage. We defined individuals as having private insurance coverage if they reported having health insurance through a current or former employer or union, insurance purchased directly from an insurance company, or both.
We defined individuals as having public coverage other than Medicaid if they reported coverage through Medicare or TRICARE, or reported having ever used or enrolled in health care provided through the Department of Veterans Affairs (VA). Based on the variables defined above, we used calendar year 2012 ACS PUMS data to estimate the number and percentage of Medicaid enrollees with private and other sources of health coverage. We produced separate estimates by Medicaid eligibility group and state of residence. To generate our estimates, we applied the appropriate weights contained in the ACS PUMS data files in order to expand the sample to represent the total population and to account for the complex sample design. Specifically, we used the person weights to generate estimated numbers and percentages, and we used the person replicate weights to generate standard errors. To assess the precision of our estimates, we calculated a relative standard error for each estimate, which is the standard error of the estimate divided by the estimate itself. For example, if an estimate has a mean of 100 and a standard error of 20, the relative standard error would be 20/100, or 20 percent. Estimates with small relative standard errors are more precise, and thus more reliable, than estimates with large relative standard errors because there is less variance around the mean. Unless otherwise noted, estimates included in this report have relative standard errors of less than 15 percent.

The following tables provide more detailed information about the estimates derived from our analysis of the 2012 American Community Survey (ACS) Public Use Microdata Sample (PUMS). Specifically, tables 1 and 2 provide estimates of the number and percentage of Medicaid enrollees with other sources of health coverage by Medicaid eligibility category and by state.

In addition to the contact named above, Susan Anthony, Assistant Director; Emily Beller; George Bogart; Britt Carlson; Laurie Pachter; and Ying Long made key contributions to this report.
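To make the weighting and precision computations described in the methodology above concrete, the following is a minimal sketch, in Python, using synthetic person records. The ACS PUMS supplies a person weight (PWGTP) and 80 person replicate weights (PWGTP1 through PWGTP80); the standard error formula shown is the successive difference replication formula documented for the PUMS, and all record values here are fabricated for illustration.

```python
# A minimal sketch, on synthetic records, of weighted estimation and the
# relative standard error computation described above.
import math
import random

random.seed(1)
N_REPS = 80  # the ACS PUMS provides 80 person replicate weights

def make_synthetic_person():
    """Build a fake PUMS-like record: coverage flags plus weights."""
    rec = {"medicaid": random.random() < 0.2,
           "private": random.random() < 0.3,
           "PWGTP": 100.0}
    for r in range(1, N_REPS + 1):
        rec[f"PWGTP{r}"] = 100.0 * random.uniform(0.8, 1.2)  # fake replicates
    return rec

persons = [make_synthetic_person() for _ in range(5000)]

def weighted_total(weight_key, predicate):
    """Population total: sum of weights over persons matching the predicate."""
    return sum(p[weight_key] for p in persons if predicate(p))

def standard_error(predicate):
    """Successive difference replication:
    SE^2 = (4/80) * sum over replicates of (replicate estimate - estimate)^2."""
    full = weighted_total("PWGTP", predicate)
    sq = sum((weighted_total(f"PWGTP{r}", predicate) - full) ** 2
             for r in range(1, N_REPS + 1))
    return math.sqrt(4.0 / N_REPS * sq)

# Estimate Medicaid enrollees with private coverage, then the RSE (SE/estimate).
target = lambda p: p["medicaid"] and p["private"]
estimate = weighted_total("PWGTP", target)
se = standard_error(target)
print(f"Estimate: {estimate:,.0f}  SE: {se:,.0f}  RSE: {se / estimate:.1%}")
```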
In fiscal year 2013, Medicaid—jointly financed by states and the federal government—provided health care coverage to over 70 million individuals at a total cost of about $460 billion. Congress generally established Medicaid as the health care payer of last resort, meaning that if enrollees have another source of health care coverage—such as private insurance—that source should pay, to the extent of its liability, before Medicaid does. This is referred to as third-party liability (TPL). There are known challenges to ensuring that Medicaid is the payer of last resort. GAO was asked to provide information on the prevalence of private insurance among Medicaid enrollees and on state and CMS efforts to ensure that Medicaid is the payer of last resort. This report examines (1) the extent to which Medicaid enrollees have private insurance, and (2) state and CMS initiatives to improve TPL efforts. GAO analyzed the 2012 ACS; interviewed Medicaid officials from eight states with high program spending or enrollment that used managed care; interviewed CMS officials and stakeholders; and reviewed relevant laws, regulations, and CMS guidance. Based on responses to the 2012 U.S. Census Bureau's American Community Survey (ACS)—the most recent available at the time the work was conducted—GAO estimates that 7.6 million Medicaid enrollees (13.4 percent) had private health insurance in 2012. The estimated prevalence of private health insurance varied among Medicaid eligibility categories, which may differ with respect to Medicaid benefits and costs. The number of Medicaid enrollees with private health insurance is expected to increase with the expansion of Medicaid. Selected states reported taking various steps to address challenges to ensuring that Medicaid is the payer of last resort and acknowledged recent Centers for Medicare & Medicaid Services (CMS) support, while also suggesting additional federal action. Four of the eight reviewed states reported various initiatives to improve coverage identification, such as arranging to participate in a data registry that allows participants to identify individuals with overlapping coverage. CMS has taken steps to issue TPL guidance and share some information on effective state practices, and such federal efforts should be ongoing to ensure that evolving approaches are captured and shared across states. In addition, officials in five states reported that enrollees with third-party coverage may be eligible to enroll in Medicaid managed care—in which states contract with health plans to provide services to enrollees and may delegate TPL activities such as payment recoveries to these plans. One of the five states had initiated a program to oversee plans' TPL recoveries, while other states did not report similar oversight. The National Association of Medicaid Directors reported that, in the absence of explicit CMS guidance in this area, it can be difficult for states to work with plans to improve TPL oversight and has recommended CMS provide such guidance. GAO recommends that the Secretary of the Department of Health and Human Services (HHS) direct CMS to (1) routinely monitor and share across all states information regarding key TPL efforts and challenges, and (2) provide guidance on state oversight of TPL efforts conducted by Medicaid managed care plans. HHS concurred with GAO's recommendations and noted plans to address them.
In an effort to promote and achieve various U.S. foreign policy objectives, Congress has expanded trade preference programs in number and scope over the past 3 decades. The purpose of these programs is to foster economic development through increased trade with qualified beneficiary countries while not harming U.S. domestic producers. Trade preference programs extend unilateral tariff reductions to over 130 developing countries. Currently, the United States offers the Generalized System of Preferences (GSP) and three regional programs: the Caribbean Basin Initiative (CBI), the Andean Trade Preference Act (ATPA), and the African Growth and Opportunity Act (AGOA). Special preferences for Haiti became part of CBI with enactment of the Haitian Hemispheric Opportunity through Partnership Encouragement (HOPE) Act in December 2006. The regional programs cover additional products but have more extensive criteria for participation than the GSP program. Eight agencies have key roles in administering U.S. trade preference programs. Led by the United States Trade Representative (USTR), they include the Departments of Agriculture, Commerce, Homeland Security, Labor, State, and Treasury, as well as the U.S. International Trade Commission (ITC). U.S. imports from countries benefiting from U.S. preference programs have increased significantly over the past decade. Total U.S. preference imports grew from $20 billion in 1992 to $110 billion in 2008, and most of this growth has taken place since 2000. This accelerated growth suggests an expansionary effect of increased product coverage and liberalized rules of origin for least-developed countries (LDC) under GSP in 1996 and for African countries under AGOA in 2000. In particular, much of the growth since 2000 is due to imports of petroleum from certain oil-producing nations in Africa; petroleum accounted for 79.5 percent of total imports from Sub-Saharan Africa in 2008. For example, in that same year, U.S. imports grew by 16.2 percent from oil-producing Nigeria, 51.2 percent from Angola, and 65.2 percent from the Republic of Congo. There is also evidence that leading suppliers under U.S. preference programs have "arrived" as global exporters. For example, based on a World Trade Organization (WTO) study in 2007, the three leading non-fuel suppliers of U.S. preference imports—India, Thailand, and Brazil—were among the top 20 exporters in the world and were also major suppliers to the U.S. market. Exports from these three countries also grew faster than world exports as a whole. However, these countries have not reached the World Bank's "high income" criteria, as they range from "low" to "upper middle" levels of income. GSP—the longest-standing U.S. preference program—expires December 31, 2009, as do ATPA benefits. At the same time, legislative proposals to provide additional, targeted benefits for the poorest countries are pending. Preference programs entail a number of difficult policy trade-offs. For example, the programs are designed to offer duty-free access to the U.S. market to increase beneficiary trade, but only to the extent that access does not harm U.S. industries. U.S. preference programs provide duty-free treatment for over half of the 10,500 U.S. tariff lines, in addition to those that are already duty-free on a most favored nation basis. But they also exclude many other products from duty-free status, including some that developing countries are capable of producing and exporting.
GAO's analysis showed that notable gaps in preference program coverage remain, particularly in agricultural and apparel products. For 48 GSP-eligible countries, more than three-fourths of the value of U.S. imports that are subject to duties (i.e., are dutiable) is not included in the programs. For example, just 1 percent of Bangladesh's dutiable exports to the United States and 4 percent of Pakistan's are eligible for GSP. Although regional preference programs tend to have more generous coverage, they sometimes feature "caps" on the amount of imports that can enter duty-free, which may significantly limit market access. Imports subject to caps under AGOA include certain meat products, a large number of dairy products, many sugar products, chocolate, a range of prepared food products, certain tobacco products, and groundnuts (peanuts), the latter being of particular importance to some African countries. A second, related trade-off involves deciding which developing countries can enjoy particular preferential benefits. A few LDCs in Asia are not included in the U.S. regional preference programs, although they are eligible for GSP-LDC benefits. Two of these countries—Bangladesh and Cambodia—have become major exporters of apparel to the United States and have complained about the lack of duty-free access for their goods. African private-sector representatives have raised concerns that giving preferential access to Bangladesh and Cambodia for apparel might endanger the nascent African apparel export industry that has grown up under AGOA. Certain U.S. industries have joined African nations in opposing the idea of extending duty-free access for apparel from these countries, arguing that these nations are already so competitive in exporting to the United States that their combined exports surpass those of U.S. free trade agreement partners Mexico and the CAFTA countries, as well as those of countries in the Andean and AGOA regions. This trade-off concerning which countries to include also involves decisions regarding the graduation of countries or products from the programs. The original intention of preference programs was to provide temporary trade advantages to particular developing countries, advantages that would eventually become unnecessary as countries became more competitive. Specifically, the GSP program has mechanisms to limit duty-free benefits by "graduating" countries that are no longer considered to need preferential treatment, based on income and competitiveness criteria. Since 1989, at least 28 countries have been graduated from GSP, mainly as a result of "mandatory" graduation criteria such as high-income status or joining the European Union. Five countries in the Central American and Caribbean region were recently removed from GSP and CBI/CBTPA when they entered into free trade agreements with the United States. In addition to country graduation, the U.S. GSP program also includes a process for ending duty-free access for individual products from a given country by means of import ceilings—Competitive Needs Limitations (CNL). These ceilings are reached when eligible products from GSP beneficiaries exceed specified value and import market share thresholds (LDCs and AGOA beneficiaries are exempt). Amendments to the GSP in 1984 gave the President the power to issue (or revoke) waivers for CNL thresholds under certain circumstances, for example through a petition from an interested party, or when total U.S.
imports from all countries of a product are small or "de minimis." In 2006, Congress passed legislation affecting when the President should revoke certain CNL waivers for so-called "super competitive" products. In 2007, the President revoked eight CNL waivers. Policymakers face a third trade-off in setting the duration of preferential benefits in authorizing legislation. Preference beneficiaries and U.S. businesses that import from them agree that longer and more predictable renewal periods for program benefits are desirable. Private-sector and foreign government representatives have stated that short program renewal periods discourage longer-term productive investments that might be made to take advantage of preferences, such as factories or agribusiness ventures. Members of Congress have recognized this argument with respect to Africa and, in December 2006, Congress renewed AGOA's third-country fabric provisions until 2012 and AGOA's general provisions until 2015. However, some U.S. officials believe that periodic program expirations can be useful as leverage to encourage countries to act in accordance with U.S. interests such as global and bilateral trade liberalization. Furthermore, making preferences permanent may deepen resistance to U.S. calls for developing country recipients to lower barriers to trade in their own markets. Global and bilateral trade liberalization is a primary U.S. trade policy objective, based on the premise that increased trade flows will support economic growth for the United States and other countries. Spokesmen for countries that benefit from trade preferences have told us that any agreement reached under the Doha round of global trade talks at the WTO must, at a minimum, provide a significant transition period to allow beneficiary countries to adjust to the loss of preferences. GAO found that preference programs have proliferated over time and have become increasingly complex, which has contributed to a lack of systematic review. In response to differing statutory requirements, agencies involved in implementing trade preferences pursue different approaches to monitoring the various criteria set for these programs. We observed advantages to each approach, but individual program reviews appeared disconnected and resulted in gaps. For example, some countries that passed review under regional preference programs were later subject to GSP complaints. Moreover, we found that there was little to no reporting on the impact of these programs. To address these issues, GAO recommended that USTR periodically review beneficiary countries, in particular those that have not been considered under GSP or regional programs. Additionally, we recommended that USTR periodically convene the relevant agencies to discuss the programs jointly. In our March 2008 report, we also noted that even though there is overlap in various aspects of trade preference programs, Congress generally considers these programs separately, partly because they have disparate termination dates. As a result, we suggested that Congress consider whether trade preference programs' review and reporting requirements might be better integrated to facilitate evaluating progress in meeting shared economic development goals. In response to the recommendations discussed above, USTR officials told us that the relevant agencies will meet at least annually to consider ways to improve program administration, to evaluate the programs' effectiveness jointly, and to identify any lessons learned.
USTR has also changed the format of its annual report to discuss the preference programs in one place. In addition, we believe that Congressional hearings in 2007 and 2008, and again today, are responsive to the need to consider these programs in an integrated fashion. Beyond the recommendations based on GAO analysis, we also solicited options from a panel of experts convened by GAO in June 2009 to discuss ways to improve the competitiveness of the textile and apparel sector in AGOA beneficiary countries. While the options were developed in the context of AGOA, many of them may be applicable to trade preference programs in general.

Align Trade Capacity Building with Trade Preference Programs: Many developing countries have expressed concern about their inability to take advantage of trade preferences because they lack the capacity to participate in international trade. AGOA is the only preference program whose authorizing legislation refers to trade capacity building assistance; however, funding for this type of assistance is not provided under the Act. In the course of our research on the textile and apparel inputs industry in Sub-Saharan African countries, many experts we consulted considered trade capacity building a key component for improving the competitiveness of this sector.

Modify Rules of Origin among Trade Preference Program Beneficiaries and Free Trade Partners: Some African governments and representatives of the textile and apparel inputs industry in Sub-Saharan African countries suggested modifying rules of origin provisions under other U.S. trade preference programs or free trade agreements to provide duty-free access for products that use AGOA textile and apparel inputs. Similarly, they suggested simplifying AGOA rules of origin to allow duty-free access for certain partially assembled apparel products with components originating outside the region.

Create Non-Punitive and Voluntary Incentives: Some of the experts we consulted believe that the creation of non-punitive and voluntary incentives to encourage the use of inputs from the United States or its trade preference partners could stimulate investment in beneficiary countries. One example of the incentives discussed was the earned import allowance programs currently in use for Haiti and the Dominican Republic. Such a program allows producers to export to the United States, duty free, certain amounts of apparel made from third-country fabric, provided they import specified volumes of U.S. fabric. Another proposal put forth by industry representatives was for a similar "duty credit" program for AGOA beneficiaries. A simplified duty credit program would create a non-punitive incentive for use of African regional fabric. For example, a U.S. firm that imports jeans made with African-origin denim would earn a credit to import a certain amount of jeans from Bangladesh, duty free. However, some experts indicated that the application of these types of incentives should be considered in the context of each trade preference program, as the programs have specific differences that may keep an incentive from being applicable across programs.

While these options were suggested by experts in the context of a discussion on the African Growth and Opportunity Act, many of them may be helpful in considering ways to further improve the full range of preference programs, since many GSP LDCs face the same challenges as the poorer African nations.
Some of the options presented would require legislative action while others could be implemented administratively. Mr. Chairman, thank you for the opportunity to summarize the work GAO has done on the subject of preference programs. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information on this testimony, please contact Loren Yager at (202) 512-4347, or by e-mail at [email protected]. Juan Gobel, Assistant Director; Gezahegne Bekele; Ken Bombara; Karen Deans; Francisco Enriquez; R. Gifford Howland; Ernie Jackson; and Brian Tremblay made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. trade preference programs promote economic development in poorer nations by providing duty-free export opportunities in the United States. The Generalized System of Preferences, Caribbean Basin Initiative, Andean Trade Preference Act, and African Growth and Opportunity Act unilaterally reduce U.S. tariffs for many products from over 130 countries. However, two of these programs expire partially or in full this year, and Congress is exploring options as it considers renewal. This testimony describes the growth in preference program imports, identifies policy trade-offs, and summarizes the Government Accountability Office (GAO) recommendations and options suggested by a panel of experts on the African Growth and Opportunity Act (AGOA). The testimony is based on studies issued in September 2007, March 2008, and August 2009. For those studies, GAO analyzed trade data, reviewed trade literature and program documents, interviewed U.S. officials, did fieldwork in nine countries, and convened a panel of experts. Total U.S. preference imports grew from $20 billion in 1992 to $110 billion in 2008, with most of this growth taking place since 2000. The increases from preference program countries primarily reflect the addition of new eligible products, increased petroleum imports from some African countries, and the rapid growth of exports from countries such as India, Thailand, and Brazil. Preference programs give rise to three critical policy trade-offs. First, opportunities for beneficiary countries to export products duty free must be balanced against U.S. industry interests. Some products of importance to developing countries, notably agriculture and apparel, are ineligible by statute as a result. Second, some developing countries, such as Bangladesh and Cambodia, are not included in U.S. regional preference programs; however, there is concern that they are already competitive in marketing apparel to the United States and that giving them greater duty-free access could harm the apparel industry in Africa and elsewhere. Third, Congress faces a trade-off between longer preference program renewals, which may encourage investment, and shorter renewals, which may provide leverage to encourage countries to act in accordance with U.S. interests such as trade liberalization. GAO reported in March 2008 that preference programs have proliferated and become increasingly complex, which has contributed to a lack of systematic review. Moreover, GAO found that there was little to no reporting on the impact of these programs. In addition, GAO solicited options from a panel of experts in June 2009 for improving the competitiveness of the textile and apparel sector in AGOA countries. Options they suggested included aligning trade capacity building with trade preference programs, modifying rules of origin to facilitate joint production among trade preference program beneficiaries and free trade partners, and creating non-punitive and voluntary incentives to encourage the use of inputs from the United States or its trade preference partners to stimulate investment in beneficiary countries.
DON is a major component of the Department of Defense (DOD), consisting of the Navy and the Marine Corps. It is a large and complex organization whose primary mission is to organize, train, maintain, and equip combat-ready naval forces capable of winning wars, deterring aggression by would-be foes, preserving freedom of the seas, and promoting peace and security. To support this mission, DON performs a variety of interrelated and interdependent information technology (IT)-dependent functions. In fiscal year 2010, DON's IT budget was approximately $7.4 billion, for 971 investments. NGEN is one such system investment. NGEN is to provide secure data and IT services, such as data storage, e-mail, and video-teleconferencing, to the Navy and the Marine Corps. NGEN is also intended to provide the foundation for DON's future Naval Networking Environment. DON is acquiring NGEN through multiple providers (contractors) to replace and improve the enterprise network and services provided by NMCI. It is to be developed incrementally, with the first increment providing comparable NMCI capabilities, additional information assurance, and increased government control of the network. Future increments have yet to be defined. The program's preliminary life cycle cost estimate (through fiscal year 2025) for the first increment is about $50 billion. As of September 30, 2010, the NGEN program had reportedly spent about $432 million. To bridge the time frame between the end of the NMCI contract and the full transition to NGEN, DON awarded a $3.7 billion continuity of services contract in July 2010 to the NMCI service provider, Hewlett Packard Enterprise Services. In addition to providing continuity of network services, the contract includes transition services and the transfer to DON of NMCI infrastructure and intellectual property, as the NGEN contracts are to require use of the NMCI infrastructure and access to processes, procedures, and technical data. The continuity of services contract is scheduled to run from October 2010 through April 2014. To reduce risk during the transition period from NMCI to NGEN, DON is currently performing eight early transition activities. The activities are discrete efforts intended to establish government management capabilities, allow for greater participation in operational decisions, and help expedite the transition time. Table 1 describes each of these activities. To deliver NGEN capabilities, DON plans to award five contracts. See table 2 for a description of these contracts. According to the NGEN Acquisition Strategy, DON plans to complete the Marine Corps' initial transition to NGEN in January 2012 and final transition in February 2013. The Navy's initial and final transitions to NGEN are scheduled to be completed in December 2012 and March 2014, respectively. To manage the acquisition and deployment of NGEN, DON established a program management office within the Program Executive Office for Enterprise Information Systems. The program office manages the program's cost, schedule, and performance and is responsible for ensuring that the program meets its objectives. In addition, various DOD and DON organizations share program oversight and review responsibilities. Table 3 lists key entities and their roles and responsibilities. NGEN is subject to both Office of the Secretary of Defense (OSD) and DON Major Automated Information System (MAIS) acquisition policy and guidance, which require it to comply with Defense Acquisition System (DAS) requirements.
According to these requirements, all MAIS programs require a Materiel Development Decision prior to entering the first DAS phase. In making this decision, the milestone decision authority is to review the Initial Capabilities Document, which defines operational goals and needed capabilities, and to authorize the phase in which the program will enter the DAS. The DAS consists of five key program life cycle phases and three related milestone decision points. Table 4 provides a description of each DAS phase. In addition to Defense Acquisition System requirements, according to DON guidance and policy, all DON MAIS and pre-MAIS programs are required to go through a "Two-Pass/Six-Gate" acquisition review process. The first pass, which consists of Gates 1 through 3, is focused on requirements development and validation and is led by the Chief of Naval Operations or the Commandant of the Marine Corps. The second pass, which consists of Gates 4 through 6, is focused on developing and delivering a solution via systems engineering and acquisition and is led by the Assistant Secretary of the Navy (Research, Development, and Acquisition). In addition to meeting specific criteria for passing a given gate and proceeding to the next gate, all gate reviews are to consider program health (i.e., satisfactory cost and schedule performance, known risks, and budget adequacy) in deciding whether to proceed. Table 5 lists the key purpose of each gate review. The DAS and DON acquisition phases and decision points for MAIS programs are illustrated in figure 1. As depicted in figure 2, DON completed a Gate 3 review of NGEN requirements in April 2008. In April 2009, the DON CIO completed the AOA for NGEN increment 1, and at the Gate 2 review the same month, the Deputy Chief of Naval Operations (Integration of Capabilities and Resources) and the Deputy Marine Corps Commandant for Programs and Resources approved the AOA for submission to the NGEN AOA Advisory Group. The advisory group subsequently approved the analysis and forwarded it in April 2009 to OSD Cost Assessment and Program Evaluation (CAPE), which approved it in December 2009. DON conducted a Gate 4 review of its System Design Specification in November 2009 and a Gate 5 review of its Transport Services request for proposal in October 2010. DON plans to conduct a Gate 6 review in July 2011. In May 2010, the USD (AT&L) completed the NGEN Materiel Development Decision, which designated the first increment of NGEN as a MAIS and authorized the program to enter the DAS in the production and deployment phase. A Milestone C review is currently planned for August 2011. In June 2010, the USD (AT&L) approved the current acquisition approach. An AOA is intended to help identify the most promising acquisition approach by comparing alternative solutions' costs and operational effectiveness. The NGEN AOA contained key weaknesses in its cost estimates and operational effectiveness analysis that impaired its ability to inform investment decision making. Further, none of the alternatives in this analysis match the current acquisition approach, and these differences have not been analyzed to determine the breadth of risk that exists. According to DON officials, the AOA reflects the most that could be accomplished in the time available to meet an imposed deadline. In addition, OSD officials stated that the differences between the current approach and the alternatives that were assessed are, in their view, not significant.
However, the current approach is estimated to cost at least $4.7 billion more than any of the AOA alternatives. Without sufficient information to understand the differences in the relative costs and operational effectiveness among alternatives, decision makers lack assurance that their selected approach is the most promising and cost-effective course of action. According to relevant guidance, a key component of an AOA is a cost analysis that provides cost estimates for each alternative. As such, cost estimates should be reliable in order to provide the basis for informed investment decision making, realistic budget formulation, meaningful progress measurement, and accountability for results. Our research has identified four characteristics of a high-quality, reliable cost estimate: well-documented, comprehensive, accurate, and credible. The NGEN AOA assessed four alternatives. All alternatives were assumed to deliver the same NMCI capabilities, and the technology among the alternatives was assumed to be substantially the same. The primary differences among the alternatives were how NGEN was to be acquired, managed, and operated. Table 6 below provides a description of each alternative. The four alternatives' estimated costs for increment 1 from fiscal year 2011 to fiscal year 2015 ranged from $10.25 billion (alternative 1) to $10.84 billion (alternatives 2 and 3V). However, the estimates were not reliable because they substantially met only one of the characteristics of reliable cost estimates. Specifically:

The AOA cost estimates were substantially well-documented. To be well-documented, the cost estimates should state the purpose of the estimate; provide program background, including a system description; provide a schedule for developing the estimates; specify the scope of the estimate (in terms of time and what is and is not included); disclose key ground rules and assumptions, data sources, calculations performed and their results, the estimating methodology and rationale, and the results of a risk analysis; and provide a conclusion about whether the cost estimate is reasonable. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced to their sources. Finally, the cost estimates should be reviewed and accepted by management. Although the AOA did not sufficiently document the schedule, scope, and results of the risk analysis, it defined the purpose of the estimate; provided program background (e.g., system description); and disclosed ground rules and assumptions, data sources, calculations performed and their results, and the estimating methodology. Also, the data used to derive the estimates were captured in such a way that they could largely be traced to their sources, and the final AOA was reviewed and accepted by DON and OSD oversight entities.

The AOA cost estimates were not comprehensive. To be comprehensive, the cost estimates should include all government and contractor costs over the program's full life cycle, from program inception through design, development, deployment, and operation and maintenance to retirement. They should also provide sufficient detail and reflect all cost-influencing ground rules and assumptions. However, the cost estimates were not full life cycle costs.
Instead, they included government and contractor costs only for a 5-year period from fiscal year 2011 to fiscal year 2015, covering 2 years of continued NMCI services with the current provider, 2 years of transition to the new provider(s), and 1 year of NGEN operation and maintenance. DON and OSD CAPE officials attributed this to the assumption that NGEN increment 1 contracts would have a 5-year period of performance and that future NGEN increments might be introduced after that period. Further, while the estimates were based on a cost element structure that was decomposed to a sufficient level of detail, and the documentation largely identified ground rules and assumptions, the cost estimates did not reflect all assumptions identified in the AOA, such as schedule and performance risks associated with (1) implementing IT processes, (2) expanding the government workforce, and (3) formulating the NGEN contracts. These were significant cost-influencing risks and thus should have been incorporated into the estimates.

The AOA cost estimates were not substantially accurate. To be accurate, the cost estimates should not be overly conservative or optimistic and should be, among other things, based on an assessment of the most likely costs and adjusted properly for inflation. In addition, steps should be taken to minimize mathematical mistakes and to ground the estimate in documented assumptions that can be verified by supporting data and a historical record of actual cost and schedule experiences on comparable programs. To DON's credit, the cost estimates were developed based on NMCI historical cost data, were adjusted properly for inflation, contained few mathematical mistakes, and were largely grounded in documented assumptions. However, the supporting data for key assumptions were not verified. For example, all estimates assumed that transition activity costs would amount to about 18 percent of the estimated cost of NGEN in its first year of operation, and alternative 3's estimate assumed that total cost would be reduced by 10 percent due to increased competition from its multicontract approach. However, the supporting data used by Deloitte Consulting for these assumptions were not provided to DON or the independent government review team for verification because the data were proprietary to the contractor. Further, NMCI historical data were only available at an aggregate level, so the team had to rely on subject-matter experts and other sources to estimate costs at a finer level of detail.

The AOA cost estimates were not credible. To be credible, the cost estimates should discuss any limitations in the analysis due to uncertainty or biases surrounding the data and assumptions. Major assumptions should be varied and other outcomes computed to determine how sensitive the estimate is to changes in the assumptions. Risk and uncertainty inherent in the estimate should be assessed and disclosed. Further, the estimate should be properly verified by, for example, comparing the results with an independent cost assessment. While the AOA identified limitations in the cost analysis, such as the use of NMCI data that did not reflect prices of other service providers, and evaluated the impact on costs of using different transition timeline scenarios, it did not include a sensitivity analysis of the key cost driver (i.e., the number of personnel needed to manage NGEN), despite concerns that the Navy's estimates of these numbers had not stabilized at the time of the AOA.
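For illustration of the kind of one-way sensitivity analysis the AOA omitted, the following minimal sketch, in Python, varies a single cost driver (the number of government personnel) across a plausible range and reports the resulting swing in the total estimate. All figures are hypothetical and are not actual NGEN values.

```python
# Illustrative only: a one-way sensitivity analysis on a single cost driver.
# Personnel counts, unit costs, and fixed costs below are all hypothetical.

def estimate_cost(personnel, cost_per_person, fixed_costs):
    """Total estimate: fixed (non-labor) costs plus government labor costs."""
    return fixed_costs + personnel * cost_per_person

baseline_personnel = 2_000   # assumed point estimate of the key driver
cost_per_person = 150_000    # assumed fully burdened annual cost per person
fixed_costs = 9.5e9          # assumed non-labor costs over the period

baseline = estimate_cost(baseline_personnel, cost_per_person, fixed_costs)
print(f"Baseline estimate: ${baseline / 1e9:.2f} billion")

# Vary the driver across a range and report the swing in the total, which
# shows how sensitive the estimate is to an unstable personnel count.
for pct in (-0.25, -0.10, 0.10, 0.25):
    personnel = round(baseline_personnel * (1 + pct))
    total = estimate_cost(personnel, cost_per_person, fixed_costs)
    print(f"Personnel {pct:+.0%} ({personnel:>5}): "
          f"${total / 1e9:.2f} billion ({(total - baseline) / baseline:+.2%})")
```

Even this simple exercise makes visible how an unstable personnel count translates into uncertainty in the total estimate, which is the insight a sensitivity analysis is meant to surface for decision makers.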
In addition, while each cost estimate included a cost risk analysis based on the quality of the data used, there were discrepancies in how the analysis was conducted and reported. For example, the cost for local area network facilities was estimated based on the contractor's experience, which the cost team considered a less credible data source, but it was scored higher on the team's risk scale, indicating that the data source was more credible. Also, schedule and performance risks were not quantified and reflected in the estimates, which is significant because a qualitative assessment of schedule and performance risks among alternatives revealed increased risk in implementing a segmented approach. If such risks had been quantified and reflected in the estimates, the results would have shown higher costs associated with alternatives 3 and 3V. Nevertheless, the AOA concluded that there was no significant cost difference among the alternatives. In addition, the cost estimates were not validated by the independent team responsible for reviewing the cost analysis. Specifically, independent review team officials told us that they participated in a line-by-line review of the cost model, where they raised comments and questions to the cost team. However, about 69 percent of the team's comments were not fully addressed, and these included notable concerns, such as the questionable use of certain industry-based assumptions that may not be comparable to a program as large as NGEN or to the government environment. Independent review team officials attributed the unresolved comments to the fact that the team did not have authority over the cost model to ensure that its comments were addressed. Further, these officials told us that they were not asked to review the final version of the cost model, which was the version that first introduced alternative 3V, and their review of the final version of the AOA report occurred after the DON CIO had submitted it to OSD CAPE for final approval. According to officials responsible for developing the AOA, the weaknesses in the AOA cost estimates largely exist because there was not enough time to conduct a more thorough analysis. Specifically, they told us that the AOA schedule was constrained because the program wanted to issue requests for proposals for NGEN contracts by a predetermined date. This position was also supported by various management meeting minutes and other artifacts that we reviewed. However, DOD and DON officials disagreed with this position and told us that the time allotted to conduct the AOA did not negatively impact its quality or scope. A time-constrained approach is not consistent with DOD guidance, which states that the scope of an alternatives analysis should be proportionate to the amount of resources affected by the decision, with more significant programs receiving more analytical attention. The combination of the AOA weaknesses we identified and the fact that NGEN has a preliminary program life cycle cost estimate of about $50 billion for increment 1 and is intended to provide the foundation for DON's future networking environment suggests the need for considerable analytical attention to alternative approaches. Without reliable cost estimates for each alternative, decision makers did not have a sound basis for making an informed decision on an NGEN solution.
Most notably, since the estimates did not reflect the increased risks associated with the segmented approach, the differences in the alternatives' costs were understated, and the amount of risk and cost accepted by proceeding with a segmented approach was not fully understood. In addition to including reliable cost estimates, an AOA should assess how well each alternative satisfies required capabilities or goals. According to DOD guidance, such an analysis should (1) identify the operational capabilities and goals to be achieved with the system solution, (2) establish quantitative or qualitative measures for evaluating the operational effectiveness of each alternative, and (3) assess the ability of each alternative to achieve these measures. While the AOA identified program capabilities and goals, it did not sufficiently assess the alternatives' operational effectiveness, making it unclear how well the alternatives would actually satisfy NGEN capabilities and goals. Specifically:

The AOA identified capabilities and goals that the system solution should achieve. Among other things, these included addressing NMCI capability limitations identified on the basis of 8 years of operational experience, as well as capabilities needed to support DOD and DON networking strategies for DOD's Global Information Grid Network Operations and DON's future Naval Networking Environment. (See table 8 for these capabilities and goals.)

The AOA did not establish quantitative or qualitative measures for assessing the alternatives' ability to achieve the identified NGEN capabilities and goals, as shown in table 8. For example, one of the capabilities was visibility into the root causes of major network outages; the AOA merely concluded that alternatives 2, 3V, and 3 were equally effective in addressing it, even though no quantitative or qualitative measures of the alternatives' respective abilities to provide such visibility were defined. Further, the AOA did not discuss the methodology for assessing the alternatives; rather, it simply stated that the assessment was qualitative. While the AOA did not establish measures for assessing the alternatives' ability to achieve NGEN capabilities and goals, it did establish several quantitative measures to differentiate among the alternatives' respective approaches to acquiring, managing, and delivering NMCI capabilities. However, these measures alone do not show how each alternative's operational effectiveness would be affected, because they were not linked to NGEN capabilities and goals, and they did not provide sufficient insight for selecting a preferred alternative. For example, while the AOA recognized that an increase in the number of contractual relationships would result in more complexity and risk in implementing the alternative, it did not include measures for quantifying how much more risk is introduced as the number of contractual relationships increases. (See table 7 for the measures that were provided in the AOA.)

In addition, the AOA included a separate assessment of the likelihood of each alternative to successfully implement IT best practices for end-to-end IT service delivery (i.e., the IT Service Management framework). To DON's credit, the approach used to measure the alternatives in this assessment was more structured and better documented.
Specifically, the AOA team conducted table-top exercises with subject-matter experts representing each of the communities that will contribute to the acquisition, operation, and oversight of NGEN, and it worked through scenarios, such as everyday operations and responding to a computer network incident, to determine the extent to which each alternative could employ IT best practices to address a given scenario. The team captured comments made by participants and used them to infer rankings that resulted in numerical scores for each alternative. The AOA did not assess the alternatives' ability to address capabilities and goals using defined measures of operational effectiveness because, as stated previously, no measures were established. Instead, it compared the alternatives based on qualitative determinations of whether each capability or goal was either met or partially met. (See table 8 for the results of DON's assessment.) As with the cost estimates, officials responsible for developing the AOA told us that the operational effectiveness analysis was subject to time constraints so that requests for proposals could be issued on time. Although DOD and DON officials told us that the time allotted to conduct the AOA did not negatively impact its quality or scope, our review suggests otherwise. Further, the time-constrained approach is not consistent with DOD guidance, which states that the scope of an alternatives analysis should be proportionate to the resources affected by the decision, with more significant programs receiving more analytical attention. Without a more thorough effectiveness analysis, decision makers did not have a sound basis for making an informed decision on the best NGEN alternative to pursue. Instead, DON selected a segmented approach on the basis that it would provide increased flexibility in meeting NGEN capabilities and goals at no additional cost, even though the degree of increased flexibility among the alternatives remains unclear. According to DOD guidance, an AOA should examine viable solutions with the goal of identifying the most promising option, thereby informing acquisition decision making. However, the segmented approach currently being pursued by DON was not one of the alternatives assessed in the AOA. Specifically, the current approach has more contracts, a different segmentation scheme, and a different transition timeline than the analyzed alternatives. Further, the impact of these differences, in terms of how they compare to the original alternatives, was not assessed. The approach being pursued by the program office includes a higher number of contracts than those analyzed in the AOA. Given that the AOA highlighted greater schedule and performance risks as the number of contracts and contractual relationships in an approach increases, the relative schedule and performance risks for the current approach are likely greater than those for alternative 3 and are therefore likely to result in greater costs. In support of this likelihood, DON's November 2009 risk-adjusted preliminary program life cycle cost estimate for the current approach for fiscal year 2011 through fiscal year 2015 shows that the current approach will cost at least an estimated $4.7 billion more than any of the alternatives in the AOA. (See table 9 for a comparison of the current approach to the approaches assessed in the AOA and fig. 3 for an illustration of the contractual relationships associated with DON's current approach.)
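As discussed above, the AOA compared the alternatives using qualitative met/partially met determinations rather than defined measures. For illustration only, the following minimal sketch, in Python, shows one conventional way to make such a comparison quantitative: scoring each alternative against weighted effectiveness measures. The alternatives, measures, weights, and scores below are all hypothetical and do not reflect the actual AOA.

```python
# Illustrative only: a weighted-scoring comparison of alternatives against
# effectiveness measures. All names, weights, and scores are hypothetical.

measures = {  # measure -> weight (weights sum to 1.0)
    "Root-cause visibility": 0.40,
    "Government control": 0.35,
    "Transition risk": 0.25,
}

scores = {  # alternative -> score per measure, 1-5 scale where higher is
            # better (for risk, a higher score means lower risk)
    "Alt 1": {"Root-cause visibility": 2, "Government control": 2, "Transition risk": 5},
    "Alt 2": {"Root-cause visibility": 4, "Government control": 4, "Transition risk": 3},
    "Alt 3": {"Root-cause visibility": 4, "Government control": 5, "Transition risk": 2},
}

def weighted_score(alt_scores):
    """Combine per-measure scores into one weighted effectiveness score."""
    return sum(measures[m] * s for m, s in alt_scores.items())

for alt, alt_scores in sorted(scores.items(),
                              key=lambda kv: weighted_score(kv[1]),
                              reverse=True):
    print(f"{alt}: {weighted_score(alt_scores):.2f}")
```

Defined weights and scores of this kind, even if partly judgmental, make the basis for a preferred-alternative selection explicit and traceable in a way that met/partially met determinations do not.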
OSD CAPE officials told us that they believe the differences between the current approach and the alternatives assessed in the AOA are not significant because DON is still pursuing a segmented approach and the differences were the result of "an appropriate evolution of the segmented approach." They further said that the increased risks in the current approach are offset by mitigating factors, such as the use of staggered phases to implement NGEN and the use of more efficient segmentation schemes. However, we have yet to receive any analysis to support these positions, and the current approach is estimated to cost about $4.7 billion more. As a result, DON cannot demonstrate that it is pursuing the most cost-effective approach for acquiring NGEN capabilities and meeting NGEN goals. The success of a large-scale acquisition program depends in part on having a reliable schedule that defines, among other things, when work activities and milestone events will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for systematic program execution but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. Without a reliable schedule, it is likely that established program milestones will slip. In the case of NGEN, such delays are already being experienced. Our work has identified nine best practices associated with developing and maintaining a reliable schedule. These are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating schedule activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying reasonable "float" between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations. See table 10 for a description of each of these best practices. In December 2009, NGEN established a baseline integrated master schedule composed of over 25 separate underlying schedules (or subschedules) to capture program milestones and the expected completion dates for activities leading up to them. However, the most current version of this schedule (May 2010) that was available at the time we began our review was not reliable, because only two of the four subschedules that we analyzed substantially met any of the nine practices. The results of our analysis of the four subschedules are summarized in table 11.

Capturing all activities. All four subschedules partially met this practice. Specifically, the majority of the activities contained in these subschedules could be mapped back to the program's NGEN work breakdown structure. However, this structure is defined at a high level and is not expected to be further decomposed into planned work products and deliverables until the program enters the deployment phase, when NGEN contracts are awarded. Until this structure is sufficiently defined, it cannot be determined whether the program schedules capture all work needed to accomplish program objectives. For example, we identified risk mitigation activities for 10 active risks that should have been, but were not, captured as scheduled work. During our review, program officials told us that they had since taken steps to ensure that all risk mitigation activities are added to the schedule.
However, until NGEN work is sufficiently defined, the program does not have complete assurance that the activities currently captured in the various schedules support NGEN increment 1. Sequencing all activities. One subschedule substantially met this practice while the other three minimally met it. The subschedule that substantially met this practice had less than 1 percent of activities missing a predecessor or successor dependency. Of the remaining three subschedules, two did not identify predecessor or successor activities for over half of the activities in their schedules. This is of concern because if an activity that has no logical successor slips, the schedule will not reflect the effect of the slip on the critical path, float, or scheduled start dates of "downstream" (i.e., later) activities. Additionally, one subschedule had "constraints" placed on about 73 percent of its activities, meaning that these activities cannot begin earlier even if upstream work is completed ahead of schedule. According to program officials, they are working to reduce the number of constraints in the schedule. However, until activities are properly sequenced, these issues reduce the credibility of the dates calculated by the scheduling tool. Assigning resources to all activities. Program officials told us that they do not assign resources to any of the program schedules. They stated that the effort necessary to assign resources within the schedules would be significant and that they did not have the staff available to do this. However, without proper allocation of resources in the schedule, the program office cannot accurately forecast the likelihood that activities will be completed by their projected end dates, and the risk that key milestones will slip increases. Establishing the duration of all activities. Two subschedules met this practice while two only minimally met it. The two subschedules that met this practice had established activities with reasonable durations, the majority of which were under 30 days. The remaining two did not establish reasonable durations for their activities. For example, the majority of the activities that were in progress for the Transition Integrated Product Team subschedule had durations ranging from 50 days to 1,000 days. When such long durations are assigned to work activities, it is likely that the activity is not defined to the level necessary to identify all the work that must be performed. Integrating schedule activities horizontally and vertically. One of the subschedules substantially met and the other three partially met this practice. The subschedule that substantially met the practice is horizontally aligned, meaning activities are logically sequenced, and vertically aligned, meaning that detailed activities roll up into larger summary tasks. The other three subschedules are also vertically aligned; however, they are unable to demonstrate horizontal integration because, as previously discussed, their activities were not all logically sequenced. The integration issues identified in these subschedules also affect the NGEN master schedule. Because of the high number of missing dependencies, the number of in-progress activities with durations exceeding 30 days, and the high number of constraints, the master schedule is likely not fully horizontally integrated. Further, one of the subschedules is not vertically aligned with the master schedule because none of the key work activities in the subschedule were included in the master schedule.
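To illustrate the kind of sequencing check involved in assessing this practice, the sketch below flags activities that lack predecessor or successor logic or that carry date constraints. This is a minimal illustration in Python under assumed data; the activity records, field names, and thresholds are hypothetical and are not drawn from the NGEN subschedules or the program's scheduling tool.

```python
# Illustrative schedule sequencing check; activity records, field names,
# and data are hypothetical, not taken from the NGEN subschedules.

activities = [
    {"id": "START", "preds": [],        "succs": ["A1"], "constrained": False},
    {"id": "A1",    "preds": ["START"], "succs": [],     "constrained": True},
    {"id": "A2",    "preds": [],        "succs": [],     "constrained": True},
]

def sequencing_report(acts):
    """Flag missing logic links and count date-constrained activities.

    A fuller check would exempt only the start and finish milestones,
    which legitimately lack a predecessor or successor, respectively.
    """
    missing_pred = [a["id"] for a in acts if not a["preds"] and a["id"] != "START"]
    missing_succ = [a["id"] for a in acts if not a["succs"] and a["id"] != "FINISH"]
    pct_constrained = 100.0 * sum(a["constrained"] for a in acts) / len(acts)
    return missing_pred, missing_succ, pct_constrained

preds, succs, pct = sequencing_report(activities)
print("missing predecessors:", preds)            # ['A2']
print("missing successors:", succs)              # ['A1', 'A2']
print(f"constrained: {pct:.0f}% of activities")  # 67% of activities
```

An analyst could run a report of this kind against each subschedule to quantify how far it departs from the sequencing practice, much as the percentages cited above do.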
In addition, the master schedule was not integrated with the approved NGEN acquisition strategy. Program officials told us that they did not revise the dates in the master schedule until after the continuity of services contract was awarded (July 2010), and that the dates in the acquisition strategy reflected the most current information. Because oversight officials were relying on a source other than the program office's working schedule, their expectations about when milestones will be met may not be realistic. Establishing the critical path for all activities. None of the four subschedules fully met this practice. Specifically, the scheduling tool was unable to generate a valid critical path for the subschedules due to the extent of issues associated with the sequencing of activities, the integration of activities, and the identification of reasonable float (discussed below). Program officials stated that they do not manage to a critical path generated by the scheduling tool. Instead, these officials stated that they track activities associated with the deployment phase decision (Milestone C), which they have designated as being critically important to them. However, this practice does not give the program immediate insight into the full sequence of activities (both critical and noncritical) that, if delayed, would affect the planned completion date of Milestone C, or allow it to project a new completion date should one of these activities be delayed. Identifying reasonable float between activities. Two subschedules partially met this practice, while the remaining two minimally met it. Each of these subschedules identified float; however, the amount of excessive float varied. Both the Contract Technical Representative Workforce Reconstitution and IT Service Management Process Development subschedules partially met this practice because only 25 percent and 41 percent of their work activities, respectively, had float of 100 days or greater. The two remaining subschedules minimally met this practice because over 60 percent of their activities contained float of 100 days or greater. Excessive float values are indicative of schedule logic that is flawed, broken, or absent. As such, these float values are of limited use for mitigating risk by reallocating resources from tasks that can safely slip to tasks that must be completed on time. Conducting a schedule risk analysis. The program has not performed a schedule risk analysis. Instead, according to program officials, schedule risks are considered during risk management board meetings and program health assessments. However, without this analysis, it is not possible to determine a level of confidence in meeting program milestones. A schedule risk analysis also calculates schedule reserve, which can be set aside for those activities identified as high risk. Without this reserve, the program has no buffer to absorb delays that occur on critical path activities. Updating the schedule using logic and durations. All four subschedules partially met this practice. According to program officials, status updates are performed on the subschedules once a week. However, despite these status updates, date anomalies exist. For example, the Contract Technical Representative Workforce Reconstitution subschedule included five activities with an actual start date in the future. Furthermore, the subschedules' inability to produce a valid critical path indicates that the sequencing of activities is not appropriate, thus impairing the scheduling tool's ability to generate realistic start and end dates.
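The critical path and float concepts at issue here can be made concrete with a small worked example. The sketch below is a minimal Python illustration of the standard critical path method forward and backward passes over an invented five-activity network (not NGEN data): activities with zero total float form the critical path, and any slip in them delays the project finish date.

```python
# Minimal critical path method (CPM) sketch; the five-activity network
# and durations are invented and are not drawn from the NGEN schedule.

network = {  # durations in days; preds = predecessor activity names
    "design": {"dur": 10, "preds": []},
    "build":  {"dur": 20, "preds": ["design"]},
    "test":   {"dur": 5,  "preds": ["build"]},
    "train":  {"dur": 3,  "preds": ["design"]},   # parallel branch
    "deploy": {"dur": 2,  "preds": ["test", "train"]},
}

# Forward pass: earliest start/finish. (Activities are listed in
# topological order, so each predecessor is computed before its successors.)
early = {}
for act, info in network.items():
    es = max((early[p][1] for p in info["preds"]), default=0)
    early[act] = (es, es + info["dur"])

finish = max(ef for _, ef in early.values())  # project finish: day 37

# Backward pass: latest start/finish that do not delay the project finish.
late = {}
for act in reversed(list(network)):
    succs = [s for s, i in network.items() if act in i["preds"]]
    lf = min((late[s][0] for s in succs), default=finish)
    late[act] = (lf - network[act]["dur"], lf)

# Total float = late start - early start; zero-float activities are critical.
for act in network:
    slack = late[act][0] - early[act][0]
    label = "CRITICAL" if slack == 0 else f"float = {slack} days"
    print(f"{act:6s} start day {early[act][0]:2d}  {label}")
```

In this toy network, design, build, test, and deploy carry zero float and form the critical path, while train can slip 22 days without delaying the finish. A schedule with missing dependencies or pervasive constraints defeats exactly this computation, which is why the scheduling tool could not produce a valid critical path.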
According to program officials, they were aware of some of these schedule weaknesses based on a May 2010 assessment of the schedule performed by a support contractor. Among other things, the contractor's assessment found that the schedule did not provide for stakeholder review of most of the major acquisition documents or steps to mitigate known risks, and that it lacked a valid critical path due to network logic issues and activity constraints. Officials told us that they plan to address these issues. In addition, program officials stated that they hold monthly program management reviews to discuss schedule quality issues, as well as risks or issues that might affect the schedule. However, these reviews are not addressing key schedule issues. Specifically, the NGEN schedule management plan calls for the schedule to be resource-loaded from a centralized resource pool approved by the program manager, for activities beginning within 90 days to have durations of no more than 20 days, and for activities mitigating approved program risks to be added to the schedule. However, our analysis of the schedule showed that resources are not assigned within the schedule, activities that are to begin within 90 days have durations that exceed 20 days, and activities for mitigating 10 approved program risks were not included. Collectively, the weaknesses in implementing the nine key practices for the program's integrated master schedule increase the risk of schedule slippages and related cost overruns and make meaningful measurement and oversight of program status and progress, as well as accountability for results, difficult to achieve. Moreover, they undermine the schedule's ability to produce credible dates for planned NGEN milestones and events. In the case of increment 1, this risk has already been realized. Specifically, the NGEN master schedule was rebaselined in August 2010, resulting in delays in a number of key dates, including a 5-month delay of the Milestone C decision. See table 12 for a summary of key event and milestone delays. While officials stated that they have addressed some of the weaknesses identified above in the August 2010 rebaselined integrated master schedule, they conceded that this schedule does not assign resources to work activities and that the scheduling tool remains unable to generate a valid critical path. Because these key scheduling practices are not being performed, the schedule is still not reliable. Without a fully integrated and reliably derived schedule for the entire NGEN program, the program office cannot identify when and how it will proceed through Milestone C and ultimately transition from NMCI to NGEN, and it cannot adequately manage and measure its progress in executing the work needed to do so. Successful execution of system acquisition programs depends in part on effective executive-level governance, to include having organizational executives review these programs at key milestones in their life cycles and make informed performance- and risk-based decisions as to how they should proceed. DON policy recognizes the importance of such milestone reviews. According to this policy, acquisition programs must proceed through a series of gate reviews (as discussed above), during which program performance is assessed and satisfactory program health must be demonstrated prior to moving forward. Currently, program performance and health at each gate are assessed using the Naval Probability of Program Success assessment methodology, which was established in September 2008.
This assessment addresses four aspects of a program: (1) requirements, (2) resources, (3) planning/execution, and (4) external influencers. Associated with each aspect are two or more metrics, each of which is scored based on underlying criteria that are unique to each gate. (See table 13 for a description of each metric.) At a given gate review, the criteria are rated as green, yellow, or red. Further, the metrics can be designated as critical, meaning that any significant issues associated with these metrics must be resolved before the gate can be exited. As noted earlier, a Gate 1 review was not held because the gate-review process was not established when the program began. In lieu of a Gate 1 review, according to the NGEN Acquisition Strategy, the Chief of Naval Operations Executive Board met to confirm NGEN requirements during the winter of 2007/2008, and these meetings were "nominally" a Gate 1 review. Subsequent to the establishment of the DON gate process, an NGEN Gate 2 review, intended to focus on an analysis of alternatives, was waived in early 2008 because the department planned to continue the use of existing NMCI technology, and NGEN entered the DON review process at Gate 3 in April 2008. OSD later identified the program as a pre-MAIS acquisition, resulting in the direction to conduct an analysis of alternative acquisition approaches. As such, DON held a Gate 2 review in April 2009, one year after the Gate 3 review. Since then, DON held a Gate 4 review in November 2009, as well as a Gate 5 review in October 2010. As discussed below, the extent to which each of the gate reviews was performance- and risk-based varied. Gate 3 review. At the time of this review, which was in April 2008, the Probability of Program Success assessment methodology was not yet in place. Therefore, program review documentation focused on, for example, program activities that had been completed, were under way, and were planned. However, these activities were not presented relative to any benchmark or goal, and thus program performance was not apparent in the documentation. Further, while program documentation shows that risks were disclosed, such as funding shortfalls for fiscal years 2008 and 2009, as well as workforce and training challenges, the scope and nature of the information presented did not extend to the level that the assessment methodology provides. For example, the information presented did not address the realism and achievability of the program master schedule or the confidence level associated with the current cost estimate, including the difference between the program office and independent cost estimates, both of which are relevant criteria under the assessment methodology for the gate. Notwithstanding these gaps in information that would have limited informed program decision making, the program was approved to proceed. Gate 2 review. At the time of this review, which was in April 2009, the Probability of Program Success assessment methodology was in place. However, it was not used to inform program decision making. Instead, the review focused on the AOA, next steps, and the overall program timeline. While briefing documentation shows that cost estimates for the alternatives exceeded planned funding, the documentation did not disclose the range of AOA and integrated master schedule weaknesses discussed earlier in this report, or the risks associated with these limitations.
This is significant because the Gate 2 assessment criteria focus on, among other things, whether the AOA cost estimates and master program schedule are reliable and whether program execution is on schedule. Notwithstanding these weaknesses, the program was approved to proceed. Gate 4 review. For this review, DON used its Probability of Program Success methodology and assessed the health of the program against each of the 17 metrics, including 3 that DON designated as potentially critical: parameter status, budget and planning, and acquisition management. According to the program health assessment used at this gate, 8 of the 17 metrics were rated as red, meaning that the program had identified significant issues that would inhibit delivery of capability within approved cost and schedule constraints and that mitigation strategies had not been identified. Moreover, the 8 metrics rated as red included the 3 that were designated as critical, meaning that these issues needed to be resolved before exiting the gate. Specifically, the parameter status metric was rated as red because the NGEN requirements that increment 1 is to meet had not yet been defined; the budget and planning metric was rated as red because the program was not fully funded; and the acquisition management metric was rated as red because the USD (AT&L) had yet to authorize the milestone at which the program would enter the Defense Acquisition System. (See fig. 4 for the assessment results for all 17 metrics.) Moreover, the gate briefing document highlighted a number of risks facing the program. For example, it faced the risk that key program documentation, such as the System Engineering Plan and the Test and Evaluation Master Plan, would not be completed until NGEN requirements were defined. Further, it faced the risk that insufficient funding would affect the program office's ability to acquire NMCI assets. Nevertheless, the program was approved to proceed. Gate 5 review. For this review, which was conducted in October 2010, DON again used its Probability of Program Success methodology and assessed program performance and risk against all 18 metrics, including 9 that DON designated as potentially critical. Three metrics were rated as red; one of these, test and evaluation, was deemed critical. According to the assessment, the test and evaluation metric was rated as red because the Test and Evaluation Master Plan was not complete; the budget and planning metric was rated as red because of significant NGEN funding reductions; and the manning metric was rated as red because of inadequate program office contracting, engineering, and logistics personnel. Further, according to the assessment, the Test and Evaluation Master Plan was not complete because the requirements were not defined. As discussed above, the program recognized, at Gate 4, the risk that a delay in defining NGEN requirements would affect the completion of this plan. (See fig. 5 for the assessment results for all 18 metrics.) According to the gate briefing document, these red ratings introduced a number of risks, such as the risk that the program would not be able to execute its current acquisition approach and meet program milestones. In addition, even though the assessment rated the acquisition management metric as green, this rating is not consistent with our findings in this report about the NGEN integrated master schedule. Specifically, the rationale for the green rating was that the August 2010 rebaselined schedule was viewed as realistic and achievable by key stakeholders.
However, as stated earlier, program officials conceded that the schedule does not assign resources and that the scheduling tool is unable to generate a valid critical path, both key scheduling practices; thus, the August 2010 schedule was not reliable. The approval of the Assistant Secretary of the Navy (Research, Development and Acquisition) for NGEN to proceed beyond Gate 5 was made conditional on the program satisfactorily completing action items focused on releasing the request for proposals for the Transport Services contract (scheduled for January 2011) and resolving its funding shortfall. As shown above, DON has demonstrated a pattern of approving NGEN at key acquisition review gates in the face of limited disclosure of the program's health and risks, as well as known program risks and shortfalls in performance. According to DON officials, the decisions to pass the gates and proceed were based on their view that they had sufficiently mitigated known risks and issues. By not fully ensuring that NGEN gate decisions sufficiently reflected program challenges, DON has increased the likelihood that the NGEN acquisition alternative that it is pursuing is not the most cost-effective course of action and that the program will cost more and take longer to complete than planned. Given the enormous size, complexity, and mission importance of NGEN, it is vital that DON and DOD assure decision makers, including the congressional defense committees, that the approach to acquiring needed capabilities is the most cost-effective and that its execution is guided by a well-defined schedule and informed milestone decision making. To date, this has not occurred to the degree that it should. Most notably, while DON produced substantially well-documented cost estimates, the NGEN acquisition approach currently being followed is not grounded in a reliable analysis of alternative approaches; the selected approach was not among the alternatives assessed, is estimated to cost about $4.7 billion more, and introduces more risk than the alternatives that were assessed. Further, the program's execution to date has not been based on the kind of reliably derived integrated master schedule that is essential to program success. While the program office is aware of some of the schedule weaknesses and intends to address them, additional work is needed to ensure that the schedule can produce credible dates for planned NGEN milestones and events. Exacerbating this is an equally troubling pattern of missed milestones and delays in key program documentation, as well as gate review decisions that have allowed the program to proceed in the face of significant performance shortfalls and risks. While NGEN is scheduled for an OSD-level milestone review in August 2011, the above schedule limitations make it likely that this review date will slip. It is thus imperative, given the scope and nature of the program's problems, that swift and immediate action be taken to ensure that the most cost-effective acquisition approach is pursued and that a reliable schedule and performance- and risk-based decision making are employed. To do less increases the chances that needed NGEN capabilities will be delivered late and cost more than necessary.
To ensure that NGEN capabilities are acquired in the most cost-effective manner, we recommend that the Secretary of Defense take the following two actions:

direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to conduct an interim NGEN milestone review, and

direct the Secretary of the Navy to immediately limit further investment in NGEN until this review has been conducted and a decision on how best to proceed has been reported to the Secretary of Defense and congressional defense committees.

At a minimum, this review should ensure that DON pursues the most advantageous acquisition approach, as evidenced by a meaningful analysis of all viable alternative acquisition approaches, to include for each alternative reliably derived cost estimates and metrics-based operational effectiveness analyses. In addition, the review should consider existing performance shortfalls and known risks, including those discussed in this report. To facilitate implementation of the acquisition approach resulting from the above review, we further recommend that the Secretary of Defense direct the Secretary of the Navy to take the following two actions:

ensure that the NGEN integrated master schedule substantially reflects the key schedule estimating practices discussed in this report, and

ensure that future NGEN gate reviews and decisions fully reflect the state of the program's performance and its exposure to risks.

In written comments on a draft of this report, signed by the Deputy Assistant Secretary of Defense (C3, Space and Spectrum), and reprinted in appendix II, DOD stated that it concurred with one of our four recommendations, did not concur with one recommendation, and partially concurred with two. The department's comments are discussed below. The department partially concurred with our recommendation to conduct an interim milestone review that provides assurance that DON is pursuing the most advantageous acquisition approach. Specifically, the department stated that it intended to leverage the next OSD-chaired NGEN Overarching Integrated Product Team meeting in February 2011 for the review and that, following this meeting, the USD(AT&L) will conduct a Milestone Decision Authority review of the current NGEN approach, along with its risks. According to the department, this approach balances the review processes already in place, resource constraints, and the need for an additional milestone review. Further, the department said it had concluded that DON's AOA was sufficient and that the analysis had been approved by CAPE. DOD added that it will complete an economic analysis, a post-AOA activity, for the August 2011 Milestone C review, which will include a follow-on independent cost estimate and an updated determination of the most cost-effective solution. While these are important steps, DOD's planned actions do not appear to fully address our recommendation. Specifically, the department did not indicate any intent to reevaluate whether the current solution is indeed the most advantageous approach, despite the weaknesses in the AOA identified in this report and the fact that the current approach was not included in that analysis. According to the September 2010 draft NGEN economic analysis development plan, only the status quo and the current approach are to be analyzed, not the other three alternatives that were included in the AOA. Without a meaningful analysis of alternatives, DOD will be unable to determine the most cost-effective solution in its two upcoming key reviews.
The department did not concur with our recommendation that it limit further investment in NGEN until an interim review that considers all viable alternative acquisition approaches has been conducted and a decision on how best to proceed has been reported to the Secretary of Defense and to congressional defense committees. The department stated that DON's NGEN acquisition strategy and program management have been approved by the milestone decision authority, and that adequate oversight is in place to ensure regulatory and statutory compliance. Further, the department said that limiting NGEN investments would impact future DON business operations and, ultimately, Naval warfighting capabilities. The department added that it will make adjustments to NGEN investments if it determines they are required; however, it also said it must continue to execute the investments within the time frame of the continuity of services contract. While oversight is in place for the NGEN program, it is not effective. Specifically, as discussed in this report, DON's past reviews have resulted in decisions that were not always performance- and risk-based. Given that DON is continuing to proceed in the face of the problems we are reporting, it is even more important that adequate oversight be provided by the Secretary and congressional defense committees. Moreover, we maintain that limiting further investment in NGEN, thereby delaying the Milestone C event and its associated activities, is the most prudent action at this time. By not evaluating all viable acquisition approaches before proceeding with further investment in NGEN, the department cannot be assured that it is pursuing the most cost-effective approach. Further, by selecting an approach that, as discussed in this report, carries greater relative schedule and performance risks than other alternatives and is being executed against an unreliable program schedule, the department increases the risk that its approach will lead to future cost overruns, requiring it to expend additional resources that could otherwise be used to provide other warfighting capabilities. Furthermore, even if the department proceeds along its current course, the issues we have identified with the program's schedule, along with the delays already experienced, raise concerns that it will be unable to complete the transition as planned within the time frames of the current continuity of services contract. The department partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to ensure that the NGEN integrated master schedule substantially reflects the key schedule estimating practices discussed in this report. DOD stated that the integrated master schedule was developed in accordance with industry best practices. However, as discussed in this report, none of the subschedules that we analyzed reflected all of the practices that our work has identified as necessary to develop and maintain a reliable schedule. To its credit, DOD also said it would seek ways to improve schedule performance and that DON will review the scheduling practices discussed in this report and incorporate those found to be beneficial.
We continue to believe that the Secretary of the Navy should ensure that the NGEN integrated master schedule incorporates all of the best practices for schedule estimating discussed in this report to help manage and measure its progress in executing the work needed to proceed through Milestone C and ultimately transition from NMCI to NGEN. The department concurred with our recommendation to ensure that future NGEN gate reviews and decisions fully reflect the state of the program's performance and its exposure to risks. In this regard, the department stated that it plans to continue to conduct monthly risk management board meetings and program health reviews, and to report the results to program leadership. It will be critical that decisions on NGEN fully reflect the state of the program's performance and exposure to risks. We are sending copies of this report to the appropriate congressional committees; the Director, Office of Management and Budget; the Congressional Budget Office; the Secretary of Defense; and the Secretary of the Navy. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to determine whether (1) the Department of the Navy (DON) sufficiently analyzed alternative approaches for acquiring its Next Generation Enterprise Network (NGEN), (2) DON has a reliable program schedule for executing NGEN, and (3) acquisition decisions have been performance- and risk-based. To address the first objective, we evaluated the analysis of alternatives (AOA) report and its supporting documentation against relevant Department of Defense (DOD) guidance and GAO's Cost Estimating and Assessment Guide and compared the alternatives in the AOA final report with the NGEN Acquisition Strategy. More specifically, for the cost analysis, we compared the AOA cost estimating documentation, such as the cost model spreadsheet, supporting documentation for the cost model, and the final NGEN AOA report, against the four characteristics of a reliable estimate in GAO's Cost Estimating and Assessment Guide to determine the extent to which the cost estimates reflected each of the four characteristics. For the operational effectiveness analysis, we compared an NGEN alternatives performance assessment report and the AOA final report against the relevant DOD guidance to determine the extent to which the analysis was sufficient. In addition, we reviewed NGEN AOA Advisory Group meeting minutes and documentation containing the results of a Space and Naval Warfare Systems Command review of the cost analysis. We also interviewed cognizant DON and Office of the Secretary of Defense officials about the AOA's development and results. To address the second objective, we first reviewed the integrated master schedule and 4 of the 29 subschedules that existed when we began our review and that comprised the early transition activities intended to address key program risks, as well as high-level plans for postdeployment.
Accordingly, we focused on assessing the May 2010 subschedules against the nine key schedule estimating practices in GAO's Cost Estimating and Assessment Guide, using commercially available software tools to determine the extent to which each subschedule reflected each of the nine practices (e.g., a logical sequence of activities and reasonable activity durations). Further, we characterized the extent to which each subschedule satisfied each of the practices as either met, substantially met, partially met, minimally met, or not met. In addition, we compared the baseline schedule, established in December 2009, to the rebaselined schedule, established in August 2010, to identify whether key event and milestone dates had slipped. We also interviewed cognizant officials about the development and management of the integrated master schedule and underlying subschedules. We also reviewed program documentation, such as the NGEN schedule management plan, program performance reports, program management reviews, and the acquisition strategy. To address the third objective, we compared program review documentation, such as briefings, program performance assessments, and meeting minutes, to DON acquisition review policies and procedures, as well as to other programmatic documents, such as risk registers and risk management board briefings and meeting minutes. We also interviewed cognizant program officials regarding NGEN performance and program risks. To assess the reliability of the data that we used to support the findings in this report, we reviewed relevant program documentation to substantiate evidence obtained through interviews with agency officials. We determined that the data used in this report are sufficiently reliable. We have also made appropriate attribution indicating the sources of the data. We conducted this performance audit at DOD offices in the Washington, D.C., metropolitan area and at the Space and Naval Warfare Systems Command in San Diego, California, from October 2009 to February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, key contributors to this report were Randolph C. Hite, Director; Carol Cha, Assistant Director; Monica Anatalio; Mathew Bader; Neil Doherty; Cheryl Dottermusch; James Houtz; Kaelin Kuhn; Neela Lakhmani; Lee McCracken; Jeanne Sung; and Adam Vodraska.
The Department of the Navy (DON), a major component of the Department of Defense (DOD), has launched its Next Generation Enterprise Network (NGEN) program to replace the Navy Marine Corps Intranet (NMCI) program. NGEN capabilities, such as secure transport of voice and data, data storage, and e-mail, are to be incrementally acquired through multiple providers. As planned, the first increment is expected to provide capabilities comparable to NMCI's, additional information assurance, and greater DON network control, at a cost of about $50 billion through fiscal year 2025. Given the size, importance, and complexity of NGEN, GAO was asked to determine whether DON has sufficiently analyzed alternative acquisition approaches and has a reliable schedule for executing the program, and whether program acquisition decisions have been performance- and risk-based. To do this, GAO reviewed the NGEN analysis of alternatives, integrated master schedule, and key milestone decisions. DON did not sufficiently analyze alternative acquisition approaches for NGEN because the alternatives analysis contained key weaknesses, and none of the alternatives assessed matches the current acquisition approach. Specifically, the cost estimates for the respective alternatives were not reliable because they were not substantially accurate, and they were neither comprehensive nor credible. Further, the operational effectiveness analysis, the other key aspect of an analysis of alternatives, did not establish and analyze sufficient measures for assessing each alternative's ability to achieve program goals and deliver program capabilities. Moreover, the acquisition approach that DON is actually pursuing was not one of the alternatives assessed in the analysis, and it is riskier and potentially costlier than the alternatives analyzed because it includes a higher number of contractual relationships. According to program officials, the analysis reflects the most that could be done in the time that was available to complete it, and they do not view the alternative selected as materially different from the assessed alternatives, even though it is about $4.7 billion more costly. DON does not have a reliable schedule for executing NGEN. Only two of the four subschedules that GAO reviewed, each of which helps form the master schedule, adequately satisfied any of the nine practices associated with developing and maintaining a reliable schedule. These weaknesses have contributed to delays in key program milestones. During the course of GAO's review, DON stated that action was taken to address some, but not all, of these weaknesses. According to program officials, schedule estimating was constrained by staffing limitations. NGEN acquisition decisions were not always performance- and risk-based. In particular, the program was approved in the face of known performance shortfalls and risks. For example, the program was approved at a key acquisition review despite the lack of defined requirements, which was recognized as a risk that would impact the completion of other key documents, such as the test plan. This risk was later realized as a critical issue. According to program officials, the decisions to proceed were based on their view that they had sufficiently mitigated known risks and issues. Collectively, these weaknesses mean that DON does not have a sufficient basis for knowing that it is pursuing the best approach for acquiring NGEN capabilities, and the program's cost and schedule performance is unlikely to track to estimates.
GAO is recommending that DOD limit further investment in NGEN until it conducts an interim review to reconsider the selected acquisition approach and addresses issues discussed in this report. In its comments, DOD stated that it did not concur with the recommendation to reconsider its acquisition approach; GAO maintains that without doing so, DOD cannot be sure it is pursuing the most cost-effective approach.
IRS’s operating divisions develop annual plans to guide audit decisions in terms of the number of returns to be audited. SB/SE audit plans strive to balance the number of audits in any fiscal year across all types of tax returns (e.g., individual income tax returns) and taxpayers (e.g., individual wage earners, small businesses, corporations) given the available number and location of IRS auditors, and knowledge about types of noncompliance to pursue through audits. SB/SE conducts audits through field offices located in seven regional areas. These audits generally are conducted by meeting with the taxpayer and/or his or her representatives. The field auditors include revenue agents who tend to audit the most complex returns and tax compliance officers who tend to audit simpler returns. SB/SE also does audits through its four campus locations; these audits tend to be the simplest and are generally done by tax examiners through correspondence with the taxpayers. Figure 1 shows an organizational chart of IRS’s operating divisions and SB/SE’s audit offices. In fiscal year 2014, SB/SE closed 823,904 audits, representing more than half of nearly 1.4 million closed audits across IRS in fiscal year 2014. SB/SE audits resulted in over $12 billion of the $33 billion in total recommended additional taxes across all IRS audits. For details on results of SB/SE audits, see appendix II. In addition to audits, IRS conducts nonaudit compliance checks, which may lead to an audit. These checks include the Math Error, Automated Underreporter (AUR), and Automated Substitute for Return (ASFR) programs. The Math Error program electronically reviews tax returns as they are filed for basic computational errors or missing forms/schedules. Several months after returns have been filed, AUR electronically matches information reported by third parties, such as banks or employers, against the information that taxpayers report on their tax returns. This matching helps identify potentially underreported income or unwarranted deductions or tax credits. ASFR also uses information return data to identify persons who did not file returns; constructs substitute tax returns for certain nonfilers; and assesses tax, interest, and penalties based on those substitute returns. Although these and other compliance checks may identify potentially noncompliant tax returns that are subsequently audited, these programs are not the subject of this report. In March 2014, IRS’s Chief Risk Officer, who oversees its agency-wide program to identify and assess risks, completed a high-level, risk-based review of the IRS audit selection process. The review focused on the potential for bias based on the judgment of the Risk Officer and not on analysis against objective standards, such as comparing steps in the process to the internal control standards. Even so, the Risk Officer concluded that IRS maintained sound internal controls in its audit programs and that the risk of partiality in IRS’s audit selection was very low. The risk of partiality appeared lowest in the automated selection programs. It appeared to be slightly higher for manual selection and referral programs because greater employee judgment was involved. SB/SE selects potentially noncompliant tax returns for audit using a multiphase process intended to enable IRS to narrow the large pool of available returns to those that most merit investment of audit resources. 
As shown in figure 2, in broad terms, this process generally includes (1) identifying an initial inventory of tax returns that have audit potential (e.g., reporting noncompliance), (2) reviewing that audit potential to reduce the number of returns that merit selection for audit (termed “classification”), (3) selecting returns by assigning them to auditors based on a field manager’s review of audit potential given available resources and needs, and (4) auditing selected returns. SB/SE uses 33 methods, called workstreams, to identify and review tax returns that may merit an audit. These workstreams can be categorized into seven groups based on how the return was initially identified (see appendix IV for a table of workstreams by group). We have listed these groups in general order of how much discretion is involved in identifying, reviewing, and selecting returns, starting with those that involve more discretion. This ordering does not correspond to the number of audits conducted. For example, although referrals generally involve more discretion in selecting returns for audit, they do not make up the largest percentage of SB/SE field audits (see figure 3). Referrals. IRS employees and units, as well as external sources, such as other agencies and citizens, can refer potentially noncompliant taxpayers to SB/SE. SB/SE may start an audit if the referral indicates significant potential for noncompliance. Referrals can involve, among others, those promoting shelters created to avoid taxation, whistleblowers, and those not filing required tax returns. Related pickups. After opening an audit, SB/SE may identify the taxpayer’s prior or subsequent year returns or returns of related taxpayers to audit. User-developed criteria. These criteria use filters or rules embedded in computer software to identify returns with specific characteristics, often for projects. These characteristics generally involve a specific tax issue known or suspected to have high noncompliance in a particular geographic area, industry, or population. For example, the criteria may be used for projects that explore or test ways to uncover noncompliance or improve compliance. Computer programs. Computer programs use rules or formulas to identify potential noncompliance across a type of tax return, rather than for a specific tax issue. For example, IRS uses a computer algorithm, the discriminant function (DIF), to determine the probability of noncompliance somewhere on the tax return. When a return receives a high enough score, SB/SE may review the return for audit potential. Data matching. When information on a tax return—such as wages, interest, and dividends—does not match information provided to IRS by states, employers, or other third parties, these discrepancies may prompt SB/SE to review returns for audit potential. An example of a workstream that uses data matching is the payment card income pilot, which uses information from credit card transactions to identify income that may be underreported. Taxpayer-initiated. When taxpayers contact IRS to request an adjustment to their respective tax returns, tax refunds, or tax credits, or request to have a previous audit reconsidered, SB/SE may initiate an audit after reviewing these requests. Random identification. The National Research Program (NRP) studies tax compliance through audits of a randomly-identified sample of tax returns. 
Specifically, NRP measures voluntary compliance in reporting income, deductions, and credits, among other categories, and generalizes those measures to the population being studied. All of SB/SE's selection methods, or workstreams, follow the general multiphase selection process to identify and review potentially noncompliant returns before selecting and actually auditing them. Workstreams also share some common characteristics. For example, multiple staff are involved in the various phases so that one person cannot control the entire process. About one-third of the workstreams use some form of automation to identify the returns that should enter the workstream. Most workstreams involve some form of manual review to determine which returns have audit potential. For example, IRS auditors review (i.e., classify) tax returns identified as having audit potential to determine which returns have the highest potential and which parts of the return should be audited. Finally, all workstreams screen out returns as part of the review process. This winnowing means that the large pool of returns initially identified as having audit potential becomes a much smaller pool of returns that are selected for audit. However, variations exist among the workstreams, particularly between the field and campus processes. For example, the field process generally uses more review steps and manual involvement (e.g., classification) than the campus process, which generally focuses on a single compliance issue and relies more on automated filters and rules to identify returns. Among field workstreams, the extent of review varies. For example, a few workstreams use a committee to review proposals and authorize new projects or investigations before returns can enter the workstream. Also, for field audits, group managers generally decide whether to assign, hold, or screen out returns for audit, whereas returns selected for campus audits are generally assigned through automated processes after campus analysts review the returns to ensure that they adhere to the selection rules embedded in the automated processes. Some workstreams, such as taxpayer claims and some referrals, involve more manual processes to identify and review returns; other workstreams involve both manual and automated processes or are almost entirely automated. Finally, the procedures for screening out returns vary across workstreams. In fiscal year 2014, related pickups from various identification methods or workstreams accounted for about 50 percent of SB/SE closed field audits. Most of these pickups were related to audits involving various ways in which taxpayers attempt to shelter income from taxation and to DIF-sourced returns. The DIF workstream alone (part of the computer program identification group) accounted for over 22 percent of SB/SE closed field audits, and various referral workstreams accounted for nearly 7 percent, as shown in figure 3. For details on the workstreams included in the categories shown in figure 3, see appendix VI. For campus audits closed in fiscal year 2014, available IRS data showed that 31 percent focused on the Earned Income Tax Credit (EITC). SB/SE relies on a computer program known as the Dependent Database (DDb) to identify most of the returns to be audited for EITC issues. DDb is a rules-based system that identifies potential noncompliance related to tax benefits based on the dependency and residency of children.
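To make the rules-based identification and winnowing described above concrete, the sketch below scores a few fictitious returns against a small rule set and screens out those below a cutoff. This is a minimal illustration in Python; the rules, data fields, weights, and threshold are all invented for illustration and are not IRS's actual DDb rules or DIF formula, which are not public.

```python
# Illustrative rules-based screening; the rules, weights, and threshold
# are invented and are not IRS's actual DDb rules or DIF formula.

returns = [
    {"id": 1, "claimed_eitc": True,  "dependents_verified": False, "income_match": True},
    {"id": 2, "claimed_eitc": True,  "dependents_verified": True,  "income_match": False},
    {"id": 3, "claimed_eitc": False, "dependents_verified": True,  "income_match": True},
]

# Each rule: (description, condition, weight). Higher total = more audit potential.
rules = [
    ("unverified dependents on an EITC claim",
     lambda r: r["claimed_eitc"] and not r["dependents_verified"], 3),
    ("mismatch with third-party income information",
     lambda r: not r["income_match"], 2),
]

THRESHOLD = 3  # hypothetical cutoff for forwarding a return to classification

for r in returns:
    fired = [(name, wt) for name, cond, wt in rules if cond(r)]
    score = sum(wt for _, wt in fired)
    action = "forward for classification" if score >= THRESHOLD else "screen out"
    print(f"return {r['id']}: score {score} -> {action}")
```

In practice, as described above, returns passing such automated filters would generally still go through classification and managerial review before being selected for audit.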
According to IRS, DDb rules are reviewed yearly for changes, and no additional filtering or review is needed on the cases that are selected for audit. In fiscal year 2014, DDb identified more than 77 percent of the closed EITC audits. The remaining approximately 23 percent of closed EITC audits were identified using various other methods, such as referrals from within IRS and pickups related to audits of other tax returns. SB/SE does not have complete data on the number of returns that are initially identified as having audit potential, reviewed, and selected for audit for all 33 workstreams. Using data that are available, table 1 illustrates differences in the extent to which returns are winnowed from identification through selection for two workstreams. For example, about half of the DIF-sourced returns reviewed were selected for audit, and almost all returns reviewed for NRP were selected for audit. An effective internal control system can help federal agencies achieve their missions and objectives and improve accountability. As set forth in Standards for Internal Control in the Federal Government, also known as the Green Book, internal controls comprise the plans, methods, and procedures used to meet an entity's mission, goals, and objectives, which support performance-based management. Internal controls help agency program managers achieve desired results. They also provide reasonable assurance that program objectives are being achieved through, among other things, effective and efficient use of resources. Internal control is not one event, but rather a series of actions and activities that occur throughout an entity's operations on an ongoing basis. Two examples of internal control standards are the establishment of clearly defined objectives and a commitment to documenting significant events. SB/SE has some procedures in place that are consistent with internal control standards. However, we identified some internal control weaknesses that leave SB/SE vulnerable to inconsistent return selection for audit, or the perception of it. Our review of IRS and SB/SE procedures on selecting returns for audit found several procedures that adhered to internal control standards, which provided some assurance of fairness and integrity in the selection process. For our review, we relied on documentation demonstrating that the standards were employed and did not independently test whether the standards were systemically applied. Ethics. SB/SE demonstrated a commitment to promoting ethical behavior among staff, which provides some high-level assurance that it may be able to meet its goal of integrity and fair treatment of taxpayers in general. For example, IRS's ethics training and annual certification process provide some assurance that IRS staff should be aware of the need to act ethically and impartially. Awareness of internal controls by managers. SB/SE has demonstrated a commitment to employing internal control activities to ensure accountability in achieving its mission. All managers are required to complete an annual self-assessment of internal control procedures. To the extent that SB/SE managers report deficiencies and SB/SE uses the results, the annual self-assessment can provide assurance that the importance of internal control is understood in SB/SE. Our work was not designed to test how effectively IRS used the self-assessments to identify and address deficiencies. Segregation of duties.
All of SB/SE’s selection workstreams involve multiple parties so that no individual can control the decision-making process. For example, staff who classify a return cannot later audit the same return. Also, for field audits, IRS coordinators in an area office generally determine which returns will be assigned to the field offices, rather than field offices and auditors generating their own work. SB/SE also has procedures to ensure that managers review about 10 percent of returns classified for the DIF and NRP workstreams. Also, managers must approve auditors’ requests to open audits for prior or subsequent year and related returns. Although not every step in the selection process is reviewed, these procedures provide some assurance that the decision to audit a return is not determined unilaterally. Safeguarding data/systems. SB/SE demonstrated that safeguards are in place to restrict system access to authorized users. IRS has procedures on system security and uses a multitiered authentication process to control system access, which we observed. The mission statements for both IRS and SB/SE declare the strategic goal of administering the “tax law with integrity and fairness to all.” SB/SE officials stated that integrity and fairness are core values of IRS. However, they did not define these terms or provide evidence that staff know what is to be achieved by this strategic goal. Without a clear definition of fairness that has been communicated to staff, SB/SE has less assurance that its staff consistently treat all taxpayers fairly. Internal Control Standard: Define objectives Internal control standards call for program objectives to be clearly defined in measurable terms to enable the design of internal control for related risks. Specific terms should be fully defined and clearly set forth so they can be easily understood at all levels of the entity. Consistent information must be reliably communicated throughout the entity if the entity is to achieve its goals. “The purpose of the Internal Revenue Service is to collect the proper amount of tax revenues at the least cost to the public, and in a manner that warrants the highest degree of public confidence in our integrity, efficiency and fairness.” “All must perform their professional responsibilities in a way that supports the IRS Mission. This requires auditors to provide top quality service and to apply the law with integrity and fairness to all.” “The obligation to protect taxpayer privacy and to safeguard the information taxpayers entrust to us is a fundamental part of the Service’s mission to apply the tax law with integrity and fairness to all.” “Requirements governing the accuracy, reliability, completeness, and timeliness of taxpayer information will be such as to ensure fair treatment of all taxpayers.” These references point to the overall concept of fairness without explaining what it means, particularly when selecting tax returns for audit. Fairness can be difficult to define because everyone may have different concepts of what constitutes fair treatment. We heard different interpretations of fairness and integrity from IRS participants involved in the selection process during the eight focus groups we conducted. Given the different interpretations, not having a clear definition of fairness unintentionally can lead to inconsistent treatment of taxpayers and create doubts as to how fairly IRS administers the tax law. 
In our focus groups, SB/SE staff stated that they viewed audit selection as fair when they:

focus on large, unusual, and questionable items;

do not consider the taxpayer's name, location, etc.;

avoid auditing taxpayers they know or who may be in their neighborhood;

treat issues consistently across returns, apply the same standards, and treat all taxpayers the same;

account for varying costs across locations (e.g., housing costs); and

avoid being influenced by personal preferences.

Each comment represents someone's concept of fairness. According to SB/SE officials, IRS relies on the judgment of its staff to determine what is fair. Although many concepts sound similar, they can be different, or even incompatible. For example, some participants said that not considering a taxpayer's name or geographic location was fair treatment. However, other participants said that considering geographic location was necessary to avoid auditing taxpayers they knew or to determine whether expenses were reasonable for that location (e.g., larger expenses may be reasonable for high-cost locations). Also, some audit projects focus on indications of certain types of noncompliance in specific locations, such as an IRS area or a state. SB/SE officials stated that both views of fairness regarding location may be appropriate for classification. We reviewed training materials used to instruct revenue agents in the decision-making process when selecting returns to audit, as well as the orientation briefing provided to staff assigned to classification details. Our review of the documentation, as well as discussions with focus group participants involved in classification, indicates that the training materials and the briefing have not defined fairness or how to apply it consistently when selecting returns for audit. Another challenge to treating all taxpayers consistently or under the same standard arises when the group manager in the field has to manage resource constraints. Some group managers talked about not having the right type and grade of auditor in a location to select a particular return that was deemed worth auditing. Others talked about not having enough travel money for auditors to justify selecting some tax returns. Group managers in other locations may be able to select a similar return because they have fewer of these constraints. In addition, SB/SE officials said that what is fair may vary depending on the role of the IRS staff involved. They said IRS staff members may have different perspectives of what is "fair" depending on their responsibilities and position, such as IRS staff who are analysts or managers in headquarters versus analysts, auditors, and their managers in the field. SB/SE has not established objectives on the fair selection of returns. Without a definition of fairness, SB/SE cannot be assured that an objective for fair selection clearly indicates what is to be achieved. For example, objectives could be based on definitions of fairness that we heard in our focus groups, such as the extent to which selection occurs because of large, unusual, and questionable items on a return or because SB/SE is applying the same standards to similar tax returns.

Internal Control Standard: Assess risks and performance to objectives. Internal control standards call for management to set program objectives that align with an entity's mission, strategic plan, goals, and applicable laws and regulations.
Clearly defined objectives can enhance the effectiveness and efficiency of a program's operations and are necessary to assess risks. Objectives should clearly define what is to be achieved, who is to achieve it, and how and when it will be achieved. Documenting objectives promotes consistent understanding.

SB/SE develops audit objectives in its annual work plan. For fiscal year 2014, audit objectives included (1) review workload identification and selection models, collaborate with other IRS units to revise processes/guidelines, and develop guidance and monitoring tools to ensure consistent application; and (2) use more research data to develop alternative workload identification streams and delivery. These objectives address the process of selecting returns but not whether returns are selected fairly. For example, applying selection models and processes consistently does not ensure that the models and processes were designed to achieve fairness. Further, IRS has not identified a level of consistency that would indicate that fairness has been achieved. Without clearly defined objectives aligned to its mission and a clear understanding across SB/SE of how fairness is defined, SB/SE has less assurance that it is measuring progress toward or achieving its strategic goal of treating taxpayers fairly.

Given that SB/SE does not have clearly defined objectives on fair selection, it also does not have performance measures aligned with these objectives and explicitly tied to integrity or fairness. For example, if IRS defined fairness as focusing on large, unusual, and questionable items and developed an objective based on this definition, performance measures could assess the quality and extent to which auditors focused on these items. SB/SE officials pointed to a variety of existing performance measures that they believe assess whether selection processes were impartial and consistent. Examples of these performance measures include the following:
- IRS's Customer Satisfaction survey asks taxpayers to rate their satisfaction with the auditor's explanation for how the return was selected for audit. However, SB/SE did not show how answers were used to assess whether the selection process was fair or to modify the process to make it fair. Further, taxpayer dissatisfaction is subjective, and taxpayers would not have context to know why their returns were selected compared to others.
- SB/SE conducts business reviews to assess how well its selection process is performing. However, concerns raised in these reviews focused on selection process steps, such as ordering returns and conducting research projects, instead of the underlying fairness of selecting a return.
- All employees are to be evaluated on how well they provide fair and equitable treatment to taxpayers as required by the Internal Revenue Service Restructuring and Reform Act of 1998; the IRM provides examples of behaviors that would meet this requirement. These behaviors may be consistent with IRS's mission, but they focus on how taxpayers were treated after the audit started rather than how auditors reviewed returns for potential audit selection.

Without performance measures that align with objectives to achieve fair selection, SB/SE lacks assurance that it can measure progress toward fair return selection.
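To illustrate what such a measure could look like, the following sketch computes a hypothetical "focus on large, unusual, and questionable (LUQ) items" rate. This is purely illustrative: the report does not prescribe any such measure, and the field names, records, and logic below are invented for the example.

```python
# Illustrative only: a hypothetical performance measure, assuming fairness is
# defined as selecting returns because of large, unusual, and questionable
# (LUQ) items. All records and field names are invented; IRS has no such
# measure today.

def luq_focus_rate(classified_returns):
    """Share of audit selections tied to at least one documented LUQ item."""
    selected = [r for r in classified_returns if r["selected"]]
    if not selected:
        return 0.0
    luq_based = sum(1 for r in selected if r["luq_items"] > 0)
    return luq_based / len(selected)

sample = [
    {"selected": True, "luq_items": 2},
    {"selected": True, "luq_items": 0},   # selected with no documented LUQ item
    {"selected": True, "luq_items": 1},
    {"selected": True, "luq_items": 3},
    {"selected": False, "luq_items": 1},  # classified but screened out
]
print(f"LUQ focus rate: {luq_focus_rate(sample):.0%}")  # prints "LUQ focus rate: 75%"
```

Tracked over time and across offices, a rate like this could give SB/SE a concrete, if partial, signal of whether selections reflect its chosen definition of fairness.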
IRS's efforts to identify risks and assess whether and how to manage them operate under two complementary approaches.

Internal controls framework. The procedures in IRM 1.4.2 govern IRS's processes for monitoring and improving internal controls, which include the identification and mitigation of risks. Managers are expected to understand the risks associated with their operations and ensure that controls are in place and operating properly to mitigate those risks.

Enterprise Risk Management (ERM). ERM is broader in scope than internal controls, focusing on agency-wide risks. ERM is intended to help organizations consider risk when setting strategy, including how much risk the organization is willing to accept. IRS implemented ERM in February 2014 to increase awareness by IRS management of IRS-wide risks and to serve as an early-warning system to identify emerging challenges and address them before they affect operations.

Both approaches to risk management require clear objectives, defined in measurable terms, to identify and analyze risks that could challenge achieving desired outcomes. Risks to achieving those objectives can then be identified and analyzed, and risk tolerances can be determined. Understanding the significance of the risks to achieving objectives provides the basis for responding to the risks. Without clear audit selection objectives on fairness, SB/SE lacks assurance that it can identify and assess risks to the fair selection of returns to audit. Absent risk identification and assessments linked to program objectives, vulnerabilities may go unaddressed, which could lead to unfair return selection.

We found many instances where SB/SE documented the review and selection of returns for audit. However, we also found several instances where SB/SE did not document various aspects of its return selection process or could not locate documentation in time for our review.

Internal Control Standard: Document transactions
Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for review.

Audit plan changes. Changes to the field audit plan are documented during the annual planning process, but SB/SE did not document its process for modifying the field audit plan during the year. According to SB/SE officials, they modify the plan during the year as additional budget and staffing information from IRS's finance unit becomes available. Officials stated that changes to this audit plan are documented by the budget information received and by the recalculated plan. However, SB/SE did not document how it translated the budget and staffing information into changes in the inventory targets or staffing, or why some targets were changed but not others.

Selection decisions and rationale. SB/SE did not consistently document decisions for selecting certain tax returns over others for audit and the rationale behind the decisions. SB/SE does not require all of these decisions and rationales to be documented. Returns that are stored electronically and are deemed to be excess inventory can be screened out without documentation such as a form, stamp, or signature. For discriminant function (DIF)-sourced returns, SB/SE's primary workstream for field audits, and some referrals, only a group manager stamp is required to screen out the returns, rather than also documenting the rationale for screening them out. Documentation requirements also vary within a workstream. For example, for returns involving a tax shelter fostered by a promoter, audit screen-out rationales are required to be documented at the group level in the field but not at the area office level.
Officials said that, aside from the Form 1900 for certain returns, they generally do not document why a return was not selected. To illustrate, we found nine files without documentation of the screen-out decision or rationale in our file review of 30 screened-out returns. Regardless of whether a form is required, the screen-out decision should be documented.

Files not located. IRS could not locate 18 of the 233 files we requested in time for our review. For example, for non-DIF pickup returns, 5 out of 24 files requested were not located in time. For all types of referrals we reviewed, we were unable to review 8 out of 56 files requested because they were not located in time. According to officials, IRS could not locate these files because files for one audit may be stored with files for any number of related audits, files for open or recently closed audits may not yet be available, and files may have been stored in the wrong location.

In addition to internal control standards, the IRM requires all records to be efficiently managed until final disposition. Having procedures to ensure that selection decisions and rationale are clearly and consistently documented helps provide assurance that management directives are consistently followed and return selection decisions are made fairly. Further, being able to find files efficiently can aid congressional and other oversight, and prevent unnecessary taxpayer burden if IRS later needs to contact the taxpayer regarding material that would have been in the file.

As discussed earlier in this report, SB/SE has procedures that, if implemented, help provide some assurance that its return selection process is generally monitored. However, we found that SB/SE did not have requirements to monitor certain steps in the selection process.

Internal Control Standard: Monitor controls
Program managers should have a strategy and procedures to continually monitor and assure the effectiveness of control activities. Key duties and responsibilities should be divided among different people to reduce the risk of error and to achieve organizational goals. Program managers need operational data to determine whether they are meeting their strategic and annual performance plans and their goals for effective and efficient use of resources.

Dollar threshold for campus audits. We found that the dollar threshold for selecting some returns for campus audits has remained constant or has been adjusted informally based on inventory needs. SB/SE has not evaluated whether the threshold should change or whether changes should be made more formally. According to officials, the dollar threshold is the break-even point for collecting enough tax to justify the audit. However, the threshold is only a guide; sometimes the threshold can be higher depending on how many returns need to be audited to meet the audit plan. According to one official, the threshold amount has been in place at least 4 years and possibly as long as 10 years. (A hypothetical break-even computation is sketched below.)

Classification review. We also found that classification decisions are not always required to be reviewed. For DIF and NRP returns, about 10 percent of classified returns are required to be reviewed for accuracy and adherence to classification guidelines. However, other field audit selection methods, including some referrals, do not include a formal classification quality review. Likewise, campus audit selections by analysts are not formally reviewed.
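The sketch below shows one way a break-even threshold of the kind officials described could be derived and periodically recomputed. All figures are hypothetical assumptions; the report does not disclose IRS's actual threshold, audit costs, or collection rates.

```python
# Illustrative only: deriving a break-even dollar threshold for opening a
# campus audit. Every input below is an invented assumption, not an IRS figure.

HOURS_PER_CAMPUS_AUDIT = 5.0      # assumed average staff hours per audit
HOURLY_COST = 60.0                # assumed fully loaded cost per staff hour ($)
COLLECTION_RATE = 0.50            # assumed share of assessed tax actually collected

def break_even_threshold(hours, hourly_cost, collection_rate):
    """Smallest potential assessment at which expected collections
    cover the cost of conducting the audit."""
    audit_cost = hours * hourly_cost
    return audit_cost / collection_rate

threshold = break_even_threshold(HOURS_PER_CAMPUS_AUDIT, HOURLY_COST, COLLECTION_RATE)
print(f"Audit only if the potential assessment exceeds ${threshold:,.0f}")
# -> $600 under these assumed inputs
```

Recomputing such a threshold as costs and collection experience change, and documenting each recomputation, would address the concern that the current amount has sat unevaluated for years.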
Review of group manager decisions. SB/SE does not always require that group manager return selection decisions (i.e., screen-out) be reviewed. Even though multiple people are involved, in some cases the group manager can independently make the final selection or screen-out decision. For state and agency referrals, and others to varying degrees, screen-out decisions by group managers are not reviewed. For example, in our file review of 30 screened-out returns, 8 were screened out by group managers. We did not see documentation of the approval for screening out these returns because such documentation was not required. According to SB/SE officials, group managers are the most knowledgeable about the resources available to meet audit goals. The managers also consult with territory and area managers to determine which returns should be screened out. For campus audits, approvals are not required to screen out returns from audit. Officials said that workload selection analysts communicate about the status of current and upcoming work to determine which returns are excess inventory and not needed to meet the annual audit plan or unable to be worked because of resource limitations.

Source codes. We found that some codes for identifying the source of the return to be audited, called source codes, were mislabeled, not used, or not well defined, even though the IRM states that all data elements in IRS databases should be defined and documented. In our review of 215 files, six returns were coded as non-Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) related pickups. SB/SE officials later explained that these returns were mislabeled and should be moved to the source code used for TEFRA-related work. We also found two files that were coded as information referrals that should have been coded as related pickup audits, one file that was coded as a DIF-sourced return that should have been coded as a claim by a taxpayer to adjust a return he or she had filed, and three files that were coded as compliance initiative projects that should have been coded as returns selected to train auditors. For campus audits, source codes are assigned to each return audited but are not used to identify, select, or monitor campus inventory and do not serve any other purpose in campus audits. As a result, a source code may not represent the actual source of the inventory. Further, we found two source codes that were not well defined. One source code associated with about 35 percent of campus audits completed in fiscal year 2014 included references to DIF that were generally not applicable, since these returns were not related to or identified using DIF scoring. Another source code associated with about 18 percent of campus audits completed in fiscal year 2014 was labeled as two different items and did not accurately describe many of the returns using this code.

Spreading responsibility for reviewing selection and screen-out decisions can reduce the potential for error and unfairness. In addition, adequate controls can help ensure that audits are appropriately coded so that IRS has accurate information to better ensure the efficient and effective use of resources. For example, having better controls on how returns are coded decreases the risk that data elements are misleading, which can hinder the decision-making process, such as prioritizing returns to select for audit and analyzing whether goals are met. (A simplified sketch of such a coding check follows.)
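A routine consistency check is one way to catch the kinds of miscoding described above. The sketch below is illustrative only: the code values, record layout, and records are invented, not actual AIMS source codes.

```python
# Illustrative only: flag audit records whose source code is undefined or
# inconsistent with how the return was actually identified. Codes, fields,
# and records are invented for this sketch.

DEFINED_SOURCE_CODES = {
    "DIF": "Return identified by DIF scoring",
    "RELATED_PICKUP": "Return picked up during a related audit",
    "REFERRAL": "Return referred from inside or outside IRS",
}

def flag_miscoded(audit_records):
    """Return (id, reason) pairs for records needing manual review."""
    flagged = []
    for rec in audit_records:
        code = rec["source_code"]
        if code not in DEFINED_SOURCE_CODES:
            flagged.append((rec["return_id"], "undefined source code"))
        elif rec["identified_by"] != code:
            flagged.append(
                (rec["return_id"], f"coded {code} but identified by {rec['identified_by']}")
            )
    return flagged

records = [
    {"return_id": "A1", "source_code": "DIF", "identified_by": "DIF"},
    {"return_id": "A2", "source_code": "REFERRAL", "identified_by": "RELATED_PICKUP"},
    {"return_id": "A3", "source_code": "UNKNOWN99", "identified_by": "RELATED_PICKUP"},
]
for return_id, reason in flag_miscoded(records):
    print(return_id, "->", reason)   # A2 and A3 are flagged for review
```

Run periodically against closed-audit data, a check like this would surface mislabeled and undefined codes before they distort inventory monitoring.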
SB/SE relies on a variety of sources and processes to select returns for audit. This complexity underscores the importance of having a robust internal control system to support the selection process and achieve SB/SE's mission of administering the "tax law with integrity and fairness to all." SB/SE has some procedures in place that are consistent with internal control standards. However, we identified some internal control weaknesses that leave its audit program vulnerable to inconsistent return selection or the perception of it. Without effective internal controls, including defining fairness in selecting returns, SB/SE cannot know if it is achieving its mission and whether its return selection policies and procedures are aligned with its mission. Further, IRS will not be able to manage risk or monitor performance as well as it otherwise could. Finally, IRS risks the appearance that its return selection process is unfair to taxpayers because it is unable to communicate key pieces of information, such as its definition of fairness, to the public.

To help ensure SB/SE's audit selection program meets its mission and selects returns fairly, we recommend that the Commissioner of Internal Revenue take the following actions:
- Clearly define and document the key term "fairness" for return selection activities.
- Clearly communicate examples of fair selections to staff to better assure consistent understanding.
- Develop, document, and implement program-level objective(s) to evaluate whether the return selection process is meeting its mission of applying the tax law with integrity and fairness to all.

To help ensure that SB/SE's audit selection objective(s) on fairness are used and met, we recommend that the Commissioner of Internal Revenue take the following actions:
- Develop, document, and implement related performance measures that would allow SB/SE to determine how well the selection of returns for audit meets the new objective(s).
- Incorporate the new objective(s) for fair return selection into the SB/SE risk management system to help identify and analyze potential risks to fair selections.

In addition, we recommend that the Commissioner of Internal Revenue take the following actions:
- Develop and implement consistent documentation requirements to clarify the reasons for selecting a return for audit and who reviewed and approved the selection decision.
- Develop, document, and implement monitoring procedures to ensure that decisions made and coding used to select returns for audit are appropriate.

We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. The Deputy Commissioner for Services and Enforcement provided written comments on November 23, 2015, which are reprinted in appendix VII. IRS stated that it agrees with the importance of sound internal controls and is committed to their improvement, especially in the areas we recommended. IRS stated that it agreed with our seven recommendations. Accordingly, the enclosure to the letter listed specific IRS actions planned to implement the recommendations. IRS also provided technical comments, which we incorporated where appropriate. As IRS's letter mentioned, its audit program includes various features that are intended to promote fair return selection, such as documents that convey the importance of "fairness," existing objectives and measures, and types of monitoring.
However, as our report discusses, these features do not clarify what fair selection of returns for audit entails or how IRS would know whether fair selections are occurring, except when someone such as a taxpayer questions the fairness of return selection.

For our recommendations on defining and documenting "fairness" for return selection activities and communicating examples of fair selections to staff, IRS stated that the concept of fairness has both collective and individual attributes. IRS noted that fairness for return selection encompasses three components—pursuing those who fail to comply, objectively selecting noncompliant returns across all areas of noncompliance, and respecting and adhering to taxpayers' rights. As such, IRS has taken the first step to implement our recommendation. However, to fully implement our recommendation, IRS will need to clarify how each component relates to return selection. For example, the first and third components also cover what happens after return selection, such as pursuing noncompliance and interacting with taxpayers during the audit.

In regard to our recommendations on developing one or more program objectives and related measures on return selection related to fairness, as our report discusses, IRS's current program objectives and measures do not address fair selection of returns. We believe that IRS should develop at least one objective and related measure that tie to its definition of fairness. Doing so would allow IRS to more conclusively demonstrate and assess whether its selection decisions were fair.

We also recommended that IRS improve the documentation and monitoring of selection decisions. Our report acknowledges that documentation and monitoring do occur in many areas but provides examples of the need for more in other areas. As such, IRS needs additional documentation and monitoring, as opposed to merely a plan to evaluate the need to take these actions.

We note three other clarifications based on statements in IRS's letter. First, IRS's letter correctly stated that our report did not identify any instances where the selection was considered inappropriate or unfair. We did not design our study to look for inappropriate and unfair selections, but rather to assess the internal controls that help ensure a fair selection process. Further, even if we had designed our study to look for unfair selections, our design would have been hampered by the lack of a definition for fairness and related objective(s) and measure(s) to evaluate whether selections were fair.

Second, IRS's letter stated that the seven groupings in our report do not reflect how IRS views its workstreams for identifying returns for potential audit selection. As discussed in the report, our groupings are based on how a return was initially identified rather than on IRS's workstreams. For example, related pickups, including DIF-related pickups, are identified by auditors, whereas DIF-selected returns are identified by a computer algorithm. Therefore, we separately grouped DIF-related pickups from DIF-selected returns. Furthermore, IRS could not provide complete data on the number of returns audited from each of its workstreams but could provide data on audits selected from other sources, such as related pickups. While some of these sources could be associated with a workstream, it was not possible for all.
As a result, we used the available IRS data to show how all SB/SE audits were distributed by these audit identification workstreams and sources (shown in the report as figure 3).

Third, DIF return selections do not involve the least amount of discretion, as IRS's letter stated. As discussed in our report, many returns that were initially identified through DIF automation as having audit potential were not audited. The actual audit selections do not occur until multiple IRS staff review those returns, requiring some human discretion. Our report discusses other groupings with less staff discretion than DIF, such as when taxpayers request that IRS review their returns or when IRS randomly selects returns for a research program.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII.

This report (1) describes the processes for selecting Small Business/Self-Employed (SB/SE) returns for audit, and (2) assesses how well the processes and controls for selecting those returns support SB/SE's mission of "applying the tax law with integrity and fairness to all."

For the first objective, we reviewed Internal Revenue Service (IRS) documents that describe the processes and criteria for selecting SB/SE returns for audit. These documents included sections of the Internal Revenue Manual (IRM), procedures documents, process flowcharts, and summaries of selection processes prepared by SB/SE officials. We also interviewed IRS officials responsible for overseeing audit selection. To provide information on closed IRS and SB/SE audits, we analyzed data for 2011 through 2014 from the Compliance Data Warehouse Audit Information Management System (AIMS) closed table. We compared the results of our analyses of data in AIMS to the IRS data book to assess consistency of results. We determined that these data were sufficiently reliable for the purposes for which they were used in this engagement.

For the second objective, we reviewed SB/SE's procedures for selecting returns for audit and related internal controls intended to help SB/SE achieve its stated mission of "applying the tax law with integrity and fairness to all." We then assessed whether these procedures followed standards from Standards for Internal Control in the Federal Government that were relevant to return selection. To determine which standards were most relevant, we used our Internal Control Management and Evaluation Tool, in conjunction with observations from our preliminary audit work. We selected the most relevant internal control standards as criteria in consultation with SB/SE officials and our financial management and assurance and information technology teams.
We also conducted eight focus groups with selected SB/SE staff who are responsible for reviewing or selecting SB/SE returns for audit. We held two groups with field office staff who review returns for audit potential, two groups with area office staff who coordinate the review process, two groups with field office group managers who select returns for audit, one group with campus staff who review and select returns for audit, and one group with specialty tax group managers who select returns for audit. Within these five populations, we randomly selected participants who met our criteria of having more than 2 years of IRS work experience, working in different IRS offices nationwide, and covering a range of compliance issue areas. In total, our groups involved 58 participants with an average of about 9 years of IRS experience, ranging from 3 to 32 years. The focus groups were held by telephone. We asked questions on internal control related topics, such as the clarity of SB/SE procedures and the adequacy of guidance to apply these procedures.

To assess the extent to which SB/SE implemented its procedures, we conducted a file review. We used IRM sections and SB/SE procedures documents as criteria. We obtained the population of SB/SE audits opened from March 2014 to February 2015 as shown in the open AIMS database and selected a nonprobability sample of 173 returns to review. Although the results of our file review cannot be projected to the population of SB/SE audits, they represent a variety of types of returns, sources, and selection processes. We focused on processes that required more manual review or affected a large number of taxpayers. As reflected in table 2, we reviewed more files for referrals and compliance initiative projects than for some other categories because they involve more human discretion in deciding whether to include the return in the selection inventory and in reviewing the returns for audit potential. We also reviewed more files for discriminant function (DIF) returns compared to some other categories because DIF returns are the largest portion of SB/SE's field audit workload by selection method or workstream. We reviewed the files to determine if decisions were documented and if staff followed procedures, such as documenting the rationale and approval for selecting or screening out returns. In sum, table 2 reflects the different types of returns we sampled, the type of files we reviewed, and the population and sample size of the files.

As shown in the last two rows of table 2, we also reviewed nongeneralizable, random samples of 30 returns that had been surveyed (i.e., screened out) and 30 classification quality review records for the same general time period as the audit files we reviewed. We created a separate sample of screened-out returns because audits were not opened on these returns, and the database we used to create the audit file sample contained only returns that had been audited. We obtained the population of screened-out returns from SB/SE officials and randomly selected our sample from this population. We created a separate sample for classification quality review records because SB/SE reviews classification decisions per auditor rather than per return. We obtained the population of auditors that were reviewed during the same general time period as the files for the other samples. We identified subpopulations by region and selected a stratified random sample of these subpopulations. (A simplified illustration of this stratified selection appears below.)
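The following sketch shows the mechanics of a stratified random draw of the kind just described. It is a minimal illustration under invented assumptions: the region names, population size, and per-stratum sample size are not the ones used in our review.

```python
# Illustrative only: stratified random selection of quality review records by
# region. The population, regions, and sample sizes below are invented.

import random

def stratified_sample(population, strata_key, per_stratum, seed=42):
    """Randomly draw up to `per_stratum` records from each stratum."""
    rng = random.Random(seed)          # fixed seed makes the draw reproducible
    strata = {}
    for rec in population:
        strata.setdefault(rec[strata_key], []).append(rec)
    sample = []
    for _, members in sorted(strata.items()):
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

population = [
    {"auditor_id": f"{region}-{i}", "region": region}
    for region in ("Northeast", "South", "Midwest", "West")
    for i in range(25)
]
sample = stratified_sample(population, "region", per_stratum=5)
print(len(sample), "records drawn")    # -> 20 records, five from each region
```

Stratifying before drawing guarantees that every region is represented in the sample, which a simple random draw of 20 from 100 would not.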
Finally, we interviewed SB/SE officials about the procedures and discussed deficiencies we identified. We designed uniform data collection instruments for our file review to consistently capture information on the completeness of required documentation and approvals related to return selection. IRS reviewed the instruments and the data we captured. To ensure accuracy, two of our analysts reviewed each file we assessed and reconciled any differences in responses. We then analyzed the results of these data collection efforts to identify main themes and develop summary findings.

We conducted this performance audit from September 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

1. Area Office Referral - Area office field personnel refer potential leads with correspondence audit issues to Campus Reporting Compliance (CRC).
2. Audit Information Management Systems (AIMS)/AIMS Computer Information System (A-CIS)/Previously Adjusted Exam Issues on Subsequent-year Filings - Quarterly A-CIS reports are run to identify every campus case closed agreed or default in each of the discretionary audit programs. The subsequent year returns are classified for the same issues that are on the closed audit cases.
3. Audit Reconsideration - Reevaluates the results of a prior audit where additional tax was assessed and remains unpaid, or a tax credit was reversed. IRS also uses the process when the taxpayer contests a Substitute for Return determination by filing an original delinquent return.
4. Campus Reporting Compliance (CRC) Compliance Initiative Project (CIP) Usage - CRC uses CIP Authorization (Form 13498) to document approval for testing potential new inventory in correspondence audits.
5. Category A Claims for Refund - Accounts Management staff refer claims for refunds that meet criteria indicating audit potential directly to Classification and Claim Teams within the campuses.
6. Criminal Investigation Referral - CRC uses IRS's databases to determine if the issues Criminal Investigation identified exist on the referred returns.
7. Claim - A request for refund or an adjustment of tax paid or credit not previously reported or allowed.
8. Collection Referral - CRC receives two kinds of referrals from collection each year. CRC receives three referrals yearly of potential nonfiler leads from the collection queue. CRC also receives occasional referrals of Form 3949 Information Item referrals.
9. Compliance Data Environment Release 3 - Identifies potential audits through user-defined filters and queries, and forwards those selected to the correct treatment stream.
10. Compliance Data Warehouse/Potential Unreported Heavy Use Tax - Identifies Form 2290 returns (Heavy Highway Vehicle Use Tax Return) with potential unreported heavy use tax.
11. Compliance Initiative Project (CIP) - When IRS identifies potential noncompliance in specific groups of taxpayers, CIPs are used to contact or audit taxpayers or collect taxpayer data within that group when another method to identify such workload is not already in place.
12. Discriminant Function (DIF) - A mathematical technique to estimate or "score" the potential merit of auditing a particular tax return based on its characteristics. (A schematic illustration of such a score appears after these appendix descriptions.)
13. Discretionary Exam Business Rules (DEBR) - DEBR rules were developed to identify non-Earned Income Tax Credit returns with the highest audit potential for additional tax assessment for certain return conditions.
14. Employee Audit - Any employee selected for audit under any and all methods of inventory identification (e.g., DIF (see definition above) or referrals). It also includes inventory that is specifically identified based on the individual's position within IRS. Inventory identification is designed to ascertain compliance among IRS employees while maintaining their right to privacy.
15. Employment Tax Referral - Specialty tax personnel refer potential audit leads relating to possible unfiled payroll tax returns to CRC (see definition above).
16. Estate & Gift Tax Form 1041 - Filters identify Form 1041 returns reporting charitable contributions, fiduciary fees, and other miscellaneous deductions.
17. Estate & Gift (E&G) Referrals - E&G tax personnel refer potential audit leads relating to possible unreported executor fees to CRC.
18. Government Liaison and Disclosure (GLD) Referrals - GLD personnel refer information to CRC from sources outside IRS, such as states and the Puerto Rican Tax Authority (see definition below), that are potential audit leads.
19. High Income Nonfiler - Strategy designed to address the filing compliance of taxpayers with known sources of income exceeding $200,000.
20. Information Reports - Reports and referrals that may include information on substantial civil tax potential and significant potential for fraud, or are related to returns for tax years not yet required to be filed.
21. National Research Program (NRP) - A comprehensive effort by IRS to measure compliance for different types of taxes and various sets of taxpayers. It provides a statistically valid representation of the compliance characteristics of taxpayers.
22. Offers-In-Compromise/Doubt as to Liability - An offer in compromise is an agreement between the taxpayer and IRS that settles a tax debt for less than the full amount owed. Doubt as to liability exists where there is a genuine dispute as to the existence or amount of the correct tax debt under the law.
23. Payment Card Income Pilot - Potential underreporters are flagged when Form 1099-K receipts, as a portion of gross receipts, are significantly greater than for similar taxpayers, suggesting cash underreporting.
24. Promoter Investigations and Client Returns - SB/SE auditors, as well as other IRS or external sources, refer potentially abusive transaction promoters/preparers for audit. Client returns are audited to determine whether penalties and/or an injunction are warranted.
25. Puerto Rican Tax Authority Nonfiler - The Puerto Rican Tax Authority provides information to IRS through the Government Liaison Office about residents in Puerto Rico who fail to file their federal tax return.
26. Research Referral - Research personnel refer potential audit leads relating to NRP, possible nonfilers, and problem preparers to CRC.
27. Return Preparer Program Action Cases and Client Returns - Clients of questionable preparers are audited to determine whether preparer penalties and/or injunctive actions are warranted. These are limited to preparer misconduct or incompetence that is pervasive and widespread.
28. Submissions Processing - Submission Processing staff refer potential audit leads relating to the Alternative Minimum Tax program, math error, and unallowables to CRC or campus classifiers.
29. State Audit Referral Program (SARP) - SARP uses the audit report information submitted to IRS by various taxing agencies to address areas of noncompliance.
30. State/Other Agency Referral - Federal, state, and local governmental agencies share relationships and data with IRS through the Governmental Liaison staff to increase compliance levels, reduce the tax gap, reduce taxpayer burden, and optimize use of resources.
31. Treasury Inspector General for Tax Administration (TIGTA) Referral - TIGTA personnel refer potential audit leads relating to TIGTA investigations to CRC.
32. Tip Program Referral - Employees who do not report at or above the tip rate as agreed upon by the employer under various agreements with IRS may be referred for audit.
33. Whistleblower Claim - Allegations of violation of federal tax laws made by a person who requests a reward.

Table 5 shows the selection methods or workstreams by how the returns were identified.

Figures 4 and 5 represent general similarities and variations in the Small Business/Self-Employed (SB/SE) return selection process at its field and campus locations, respectively. They do not include every process that occurs in the various methods or workstreams. In addition, the phases and processes in the figures are not necessarily discrete events but may overlap and involve other processes and staff.

The AIMS source code indicates the initial source of how the return was identified for audit. Table 6 shows the number of field audits closed by source code and by grouping of source codes into categories for fiscal year 2014.

In addition to the contact named above, Tom Short (Assistant Director), Sara Daleski, Hannah Dodd, David Dornisch, Elizabeth Fan, Ted Hu, Ada Nwadugbo, Robert Robinson, Ellen Rominger, Stewart Small, Andrew J. Stephens, and Elwood White contributed to this report.
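The DIF entry above (item 12) describes a technique that scores a return's audit potential from its characteristics. IRS's actual DIF formulas are confidential, so the sketch below is schematic only: the features, weights, and returns are invented to show the general shape of a weighted scoring-and-ranking step, not the real computation.

```python
# Schematic only: a DIF-style weighted score over invented return features.
# None of these weights or features are IRS's; they illustrate the idea that
# higher-scoring returns are routed to classifiers for manual review first.

DIF_WEIGHTS = {
    "deduction_to_income_ratio": 40.0,   # hypothetical weight
    "cash_intensive_business": 25.0,     # hypothetical weight
    "schedule_c_loss_years": 15.0,       # hypothetical weight
}

def dif_style_score(features):
    """Weighted sum of return characteristics; higher = more audit potential."""
    return sum(DIF_WEIGHTS[name] * value for name, value in features.items())

returns = {
    "return_1": {"deduction_to_income_ratio": 0.9,
                 "cash_intensive_business": 1.0,
                 "schedule_c_loss_years": 3.0},
    "return_2": {"deduction_to_income_ratio": 0.2,
                 "cash_intensive_business": 0.0,
                 "schedule_c_loss_years": 0.0},
}

# Rank returns so the highest-scoring ones are reviewed first.
for rid, feats in sorted(returns.items(), key=lambda kv: dif_style_score(kv[1]),
                         reverse=True):
    print(rid, round(dif_style_score(feats), 1))
# -> return_1 scores 106.0; return_2 scores 8.0 under these invented weights.
```

As the report notes, scoring only identifies audit potential; multiple layers of staff review still decide which scored returns are actually audited.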
IRS audits small businesses and self-employed individuals to ensure compliance with tax laws. Audits can help improve reporting compliance and reduce the tax gap—the difference between taxes owed and those voluntarily paid on time—which is estimated at $385 billion annually after late payments and enforcement actions. Therefore, it is important that IRS make informed decisions about how it selects taxpayers for audit.

GAO was asked to review IRS's processes and controls for selecting SB/SE taxpayers for audit. This report (1) describes these processes and (2) determines how well SB/SE's selection processes and controls support its mission to apply the tax law with integrity and fairness to all. GAO reviewed IRS criteria, processes, and control procedures for selecting taxpayers for audit; assessed whether IRS control procedures followed Standards for Internal Control in the Federal Government; and reviewed nonprobability samples of over 200 audit files. GAO also conducted eight focus groups with SB/SE staff who review or make audit selection decisions and interviewed IRS officials.

The Small Business/Self-Employed (SB/SE) division of the Internal Revenue Service (IRS) uses over 30 methods, called workstreams, to identify and review tax returns that may merit an audit. These returns were initially identified through seven sources, which include referrals; computer programs that run filters, rules, or algorithms to identify potentially noncompliant taxpayers; and related returns that are identified in the course of another audit. SB/SE's workstreams follow a general, multiphase process for identifying, reviewing (classifying), and selecting returns for audit. Within this general approach, the selection process varies across workstreams. Differences include the number of review steps and manual processes, which are greater for field audits than for correspondence audits, which generally focus on a single compliance issue and are identified using automated processes. For fiscal year 2013, IRS reported that SB/SE's primary workstream for field audits identified about 1.6 million returns as potentially most noncompliant. About 77,500 returns (5 percent) were selected for audit, a much smaller pool of returns than was initially identified.

SB/SE has control procedures for safeguarding data and segregating duties across the overall selection process, among others, but it has not implemented other key internal controls. The lack of strong control procedures increases the risk that the audit program's mission of fair and equitable application of the tax laws will not be achieved. Examples of internal control deficiencies include the following:

Program objectives and the key term "fairness" are not clearly defined. Fairness is specified in SB/SE's mission statement and referenced in IRS's procedures for auditors. However, IRS has not defined fairness or program objectives for audit selection that would support its mission of treating taxpayers fairly. GAO heard different interpretations of fairness from focus group participants. Not having a clear definition of fairness can unintentionally lead to inconsistent treatment of taxpayers and create doubts as to how fairly IRS administers the tax law. Further, the lack of clearly articulated objectives undercuts the effectiveness of SB/SE's efforts to assess risks and measure performance toward achieving these objectives.

Procedures for documenting and monitoring selection decisions are not consistent.
SB/SE does not always require selection decisions and rationales to be documented. For example, SB/SE requires that some workstreams document survey decisions (when returns are not assigned for audit), rationale, and approval using a form. Other workstreams, such as its primary workstream for field audits, require a group manager stamp but do not require the rationale to be documented. Also, SB/SE does not always require classification decisions (when returns are assessed for audit potential and compliance issues) to be reviewed. Having procedures to ensure that selection decisions and rationale are consistently documented and reviewed can reduce the potential for error and unfairness.

GAO recommends that IRS take seven actions to help ensure that the audit selection program meets its mission, such as establishing and communicating program objectives related to audit selection and improving procedures for documenting and monitoring the selection process. In commenting on a draft of this report, IRS agreed with the recommendations.
Until 1993, most forces based in the United States were not assigned to a single geographic command. Due to their location, these forces had limited opportunities to train jointly with the overseas-based forces they would join in time of crisis or war. The lack of a joint headquarters to oversee the forces of the four military services based in the continental United States (CONUS) was long considered a problem that the Joint Chiefs of Staff tried twice to fix. The concept of a joint headquarters for U.S.-based forces resurfaced at the end of the Cold War and led to the establishment of the U.S. Atlantic Command (USACOM) in 1993 as the unified command for most forces based in CONUS.

With the fall of the Berlin Wall and the collapse of the Eastern European communist regimes in 1989, the Cold War was over and a new world order began. Senior Department of Defense (DOD) leadership began considering the implications of such changes for the Department. They recognized that the end of the Cold War would result in reduced defense budgets and forces, especially overseas-based forces, and more nontraditional, regional operations such as peacekeeping and other operations short of a major theater war. In developing a CONUS power projection strategy, they looked at options for changing the worldwide command structure, which included establishing an Americas Command.

The initial concept for an Americas Command—a command that would have geographic responsibility for all of North and South America—was not widely accepted by DOD leadership. However, the Chairman, Joint Chiefs of Staff, General Colin Powell, and other senior military leaders during the early 1990s increased attention to the need to place all CONUS-based forces under one joint command to respond to worldwide contingencies. Factors influencing this concept were the anticipation that the overall DOD force drawdown would increase reliance on CONUS-based forces and that joint military operations would become predominant. Chairman Powell believed such a command was needed because CONUS-based forces remained service-oriented. These forces needed to train to operate jointly as a way of life and not just during an occasional exercise.

The concept of one command providing joint training to CONUS-based forces and deploying integrated joint forces worldwide to meet contingency operations was recommended by Chairman Powell in a 1993 report on roles and missions to the Secretary of Defense. The mission of this command would be to train and deploy CONUS-based forces as a joint team, and the Chairman concluded that the U.S. Atlantic Command was best suited to assume this mission.

The Chairman's 1993 report on roles and missions led to an expansion of the roles of the U.S. Atlantic Command. Most notably, the Secretary of Defense, upon review of the Chairman's report, endorsed the concept of one command overseeing the joint training, integrating, and deploying of CONUS-based forces. With this lead, but without formal guidance from the Joint Staff, USACOM leadership began developing plans to expand the Command. As guidance and the plan for implementing the Command's expanded roles developed, DOD's military leadership surfaced many issues.
Principal among these issues was whether (1) all CONUS-based forces would come under the Command, including those on the west coast; (2) the Commander in Chief (Commander) of USACOM would remain the Commander of NATO's Supreme Allied Command, Atlantic; and (3) the Command would retain a geographic area of responsibility along with its functional responsibilities as joint force integrator. While these issues were settled early by the Secretary of Defense, some issues were never fully resolved, including who would be responsible for developing joint force packages for deployment overseas in support of operations and numerous concerns about who would have command authority over forces. This lack of consensus on the expansion and implementation of USACOM was expressed in key military commands' review comments and objections to USACOM's implementation plan and formal changes to the Unified Command Plan. Table 1.1 provides a chronology of key events that led to giving the U.S. Atlantic Command the new responsibilities for training, integrating, and providing CONUS-based forces for worldwide operations.

The USACOM implementation plan and revised Unified Command Plan, both issued in October 1993, provided the initial approval and guidance for expanding the responsibilities of the U.S. Atlantic Command. The Unified Command Plan gave USACOM "additional responsibilities for the joint training, preparation, and packaging of assigned CONUS-based forces for worldwide employment" and assigned it four service component commands. The implementation plan provided the institutional framework and direction for establishing USACOM as the "Joint Force Integrator" of the bulk of CONUS-based forces. As the joint force integrator, USACOM was to maximize America's military capability through joint training, force integration, and deployment of ready CONUS-based forces to support geographic commanders', its own, and domestic requirements. This mission statement, detailed in the implementation plan, evolved into USACOM's functional roles as joint force trainer, provider, and integrator.

The USACOM implementation plan was developed by a multiservice working group for the Chairman, Joint Chiefs of Staff, and approved by the Secretary of Defense and the Chairman. The plan provided USACOM the basic concept of its mission, responsibilities, and forces. It further detailed the basic operational concept to be implemented in six areas. Three of these areas of particular relevance to USACOM's new functional roles were (1) the adaptive joint force packaging concept; (2) joint force training and interoperability concepts; and (3) USACOM joint doctrine and joint tactics, techniques, and procedures. The Command was given 12 to 24 months to complete the transition.

The Unified Command Plan is reviewed and updated not less than every 2 years. In 1997, USACOM's functional roles were revised in the plan for the first time to include the following:
- Conduct joint training of assigned forces and assigned Joint Task Force staffs, and support other unified commands as required.
- As joint force integrator, develop joint, combined, and interagency capabilities to improve interoperability and enhance joint capabilities through technology, systems, and doctrine.
- Provide trained and ready joint forces in response to the capability requirements of supported geographic commands.

Overview of USACOM
DOD has nine unified commands, each of which comprises forces from two or more of the military departments and is assigned broad continuing missions.
These commands report to the Secretary of Defense, with the Chairman of the Joint Chiefs of Staff functioning as their spokesman. Four of the commands are geographic commands that are primarily responsible for planning and conducting military operations in assigned regions of the world, and four are functional commands that support military operations. The ninth command, USACOM, is unique in that it has both geographic and functional missions. Figure 1.1 shows the organizational structure of the unified commands.

In addition to its headquarters staff, USACOM has several subordinate commands, such as U.S. Forces Azores, and its four service component commands—the Air Force's Air Combat Command, the Army's Forces Command, the Navy's Atlantic Fleet Command, and the Marine Corps' Marine Corps Forces Atlantic. Appendix I shows USACOM's organizational structure. USACOM's service component commands comprise approximately 1.4 million armed forces personnel, or about 80 percent of the active and reserve forces based in the CONUS and more than 65 percent of U.S. active and reserve forces worldwide. Figure 1.2 shows the areas of the world and percentage of forces assigned to the geographic commands.

While USACOM's personnel levels gradually increased in its initial years of expansion—from about 1,600 in fiscal year 1994 to over 1,750 in fiscal year 1997—its civilian and military personnel level dropped to about 1,600 in fiscal year 1998, primarily because part of USACOM's geographic responsibilities were transferred to the U.S. Southern Command. During this period, USACOM's operations and maintenance budget, which is provided for through the Department of the Navy, grew from about $50 million to about $90 million. Most of the increase was related to establishing the Joint Training, Analysis and Simulation Center, which provides computer-assisted training to joint force commanders, staff, and service components. The Command's size increased significantly in October 1998, when five activities controlled by the Chairman, Joint Chiefs of Staff, and their approximately 1,100 personnel were transferred to USACOM. The Secretary of Defense also assigned USACOM authority and responsibility for DOD's joint concept development and experimentation in 1998. DOD approved an initial budget of $30 million for these activities for fiscal year 1999. USACOM estimates it will have 151 personnel assigned to these activities by October 2000.

In response to congressional interest in DOD's efforts to improve joint operations, we reviewed the assimilation of USACOM into DOD as the major trainer, provider, and integrator of forces for worldwide deployment. More specifically, we examined (1) USACOM's actions to establish itself as the joint force trainer, provider, and integrator of most continental U.S.-based forces; (2) views on the value of the Command's contributions to joint military capabilities; and (3) the recent expansion of the Command's responsibilities and its possible effect on the Command. We focused on USACOM's functional roles; we did not examine the rationale for USACOM's geographic and NATO responsibilities or the effect of these responsibilities on the execution of USACOM's functional roles. To accomplish our objectives, we met with officials and representatives of USACOM and numerous other DOD components and reviewed studies, reports, and other documents concerning the Command's history and its activities as a joint trainer, provider, and integrator.
We performed our fieldwork from May 1997 to August 1998. A more detailed discussion of the scope and methodology of our review, including organizations visited, officials interviewed, and documents reviewed, is in appendix II. Our review was performed in accordance with generally accepted government auditing standards.

In pursuing its joint force trainer role, USACOM has generally followed its 1993 implementation plan, making notable progress in developing a joint task force commander training program and establishing a state-of-the-art simulation training center. The joint force provider and integrator roles were redirected with the decision, in late 1995, to deviate from the concept of adaptive joint force packages, a major element of the implementation plan. For its role as joint force provider, USACOM has adopted a process-oriented approach that is less proactive in meeting force requirements for worldwide deployments and is more acceptable to supported geographic commanders. To carry out its integrator role, USACOM has adopted an approach that advances joint capabilities and force interoperability through a combination of technology, systems, and doctrine initiatives.

USACOM planned to improve joint force training and interoperability through six initiatives laid out in its implementation plan. The initiatives were to (1) improve the exercise scheduling process, (2) develop mobile training teams, (3) train joint task force commanders and staffs, (4) schedule the use of service ranges and training facilities for joint training and interoperability, (5) assist its service components in unit-level training intended to ensure the interoperability of forces and equipment, and (6) develop a joint and combined (with allied forces) training program for U.S. forces in support of nontraditional missions, such as peacekeeping and humanitarian assistance. USACOM has taken actions on the first two initiatives and has responded to the third, fifth, and sixth initiatives through its requirements-based joint training program. While the fourth initiative was included in the Command's implementation plan, USACOM subsequently recognized that it did not have the authority to schedule training events at the service-owned ranges and facilities.

The Chairman of the Joint Chiefs of Staff initially gave USACOM executive agent authority (authority to act on his behalf) for joint training, including the scheduling of all geographic commander training exercises, USACOM's first initiative. In September 1996, the Chairman removed this authority in part because of resistance from the other geographic commands. By summer 1997, the Chairman, through the Joint Training Policy, again authorized USACOM to resolve scheduling conflicts for worldwide training. While USACOM maintains information on all training that the services' forces are requested to participate in, the information is not adequately automated to enable the Command to efficiently fulfill the scheduling function. The Command has defined the requirement for such information support and is attempting to determine how that requirement will be met.

USACOM does provide mobile training teams to other commands for training exercises. Generally, these teams cover the academic phase of the exercises. The Command, for example, sent a training team to Kuwait to help the Central Command prepare its joint task force for a recent operation.
It also has included training support, which may include mobile training teams, for the other geographic commanders in its long-range joint training schedule.

To satisfy its third, fifth, and sixth initiatives, USACOM has developed a joint training program that reflects the supported geographic commanders' stated requirements. These are expressed as joint tasks essential to accomplishing assigned or anticipated missions (joint mission-essential tasks). The Command's training program is derived from the six training categories identified in the Chairman of the Joint Chiefs of Staff's joint training manual, which are described in appendix III. USACOM primarily provides component interoperability and joint training and participates in and supports multinational interoperability, joint and multinational, and interagency and intergovernmental training. The Command's primary focus has been on joint task force training under guidance provided by the Secretary of Defense.

Joint training, conducted primarily at USACOM's Joint Training, Analysis and Simulation Center, encompasses a series of exercises—Unified Endeavor—that provide training for joint force commanders and their staffs. The training focuses on operational and strategic tasks and has evolved into a multiphased exercise. USACOM uses state-of-the-art modeling and simulation technology and different exercise modules that allow the exercise to be adapted to meet the specific needs of the training participants. For example, one module provides the academic phase of the training and another module provides all phases of an exercise. Until recently, the exercises generally included three phases, but USACOM added analysis as a fourth phase. Phase I includes a series of seminars covering a broad spectrum of operational topics, in which participants develop a common understanding of joint issues. Phase II presents a realistic scenario in which the joint task force launches crisis action planning and formulates an operations order. Phase III implements the operations order through a computer-simulated exercise that focuses on joint task force procedures, decision-making, and the application of doctrine. Phase IV, conducted after the exercise, covers lessons learned, joint after-action reviews, and the commander's exercise report.

USACOM and others consider the Command's Joint Training, Analysis and Simulation Center to be a premier center for next-generation computer modeling and simulation and a centerpiece for joint task force training. The Center is equipped with secured communications and video capabilities that enable commands around the world to participate in its exercises. These capabilities allow USACOM to conduct training without incurring the significant expenses normally associated with large field training exercises and help reduce force personnel and operating tempos. For example, before the Center was created, a joint task force exercise would require approximately 45,000 personnel at sea or in the field. With the Center, only about 1,000 headquarters personnel are involved. As of December 1998, USACOM had conducted seven Unified Endeavor exercises and planned to provide varying levels of support to at least 17 exercises—Unified Endeavor and otherwise—per year during fiscal years 1999-2001. Figure 2.1 shows one of the Center's rooms used for the Unified Endeavor exercises.

We attended the Unified Endeavor 98-1 exercise to observe firsthand the training provided in this joint environment.
While smooth joint operations evolved over the course of the exercise, service representatives initially tended to view problems and pressure situations from a service rather than a joint perspective. The initial phase allowed the key officers and their support staff, including foreign participants, to grasp the details of the scenario. These details included the basic rules of engagement and discussions of what had to be accomplished to plan the operation. In the exercise's second phase, staff from the participating U.S. and foreign military services came together to present their proposals for deploying and employing their forces. As the exercise evolved, service representatives came to appreciate the value and importance of coordinating every aspect of their operations with the other services and the joint task force commander. The third phase of the exercise created a highly stressful environment. The joint task force commander and his staff were presented with numerous unknowns and an overwhelming amount of information. Coordination and understanding among service elements became paramount to successfully resolving these situations. For interoperability training, units from more than one of USACOM's service components are brought together in field exercises to practice their skills in a joint environment. USACOM sponsors three recurring interoperability exercises in which the Command coordinates the training opportunities for its component commands, provides specific joint mission-essential tasks for incorporation into the training, and approves the exercise's design. The goal of the training is to ensure that U.S. military personnel and units are not confronted with a joint warfighting task for the first time after arrival in a geographic command's area of responsibility. For example, USACOM sponsors a recurring combat aircraft flying exercise—Quick Force—that is designed to train Air Force and participating Navy and Marine Corps units in joint air operations tailored to Southwest Asia. The exercise trains commanders and aircrews to plan, coordinate, and execute complex day and night, long-range joint missions from widely dispersed operating locations. USACOM relies on its service component commands to plan and execute interoperability training as part of existing service field exercises. According to USACOM's chief for joint interoperability training, the service component commanders are responsible for evaluating the joint training proficiency demonstrated. The force commander of the exercise is responsible for the accomplishment of joint training objectives and for identifying any operational deficiencies in doctrine, training, materiel, education, and organization. USACOM provides monitors to evaluate exercise objectives. Until recently, USACOM gave limited attention to interoperability training because its primary focus was on its Unified Endeavor training program. As that training has matured, USACOM has begun to give more attention to fully developing and planning the Command's interoperability training. The Command recently developed, with concurrence from the other geographic commanders, a list of joint interoperability tasks tied to the services' mission-essential task lists. With the development and acceptance of these joint interoperability tasks, Command officials believe that their joint interoperability exercises will have a better requirements base from which to plan and execute.
Also, USACOM is looking for ways to better tie these exercises to computer-assisted modeling. USACOM provides joint and multinational training support through its coordination of U.S. participation in "partnership for peace" exercises. The partnership for peace exercise program is a major North Atlantic Treaty Organization (NATO) initiative directed at increasing confidence and cooperative efforts among partner nations to reinforce regional stability. The Command was recently designated the lead activity in the partnership for peace simulation center network. USACOM also supports training that involves intergovernmental agencies. Its involvement comes primarily through support to NATO, in the USACOM Commander's capacity as Supreme Allied Commander, Atlantic, and through support to non-DOD agencies. For example, USACOM has begun including representatives of other federal agencies, such as the State Department and Drug Enforcement Administration, in its Unified Endeavor exercises. USACOM has made substantive changes to its approach to providing forces. Adaptive joint force packaging was to have been the foundation for implementing its force provider role. When this concept encountered strong opposition, USACOM adopted a process-oriented approach that is much less controversial with supported geographic commands and the military services. With over 65 percent of all U.S. forces assigned to it, USACOM is the major source of forces for other geographic commands and for military support and assistance to U.S. civil agencies. However, its involvement in force deployment decisions varies from operation to operation. The Command also helps its service components manage the operating tempos of heavily used assets. USACOM's implementation plan introduced the operational concept of adaptive joint force packages as an approach for carrying out USACOM's functional roles, particularly the provider and integrator roles. Under this approach, USACOM would develop force packages for operations short of a major regional war and complement, but not affect, the deliberate planning process used by geographic commanders to plan for major regional wars. USACOM's development of these force packages, using its CONUS-based forces, was conceived as a way to fill the void created by reductions in forward-positioned forces and in-theater force capabilities in the early 1990s. It was designed to make the most efficient use of the full array of forces and capabilities of the military services, exploring and refining force package options to meet the geographic commanders' needs. The approach, however, encountered much criticism and resistance, particularly from other geographic commands and the military services, which neither wanted nor valued a significant role for USACOM in determining which forces to use in meeting mission requirements. Because of this resistance and the unwillingness of the Chairman of the Joint Chiefs of Staff to support USACOM in its broad implementation of the force packaging concept, USACOM largely abandoned it in 1995 and adopted a process-oriented approach. Adaptive joint force packages and their demise are discussed in appendix IV. The major difference between the adaptive joint force packaging concept and the process-oriented approach that replaced it is that the new approach allows the supported geographic commander to "package" the forces to suit his mission needs. In essence, USACOM prepares the assets, and the supported commander puts them together as he sees fit rather than receiving ready-to-go packages developed by USACOM.
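To make the new approach concrete, it can be pictured as a simple cross-service sourcing step: a supported command states a needed capability, and candidates are drawn from every service rather than from a preselected package. The following sketch is purely illustrative; the unit names, capability labels, tempo figures, and the least-tasked-first ordering are hypothetical assumptions, not USACOM data or procedure.

    # Hypothetical sketch of capability-based force sourcing. Requirements are
    # stated as capabilities and filled from across the services; every name
    # and number below is invented.
    from dataclasses import dataclass

    @dataclass
    class Unit:
        name: str
        service: str
        capabilities: set
        days_deployed_past_year: int  # crude stand-in for operating tempo
        available: bool = True

    INVENTORY = [
        Unit("Army engineer battalion", "Army", {"bridging", "construction"}, 140),
        Unit("Navy Seabees battalion", "Navy", {"bridging", "construction"}, 60),
        Unit("Air Force civil engineer squadron", "Air Force", {"construction"}, 95),
    ]

    def candidate_forces(capability):
        """List available units offering the capability, across all services,
        ordered from least to most heavily tasked (a hypothetical tie-break)."""
        matches = [u for u in INVENTORY
                   if u.available and capability in u.capabilities]
        return sorted(matches, key=lambda u: u.days_deployed_past_year)

    # The supported command asks for a capability, not a named unit; the
    # least-tasked cross-service option surfaces first.
    for unit in candidate_forces("bridging"):
        print(unit.service, unit.name, unit.days_deployed_past_year)

As described below, the supported commander then selects from such candidates in collaboration with USACOM, with tempo considerations helping to avoid over-tasking any particular force.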
The new approach retains aspects of the force packaging concept. Most notably, geographic commanders are to present their force requirements in terms of the capability needed, not in the traditional terms of requests for specific units or forces. Forces are to be selected by the supported commanders, in collaboration with USACOM, from across the services to avoid over-tasking any particular force. The process is shown in figure 2.2 and discussed in more detail in appendix V. USACOM, commanding nearly 68 percent of the combat forces assigned to geographic commands, is the major provider of forces for worldwide operations. The size of its assigned forces far exceeds the requirements for operations within the Command's area of responsibility, which is much less demanding than that of other geographic commands. As a result, USACOM can provide forces to all the geographic commands, and its forces participate in the majority of military operations. The Command also provides military support and assistance to civil authorities for domestic requirements, such as hurricane relief and security at major U.S. events. During 1998, USACOM supported over 25 major operations and many other smaller operations worldwide. These ranged from peacekeeping and humanitarian assistance to evacuation of U.S. and allied nationals from threatened locations. USACOM reported that, on average, it had over 30 ships, 400 aircraft, and 40,000 personnel deployed throughout 1998. The Pacific, European, and Special Operations Commands also have assigned forces, but they are unable to provide the same level of force support to other commands as USACOM. The Pacific Command has large Navy and Marine Corps forces but has limited Army and Air Force capabilities. European Command officials said their Command rarely provides forces to other commands because its forces are most often responding to requirements in its own area of responsibility. The Special Operations Command provides specialized forces to other commands for unique operations. The Central and Southern Commands have very few forces of their own and are dependent on force providers such as USACOM to routinely furnish them with forces. USACOM provides forces throughout the world for the entire range of military operations, from war to operations other than war that may or may not involve combat. Since the Gulf War in 1991, the U.S. military has largely been involved in operations that focus on promoting peace and deterring war, such as the U.S. military support to the NATO peacekeeping mission in Bosnia and the enforcement of U.N. sanctions against Iraq. The extent of USACOM's involvement in force decisions varies from operation to operation. In decisions regarding deployment of major combatant forces, the Command plays a very limited role. The military services and USACOM's service components collaborate on such decisions. Although USACOM's interaction with geographic commands and service components may influence force decisions, USACOM's Commander stated that when specific forces are requested by a geographic commander, his Command cannot say "no" if those forces are available. USACOM is not directly involved in the other geographic commands' deliberate planning—the process for preparing joint operation plans—except when there is a shortfall in the forces needed to implement the plan or the supported commander requests USACOM's involvement.
Every geographic command is to develop deliberate plans during peacetime for possible contingencies within its area of responsibility as directed by the national command authority and the Chairman of the Joint Chiefs of Staff. As a supporting command, USACOM and its service component commands examine the operation plans of other commands to help identify shortfalls in providing forces as needed to support the plans. USACOM's component commands work more closely with the geographic commands and their service components to develop the deployment data to sequence the movement of forces, logistics, and transportation to implement the plan. During crises, for which an approved operation plan may not exist, the responsible geographic command either adjusts an existing plan or develops a new one to respond to specific circumstances or taskings. The time available for planning may be hours or days. The supported commander may request inputs on force readiness and force alternatives from USACOM and its component commands. A European Command official said USACOM is seldom involved in his Command's planning process for crisis operations because of the compressed planning time before the operation commences. USACOM has its greatest latitude in suggesting force options for military operations other than war that do not involve combat operations, such as nation assistance and overseas presence operations, and for ongoing contingency operations. In these situations, time is often not as critical, and USACOM can work with the supported command and component commands to develop possible across-the-service force options. A primary consideration in identifying and selecting forces for deployment is the operating and personnel tempo of the forces, which affects force readiness. As a force provider, USACOM headquarters supports its service component commands in resolving tempo issues and monitors the readiness of assigned forces and the impact of deployments on major contingency and war plans. While tempo issues are primarily a service responsibility, USACOM works with its service component commands and the geographic commands to help balance force tempos to maintain the readiness of its forces and desired quality-of-life standards. This involves analyzing tempo data across its service components and developing force alternatives for meeting geographic commands' needs within tempo guidelines. According to USACOM officials, the Command devotes much attention to managing certain assets with unique mission capabilities that are limited in number and continually in high demand among the geographic commands to support most crises, contingencies, and long-term joint task force operations in their regions. These low-density/high-demand assets, such as Airborne Warning and Control System aircraft, EA-6B electronic warfare aircraft, and Patriot missile batteries, are managed under the Chairman of the Joint Chiefs of Staff's Global Military Force Policy. This policy, which guides decisions on the peacetime use of assets that are few in number but high in demand, establishes prioritization guidelines for their use and operating tempo thresholds that can be exceeded only with Secretary of Defense approval. The policy, devised in 1996, is intended to maintain required levels of unit training and optimal use of the assets across all geographic commander missions, while discouraging the overuse of selected assets.
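The policy's tempo-threshold gate reduces to a simple check on each proposed tasking. The sketch below illustrates only that logic; the asset labels, annual thresholds, and usage figures are invented for illustration and are not drawn from the policy itself.

    # Notional illustration of the Global Military Force Policy's tempo gate:
    # low-density/high-demand assets carry peacetime tempo thresholds that can
    # be exceeded only with Secretary of Defense approval. All values invented.
    TEMPO_THRESHOLD_DAYS = {
        "AWACS aircraft": 120,
        "EA-6B aircraft": 150,
        "Patriot battery": 180,
    }

    def tasking_decision(asset, days_already_deployed, days_requested):
        """Within the notional threshold, approve under policy guidelines;
        beyond it, elevate for Secretary of Defense approval."""
        if days_already_deployed + days_requested <= TEMPO_THRESHOLD_DAYS[asset]:
            return "within tempo threshold: approve under policy guidelines"
        return "exceeds tempo threshold: requires Secretary of Defense approval"

    print(tasking_decision("EA-6B aircraft", days_already_deployed=130,
                           days_requested=30))
    # -> exceeds tempo threshold: requires Secretary of Defense approval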
USACOM is responsible for 16 of the 32 low-density/high-demand assets—weapon systems and personnel units—that are included in the Global Military Force Policy. The Pacific and European Commands have some of these 16 assets, but the bulk of them are assigned to USACOM. These assets are largely Air Force aircraft. In this support role, USACOM has initiated several actions to help implement the policy, including bringing the services and geographic commands together to resolve conflicts over the distribution of assets, devising a monitoring report for the Joint Staff, and recommending to the services assets that should be included in future policy revisions. Appendix VI provides a list of the low-density/high-demand assets currently assigned to USACOM. The Global Military Force Policy does not capture all of the highly tasked assets. For example, the policy does not include less prominent assets such as dog teams, military security police, water purification systems, intelligence personnel, and medical units. Similar concerns have been raised about the high operating tempos of these assets, and USACOM has monitored them closely. Most of these assets, or alternatives to them, are available across the services. Therefore, USACOM has some flexibility in identifying alternative force options to help balance unit tempos. Another Joint Staff policy affecting USACOM as a force provider is the Global Naval Force Presence Policy. This policy establishes long-range planning guidance for the location and number of U.S. naval forces—aircraft carriers and surface combatant and amphibious ships—provided to geographic commands on a fair-share basis. Under this scheduling policy, the Navy controls the operating and personnel tempos for these heavily demanded naval assets, while it ensures that geographic commands' requirements are met. USACOM has little involvement in scheduling these assets. While this policy provides little flexibility for creating deployment options in most situations, it can be adjusted by the Secretary of Defense to meet unexpected contingencies. According to an action officer in USACOM's operations directorate, one of USACOM's difficulties in monitoring tempos has been the lack of joint tempo guidelines that could be applied across service units and assets. Each service has different definitions of what constitutes a deployment, dissimilar policies or guidance for the length of time units or personnel should be deployed, and different systems for tracking deployments. For example, the Army defines a deployment as a movement during which a unit spends a night away from its home station, excluding deployments to combat training centers. In contrast, the Marine Corps defines a deployment as any movement from the home station for 10 days or more, including a deployment for training at its combat training center. As a result, it is difficult to compare tempos among the services. An official in USACOM's operations directorate said the services would have to develop joint tempo guidelines because they have the responsibility for managing the tempos of their people and assets. The official did not anticipate a movement anytime soon to create such guidelines because of the differences in the types of assets and in the management and deployment of the assets. DOD, in responding to a 1998 GAO report on joint training, acknowledged that the services' ability to measure overall deployment rates is still evolving.
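The measurement problem can be made concrete with a small calculation. In the sketch below, the unit's event history is invented, and the two counting rules are simplified paraphrases of the Army and Marine Corps definitions described above; the point is only that the same history yields materially different deployment counts under each rule.

    # Invented event history: (days away from home station, at a combat
    # training center?). The counting rules paraphrase the definitions above.
    unit_history = [(3, False), (21, True), (8, False), (45, False)]

    def army_style_days(events):
        """Any overnight away counts, except combat training center rotations."""
        return sum(days for days, at_ctc in events if days >= 1 and not at_ctc)

    def marine_style_days(events):
        """Only movements of 10 days or more count, training centers included."""
        return sum(days for days, _ in events if days >= 10)

    print(army_style_days(unit_history))    # 56 days (3 + 8 + 45)
    print(marine_style_days(unit_history))  # 66 days (21 + 45)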
The integrator role has changed significantly since 1993 and is still evolving. It was originally tied to adaptive joint force packaging, but with that concept's demise, the Command's role became one of improving interoperability and enhancing joint force capabilities through a blend of technology, systems, and doctrine. The Command's force integration objectives are to (1) identify and refine doctrinal issues affecting joint force operations; (2) identify, develop, evaluate, and incorporate new and emerging technologies to support joint operations; and (3) refine and integrate existing systems to support joint operations. Since 1996, the Command has emphasized sponsoring advanced concept technology demonstration projects with a multiservice focus and searching for solutions to joint interoperability problems among advanced battle systems. It has given limited attention to joint doctrinal issues. Establishing its integration role has not been easy for USACOM. USACOM's Commander (1994-97) characterized the Command's integration efforts as a "real struggle" and said the Joint Staff was not supportive. The current USACOM Commander expressed similar views, citing the integration role as the most challenging yet promising element of his Command's mission. He told us the Command stumbled at times and overcame numerous false starts until its new integration role emerged. He said that as USACOM's functional roles mature, the Command may create more friction with the services and other commands, many of which view USACOM as a competitor. Its efforts were significantly enhanced with the October 1998 transfer to the Command of five joint centers and activities previously controlled by the Chairman of the Joint Chiefs of Staff (see ch. 4). USACOM's primary means to fulfill its integration role has been to sponsor advanced concept technology demonstration projects. These projects are designed to permit early and inexpensive evaluations of mature advanced technologies to meet the needs of the warfighter. The Command considered such projects to be the best way to achieve integration by building new systems that are interoperable from the beginning. The warfighter determines the military utility of the project before a commitment is made to proceed with acquisition. These projects also allow for the development and refinement of operational concepts for using new capabilities. As an advanced concept technology demonstration project sponsor, USACOM provides an operations manager to lead an assessment to determine the project's joint military utility and to fully understand its joint operational capability. The Command also provides the personnel for the projects and writes the joint doctrine and concepts of operation to effectively employ these technologies. USACOM accepts only projects that promote interoperability and move the military toward new levels of effectiveness in joint warfighting. Various demonstration managers, such as the Deputy Under Secretary of Defense for Acquisition and Technology, fund the projects. At the completion of our review, USACOM was sponsoring 12 of DOD's 41 active advanced concept technology demonstrations. It completed work in 1996 on the Predator project, a medium-altitude unmanned aerial vehicle that the Air Force is to acquire. Table 2.1 identifies each USACOM project and its funding through fiscal year 2003.
In October 1998, we issued a report on opportunities for DOD to improve its advanced concept technology demonstration program, including the process for selecting candidate projects, guidance on transitioning technologies into the normal acquisition process, and the risky practice of procuring prototypes beyond those needed for the basic demonstration before product and concept demonstration is complete. In addition to its advanced concept technology demonstration projects, USACOM has sought opportunities to advance the interoperability of systems already deployed or about to be deployed that make a difference on the battlefield. Particularly critical capabilities USACOM has identified for interoperability enhancements include theater missile defense; command, control, and communications; intelligence, surveillance, and reconnaissance; and combat identification (friend or foe). The military services have a long history of interoperability problems during joint operations, primarily because DOD has not given sufficient consideration to the need for weapon systems to operate with other systems, including exchanging information effectively during a joint operation. We reported on such weaknesses in the acquisition of command, control, communications, computers, and intelligence systems in March 1998. A critical question is who pays the costs associated with joint requirements that USACOM identifies in service acquisition programs. The services develop weapon system requirements, and the dollars pass from the Secretary of Defense to the services to satisfy the requirements. If USACOM believes modifications are needed to a weapon system to enable it to operate in a joint environment, the Command can elevate this interoperability issue to the Chairman of the Joint Chiefs of Staff and to the Joint Requirements Oversight Council for action. For example, the USACOM Commander recently told the Chairman and the Council that the Air Force's unwillingness to modify the Predator and the concept of operations to allow other services to directly receive information from the unmanned aerial vehicle would limit a joint commander's flexibility in using such vehicles, hurt interoperability, and inhibit the development of joint tactics. According to USACOM's Operations Manager for this area, the Air Force needs to provide additional funding to make the Predator truly joint, but it wants to maintain operational control of the system. As of November 1998, this interoperability concern had not been resolved. USACOM can also enhance force integration through its responsibility as the trainer and readiness overseer of assigned reserve component forces. This responsibility allows USACOM to influence the training and readiness of these reserves and their budgets to achieve full integration of the reserve and active forces when the assigned reserves are mobilized. This is important because of the increased reliance on reserve component forces to carry out contingency missions. The USACOM Commander (1993-97) described the Command's oversight as a critical step in bringing the reserve forces into the total joint force structure. USACOM and others believe that the Command has helped advance the joint military capabilities of U.S. forces. While USACOM has conducted several self-assessments of its functional roles, we found that these assessments provided little insight into the overall value of the Command's efforts to enhance joint capabilities.
The Command has established goals and objectives as a joint trainer, provider, and integrator and is giving increased attention to monitoring and accomplishing tasks designed to achieve these objectives and ultimately enhance joint operational capabilities. Our discussions with various elements of DOD found little consensus regarding the value of USACOM's contributions in its functional roles but general agreement that the Command is making important contributions that should enhance U.S. military capabilities. USACOM has conducted three self-assessments of its functional roles. These appraisals did not specifically evaluate the Command's contribution to improving joint operational capabilities but discussed the progress of actions taken in its functional roles. The first two appraisals covered USACOM's success in executing its plan for implementing the functional roles, while the most recent appraisal rated the Command's progress in each of its major focus areas. In quarterly reports to the Secretary of Defense and in testimony before the Congress, USACOM has presented a positive picture of its progress and indicated that the military has reached an unprecedented level of jointness. In a June 1994 interim report to the Chairman of the Joint Chiefs of Staff, USACOM's Commander noted that the Command's first 6 months of transition into its new functional roles had been eventful and that the Command was progressing well in developing new methodologies to meet the geographic commands' needs. He recognized that it would take time and the help of the service components to refine all the responsibilities relating to the new mission. He reported that USACOM's vision and strategic plan had been validated and that the Command was on course and anticipated making even greater progress in the next 6 months. USACOM performed a second assessment in spring 1996, in response to a request from the Chairman of the Joint Chiefs of Staff for a review of the success of USACOM's implementation plan at the 2-year point. The Command used Joint Vision 2010, the military's long-range strategic vision, as the template for measuring its success, but that document does not provide specific measures for gauging improvements in operational capabilities. USACOM reported that, overall, it had successfully implemented its key assigned responsibilities and missions. It described its new functional responsibilities as "interrelated," having a synergistic effect on the evolution of joint operations. It reported that it had placed major emphasis on its joint force trainer role and noted the development of a three-tier training model. The Command described its joint force provider role as a five-step process, with adaptive joint force packaging no longer a critical component. Seeing the continuing evolution of its force provider role as a key factor in supporting Joint Vision 2010, USACOM assessed the implementation plan task as accomplished. The Command considered its joint force integrator role the least developed but the most necessary in achieving coherent joint operations and fulfilling Joint Vision 2010. Although the assessment covered only the advanced concept technology demonstrations segment of its integrator role, USACOM reported that it had also successfully implemented this task. At the request of USACOM's Commander, USACOM staff assessed progress and problems in the Command's major focus areas in early 1998. This self-assessment involved the Command's directorate-level leadership responsible for each major focus area.
An official involved in this assessment said statistical, quantifiable measures were not documented to support the progress ratings; however, critical and candid comments were made during the process. The assessments cited "progress" or "satisfactory progress" in 38 of 42 rated areas, such as command focus on joint training, advanced concept technology demonstration project management, and monitoring of low-density/high-demand asset tempos. Progress was judged "unsatisfactory" in four areas: (1) the exercise requirements determination and worldwide scheduling process; (2) training and readiness oversight for assigned forces; (3) reserve component integration and training and readiness oversight; and (4) institutionalizing the force provider process. This assessment was discussed within the Command and during reviews of major focus areas and was updated to reflect changes in command responsibilities. USACOM, like other unified commands, uses several mechanisms to report progress and issues to DOD leadership and the Congress. These include periodic commanders-in-chief conferences, messages and reports to or discussions with the Chairman of the Joint Chiefs of Staff, and testimony before the Congress. Minutes were not kept of the commanders-in-chief conferences, but we obtained the Commander, USACOM, quarterly reports, which are to focus on the Command's key issues. Reports submitted to the Secretary of Defense between May 1995 and April 1998 painted a positive picture of USACOM's progress, citing activities in areas such as joint training exercises, theater missile defense, and advanced technology projects. The reports also covered operational issues but included little discussion of the Command's problems in implementing its functional roles. For example, none of the reports discussed the wide opposition to adaptive joint force packaging or USACOM's decision to change its approach, even though the Secretary of Defense approved the implementation plan for its functional roles, which included development of adaptive joint force packages. In congressional testimony in March 1997, the Commander of USACOM (1995-97) discussed the Command's annual accomplishments, plans for the future, and areas of concern. The Commander noted that U.S. military operations had evolved from specialized joint operations to a level approaching synergistic joint operations. In 1998 testimony, the current USACOM Commander reported continued progress, describing the military as having reached "an unprecedented level of jointness." USACOM's ultimate goal is to advance joint warfighting to a level it has defined as "coherent" joint operations, with all battle systems, communications systems, and information databases fully interoperable and linked by common joint doctrine. Figure 3.1 depicts the evolution from specialized and synergistic joint operations to coherent joint operations. At the conclusion of our review, USACOM was completing the development of a new strategic planning system to enhance its management of its major focus areas and facilitate strategic planning within the USACOM staff. Goals, objectives, and subobjectives were defined in each of its major focus areas, and an automated internal process was being established to help the Command track actions being taken in each area. The goals and objectives were designed to support the Command's overall mission to maximize U.S. military capability through joint training, force integration, and deployment of ready forces in support of worldwide operations.
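The measurement scheme of this planning system, described further below, amounts to a weighted-average rollup: weighted subobjective scores determine an objective's score, and weighted objective scores determine a goal's score. The following sketch illustrates the arithmetic only; the weights, progress scores, and zero-to-one scale are hypothetical assumptions, not the Command's actual values.

    # Hypothetical weighted rollup: subobjectives -> objectives -> goal.
    def weighted_score(items):
        """items: (weight, score) pairs; returns the weighted-average score."""
        total_weight = sum(weight for weight, _ in items)
        return sum(weight * score for weight, score in items) / total_weight

    # Subobjective progress on an assumed 0-to-1 scale, with invented weights.
    objective_a = weighted_score([(3, 0.8), (1, 0.5)])   # e.g., training tasks
    objective_b = weighted_score([(2, 0.6), (2, 1.0)])   # e.g., interoperability

    # Objectives roll up to their goal, again by invented weight.
    goal = weighted_score([(2, objective_a), (1, objective_b)])

    print(round(objective_a, 2), round(objective_b, 2), round(goal, 2))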
Table 3.1 provides examples of goals, objectives, and subobjectives in the joint force trainer, provider, and integrator major focus areas. The goals, along with the objectives and subobjectives necessary to achieve them, are established by officials in each major focus area. The objectives and subobjectives are to be understandable, relevant, attainable, and measurable. Progress in achieving the subobjectives becomes the measure of an objective's success, and progress on objectives is the measure of success in achieving a goal. The relative importance of each objective and subobjective is reflected in weights or values assigned to each and is used to measure progress. Objective and subjective assessments of progress are to be routinely made and reported. Command officials expect that in some areas progress will not be easy to measure and will require subjective judgments. USACOM officials believed the Command's new planning system, which became operational on October 20, 1998, met many of the expectations of the Government Performance and Results Act, which requires agencies to set goals, measure performance, and report on their accomplishments. The Command believed that the actions it plans to take in its major focus areas would ultimately improve the military capabilities of U.S. forces, which is the Command's mission. The officials, however, recognized that the planning system does not include assessments or measures that can be used to evaluate the Command's impact on military capabilities. Under the Results Act, agencies' performance plans are to include performance goals and measures to help assess whether the agency is successful in accomplishing its general goals and missions. The Congress anticipated that the Results Act principles would be institutionalized and practiced at all organizational levels of the federal government. Establishing such performance measures could be difficult, but they could help USACOM determine what it needs to do to improve its performance. DOD has begun to implement the Results Act at all organizational levels, and the Secretary of Defense tasked subordinate organizations in 1998 to align their programs with DOD program goals established under the act. Recognizing that the development of qualitative and quantitative performance measures to assess mission accomplishment has been slow, USACOM has provided training to its military officers on performance objectives. USACOM officials said that while the Command has begun to take steps to implement the principles of the act, they believed the Command needed additional implementation guidance from the Office of the Secretary of Defense. In the absence of specific assessments of USACOM's impact on joint operations, we asked representatives from the Joint Staff, USACOM and its service component commands, and supported geographic commands for their views on USACOM's value and contributions in advancing DOD's joint military capabilities. Opinions varied by command and functional role and ranged from USACOM having little or no impact to being a great contributor with a vital role. Generally speaking, Joint Staff officials considered USACOM to be of great value and performing an essential function, while views among the geographic commands were more reserved. USACOM and its service components believed the Command's joint task force headquarters training was among the best joint training available.
This training has allowed USACOM components' three-star commanders and their senior staffs to be trained without fielding thousands of troops and to concentrate on joint tasks considered essential to accomplishing a mission anywhere in the world. The Commander of USACOM cited this training as the best example of USACOM's success in affecting joint operations. He told us that USACOM has secured the funding it needs to do this training and has developed what he described as a "world-class" joint training program. Representatives of the geographic commands we visited believed USACOM's joint task force commander training has provided good joint experience to CONUS-based forces. They believed this training has enabled participants to perform more effectively as members of a joint task force staff. While these commands spoke well of the training, they have been slow to avail themselves of it and could not attribute any improvement in joint task force operations to it. The commands have not taken advantage of this training for several reasons. First, other geographic commands considered joint task force commander training for their headquarters staffs to be their own responsibility and were reluctant to turn to USACOM for assistance. Second, USACOM's joint task force commander training is conducted at the Command's Joint Training, Analysis and Simulation Center in Suffolk, Virginia. Thus, geographic commands would have to make a significant investment to deploy several hundred headquarters staff for up to 18 days to complete the three phases of USACOM's training. Third, the commands are not confident that the training at the Center provides a true picture of the way they would conduct an operation. That is, the scenarios USACOM uses may have limited application in the other geographic commands' regional areas of operational responsibility. The commands have, therefore, preferred to train their own forces, with assistance from the Joint Warfighting Center. Representatives from this Center have gone to the commands and assisted them with their training at no cost to the commands. In October 1998, the Center was assigned to USACOM. USACOM officials believed this would enhance the training support provided by the Command to geographic commands (see ch. 4). Indications are that the geographic commands are beginning to more fully use USACOM as a training support organization. According to the Commander of USACOM, the current generation of geographic commanders has been more receptive to USACOM support than its predecessors. Also, as USACOM adjusts its training to make it more relevant to other geographic commanders, the commands are requesting USACOM's support. In 1998, USACOM sent mobile training teams to the U.S. Central Command in support of an operation in Kuwait. The Command was also supporting the U.S. European Command in one of its major training exercises. U.S. Southern Command has requested support from USACOM for one of its major Caribbean joint exercises and asked the Command to schedule the training exercise for the next 3 years. Regarding interoperability training, USACOM's component commands believed the Command should be more involved in planning and executing training exercises. Most of this training consisted of existing service exercises selected for use as joint interoperability training. Some service component officials believed that without sufficient USACOM influence, the sponsoring services would be inclined to make these exercises too service-specific or self-serving.
For example, the Navy’s annual joint task force exercise has basically been a preparation for a carrier battle group to make its next deployment. The Air Force has participated, but Air Combat Command officials told us they did not believe they gained much joint training experience from the exercise. USACOM officials recognize that the Command has not given interoperability training the same level of emphasis as its joint task force training. They believed, however, that components’ use of the recently developed universal joint interoperability tasks list in planning this training would result in more joint orientation to the training. As the major joint force provider, USACOM was valued by the Joint Staff, other geographic commands, and its service component commands. The Joint Staff believed that USACOM, as a single joint command assigned the majority of the four services’ forces, has provided a more efficient way of obtaining forces to meet the mission needs of the other geographic commands. Prior to establishing USACOM, the Joint Staff dealt individually with each of the services to obtain the necessary forces. Now, the Joint Staff can go to USACOM, which can coordinate with its service component commands to identify available forces with the needed capabilities and recommend force options. The Chairman of the Joint Chiefs of Staff (1993-97) told us that forces have never been provided as efficiently as USACOM has done it and that forces were better trained and equipped when they arrived where needed. The geographic commands we visited that USACOM primarily supports viewed the Command as a dependable and reliable force provider. The U.S. Central Command stated that forces provided by USACOM have been well trained and have met the Command’s needs. The Command described USACOM forces as having performed exceptionally well in Operation Desert Thunder, in response to Iraq’s denial of access to its facilities to U.N. weapon inspectors in February 1998. The Command also stated that USACOM could provide forces more tailored to fighting in its area of responsibility than the U.S. European or Pacific Commands because USACOM forces have routinely deployed for exercises and missions in support of ongoing operations in their area. Similarly, U.S. European Command officials said that USACOM has been responsive to their Command’s force needs and was doing a good job as a force provider. The U.S. European Command also noted that USACOM has ensured equitable tasking among CONUS-based forces and has allowed the European Command to focus on the operation at hand. The U.S. Southern Command, with few forces of its own, believed that the withdrawal of U.S. forces from Panama throughout 1999 would make the Southern Command more dependent on USACOM for forces to support its exercise and operations requirements. In discussing its contributions as a major provider of forces, USACOM believed that it adds value by providing the Joint Staff with informed force selection inputs based on all capable forces available from across its service components. For example, the European Command requested that an Air Force engineering unit build a bridge in 1997. USACOM identified a Navy Seabees unit already deployed in Spain as an option. The European Command agreed to use this unit. USACOM believed that it has supported other geographic commands by providing well-trained forces and alerting them of any potential training needs when forces are deployed. 
USACOM and its service component commands viewed the Command as an "honest broker" that has drawn upon the capabilities of all the services, as necessary, to meet the mission requirements of the geographic commands. As pointed out by USACOM's Commander, while USACOM has not been involved in all deployment decisions concerning its assigned forces—such as the Navy's carrier battle groups or large Army units—and was not in a position to deny an available force to a supported command, the Command has served as a clearinghouse for high-demand forces. For example, USACOM provided optometrists for its mobile training teams deployed to Africa to train African troops for peacekeeping activities. Optometrists were needed to diagnose eye problems of troops who had difficulty seeing with night optical equipment. The Forces Command was unable to provide the needed personnel beyond the first deployment, so USACOM tasked its Atlantic Fleet component to provide personnel for the redeployment. In another example, in May 1997, an aerostat (radar balloon) that provided coverage in the Florida Straits went down. USACOM tasked the Navy's Atlantic Fleet to provide radar coverage every weekend with an E-2C aircraft squadron. When the balloon was not replaced as expected and the requirement continued, the Atlantic Fleet asked for relief from USACOM. USACOM adjudicated resources with the Air Combat Command so that the Air Force's E-3 aircraft would provide coverage for half of the time. USACOM's service component commands also saw the benefit in having a single unified command act as an arbitrator among themselves. USACOM can arbitrate differences between two of its component commands that can provide the same capability. It can provide the rationale as to why one should or should not be tasked to fill a particular requirement and make a decision based on such things as prior tasking and operating and personnel tempos. Its components also saw USACOM as their representative on issues with DOD and other organizations. In representing its components, for example, USACOM handled politically sensitive arrangements over several months with a U.S. embassy, through the State Department, to provide military support to a foreign government for a counterdrug operation conducted between July 1997 and February 1998. USACOM's involvement allowed its Air Force component, the Air Combat Command, to limit its involvement in the arrangements and concentrate on sourcing the assets and arranging logistics for the operation. The Commander of USACOM told us he considered joint force integration to be the Command's most important functional role. He believed that over the next 2 years the Command's integration efforts would gain more recognition for enhancing joint operational capabilities than its efforts in joint training. He said the Command was beginning to gain access to critical "levers of progress," such as the Joint Requirements Oversight Council, which would enhance its influence. He cited the Command's development—in collaboration with other geographic commands—of a theater ballistic missile defense capstone requirements document and its August 1998 approval by the Council as a demonstration of the Command's growing influence and impact. This document is to guide doctrine development and the acquisition programs for this joint mission. While approval was a very significant step for jointness, it raised important questions, including who will pay for joint requirements in service acquisition programs.
The services have opposed USACOM's role and methodology in developing joint requirements and do not believe they should be responsible for funding costs associated with joint requirements. The USACOM Commander believed the Command has made considerable progress in developing the process by which joint force integration is accomplished. He cited the Command's advanced concept technology demonstration projects with a joint emphasis as one of its primary means of enhancing force integration. He said, for example, that the Command's high-altitude endurance unmanned aerial vehicle project should soon provide aerial vehicles that give warfighters near-real-time, all-weather tactical radar and optical imagery. Views and knowledge about USACOM's integration role varied among the geographic commands we visited. Few commands were knowledgeable about USACOM's integration efforts, and those that were perceived them as closely aligned with the Command's joint force trainer and provider functions. While these commands were aware that USACOM had responded to some specific opportunities (for example, theater ballistic missile defense) in its integrator role, they described the Command's involvement in refining joint doctrine and improving systems interoperability as a responsibility shared among the commands. A representative of the Joint Staff's Director for Operational Plans and Interoperability told us USACOM's integrator role, as originally defined, faded along with adaptive joint force packages. He believed the Command's staff had worked hard to redefine this role and give it a meaningful purpose, and he considered the Command to be adding value and performing a vital mission in its redefined role. USACOM's evolving functional roles as joint force trainer, provider, and integrator have not been fully embraced throughout DOD. Except for USACOM's joint force trainer role, its functional roles and responsibilities have not been fully incorporated into DOD joint publications or fully accepted or understood by other commands and the military services. USACOM's functional responsibilities are expanding with the recent assignment of five additional joint staff activities, a new joint experimentation role, and ownership of the joint deployment process. USACOM's Commander believes these will have a positive impact on its existing functional roles. Over time, the Joint Staff and USACOM have incorporated the Command's joint force trainer role into joint publications. These documents provide a common understanding among DOD organizations of USACOM's role in the joint training of forces. USACOM's training role is identified in the Chairman, Joint Chiefs of Staff, joint training policy and discussed in detail in the Chairman's joint training manual and joint training master plan. The Chairman's joint training master plan makes USACOM responsible for the joint training of assigned CONUS-based forces, preparing them to deploy worldwide and participate as members of a joint task force. It also tasks the Command to train joint task forces not trained by other geographic commands. As defined in the joint training manual, USACOM develops the list of common operational joint tasks, with assistance from the geographic commands, the Joint Warfighting Center, and the Joint Staff. These common tasks, which are used by USACOM to train CONUS-based forces, have been adopted by the Chairman as a common standard for all joint training.
To further clarify its training role, USACOM issued a joint training plan that defines its role, responsibilities, and programs for the joint training of its assigned forces. This plan also discusses the Command's support to the Chairman's joint training program and other geographic commands' joint training. USACOM has also developed a joint task force headquarters master training guide that has been disseminated to all geographic commands and is used in developing training guides. While USACOM's force provider and integrator roles are described in broad terms in the Unified Command Plan, these roles have not been incorporated into joint guidance and publications. This lack of inclusion could hinder a common understanding about these roles and what is expected from USACOM. For example, key joint guidance for planning and executing military operations—the Joint Operational Planning and Execution System—does not specifically discuss USACOM's role as a force provider even though the Command has the preponderance of U.S. forces. The lack of inclusion in joint guidance and publications also may contribute to other DOD units' resistance or lack of support and hinder sufficient discussion of these roles in military academic education curricula, which use only approved doctrine and publications for class instruction. Internally, USACOM's provider role is generally defined in the Command's operations order and has recently been included as a major focus area. However, USACOM has not issued a standard operating procedure for its provider role. A standard operating procedure contains instructions covering those features of operations that lend themselves to a definite or standardized procedure without the loss of effectiveness. Such instructions delineate for staffs and organizations how they are to carry out their responsibilities. Not having such instructions has caused some difficulties and inefficiencies among the force provider staff, particularly newly assigned staff. USACOM officials stated that they plan to create a standard operating procedure but that the effort is an enormous task and has not been started. USACOM's integrator role is defined in the Command's operations order and included as a major focus area. The order notes that the training and providing processes do much to achieve the role's stated objective of enhanced joint capabilities but that effectively incorporating new technologies occurs primarily through the integration process. Steps in the integration process include developing a concept for new systems, formulating organizational structure, defining equipment requirements, establishing training, and developing and educating leaders. The major focus area for the integration role defines the role's three objectives and the tasks within each to enhance joint force operations. The Secretary of Defense continued to expand USACOM's roles and responsibilities in 1998, assigning the Command several activities, the new role of joint experimentation, and ownership of the joint deployment process. These changes significantly expand the Command's size and responsibilities. Additional changes that will further expand the Command's roles and responsibilities have been approved. Effective October 1998, five activities, formerly controlled by the Chairman of the Joint Chiefs of Staff, and about 1,100 of their authorized personnel were transferred to USACOM.
Table 4.1 identifies the activities and provides information on their locations, missions, fiscal year 1999 budget requests, and authorized military and civilian positions. According to USACOM's Commander, these activities will significantly enhance the Command's joint training and integration efforts. Each of the transferred activities has unique capabilities that complement one another and USACOM's existing organizations and activities. For example, by combining the Joint Warfare Analysis Center's analytical capabilities with USACOM's cruise missile support activity, the Command could make great strides in improving the capability to attack targets with precision munitions. Also, having the Joint Warfighting Center work with USACOM's Joint Training, Analysis and Simulation Center is anticipated to improve the joint training program, enhance DOD modeling and simulation efforts, and help to develop joint doctrine and implement Joint Vision 2010. USACOM's Commander also believed the Command's control of these activities would enhance its capability to analyze and develop solutions for interoperability issues and add to its ability to be the catalyst for change it is intended to be. The transfer of the five activities was driven by the Secretary of Defense's 1997 Defense Reform Initiative report, which examined approaches to streamline DOD headquarters organizations. Transferring the activities to the field is expected to enable the Joint Staff to better focus on its policy, direction, and oversight responsibilities. The Chairman also expects the transfer will improve joint warfighting and training by strengthening USACOM's role and capabilities for joint functional training support, joint warfighting support, joint doctrine, and Joint Vision 2010 development. USACOM plans to provide a single source of joint training and warfighting support for the warfighter, with a strong role in lessons learned, modeling and simulation, doctrine, and joint force capability experimentation. USACOM has developed an implementation plan and coordinated it with the Joint Staff, the leadership of the activities, other commands, and the military services. The intent is to integrate these activities into the Command's joint force trainer, provider, and integrator responsibilities. Little organizational change is anticipated in the near term, with the activities providing the same level and quality of support to the geographic commands. The Joint Warfighting Center and USACOM's joint training directorate will merge to achieve a totally integrated joint training team to support joint and multinational training and exercises. Under the plan, USACOM also expects to develop the foundation for "one-stop shopping" support for geographic commanders both before and during operations. In May 1998, the Secretary of Defense expanded USACOM's responsibilities by designating it executive agent for joint concept development and experimentation, effective October 1998. The charter directs USACOM to develop and implement an aggressive program of experimentation to foster innovation and the rapid fielding of new concepts and capabilities for joint operations and to evolve the military force through the "prepare now" strategy for the future. Joint experimentation is intended to facilitate the development of new joint doctrine, organizations, training and education, materiel, leadership, and people to ensure that the U.S. armed forces can meet future challenges across the full range of military operations.
The implementation plan for this new role provides estimates of the resources required for the joint experimentation program; defines the experimentation process; and describes how the program relates to, supports, and leverages the activities of the other components of the Joint Vision 2010 implementation process. The plan builds upon and mutually supports existing and future experimentation programs of the military services, the other unified commands, and the various defense research and development agencies. The plan was submitted to the Chairman of the Joint Chiefs of Staff in July 1998, with a staffing estimate of 127 additional personnel by September 1999, increasing to 171 by September 2000. In November 1998, USACOM had about 27 of these people assigned and projected it would have 151 assigned by October 2000. USACOM worked closely with the Office of the Secretary of Defense and the Joint Staff to establish the initial funding required to create the joint experimentation organization. USACOM requested about $41 million in fiscal year 1999, increasing to $80 million by 2002. Of the $41 million, $30 million was approved: $14.1 million was being redirected from two existing joint warfighting programs, and $15.9 million was being drawn from sources to be identified by the Office of the Under Secretary of Defense (Comptroller). The Secretary of Defense has said that DOD is committed to an aggressive program of experimentation to foster innovation and rapid fielding of new joint concepts and capabilities. Support by the Secretary and the Chairman of the Joint Chiefs of Staff is considered essential, particularly in areas where USACOM is unable to gain the support of the military services, which questioned the size and cost of USACOM's proposed experimentation program. Providing USACOM the resources to successfully implement the joint experimentation program will be an indicator of DOD's commitment to this endeavor. The Congress has expressed its strong support for joint warfighting experimentation. The National Defense Authorization Act for Fiscal Year 1999 (P.L. 105-261) expressed the sense of the Congress that the Commander of USACOM should be provided appropriate and sufficient resources for joint warfighting experimentation and the appropriate authority to execute assigned responsibilities. We plan to issue a report on the status of joint experimentation in March 1999. In October 1998, the Secretary of Defense, acting on a recommendation of the Chairman of the Joint Chiefs of Staff, made USACOM the owner of the joint deployment process. As process owner, USACOM is responsible for maintaining the effectiveness of the process while leading actions to substantially improve the overall efficiency of deployment-related activities. The Joint Staff is to provide USACOM policy guidance, and the U.S. Transportation Command is to provide transportation expertise. USACOM was developing a charter that would be coordinated with other DOD components and provide the basis for a DOD directive. The deployment process would include activities from the time forces and materiel are selected for deployment to the time they arrive where needed and are then returned to their home station or place of origin. According to the Secretary of Defense, USACOM's responsibilities as joint trainer, force provider, and joint force integrator of the bulk of the nation's combat forces form a solid foundation for USACOM to meet joint deployment process challenges.
The Secretary envisioned USACOM as a focal point to manage collaborative efforts to integrate mission-ready deploying forces into the supported geographic command’s joint operation area. USACOM officials considered this new responsibility to be a significant expansion of the Command’s joint force provider role. They believed that efforts to make the deployment process more efficient would also create opportunities to improve the efficiency of the Command’s provider role. USACOM’s authority, as the Secretary of Defense’s executive agent for the joint deployment process, to direct DOD components and activities to make changes to the process had yet to be defined. A Joint Staff official recognized this as a possible point of contention, particularly among the services, as the draft charter was being prepared for distribution for comment in February 1999.

In October 1998, the Deputy Secretary of Defense approved the realignment or restructuring of several additional joint activities affecting USACOM. These include giving USACOM representation in the joint test and evaluation program, transferring the services’ combat identification activities to USACOM, and assigning a new joint personnel recovery agency to USACOM. USACOM and the Chairman of the Joint Chiefs of Staff believed these actions strengthened USACOM’s joint force trainer and integrator roles as well as its emerging responsibilities for joint doctrine, warfighting concepts, and joint experimentation. USACOM’s representation on the joint test and evaluation program, which was to be effective by January 1999, gives the Command a voice on the senior advisory council, planning committee, and technical board for test and evaluation. Command and control of service combat identification programs and activities provides joint evaluation of friend-or-foe identification capabilities. The newly formed joint personnel recovery agency provides DOD personnel recovery support by combining the joint services survival, evasion, resistance, and escape agency with the combat search and rescue agency. USACOM is to assume these responsibilities in October 1999.

Retaining the effectiveness of America’s military when budgets are generally flat and readiness and modernization are costly requires a fuller integration of the capabilities of the military services. As the premier trainer, provider, and integrator of CONUS-based forces, USACOM has a particularly vital role if the U.S. military is to achieve new levels of effectiveness in joint warfighting. USACOM was established to be a catalyst for the transformation of DOD from a military service-oriented to a joint-oriented organization. But change is difficult and threatening, and it does not come easily, particularly in an organization with the history and tradition of DOD. This is reflected in the opposition to USACOM from the military services, which provide and equip the Command with its forces and maintain close ties to USACOM’s service component commands, and from the geographic commands it supports. As a result of this resistance, USACOM changed its roles as an integrator and provider of forces and sought new opportunities to effect change. Indications are that the current geographic commanders may be more supportive of USACOM than past commanders have been, as evidenced by their recent receptivity to USACOM’s support in developing and refining their joint training programs. Such support is likely to become increasingly important to the success of USACOM.
During its initial years, the Command made its greatest accomplishments in areas where there was little resistance to its role. The Commander of USACOM said that the Command would increasingly enter areas where others have a vested interest and that he would therefore expect the Command to encounter resistance from the military services and others in the future as it pursues actions to enhance joint military capabilities. While USACOM has taken actions to enhance joint training, to meet the force requirements of supported commands, and to improve the interoperability of systems and equipment, the value of its contributions to improved joint military capabilities is not clearly discernible. If the Command develops performance goals and measures consistent with the Results Act, it could assess and report on its performance in accomplishing its mission of maximizing military capabilities. The Command may need guidance from the Secretary of Defense in developing these goals and measures.

In addition to its evolving roles as joint force trainer, provider, and integrator, USACOM is now taking on important new, related responsibilities, including the management of five key joint activities. With the exception of training, these roles and responsibilities, both old and new, are largely undefined in DOD directives, instructions, and other policy documents, including joint doctrine and guidance. The Unified Command Plan, a classified document that serves as the charter for USACOM and the other unified commands, briefly identifies USACOM’s functional roles but does not define them in any detail. This absence of a clear delineation of the Command’s roles, authorities, and responsibilities could contribute to a lack of universal understanding and acceptance of USACOM and impede the Command’s efforts to enhance the joint operational capabilities of the armed forces.

While USACOM was established in 1993 by the Secretary of Defense with the open and strong leadership, endorsement, and support of the Chairman of the Joint Chiefs of Staff, General Colin Powell, the Command has not always received the same strong, visible support. Without such support, USACOM’s efforts to bring about change could be throttled by other, more established and influential DOD elements with priorities that compete with those of USACOM. Indications are that the current DOD leadership is prepared to support USACOM when it can demonstrate a compelling need for change. The adoption of the USACOM-developed theater ballistic missile defense capstone requirements document indicates that this rapidly evolving command may be gaining influence and support as the Secretary of Defense’s and the Chairman of the Joint Chiefs of Staff’s major advocate for jointness within the Department of Defense.

It is important that USACOM be able to evaluate its performance and impact in maximizing joint military capabilities. Such assessments, while very difficult to make, could help the Command better determine what it needs to do to enhance its performance. We therefore recommend that the Secretary of Defense direct the Commander in Chief of USACOM to adopt performance goals and measures that will enable the Command to assess its performance in accomplishing its mission of maximizing joint military capabilities.
Additionally, as USACOM attempts to advance the evolution of joint military capabilities and its role continues to expand, it is important that the Command’s roles and responsibilities be clearly defined, understood, and supported throughout DOD. Only USACOM’s roles and responsibilities in joint training have been so defined in DOD policy and guidance documents. Therefore, we recommend that the Secretary of Defense fully incorporate USACOM’s functional roles, authorities, and responsibilities in appropriate DOD directives and publications, including joint doctrine and guidance.

In written comments (see app. VII) on a draft of this report, DOD concurred with the recommendations. In its comments, DOD provided additional information on USACOM’s efforts to establish performance goals and objectives and on DOD’s efforts to incorporate USACOM’s functional roles, authorities, and responsibilities in appropriate DOD directives and publications. DOD noted that, as part of USACOM’s efforts to establish performance goals and objectives, the Command has provided training on performance measures to its military officers. Regarding our recommendation to incorporate USACOM’s functional roles, authorities, and responsibilities in appropriate DOD directives and publications, DOD said the 1999 Unified Command Plan, which is currently under its cyclic review process, will further define USACOM’s functional roles as they have evolved over the past 2 years. It also noted that key training documents have been, or are being, updated. We believe that, in addition to the Unified Command Plan and joint training documents, the joint guidance for planning and executing military operations—the Joint Operation Planning and Execution System process—should discuss USACOM’s role as the major provider of forces.
Pursuant to a congressional request, GAO provided information on Department of Defense (DOD) efforts to improve joint operations, focusing on: (1) the U.S. Atlantic Command's (USACOM) actions to establish itself as the joint force trainer, provider, and integrator of most continental U.S.-based forces; (2) views on the value of the Command's contributions to joint military capabilities; and (3) the recent expansion of the Command's responsibilities and its possible effects on the Command. GAO noted that: (1) USACOM has advanced joint training by developing a state-of-the-art joint task force commander training program and simulation training center; (2) the Command has also progressed in developing other elements of joint training, though not at the same level of maturity or intensity; (3) however, USACOM has had to make substantive changes in its approach to providing and integrating joint forces; (4) its initial approach was to develop ready force packages tailored to meet the geographic commands' spectrum of missions; (5) this was rebuffed by the military services and the geographic commands, which did not want or value USACOM's proactive role, and by the Chairman of the Joint Chiefs of Staff (1993-97), who did not see the utility of such force packages; (6) by late 1995, USACOM reverted to implementing a force-providing process that provides the Command with a much more limited role and ability to affect decisions and change; (7) the Command's force integrator role was separated from force providing and also redirected; (8) the establishment of performance goals and measures would help USACOM assess and report on the results of its efforts to improve joint military capabilities; (9) Congress anticipated that the Government Performance and Results Act principles would be institutionalized at all organizational levels in federal agencies; (10) the Command's recently instituted strategic planning system does not include performance measures that can be used to evaluate its impact on the military capabilities of U.S. forces; (11) the Office of the Secretary of Defense, the Joint Staff, and USACOM believed the Command was providing an important focus to the advancement of joint operations; (12) the views of the geographic commands were generally more reserved, with some benefiting more than others from USACOM's efforts; (13) the Command's new authorities are likely to increase its role and capabilities to provide training and joint warfighting support and enhance its ability to influence decisions within the department; and (14) although USACOM's roles are expanding and the number of functions and DOD organizational elements the Command has relationships with is significant, its roles and responsibilities are still largely not spelled out in key DOD policy and guidance, including joint doctrine, guidance, and other publications.
DOE’s LGP was designed to address the fundamental impediment for investors that stems from the high risks of clean energy projects, including technology risk—the risk that the new technology will not perform as expected—and execution risk—the risk that the borrower will not perform as expected. Companies can face obstacles in securing enough affordable financing to survive the “valley of death” between developing innovative technologies and commercializing them. Because the risks that lenders must assume to support new technologies can put private financing out of reach, companies may not be able to commercialize innovative technologies without the federal government’s financial support. According to the DOE loan program’s Executive Director, DOE loan guarantees lower the cost of capital for projects using innovative energy technologies, making them more competitive with conventional technologies and thus more attractive to lenders and equity investors. Moreover, according to the Executive Director, the program takes advantage of DOE’s expertise in analyzing the technical aspects of proposed projects, which can be difficult for private sector lenders without that expertise.

Until February 2009, the LGP was working exclusively under section 1703 of the Energy Policy Act of 2005, which authorized loan guarantees for new or innovative energy technologies that had not yet been commercialized. Congress had authorized DOE to guarantee approximately $34 billion in section 1703 loans by fiscal year 2009, after accounting for rescissions, but it did not appropriate funds to pay the “credit subsidy costs” of these guarantees. For section 1703 loan guarantees, each applicant was to pay the credit subsidy cost of its own project. These costs are defined as the estimated long-term cost to the government, in net present value terms, over the entire period the loans are outstanding, of covering interest subsidies, defaults, and delinquencies (not including administrative costs). Under the Federal Credit Reform Act of 1990, the credit subsidy cost for any guaranteed loan must be provided prior to a loan guarantee commitment.

In past reports, we found several issues with the LGP’s implementation of section 1703. For example, in our July 2008 report, we stated that risks inherent to the program make it difficult for DOE to estimate the credit subsidy costs it charges to borrowers. If DOE underestimates these costs, taxpayers will ultimately bear the costs of defaults or other shortfalls not covered by the borrowers’ payments into a credit subsidy pool that is to cover section 1703’s program-wide costs of default. In addition, we reported that, to the extent that certain types of projects or technologies are more likely than others to have fees that are too high to remain economically viable, the projects that do accept guarantees may be more heavily weighted toward lower-risk technologies and may not represent the full range of technologies targeted by the section 1703 program.

In February 2009, the Recovery Act amended the Energy Policy Act of 2005, authorizing the LGP to guarantee loans under section 1705. This section also provided $2.5 billion to pay applicants’ credit subsidy costs. This credit subsidy funding was available only to projects that began construction by September 30, 2011, among other requirements. DOE estimated that the funding would be sufficient to provide about $18 billion in guarantees under section 1705.
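To make the credit subsidy concept concrete, the sketch below illustrates a net-present-value calculation of the kind the Federal Credit Reform Act describes: expected government outflows from defaults (net of recoveries) less fee income, discounted over the life of the loan. It is a minimal illustration with hypothetical inputs, not DOE’s or OMB’s actual credit subsidy model.

```python
# Illustrative net-present-value calculation of a credit subsidy cost in the
# spirit of the Federal Credit Reform Act: expected government outflows from
# defaults (net of recoveries) less fee income, discounted over the life of
# the guaranteed loan. All inputs are hypothetical, and administrative costs
# are excluded, as the act requires.

def credit_subsidy_cost(principal, annual_default_rate, recovery_rate,
                        fee_rate, discount_rate, term_years):
    """Return the estimated subsidy cost in dollars (positive = net cost)."""
    npv = 0.0
    outstanding = principal
    for year in range(1, term_years + 1):
        expected_default = outstanding * annual_default_rate
        net_loss = expected_default * (1.0 - recovery_rate)  # outflow after recoveries
        fee_income = outstanding * fee_rate                  # inflow to the government
        npv += (net_loss - fee_income) / (1.0 + discount_rate) ** year
        # Reduce the balance by defaults and straight-line amortization.
        outstanding = max(outstanding - expected_default - principal / term_years, 0.0)
    return npv

# Hypothetical $100 million guaranteed loan.
cost = credit_subsidy_cost(principal=100e6, annual_default_rate=0.02,
                           recovery_rate=0.5, fee_rate=0.005,
                           discount_rate=0.03, term_years=20)
print(f"Estimated credit subsidy cost: ${cost / 1e6:.1f} million "
      f"({cost / 100e6:.1%} of the guaranteed amount)")
```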
Section 1705 authorized guarantees for commercial energy projects that employ renewable energy systems, electric power transmission systems, or leading-edge biofuels that meet certain criteria. Some of these are the same types of projects eligible under section 1703, which authorizes guarantees only for projects that use new or significantly improved technologies. Consequently, many projects that had applied under section 1703 became eligible to have their credit subsidy costs paid under section 1705. Because authority for the section 1705 loan guarantees expired on September 30, 2011, section 1703 is now the only remaining authority for the LGP. In April 2011, Congress appropriated $170 million to pay credit subsidy costs for section 1703 projects. Previously, these costs were to be paid exclusively by the applicants and were not federally funded. Congress also authorized DOE to extend eligibility under section 1703 to certain projects that had applied under section 1705 but did not receive a loan guarantee prior to the September 30, 2011, deadline.

DOE has issued nine calls for applications to the LGP. Each of these nine “solicitations” has specified the energy technologies it targets and provided criteria for the LGP to determine project eligibility and the likelihood of applicants repaying their loans (see table 1). To help ensure that these criteria were applied consistently and that each selected project provided a reasonable prospect of repayment, in March 2009 the LGP issued a credit policies and procedures manual for the program, outlining its policies and procedures for reviewing loan guarantee applications. As shown in figure 1, this review process is divided into three stages: intake, due diligence, and “conditional commitment to closing.” We use the term “review process” to refer to the entire process.

During the intake stage, the LGP assesses applications in a two-part process for most applicants. In part I, the LGP considers a project’s eligibility based on the requirements in the solicitation and relevant laws and regulations. Nuclear solicitation applications are also evaluated against programmatic, technical, and financial criteria during the part I review. Based on the LGP’s eligibility determination during the part I review, qualifying applicants are invited to submit a part II application. Generally, the LGP evaluates this application against programmatic, technical, and financial criteria to form a basis for ranking applications within each solicitation. Based on these initial rankings, the LGP selects certain applications for the due diligence stage.

During due diligence, the LGP performs a detailed examination of the project’s financial, technical, legal, and other qualifications to ensure that the LGP has identified and mitigated any risks that might affect the applicant’s ability to repay the loan guarantee. Key to identifying risks during due diligence are required reports by independent consultants on the technical and legal aspects of the project, and others, such as marketing reports, that the LGP uses when needed. The LGP also negotiates the terms of the loan guarantee with the applicant during due diligence. The proposed loan guarantee transaction is then submitted for review and/or approval by the following entities: DOE’s Credit Committee, consisting of senior executive service DOE officials, most of whom are not part of the LGP; DOE’s Credit Review Board (CRB), which consists of senior-level officials such as the deputy and undersecretaries of Energy;
the Office of Management and Budget (OMB), which reviews the LGP’s estimated credit subsidy range for each transaction; the Department of the Treasury; and the Secretary of Energy, who has final approval authority. Following the Secretary’s approval, the LGP offers the applicant a “conditional commitment” for a loan guarantee. If the applicant signs and returns the conditional commitment offer with the required fee, the offer becomes a conditional commitment, contingent on the applicant meeting conditions prior to closing. During the conditional commitment to closing stage, LGP officials and outside counsel prepare the final financing documents and ensure that the applicant has met all conditions required for closing, and the LGP obtains formal approval of the final credit subsidy cost from OMB. Prior to closing, applications may be rejected by the LGP. Similarly, applicants can withdraw at any point during the review process. Once these steps have been completed, the LGP “closes” the loan guarantee and, subject to the terms and conditions of the loan guarantee agreement, begins to disburse funds to the project. For further detail on the review process, see appendix III.

For 460 applications to the LGP from its nine solicitations, DOE has made $15.1 billion in loan guarantees and conditionally committed to an additional $15 billion, representing $30 billion of the $34 billion in loan guarantees authorized for the LGP. However, when we requested data from the LGP on the status of the applications to its nine solicitations, the LGP did not have consolidated data readily available but had to assemble them from various sources. As of September 30, 2011, the LGP had received 460 applications and made (closed) $15.1 billion in loan guarantees in response to 30 applications (7 percent of all applications), all under section 1705. It had not closed any guarantees under section 1703. In addition, the LGP had conditionally committed another $15 billion for 10 more applications (2 percent of all applications)—4 under section 1705 and 6 under section 1703. The closed loan guarantees obligated $1.9 billion of the $2.5 billion in credit subsidy appropriations funded by the Recovery Act for section 1705, leaving $600 million of the funds unused before the program expired. For section 1703 credit subsidy costs, the $170 million that Congress appropriated in April 2011 to pay such costs is available, but it may not cover all such costs because the legislation makes the funds available only for renewable energy or efficient end-use energy technologies. Applicants whose projects’ credit subsidy costs are not covered by the appropriation must pay their own credit subsidy costs. To date, credit subsidy costs for loan guarantees that DOE has closed have, on average, been about 12.5 percent of the guaranteed loan amounts.

The median loan guarantee requested for all applications was $141 million. Applications for nuclear power projects requested significantly larger loan amounts—a median of $7 billion—and requested the largest total dollar amount by type of technology—$117 billion. Applications for energy efficiency and renewable energy solicitations requested the second-largest dollar amount—$74 billion. Table 2 provides further details on the applications by solicitation and the resulting closed loan guarantees and conditional commitments. Appendix II provides further details on the individual committed and closed loan guarantees.
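The shares and rates cited above can be cross-checked directly from the figures in the text, as in the short sketch below. Note that the roughly 12.5 percent average is DOE’s figure across individual loans, so the ratio of totals computed here only approximates it.

```python
# Cross-check of the application and credit subsidy figures reported above,
# using only numbers that appear in the text.
applications_total = 460
closed_count, committed_count = 30, 10
closed_amount = 15.1e9      # dollars in closed section 1705 guarantees
subsidy_obligated = 1.9e9   # dollars obligated of the $2.5 billion appropriation

print(f"Closed share of applications:    {closed_count / applications_total:.0%}")     # ~7 percent
print(f"Committed share of applications: {committed_count / applications_total:.0%}")  # ~2 percent
print(f"Subsidy obligated vs. closed:    {subsidy_obligated / closed_amount:.1%}")     # ~12.6 percent
```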
For all 460 LGP applications submitted, figure 2 shows the total loan guarantee amounts requested by type of energy technology. Table 3 provides an overview, as of September 30, 2011, of the status of the 460 loan guarantee applications that the LGP received in response to its nine solicitations. Of the 460 applications, 66 were still in various stages of the approval process (intake and due diligence), 40 had received conditional commitment or were closed, and 354 had been withdrawn or rejected. DOE documents list a wide range of reasons for application withdrawals, including inability to submit application material in a timely manner, inability to secure feedstock, projects that faced many hurdles, applicants that did not pursue the project, and applicants that switched to another program. Solicitations that primarily targeted efficiency and renewable energy received the most applications, while those targeting nuclear front-end technologies (for the beginning of the nuclear fuel cycle), manufacturing, and fossil fuels received the fewest. The rejection rate was highest for applications submitted for two of the earlier solicitations and much lower for DOE’s FIPP, a more recent solicitation involving applications sponsored by private financial institutions. Since we began our review, two of the borrowers with closed loan guarantees have declared bankruptcy—Solyndra, Inc., with a $535 million loan guarantee for manufacturing cylindrical solar cells, and Beacon Power Corporation, with a $43 million loan guarantee for an energy storage technology.

The elapsed time for the LGP to process loan applications generally decreased over the course of the program, according to LGP data. LGP officials noted that the elapsed time between review stages includes the time the LGP waited for the applicants to prepare required documents for each stage. The process from start to closing was longest for applications to the earlier solicitations, issued solely under section 1703. The review process was shorter for applications under the four more recent solicitations, issued after the passage of section 1705. For example, the first solicitation, known as Mixed 06, had the longest overall time frames from intake to closing—a median of 1,442 days—and the FIPP solicitation had the shortest time frames—a median of 422 days. Applications to the FIPP solicitation had the shortest elapsed time because this program was carried out in conjunction with private lenders, who conducted their own reviews before submitting loan applications to the LGP. Table 4 shows the median number of days elapsed during each review stage, by solicitation, as of September 30, 2011.

From September 4, 2009, to July 29, 2011—a period of nearly 2 years—the LGP closed $5.8 billion in loan guarantees for 13 applications under section 1705. In the last few months before the authority for section 1705 loan guarantees expired, the LGP accelerated its closings of section 1705 applications that had reached the conditional commitment stage. Thus, over the last 2 months before the authority for section 1705 expired, the LGP closed an additional $9.3 billion in loan guarantees for 17 applications under section 1705. The program did not use about $600 million of the $2.5 billion that Congress appropriated to pay credit subsidy costs before the section 1705 authority expired, and these funds were no longer available for use by the LGP.
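The elapsed-time figures above are medians of the days between review milestones, grouped by solicitation. A minimal sketch of that computation follows, using invented records in place of actual LGP milestone data.

```python
# Sketch of the elapsed-time analysis described above: median days between
# review milestones, grouped by solicitation. The records are invented;
# actual LGP milestone dates would be substituted.
from collections import defaultdict
from datetime import date
from statistics import median

applications = [
    {"solicitation": "Mixed 06", "intake": date(2006, 10, 2), "closed": date(2010, 9, 13)},
    {"solicitation": "Mixed 06", "intake": date(2006, 11, 6), "closed": date(2010, 11, 1)},
    {"solicitation": "FIPP", "intake": date(2009, 12, 1), "closed": date(2011, 1, 27)},
    {"solicitation": "FIPP", "intake": date(2010, 2, 15), "closed": date(2011, 4, 20)},
]

elapsed_days = defaultdict(list)
for app in applications:
    elapsed_days[app["solicitation"]].append((app["closed"] - app["intake"]).days)

for solicitation, days in elapsed_days.items():
    print(f"{solicitation}: median {median(days):.0f} days from intake to closing")
```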
When we requested data from the LGP on the identity of applicants, status, and key dates for review of all the applications to its nine solicitations, the LGP did not have consolidated information on application status readily available. Instead, it had to assemble these data from various sources. To respond to our initial data request, LGP staff provided information from the following five sources:

- “Origination portfolio” spreadsheets, which contain information for applications that are in the due diligence stage of the review process. These spreadsheets contain identifying information, the solicitation applied under, commitment or closing status, type of technology, overall cost, proposed or closed loan amount, and expected or actual approval dates. Information in these spreadsheets is limited. For example, they do not contain the dates that the applicant completed each stage and do not have information on applications that have been rejected or withdrawn.

- “Tear sheet” summaries for each application, which give current status and basic facts about the project and its technology, cost, finances, and strengths and weaknesses. Tear sheets are updated periodically, or as needed, but LGP officials could not easily consolidate them because they were kept in word processing software that does not have analysis or summarization capabilities.

- “Application trackers,” which are spreadsheets that give basic descriptive information and the status of applications for some solicitations. LGP staff said they were maintained for most, but not all, solicitations.

- “Project Tracking Information” documents, showing graphic presentations of application status summaries, loan guarantee amounts requested, technology type, planned processing dates, and procurement schedules for technical reports. These documents were updated manually through December 20, 2010.

- “Credit subsidy forecasts,” which are documents that track the actual or projected credit subsidy costs of the section 1705 projects in various stages of the review process and the cumulative utilization of credit subsidy funding.

LGP staff needed over 3 months to assemble the data and fully resolve all the errors and omissions we identified. LGP staff also made further changes to some of these data when we presented our analysis of the data to the LGP in October 2011. According to LGP officials in 2010, the program had not maintained up-to-date and consolidated documents and data. An LGP official said at the time that the LGP considered it more important to process loan guarantee applications than to update records. Because it took months to assemble the information required for our review, it is also clear that the LGP could not have been conducting timely oversight of the program.

Federal regulations require that records be kept to facilitate an effective and accurate audit and performance evaluation. These regulations—along with guidance from the Department of the Treasury and OMB—provide that maintaining adequate and proper records of agency activities is essential to oversight of the management of public resources. In addition, under federal internal control standards, federal agencies are to employ control activities, such as accurately and promptly recording transactions and events, to maintain their relevance and value to management in controlling operations and making decisions. Under these standards, managers are to compare actual program performance to planned or expected results and analyze significant differences.
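A consolidated tracking record of the kind this report argues the LGP needs would merge the identifying, status, and milestone fields now scattered across the five sources listed above. The sketch below shows one hypothetical shape for such a record; the field names are ours for illustration and are not drawn from iPortal or any actual LGP system.

```python
# Hypothetical shape for a consolidated application-status record, merging
# fields now scattered across the five sources listed above. Field names are
# illustrative and are not drawn from any actual LGP system.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ApplicationRecord:
    applicant: str
    solicitation: str                  # e.g., "EERE 08", "FIPP"
    technology: str                    # e.g., "solar generation"
    authority: str                     # "1703" or "1705"
    requested_amount: float            # dollars
    status: str                        # intake / due diligence / committed / closed / rejected / withdrawn
    milestone_dates: dict = field(default_factory=dict)   # stage -> date completed
    credit_subsidy_estimate: Optional[float] = None       # dollars, if estimated

records = [
    ApplicationRecord("Example Solar LLC", "EERE 08", "solar generation", "1705",
                      141e6, "due diligence",
                      milestone_dates={"intake part I": date(2009, 3, 2)},
                      credit_subsidy_estimate=18e6),
]

# With every application in one structure, program-wide questions become
# one-line queries, such as the credit subsidy estimated for pending projects.
pending_subsidy = sum(r.credit_subsidy_estimate or 0.0 for r in records
                      if r.status not in ("closed", "rejected", "withdrawn"))
print(f"Credit subsidy estimated for pending applications: ${pending_subsidy:,.0f}")
```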
Managers cannot readily compare the LGP’s actual performance to expected results if the agency does not maintain consolidated information on applications to the program and their status. Moreover, the fact that it took the LGP 3 months to aggregate data on the status of applications for us suggests that its managers have not had readily accessible and up-to-date information and have not been doing such analysis on an ongoing basis. This is not consistent with one of the fundamental concepts of internal control, in which such control is not a single event but a series of actions and activities that occur throughout an entity’s operations on an ongoing basis. Thus, providing managers with access to aggregated, updated data could facilitate more efficient management of the LGP.

Furthermore, without consolidated data about applicants, LGP actions, and application status, LGP staff may not be able to identify weaknesses, if any, in the program’s application review process and approval procedures. For example, consolidated data on application status would provide a comprehensive snapshot of which steps of the review process are taking longer than expected and may need to be addressed. If program data were consolidated in an electronic tracking system, program managers could quickly access information important to managing the LGP, such as the current amount of credit subsidy obligated, as well as whether the agency is consistently complying with certain procedural requirements under its policies and regulations that govern the program. In addition, without such a system, the program cannot quickly respond to requests for information about the program as a whole from Congress or program auditors.

In March 2011, the LGP acknowledged the need for such a system. According to the March 2011 LGP summary of its proposed data management project, as the number of applications, the volume of data and records, and the number of employees increased, the existing method for storing and organizing program data and documents had become inadequate and needed to be replaced. In October 2011, LGP officials stated that while the LGP has not maintained a consolidated application tracking database across all solicitations, the program has started to develop a more comprehensive business management system that includes a records management system called “iPortal” that also could be used to track the status of applications. Officials did not provide a timetable for using iPortal to track the status of applications but said that work on it is under way. However, until iPortal or some other system can track applications’ status, LGP staff cannot be assured that the consolidated information on application status necessary to better manage the program will be available.

We identified 43 key steps in the LGP’s guidance establishing its review process for assessing and approving loan guarantee applications. The LGP followed most of its established review process, but the LGP’s actual process differed from this established process at least once on 11 of the 13 applications we reviewed, in part because the written procedures were outdated. In some cases, the LGP did not perform applicable review steps, and in other cases we could not determine whether the LGP had completed review steps. Furthermore, we identified more than 80 instances of deficiencies in documentation of the LGP’s reviews of the 13 applications, such as missing signatures or dates.
It is too early to evaluate the impact of the specific differences we identified on achieving program goals, but we and the DOE Inspector General have reported that omitting or poorly documenting review steps may pose increased financial risk to the taxpayer and result in inconsistent treatment of applications. We identified 43 key steps in the LGP credit policies and procedures manual and its other guidance that establish the LGP’s review process for assessing and approving loan guarantee applications. Not all 43 steps are necessary for every application, since the LGP’s guidance lets officials tailor aspects of the review process on an ad hoc basis to reflect the specific needs of the solicitation. For example, under the EERE 08 solicitation, the LGP required two parts of intake review for applications involving large projects that integrate multiple types of technologies, but it required only one part for small projects. Furthermore, according to LGP officials, they have changed the review process over time to improve efficiency and transparency, so the number of relevant steps also depends on when the LGP started reviewing a given application. LGP guidance recognizes the need for such flexibility and maintains that program standards and internal control need to be applied transparently and uniformly to protect the financial interests of the government. For more information on the key steps we identified, see appendix III. According to private lenders we contacted who finance energy projects, the LGP’s established review process is generally as stringent as or more stringent than those lenders’ own due diligence processes. For example, like the LGP, private lenders evaluate a project’s proposed expenses and income in detail to determine whether it will generate sufficient funds to support its debt payments. In addition, private lenders and the LGP both rely on third-party expertise to evaluate the technical, legal, and marketing risks that might affect the payments. Lenders who were not participating in the LGP generally agreed that the LGP’s process, if followed, should provide reasonable management of risk. Some lenders that sponsored applications under the FIPP solicitation said that the LGP’s review process was more rigorous than their own. They said this level of rigor was not warranted for the FIPP solicitation because it covered commercial technology, which is inherently less risky than the innovative technologies covered by other solicitations. Some private lenders we spoke with also noted that financing an innovative energy project involves a certain amount of risk that cannot be eliminated, and one lender said that a failure rate of 2 or 3 percent is common, even for the most experienced loan officers. However, we found that the LGP did not always follow the review process in its guidance. The LGP completed most of the applicable review steps for the 6 applications that we reviewed in full, but its actual process differed from the established process at least once on 5 of the 6 applications we reviewed. We also conducted a more limited examination of 7 additional applications, in which we examined the steps where the actual process differed from the established process for the first 6 applications. We again found that the LGP’s actual process differed from its established process at least once on 6 of the 7 applications. 
Table 5 summarizes review steps for which we either identified differences or could not determine whether the LGP completed a particular review step across all 13 applications. The 13 applications we reviewed represent all of the applications that had reached conditional commitment or closing as of December 31, 2010, excluding 3 applications that had applied under the earliest solicitation, since the LGP’s review process was substantially different for these 3 applications. For the 13 applications we examined, we found 19 differences between the actual reviews the LGP conducted and the applicable review process steps established in LGP guidance. In most of these instances, according to LGP officials, the LGP did not perform an applicable review step because it had made changes intended to improve the process but had not updated the program’s credit policies and procedures manual or other guidance governing the review process. The following describes the 19 differences we identified, along with the LGP’s explanations:

- In six cases, the LGP did not obtain CRB approval prior to due diligence, contrary to the March 2009 version of its credit policies and procedures manual. This version states that CRB approval is an important internal check to ensure that only the most promising projects proceed to due diligence. LGP officials explained that this step was not necessary for these applications because the CRB had verbally delegated to the LGP its authority to approve applications before these projects proceeded to due diligence. However, LGP documents indicate that the CRB delegated approval authority after these projects had proceeded to due diligence. According to an LGP official, the delegation of authority was not retroactive.

- In seven cases, the LGP did not obtain final due diligence reports from independent consultants prior to conditional commitment, as required by its credit policies and procedures manual. Through their reporting, these independent third parties provide key input to the LGP’s loan underwriting and credit subsidy analyses in technical, legal, and other areas, such as marketing, as necessary. LGP officials said that it was a preferable practice to proceed to conditional commitment with drafts of these reports and obtain a final report just prior to closing. They said this practice helps the LGP reduce financial risk, since it allows the LGP to base its decision to close the loan guarantee on final reports rather than reports completed one to several months earlier. An LGP official explained that this part of the review process had evolved to meet the program’s needs but that these changes were not yet reflected in the manual. However, the LGP does not appear to have implemented this change consistently. Specifically, over the course of several months in 2009 and 2010, the LGP alternated between the old and the new process concerning final due diligence reports from independent consultants. In commenting on a draft of this report, LGP officials said that in all cases they received final independent consultant reports before the closing of the loan guarantees. Because the LGP’s policies and procedures manual at the time required final reports at the conditional commitment stage, we reviewed the reports available at conditional commitment and did not review whether the LGP received final reports before closing.

- In three cases, the LGP conditionally committed to a loan guarantee before OMB had completed its informal review of the LGP’s credit subsidy cost estimate.
According to the credit policies and procedures manual, OMB should be notified each time the LGP estimates the credit subsidy cost range, and informal discussions between OMB and the LGP should ensue about the LGP estimate. This cost is to be paid by the borrower for all section 1703 projects to date and by the federal government for section 1705 projects. LGP officials explained that, in two of these cases, the LGP had provided OMB with their credit subsidy estimates but that OMB had not completed its review because there were unresolved issues with the LGP estimates. LGP officials did not provide an explanation for the third case. Contrary to the manual, LGP officials said that OMB’s informal review of the credit subsidy estimates for these applications was not a necessary prerequisite to conditional commitment because the actual credit subsidy cost is calculated just prior to closing and is formally approved by OMB. Furthermore, under section 1705, the government, rather than the borrower, was to pay credit subsidy costs. Accordingly, the LGP used these credit subsidy estimates for internal planning purposes rather than for calculating a fee to the applicant. In contrast, the LGP completed OMB’s informal review prior to conditionally committing to at least three of the other loan guarantees we reviewed—including one section 1705 project—and thus the LGP did not perform this step consistently across all projects. In its October 2011 update of its credit policies and procedures manual, the LGP retained the requirement that OMB review the LGP’s credit subsidy cost estimate prior to conditional commitment. Further, the updated guidance added that formal discussions with OMB may be required each time OMB reviews the LGP’s credit subsidy cost estimate and should result in OMB’s approval.

- In two cases, the LGP did not complete its required background check for project participants. The documents provided indicate that the LGP did not determine whether the applicants had any delinquent federal debt prior to conditional commitment. In one of these cases, LGP officials said that the delinquent federal debt check was completed after conditional commitment. In the other case, the documents indicate that the sponsor did not provide a statement on delinquent debt, and LGP officials confirmed that the LGP did not perform the delinquent debt check prior to conditional commitment.

- In one case, the LGP did not collect the full fee from an applicant at conditional commitment, as required by the EERE 08 solicitation. According to an LGP official, the LGP changed its policy to require 20 percent of this fee at conditional commitment instead of the full fee specified in the solicitation, in response to applicant feedback. This official said the policy change was documented in the EERE 09 solicitation, which was published on July 29, 2009. However, this particular application moved to conditional commitment on July 10, 2009, prior to the formal policy change.

As outlined in these cases, the LGP departed from its established procedures because, in part, the procedures had not been updated to reflect all current review practices. The version of the manual in use at the time of GAO’s review was dated March 5, 2009, even though the manual states that it was meant to be updated at least annually and more frequently if needed. The LGP issued its first update of its credit policies and procedures manual on October 6, 2011.
We reviewed the revised manual and found that the revisions addressed many of the differences that we identified between the LGP’s established and actual review processes. The revised manual also states that LGP analyses should be properly documented and stored in the new LGP electronic records management system. However, the revised guidance applies to loan guarantee applications processed after October 6, 2011, but not to the 13 applications we reviewed or to any of the 30 loan guarantees the LGP has closed to date.

In addition to the differences between the actual and established review processes, in another 18 cases, we could not determine whether the LGP had performed a given review step. In some of these cases, the documentation did not demonstrate that the LGP had applied the required criteria. In other cases, the documentation the LGP provided did not show that the step had been performed. The following discusses these cases:

- In one case, we could not determine whether LGP guidance calls for separate part I and part II technical reviews for a nuclear front-end application or allows for a combined part I and part II technical review. The LGP performed a combined part I and part II technical review.

- In eight cases, we could not determine the extent to which the LGP applied the required criteria for ranking applications to the EERE 08 solicitation. The LGP’s guidance for this solicitation requires this step to identify “early mover” projects for expedited due diligence. The LGP expedited four such applications, but the documentation demonstrated neither how the LGP used the required criteria to select applications to expedite nor why other applications were not selected.

- In one case, we could not determine whether the LGP completed its required background check for project participants. The documents provided indicated there were unresolved questions involving one participant’s involvement in a $17 billion bankruptcy and another’s pending civil suit.

- In one case, we could not determine whether the LGP had received a draft or final marketing report prior to conditional commitment in accordance with its guidance. The LGP provided a copy of the report prepared before closing but did not provide reports prepared before conditional commitment.

- In seven cases, the LGP either did not provide documents supporting OMB’s completion of its informal review of the LGP’s estimated credit subsidy range before conditional commitment, or the documentation the LGP provided was inconclusive.

We also found 82 additional documentation deficiencies in the 13 applications we reviewed. For example, in some cases, there were no dates or authors on the LGP documents. The documentation deficiencies make it difficult to determine, for example, whether steps occurred in the correct order or were executed by the appropriate official. The review stage with the fewest documentation deficiencies was conditional commitment to closing, where only 1 of the 82 deficiencies occurred. Table 6 shows the instances of deficient documentation that we identified.

During our review, the LGP did not have a central paper or electronic file containing all the documents supporting the key review steps we identified as being part of the review process. Instead, these documents were stored separately by various LGP staff and contractors in paper files and various electronic storage media. As a result, the documents were neither readily available for us to examine, nor could the LGP provide us with complete documentation in a timely manner.
For example, we requested documents supporting the LGP’s review for six applicants in January 2011. For one of the applications, we did not receive any of the requested documents supporting the LGP’s intake application reviews until April 2011. Furthermore, for some of the review steps, we did not receive documents responsive to our request until November 2011, and, as we discussed earlier, in 18 cases we did not receive sufficient documentation to determine whether the LGP performed a given review step.

Federal regulations and guidance from Treasury and OMB provide that maintaining adequate and proper records of agency activities is essential to accountability in the management of public resources and the protection of the legal and financial rights of the government and the public. Furthermore, under the federal standards for internal control, agencies are to clearly document internal control, and the documentation is to be readily available for examination in paper or electronic form. Moreover, the standards state that all documentation and records should be properly managed and maintained. As stated above, the LGP recognized the need for a recordkeeping system to properly manage and maintain documentation supporting project reviews. In March 2011, the LGP adopted a new records management system called “iPortal” to electronically store documents related to each loan application and issued guidance for using this system. As of November 1, 2011, LGP officials told us that the system was populated with data or records relevant to conditionally committed and closed loan guarantees and that they plan to fully populate it with documentation of the remaining applications in a few months. The LGP was able to provide us with some additional documents from its new system in response to an early draft of this report, but the LGP did not provide additional documentation sufficient to respond to all of the issues we identified. Accordingly, other oversight efforts may encounter similar problems with documentation despite the new system.

It is too early in the loan guarantees’ terms to assess whether skipping or poorly documenting review steps will result in problems with the guarantees or the program. However, we and the DOE Inspector General have reported that omitting or poorly documenting review steps may lead to a risk of default or other serious consequences. Skipping or poorly documenting steps of the process during intake can lead to several problems. First, it reduces the LGP’s assurance that it has treated applications consistently and equitably. This, in turn, raises the risk that the LGP will not select the projects most likely to meet its goals, which include deploying new energy technologies and ensuring a reasonable prospect of repayment. In July 2010, we reported that the inconsistent treatment of applicants to the LGP could also undermine public confidence in the legitimacy of the LGP’s decisions. Furthermore, DOE’s Inspector General reported in March 2011 that incomplete records may impede the LGP’s ability to ensure consistency in the administration of the program, make informed decisions, and provide information to Congress, OMB, and other oversight bodies. The Inspector General also stated that, in the event of legal action related to an application, poor documentation of the LGP’s decisions may hurt its ability to prove that it applied its procedures consistently and treated applicants equitably.
Moreover, incomplete records may leave DOE open to criticism that it exposed taxpayers to unacceptable financial risks. Differences between the actual and established review processes that occur during or after due diligence may also lead to serious consequences. These stages of the review process were established to help the LGP identify and mitigate risks. Omitting or poorly documenting its decisions during these stages may affect the LGP’s ability to fully assess and communicate the technical, financial, and other risks associated with projects. This could lead the program to issue guarantees to projects that pose an unacceptable risk of default. Complete and thorough documentation of decisions would further enable DOE to monitor the loan guarantees as projects are developed and implemented. Furthermore, without consistent documentation, the LGP may not be able to fully measure its performance and identify any weaknesses in its implementation of internal procedures.

Through the over $30 billion in loan guarantees and loan guarantee commitments for new and commercial energy technologies that DOE has made to date, the agency has set in motion a substantial federal effort to promote energy technology innovation and create jobs. DOE has also demonstrated its ability to make section 1705 of the program functional by closing on 30 loan guarantees. It has also improved the speed at which it was able to move section 1705 applications through its review process. To date, DOE has committed to six loan guarantees under section 1703 of the program, but it has not closed any section 1703 loan guarantees or otherwise demonstrated that the program is fully functional. Many of the section 1703 applications have been in process since 2008 or before. As DOE continues to implement section 1703 of the LGP, it is even more important that it fully implement a consolidated system for overseeing the application review process and that the LGP adhere to its review process and document decisions made under updated policies and procedures.

It is noteworthy that the due diligence process the LGP developed for loan guarantee applications may equal or exceed the processes that private lenders use to assess and mitigate project risks. However, DOE has not yet fully implemented a consolidated system for documenting and tracking its progress in reviewing applications. As a result, DOE may not readily access the information needed to manage the program effectively and to help ensure accountability for federal resources. Proper recordkeeping and documentation of program actions is essential to effective program management. The absence of such documentation may have prevented LGP managers, DOE, and Congress from having access to the timely and accurate information on applications necessary to manage the program, mitigate risk, report progress, and measure program performance. DOE began to implement a new records management system in 2011, and LGP staff stated that the new system will enable them to determine the status of loan guarantee applications and to document review decisions. However, the LGP has not yet fully populated the system with data or records on all applications it has received or with its decisions on them. Nor has DOE committed to a timetable to complete the implementation of the new records management system. Until the system has been fully implemented, it is unclear whether it will enable the LGP to both track applications and adequately document its review decisions.
In addition, DOE did not always follow its own process for reviewing applications and documenting its analysis and decisions, potentially increasing the taxpayer’s exposure to financial risk from an applicant’s default. DOE has not promptly updated its credit policies and procedures manual to reflect its changes in program practices, which has resulted in inconsistent application of those policies and procedures. It also has not completely documented its analysis and decisions made during reviews, which may undermine applicants’ and the public’s confidence in the legitimacy of its decisions. Furthermore, the absence of adequate documentation may make it difficult for DOE to defend its decisions on loan guarantees as sound and fair if it is questioned about the justification for and equity of those decisions. DOE has recently updated its credit policies and procedures manual, which, if followed and kept up to date, should help the agency address this issue.

To better ensure that LGP managers, DOE, and Congress have access to timely and accurate information on applications and reviews necessary to manage the program effectively and to mitigate risks, we recommend that the Secretary of Energy direct the Executive Director of the Loan Programs Office to take the following three actions:

- Commit to a timetable to fully implement a consolidated system that enables the tracking of the status of applications and that measures overall program performance.

- Ensure that the new records management system contains documents supporting past decisions, as well as those in the future.

- Regularly update the LGP’s credit policies and procedures manual to reflect current program practices to help ensure consistent treatment of applications to the program.

We provided a copy of our draft report to DOE for review and comment. DOE’s written comments, signed by the Acting Executive Director of the Loan Programs Office, did not make clear whether DOE generally agreed with our recommendations. Subsequent to the comment letter, the Acting Executive Director stated that DOE disagreed with the first recommendation and agreed with the second and third recommendations. In its written comments, DOE also provided technical and editorial comments, which were incorporated as appropriate. DOE’s comments and our responses to specific points can be found in appendix IV of this report.

Concerning our first recommendation that the LGP commit to a timetable to fully implement a consolidated system that enables the tracking of the status of applications and that measures overall program performance, DOE states in its written comments that the LGP believes it is important that our report distinguish between application tracking and records management. We believe we have adequately distinguished the need for application tracking from the need for management of documentation; these are addressed in separate sections of our report and in separate recommendations. DOE also states that the LGP has placed a high priority on records management and is currently implementing a consolidated, state-of-the-art records management system. In the statement subsequent to DOE’s written comments, the Acting Executive Director stated that the office did not agree to a hard timetable for implementing our first recommendation.
As stated in the report draft, under federal internal control standards, agencies are to employ control activities, such as accurately and promptly recording transactions and events, to maintain their relevance and value to management in controlling operations and making decisions. Because the LGP had to manually assemble the application status information we needed for this review, and because this process took over 3 months to accomplish, we continue to believe DOE should develop a consolidated system that enables the tracking of the status of applications and that measures overall program performance. This type of information will help the LGP better manage the program and respond to requests for information from Congress, auditors, or other interested parties. Concerning our second recommendation that the LGP ensure that its new records management system contains documents supporting past decisions as well as those in the future, subsequent to DOE’s written comments, the Acting Executive Director stated that DOE agreed. Concerning our third recommendation that the LGP regularly update the credit policies and procedures manual to reflect current program practices, subsequent to DOE’s written comments, the Acting Executive Director stated that DOE agreed.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, this report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

This appendix details the methods we used to examine the Department of Energy’s (DOE) Loan Guarantee Program (LGP). We have reported four times and testified three times on this program, including two previous reports in response to the mandate in the 2007 Revised Continuing Appropriations Resolution to review DOE’s execution of the LGP and to report our findings to the House and Senate Committees on Appropriations. (See Related GAO Products.) Because of questions regarding inconsistent treatment of applications raised by the most recent report in this mandated series, this report, also in response to the mandate, assesses (1) the status of the applications to the LGP’s nine solicitations and (2) the extent to which the LGP has adhered to its process for reviewing applications for loans that the LGP has committed to or closed.

To gather information on the program, we met with the LGP’s management and staff from each of the program’s divisions involved with the LGP’s review of loan guarantee applications from intake to closing. In general, we reviewed the laws, regulations, policies, and procedures governing the program and pertinent agency documents, such as solicitations announcing loan guarantee opportunities. We reviewed prior GAO and DOE Inspector General reports performed under or related to our mandate to audit the LGP. In addition, we gathered agency data and documents on the loan guarantee applications in process, those that had received a DOE commitment, and those that had been closed.
To determine the status of the applications to all nine of the solicitations for our first objective, we explored the LGP's available sources to see what data the program had compiled on the applications received and their current status in the review process. Because the LGP did not have comprehensive or complete application status data, we tailored a data request to collect data on the status of all 460 applications to the program. In consultation with agency officials, we prepared a data collection form requesting basic information on the identity, authority, amount requested, status, key milestone dates, and type of energy technology for all of the applications to date; the sketch following this discussion illustrates one way such a record could be structured. These data were to provide a current snapshot of the program by solicitation and allow analysis of various characteristics. To ease the data collection burden, we populated the spreadsheets for each solicitation with the limited data from available sources. LGP staff or contractors familiar with each solicitation completed the spreadsheets, and managers reviewed the spreadsheets before they were forwarded to GAO. We assessed the reliability of the data the LGP provided by reviewing these data, comparing them to other sources, and following up repeatedly with the agency to resolve questions and inconsistencies and to obtain missing data. This process enabled us to develop up-to-date, program-wide information on the status of applications, and the resulting data were complete enough to describe the status of the program and sufficiently reliable for our purposes. The LGP updated its March 2011 applicant status data as of July 29, 2011, and we obtained additional data on the conditional commitments and closings made by the September 30, 2011, expiration of the section 1705 authority for loan guarantees with a credit subsidy. To maintain consistency between the application status data initially provided by the LGP and later data updates, we use the terms application and project interchangeably, although in some cases multiple applications were submitted for a single project. To assess the LGP's execution of its review process for our second objective, we first analyzed the law, regulations, policies, procedures, and published solicitations for the program and interviewed agency staff to identify the criteria and the key review process steps for loan guarantees, as well as the documents that supported the process. We provided a list of the key review steps we identified to LGP officials and incorporated their feedback as appropriate. Based on the key review steps and supporting documentation identified by LGP staff, we developed a data collection instrument to analyze LGP documents and determine whether the LGP followed its review process for the applications reviewed. Because the LGP's review process varied across solicitations, we tailored the data collection instrument to meet the needs of the individual solicitations. We then selected a nonprobability sample of 6 applications from the 13 that had received conditional commitments from DOE or had progressed to closing by December 31, 2010, and had not applied under the Mixed 2006 solicitation, since the LGP's review process was substantially different for this solicitation and not directly comparable to later solicitations. We requested documentation for these 6 applications, representing a range of solicitations and project types.
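To illustrate the kind of consolidated record our data collection form requested, the following is a minimal sketch in Python. It is illustrative only: the field names and status values are ours, not the LGP's, and do not represent an actual LGP system.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ApplicationRecord:
    """One application's status snapshot (hypothetical field names)."""
    applicant: str                       # identity of the applicant
    authority: str                       # statutory authority, e.g., "1703" or "1705"
    solicitation: str                    # solicitation applied under
    amount_requested_dollars: float      # loan guarantee amount sought
    technology: str                      # type of energy technology
    status: str                          # e.g., "in review", "conditional commitment", "closed"
    date_received: date                  # key milestone dates follow
    date_committed: Optional[date] = None
    date_closed: Optional[date] = None
```

A consolidated store of records along these lines would have allowed the program to answer our data request directly rather than assembling the information manually from various sources.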
We selected our initial sample to represent each of the five solicitations where applications had reached conditional commitment and different LGP investment officers, to reduce the burden on LGP staff. We requested the documents supporting the LGP's review process from intake to closing and examined them to determine whether the applicable review steps were carried out. Although we examined whether the applicable review steps were carried out, we did not evaluate the content of the documents or the quality of the work supporting them. Where the documents were not clear about completion of the process, showed potential differences from the review process, or raised questions, we followed up with program officials to obtain an explanation and, as applicable, documentation supporting the explanation. On key questions where we identified differences from the review process for the initial sample of 6, we conducted a targeted review of documents for the 7 remaining applications that had reached conditional commitment or closed prior to December 31, 2010, excluding Mixed 2006 applicants. The six loan guarantee application files reviewed in full and the seven files reviewed in part were a nongeneralizable sample of applications. To identify the initial universe of private lenders with experience financing energy projects, we reviewed the list of financial institutions that had submitted applications to the LGP under the Financial Institution Partnership Program (FIPP) solicitation. We used these firms as a starting point because of their knowledge of DOE's program and processes. To identify financial institutions involved in energy sector project finance outside of FIPP, we searched or contacted industry associations, industry conferences, and other industry groups in the energy sectors that LGP solicitations have targeted to date. We interviewed seven private lenders identified through this process, using a set of standard questions and an outline of DOE's review process, to gain insight into how that process compares with the review process for underwriting loans in the private sector. We conducted this performance audit from September 2010 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following tables provide basic details on the loan guarantee applications that received a conditional commitment by September 30, 2011, or had proceeded to closing by that date. Table 7 lists applications under section 1703 with conditional commitments. Table 8 lists section 1705-eligible applications with conditional commitments that did not reach closing by the expiration of the section 1705 authority on September 30, 2011. Table 9 lists the section 1705 applications with conditional commitments that reached closing by the expiration of the section 1705 authority on September 30, 2011. Table 10 provides basic details about key review tasks in LGP's process for reviewing and approving loan guarantee applications, as identified from our review of relevant laws, regulations, LGP guidance, published solicitations, and interviews with LGP officials.
These tasks formed the basis for our examination of LGP files to determine whether LGP followed its review process for each of the 13 applications that had received conditional commitments from DOE or had progressed to closing by December 31, 2010, and had not applied under the Mixed 2006 solicitation. Accordingly, the tasks listed below reflect LGP's review process for the applications we reviewed and do not reflect LGP's review process for applicants to the Mixed 2006 solicitation, which was substantially different and not directly comparable to later solicitations. Additionally, since we found minor variations in LGP's review process across the solicitations, we have noted below which tasks are applicable only under certain solicitations. If no exceptions are listed, then the particular task is applicable across all the relevant solicitations. 1. We disagree with DOE's assertion that our findings relate only to procedures that LGP had in place in 2009 and early 2010. We compared LGP's actual process to its established process for each of the applications that reached closing or conditional commitment by December 31, 2010. As we note in the report, LGP did not revise its policies and procedures manual until October 2011, so the same established procedures were in place for all of the applications that closed by September 30, 2011. We did not review in depth any of the applications that were committed or closed during 2011, in part because it took through November 2011 for LGP to respond to our repeated requests for available documentation for the applications closed or committed to through 2010. Our 2010 report on LGP (GAO-10-627) and this report covered five of the same applications. We examined DOE's review process for these applications in much more depth for this report than for the previous one. We did take into account changes in LGP procedures, systems, and other improvements as part of our review, as reflected in the references to LGP's new records management system and its updated policies and procedures manual. We also took into account changes in LGP policies and procedures that affected the 13 files that we reviewed, when LGP was able to document that these changes had occurred. 2. As noted in the report, these systems were not fully implemented at the time we were gathering data for our review, and this is still the case, according to DOE's written comments, dated February 23, 2012. 3. As stated above, we disagree with LGP's statement that our findings relate only to procedures that LGP had in place in 2009 and early 2010. As we note in the report, LGP did not revise its policies and procedures manual until October 2011, so the same established procedures were in place for all of the applications that closed by September 30, 2011. The report describes LGP's efforts to update its documentation management and tracking systems and notes that none of these were fully implemented at the time of our review. 4. DOE disagrees with the recommendation to implement an application tracking system. However, as noted in our report and DOE's comments, LGP is in the process of implementing a consolidated, state-of-the-art business management system that DOE believes may address this need. As we stated in the draft report, under federal internal control standards, federal agencies are to employ control activities, such as accurately and promptly recording transactions and events, to maintain their relevance and value to management in controlling operations and making decisions.
Because LGP had to manually assemble the application status information we needed for this review, and because this process took the program over 3 months to accomplish, we continue to believe DOE should develop a consolidated system that enables the tracking of the status of applications and that measures overall program performance. This type of information will help LGP better manage the program and respond to requests for information from Congress, auditors, or other interested parties. In addition to the individual named above, Karla Springer, Assistant Director; Marcia Carlsen; Cindy Gilbert; Cathy Hurley; Emily Owens; John Scott; Ben Shouse; Carol Shulman; Barbara Timmerman; and Lisa Van Arsdale made key contributions to this report. Recovery Act: Status of Department of Energy's Obligations and Spending. GAO-11-483T. Washington, D.C.: March 17, 2011. Department of Energy: Further Actions Are Needed to Improve DOE's Ability to Evaluate and Implement the Loan Guarantee Program. GAO-10-627. Washington, D.C.: July 12, 2010. Recovery Act: Factors Affecting the Department of Energy's Program Implementation. GAO-10-497T. Washington, D.C.: March 4, 2010. American Recovery and Reinvestment Act: GAO's Role in Helping to Ensure Accountability and Transparency for Science Funding. GAO-09-515T. Washington, D.C.: March 19, 2009. Department of Energy: New Loan Guarantee Program Should Complete Activities Necessary for Effective and Accountable Program Management. GAO-08-750. Washington, D.C.: July 7, 2008. Department of Energy: Observations on Actions to Implement the New Loan Guarantee Program for Innovative Technologies. GAO-07-798T. Washington, D.C.: September 24, 2007. The Department of Energy: Key Steps Needed to Help Ensure the Success of the New Loan Guarantee Program for Innovative Technologies by Better Managing Its Financial Risk. GAO-07-339R. Washington, D.C.: February 28, 2007.
The Department of Energy's (DOE) Loan Guarantee Program (LGP) was created by section 1703 of the Energy Policy Act of 2005 to guarantee loans for innovative energy projects. Currently, DOE is authorized to make up to $34 billion in section 1703 loan guarantees. In February 2009, the American Recovery and Reinvestment Act added section 1705, making certain commercial technologies that could start construction by September 30, 2011, eligible for loan guarantees. It provided $6 billion in appropriations, later reduced by transfer and rescission to $2.5 billion; these funds could cover DOE's costs for an estimated $18 billion in additional loan guarantees. GAO has an ongoing mandate to review the program's implementation. Because of concerns raised in prior work, GAO assessed (1) the status of the applications to the LGP and (2) for loans that the LGP has committed to, or made, the extent to which the program has adhered to its process for reviewing applications. GAO analyzed relevant legislation, regulations, and guidance; prior audits; and LGP data, documents, and applications. GAO also interviewed DOE officials and private lenders with experience in energy project lending. The Department of Energy (DOE) has made $15 billion in loan guarantees and conditionally committed to an additional $15 billion, but the program does not have the consolidated data on application status needed to facilitate efficient management and program oversight. Of the 460 applications to the Loan Guarantee Program (LGP), DOE has made loan guarantees for 7 percent and committed to an additional 2 percent. The time the LGP took to review loan applications decreased over the course of the program, according to GAO's analysis of LGP data. However, when GAO requested data from the LGP on the status of these applications, the LGP did not have consolidated data readily available and had to assemble these data over several months from various sources. Without consolidated data on applicants, LGP managers do not have readily accessible information that would facilitate more efficient program management, and LGP staff may not be able to identify weaknesses, if any, in the program's application review process and approval procedures. Furthermore, because it took months to assemble the data required for GAO's review, it is also clear that the data were not readily available to conduct timely oversight of the program. LGP officials have acknowledged the need for a consolidated system and said that the program has begun developing a comprehensive business management system that could also be used to track the status of LGP applications. However, the LGP has not committed to a timetable to fully implement this system. The LGP adhered to most of its established process for reviewing applications, but its actual process differed from its established process at least once on 11 of the 13 applications GAO reviewed. The private lenders GAO interviewed who finance energy projects found that the LGP's established review process was generally as stringent as, or more stringent than, their own. However, GAO found that the reviews the LGP conducted sometimes differed from its established process; for example, actual reviews skipped applicable review steps. In other cases, GAO could not determine whether the LGP had performed some established review steps because of poor documentation.
Omitting or poorly documenting reviews reduces the LGP's assurance that it has treated applicants consistently and equitably and, in some cases, may affect the LGP's ability to fully assess and mitigate project risks. Furthermore, the absence of adequate documentation may make it difficult for DOE to defend its decisions on loan guarantees as sound and fair if it is questioned about the justification for and equity of those decisions. One cause of the differences between established and actual processes was that, according to LGP staff, they were following revised procedures that had not yet been incorporated into the credit policies and procedures manual, which governs much of the LGP's established review process. In particular, the version of the manual in use at the time of GAO's review was dated March 5, 2009, even though the manual states that it is to be updated at least annually, and more frequently as needed. The updated manual, dated October 6, 2011, addresses many of the differences GAO identified. Officials also demonstrated that LGP had taken steps to address the documentation issues by beginning to implement its new document management system. However, by the close of GAO's review, LGP could not provide sufficient documentation to resolve the issues identified in the review. GAO recommends that the Secretary of Energy establish a timetable for, and fully implement, a consolidated system to provide information on LGP applications and reviews and regularly update program policies and procedures. DOE disagreed with the first of GAO's three recommendations; GAO continues to believe that a consolidated system would enhance program management.
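As a rough illustration of the kind of consolidated reporting such a system could support (counting applications by status and measuring review times), the following sketch operates on simple status records like the one sketched earlier in this report. The records, field names, and dates here are hypothetical, not actual LGP data.

```python
from collections import Counter
from datetime import date

# Hypothetical application records; field names and values are illustrative only.
applications = [
    {"status": "closed", "received": date(2009, 8, 3), "committed": date(2010, 3, 1)},
    {"status": "conditional commitment", "received": date(2010, 1, 15), "committed": date(2010, 9, 30)},
    {"status": "in review", "received": date(2011, 2, 1), "committed": None},
]

# Snapshot of the portfolio by status -- the view GAO had to wait months for.
print(Counter(app["status"] for app in applications))

# Average days from receipt to conditional commitment, a simple review-time measure.
durations = [(a["committed"] - a["received"]).days for a in applications if a["committed"]]
print(sum(durations) / len(durations))
```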
The fiscal year 2005 expenditure plan satisfied or partially satisfied the conditions specified in DHS's appropriations act. Specifically, the plan, including related program documentation and program officials' statements, satisfied or provided for satisfying all key aspects of (1) meeting the capital planning and investment control review requirements of the Office of Management and Budget (OMB) and (2) review and approval by DHS and OMB. The plan partially satisfied the conditions that specify (1) compliance with the DHS enterprise architecture and (2) compliance with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. CBP is working toward addressing our open recommendations. Each recommendation, along with the status of actions to address it, is summarized below. Develop and implement a rigorous and analytically verifiable cost-estimating program that embodies the tenets of effective estimating as defined in the Software Engineering Institute's (SEI) institutional and project-specific estimating models. The CBP Modernization Office's (CBPMO) implementation of this recommendation is in progress. CBPMO has (1) defined and documented processes for estimating expenditure plan costs (including management reserve costs); (2) hired a contractor to develop cost estimates, including contract task orders, that are independent of the ACE development contractor's estimates; and (3) tasked a support contractor with evaluating the independent estimates and the development contractor's estimates against SEI criteria. According to the summary-level results of this evaluation, the independent estimates either satisfied or partially satisfied the SEI criteria, and the development contractor's estimates satisfied or partially satisfied all but two of the seven SEI criteria. Ensure that future expenditure plans are based on cost estimates that are reconciled with independent cost estimates. CBPMO's implementation of this recommendation is complete with respect to the fiscal year 2005 expenditure plan. In August 2004, CBP's support contractor completed an analysis comparing the cost estimates in the fiscal year 2005 expenditure plan (which are based on the ACE development contractor's cost estimates) with the estimate prepared by CBPMO's independent cost estimating contractor; this analysis concluded that the two estimates are consistent. Immediately develop and implement a human capital management strategy that provides both near- and long-term solutions to the program office's human capital capacity limitations, and report quarterly to the appropriations committees on the progress of efforts to do so. CBPMO's implementation of this recommendation is in progress, and it has reported on its actions to the Congress. Following our recommendation, CBPMO provided reports dated March 31, 2004, and June 30, 2004, to the appropriations committees on its human capital activities, including development of a staffing plan that identifies the positions it needs to manage ACE. However, in December 2004, CBPMO implemented a reorganization of the modernization office, which rendered the staffing plan out of date. As part of this reorganization, CBP transferred government and contractor personnel who have responsibility for the Automated Commercial System, the Automated Targeting System, and ACE training from non-CBPMO organizational units to CBPMO. According to CBPMO, this change is expected to eliminate redundant ACE-related program management efforts.
Have future ACE expenditure plans specifically address any proposals or plans, whether tentative or approved, for extending and using ACE infrastructure to support other homeland security applications, including any impact on ACE of such proposals and plans. CBP's implementation of this recommendation is in progress. In our fiscal year 2004 expenditure plan review, we reported that CBPMO had discussed collaboration opportunities with DHS's United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program to address the potential for ACE infrastructure, data, and applications to support US-VISIT. Since then, ACE and US-VISIT managers have again met to identify potential areas for collaboration between the two programs and to clarify how the programs can best support the DHS mission. The US-VISIT and ACE programs have formed collaboration teams that have drafted team charters, identified specific collaboration opportunities, developed timelines and next steps, and briefed ACE and US-VISIT program officials on the teams' progress and activities. Establish an independent verification and validation (IV&V) function to assist CBP in overseeing contractor efforts, such as testing, and ensure the independence of the IV&V agent. CBP has completed its implementation of this recommendation. To ensure independence, CBPMO has selected an IV&V contractor that, according to CBP officials, has had no prior involvement in the modernization program. The IV&V contractor is to be responsible for reviewing ACE products and management processes and is to report directly to the CBP chief information officer. Define metrics, and collect and use associated measurements, for determining whether prior and future program management improvements are successful. CBPMO's implementation of this recommendation is in progress. CBPMO has implemented a program that generally focuses on measuring the ACE development contractor's performance through the use of earned value management, metrics for the timeliness and quality of deliverables, and risk and issue disposition reporting. Additionally, it is planning to broaden its program to encompass metrics and measures for determining progress toward achieving desired business results and acquisition process maturity. The plan for expanding the metrics program is scheduled for approval in early 2005. Reconsider the ACE acquisition schedule and cost estimates in light of early release problems, including these early releases' cascading effects on future releases and their relatively small size compared to later releases, and in light of the need to avoid the past levels of concurrency among activities within and between releases. CBP has completed its implementation of this recommendation. In response to the cost overrun on Releases 3 and 4, CBPMO and the ACE development contractor established a new cost baseline of $196 million for these releases, extended the associated baseline schedule, and began reporting schedule and cost performance relative to the new baselines. Additionally, in July 2004, a new version of the ACE Program Plan was developed that rebaselined the ACE program, extending delivery of the last ACE release from fiscal year 2007 to fiscal year 2010, adding a new screening and targeting release, and increasing the ACE life-cycle cost estimate by about $1 billion, to $3.1 billion. Finally, the new program schedule reflects less concurrency between future releases.
Report quarterly to the House and Senate Appropriations Committees on efforts to address open GAO recommendations. CBP's implementation of this recommendation is in progress. CBP has submitted reports to the committees on its efforts to address open GAO recommendations for the quarters ending March 31, 2004, and June 30, 2004. CBPMO plans to submit a report for the quarter ending September 30, 2004, after it is approved by DHS and OMB. We made observations related to ACE performance, use, testing, development, cost and schedule performance, and expenditure planning. An overview of the observations follows: Initial ACE releases have largely met a key service level agreement. According to a service level agreement between the ACE development contractor and CBPMO, 99.9 percent of all ACE transactions are to be executed successfully each day. The development contractor reports that ACE has met this requirement on all but 11 days since February 1, 2004; it attributed one problem, which accounted for 5 successive days of missed service levels, to CBPMO's focus on meeting schedule commitments. (A simple illustration of this daily threshold test, and of defect-based milestone exit criteria, appears below.) Progress toward establishing ACE user accounts has not met expectations. CBPMO established a goal of activating 1,100 ACE importer accounts by February 25, 2005, when Release 4 is to become operational. Weekly targets were established to help measure CBPMO's progress toward reaching the overall goal. However, CBPMO has not reached any of its weekly targets, and the gap between the actual and targeted number of activated accounts has continued to grow. To illustrate, as of November 26, 2004, the goal was 600 activated accounts and the actual number was 311. Release 3 testing and pilot activities were delayed and have produced system defect trends that raise questions about decisions to pass key milestones and about the state of system maturity. Release 3 test phases and pilot activities were delayed and revealed system defects, some of which remained open at the time decisions were made to pass key life-cycle milestones. In particular, we observed the following: Release 3 integration testing started later than planned, took longer than expected, and was declared successful despite open defects that prevented the system from performing as intended. For example, the test readiness milestone was passed despite the presence of 90 severe defects. Release 3 acceptance testing started later than planned, concluded later than planned, and was declared successful despite a material inventory of open defects. For example, the production readiness milestone was passed despite the presence of 18 severe defects. Release 3 pilot activities, including user acceptance testing, were declared successful despite the presence of severe defects. For example, the operational readiness milestone was passed despite the presence of 6 severe defects. The current state of Release 3 maturity is unclear because defect data reported since user acceptance testing are not reliable. Release 4 test phases were delayed and overlapped, and revealed a higher-than-expected volume and significance of defects, raising questions about decisions to pass key milestones and about the state of system maturity. In particular, we observed the following: Release 4 testing revealed a considerably higher than expected number of material defects. Specifically, 3,059 material defects were reported, compared with the 1,453 estimated, as of the November 23, 2004, production readiness milestone.
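The service level and milestone observations above rest on simple quantitative tests. The following minimal sketch is for illustration only: the 99.9 percent figure comes from the service level agreement described above, but the function names and the zero-defect exit ceiling are our assumptions, not CBP's actual criteria.

```python
SLA_THRESHOLD = 0.999  # 99.9 percent of each day's ACE transactions must succeed

def sla_met(successful: int, total: int) -> bool:
    """Return True if the day's transaction success rate meets the SLA."""
    return total > 0 and successful / total >= SLA_THRESHOLD

def ready_to_exit_milestone(open_severe_defects: int, ceiling: int = 0) -> bool:
    """Illustrative exit criterion: pass a readiness milestone only if open
    severe defects are at or below an agreed ceiling (assumed zero here)."""
    return open_severe_defects <= ceiling

# Hypothetical day: 99,950 of 100,000 transactions succeed (99.95% >= 99.9%).
print(sla_met(successful=99_950, total=100_000))        # True

# The Release 3 test readiness milestone was passed with 90 severe defects open;
# under a zero-tolerance criterion it would not have been.
print(ready_to_exit_milestone(open_severe_defects=90))  # False
```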
Changes in the Release 4 integration and acceptance testing schedule resulted in tests being conducted concurrently. As we previously reported, concurrent test activities increase risk and have contributed to past ACE cost and schedule problems. The defect profile for Release 4 shows improvements in resolving defects, but critical and severe defects remain in the operational system. Specifically, as of November 30, 2004, about 1.5 weeks before deployment of the Release 4 pilot, 33 material defects were present. Performance against the revised cost and schedule estimates for Releases 3 and 4 has been mixed. Since the cost and schedule for Releases 3 and 4 were revised in April 2004, work has been completed under the budgeted cost, but it is being completed behind schedule. To improve schedule performance, resources targeted for later releases have been retained on Release 4 longer than planned. While this has improved performance against the schedule, it has adversely affected cost performance. The fiscal year 2005 expenditure plan does not adequately describe progress against commitments (e.g., ACE capabilities, schedule, cost, and benefits) made in previous plans. In the fiscal year 2004 expenditure plan, CBPMO committed to, for example, acquiring infrastructure for ACE releases and to defining and designing an ACE release that was intended to provide additional account management functionality. However, the current plan described neither the status of infrastructure acquisition nor progress toward defining and designing the planned account management functionality. Also, the current plan included a schedule for developing ACE releases, but it neither reported progress relative to the schedule presented in the fiscal year 2004 plan nor explained how the individual releases and their respective schedules were affected by the rebaselining that occurred after the fiscal year 2004 plan was submitted. Some key bases for the commitments made in the fiscal year 2005 expenditure plan have changed, raising questions as to the plan's currency and relevance. Neither the expenditure plan nor the program plan reflected several program developments, including the following: A key Release 5 assumption made in the program and expenditure plans regarding development, and thus cost and delivery, of the multimodal manifest functionality is no longer valid. Additional releases, and thus cost and effort, are now planned that were not reflected in the program and expenditure plans. The current organizational change management approach is not fully reflected in program and expenditure plans, and key change management actions are not to be implemented. Significant changes to the respective roles and responsibilities of the ACE development contractor and CBPMO are not reflected in the program and expenditure plans. DHS and OMB have largely satisfied four of the five conditions associated with the fiscal year 2005 ACE expenditure plan that were legislated by the Congress, and we have satisfied the fifth condition. Further, CBPMO has continued to work toward implementing our prior recommendations aimed at improving management of the ACE program and thus the program's chances of success. Nevertheless, progress has been slow in addressing some of our recommendations, such as the one encouraging proactive management of the relationships between ACE and other DHS border security programs, like US-VISIT.
Given that these programs have made and will continue to make decisions that determine how they will operate, delays in managing their relationships will increase the chances that later system rework will eventually be required to allow the programs to interoperate. Additionally, while DHS has taken important actions to help address the ACE release-by-release cost and schedule overruns that we previously identified, these actions are unlikely to prevent the past pattern of overruns from recurring. This is because DHS has met its recently revised cost and schedule commitments in part by relaxing system quality standards, so that milestones are being passed despite material system defects, and because correcting such defects will ultimately require the program to expend resources, such as people and test environments, at the expense of later system releases (some of which are now under way). In the near term, cost and schedule overruns on recent releases are being somewhat masked by the use of less stringent quality standards; ultimately, efforts to fix these defects will likely affect the delivery of later releases. Until accountability for ACE is redefined and measured in terms of all types of program commitments—system capabilities, benefits, costs, and schedules—the program will likely experience more cost and schedule overruns. During the last year, DHS's accountability for ACE has been largely focused on meeting its cost and schedule baselines. This focus is revealed by the absence of information in the latest expenditure plan on progress against all commitments made in prior plans, particularly with regard to measurement and reporting on such things as system capabilities, use, and benefits. It is also shown by the program's insufficient focus on system quality, as demonstrated by its willingness to pass milestones despite material defects, and by the absence of attention to the current defect profile for Release 3 (which is already deployed). Moreover, the commitments that DHS made in the fiscal year 2005 expenditure plan have been overcome by events, which limits the currency and relevance of this plan and its utility to the Congress as an accountability mechanism. As a result, the prospects for greater accountability in delivering against capability, benefit, cost, and schedule commitments are limited. Therefore, it is critically important that DHS define for itself and the Congress an accountability framework for ACE, and that it manage and report in accordance with this framework. If it does not, the effects of the recent rebaselining of the program will be short-lived, and the past pattern of ACE costing more and taking longer than planned will continue.
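The mixed cost and schedule performance described above is the kind of pattern that earned value management, which the program uses to measure contractor performance, is designed to expose. The following minimal sketch uses hypothetical figures, not actual ACE data, to show how the two standard indices capture work that is under budgeted cost but behind schedule.

```python
def evm_indices(earned_value: float, actual_cost: float, planned_value: float):
    """Return the cost performance index (CPI) and schedule performance index (SPI).
    CPI > 1 means work is being completed under budgeted cost;
    SPI < 1 means work is being completed behind schedule."""
    return earned_value / actual_cost, earned_value / planned_value

# Hypothetical figures: $90M of work performed, $85M spent, $100M planned to date.
cpi, spi = evm_indices(earned_value=90.0, actual_cost=85.0, planned_value=100.0)
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # CPI = 1.06 (under cost), SPI = 0.90 (behind schedule)
```

Retaining later releases' resources on Release 4 would tend to raise the schedule index while pushing the cost index down, which is the trade-off the report describes.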
To strengthen accountability for the ACE program and better ensure that future ACE releases deliver promised capabilities and benefits within budget and on time, we recommend that the DHS Secretary, through the Under Secretary for Border and Transportation Security, direct the Commissioner, Customs and Border Protection, to define and implement an ACE accountability framework that ensures coverage of all program commitment areas, including key expected or estimated system (1) capabilities, use, and quality; (2) benefits and mission value; (3) costs; and (4) milestones and schedules; the currency, relevance, and completeness of all such commitments made to the Congress in expenditure plans; the reliability of data relevant to measuring progress against commitments; reporting in future expenditure plans of progress against commitments contained in prior expenditure plans; the use of criteria for exiting key readiness milestones that adequately consider indicators of system maturity, such as the severity of open defects; and the clear and unambiguous delineation of the respective roles and responsibilities of the government and the prime contractor. In written comments on a draft of this report signed by the Acting Director, Departmental GAO/OIG Liaison, DHS agreed with our findings concerning progress in addressing our prior recommendations. In addition, the department agreed with the new recommendations we are making in this report and described actions that it plans to take to enhance accountability for the program. These planned actions are consistent with our recommendations. DHS's comments are reprinted in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, the Under Secretary for Border and Transportation Security, the CBP Commissioner, and the Director of OMB. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3459 or at [email protected]. Other contacts and key contributors to this report are listed in appendix III. ACE is intended to facilitate the movement of legitimate trade through more effective trade account management; strengthen border security by identifying import/export transactions that have an elevated risk of posing a threat to the United States or of violating a trade law or regulation; and provide a single system interface between the trade community and the federal government, known as the International Trade Data System (ITDS), and thereby reduce the data reporting burden placed on the trade community while also providing federal agencies with the data and various capabilities to support their respective international trade and transportation missions. CBP was formed from the former U.S. Customs Service and other entities with border protection responsibility. Members of the trade community include importers and exporters, brokers and trade advisors, and carriers. The federal government here includes the federal agencies responsible for managing international trade and transportation processes. DHS's appropriations act conditions the use of ACE funds on an expenditure plan that 1. meets the capital planning and investment control review requirements established by the Office of Management and Budget (OMB), including Circular A-11, part 7; 2. complies with DHS's enterprise architecture; 3.
complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government; 4. is reviewed and approved by the DHS Investment Review Board (IRB), the Secretary of Homeland Security, and OMB; and 5. is reviewed by GAO. OMB Circular A-11 establishes policy for planning, budgeting, acquisition, and management of federal capital assets. The purpose of the Investment Review Board is to integrate capital planning and investment control, budgeting, acquisition, and management of investments. It is also to ensure that spending on investments directly supports and furthers the mission and that this spending provides optimal benefits and capabilities to stakeholders and customers. Our objectives were to determine whether the ACE fiscal year 2005 expenditure plan satisfies the legislative conditions, determine the status of our open recommendations on ACE, and provide any other observations about the expenditure plan and DHS's management of the ACE program. We conducted our work at CBP headquarters and contractor facilities in the Washington, D.C., metropolitan area from April 2004 through December 2004, in accordance with generally accepted government auditing standards. Details of our scope and methodology are provided in attachment 1.
ACE is to support eight major CBP business areas. 1. Release Processing: Processing of cargo for import or export; tracking of conveyances, cargo, and crew; and processing of in-bond, warehouse, Foreign Trade Zone, and special import and export entries. 2. Entry Processing: Liquidation and closeout of entries and entry summaries related to imports, and processing of protests and decisions. 3. Finance: Recording of revenue, performance of fund accounting, and maintenance of the general ledger. 4. Account Relationships: Maintenance of trade accounts, their bonds and CBP-issued licenses, and their activity. 5. Legal and Policy: Management of import and export legal, regulatory, policies and procedures, and rulings issues. 6. Enforcement: Enforcement of laws, regulations, policies and procedures, and rulings governing the import and export of cargo, conveyances, and crew. import and export transactions, for use in making admissibility and release decisions. 8. Risk: Decisionmaking about admissibility and compliance of cargo using risk-based mitigation, selectivity, and targeting. The ACE technical architecture is to consist of layers or tiers of computer technology: The Client Tier includes user workstations and external system interfaces. The Presentation Tier provides the mechanisms for the user workstations and external systems to access ACE. The Integration Services Tier provides the middleware for integrating and routing information between ACE software applications and legacy systems. The Applications Tier includes software applications comprising commercial products (e.g., SAP) and custom-developed software that provide the functionality supporting CBP business processes. The Data Tier provides the data management and warehousing services for ACE, including database backup, restore, recovery, and space management. Security and data privacy are to be embedded in all five layers. SAP is a commercial enterprise resource planning software product that has multiple modules, each performing separate but integrated business functions. ACE will use SAP as the primary commercial, off-the-shelf product supporting its business processes and functions. CBP's Modernization Office is also using SAP as part of a joint project with its Office of Finance to support financial management, procurement, property management, cost accounting, and general ledger processes. Background Summary of ACE Releases The functionality associated with, status of, and plans for the 11 ACE releases are as follows. Release 1 (ACE Foundation): Provide IT infrastructure—computer hardware and system software—to support subsequent system releases. This release was deployed in October 2003 and is operating.
Release 2 (Account Creation): Give an initial group of CBP national account managers and importers access to account information, such as trade activity. (CBP national account managers work with the largest importers.) This release was deployed in October 2003 and is operating. Release 3 (Periodic Payment): Provide additional account managers and importers, as well as brokers and carriers, access to account information; provide initial financial transaction processing and CBP revenue collection capability, allowing importers and their brokers to make monthly payments of duties and fees. (Brokers obtain licenses from CBP to conduct business on behalf of the importers by filling out paperwork and obtaining a bond; carriers are individuals or organizations engaged in transporting goods for hire.) This release was deployed in July 2004 and is operating. As a result, CBP reports that importers can now obtain a national view of their transactions on a monthly statement and can pay duties and fees on a monthly basis for the first time since CBP and its predecessor organizations were established in 1789. Additionally, according to CBP, Release 3 provides a national view of trade activity, thus greatly enhancing its ability to accomplish its mission of providing border security while facilitating legitimate trade and travel. CBP also reports that as of December 6, 2004, it had processed 27,777 entries and collected over $126.5 million using Release 3. Release 4 (e-Manifest: Trucks): Provide truck manifest processing and interfacing to legacy enforcement systems and databases. (Manifests are lists of passengers or invoices of cargo for a vehicle, such as a truck, ship, or plane.) This release is under development and scheduled for deployment beginning in February 2005. Screening S1 (Screening Foundation): Establish the foundation for screening and targeting cargo and conveyances by centralizing criteria and results into a single standard database; allow users to define and maintain data sources and business rules. This release is scheduled for deployment beginning in September 2005. Screening S2 (Targeting Foundation): Establish the foundation for advanced targeting capabilities by enabling CBP's National Targeting Center to search multiple databases for relevant facts and actionable intelligence. This release is scheduled for deployment beginning in February 2006. Release 5 (Account Revenue and Secure Trade Data): Leverage SAP technologies to enhance and expand accounts management, financial management, and postrelease functionality, as well as provide the initial multimodal manifest capability. (The multimodal manifest involves the processing and tracking of cargo as it transfers between different modes of transportation, such as cargo that arrives by ship, is transferred to a truck, and then is loaded onto an airplane.) This release is scheduled for deployment beginning in November 2006. Screening S3 (Advanced Targeting): Provide enhanced screening for reconciliation, intermodal manifest, Food and Drug Administration data, and in-bond, warehouse, and Foreign Trade Zone authorized movements; integrate additional data sources into targeting capability; provide additional analytical tools for screening and targeting data. This release is scheduled for deployment beginning in February 2007.
Screening S4 (Full Screening and Targeting): Provide screening and targeting functionality supporting all modes of transportation and all transactions within the cargo management life cycle, including enhanced screening and targeting capability with additional technologies. This release is scheduled for deployment beginning in February 2009. Release 6 (e-Manifest: All Modes and Cargo Security): Provide enhanced postrelease functionality by adding full entry processing; enable full tracking of cargo, conveyance, and equipment; enhance the multimodal manifest to include shipments transferring between transportation modes. This release is scheduled for deployment beginning in February 2009. Release 7 (Exports and Cargo Control): Implement the remaining ACE functionality, including Foreign Trade Zone warehouse; export, seized asset, and case tracking system; import activity summary statement; and mail, pipeline, hand carry, drawback, protest, and document management. This release is scheduled for deployment beginning in May 2010. (The original briefing included a graphic illustrating the planned ACE schedule.) ACE Satisfaction of Modernization Act Requirements ACE is intended to support CBP satisfaction of the provisions of Title VI of the North American Free Trade Agreement Implementation Act, commonly known as the Modernization Act. Subtitle B of the Modernization Act contains the various automation provisions that were intended to enable the government to modernize international trade processes and permit CBP to adopt an informed compliance approach with industry. (A table in the original briefing illustrates how each ACE release is to fulfill the requirements of Subtitle B.) The contract task orders supporting the program cover the following work: Initial program and project management; continued by task 009. Initial enterprise architecture and system engineering; continued by task 010. Initial requirements development and program planning effort; continued by tasks for specific increments/releases. Design, development, testing, and deployment of Releases 1 and 2 (initially intended to build Increment 1, which was subsequently divided into four releases). Development of Release 5 project plan, documentation of ACE business processes, and development of an ACE implementation strategy. Enterprise process improvement integration. Assistance for participating government agencies to define requirements for an integrated ACE/ITDS system. Design, development, testing, and deployment of Releases 3 and 4. Follow-on to task 001 to continue program and project management activities. Follow-on to task 002 to continue enterprise architecture and system engineering activities; continued by task 017. Acquisition and setup of the necessary infrastructure and facilities for the contractor to design, develop, and test releases. Establishment of the infrastructure to operate and maintain releases. Conversion of scripts for interfacing desktop applications (MS Word and Excel) and mainframe computer applications. Development, demonstration, and delivery of a prototype to provide CBP insight into whether knowledge-based risk management should be used in ACE. Development and demonstration of technology prototypes to provide CBP insight into whether the technologies should be used in ACE. Program management and support to organizational change management through activities such as impact assessments, end user training, communication, and outreach. Coordination of program activities and alignment of enterprise objectives and technical plans through architecture and engineering activities.
Application of the CBP Enterprise Life Cycle Methodology to integrate multiple projects and other ongoing Customs operations into CBPMO. Follow-on to task 012, including establishment, integration, configuration, and maintenance of the infrastructure to support Releases 2, 3, and 4. Design, development, testing, and deployment of the Screening Foundation (S1) release. Definition of requirements for the Targeting Foundation (S2) release, and initial project authorization and definition for Release 5. Background Chronology of Six ACE Expenditure Plans Since March 2001, six ACE expenditure plans have been submitted. Collectively, the six plans have identified a total of $1,401.5 million in funding. On March 26, 2001, CBP submitted to its appropriations committees the first expenditure plan, seeking $45 million for the modernization contract to sustain CBPMO operations, including contractor support. The appropriations committees subsequently approved the use of $45 million, bringing the total ACE funding to $50 million. (In March 2001, the appropriations committees had approved the use of $5 million in stopgap funding to fund program management office operations.) On February 1, 2002, the second expenditure plan sought $206.9 million to sustain CBPMO operations; define, design, develop, and deploy Increment 1, Release 1 (now Releases 1 and 2); and identify requirements for Increment 2 (now part of Releases 5, 6, and 7 and Screenings 1 and 2). The appropriations committees subsequently approved the use of $188.6 million, bringing total ACE funding to $238.6 million. On May 24, 2002, the third expenditure plan sought $190.2 million to define, design, develop, and implement Increment 1, Release 2 (now Releases 3 and 4). The appropriations committees subsequently approved the use of $190.2 million, bringing the total ACE funding to $428.8 million. On November 22, 2002, the fourth expenditure plan sought $314 million to operate and maintain Increment 1 (now Releases 1, 2, 3, and 4); to design and develop Increment 2, Release 1 (now part of Releases 5, 6, and 7 and Screening 1); and to define requirements and plan Increment 3 (now part of Releases 5, 6, and 7 and Screenings 2, 3, and 4). The appropriations committees subsequently approved the use of $314 million, bringing total ACE funding to $742.8 million. On January 21, 2004, the fifth expenditure plan sought $318.7 million to implement ACE infrastructure; to support, operate, and maintain ACE; and to define and design Release 6 (now part of Releases 5, 6, and 7) and Selectivity 2 (now Screenings 2 and 3). The appropriations committees subsequently approved the use of $316.8 million, bringing total ACE funding to $1,059.6 million. On November 8, 2004, CBP submitted its sixth expenditure plan, seeking $321.7 million for detailed design and development of Release 5 and Screening 2, definition of Screening 3, Foundation Program Management, Foundation Architecture and Engineering, and ACE Operations and Maintenance. Objective 1 Results Legislative Conditions DHS and OMB satisfied or partially satisfied each of their legislative conditions; GAO satisfied its legislative condition. Condition 1.
The plan, in conjunction with related program documentation and program officials' statements, satisfied the capital planning and investment control review requirements established by OMB, including Circular A-11, part 7, which establishes policy for planning, budgeting, acquisition, and management of federal capital assets. The following are examples of the A-11 conditions we analyzed and the results of our analysis. Provide justification and describe acquisition strategy: The plan provides a high-level justification for ACE, and supporting documentation describes the acquisition strategy for ACE releases, including the Release 5 and Screening 2 activities that are identified in the fiscal year 2005 expenditure plan. Summarize life cycle costs and cost/benefit analysis, including the return on investment: CBPMO issued a cost/benefit analysis for ACE on September 16, 2004. This analysis includes a life cycle cost estimate of $3.1 billion and a benefit cost ratio of 2.7 (the arithmetic this ratio implies is sketched at the end of this section). Provide performance goals and measures: The plan and supporting documentation describe some goals and measures. For example, CBPMO has established goals for time and labor savings expected to result from using the early ACE releases, and it has begun or plans to measure results relative to these goals and measures. It has defined measures and is collecting data for other goals, such as measures for determining its progress toward defining the complete set of ACE functional requirements. Address security and privacy: The security of Release 3 was certified on May 28, 2004, and accredited on June 9, 2004. Release 4 was certified on November 23, 2004, and accredited on December 2, 2004. CBP plans to certify and accredit future releases. CBPMO reports that it is currently preparing a privacy impact assessment for ACE. Address Section 508 compliance: CBPMO deployed Release 3 and plans to deploy Release 4 without Section 508 compliance because the requirement was overlooked and not built into either release. CBPMO has finalized and begun implementing a strategy that is expected to result in full Section 508 compliance. For example, CBPMO has defined a set of Section 508 requirements to be used in developing later ACE releases. Condition 2. The plan, including related program documentation and program officials' statements, partially satisfied this condition by providing for future compliance with DHS's enterprise architecture (EA). DHS released version 1.0 of the architecture in September 2003 (Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy). We reviewed the initial version of the architecture and found that it was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as a description of business processes, information flows among these processes, and security rules associated with these information flows (GAO, Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains, GAO-04-777 (Washington, D.C.: Aug. 6, 2004)). Since we reviewed version 1.0, DHS has drafted version 2.0 of its EA. We have not reviewed this draft. The Center of Excellence supports the Enterprise Architecture Board in reviewing component documentation. The purpose of the Board is to ensure that investments are aligned with the DHS EA.
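For context on the cost/benefit figures reported under condition 1: a benefit cost ratio relates the present value of a program's benefits to the present value of its costs. Assuming the reported 2.7 ratio was computed against the $3.1 billion life cycle cost estimate (the plan's discounting details are not stated here), the implied benefits are roughly:

```latex
\mathrm{BCR} = \frac{\text{PV of benefits}}{\text{PV of costs}} = 2.7
\quad\Longrightarrow\quad
\text{PV of benefits} \approx 2.7 \times \$3.1\ \text{billion} \approx \$8.4\ \text{billion}
```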
In August 2004, the Center of Excellence approved CBPMO's analysis intended to demonstrate ACE's architectural alignment, and the Enterprise Architecture Board subsequently concurred with the center's approval. However, DHS has not yet provided us with sufficient documentation to allow us to understand DHS's architecture compliance methodology and criteria (e.g., definition of alignment and compliance) or with verifiable analysis justifying the approval.

Condition 3. The plan, in conjunction with related program documentation, partially satisfied the condition of compliance with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. The Software Acquisition Capability Maturity Model (SA-CMM®), developed by Carnegie Mellon University's Software Engineering Institute (SEI), is consistent with the acquisition guidelines and systems acquisition management practices of the federal government, and it provides a management framework that defines processes for acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, and evaluation. In November 2003, SEI assessed ACE acquisition management against the SA-CMM and assigned a level 2 rating, indicating that CBPMO has instituted basic acquisition management processes and controls in the following areas: acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, and evaluation.

In June 2003, the Department of the Treasury's Office of Inspector General (OIG) issued a report on the ACE program's contract, concluding that the former Customs Service, now CBP, did not fully comply with Federal Acquisition Regulation requirements in the solicitation and award of its contract because the ACE contract is a multiyear contract and not an indefinite-delivery/indefinite-quantity (IDIQ) contract. Further, the Treasury OIG found that the ACE contract type, which it determined to be a multiyear contract, is not compatible with the program's stated need for a contract that can be extended to a total of 15 years, because multiyear contracts are limited to 5 years. Additionally, the Treasury OIG found that Customs combined multiyear contracting with IDIQ contracting practices; for example, it plans to use contract options to extend the initial 5-year performance period. CBP disagrees with the Treasury OIG conclusion. To resolve the disagreement, DHS asked GAO to render a formal decision. We are currently reviewing the matter.

Condition 4. DHS and OMB satisfied the condition that the plan be reviewed and approved by the DHS IRB, the Secretary of Homeland Security, and OMB. On August 18, 2004, the DHS IRB reviewed the ACE program, including ACE fiscal year 2005 cost, schedule, and performance plans. The DHS Deputy Secretary, who chairs the IRB, delegated further review of the fiscal year 2005 efforts, including review and approval of the fiscal year 2005 ACE expenditure plan, to the Under Secretary for Management, with support from the Chief Financial Officer, Chief Information Officer, and Chief Procurement Officer, all of whom are IRB members. The Under Secretary for Management approved the expenditure plan on behalf of the Secretary of Homeland Security on November 8, 2004.
OMB approved the plan on October 15, 2004.

Condition 5. GAO satisfied the condition that it review the plan. Our review, with respect to the fiscal year 2005 expenditure plan, was completed on December 17, 2004.

Objective 2 Results Open Recommendations

Open recommendation 3: Immediately develop and implement a human capital management strategy that provides both near- and long-term solutions to program office human capital capacity limitations, and report quarterly to the appropriations committees on the progress of efforts to do so. According to the expenditure plan, CBPMO has since developed a modernization staffing plan that identifies the positions and staff it needs to effectively manage ACE. However, CBPMO did not provide this plan to us because it was not yet approved. Moreover, program officials told us that the staffing plan is no longer operative because it was developed before December 2004, when a modernization office reorganization was implemented. As part of this reorganization, CBP transferred from non-CBPMO organizational units the government and contractor personnel who have responsibility for the Automated Commercial System (CBP's system for tracking, controlling, and processing imports to the United States), the Automated Targeting System (CBP's system for identifying import shipments that warrant further attention), and ACE training. This change is expected to eliminate redundant ACE-related program management efforts.

Following our recommendation, CBPMO provided reports dated March 31, 2004, and June 30, 2004, to the appropriations committees on its human capital activities, including development of the previously mentioned staffing plan and related analysis to fully define CBPMO positions. Additionally, it has reported on efforts to ensure that all modernization office staff members complete a program management training program.

Open recommendation 4: Have future ACE expenditure plans specifically address any proposals or plans, whether tentative or approved, for extending and using ACE infrastructure to support other homeland security applications, including any impact on ACE of such proposals and plans. The ACE Program Plan states that ACE provides functions that are directly related to the "passenger business process" underlying the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, and integration of certain ACE and US-VISIT components is anticipated. (US-VISIT is a governmentwide program to collect, maintain, and share information on foreign nationals for enhancing national security and facilitating legitimate trade and travel, while adhering to U.S. privacy laws and policies.) In recognition of this relationship, the expenditure plan states that CBPMO and US-VISIT are working together to identify lessons learned, best practices, and opportunities for collaboration.
The collaboration is organized into two areas: Inventory, which includes identifying connections between legacy systems and establishing a technical requirements and architecture team to review, among other things, system interfaces, data formats, and system architectures; and People, Processes, and Technology, which includes establishing teams to review deployment schedules and establishing a team and process to review and normalize business requirements.

In September 2004, the teams met to develop team charters, identify specific collaboration opportunities, and develop timelines and next steps. In October 2004, CBPMO and US-VISIT program officials were briefed on the progress and activities of the collaboration teams.

Open recommendation 5: Establish an IV&V function to assist CBP in overseeing contractor efforts, such as testing, and ensure the independence of the IV&V agent. According to ACE officials, they have selected an IV&V contractor that has had no prior involvement in the modernization program, to ensure independence. These officials stated that the IV&V contractor will be responsible for reviewing ACE products and management processes and will report directly to the CBP CIO. Award of this contract is to occur on December 30, 2004.

Open recommendation 6: Define metrics, and collect and use associated measurements, for determining whether prior and future program management improvements are successful. CBPMO has implemented a metrics program that generally focuses on measuring eCP's performance through the use of earned value management (EVM), deliverable timeliness and quality metrics, and risk and issue disposition reporting. Additionally, CBPMO is planning to broaden its program to encompass metrics and measures for determining progress toward achieving desired business results and acquisition process maturity. The plan for expanding the metrics program is scheduled for approval in early 2005.

One part of CBPMO's metrics program that it has implemented relates to EVM for its contract with eCP. EVM is a widely accepted best practice for measuring contractor progress toward meeting deliverables by comparing the value of work accomplished during a given period with that of the work expected in that period. Differences from expectations are measured in the form of both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a -$1.7 million cost variance. Positive cost variances indicate that activities are costing less than planned, while negative variances indicate activities are costing more than planned. Schedule variances, like cost variances, are measured in dollars, but they compare the earned value of the work completed with the value of the work that was expected to be completed. For example, if a contractor completed $5 million worth of work by the end of the month but was budgeted to complete $10 million worth of work, there would be a -$5 million schedule variance. Positive schedule variances show that activities are being completed sooner than planned; negative variances show activities are taking longer than planned. In accordance with EVM principles, eCP reports on its financial performance monthly. These reports provide detailed information on cost and schedule performance on work segments in each task order.
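The variance arithmetic just described is simple enough to state compactly. The following is a minimal sketch of the standard EVM formulas (cost variance = earned value minus actual cost; schedule variance = earned value minus planned value); the function and class names are our own, and the dollar figures repeat the text's two examples rather than actual eCP data.

```python
# Minimal sketch of standard EVM variance calculations.
# Names are illustrative; figures come from the text's examples.
from dataclasses import dataclass

@dataclass
class Variance:
    cost: float      # CV = EV - AC; negative means work cost more than planned
    schedule: float  # SV = EV - PV; negative means work is behind plan

def evm_variances(earned_value: float, actual_cost: float,
                  planned_value: float) -> Variance:
    return Variance(cost=earned_value - actual_cost,
                    schedule=earned_value - planned_value)

# $5 million of work completed, which actually cost $6.7 million,
# against $10 million of work budgeted for the period.
v = evm_variances(earned_value=5.0, actual_cost=6.7, planned_value=10.0)
print(f"Cost variance: {v.cost:+.1f}M")       # -1.7M (cost overrun)
print(f"Schedule variance: {v.schedule:+.1f}M")  # -5.0M (behind schedule)
```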
Cost and schedule variances that exceed a certain threshold are further examined to determine the root cause of the variance, the impact on the program, and mitigation strategies.

Open recommendation 7: Reconsider the ACE acquisition schedule and cost estimates in light of early release problems, including these early releases' cascading effects on future releases and their relatively small size compared to later releases, and in light of the need to avoid the past levels of concurrency among activities within and between releases. As we previously reported, the cost estimate for Releases 3 and 4 had grown to $185.7 million, which was about $36.2 million over the contract baseline, and further overruns were likely (GAO, Information Technology: Early Releases of Customs Trade System Operating, but Pattern of Cost and Schedule Problems Needs to Be Addressed, GAO-04-719 (Washington, D.C.: May 14, 2004)). Subsequently, the Release 3 and 4 cost overrun grew to an estimated $46 million, resulting in CBPMO and eCP establishing a new cost baseline for Releases 3 and 4 of $196 million. (The figures are consistent: $185.7 million less the $36.2 million overrun implies an original baseline of about $149.5 million, and $149.5 million plus the $46 million overrun yields roughly $196 million.) eCP began reporting performance against this new baseline in April 2004. Further, in July 2004, CBPMO and eCP changed the associated contract task order baseline completion date from September 15, 2004, to May 30, 2005, revised the associated interim task order milestones, and began reporting schedule performance relative to the new baselines.

In July 2004, eCP also rebaselined the ACE program, producing a new version of the ACE Program Plan. The new baseline extends delivery of the last ACE release from fiscal year 2007 to fiscal year 2010 and adds a new screening and targeting release. The new program plan also provides a new ACE life-cycle cost estimate of $3.1 billion (about $3.3 billion when adjusted for risk), which is a $1 billion increase over the previous life-cycle cost estimate. According to the expenditure plan, the new schedule reflects less concurrency between releases. The following figure compares previous and current schedules for ACE releases and shows a reduction in the level of concurrency between releases.

Open recommendation 8: Report quarterly to the House and Senate Appropriations Committees on efforts to address open GAO recommendations. CBPMO submitted reports to the Committees on its efforts to address open GAO recommendations for the quarters ending March 31, 2004, and June 30, 2004. CBPMO plans to submit a report for the quarter ending September 30, 2004, after it is approved by DHS and OMB.

Key test phases and their associated readiness reviews are as follows. Integration testing verifies that related system, subsystem, or module components are capable of integrating and interfacing with each other (Test Readiness Review (TRR)). System acceptance testing (SAT) verifies that the developed system, subsystem, or module operates in accordance with requirements (Production Readiness Review (PRR)). User acceptance testing (UAT) verifies that the functional scope of the release meets the business functions for the users (Operational Readiness Review (ORR)).

Defect severity levels are defined as follows. Critical (Severity 1): Defect prevents or precludes the performance of an operational or mission-essential capability, jeopardizes safety or security, or causes the system, application, process, or function to fail to respond or to end abnormally.
Severe (Severity 2): Defect prevents or precludes the system from working as specified and/or produces an error that degrades or impacts system or user functionality. Moderate (Severity 3): Defect prevents or precludes the system from working as specified and/or produces an error that degrades or impacts system or user functionality, but an acceptable (reasonable and effective) work-around is in place that rectifies the defect until a permanent fix can be made. Minor (Severity 4): Defect is inconsequential, cosmetic, or inconvenient but does not prevent users from using the system to accomplish their tasks.

The fiscal year 2005 plan, however, did not address progress against prior commitments. For example, the plan did not describe the status of infrastructure acquisition, nor did it discuss the expenditure of the $106.6 million requested for this purpose. While the plan did discuss the status of the initial ACE releases, it did not describe progress toward defining and designing the functionality that was to be in the former Release 6. Also, the fiscal year 2005 expenditure plan included a schedule for developing ACE releases, but it neither reported progress relative to the schedule presented in the fiscal year 2004 plan nor explained how the individual releases and their respective schedules were affected by the rebaselining that occurred after the fiscal year 2004 plan was submitted. Further, while the fiscal year 2005 expenditure plan contained high-level descriptions of the functionality provided by Releases 1 and 2, it did not describe progress toward achieving the benefits they are expected to provide. Without such information, meaningful congressional oversight of CBP progress and accountability is impaired. (For related GAO work, see GAO, Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning, GAO-03-563 (Washington, D.C.: June 9, 2003); GAO, Information Technology: DOD's Acquisition Policies and Guidance Need to Incorporate Additional Best Practices and Controls, GAO-04-722 (Washington, D.C.: July 2004); and GAO, Tax Systems Modernization: Results of Review of IRS' Initial Expenditure Plan, GAO/AIMD/GGD-99-206 (Washington, D.C.: June 1999).)

The following recommended actions are paired with the risks of not taking them. Establish and communicate targets for ACE usage to encourage users to use ACE rather than ACS: if ACS remains available to ACE users, they may continue to use the legacy system, and as a result the full benefits of ACE will not be realized. Before training, make users aware of the major differences between ACS and ACE: if ACE users do not understand the differences between the legacy systems and ACE, then the users will not understand how best to use ACE, which may result in resistance to the new system and processes. Discuss the future needs of CBP to establish new roles and responsibilities within the Office of Information and Technology (OIT): if future roles of the OIT are not established, then OIT may not be prepared to provide technical support when ACE is transferred from eCP to OIT. Send staff to visit ports to build critical knowledge regarding organizational change objectives: if staff do not have adequate access to representatives of occupational groups at each port, then communications, training, and deployment efforts cannot be customized to each group's needs, which may delay or disrupt ACE adoption.
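Returning to the four defect severity levels defined above, a classification like this is commonly encoded in defect-tracking tooling. The sketch below is hypothetical (the enum, function, and gate rule are assumptions, not CBP's or eCP's actual tooling); the gate encodes a strict exit rule under which open Severity 1 or 2 defects block a milestone, the opposite of the relaxed quality standards the report observes in practice.

```python
# Hypothetical defect triage helper reflecting the four severity
# definitions above. The milestone gate rule is an assumption.
from enum import IntEnum

class Severity(IntEnum):
    CRITICAL = 1  # blocks an operational or mission-essential capability
    SEVERE = 2    # system fails to work as specified; no work-around
    MODERATE = 3  # same impact as Severity 2, but acceptable work-around exists
    MINOR = 4     # cosmetic or inconvenient; tasks can still be completed

def ready_to_exit_milestone(open_defects: list[Severity]) -> bool:
    """True only when no Critical or Severe defects remain open."""
    return all(d > Severity.SEVERE for d in open_defects)

print(ready_to_exit_milestone([Severity.MODERATE, Severity.MINOR]))  # True
print(ready_to_exit_milestone([Severity.SEVERE]))                    # False
```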
Effective expenditure plans and related oversight depend on several attributes: coverage of all program commitment areas, including key expected or estimated system (1) capabilities, use, and quality; (2) benefits and mission value; (3) costs; and (4) milestones and schedules; currency, relevance, and completeness of all such commitments made to the Congress in expenditure plans; reliability of data relevant to measuring progress against commitments; reporting in future expenditure plans of progress against commitments contained in prior expenditure plans; use of criteria for exiting key readiness milestones that adequately consider indicators of system maturity, such as severity of open defects; and clear and unambiguous delineation of the respective roles and responsibilities of the government and the prime contractor.

SEI's institutional estimating guidelines are defined in Checklists and Criteria for Evaluating the Cost and Schedule Estimating Capabilities of Software Organizations, and SEI's project-specific estimating guidelines are defined in A Manager's Checklist for Validating Software Cost and Schedule Estimates. See also Institute of Electrical and Electronics Engineers (IEEE) Standard for Software Verification and Validation, IEEE Std 1012-1998 (New York: Mar. 9, 1998).

Attachment 1 Scope and Methodology

We assessed, among other things, CBP's progress toward increasing the number of ACE user accounts; ACE's quality, using eCP defect data and testing results for Releases 3 and 4; and cost and schedule data and program commitments from program management documentation. For DHS-, CBP-, and contractor-provided data that our reporting commitments did not permit us to substantiate, we have made appropriate attribution indicating the data's source. We conducted our work at CBP headquarters and contractor facilities in the Washington, D.C., metropolitan area from April 2004 through December 2004, in accordance with generally accepted government auditing standards.
The Department of Homeland Security (DHS) is conducting a multiyear, multibillion-dollar acquisition of a new trade processing system, planned to support the movement of legitimate imports and exports and strengthen border security. By congressional mandate, plans for expenditure of appropriated funds on this system, the Automated Commercial Environment (ACE), must meet certain conditions, including GAO review. This study addresses whether the fiscal year 2005 plan satisfies these conditions, describes the status of DHS's efforts to implement prior GAO recommendations for improving ACE management, and provides observations about the plan and DHS's management of the program. The fiscal year 2005 ACE expenditure plan, including related program documentation and program officials' statements, largely satisfies the legislative conditions imposed by the Congress. In addition, some of the recommendations that GAO has previously made to strengthen ACE management have been addressed, and DHS has committed to addressing those that remain. However, much remains to be done before these recommendations are fully implemented. For example, progress has been slow on implementing the recommendation that the department proactively manage the dependencies between ACE and related DHS border security programs. Delays in managing the relationships among such programs will increase the chances that later system rework will be needed to allow the programs to interoperate. Among GAO's observations about the ACE program and its management are several regarding DHS's approach to addressing previously identified cost and schedule overruns. DHS has taken actions intended to address these overruns (such as revising its baselines for cost and schedule, as GAO previously recommended); however, it is unlikely that these actions will prevent future overruns, because DHS has relaxed system quality standards, meaning that milestones are being passed despite material system defects. Correcting such defects will require the program to use resources (e.g., people and test environments) at the expense of later system releases. Until the ACE program is held accountable not only for cost and schedule but also for system capabilities and benefits, the program is likely to continue to fall short of expectations. Finally, the usefulness of the fiscal year 2005 expenditure plan for congressional oversight is limited. For example, it does not adequately describe progress against commitments (e.g., ACE capabilities, schedule, cost, and benefits) made in previous plans, which makes it difficult to make well-informed judgments on the program's overall progress. Also, in light of recent program changes, GAO questions the expenditure plan's usefulness to the Congress as an accountability mechanism. The expenditure plan is based largely on the ACE program plan of July 8, 2004. However, recent program developments have altered some key bases of the ACE program plan and thus the current expenditure plan. In particular, the expenditure plan does not reflect additional program releases that are now planned or recent changes to the roles and responsibilities of the ACE development contractor and the program office. Without complete information and an up-to-date plan, meaningful congressional oversight of program progress and accountability is impaired.
The United States, like the European Union and Canada, maintains annual quotas on textile and apparel imports from various supplier countries. When a country's quota fills up on a certain category of merchandise, that country's exporters may try to find ways to transship their merchandise through another country whose quota is not yet filled or that does not have a quota. Transshipment may also occur because obtaining quota can be very expensive and exporters want to avoid this expense. The illegal act of transshipment takes place when false information is provided regarding the country of origin to make it appear that the merchandise was made in the transited country. The effects of illegal transshipment are felt both in the transited country (potentially displacing its manufactured exports) and in the United States, where it increases competition for the U.S. textile and apparel industry. These U.S. quotas, embodied in approximately 45 bilateral textile agreements, are scheduled for elimination on January 1, 2005, in accordance with the 1995 World Trade Organization (WTO) Agreement on Textiles and Clothing. However, U.S. quotas will remain for approximately five countries that are not members of the WTO and for specific product categories when trade complaint actions resulting in reinstated quotas are approved. Incentives to engage in transshipment will also continue because of the differing tariff levels resulting from the various bilateral or multilateral free trade agreements and preference programs that the United States has signed with some countries. U.S. tariffs on certain types of sensitive textile and apparel products range up to 33 percent, but such tariffs can fall to zero for imports from trade agreement countries. As with quotas, manufacturers from countries facing higher U.S. tariffs may find ways to transship their merchandise to countries benefiting from lower or no U.S. tariffs, illegally indicate the merchandise's country of origin, and enter the merchandise into the U.S. market. Over the past decade, U.S. imports of textile and apparel products have grown significantly, while domestic production and employment have declined. For example, textile and apparel imports in 2002 were about $81 billion, nearly double their value in 1993. The largest suppliers to the U.S. market in 2002 were China (15 percent), Mexico (12 percent), and Central America and the Caribbean (as a group, 12 percent). See appendix II for more information on textile and apparel trade, production, and employment. Figure 1 shows U.S. domestic production, imports, exports, and employment in the U.S. textile and apparel sector. From 1993 through 2001 (the latest year available), textile and apparel production (as measured by shipments to the U.S. market or for export) declined by 11 percent, and employment fell by 38 percent. However, the United States still maintains significant production (over $130 billion) and employment (about 850,000 jobs) in the textile and apparel sector. CBP has responsibility for ensuring that all goods entering the United States do so legally. It is responsible for enforcing quotas and tariff preferences under trade agreements, laws, and the directives of the interagency Committee for the Implementation of Textile Agreements (CITA) involving the import of textiles and wearing apparel.
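The tariff differential alone implies a sizable payoff. As a purely hypothetical illustration using the 33 percent ceiling noted above, a $1 million shipment of a sensitive product that is routed through a zero-tariff trade agreement country with a falsified country of origin would avoid

\[
\$1{,}000{,}000 \times (0.33 - 0) = \$330{,}000
\]

in duties. Actual duty rates vary by product category, so this figure is illustrative only.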
CBP has established a Textile Working Group under its high-level Trade Strategy Board that prepares an annual strategy for textiles and apparel. This annual strategy establishes national priorities and an action plan to carry out its goals. Within the framework of this overall strategy, CBP administers quotas for textiles, processes textile and apparel imports at U.S. ports, conducts Textile Production Verification Team (TPVT) visits to foreign countries, provides technical input for trade agreement negotiations, and monitors existing trade agreements. In addition to staff at CBP’s headquarters, officials at 20 Field Operations Offices and more than 300 CBP ports of entry oversee the entry of all goods entering the United States. CBP has a specific unit, the Strategic Trade Center (STC) in New York City, assigned to analyze textile trade data and other information sources for the targeting process. In addition to CBP, the departments of Commerce, Justice, State, and Treasury, and the Office of the U.S. Trade Representative (USTR) also play a role in transshipment issues. Further, as an interagency committee, CITA determines when market-disrupting factors exist, supervises the implementation of textile trade agreements, coordinates U.S. administration efforts to combat illegal textile and apparel transshipment, and administers the phase-out of textile and apparel quotas on WTO countries required under the 1995 Agreement on Textiles and Clothing. CBP’s process for identifying potential illegal textile transshipments depends on targeting suspicious activity by analyzing available data and intelligence. Due to increased trade volumes and shifted priorities, CBP seeks to focus its limited enforcement resources on the most suspect activity. CBP targets countries, manufacturers, shipments, and importers that it determines to be at a higher risk for textile transshipment. First, CBP identifies the countries in which trade flows and other information indicate a high potential for transshipment. CBP then targets selected manufacturers in those high-risk countries for overseas factory visits. Information from the factory visits is then used to target shipments to the United States for review and potential exclusions or penalties. Finally, CBP also targets importers based on high-risk activity and conducts internal control audits that include verifying that controls against transshipment exist. However, CBP selects only a small share of foreign factories and shipments for review due to limited resources. In response to a rapidly growing volume of trade at the border and limited resources for enforcement, CBP relies on a targeting process to identify shipments that have a high risk of being transshipped. According to CBP officials, trade growth and expanding law enforcement efforts have nearly overwhelmed its staff and resources. In addition, CBP’s modernization of its processes and technology, as called for in the Customs Modernization and Informed Compliance Act of 1993, recognizes that the nearly 25 million entries (shipments) CBP processes annually cannot all be inspected. Furthermore, since the terrorist attacks of September 11, 2001, CBP has shifted resources to security concerns as its priority mission. Inspection and some other port-level staff have been diverted from detecting commercial violations to ensuring security. In addition, during higher alert levels (such as code orange and above), additional staff is also refocused to assist in port and national security. 
CBP’s process of targeting high-risk activity begins by identifying the countries that supply textile imports that pose the greatest risk of illegal textile transshipment. Applying a risk-management approach, CBP targets shipments for review based on trade data, such as sudden surges of products restricted by quotas from nonquota countries, production data, results of past factory and port inspections, suspicious patterns of behavior, and tips from the private sector. CBP then reviews the targeted shipments for evidence of transshipment, while expediting the processing of nontargeted shipments. From its country-level review, CBP targets 16 countries per year on average, and actually visits 11 of them on average. For the countries CBP selects, it targets on average about 45 high-risk manufacturing plants to visit. These visits seek to find evidence of transshipment or to verify that the factories are in compliance with U.S. trade laws and regulations regarding the origin of the goods exported to the United States. If problems are found, CBP uses that information to target shipments (entries) entering the United States for possible detention and exclusion. CBP targeted 2,482 shipments in 2002. CBP has begun to target high-risk importers’ shipments for review while also conducting internal audits of selected importers. Figure 2 shows the general process CBP uses to target suspicious activity. Before the beginning of each fiscal year, CBP analyzes trade and production data, as well as other available intelligence, to assess the relative risk of each major U.S. trade partner for engaging in illegal textile transshipment. CBP generally identifies 16 countries a year on average as being at high risk for transshipment or other trade agreement violations and updates its assessment at least once during the fiscal year. The risk level (high, moderate, or low) is based largely on the volume of trade in sensitive textile categories, such as certain types of knit apparel and fabric, and the likelihood of transshipment through that country. For example, as of November 1, 2003, quotas on men and women’s knit shirts and blouses were approximately 80 percent or more filled for China, India, and Indonesia. This situation creates an incentive for producers in those countries concerned that the quotas will close before the end of the year to transship their goods. CBP may increase its monitoring of trade in these products through neighboring countries. The likelihood of transshipment is a qualitative judgment that CBP makes based on available intelligence. Countries with high production capabilities and subject to restrictive quotas and tariffs, such as China, India, and Pakistan, are considered potential source countries. These countries could produce and export to the United States far more textile and apparel products than U.S. quotas allow. Countries that have relatively open access to the U.S. market, either through relatively generous quotas (Hong Kong and Macau) or trade preferences programs (Central America and the Caribbean, and sub- Saharan Africa) are considered potential transit points for textile transshipment. CBP focuses its efforts on targeting and reviewing goods from these transit countries rather than source countries because any evidence that goods were actually produced elsewhere, such as closed factories or factories without the necessary machinery to produce such shipments, would be found in the transit country. 
After selecting the high-risk countries, CBP then selects a subset of these countries to visit during the year to conduct TPVT factory visits. During the past 4 years, CBP conducted 42 TPVT visits to 22 countries. Cambodia, Hong Kong, Macau, and Taiwan in Asia, and El Salvador in Latin America, received three or more visits between 2000 and 2003. Table 1 shows the U.S. trade partners that CBP visited on a TPVT trip in those years, along with their share of U.S. imports of textile and apparel products in 2002. For some U.S. trade partners, the share of overall textile and apparel trade may be relatively low even though they are significant suppliers of certain products. For example, although Thailand is the tenth largest supplier overall, it is the fifth largest supplier of cotton bed sheets. The number of countries CBP visits each year has varied, but from 1996 through 2003 CBP visited 11 countries per year on average. Although the overall size of trade is an important factor in targeting countries, CBP also looks at a range of other information in making its determination. For example, several relatively small suppliers, such as Nicaragua, Swaziland, and Botswana, were visited because they receive special preferences as developing countries. Also, Vietnam, which accounted for only about 1 percent of U.S. imports in 2002, was selected partly because of trade anomalies occurring during a period when Vietnam's quota-free access to the U.S. market made it a potential transit country. Figure 3 describes the case of Vietnam as an example of the role and limitations of the targeting process. Canada and Mexico are both top U.S. trade partners designated as high-risk countries, but CBP has not made any TPVT visits to either. Under NAFTA, producers in these countries are subject to visits to verify NAFTA eligibility; however, these visits do not focus on transshipment specifically, and although CBP has sought to conduct a TPVT visit in Canada, it has not yet succeeded in persuading the Canadian government. CBP targets about 45 factories on average per country visit, although this number varies depending on the characteristics of each country. For example, the proximity of factories to one another and the length of the trip (1 to 2 weeks) affect the number of factories that can be visited, and the importance of the trade partner in U.S. textile and apparel trade affects the length of the trip and the number of factories targeted. On the November 2003 Hong Kong TPVT trip, for example, CBP visited over 200 factories. Before undertaking a TPVT visit in a foreign country, CBP conducts a special targeting session to identify the manufacturers in that country that it suspects may be involved in textile transshipment. As in its targeting of countries, CBP import and trade specialists consider recent trade flows, available intelligence, experience from past factory visits, and reviews of merchandise at U.S. ports in order to narrow the total list of factories in the country down to a list of the highest-risk factories that they will target for a visit. The process involves collaboration among the STC trade specialists, the port-level import specialists who will travel to the factories, and headquarters staff. During the past 4 years, CBP found that about half the manufacturers that it targeted as high risk were actually found by TPVT visits to have serious problems.
These problems included actual evidence of transshipment, evidence that indicated a high risk of potential transshipment, permanently closed factories, and factories that refused admission to CBP officials. Each of these problems is considered a sufficient reason to review and detain shipments from these factories as they reach U.S. ports. In addition, some factories were found to warrant additional monitoring by the STC. They were listed as low risk and their shipments were not targeted for review when they reached U.S. ports. Although the share of targeted factories found to have problems is relatively high, the factories that CBP targeted were those that generally had some indication of risk, based on intelligence or trade data analysis. Also, the targeted manufacturers that were visited (about 1,700) during the 4-year period generally make up a small share of the total number of manufacturers in each country. However, for smaller trade partners, such as those that receive trade preferences under the Caribbean Basin Trade Partnership Act (CBTPA) or African Growth and Opportunity Act (AGOA), CBP can visit a sizable share of the factories within the country because their overall number of factories is smaller. For El Salvador and Nicaragua, CBP has visited about 10 percent of the factories, and for Swaziland and Botswana, CBP has visited about 22 and 28 percent of the factories, respectively. Due to the small share of factories that CBP can actually visit, the STC says it is developing evaluation tools to improve CBP’s process of targeting foreign manufacturers for TPVT visits. Currently, the STC tracks the number and results of the TPVT visits in order to assess whether the targeted factories were actually found to have problems by the TPVT visits. CBP says it is developing a database to keep track of the specific criteria it used to target manufacturers for TPVT visits. It plans to use the results of the TPVT visits to identify which criteria were most useful in its targeting process. In 2002, CBP identified 2,482 high-risk shipments (entries) for greater scrutiny or review—less than one-tenth of 1 percent of the more than 3 million textile and apparel entries that year. CBP actually reviewed 77 percent of the shipments that were identified. Of the shipments reviewed, about 24 percent resulted in exclusions from U.S. commerce, 2 percent in penalties, and 1 percent in seizures. To choose shipments for review, CBP headquarters uses information collected from TPVT factory visits as well as other intelligence information to create criteria for its targeting system. When shipments match these criteria, they are flagged at the ports for a review. For instance, when a TPVT visit finds that a foreign factory has been permanently closed, CBP will place this information in its automated system to be used as criteria for targeting any shipments destined for entry into the United States that claimed to have been produced in that factory. In addition, other information such as prior shipment reviews or intelligence information concerning possible illegal activity by manufacturers, importers, or other parties can be entered as criteria to stop shipments. Criteria can be entered nationally for all ports, or individual ports can add criteria locally that only affect shipments to their own port. CBP has recently begun to increase targeting of U.S. importers of textile and apparel products who demonstrate patterns of suspicious behavior. 
For example, CBP identified more than 40 importers in the past year who have a pattern of sourcing from foreign manufacturers involved in transshipment. According to CBP officials, they can pursue penalties against these companies, because this pattern of behavior may violate reasonable care provisions of U.S. trade laws. CBP also uses this information and other intelligence it collects to target for review shipments that these importers receive. In addition to this targeting, CBP’s Regulatory Audit division has traditionally conducted internal control audits of importers, and it uses a separate targeting process to identify the importers that it will audit. One component of its audits focuses on whether the importer has and applies internal controls for transshipment. The STC has also provided information about the companies it targets to Regulatory Audit for its own investigations or audits. Although CBP’s textile transshipment strategy relies on targeting, resource constraints limit both the number of targets that CBP generates and the type of targeting analysis that CBP can conduct. First, the number of foreign factories and shipments targeted is limited by the ability of CBP to conduct the reviews. As previously discussed, CBP is able to visit only a small share of the foreign factories exporting textile and apparel products to the United States. The results of these visits then provide key information for targeting shipments for review as they arrive at U.S. ports. Similarly, CBP targets only a small share of textile and apparel shipments to U.S. ports for review. CBP officials with whom we met said CBP limits the number of shipments it targets for port reviews because port staff are unable to effectively examine a significantly larger number of shipments. In addition to resource constraints due to security (previously discussed), reviewing shipments for textile transshipment is labor intensive and involves more than a simple visual inspection of the merchandise. Unlike cases involving narcotics in which physical inspections alone can lead to discovery of the drugs, physical inspections of textile or apparel products rarely provide sufficient evidence of transshipment. Port staff generally needs to scrutinize detailed production documentation, which is time consuming, to determine a product’s origin and assess the likelihood of transshipment. Second, staff constraints restrict the extent to which CBP can utilize and develop its targeting process. As of December 2, 2003, the STC had 25 percent of its staff positions unfilled (3 out of 12 positions), while its responsibilities are growing as trade agreements are increasing. For each new trade agreement, STC staff monitor trade and investment patterns to detect whether anomalies are developing that should be targeted. Consequently, CBP officials said that resource constraints have meant that several types of analysis that the STC planned on conducting have either been delayed or not conducted at all. These included analyses of high-risk countries, improvements to existing targeting processes, and studies of alternative targeting techniques. Despite these resource limitations, CBP and the STC, in particular, have made regular improvements to the targeting process. For example, CBP’s targeting of countries and manufacturers for TPVT visits has become more systematic, relying on trade data and other intelligence to select factories for visits. 
CBP has consolidated textile functions at headquarters and has adapted textile review activities at the ports to changing resource levels. In response to national security priorities, CBP inspectors at the ports are being shifted to higher-priority duties, leaving import specialists at the ports to play the critical role in making decisions on excluding or seizing illegal textile shipments. CBP now relies on TPVT visits as an essential part of its targeting process, but CBP has not always finalized these TPVT results and provided them to CBP ports, CITA, and the foreign governments for follow-up in a timely manner. With the expiration of the WTO global textile quota regime in 2005, CBP will lose its authority to conduct TPVTs in the former quota countries, and supplementing the enforcement information provided to the ports will be important. Information from overseas Customs Attaché offices and cooperative efforts with foreign governments can provide additional important information for port inspections. CBP has moved most textile functions into a single headquarters division to foster a coordinated agency approach to monitoring textile imports and enforcing textile import laws, but it must still depend on its port staff to identify and catch illegal textile transshipments. As CBP inspectors are shifted to higher-priority functions, such as antiterrorism and drug interdiction efforts, import specialists at the ports are playing an increasingly central role in scrutinizing the growing volume of textile imports. They review the entry paperwork for all textile imports covered by quotas or needing visas in order to exclude shipments that are inadmissible or to seize those that are illegal, according to port officials. However, resource constraints at the ports have forced them to depend increasingly on STC targeting, results of TPVTs, and information from headquarters to identify suspect shipments and enforce textile laws. In 2001, CBP consolidated oversight of most of its textile operations into one headquarters division in the Office of Field Operations, creating the Textile Enforcement and Operations Division. One important exception to that consolidation was the Textile Clearinghouse in the New York STC, which remained in the Office of Strategic Trade. The Textile Enforcement and Operations Division is responsible for monitoring and administering textile quotas; providing technical input to textile negotiations; overseeing implementation of textile import policies at the ports; and for planning, reporting, and following up on TPVT visits. It uses the results of targeting by the STC, the findings of the TPVTs, and input from the ports to oversee the daily implementation of textile policy at the ports. It also works with CITA, the domestic textile industry, the importing community, and the Bureau of Immigration and Customs Enforcement (BICE). Notwithstanding this, the critical point in identifying and preventing illegally transshipped textiles from entering the United States is at the ports. There are more than 300 CBP ports across the country—including seaports, such as Los Angeles/Long Beach, California; land border crossings for truck and rail cargo such as Laredo, Texas; and airports handling air cargo such as JFK Airport in New York, New York. The top 10 of 42 CBP service ports that processed textile imports accounted for about 75 percent by value of all shipments in 2002, according to the official trade statistics of the Commerce Department. 
The key staff resources for textile enforcement at the ports are the inspectors and the import specialists. Figure 4 provides an overview of CBP's textile monitoring and enforcement process, including targeting, port inspections, and penalty investigations, along with data on the results obtained at each stage of the process in 2002. CBP processed about 3 million entries in that year, with 2,482 entries triggering targeting criteria. Of the targeted entries, 1,908 (77 percent) were reviewed, 981 (40 percent) were detained, 455 (18 percent) were excluded, and 24 (1 percent) were seized; civil investigations yielded 71 CBP cases and 45 penalties. At any point in the review or detention of an entry, the entry can be released into commerce or seized, depending on the circumstances. As national security and counternarcotics concerns have become CBP's top priorities, CBP inspectors' roles have shifted away from textile and other commercial inspection. The result is that, even at the larger ports, fewer CBP inspectors are knowledgeable about a specific commodity, such as textiles, and these inspectors now have less time and expertise to inspect textile shipments. For example, at all but one of the ports we visited, inspectors were mainly pulling sample garments from shipments for import specialists to examine, rather than acting as an additional, knowledgeable source on textiles who could do a first level of review. As a result, the import specialists have become more critical in preventing textile transshipment. About 900 import specialists work at the ports, of whom approximately 255 are assigned to work on textiles, according to a senior CBP official. These specialists have always been central to determining whether illegal textile transshipment has occurred, because visual inspection is usually not sufficient. While physical clues such as cut or resewn labels can indicate that a garment should be further examined, in many cases nothing about the garment itself indicates that a problem exists. To establish textile transshipment, import specialists must request production documents from the importer (who, in turn, requests them from the manufacturer) and review them to see if they support the claimed country of origin. This is a highly complex, technical, and labor-intensive process. Import specialists (or at some ports, entry specialists or inspectors) review the basic entry paperwork for all textile shipments arriving at the ports that are covered by quotas or need visas. They will place a hold on a textile shipment under any of the following conditions (expressed as a single check in the sketch after this list): 1. if there are "national criteria," that is, if headquarters has entered an alert in the Automated Commercial System (ACS), CBP's computer system for imports, based on targeting, TPVT findings, and other risk factors, to detain all shipments from that manufacturer or to that importer and request production documents; 2. if there are "local criteria," that is, the port has entered an ACS alert based on concerns particular to that port; 3. if the port has conducted its own targeting on shipments arriving at the port and found questionable entries; 4. if there are abnormalities in the paperwork that warrant further review; or 5. if there is other information, which may be provided by domestic industry, the Office of Textiles and Apparel at the Commerce Department, CITA, foreign governments, or informants.
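As noted above, the five hold conditions amount to a single any-of check. A minimal sketch follows, with hypothetical flag names standing in for ACS alerts and port-level findings; the real decision is made by import specialists working from the entry paperwork, not by a single boolean.

```python
# Sketch of the five hold conditions listed above as one predicate.
# Flag names are hypothetical; the first two correspond to ACS alerts.
def place_hold(entry: dict) -> bool:
    return any([
        entry.get("national_criteria_hit", False),  # 1. headquarters ACS alert
        entry.get("local_criteria_hit", False),     # 2. port-entered ACS alert
        entry.get("port_targeting_hit", False),     # 3. port's own targeting
        entry.get("paperwork_abnormality", False),  # 4. abnormal entry paperwork
        entry.get("external_tip", False),           # 5. industry/CITA/informant tip
    ])

print(place_hold({"paperwork_abnormality": True}))  # True -> hold the shipment
```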
In most cases, shipments with national criteria will automatically be detained, a sample pulled from the shipment, and production verification documents requested. For shipments held due to local criteria, port targeting, abnormalities, or other information, the import specialist may request that the CBP inspectors pull a sample from the shipment, which must be done within 5 days. The import specialist examines the sample garments and determines whether shipments being held can be released or require further review. If further review is warranted, the import specialist detains the shipment and sends the importer a detention letter asking the importer to provide the production verification documentation for an in-depth review. CBP must receive and review the documents within 30 days, or the shipment is automatically excluded. Based on the in-depth review of the documentation, the import specialist decides whether to release the goods into commerce, exclude them if found to be inadmissible, or seize them if found to be illegal. Goods are inadmissible and are denied entry when the importer has not provided sufficient information to substantiate the claimed country of origin or if documents required for entry have not been provided. Goods may be seized when the import specialist has evidence that the law has been broken; this requires a higher level of evidence than exclusion. In the post-September 11, 2001, environment, the ports have become more likely to rely on national criteria. At all of the ports we visited, CBP officials said that, in response to national criteria in ACS for textile shipments, they will detain all such shipments and request production documents. However, only a few large ports that handle a high level of textile imports, such as Los Angeles/Long Beach and New York/Newark, have been able to do much proactive local targeting. At most of the other ports, officials said that they enter as many local criteria and do as much targeting as they can but rarely get the spare time to do very much. CBP data support these statements. While national criteria accounted for about 75 percent of inspections in 2002, local criteria and self-initiated reviews accounted for 25 percent. Further, local criteria and self-initiated reviews declined by half from 2000 to 2002, and most of the local criteria in 2002 were generated by the ports in Los Angeles and New York. According to a senior CBP official, headquarters directs the input of national criteria to improve communications to the ports and foster greater uniformity of response and action by all affected ports. National criteria are continually tracked, analyzed, and adjusted as appropriate. One reason for this reliance on national criteria is that smaller ports have fewer import specialists, and in some cases no import specialists are dedicated to specific commodities; in some ports, the import specialist is responsible for the entire range of products that can enter the country. TPVTs are a critical enforcement tool, and the conduct and reporting of TPVT visits have been made more uniform and rigorous in recent years. However, while the TPVT reports are an important part of the targeting process, they are not always provided in a timely manner to CBP ports, CITA, and the foreign governments. TPVTs are critical to enforcement because the ports increasingly depend on the national criteria that headquarters supplies to trigger enforcement. These national criteria primarily result from STC targeting and the findings of the TPVTs conducted in high-risk countries.
Additionally, CBP may receive enforcement information provided by a foreign government or other sources. The TPVT process has two main objectives: (1) to verify that the production capacity of the factory matches the level and kind of shipments that have been sent to the United States and (2) to verify production of the specific shipments for which they have brought copies of the entry documents submitted to CBP. If a factory is closed, refuses entry, or the team finds evidence of transshipment, the team immediately notifies headquarters so that national criteria can be entered into ACS. Any further shipments from the closed factories will be excluded. Shipments from factories refusing entry or found to be transshipping will be detained, and importers will be asked for production verification documents. If a factory is deemed to be at high risk for transshipment, but no clear evidence has been found, CBP has generally waited until the TPVT report is approved before entering the criteria. Figure 5 shows a TPVT team verifying production in El Salvador textile factories. TPVT report drafting and approval involves several steps. First, the import specialists on the team write the initial draft of their TPVT results report while in country. When the team members return to their home ports, the team leader completes the report and forwards it to headquarters, where it is reviewed, revised, and finally approved by CBP management. Once the TPVT report is approved, the remaining national criteria for the high-risk factories are entered into ACS. CBP’s standard operating procedures for TPVTs, dated September 21, 2001, state that the TPVT team leader should finalize the reports within 21 calendar days after completing the trip and get headquarters approval within 2 weeks afterwards, or 5 weeks total. However, when we examined the approval timeline for TPVT reports during the past 4 years, we found that, in practice, report approvals have averaged 2.3 months, or almost twice as long as the procedural requirement. For example, the El Salvador TPVT we observed was conducted from July 21 through August 1, 2003, but headquarters did not approve the TPVT report until October 20, 2003. More importantly, during such interim periods, although national criteria have been identified for high-risk factories, they are generally not entered into ACS until the report is approved within CBP. The result is that questionable shipments for which criteria are intended can continue to enter commerce for another 2.3 months on average. From 2000 to 2003, an average of 37 percent of TPVT-generated criteria were for high-risk factories. This means that import specialists at the ports may not see more than a third of the criteria for about 2.3 months after the TPVT visits. At that time, if examination of these high-risk factories’ production documents show transshipment of textiles during the interim period, the import specialists will not be able to exclude these shipments, because they will have already entered commerce. Instead, import specialists will have to ask for redelivery by the importer to the port. At that point, most garments will likely have been sold. Although, according to CBP, it can charge the importer liquidated damages for failure to redeliver, additional transshipped garments will have entered commerce nevertheless. The TPVT reports are also sent to CITA and trigger another set of actions in the textile enforcement process. 
If the TPVT cannot verify the correct country of origin in all shipments being investigated, then CITA will ask the foreign government to investigate, which also provides the government with an opportunity to respond before CITA takes an enforcement action. CITA's goal is to get foreign governments to monitor and control their own plants—essentially, to self-police. According to a CITA official, if the government does not provide a satisfactory response, CITA is then obligated to direct CBP to exclude the illegal textiles. When CBP provides CITA with information that the TPVT (1) was refused entry to the factory, (2) found evidence of textile transshipment, or (3) found the factory was unable to produce records to verify production, CITA will send a letter to the foreign government requesting that it investigate whether transshipment has occurred and report back to CITA. The foreign government has 30 days to respond; if there is no response, CITA can direct CBP to block entry of that factory's goods, generally for 2 years. In such cases, CBP ports do not even have to review production documents first; the goods will be denied entry. Notice of this prohibition is published in the Federal Register to inform U.S. importers. CITA officials said that when CITA sends a letter to a foreign government, most governments respond with an investigation of the manufacturer. Sometimes governments penalize the factory by suspending its export license, or they report back that the factory has closed. As long as the government is taking steps to prevent further transshipment, CITA is satisfied, according to CITA officials. CITA officials stated that TPVT reports are essential to CITA's efforts to address illegal transshipment and that CBP has made progress in providing CITA, through the TPVT reports, with useful information to identify suspect factories and to determine the nature and extent of illegal transshipment. However, CITA officials continue to seek improvements in these reports; in particular, they want the reports to contain factual, verifiable information, presented clearly and concisely, with definitive conclusions regarding whether a visited factory is involved in illegal transshipment. While CITA officials acknowledged that it may be extremely difficult for CBP to find the "smoking gun" necessary to support this type of conclusion, they believe that increased clarity and more definitive conclusions are possible. Also, delays in receiving the reports hamper prompt action by CITA, and in many instances CBP does not advise CITA of follow-up action it has taken against factories that CBP found to be unable to verify production or otherwise suspect. A CITA official estimated that about one-half to three-quarters of TPVTs result in CITA letters. He estimated that CITA sent about six to seven letters between October 2002 and October 2003. Overall, as a CBP official recognized, CBP's TPVTs and TPVT reports are geared more toward providing CBP with national criteria. However, CITA officials said that they need more detailed evidence to better support CITA enforcement actions. CBP must also adapt to further challenges arising from the expiration of the Agreement on Textiles and Clothing—the global textile quota regime—on January 1, 2005. The end of the quota regime will mean that the United States will also lose its authority under that agreement to conduct TPVTs in former quota countries, unless customs cooperation provisions with the foreign governments are renewed.
CBP has other means by which it can supplement the enforcement information it receives from targeting and TPVTs, including placing import specialists in overseas Customs Attaché offices in high-risk countries and obtaining greater foreign government cooperation. Finding means of supplementing the enforcement information provided to CBP ports will be critical once the global textile quota regime, embodied in the WTO Agreement on Textiles and Clothing, expires on January 1, 2005. The numerous U.S. bilateral quota agreements with WTO-member textile exporting countries were all subsumed in the global regime. The textile enforcement provisions in these agreements provided the authority for CBP to conduct TPVTs. All of these provisions will expire together with the global textile quota regime. CBP will have continued authority to conduct TPVTs in countries with free trade agreements and preference agreements (such as the Caribbean Basin Trade Partnership Act), as well as in non-WTO countries whose bilateral quota agreements will not expire (such as Vietnam). However, certain incentives for transshipment will continue to exist. For example, special provisions that apply to imports of Chinese textiles have recently been invoked under the safeguard provision of China's WTO Accession Agreement to limit growth of imports of certain textile categories. The safeguard provision allows individual categories of textiles to remain under quota for up to an additional 12 months if the domestic industry petitions CITA for relief and CITA affirms the petition. The petition must establish that imports of Chinese-origin textile and apparel products are threatening to impede the orderly development of trade in these products, due to market disruption. The U.S. government currently maintains a Memorandum of Understanding with Hong Kong under which customs cooperation has been conducted. Given the possibility of additional safeguard quotas being imposed on Chinese textiles after the global quota regime expires, it will be critical that U.S.-Hong Kong customs cooperation continue. However, the United States does not have such memorandums of understanding with other high-risk countries in the region, such as Taiwan, Macau, and Bangladesh. CBP will not have the authority to conduct TPVTs in these high-risk countries unless customs cooperation agreements are concluded or renewed. CBP has sought to supplement the enforcement information it receives by placing some import specialists in overseas Customs Attaché offices in high-risk countries and by obtaining greater foreign government cooperation. CBP started sending import specialists to its overseas Customs Attaché offices in 2000. The reason for this effort was that most staff in the Customs Attaché offices were special agents who were criminal investigators and had no trade background; import specialists were to provide this missing trade experience. CBP identified the countries that would most benefit from having an import specialist in the Attaché office, and by November 2003, six import specialists were assigned to Canada, Hong Kong, Japan, Mexico, Singapore, and South Africa. A CBP official said that the import specialists are providing useful information. They have been able to help in following up on TPVT findings. They have also been useful in uncovering counterfeit visa cases in which fake company names and addresses are given in import documents.
If more import specialists were placed in Customs Attaché offices in high-risk countries to assist with textile monitoring and enforcement, additional benefits would result, according to the CBP official. Between TPVT visits, they would be able to assist the targeting effort with activities such as checking whether a particular factory really exists or has the level of capacity claimed. They could also verify factory addresses and licensing. Finally, they would be able to facilitate cooperation and coordination with the foreign government on textile transshipment issues, including conducting training on transshipment prevention. Another means by which CBP can supplement the enforcement information it receives is by encouraging foreign government cooperation and self-policing. A good example of such an arrangement is CBP's present relationship with Hong Kong customs authorities. The Hong Kong Trade and Industry Department has established an extensive system for regulating Hong Kong's textile industry, which it enforces together with the Customs and Excise Department. Hong Kong officials work closely with the U.S. Customs Attaché Office in Hong Kong and CBP's Textile Enforcement and Operations Division at headquarters. Hong Kong also provides self-policing assistance to CBP. Hong Kong officials conduct follow-up investigations on findings by the TPVTs, called Joint Factory Observation Visits in Hong Kong, which have resulted in numerous cancelled or suspended export licenses. Hong Kong officials have also actively prosecuted and convicted individuals violating Hong Kong's textile transshipment laws. Because convictions are a matter of public record, CBP gets the names of those companies that have been convicted of violations. Macau and Taiwan also provide CBP with such information. CBP creates national criteria for these manufacturers, and the ports then detain any future shipments for production verification documentation. Figure 6 shows the high volume of commercial traffic coming into Hong Kong from Shenzhen, China, at the Lok Ma Chau Control Point. However, it is not clear whether many other high-risk countries have the capacity to self-police. In some countries, customs authorities may be constrained by domestic laws that either limit their authority or do not extend sufficient authority to adequately enforce the textile transshipment provisions in their bilateral agreements with the United States. For example, government officials in El Salvador said that they do not have the same authority that U.S. CBP has in requesting production documentation from Salvadoran factories, because such authority is not provided in their customs laws. Such lack of authority was also an issue that USTR addressed when it negotiated the U.S.-Singapore Free Trade Agreement (FTA), finalized in 2003. CBP, which is a technical advisor to such negotiations, encouraged the addition of a provision requiring the government of Singapore to enact domestic legislation providing the authority needed to fully enforce the agreement's textile transshipment provisions. The United States is currently negotiating numerous new FTAs. As with the Singapore FTA negotiations, USTR may be able to include such provisions in new FTAs, providing an opportunity for the United States to buttress textile transshipment enforcement provisions and enhance the ability of foreign governments to conduct more effective self-policing.
Such provisions have generally been included in the FTAs negotiated since NAFTA, according to a senior CBP official. CBP uses its in-bond system to monitor cargo, including foreign textiles, transiting the United States before entering U.S. commerce or being exported to a foreign country. However, weak internal controls in this system enable cargo to be illegally diverted from its supposed destination, thus circumventing U.S. quota restrictions and duties. At most of the ports we visited, CBP inspectors we spoke with cited in-bond cargo as a high-risk category of shipment because it is the least inspected and because in-bond shipments have been growing. They also noted that CBP's current in-bond procedures rely too heavily on importer self-compliance and that little actual monitoring of cargo using this system takes place. Several weaknesses hinder the tracking of these shipments: the lack of automation for tracking in-bond cargo, inconsistencies in targeting and examining cargo, in-bond practices that allow shipments' destinations to be changed without notifying CBP and that allow extensive time intervals for shipments to reach their final destination, and inadequate verification of exports to Mexico. Although CBP has undertaken initiatives to tighten monitoring, limitations continue to exist. These limitations pose a threat not only in the area of textile transshipment but also in other areas related to national security. Without attention to this problem, enforcement of national security, compliance with international agreements, and proper revenue collection cannot be ensured. To expedite the flow of commerce into the United States, Congress established in-bond movements to allow cargo to be transported from the port of arrival to another U.S. port for entry into U.S. commerce or for export to a foreign country. Cargo can be transported in several ways using the in-bond system. When a vessel arrives with containers, an importer may elect to use the in-bond system to postpone payment of taxes and duties while moving the goods from the original port of arrival to another port. By doing this, the importer delays paying duties until the goods are closer to their ultimate destination—for example, goods arriving by ship in Los Angeles may transit the country and ultimately be inspected and have duties levied in Chicago. Or goods may pass through the United States on their way to another destination, such as goods that are transported from Los Angeles to Mexico or from Canada to Mexico. There are three types of in-bond movements:

Immediate transportation (I.T.). This is merchandise that is moved from one U.S. port to another for entry into U.S. commerce.

Transportation and exportation (T&E). This is merchandise "in transit" through the United States; export to another country is intended at the U.S. destination port.

Immediate exportation (I.E.). This is merchandise exported from the port at which it arrives.

Once the shipment leaves the port of arrival, the bonded carrier has 30 days to move the merchandise to the U.S. destination port. Upon arrival at the destination port, the carrier has 48 hours to report arrival of the merchandise. The merchandise must then be declared for entry or exported within 15 days of arrival (see fig. 4). Based on responses from our survey of 11 of 13 major area ports, the use of the in-bond system as a method of transporting goods across the country increased substantially from January 2002 through May 2003.
For our study, we surveyed the 13 ports across the country that process the largest amount of textiles and apparel and asked them about in-bond operations at their ports. Figure 7 shows the increase in in-bond shipments processed over the past 17 months at 11 of these ports. From January 2002 through May 2003, in-bond entries increased 69 percent. A recent study on crime and security at U.S. seaports estimated that approximately 50 percent of all goods entering the United States use the in-bond system and projected that this figure will increase. Based on our survey, the top three U.S. ports that were the most frequently reported destinations for in-bond shipments from October 2002 to May 2003 were Miami, New York, and Los Angeles. In-bond entries comprised a significant portion of the total entries for these ports: 58.2 percent of total entries in Miami, 60 percent in New York, and 45.9 percent in Los Angeles. For goods arriving at the Los Angeles-Long Beach seaport, the top three intended in-bond destination ports for fiscal year 2002 were Chicago; New York; and Dallas-Fort Worth, Texas. Many officials at the ports we surveyed expressed concern in their responses over the growth of in-bond shipments and their lack of additional resources to examine and track these shipments. In addition, some port officials we spoke with also expressed concern that the in-bond system is increasingly being used to divert goods that are quota restricted (such as textiles) or that carry high duty rates. One example of how illegal in-bond diversion occurs is when textile shipments arrive by vessel at Los Angeles and are transported by truck to a port such as Laredo, Texas, where the carrier (trucking company) may declare immediate exportation to Mexico (see fig. 5). However, instead of being exported to Mexico, the goods are shipped to another U.S. location for sale. This can occur because CBP relies heavily on importer compliance: it requires only that carriers drop off paperwork showing exportation and does not require physical inspection of the cargo. CBP and BICE have ongoing investigations to address the problem of illegal diversion of in-bond merchandise. For example, a 2003 in-bond diversion investigation found that 5,000 containers of apparel were illegally imported, avoiding quota restrictions and the payment of $63 million in duties. Between May 2003 and October 7, 2003, the ports of Long Beach and El Paso made 120 seizures in cases involving a textile in-bond diversion smuggling scheme. The total domestic value of these goods was more than $33 million. Table 2 shows the number of in-bond cases and the penalty amounts assessed by CBP for the past 3 fiscal years. Total penalty amounts assessed were more than $350 million. At present, CBP lacks a fully automated system that can track the movement of in-bond transfers from one port to another. Much shipment information must be entered manually—a time-consuming task when thousands of in-bond shipments must be processed every day—and as a result, recorded information about in-bond shipments is minimal and records are often not up to date. In addition, in-bond arrival and departure information is not always recorded in a timely manner; and according to our survey results, insufficient cargo information, along with a lack of communication between U.S. ports about in-bond shipments, makes it difficult for ports of destination to monitor cargo and know the number of in-bond shipments to expect.
CBP has begun to automate its in-bond system, but concerns remain. By definition, an in-bond movement is entry for transportation without appraisement. CBP collects significantly less information on in-bond shipments than on regular entries that are appraised. While CBP has the ability to collect additional information for textile products, our survey results show that very little information is collected by CBP for in-bond shipments in general. To process an in-bond shipment, all in-bond paper transactions require a Customs Form 7512, Transportation and Entry form. This form is filled out by brokers and submitted to the port of arrival. According to many in-bond personnel responding to our survey, the information provided on this form to allow the shipment to travel in-bond is often minimal, capturing some, but not all, shipment manifest information, shipment data, and carrier data. They also responded that the information on the Customs Form 7512 is often vague, with insufficient description of the commodities shipped. The form also lacks any invoice or visa information—information that is critical for shipment targeting. Without this information, CBP is unable to effectively track in-bond shipments. In-bond shipments of textiles or textile products have specific description requirements. CBP regulations require that these shipments be described in such detail as to allow the port director to estimate any duties or taxes due. In addition, the port director may require evidence of the approximate correctness of value and quantity or other pertinent information. However, our survey results show that such additional information has not been obtained in practice. According to some port in-bond personnel we spoke with, as well as some survey respondents, in-bond data are not entered in a timely, accurate manner. Currently, CBP accounts for goods that initially arrive at one CBP port (the port of arrival) but are shipped immediately to the port of entry (the port of destination) through an in-bond module in CBP's ACS. For automated entry forms submitted on electronic manifests, departure data can be entered in ACS automatically, showing that an in-bond transfer is planned from the port of arrival. For nonautomated (paper) entries, CBP officials are supposed to input departure data manually at the port of arrival to establish accountability for the merchandise. When the goods arrive at the port of destination, personnel are to input data indicating that the goods have arrived, at which time accountability is transferred from the port of arrival to the port of destination. However, at three of the seven ports we visited, officials stated that the departure and arrival information was not consistently maintained, because personnel did not input data promptly. As the volume of shipments transiting in-bond has increased, the workload of entering this information has created a backlog at ports across the country, often resulting in entries that are never input into the system. More than half of the 29 ports we surveyed reported that between 50 and 100 percent of their in-bond entries were paper entries. At two of the largest ports, which process the highest volume of in-bond entries, officials reported that more than 75 percent of the entries received were paper entries requiring staff to enter information manually.
CBP personnel at two major ports told us that in-bond data are often not entered into the system at the port of arrival, because CBP lacks the personnel to enter in-bond information for every shipment. Results from our survey showed that 80 percent of the ports did not track in-bond shipments once they left the port of arrival. A CBP official at the port of Laredo, Texas, a major port of destination, said that the port has no way of knowing the number of shipments intended to arrive there. Without proper communication between them, ports are unable to determine the location of a shipment traveling in-bond until it reaches its destination. As a result, personnel at the port of destination were unable to anticipate a shipment's arrival and thereby identify and report any delayed arrivals, because a record of departure had never been set up. However, some ports, such as Laredo, Texas, are beginning to communicate with other ports more frequently to anticipate and track in-bond shipments. Finally, although CBP has computer-generated reports available to identify in-bond shipments that were not reported and closed within the required 30 days, 70 percent of the ports we surveyed reported that they have never used these reports, because they either (1) did not consider the reports reliable or (2) had never heard of them. Tracking overdue shipments is a critical internal control, because it alerts CBP to shipments that never made it to their stated destinations. Without consistent examination of overdue shipments, CBP cannot account for in-bond shipments that failed to meet the time requirements for delivery. We reported these limitations in 1994 and 1997, and we made several recommendations to CBP on improving the monitoring of in-bond shipments. In 1998, CBP initiated the TINMAN Compliance Measurement Program to address some of the weaknesses noted in our 1997 report, including the ability to generate reports to follow up on overdue shipments. In 2001, the Treasury Department's Inspector General conducted a financial management audit and found that although TINMAN resolved some of the weaknesses found in prior audits, CBP was still unable to ensure that goods moving in-bond were not diverted into U.S. commerce, thereby evading quotas and proper payment of duties. Results from our survey show that this compliance program is not consistently implemented across ports. In March 2003, CBP launched an initiative to automate the in-bond system with a pilot program called the Customs Automated Form Entry System (CAFES), currently being tested at six U.S. ports. CAFES is an interim step toward full automation. It is intended to allow more detailed shipment data to be entered into the system electronically, thus reducing the amount of time personnel must spend entering shipment data. The CAFES program is currently voluntary, and so far about 8 to 10 percent of the brokers at the pilot ports are participating. According to a 2003 CBP Action Plan, however, all land border truck ports will be required to use the automated in-bond system by midyear 2004. No time frame yet exists for deploying CAFES at other locations. Although CAFES will improve automation of the in-bond system, it will not resolve in-bond tracking problems until full automation occurs.
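Any automated tracking system would, at a minimum, need to flag shipments that miss the deadlines described earlier: 30 days to reach the destination port, 48 hours to report arrival, and 15 days to enter commerce or be exported. The following sketch illustrates such a deadline check. It is a minimal illustration in Python; the record fields and function name are hypothetical and are not drawn from CBP's actual systems.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class InBondRecord:
        # Hypothetical fields; CBP's systems capture different data.
        departed_port_of_arrival: datetime
        arrived_port_of_destination: Optional[datetime] = None
        arrival_reported: Optional[datetime] = None
        entered_or_exported: Optional[datetime] = None

    def overdue_flags(record: InBondRecord, now: datetime) -> list:
        """Flag a shipment that has missed any of the three deadlines
        described in this report."""
        flags = []
        if record.arrived_port_of_destination is None:
            # The bonded carrier has 30 days to move the merchandise
            # to the U.S. destination port.
            if now - record.departed_port_of_arrival > timedelta(days=30):
                flags.append("transit overdue: no arrival after 30 days")
        else:
            # The carrier has 48 hours to report arrival.
            if (record.arrival_reported is None and
                    now - record.arrived_port_of_destination > timedelta(hours=48)):
                flags.append("arrival not reported within 48 hours")
            # Merchandise must be entered or exported within 15 days.
            if (record.entered_or_exported is None and
                    now - record.arrived_port_of_destination > timedelta(days=15)):
                flags.append("not entered or exported within 15 days")
        return flags

Run against complete and current departure and arrival data, a check of this kind is what the computer-generated overdue reports described above are meant to provide; the weakness lies less in the logic than in the paper-based data entry that feeds it.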
When we spoke to CBP headquarters officials about this continuing weakness, they stated that they had not made additional improvements to the in-bond program, because those improvements would be made when the new Automated Commercial Environment (ACE) computer system is rolled out. CBP stated that it does not have a time frame for deploying the system to fully automate in-bonds because development is still under way, but it estimated that this might be accomplished within 3 years. Without a definite time frame, it is not clear whether the automation of in-bonds will actually be implemented. Although all incoming cargo is targeted for national security purposes, once the paperwork is filled out for a shipment to travel in-bond, CBP does not generally perform any additional targeting for these shipments. CBP instead focuses on targeting shipments making an official entry into U.S. commerce. The New York STC also does not analyze information from in-bond shipments in order to perform additional targeting. Additional targeting of in-bond shipments is critical because in-bond shipments that are not identified as high-risk by the Container Security Initiative may pass through CBP undetected and without inspection. Recognizing this need, some ports we surveyed responded that they have begun to target in-bond shipments. However, targeting is not consistently performed, because ports do not have the staff to conduct targeting or examinations. Port management officials we spoke with at two major ports stated that since the September 11 attacks, resources have shifted to other antiterrorism areas. In addition, because brokers for in-bond shipments provide very little information at the port of arrival, targeting of in-bond shipments is difficult to conduct (see fig. 9 for an illustration of the in-bond shipment process and points of concern). CBP officials at most of the ports we visited cited resource constraints as a top reason for not inspecting in-bond shipments. For example, CBP officials at the Los Angeles/Long Beach, California, port—one of the busiest, with the highest volume of in-bond entries—told us that current understaffing does not allow examination of many in-bond shipments. Moreover, results from our survey showed that more than 80 percent of the 13 area ports we surveyed do not have full-time staff dedicated to inspecting in-bond shipments. Some ports responded that if they had more staff dedicated to in-bond shipments, they would have a greater ability to inspect them. In addition, seven of the eight largest ports that responded to our survey stated that inspectors dedicate less than 10 percent of their time to in-bond inspections. For example, CBP officials at the port of New York/Newark estimated that less than 2 percent of in-bond entries are actually inspected. According to several CBP in-bond personnel we spoke with at two ports, certain provisions in the in-bond regulations make it more difficult to track in-bond shipments. These regulations pertain to (1) whether importers can change a shipment's final destination without notifying CBP and (2) the time allowed for in-bond shipments to reach their final destination. Under the regulations, an in-bond shipment can be diverted to any Customs port without prior notification to CBP, except where diversions are specifically prohibited or restricted.
For example, an importer with a shipment arriving in Los Angeles may declare that it will travel in-bond to Cleveland, Ohio. However, after filing the paperwork, the importer may elect to change the final destination to New York, without filing new paperwork or informing CBP. The information provided to CBP at the port of arrival will still state Cleveland as the final destination, and CBP has no way of knowing where the shipment is going unless and until it arrives at another port. For in-bond shipments of textiles or textile products, a change in destination requires the approval of CBP's director at the port of origin. However, officials at three ports that handle high volumes of textile in-bond shipments said that they either were unaware of the regulation or found it too difficult to enforce because of the high volume of shipments they processed. Another problem CBP in-bond personnel mentioned in monitoring in-bond movements is the extensive time allowed to carriers to transport merchandise across the country. The Tariff Act of 1930 established the in-bond system, and CBP regulations set a time limit of 30 days for the delivery of merchandise at the port of destination for entry or for exportation. Port officials stated that this time limit is excessive and may contribute to the diversion of cargo by giving carriers too much time to move merchandise to different locations. According to these CBP officials, tracking would be easier if brokers or carriers had to close out the in-bond within a more restricted period, such as 10 to 20 days, depending on the distance between the port of arrival and the final port of destination. Mexico's in-bond system works differently from the U.S. system. When we spoke with Mexican Customs officials at the port of Nuevo Laredo in Mexico regarding illegal textile transshipment, they said that their in-bond system can track the movement of goods more easily because (1) importers are not allowed to change the final destination and (2) carriers are given a set time limit to deliver merchandise, depending on the distance between the port of arrival and the port of destination. Several BICE investigations have uncovered in-bond fraud involving textile shipments that were allegedly exported to Mexico but instead entered U.S. commerce to circumvent quota and duty payment. To cope with this problem, BICE officials in Laredo, Texas, initiated an effort to improve the verification of exports to Mexico by requiring brokers to submit a Mexican document known as a "pedimento" as proof that shipments processed for immediate exportation were in fact exported to Mexico. However, these documents are easily falsified and can be sold to willing buyers for submission to CBP, according to Laredo CBP officials. When we spoke with Mexican Customs officials at the Nuevo Laredo, Mexico, port, they acknowledged that reproducing false government pedimentos is easy and that the pedimento is not a reliable method for verifying exportation. The broker community in Laredo, Texas, also expressed serious concerns about fraudulent activity by some Mexican government officials. Brokers suspected that pedimentos were being sold by some Mexican Customs officials to facilitate the diversion of goods into the United States. In fact, in August 2003, the port director of Nuevo Laredo, Mexico, was indicted for selling false Mexican government documents for $12,000 each.
Moreover, many ports along the U.S.-Mexican border do not have export lots where trucks with shipments bound for Mexico can be physically examined to ensure that the shipments are actually exported to Mexico instead of entering U.S. commerce. Although export lots were open at one time, they have been closed at many ports as a result of resource constraints. When export lots were open, inspectors were able to verify exportation because carriers were required to physically present the truck with the shipments for inspection. Since our review began, CBP has opened an export lot in Laredo, Texas, and has required that all shipments declared for export to Mexico be presented and inspected at the export lot. However, not all ports along the border have export lots, and Laredo in-bond personnel have noticed that, as a result, many trucks now choose to clear their goods through ports without export lots. CBP officials we interviewed in Laredo, along with members of the Laredo broker community, have raised this concern and have noted the need to reopen export lots as a way to minimize fraud. Effective October 20, 2003, a CBP directive mandated that all merchandise to be exported be presented for export certification. Certification is not to take place until the merchandise is physically located where export is reasonably assured. According to a senior CBP official, as a result of this directive, ports with export facilities have reopened them or provided a reasonable alternative, such as reporting to the import facility. He also stated that CBP has developed plans to verify that at least a representative sample of reported exports are actually exported. However, officials we spoke with at two ports were not sure whether they would have the resources to verify every in-bond export. A senior CBP official confirmed this problem, saying that verification of exports might not occur during periods of staffing constraints. CBP has broad enforcement authority regarding illegal textile transshipment, but it has experienced challenges in implementing enforcement actions. These challenges include a complex and lengthy investigative process, as well as competing priorities. As a result of these challenges, CBP generally has relied on excluding transshipped textiles from entry into the United States, rather than seizing merchandise or assessing penalties. In addition, addressing in-bond violations presents special challenges because of weaknesses in CBP's internal controls and the nature of the penalty structure. CBP also employs other means to deter illegal transshipment, such as informing the importer community of violations of textile transshipment laws and making available lists of foreign violators. CBP has broad authority to act when violations of textile transshipment laws occur. Depending on the circumstances, CBP may pursue the following enforcement actions:

Exclusion of the textile shipment. CBP can exclude textiles from entry if the importer has not been able to prove country of origin. Before admitting goods into the United States, CBP may ask for production records, review them, and then make a determination on origin. The importer must be able to prove the textiles' country of origin. If CBP cannot clear the goods within 30 days, the textiles are automatically excluded.
CBP may also deny entry of textiles if production documents reveal that the textiles were produced at a factory identified in the Federal Register by the Committee for the Implementation of Textile Agreements, as discussed below.

Seizure of the textile shipment. CBP can seize the textiles if it has evidence that violations of a law have occurred. By law, seizure is mandatory if textiles are stolen, smuggled, or clandestinely imported. In other instances, CBP can exercise discretion in deciding whether seizure is the most appropriate enforcement action. When seizure is invoked, CBP takes physical possession of the merchandise. In order for textiles to be seized, there must be specific statutory authority that allows for the seizure.

Imposition of penalties. CBP has several administrative penalties available, based on the nature of the violation. CBP may levy administrative penalties locally at the port level without conducting an investigation. Alternatively, CBP may refer a suspected violation for an investigation by BICE. The outcome of the BICE investigation may be a referral to (1) CBP for an administrative penalty or (2) the U.S. Attorney for possible criminal prosecution of the importer and its principal officers and the imposition of criminal monetary penalties. Thus, some monetary penalties result from investigations performed by BICE, while others simply result from activity within a port. In addition to civil administrative penalties, CBP may also assess liquidated damages claims against bonded cartmen (carriers) implicated in violations involving cargo transported in-bond. CBP's Office of Fines, Penalties and Forfeitures is responsible for assessing certain penalty actions for transshipment violations and for adjudicating penalties, liquidated damages claims, and seizures occurring at the ports, up to a set jurisdictional amount.

Pursuit of judicial criminal or civil prosecutions. CBP may refer unpaid civil administrative penalty or liquidated damages cases to the Department of Justice for the institution of collection proceedings either in federal district court or in the Court of International Trade. Additionally, BICE investigates potential violations to establish the evidence needed for criminal prosecution. When BICE determines that sufficient evidence can be established, cases may be referred to the appropriate U.S. Attorney's Office for criminal prosecution.

CBP has increasingly relied on exclusions rather than seizures or penalties for textile transshipment enforcement for two primary reasons. First, it is easier to exclude transshipped goods than to seize them, because exclusions require less evidence. Second, although excluded textile shipments may incur penalties, CBP often does not assess penalties against importers of excluded merchandise, because it is frequently impossible to attach specific culpability to the importer. According to CBP officials, absent evidence to conclude that the importer failed to exercise reasonable care, it would be difficult to sustain a penalty against an importer of excluded merchandise. By excluding shipments, CBP also avoids the lengthy and complex process associated with criminal and civil prosecutions and penalties. In enforcing textile transshipment violations, CBP has relied more on exclusions than on seizures or penalties.
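The different evidentiary thresholds behind these outcomes can be summarized as a simple decision rule. The sketch below is illustrative only, not a statement of CBP policy; the function and its inputs are hypothetical simplifications of the review process described in this report.

    def port_review_outcome(evidence_law_broken: bool,
                            documents_received_within_30_days: bool,
                            origin_substantiated: bool) -> str:
        """Approximate the review outcomes described in this report.
        Seizure requires a higher level of evidence than exclusion."""
        if evidence_law_broken:
            return "seize"    # evidence that a law has been broken
        if not documents_received_within_30_days:
            return "exclude"  # automatic exclusion after 30 days
        if not origin_substantiated:
            return "exclude"  # claimed country of origin not proven
        return "release"      # goods released into U.S. commerce

In practice, as the report notes, the first branch is rarely reached: proving that a law was knowingly broken is far harder than showing that origin was not substantiated, which is one reason exclusions dominate.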
Textiles may be excluded if the importer is unable to prove country of origin, whereas seizures may occur when false country of origin documents are presented to evade quota or visa restrictions—a situation requiring a higher standard of evidence. Exclusions usually have an immediate effect, although an importer that chooses to protest an exclusion can appeal CBP's decision to the Court of International Trade. Import specialists in Long Beach/Los Angeles said that when an exclusion determination is made, they are ready to go to court if needed. The importer can ship the excluded textiles to another country, abandon them, or destroy them. CBP may elect not to levy penalties on excluded goods where the culpability of the importer cannot be established, and it generally issues penalties against the importer only if the importer is implicated or the transshipped textiles entered the commerce of the United States. However, a senior CBP official said that the exclusion of textiles is considered a better deterrent than penalties because the importer cannot receive the goods and, therefore, cannot get them into U.S. stores that are waiting for them—often for seasonal shopping. Also, the complexity and length of investigations and litigation are no longer of concern, since the goods are simply excluded from entering the United States. Table 3 presents port-level data on selected enforcement actions from 2000 to 2002. The investigative phase for textile transshipment cases can be a complex and lengthy effort, resulting in few criminal penalties. Investigators often must follow convoluted paper trails for the movement of goods and money and obtain accounting records, sometimes having to get Internal Revenue Service records (which can be a 6- to 9-month process). They may also have to subpoena banks, interview brokers and shippers, obtain foreign government cooperation, and pursue new leads as they arise. A BICE official noted that it is often difficult to pursue textile transshipment criminal cases because, unlike with some crimes, there is no "smoking gun" at the port. For example, when drugs are found, the drugs themselves are evidence of the violation. With textile transshipment, an illegal T-shirt will look no different than a legal one. The basis for the violation is established by proving that a false country of origin was knowingly claimed and that the importer committed fraud, gross negligence, or negligence. Although CBP does not keep records on the length of time for disposition of cases, import specialists and inspectors voiced concern that investigations can be lengthy. For example, a senior CBP official cited a matter from 1989 involving 83 illegal entries: although some civil cases went to the Court of International Trade in 1990, the first decisions were made in 1993, and the last were not decided until 1995, 1997, and 1999. Two of the larger civil cases against multinational corporations took 7 and 10 years to pursue at the Court of International Trade. Accordingly, CBP has a process in place to determine whether to accept offers to settle civil cases out of court, which includes evaluating the litigation risk and the resources CBP would have to devote to a trial. One factor relating to the length of cases is that, if BICE initiates a criminal investigation, any related civil action is held in abeyance pending possible criminal prosecution. If sufficient evidence exists to justify a criminal prosecution, the case then goes to the U.S.
Attorney’s Office. This move delays related civil proceedings. BICE officials in Los Angeles/Long Beach noted that U.S. attorneys are short on resources, since they are also working on drug-smuggling and money- laundering investigations; and in the past 10 years in that district, fewer than 10 cases have been sent to the U.S. Attorney’s Office and prosecuted. They noted, though, that the U.S. attorneys had not rejected any textile transshipment cases that BICE had brought to them. Neither CBP nor the Justice Department could provide exact figures on the numbers of prosecutions of illegal textile transshipments, but CBP officials noted that the figures were low. In addition, investigating a case may entail allowing the suspect textile transshipments to continue for a while, to obtain sufficient evidence. However, investigators can be pulled off a particular textile investigation for a higher priority; and then the textile case sits, with CBP sometimes never getting back to it, according to a senior CBP official. When CBP pursues a case, the monetary amounts of the penalties may get reduced, according to CBP staff, in line with CBP’s mitigation guidelines. CBP data are not available to summarize the penalty amounts assessed and the final mitigated penalty amounts. But in one example, CBP discovered that a company transshipped $600,000 worth of blue jeans to evade quota and visa restrictions. Company officials pled guilty and, in the end, paid CBP civil penalties totaling only $53,000. CBP officials in the field expressed concern that substantial penalty reductions may be a disincentive to pursuing penalties or investigations. CBP has experienced two basic challenges in deterring in-bond diversions through enforcement actions. First, the previously discussed weaknesses in the system make it difficult for CBP to track in-bond movements and catch the violators. Second, when CBP discovers a breach of a bond by a bonded cartman (carrier), the total liability associated with the bond breach is limited to the value of the bond, rather than the value of the merchandise. Additionally, it is difficult for CBP to enforce payment of unpaid penalties and liquidated damages because the Department of Justice does not have sufficient resources available to prosecute all the referrals for collections actions. Because in-bond shipments are not tracked, CBP cannot account for all the in-bond shipments that fail to fulfill the requirements of timely cargo delivery. According to a senior BICE official involved in in-bond investigations, when an investigation is initiated, BICE must physically track the cargo to prove a violation has occurred. This is difficult because the cargo is often not at the port but at a warehouse, and CBP’s surveillance must be constant in order to establish that the cargo was not exported. When CBP does find in-bond diversion occurring, it typically seeks liquidated damages for breach of the bond. When CBP demands payment of liquidated damages, the claim cannot exceed the amount of the bond. Several CBP and BICE officials stated that the bond amounts set by CBP regulations are low, compared with the value of the merchandise. The original bond amount for textile entries relates to the total value of shipments. 
However, according to BICE officials, convention has allowed bonds for bonded cartmen (carriers) to be generally set at $25,000 to $50,000 a year—a minimal amount that, as one BICE investigator put it, is the "cost of doing business." For example, if a textile shipment with a domestic value of $1 million is illegally diverted, liquidated damages can be set at three times the value of the merchandise. However, if the bond is set at $50,000, the demand for payment of liquidated damages cannot go above this bond amount. Furthermore, violators may request mitigation of the $50,000 claim, so that the resulting payment may be as little as $500. Bond amounts are usually set every calendar year, and if the liquidated damages claims in one year exceed that year's bond amount, the next year's bond cannot be used to pay the liquidated damages incurred the previous year. In 1989, CBP recognized the problem of delinquent liquidated damages claims against a bonded carrier exceeding the amount of the bond. CBP then issued a directive that required district directors to periodically review bond sufficiency. CBP again issued directives in 1991 and 1993 to provide guidelines for the determination of bond sufficiency. However, CBP and BICE officials we spoke with stated that inadequate bond amounts continue to make liquidated damages a weak deterrent to in-bond diversion. CBP also employs methods to deter illegal transshipment by informing the importer community of textile transshipment violators. CBP officials view the publication of violators as a means to deter transshipment, and CBP and CITA maintain various lists of foreign violators, in part, for this purpose. In addition, under the Customs Modernization Act, CBP is obligated to use informed compliance and outreach with the trade community. CBP regularly meets with the trade community to keep it informed of the latest enforcement information and to help encourage reasonable care on its part. CBP is looking increasingly at patterns of company conduct to establish lack of reasonable care. It currently is investigating or monitoring 40 U.S. importers it suspects may have violated the reasonable care standard. CBP maintains three lists associated with illegal transshipment violations: the "592A list," the "592B list," and the "administrative list." The 592A list is published every 6 months in the Federal Register and includes foreign manufacturers who have been issued a penalty claim under section 592A of the Tariff Act of 1930. The 592B list enumerates foreign companies to which CBP attempted to issue prepenalty notices that were returned as "undeliverable" and that therefore could not be included on the 592A list. The administrative list identifies companies that have been convicted or assessed penalties in foreign countries, primarily Hong Kong, Macau, and Taiwan. According to a senior CBP official, CBP decided that because these companies had received due process in their own countries and had been determined under those countries' laws to have illegally transshipped textiles (false country of origin), CBP could legally make this information public. This list is updated as necessary. Between 1997 and October 2003, the names of 488 companies from Hong Kong, 7 from Taiwan, and 34 from Macau were published in the administrative list. CITA has a policy in place whereby a letter is sent to the government of an offending country requiring it to explain what is being done to enforce antitransshipment policies.
If the government does not respond, the company is placed on an "exclusion" list, and goods from that company may not be shipped to the United States. This exclusion can run anywhere from 6 months to 5 years, but the standard period is 2 years. In 1996, CITA issued a new policy stating that all goods from a factory could be banned if a TPVT visit to that factory was not allowed. After the policy was issued, Hong Kong began allowing the United States to observe enforcement efforts in factories, although it does not allow CBP access to companies' books and records. Extensive enforcement efforts led to 500 convictions in Hong Kong courts for origin fraud from 1997 to October 2003. When CITA has evidence of textile transshipment from CBP's TPVTs or other sources, it may also apply chargebacks if it has evidence of the actual country of origin and the goods have entered the commerce of the United States. Chargebacks occur when goods were not charged against quotas as they should have been. CITA then "charges those goods back" against the quota levels of the actual country of origin. For example, if textiles have been transshipped through Vietnam, but their actual country of origin was found to be China, China's quota will be reduced by the appropriate amount. CITA also has the authority to "triple charge" goods. Although CITA has the authority to issue chargebacks, over the last decade it has issued chargebacks only against China and Pakistan. The last chargebacks were issued in 2001 for a sum of $35 million; from 1994 to 2001, chargebacks totaled $139 million. Chargebacks require a higher burden of proof because they require that the actual country of origin be established. When the Customs Modernization Act became effective on December 8, 1993, CBP, then known as Customs, was given the responsibility of providing the public with improved information concerning the trade community's rights and responsibilities. In order to do so, Customs created initiatives aimed at achieving informed compliance, that is, at helping to ensure that importers meet their responsibilities under the law and at helping to deter illegal transshipment. Accordingly, Customs issued a series of publications and videos on new or revised Customs requirements, regulations, or procedures. CBP also has the responsibility to inform importers of their duty to act in accordance with its reasonable care standard. To that end, CBP provides guidance to help importers avoid doing business with a company that may be violating CBP laws. For example, CBP suggests that the U.S. importer ask its supplier questions regarding the origin of the textiles, the labeling, and the production documentation, among other things. CBP is currently investigating 40 importers for potential violations of the reasonable care standard. In a continuing effort to deter transshipment and meet its own responsibilities, CBP officials regularly meet with members of the trade industry to share information about the latest developments regarding textile transshipment. Despite increasing trade volumes and heightened national security priorities, CBP has maintained a focus on textile transshipment by consolidating its various textile enforcement activities and by using its expertise to target its review process at the most suspect shipments. The proportion of textile and apparel shipments that CBP actually reviews at the ports is low (less than 0.01 percent), and in 2002 about 24 percent of these reviews resulted in exclusions, 2 percent in penalties, and 1 percent in seizures.
CBP’s overall efforts at deterrence are aimed more at excluding problem shipments from U.S. commerce and emphasizing importer compliance responsibilities rather than at pursuing enforcement actions in the courts, due to the complexity and length of the investigative process and past experiences with ultimate imposition of minimal penalties. The low likelihood of review and minimal penalties limit the system’s deterrent effect and make high-quality intelligence and targeting essential to focusing limited resources on the highest risk overseas factories and shipments. Although textile import quotas on WTO members will be eliminated on January 1, 2005, with the expiration of the Agreement on Textiles and Clothing, the roles of the STC and the port import specialists will continue to be important, because incentives will continue to exist to illegally transship merchandise through countries benefiting from trade preferences and free trade agreements. In addition, quotas will remain on Vietnam until its WTO accession, and quotas may be placed into effect on certain imports from China under the safeguard provision of China’s WTO Accession Agreement. Because transshipment will remain a concern beyond this coming year, CBP will still face challenges in implementing its monitoring system. First, CBP has been slow to follow up on some of the findings from the TPVT factory visits, which are one of the key sources of information used in decisions on what textile shipments to review. CBP has not fully made the results of these trips known and acted quickly by entering all national criteria at an earlier stage rather than waiting until CBP approves the TPVT report. CBP has the authority to review any shipments presented for import. The result of waiting for TPVT report approval may mean that some suspect shipments are not reviewed or inspected at the ports. Second, CBP faces challenges in ensuring that additional import specialists are placed in Customs Attaché Offices overseas to assist with textile monitoring and enforcement activities. CBP would be able to further facilitate cooperation on textile issues, follow up on TPVT findings, and supplement the enforcement information it needs to trigger port textile reviews if it placed more import specialists in Customs Attaché Offices in high-risk countries. In addition, we found weaknesses in CBP’s current monitoring of in-bond cargo transiting the United States, and CBP has only in the last year begun to intensively address the issue of in-bond textile and apparel shipments being diverted into U.S. commerce. CBP’s current in-bond procedures may facilitate textile transshipment by allowing loosely controlled interstate movement of imported cargo upon which no quota or duty has been assessed. Internal control weaknesses have meant that CBP places an unacceptably high level of reliance on the integrity of bonded carriers and importers. Without an automated system and detailed and up-to-date information on in-bond shipments, CBP cannot properly track the movement of in-bond cargo. In addition, limited port targeting and inspections of in-bond shipments constitute a major vulnerability in monitoring possible textile transshipments and other areas of national security. CBP’s regulations regarding delivery time and shipment destination also hinder proper monitoring. Unless these concerns are addressed, proper revenue collection, compliance with trade agreements, and enforcement of national security measures cannot be ensured. 
While CBP has taken some preliminary steps, much remains to be done before the in-bond system has an acceptable level of internal controls. Moreover, CBP's system for assessing liquidated damages does not provide a strong deterrent against in-bond diversion. With bond amounts set considerably lower than the value of the merchandise and mitigation of liquidated damages down to a fraction of the shipment value, violators may see paying the bond as a cost of doing business and may not perceive it as a deterrent against the diversion of goods. CBP has the authority to review bond sufficiency and can change the bond amounts to provide an effective deterrent against the illegal diversion of goods. To improve information available for textile transshipment reviews at CBP ports and to encourage continued cooperation by foreign governments, we recommend that the Commissioner of U.S. Customs and Border Protection take the following two actions:

Improve TPVT follow-up by immediately entering all criteria resulting from overseas factory visits into ACS to trigger port reviews.

Assign import specialists to Customs Attaché Offices in high-risk textile transshipment countries to assist with textile monitoring and enforcement activities, including conducting follow-up to TPVTs.

To improve its monitoring of in-bond cargo and ensure compliance with U.S. laws and enforcement of national security, we also recommend that the Commissioner of U.S. Customs and Border Protection take the following four steps:

Place priority on timely implementation of a fully automated system, including more information to properly track the movement of in-bond cargo from the U.S. port of arrival to its final port of destination.

Increase port targeting and inspection of in-bond shipments.

Routinely investigate overdue shipments and, pending implementation of an improved automated system, require personnel at ports of entry to maintain accurate and up-to-date data on in-bond shipments.

Assess and revise as appropriate CBP regulations governing (1) the time intervals allowed for in-bond shipments to reach their final destinations, taking into consideration the distance between the port of arrival and the final port of destination, and (2) whether importers or carriers can change the destination port without notifying CBP.

Finally, to strengthen the deterrence value of in-bond enforcement provisions, we recommend that the Commissioner of U.S. Customs and Border Protection review the sufficiency of the amount of the bond for deterring illegal diversion of goods. The Department of Homeland Security provided written comments on a draft of this report; these comments are reproduced in appendix III. The department agreed with our recommendations and stated that it would take the appropriate steps needed to implement them. In its letter, the department listed its key planned corrective actions for each of our recommendations. In addition, we received technical comments from the Departments of Homeland Security and Commerce and the Office of the U.S. Trade Representative, which we incorporated in this report as appropriate. We are sending copies of this report to appropriate congressional committees; to the Secretaries of Homeland Security, Commerce, and State; and to the Office of the U.S. Trade Representative. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me on (202) 512-4128. Additional contacts and staff acknowledgments are listed in appendix IV.

In a legislative mandate in the Trade Act of 2002 (P.L. 107-210, Aug. 6, 2002), Congress directed GAO to review U.S. Customs and Border Protection's (CBP) system for monitoring and enforcing textile transshipment and to make recommendations for improvements, as needed, to the Chairman and the Ranking Minority Member of the Senate Committee on Finance and the Chairman and the Ranking Minority Member of the House Committee on Ways and Means. As discussed with committee representatives, we focused on answering the following questions: (1) how CBP identifies potential textile transshipment, (2) how well CBP's textile review process works to prevent illegal textile transshipment, (3) how effectively CBP monitors foreign textiles transiting the United States in its in-bond system before they enter U.S. commerce or are exported, and (4) what challenges CBP has experienced in using penalties and other means to deter illegal textile transshipment.

To examine how CBP identifies potential textile transshipment, we reviewed and analyzed internal planning documents and trade studies from the Office of Strategic Trade's Strategic Trade Center (STC) in New York City, which conducts analysis and targeting of textile transshipment. We also analyzed CBP foreign factory and cargo shipment reports and summaries from 2000 to 2003 from the STC; the Office of Field Operations' Textile Enforcement and Operations Division at CBP's headquarters; and some ports of entry. We collected and analyzed data from 2000 to 2003 on the targeting process from CBP's internal database and documents and reviewed how CBP collected the data. We examined the data for their reliability and appropriateness for our purposes and found them sufficiently reliable to represent CBP's targeting activity. In addition, we collected official U.S. international trade statistics from the Census Bureau for 1993 to 2002, textile and apparel production statistics from the Census Bureau (Annual Survey of Manufactures) for 1993 to 2001, and employment statistics from the Bureau of Labor Statistics (Current Employment Survey) for 1993 to 2002. We defined "textile and apparel goods for international trade" based on the definition in the annex to the World Trade Organization's (WTO) Agreement on Textiles and Clothing, as well as on additional goods not covered by the agreement but identified as textile and apparel goods by the Department of Commerce's Office of Textiles and Apparel on the department's Web site. We reviewed these statistics for their reliability and appropriateness for our purposes and found them sufficiently reliable to represent the trends and magnitude of trade, production, and employment in the textile and apparel sector. We also observed a targeting session at the STC in preparation for a foreign factory visit to El Salvador. In addition, we interviewed CBP officials in the Office of Strategic Trade's STC and Regulatory Audit Division, the Office of Field Operations, and in seven ports of entry (New York/Newark, New York; Los Angeles/Long Beach, California; Laredo, Texas; Columbus and Cleveland, Ohio; and Seattle and Blaine, Washington) about their targeting activities and roles. Together, these ports represent CBP service ports that processed 55 percent of textiles and apparel imported into the United States in 2002.
However, we recognize that activities among individual ports of entry within CBP service port areas may vary from those at the ports we visited. To gain additional perspectives on CBP's targeting operations, we interviewed officials of the Department of Commerce and the Office of the U.S. Trade Representative (USTR), as well as former Customs officials and private sector business associations.

To examine CBP's textile review process for preventing illegal textile transshipment, we reviewed internal planning documents, directives, and reports of the Office of Field Operations' Textile Enforcement and Operations Division, the Office of International Affairs, and the Office of Strategic Trade's STC and Regulatory Audit Division covering the years 1999 to 2003. We visited seven ports of entry and observed operations. To review CBP's foreign factory visits, we observed a Textile Production Verification Team (TPVT) visit in El Salvador. To report on CBP's overall textile review activity, we collected data on TPVT visits and port-level textile review activity from 1996 to 2003 from CBP's internal database and documents. We reviewed how CBP collected the data and examined the data for their reliability and appropriateness for our purposes. We found the data sufficiently reliable to represent CBP's foreign factory inspections and port-level activity. We interviewed CBP officials in the Office of Field Operations, the Office of International Affairs, the Office of Strategic Trade, and the seven ports of entry we visited. We also interviewed officials of the Department of Commerce, including the Committee for the Implementation of Textile Agreements (CITA) and the Office of Textiles and Apparel (OTEXA); USTR; and the Department of State; as well as former Customs officials and private sector business associations. In addition, we interviewed customs and trade officials in Hong Kong and Macao, as well as a Mexican embassy trade official in Washington, D.C., and Mexican port officials in Nuevo Laredo, Mexico. We communicated with Canadian officials through an exchange of written questions and answers.

To review how CBP uses its in-bond system to monitor foreign textiles transiting the United States before they enter U.S. commerce or are exported, we observed in-bond operations at six of the ports of entry we visited: Newark, New Jersey/New York, New York; Long Beach/Los Angeles, California; Cleveland and Columbus, Ohio; Laredo, Texas; and Blaine, Washington. We reviewed documents on CBP's in-bond operations from the Office of Field Operations' Cargo Verification Division, as well as documents on in-bond penalties from the Office of Field Operations' Fines, Penalties, and Forfeitures Branch. We conducted interviews on the in-bond system with CBP officials in the Cargo Verification Division; the Fines, Penalties, and Forfeitures Branch; and the Textile Enforcement and Operations Division at headquarters; at the ports of entry; and at Bureau of Immigration and Customs Enforcement (BICE) headquarters and field offices. In addition, we surveyed in-bond activities at 11 major U.S. area ports that process the highest levels of textile and apparel imports and at 2 smaller area ports that also process such imports. For each area port, we also requested that the survey be distributed to two additional subports that process textile and apparel imports. We asked ports to respond to the survey based on in-bond activities from October 2001 to May 2003.
We received responses from all 13 area ports and 29 subports we surveyed. We selected ports for our survey based on four criteria: (1) ports with the highest value of textile and apparel imports; (2) geographic distribution that included coastal, inland, northern border, and southern border ports; (3) ports with the highest value of textile and apparel imports by trade preference program (such as the African Growth and Opportunity Act and the Caribbean Basin Trade Partnership Act); and (4) ports of various sizes, allowing us to include smaller ports that also process textile and apparel imports. We found the data sufficiently reliable for reviewing how the in-bond system monitors foreign textiles transiting the United States. Not all ports were able to provide data for the entire time period requested; therefore, we could not use some ports' data for the missing periods. In addition, although we received a 100-percent response rate, the in-bond data we received from the 13 area ports and 29 subports are not representative of in-bond operations at all Customs ports. Copies of the survey are available from GAO.

To examine the challenges CBP has experienced in using penalties and other means to deter illegal textile transshipment, we reviewed internal planning documents, memorandums, and reports, dating from 1999 to 2003, from former Office of Investigations officials now in BICE, as well as from CBP's Offices of Chief Counsel; Field Operations (including the Textile Enforcement and Operations Division and the Fines, Penalties, and Forfeitures Division); Strategic Trade (including the STC and Regulatory Audit Division); and Regulations and Rulings. We also reviewed CBP's enforcement authorities in the relevant statutes and federal regulations, as well as informed compliance publications and other information on CBP's and BICE's Web sites. We collected data on CBP's enforcement and penalty actions for the years 2000 to 2002 from CBP's internal databases and documents. We reviewed how CBP collected the data and examined the data for their reliability and appropriateness for our purposes. We found the data sufficiently reliable to represent CBP's enforcement and penalty actions. We interviewed officials in BICE and in CBP's Offices of Chief Counsel; Field Operations (including the Textile Enforcement and Operations Division and the Fines, Penalties, and Forfeitures Division); Strategic Trade (including the STC and Regulatory Audit Division); and Regulations and Rulings, as well as officials at the seven ports of entry we visited and at the associated BICE field offices. We also interviewed officials of the Department of Commerce, including CITA and OTEXA, as well as former Customs officials and private sector business associations. We performed our work from September 2002 through December 2003 in accordance with generally accepted government auditing standards.

U.S. textile and apparel imports have grown considerably over the past decade and consist largely of apparel products. In 2002, China surpassed Mexico as the largest foreign supplier of textiles and apparel to the U.S. market, followed by Caribbean Basin countries that benefit from preferential access. New York and Los Angeles are the service ports that receive the largest share (by value) of textile and apparel imports, while Miami, Florida, and Laredo, Texas, are important service ports for imports from Latin America.
The United States is in the process of gradually phasing out textile and apparel quotas under a 1995 World Trade Organization (WTO) agreement, but a significant number of quotas are still to be eliminated at the end of the agreement's phase-out period on January 1, 2005. Elimination of these quotas is likely to affect trade patterns as more efficient producers acquire greater market share. However, tariffs and other potential barriers, such as antidumping and safeguard measures, will remain and could continue to affect trade patterns and create an incentive for illegal textile transshipment. Also, as quotas are removed, a more competitive market may place increasing pressure on the U.S. textile and apparel industry. Industry production and employment in the United States have generally been declining in recent years, with employment in the apparel sector contracting the most.

U.S. imports of textile and apparel products nearly doubled during the past decade (1993 to 2002), rising from about $43 billion to nearly $81 billion. Because overall imports also nearly doubled during the decade, textile and apparel products have maintained about a 7 percent share of total U.S. imports throughout this period. As figure 10 shows, the majority of U.S. textile and apparel imports are apparel products (about 73 percent in 2002). The remaining imports consist of yarn (10 percent), uncategorized textile and apparel products (9 percent), made-up and miscellaneous textile products (7 percent), and fabric (2 percent).

The major foreign suppliers of textiles and apparel to the U.S. market are China, Mexico, and the Caribbean Basin countries. However, as figure 11 shows, no major supplier had more than a 15 percent share of overall textile and apparel imports in 2002, and suppliers outside the top 10 still provided more than a third of imports. These smaller suppliers include African Growth and Opportunity Act (AGOA) countries, which supplied $1.1 billion (about 1.4 percent) of imports, and Andean Trade Promotion and Drug Eradication Act (ATPDEA) countries, which supplied $790 million (about 1 percent) of imports. Countries with free trade agreements (FTAs) with the United States accounted for 18.8 percent of total textile and apparel imports in 2002. This includes the North American Free Trade Agreement (NAFTA) countries, Mexico and Canada, which supplied 17.1 percent; the other FTA partners, Chile, Israel, Jordan, and Singapore, supplied the remaining 1.7 percent. In addition, the United States is negotiating FTAs with several other countries, which combined accounted for 15 percent of U.S. textile and apparel imports. The most important (in terms of imports) of these potential FTA partners are the countries in the Central American FTA negotiations (Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua) and the Dominican Republic, all of which are also part of the overall Free Trade Area of the Americas (FTAA) negotiations.

The service ports of New York and Los Angeles were the top two recipients of textile and apparel imports into the United States in 2002, together accounting for more than 40 percent of imports. Furthermore, the top 10 U.S. service ports accounted for about 77 percent of textile and apparel imports in 2002 (see fig. 12). Overall, Customs has 42 service ports, encompassing more than 300 individual ports of entry. For example, the New York service port encompasses the individual ports of JFK Airport; Newark, New Jersey; and New York City.
On the West Coast, Los Angeles receives a large portion of its imports from Asian suppliers such as China and Hong Kong, while in the South, Miami and Laredo receive a large portion of their imports from Caribbean countries. Inland ports, such as Columbus, Ohio, receive imports shipped across the country by truck or rail from other ports or flown directly into the airports in their districts.

Under the WTO's 1995 Agreement on Textiles and Clothing (ATC), the United States and other WTO members agreed to gradually eliminate quota barriers to textile and apparel trade during a 10-year transition period ending by January 1, 2005. By 1995, the United States, the European Union, Canada, and Norway were the only WTO members maintaining quotas on textiles and apparel. Each agreed, however, to remove a share of its quotas by January 1 in 1995, 1998, 2002, and 2005. Based on 2002 Department of Commerce import statistics and our analysis, the United States still maintains quotas on products that account for about 61 percent of its textile and apparel imports by value. Not all of these imports, however, are subject to quotas, because not all U.S. trade partners are subject to quotas on these products. For instance, U.S. textile and apparel categories 338 and 339 (men's and women's cotton knit shirts and blouses) account for over 12 percent of U.S. imports of textile and apparel products, and categories 347 and 348 (men's and women's cotton trousers and shorts) account for about another 13 percent. Although several countries face U.S. quotas in each of these categories, not all countries are restricted. Therefore, quotas limit only a portion of the 25 percent of imports accounted for by products in these categories. Customs, though, is concerned with the trade flows for all products under quota, regardless of the country in which they originate, because the country of origin may be misrepresented.

Under the ATC, the United States agreed to remove by 2005 the textile and apparel quotas it maintains against other WTO members. These quotas have created significant barriers to imports of certain types of textile and apparel products from quota-restricted countries. For example, in 2002, the U.S. International Trade Commission estimated that quota barriers amounted to a tax of approximately 21.4 percent on apparel imports and 3.3 percent on textile imports. However, these estimates were calculated across all textile and apparel products and countries, so actual barriers may be significantly higher for certain highly restricted products. Upon removal of these quotas, trade patterns are likely to change, with more efficient foreign suppliers that were formerly restricted under the quotas capturing a larger share of the U.S. market. FTAs, though, will still provide preferential access to products from FTA partners that meet rules of origin requirements. FTAs generally provide tariff-free access, while 2002 tariff rates on more restricted textile and apparel products ranged from 15 to 33 percent. The United States also provides similar preferential access unilaterally to countries in the Caribbean Basin, sub-Saharan Africa, and the Andean region under the CBTPA, AGOA, and ATPDEA preferential programs. Officials and experts we spoke with said they believed these tariff differentials to be a significant incentive for continued illegal textile transshipment because they act as a tax on textile and apparel products from non-FTA partners.
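The size of this incentive is straightforward to illustrate. The sketch below (in Python) estimates the charges an importer avoids by falsely claiming FTA origin for a shipment; the $500,000 shipment value and the 20 percent duty rate are hypothetical assumptions chosen within the 15 to 33 percent tariff range cited above, and the 21.4 percent quota tax-equivalent is the USITC estimate for apparel quotas, not a figure specific to any actual shipment.

```python
# Illustrative sketch of the transshipment incentive described above.
# The tariff range (15-33 percent) and the quota tax-equivalent
# (21.4 percent on apparel) come from the report; the shipment value
# and the specific duty rate below are hypothetical assumptions.

def duty_savings(shipment_value, non_fta_tariff, quota_tax_equiv=0.0):
    """Duty and quota cost avoided by falsely claiming FTA origin.

    FTA-qualifying goods generally enter tariff-free, so the amount
    avoided equals the tariff plus any quota tax-equivalent that
    would apply to the shipment's true origin.
    """
    return shipment_value * (non_fta_tariff + quota_tax_equiv)

value = 500_000   # hypothetical apparel shipment value (USD)
tariff = 0.20     # assumed rate within the 15-33 percent range cited
quota_tax = 0.214 # USITC 2002 tax-equivalent estimate for apparel quotas

print(f"Tariff avoided:          ${duty_savings(value, tariff):,.0f}")
print(f"Tariff plus quota taxes: ${duty_savings(value, tariff, quota_tax):,.0f}")
# Tariff avoided:          $100,000
# Tariff plus quota taxes: $207,000
```

Even under these modest assumptions, a single mislabeled shipment can avoid charges on the order of one-fifth to two-fifths of its value, which is consistent with the officials' view that tariff differentials act as a continuing incentive for illegal transshipment.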
Also, under WTO rules, the United States may impose antidumping or countervailing duties on imports from certain countries if it can be shown that these products have either been "dumped" in the U.S. market or subsidized. Furthermore, under China's accession agreement with the WTO, members may impose a special safeguard mechanism on imports from China if those imports are shown to cause market disruption. In fact, in December 2003, the United States imposed this mechanism on imports from China of certain types of knit fabrics, dressing gowns and robes, and brassieres.

U.S. textile and apparel employment has declined over the past decade (1993 through 2002), while production declined from 1995 through 2001 (the latest year for which production data were available). Apparel production (and, to a lesser extent, textile production) in the United States tends to be relatively labor intensive. Consequently, the U.S. industry has faced strong competition from developing countries, such as China and India, where labor rates are significantly lower than in the United States. Employment in the U.S. apparel sector is higher overall than in the textile sector; however, employment declines in the U.S. textile and apparel industry have been due primarily to declines in the apparel sector. As figure 13 shows, employment in the overall textile and apparel industry fell from about 1,570,000 jobs in 1993 to about 850,000 jobs in 2002. The majority of this decline was due to the fall in apparel employment from more than 880,000 workers in 1993 to about 360,000 workers in 2002. However, employment in the other sectors of the industry, textile mills (yarns, threads, and fabrics) and textile product mills (carpets, curtains, bedspreads, and other textile products besides apparel), also declined. Regarding U.S. production (as measured by shipments) in the textile and apparel sectors, figure 14 shows that overall textile and apparel production declined between 1997 and 2001. During that period, the value of U.S. shipments of textile and apparel products (either to the U.S. market or overseas) fell from nearly $158 billion to about $132 billion. This decline was due to contraction in the apparel and textile mills sectors; the textile product mills sector remained relatively stable during the same period.

In addition to those individuals named above, Margaret McDavid, Michelle Sager, Josie Sigl, Tim Wedding, Stan Kostyla, Ernie Jackson, and Rona Mendelsohn made key contributions to this report.
U.S. policymakers and industry groups are concerned that some foreign textile and apparel imports are entering the United States fraudulently and displacing U.S. textile and apparel industry workers. Congress mandated that GAO assess U.S. Customs and Border Protection's (CBP) system for monitoring and enforcing textile transshipment and make recommendations for improvements, as needed. GAO therefore reviewed (1) how CBP identifies potential illegal textile transshipment, (2) how well CBP's textile review process works to prevent illegal textile transshipment, and (3) how effectively CBP uses its in-bond system to monitor foreign textiles transiting the United States.

To identify potential illegal textile transshipments, CBP uses a targeting process that relies on analyzing available trade data to focus limited inspection and enforcement resources on the highest-risk activity. In 2002, CBP targeted about 2,500 textile shipments out of more than 3 million processed, or less than 0.1 percent. Given resource constraints at CBP ports, CBP's textile review process for preventing illegal textile transshipment increasingly depends on information from the foreign factory visits that CBP conducts based on the targeting results. However, CBP's foreign factory visit reports are not always finalized and provided to ports, other agencies, or foreign governments for timely follow-up. Further, after the global textile quotas end in 2005, CBP will lose its authority to conduct foreign factory visits in former quota countries. Customs Attaché Offices overseas and cooperative efforts by foreign governments can supplement the information provided to the ports.

Under CBP's in-bond system, foreign textiles and apparel can travel through the United States before formally entering U.S. commerce or being exported to a foreign country. However, weak internal controls in this system enable cargo to be illegally diverted from its supposed destination, thus circumventing quota restrictions and payment of duties. Moreover, CBP's penalties do little to deter in-bond diversion: bond amounts can be set considerably lower than the value of the cargo, and violators may not view the low payments as a deterrent against diverting their cargo.
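The targeting rate cited above can be checked directly. A minimal sketch, treating the "more than 3 million" shipments figure as a floor:

```python
# Arithmetic behind the targeting rate cited above: about 2,500
# shipments targeted out of more than 3 million processed in 2002.
targeted = 2_500
processed = 3_000_000  # "more than 3 million"; used here as the floor

rate = targeted / processed
print(f"Targeting rate: {rate:.4%}")  # Targeting rate: 0.0833%
# That is, less than 0.1 percent of shipments were targeted; a larger
# denominator would make the rate smaller still.
```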
The importance of airborne trade to the U.S. economy has steadily increased over the last 20 years, and the international movement of goods by air is critical to many U.S. export industries. The international aviation market is, however, heavily regulated by bilateral agreements between countries, which often limit airlines' traffic rights: the routes they can fly and the frequency with which they can fly those routes. The Departments of Transportation (DOT) and State have traditionally negotiated these agreements as part of a comprehensive exchange covering both passenger and air cargo services. However, air cargo services have characteristics and needs that differ significantly from those of passenger services, most prominently the need to move and store cargo on the ground. When these needs are not met, the competitiveness of these services is compromised.

International air cargo services play a vital role in facilitating U.S. trade. As shown in figure 1.1, since 1975 the airborne share of the value of U.S. exports has more than doubled, and the airborne share of imports has almost tripled. In 1995, the value of U.S. airborne trade reached $355 billion, accounting for 31 percent of U.S. exports and 23 percent of imports, or 27 percent of all U.S. trade. U.S. airlines generated about $3.9 billion in revenues from international freight operations that year, according to DOT's data.

The development of global systems for producing and distributing goods and an attendant increase in the use of "just-in-time" inventory systems, which reduce the need to warehouse spare parts and finished products, have contributed, in part, to the growth of international air cargo services. Some analysts consider the efficiency of such supply chains to be an increasingly important competitive advantage in numerous industries. International air transport is critical to shippers who need speed and reliability. This means of transport is particularly appropriate for moving goods that (1) have high value-to-weight ratios, (2) are fragile, (3) are physically or economically perishable, and/or (4) are subject to unpredictable demand patterns. Almost 70 percent of the exports of U.S. computers and office equipment and over half of the exports of U.S. communications equipment moved by air in 1994.

From 1990 to 1995, airfreight traffic between the United States and foreign countries grew by 50 percent, and air trade to and from Latin America almost doubled. U.S. airfreight traffic accounted for approximately 38 percent of the world's estimated total airfreight traffic in 1994, the last year for which data are available. Europe and the Asia/Pacific region are the largest air trade markets for the United States, accounting for about 70 percent of the country's air trade by weight in 1995. Furthermore, according to the Boeing Commercial Airplane Group's forecast for airfreight traffic, international markets offer the greatest opportunities for U.S. airlines to expand their freight operations; the rate of growth in almost all international airfreight markets is forecast to exceed that of the U.S. domestic market.
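The export, import, and overall shares above fit together arithmetically. A short sketch, under the assumption (ours, not the report's) that the 27 percent overall figure is the value-weighted average of the export and import shares, solves for the implied weight of exports in total U.S. trade:

```python
# The 27 percent overall airborne share should be a value-weighted
# average of the 31 percent export share and the 23 percent import
# share. Solving for the implied export weight w, where
#   overall = w * export_share + (1 - w) * import_share.
# All three inputs are rounded figures from the report.
export_share, import_share, overall = 0.31, 0.23, 0.27

w = (overall - import_share) / (export_share - import_share)
print(f"Implied export weight: {w:.0%}")  # Implied export weight: 50%
# Consistent with U.S. exports and imports being roughly equal in
# value in 1995 (rounding in the published shares aside).
```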
The international air cargo industry comprises three types of carriers: (1) integrated all-cargo carriers, such as Federal Express, that operate cargo-only aircraft and primarily offer express door-to-door delivery of shipments; (2) scheduled and charter all-cargo carriers that operate cargo-only aircraft and primarily offer airport-to-airport service; and (3) passenger/cargo carriers that carry cargo on board passenger aircraft but also may operate cargo-only aircraft, and primarily offer airport-to-airport delivery. Air cargo services have significantly different operating requirements from passenger services. First, unlike most passengers, air cargo moves in one direction only. This frequently results in directional imbalances in the flow of cargo traffic. To operate economically, a cargo carrier must have the flexibility to alter routings to take advantage of changes in traffic flows. Because most cargo is inanimate, it is also less sensitive than passengers to the number of stops made en route, to the directness of routing, or to changes in aircraft. Nevertheless, speed is usually critical to competitive air cargo services. According to DOT, rights to serve destinations without restrictions, along with the ability to route services flexibly, are even more important for efficiency in cargo operations than in passenger operations. Finally, the movement and storage of air cargo on the ground are vital for cargo services. For express carriers offering door-to-door service, the ability to operate pickup and delivery service—that is, to have intermodal rights—is essential for competitiveness. All-cargo carriers hauled almost 60 percent of the international freight carried by U.S. airlines—over 1.3 million tons in 1994. As shown in table 1.1, services by U.S. all-cargo airlines are particularly important in Latin America and the Asia/Pacific region, where they carried over 70 percent of the freight transported by U.S. airlines in 1994. In 1994, U.S. airlines flew more scheduled international freight ton-miles—about 16 percent of the world total—than the airlines of any other country. Nonetheless, U.S. carriers have not competed as successfully in international freight markets as they have in international passenger markets. From 1990 through April 1995, U.S. airlines achieved a 40.7-percent share of the U.S. international freight market, on average. By comparison, U.S. passenger/cargo airlines averaged a 53.3-percent share of the U.S. international passenger market during the same period. Notably, according to DOT’s data for 1994, airlines from foreign countries other than those where the freight originated or was destined—so-called third-country carriers—obtained a 21-percent share of the traffic in the 20 leading U.S. international freight markets. Most international airfreight is carried by major foreign passenger/cargo airlines. In contrast to the U.S. domestic market, where integrated all-cargo carriers carry about 60 percent of the freight traffic, the majority of the world’s scheduled freight traffic is carried by passenger/cargo airlines—almost 60 percent in 1994, according to the Air Cargo Management Group, an air cargo consulting firm. The comparatively small U.S. share of international freight traffic is due, in part, to the greater emphasis foreign passenger/cargo airlines have traditionally placed on freight operations compared with U.S. passenger airlines. U.S. 
passenger/cargo airlines have historically viewed cargo services as a by-product of their passenger services, and until this year all but one of these airlines had ceased operating cargo-only aircraft. Northwest Airlines was the only major U.S. airline operating such aircraft in 1995, though both United Airlines and Continental Airlines' subsidiary, Continental Micronesia, recently announced plans to begin all-cargo services in the Asia/Pacific region. By contrast, many major foreign passenger/cargo airlines, such as KLM Royal Dutch Airlines, Air France, and Lufthansa, operate all-cargo aircraft or so-called "combi" aircraft, on which cargo is carried in the main compartment of the passenger aircraft in addition to the bellyholds. Appendix I contains additional information on the status of the international airfreight industry.

Under a framework established by the Chicago Convention in 1944, international aviation is largely governed by bilateral agreements: two countries negotiate the air transport services between them and award airlines traffic rights. In general, traffic rights determine (1) which routes can be served between the countries and between them and third countries; (2) what services airlines can provide (e.g., scheduled or charter); (3) how many airlines from each country can fly the routes; and, in some cases, (4) how frequently flights can be offered. For the United States, the responsibility for developing international aviation policy and negotiating bilateral agreements resides with DOT and the State Department. Traditionally, these agencies have negotiated bilateral agreements as part of a comprehensive exchange of rights covering both passenger and cargo services. In 1989, DOT issued a statement of U.S. air cargo policy that established specific negotiating objectives designed to ensure the least restrictive operating environment for U.S. air cargo services. The 1989 statement reiterated DOT's traditional policy of conducting comprehensive negotiations as the best means to accommodate the international interests of all-cargo airlines. DOT's 1995 international aviation policy added the agency's willingness to consider negotiating bilateral agreements that cover only cargo services. The State Department also helps develop aviation policy and is responsible for chairing negotiations with foreign governments and coordinating DOT's actions with overall U.S. foreign policy.

Under 49 U.S.C., section 41310, the Secretaries of State and Transportation, as well as the heads of other agencies, are required to take all appropriate action to eliminate any discrimination or unfair competitive practices faced by U.S. airlines overseas. U.S. carriers can file formal complaints with DOT about such practices. DOT takes the lead in formulating policies and countermeasures to resolve such problems: regulatory obstacles, administrative inefficiencies, or restrictive practices that inhibit airlines from fully exercising the rights available to them under bilateral aviation agreements or that reduce the competitiveness of their services.

Concerned about the international interests of U.S. all-cargo airlines, the Chairman of the Senate Committee on Commerce, Science, and Transportation and the Chairman and Ranking Minority Member of its Subcommittee on Aviation asked us to address the following questions: What are the problems that all-cargo airlines face in doing business abroad, and what actions have the affected airlines and the U.S. government taken to resolve these problems?
To what extent has the U.S. government addressed air cargo issues in policymaking and during bilateral aviation negotiations, and what are the possibilities for separating negotiations of air cargo services from broader negotiations that include passenger services?

To identify the problems that U.S. all-cargo airlines face when operating abroad, we designed a questionnaire asking the airlines to catalog any such problems and assess their impact. The questionnaire was pretested with representatives of five all-cargo airlines. We then surveyed the 26 U.S. air carriers that, as of September 1995, operated cargo-only aircraft and were authorized by DOT to offer scheduled or charter international all-cargo services. We did not attempt to verify the existence of the problems reported or their impact. As agreed with the requesters' offices, we pledged that the airlines' responses would be kept confidential. We received responses from 22 of the airlines, for a response rate of about 85 percent. The 22 airlines included 3 major airlines, 9 national airlines, and 9 regional airlines. Together, the respondents accounted for about 60 percent of the freight carried by U.S. airlines in 1994. A copy of the questionnaire can be found in appendix IV.

To examine the actions taken by U.S. all-cargo airlines and the U.S. government to resolve the airlines' problems abroad, the questionnaire asked respondents to describe their efforts to settle the problems and to evaluate any assistance they requested from DOT and the State Department. We also interviewed officials from DOT's Office of International Aviation and the State Department's offices of Aviation Programs and Policy and Aviation Negotiations.

To describe the disposition of cargo issues during policymaking and bilateral aviation negotiations, we reviewed relevant documents from DOT and the State Department, including DOT's May 1989 statement of air cargo policy and April 1995 statement of international aviation policy, and spoke with DOT and State Department officials. We also reviewed applicable laws and U.S. aviation agreements concluded between January 1989 and March 1996. In addition, we reviewed the detailed notes of aviation negotiations recorded by representatives of the Air Transport Association (ATA) who were present at the discussions, and we interviewed DOT and State Department officials about aviation policymaking and bilateral negotiations. Our questionnaire asked survey respondents to evaluate the performance of these agencies in meeting their needs. Finally, we interviewed representatives of individual U.S. all-cargo and passenger/cargo airlines, the Air Freight Association (AFA), ATA, and the National Air Carrier Association (NACA).

To examine the possibilities for negotiating air cargo services separately from broader negotiations that include passenger services, our questionnaire asked respondents for their views. For this issue, we also interviewed officials representing the U.S. government, U.S. airlines, foreign governments, the European Union, and aviation trade associations. We provided copies of a draft of this report to the Departments of Transportation and State for their review and comment. Our work was conducted from August 1995 through September 1996 in accordance with generally accepted government auditing standards.

U.S. all-cargo airlines reported that they encounter many of the same types of problems in doing business at overseas airports that we identified in a prior study.
The most significant problems, such as delays in clearing cargo through customs, are related to the regulation of aviation and international trade by foreign government agencies. The vast majority of these problems, which make U.S. carriers less effective competitors in the international marketplace, occur at airports located in Latin America and the Asia/Pacific region. The U.S. all-cargo carriers noted that they often accept these problems as a cost of operating at the airports involved or attempt to resolve them without the U.S. government’s assistance in an effort to preserve good relations with the host country. Foreign airlines also face problems in doing business in the United States. However, foreign airlines have reported fewer problems doing business here than U.S. airlines have reported having abroad. In cases in which DOT’s or the State Department’s assistance was requested, most all-cargo airlines indicated that they were generally satisfied with the agencies’ efforts. Nevertheless, some all-cargo airlines indicated that they were not aware of DOT’s or the State Department’s ability to provide assistance. Finally, DOT’s gathering of information on doing-business problems has not been comprehensive because the agency has not notified all all-cargo airlines of its efforts. As we earlier found with major U.S. passenger/cargo airlines, U.S. all-cargo airlines report a variety of obstacles in doing business abroad that raise their costs and impair their operating efficiency. The 22 airlines that responded to our survey of U.S. international all-cargo carriers reported experiencing such problems at 107 foreign airports. The respondents indicated that these problems significantly affected their operations at 81 of these airports, many of which are located in 9 of the top 10 U.S. international airfreight markets for 1994. These problems include (1) regulation by foreign governments, such as delays in clearing cargo through customs; (2) restrictive policies and inadequate services at foreign airports; (3) restrictions on ground-handling operations, such as limitations on loading and unloading cargo; and (4) limitations on how airlines can market their services in local markets. These problems affect airlines of all sizes providing both scheduled and charter services, although they may have a greater economic impact on small airlines. According to DOT officials, however, many of the problems cited by the survey respondents did not reflect discrimination against U.S. airlines but affected all airlines operating at the airport. Appendix II summarizes the 22 U.S. all-cargo airlines’ reports of significant problems in doing business and the number of airports at which they occur. The problems cited most often by airlines involve regulation by foreign aviation authorities and regulation by government agencies that have no direct jurisdiction over aviation but do have rules affecting all-cargo airlines’ operations. These regulatory impediments, cited by 13 airlines at 50 of the 81 airports at which airlines reported significant problems, were identified as occurring more frequently in Latin America and the Asia/Pacific region than in other regions. Problems involving regulation by aviation authorities include burdensome administrative requirements and delays in obtaining flight permits. Problems stemming from the actions of agencies with no direct jurisdiction over aviation include delays in clearing cargo through customs and restrictions on the ability of U.S. 
airlines to operate trucks for pickup and delivery services. Burdensome legal and administrative requirements were deemed a significant problem by six airlines, which contend that the requirements limit their flexibility to serve customers and raise their operating costs at 25 foreign airports, costs they must then pass on to their customers. For example, one airline complained that the aviation authorities of one Latin American country required it to purchase liability insurance from one of the country's national insurance companies for an aircraft operating on routes to that country, even though the aircraft was already insured by a U.S. company. Likewise, two of these airlines maintain that foreign governments in Latin America and the Asia/Pacific region require excessive documentation from carriers before allowing them to inaugurate service at their airports, imposing a burden in terms of both personnel costs and management oversight. In addition, the airlines report, these requirements can be applied in a discriminatory manner by foreign government agencies to reduce the competitiveness of the U.S. airlines' services.

According to 10 of the airlines, foreign governments also frequently limit access to their markets by refusing to grant U.S. airlines the authority to operate on routes authorized by bilateral agreements (cited as affecting operations at 17 airports) and by delaying the issuance of permits to overfly their territory or serve their airports (cited as affecting operations at 15 airports). These problems were cited at airports in 5 of the 10 largest U.S. international airfreight markets in 1994. U.S. airlines contend that foreign governments take such actions to protect their national airlines from competition from U.S. carriers. According to some all-cargo airlines, difficulty in obtaining flight permits, although also a problem for scheduled airlines, is particularly troublesome for all-cargo airlines offering charter services because they often must operate flights on short notice to meet the needs of their customers. According to the charter airlines we surveyed, some countries in Latin America require notice of proposed charters far in advance of when the airlines typically receive requests for flights. The airlines said that if they cannot obtain the appropriate authorization in sufficient time before a proposed flight, they frequently lose the business to competing, often local, airlines, thereby losing revenues and dissatisfying customers.

Seven all-cargo carriers also report that curfews banning operations during night hours at 10 key airports in Latin America, Europe, Canada, and the Asia/Pacific region limit their ability to provide their customers with adequate levels of service. According to two of these airlines, curfews disproportionately affect all-cargo carriers because these airlines typically operate at night in order to meet delivery deadlines. Prohibitions against night operations, they say, reduce all-cargo airlines' flexibility to schedule their flights. These curfews affect all the airlines operating at the airports, including the national carriers of the host countries. DOT officials noted, however, that U.S. and foreign airlines complain about similar curfews at airports in the United States.
Eight airlines reported that actions by government agencies with no direct jurisdiction over aviation adversely affected their operations to a significant extent at 22 airports, mostly in Latin America and the Asia/Pacific region. Most of these agencies are responsible for regulating trucking or administering international trade. Chief among the problems cited are restrictions on U.S. carriers' ability to operate trucks for pickup and delivery services and delays in clearing cargo through customs. According to three U.S. all-cargo airlines, several countries require that locally owned companies pick up and deliver or transport freight shipments. Such restrictions, according to the airlines, limit their ability to provide time-sensitive delivery of packages, or to deliver packages at all, at 12 foreign airports. For example, one airline reported that one Latin American government prohibits foreign companies from operating trucks with a capacity of more than 4-1/2 tons, reserving that sector of the market for its nationals. Because the airline cannot use larger vehicles to transport shipments, its deliveries of time-sensitive shipments are slowed and its operating costs increase. Five airlines also reported difficulties and delays in clearing customs at 10 airports. For example, one airline attributed the slow handling of time-sensitive shipments and excessive costs to the airports' having too few customs inspectors and cumbersome clearance processes. Such delays frustrate one of the primary purposes of air cargo transportation: speedy delivery. DOT officials noted that problems in clearing customs tend to be nondiscriminatory and also affect local airlines. According to U.S. airlines, the cumulative effect of such problems is to reduce their operating efficiency and make their services less competitive with those of foreign airlines. In November 1994, we reported that many of the problems deriving from regulation by foreign government agencies with no direct jurisdiction over aviation often arise from a country's overall trade policies.

Fourteen airlines reported problems linked to airports' policies and services, such as discriminatory or excessive landing fees, discriminatory payment terms for airports' services, and discriminatory or excessive fuel prices. For example, two airlines reported paying landing fees they considered excessive or discriminatory at the airports of one Latin American country. One airline complained that it must pay about $3,000 for landing services at these airports, while fees for equivalent services for the same type of aircraft at airports in nearby countries range from $750 to $1,500. The other airline contends that these high fees are also discriminatory because that country's national carriers pay about $2,000 less than U.S. and other foreign airlines pay. Both airlines stated that the high fees impose a financial burden on their operations and render their services less competitive than the national airlines'. Survey respondents alleged similar problems at a total of 48 foreign airports, mostly in Latin America and the Asia/Pacific region.

Thirteen U.S. airlines responding to our survey reported problems with ground-handling at 31 foreign airports, most of which are located in Latin America and the Asia/Pacific region. Ground-handling is a significant element of operations, affecting airlines' costs and their ability to compete effectively and serve customers. U.S.
airline representatives stated that such restrictions raise operating costs, lower the quality of airlines' services, and reduce efficiency. Problems with cargo-handling include restrictions on airlines' ability to load or unload cargo themselves, discriminatory or excessive cargo-handling fees at airports where airlines are prohibited from performing this task themselves, and inadequate warehouse facilities. U.S. carriers particularly object to being forced to use monopoly handling agents, frequently the local carrier against whom they compete, because they contend that such agents provide less efficient, reliable, and responsive service than the carriers could provide themselves. For example, one airline complained that the government-owned monopoly ground-handling agent at the airports of one Asian country it served gives priority service to national aircraft at all times and that the workers providing the services do not work as efficiently for foreign airlines as for national airlines. Cargo carriers want the freedom to perform their own ground-handling services or to contract for them among several competing agents. Inadequate warehouse facilities at foreign airports also pose problems, according to two U.S. all-cargo airlines. One U.S. airline reported that the government's warehouses at a Latin American airport are very disorganized because they lack space, equipment, and trained personnel: stored cargo is not separated by airline, no areas are designated for dangerous goods or for live animals, and proper weighing equipment is lacking. Because of these problems, the airline reported, it had to pay numerous claims for lost and damaged cargo, and various departures were delayed. This airline further stated that all foreign airlines are affected by this problem.

Restrictions on how all-cargo airlines can market their services and distribute their freight within local markets also affect the airlines' ability to operate efficiently. Four U.S. cargo airlines characterized such problems at 13 airports as significantly affecting their operations. These problems include restrictions on local advertising, on the number of sales offices, and on the number and type of personnel the airlines can employ. For example, one airline complained that it could not obtain adequate office space at the airports it serves in one Latin American country. According to this airline, the airports lack infrastructure, so the airport authorities lease only a very limited amount of space to the airlines, on a "first-come, first-served" basis. As a result, the airline reported, it cannot establish adequate sales offices at the airports and is impeded in its ability to solicit business. In an Asian country, a U.S. airline reported that the government required it to use government-owned forwarders to distribute its freight at the two airports it served. According to the airline, both forwarders provided poor service, charged high fees, and required the airline to pay a commission of 5 percent on its revenue at both airports. This created a financial burden for the airline, which eventually sold its authority to serve that country.

Foreign airlines also complain of problems in doing business in the United States. The most common problems cited by foreign airlines in our November 1994 report were excessive costs and inadequate facilities and services at U.S. airports.
Officials from one foreign airline noted that foreign carriers are subject to a number of U.S. local sales and income taxes, while U.S. airlines are exempt from such taxes in several foreign countries. Two foreign airlines that we spoke with believe that the U.S. Customs Service lacks the personnel to expeditiously process cargo at Miami International Airport, the primary U.S. gateway for trade with Latin America. These airlines also complained about inadequate security at the Miami airport's warehouses. However, foreign airlines have reported experiencing fewer problems in the United States than U.S. airlines have reported experiencing overseas.

Like U.S. passenger/cargo airlines, U.S. all-cargo airlines that have problems doing business abroad can request assistance from both DOT and the State Department to resolve them. However, 18 of the 22 all-cargo carriers responding to our survey explained that they generally have not requested the U.S. government's assistance; rather, they have attempted to develop their own solutions to the problems or have treated any additional expense caused by the problems as a cost of providing service at those locations. Most of the 10 airlines that did request assistance from either DOT or the State Department indicated that they were generally satisfied with the aid they received. Some airlines reported that they were unaware of the assistance that DOT and the State Department could offer but would like guidance on how to request such assistance in the future. Recently, DOT established a database to monitor the problems U.S. airlines experience in doing business abroad. However, because DOT relied on two industry associations to notify their members of its efforts, many carriers that were not members of these associations were unaware of the initiative and have therefore provided no information.

U.S. all-cargo airlines reported that they were more likely to try to resolve their doing-business problems themselves, or to take no action, than to ask the U.S. government to intervene. The U.S. all-cargo airlines that have attempted to resolve their problems themselves have been only slightly successful, resolving 20 of 117 such cases. Moreover, the settlements achieved were not always optimal from the airlines' viewpoint. For example, one airline operating at a European airport negotiated a reduction in some landing fees that it considered excessive, but other landing fees at the same airport remain high. Other attempts to resolve problems have been unsuccessful. One airline reported that after trying unsuccessfully to resolve its problems in obtaining flight permits, clearing cargo through customs, and complying with burdensome legal requirements at a Latin American airport, it decided to stop operating at that airport. Another airline, which was unable to resolve significant operating and marketing problems at two airports in an Asian country, sold its rights to fly to those airports.

Some U.S. all-cargo airlines have not requested DOT's or the State Department's intervention because (1) they view the U.S. government's role as limited to matters involving violations of bilateral agreements; (2) they believe requesting the U.S. government's intervention would be too costly or time-consuming; or (3) they have been unaware that the assistance is available. Like several U.S. passenger/cargo airlines, many U.S. all-cargo airlines do not believe it is practical for airlines to rely on the U.S.
government to resolve the daily difficulties of operating in foreign countries. In addition, according to DOT and State Department officials, many U.S. airlines do not seek the U.S. government's assistance because they believe such government involvement might harm relations with the host country. Some airlines do not request the U.S. government's assistance because they usually view the problems as local or unique to the airports in question and prefer not to involve DOT or the State Department in matters that do not involve a breach of obligations under a bilateral agreement. One airline explained that it generally attempts to work with local airport officials, the International Air Transport Association (IATA), the International Civil Aviation Organization (ICAO), and other carriers to remove many of these impediments to doing business. This carrier believes that the bilateral process is not an appropriate forum for resolving many of the problems specific to all-cargo airlines' operations because the process is structured to address the needs of passenger services.

Like U.S. passenger/cargo airlines, many all-cargo airlines do not view the formal process for filing complaints about operating problems as a cost-effective way to resolve them; they consider the formal process of requesting the U.S. government's intervention too costly or time-consuming. This view is especially common among the small and mid-size airlines that have limited resources to devote to filing complaints under 49 U.S.C., section 41310, the statute under which airlines file formal complaints with DOT about their problems in doing business abroad. Of the 28 complaints filed under the statute since 1989, only 6 were filed by all-cargo carriers. According to one airline, requesting DOT's assistance is also costly because the agency asks the airline to collect and present all the necessary evidence concerning a problem before the agency will attempt to address it. DOT officials responded that they must have reasonable assurance of a problem's validity, as well as detailed facts, before intervening with a foreign government on a formal basis. DOT officials told us that although the number of formal complaints is small, DOT spends a great deal of time attempting to resolve complaints informally.

Some airline officials were also unaware of the processes for requesting DOT's or the State Department's assistance in solving problems in doing business abroad. Officials of three airlines (one small charter, one large regional, and one national airline) stated that they were unfamiliar with how to request the U.S. government's aid but would appreciate any information on how to do so; officials at two of these airlines were unaware that such assistance was even available. Neither DOT nor the State Department systematically provides the airlines with information on the available assistance or guidance on the procedures for obtaining it.

Finally, DOT and State Department officials, including DOT's Assistant Director for Negotiations and the State Department's Director of the Office of Aviation Negotiations, believe that many U.S. airlines are reluctant to request their aid in resolving problems because the airlines think the U.S. government's involvement will be perceived by the host country as confrontational. According to DOT officials, most U.S.
airlines prefer using cooperative methods to resolve problems out of fear that a foreign government will retaliate or out of a desire to preserve good relations with the host country.

Recently, in response to a recommendation we made in our 1994 report, DOT began to collect information on the status, nature, and severity of U.S. airlines’ problems in doing business abroad and established a consolidated database on such problems to ensure that they are prioritized and given attention. However, DOT did not notify all U.S. all-cargo airlines of the system. Instead, DOT worked through the Air Transport Association (ATA) and the National Air Carrier Association (NACA) to notify their members of the database and to request information on current doing-business problems. Only 9 of the 22 air cargo carriers that responded to our survey, however, are members of either association. As a result, the airlines that are not members—mostly regional airlines—were unaware of DOT’s efforts and have provided no information. Consequently, DOT’s information gathering and monitoring of doing-business problems have not been as comprehensive as they could have been.

For those problems for which all-cargo airlines requested the U.S. government’s assistance, DOT and the State Department had some success, according to survey respondents. The 10 all-cargo airlines that reported turning to the U.S. government for help told us of 14 cases in which the government completely or partially resolved the doing-business problem in question. However, the airlines also reported 32 cases in which the situation remained unchanged after the U.S. government intervened. Nonetheless, 7 of the 10 airlines were generally satisfied with the assistance they received from DOT or the State Department, even if the assistance provided did not resolve the problem.

As we reported in November 1994, DOT and the State Department are more successful in resolving issues that come under bilateral agreements or issues that DOT has determined deny U.S. airlines a fair and equal opportunity to compete. For example, one cargo airline reported that during recent bilateral negotiations with a European country, U.S. negotiators were successful in including in the bilateral agreement a statement that prevents that country from arbitrarily assessing landing fees. The U.S. government also intervened successfully on behalf of an all-cargo airline that reported experiencing cargo-handling restrictions and discriminatory cargo-handling fees at airports in an Asian country. In response to a formal complaint, the U.S. government imposed sanctions on the foreign government, and the foreign government ceased its discriminatory practices.

According to carriers responding to our questionnaire, DOT and the State Department have had less success in resolving problems that are not covered by specific, detailed provisions in bilateral agreements or that do not represent discrimination against U.S. airlines. For example, according to one U.S. airline, the departments were not able to resolve restrictions that limited the airline’s operations to the less commercially desirable of a foreign city’s two airports. According to another airline, DOT and the State Department have been negotiating for 2 years with a Latin American country to drop a restriction that reserves to national companies the right to transport international freight shipments in vehicles with a capacity of more than 4-1/2 tons.
Some survey respondents said that their problems remain unresolved. Charter airlines, for example, continue to have difficulty obtaining flight permits at Latin American airports. As we previously reported, DOT and the State Department must consider numerous factors, including the severity of the problem and the United States’ aviation trade relationship with the country involved, in attempting to resolve U.S. airlines’ doing-business problems. At these agencies’ disposal are several statutory and regulatory tools that authorize retaliatory measures. For example, the United States may deny the schedule of flights to the United States proposed by a country’s carriers or may impose other sanctions. Such stern measures have limited application, however, in addressing practices that do not clearly violate bilateral accords or discriminate against U.S. carriers. DOT interprets its authority under 49 U.S.C., section 41310, as requiring a finding of a violation of a bilateral accord or other instance of unfair or discriminatory treatment before it may impose sanctions. We found in our November 1994 report that efforts by DOT and the State Department to resolve the range of doing-business problems that do not overtly discriminate against U.S. carriers are complicated by several constraints, such as the need to negotiate with foreign governments that are often protecting their own carriers from increasing U.S. competition.

According to U.S. all-cargo airlines, their success is limited by a range of problems in doing business at key airports in Latin America and the Asia/Pacific region. Such obstacles increase carriers’ operating costs and can erode the competitiveness of their services. Although most U.S. all-cargo airlines are satisfied with the assistance they have received from DOT and the State Department in resolving their problems, two airlines were unaware of the assistance that the agencies could offer. Neither agency has systematically provided the airlines with information on the assistance available or guidance on obtaining access to it. In response to a recommendation in our prior report, DOT began to collect and analyze information on U.S. airlines’ problems in an effort to monitor the status, nature, and severity of such problems. However, because DOT has not collected information directly from the airlines, many U.S. all-cargo carriers are unaware of its efforts and have not provided any information. As a result, DOT still cannot effectively establish priorities and strategies to address the most serious and pervasive problems.

We recommend that the Secretary of Transportation (1) develop and distribute to all U.S. airlines information on the assistance available, and guidance on the procedures to be followed, in requesting the U.S. government’s aid in resolving problems in doing business abroad and (2) extend DOT’s current effort to collect information on the status and severity of U.S. airlines’ problems in doing business abroad to include all U.S. all-cargo airlines that operate internationally.

We provided a draft of this report to the departments of Transportation and State for their review and comment, and they generally agreed with our conclusions and recommendations.

U.S. delegations have discussed air cargo issues to some extent in their negotiations with more than three-quarters of the countries with which bilateral talks have been held since 1989. Aviation agreements reached during this period have generally expanded the opportunities for U.S.
all-cargo carriers and, in some cases, have liberalized cargo services before passenger services. Nevertheless, restrictions persist. As a remedy, most U.S. all-cargo airlines advocate separating negotiations of cargo rights from broader negotiations that include passenger services. Separate discussions about air cargo services could allow negotiators to focus on all-cargo airlines’ unique operating requirements, according to airline representatives and DOT and State Department officials. Some all-cargo airlines also believe that such discussions could ensure that progress on cargo services is not delayed because of disputes about passenger issues. In addition, several industry observers believe that successful negotiations on cargo issues could create momentum to achieve progress on contentious passenger issues in several U.S. aviation relationships.

Airline representatives and DOT and State Department officials also point out several obstacles to such an approach. Most foreign countries do not have major international all-cargo airlines. Instead, they have passenger/cargo airlines. In these countries, the governments might be unable to separate negotiations of air cargo and passenger services. Furthermore, U.S. negotiators would be unable to exchange cargo rights for passenger rights, which could lessen their flexibility in negotiations and make it difficult for them to obtain the maximum benefits for U.S. all-cargo airlines. Finally, DOT and State Department officials caution that routinely holding separate cargo negotiations could impose a financial burden on the offices responsible for conducting them.

DOT and State Department officials acknowledge that passenger issues historically have received more attention than cargo issues during bilateral aviation negotiations, primarily because, according to the DOT officials, passenger issues are more numerous and arise more frequently. However, these officials assert that the U.S. government has addressed cargo issues as they have arisen and has paid markedly greater attention to the interests of all-cargo airlines over the past several years, citing their success in liberalizing cargo services with several countries. State Department officials attributed this increased attention, in part, to (1) the growing importance of U.S. air trade with the countries of Latin America and the Asia/Pacific region and (2) the emergence of Federal Express and United Parcel Service alongside U.S. passenger/cargo carriers as major competitors in the international market.

Our analysis of DOT’s and ATA’s records showed that the United States conducted formal aviation negotiations with 56 foreign governments between January 1989, the year that DOT issued its air cargo policy statement, and March 1996. U.S. officials discussed air cargo issues in at least one negotiating session in talks with 44 of these governments. However, most negotiating sessions focused on passenger issues; about one-third of the more than 300 individual sessions dealt with air cargo issues. According to DOT officials, passenger issues receive more attention than cargo issues during negotiations because they arise more frequently. The officials said that foreign countries often focus on passenger issues, which are the principal reason talks are held. They noted that certain kinds of disagreements that continue to arise in the passenger context, such as pricing issues, have not been raised with respect to cargo for many years.
During this period, the United States amended or inaugurated 74 aviation agreements. Thirty-two of these agreements contained specific provisions governing all-cargo services. Of these, 18 agreements specify separate routes for all-cargo services and 21 agreements define the intermodal rights available to airlines. The United States has also signed “open skies” agreements with 12 European countries, under which most bilateral restrictions are eliminated, and an agreement with Canada substantially liberalizing the transborder aviation market. Finally, in March 1996, the United States successfully completed negotiations with Japan that dealt exclusively with air cargo services. Our analysis showed that air cargo issues were addressed in the majority of the negotiating rounds with 20 countries: Argentina, Brazil, China, Fiji, Greece, Guatemala, Hong Kong, India, Indonesia, Korea, Macau, Malaysia, Mexico, Nicaragua, Peru, the Philippines, Saudi Arabia, Singapore, Spain, and Thailand. U.S. negotiators reached agreements with most of these countries that generally expanded service opportunities for U.S. all-cargo airlines. For example, the agreement concluded with the Philippines in 1995 (1) increased the number of routes for all-cargo services and the number of U.S. airlines allowed to operate on those routes, (2) granted U.S. carriers the unrestricted right to change the type of aircraft for flights beyond the Philippines, and (3) ensured that U.S. airlines could operate pickup and delivery services in the Philippines. These service enhancements gave Federal Express the operating freedom necessary to establish a viable hub at Subic Bay. Still, 24 of the 32 U.S. agreements or amendments negotiated since 1989 that incorporated provisions on cargo services contained various restrictions on these services. Currently, aviation agreements governing cargo services in 7 of the 20 leading international airfreight markets for the United States—including the two largest markets, Japan and the United Kingdom—directly restrict the operations of U.S. all-cargo carriers. These seven restricted markets accounted for about one-third of the U.S. international freight traffic in 1994. Restrictions include limits on (1) the number of airlines allowed to operate on all-cargo routes, (2) the ability of U.S. airlines to carry freight to and beyond the other country, and (3) the frequency of all-cargo airlines’ flights. Agreements with some countries do not guarantee the right of U.S. airlines to perform their own ground-handling services or to truck cargo off airport property for final delivery. State Department and DOT officials note, however, that bilateral aviation agreements that restrict cargo services also tend to restrict passenger services. For example, the U.S. agreements with Japan and the United Kingdom restrict both types of service. A State Department official also said that these agreements are considerably more liberal than the agreements they amended or replaced. Appendix III contains a list of the countries with which the United States has negotiated since 1989 and a table describing specific provisions of the agreements governing air cargo services. Most U.S. air cargo carriers that we surveyed believe that the stated U.S. international aviation policy—embodied in DOT’s 1989 and 1995 policy statements—addresses their interests in liberalizing and expanding international air cargo services. 
Eleven of the 19 airlines that stated their views on this issue believe that, overall, DOT’s policy addresses their principal concerns to a moderate or great extent. However, only 7 of 20 respondents believe that DOT has been similarly effective in representing their interests during bilateral aviation negotiations, while 4 respondents believe that DOT has done little or nothing to represent their interests. Respondents were split as to whether the State Department has represented them well or poorly. Seven of the 12 airlines stating their views on this issue believe the State Department has represented them to a little or some extent, while 5 respondents believe the State Department has represented their interests to a moderate or great extent. Thirteen of the 19 airlines that stated their views advocate that the United States routinely hold bilateral talks dedicated exclusively to negotiating cargo rights, while only 4 support the continuation of comprehensive negotiations. DOT’s policy enunciated in the 1995 statement considers such an approach to negotiations appropriate when it can foster the comprehensive liberalization of aviation relations. While acknowledging that DOT and the State Department have been more responsive to the needs of all-cargo carriers when negotiating aviation agreements over the past several years, several of these airlines assert that under the current framework of comprehensive talks, negotiators primarily focus on the needs of passenger/cargo carriers, often to the detriment of all-cargo carriers’ interests. In addition, some of these airlines believe that the traffic needs of all-cargo operations are sufficiently different from those of passenger/cargo airlines to justify separate negotiations. Some carrier representatives also contend that when substantial consensus on cargo issues is reached during negotiations, progress on an agreement can be delayed because of disputes about passenger services. According to some U.S. all-cargo charter airline representatives, separate negotiations could facilitate agreement on specific provisions guaranteeing the airlines liberal operating rights. Many U.S. aviation agreements either do not contain a formal provision governing charter services or require that charter services be performed according to the rules of the country in which the traffic originates. According to DOT and airline officials, the regulation of charter services by foreign governments can reduce the viability of such services. For example, Argentina requires that its national airlines have the first opportunity to carry charter freight originating in Argentina. Finally, the two major international all-cargo carriers believe that separately negotiating cargo services would recognize the intrinsic link between the growth of international trade and liberalized air cargo services. Because of this connection, these airlines think air cargo services should be considered as a trade issue rather than as a transportation issue and that the Office of the U.S. Trade Representative (USTR) should play a more active role in negotiating cargo rights. One of these airlines holds that the best way to promote the liberalization of international air cargo services is by convincing U.S. negotiating partners of the benefits of increased air trade to their economies. Similarly, a State Department official pointed to the U.S. talks with Brazil in 1995 as an example of the influence that a country’s broader trade interests may have on the outcome of negotiations. 
The United States and Brazil amended the aviation agreement to increase the number of scheduled and charter all-cargo flights permitted, as well as to expand passenger service opportunities. Brazil’s growing air export trade to the United States, which includes shipments of automotive parts and other finished industrial products, was among the incentives for Brazil to liberalize air cargo services, the official explained. DOT officials, on the other hand, believe that it was Brazil’s desire for enhanced passenger services to the United States that allowed the United States to obtain cargo rights in return.

The six major U.S. passenger/cargo airlines with significant international operations are opposed to any negotiating policy that would routinely exclude them from air cargo talks with foreign countries. Two of these airlines expressed concern that separate talks for air cargo rights would place their own cargo operations at a competitive disadvantage. Several U.S. passenger/cargo airlines are dedicating increasing resources to transporting freight in international markets. While most passenger/cargo carriers do not compete directly with integrated carriers in the door-to-door, express delivery market, they do compete for traditional airport-to-airport freight traffic, according to industry analysts. Two passenger/cargo airline executives conveyed their companies’ concern that the results of air cargo talks could have profound implications for passenger services by setting unfavorable precedents for issues of common interest, such as the right of U.S. airlines to serve destinations beyond a foreign country.

DOT officials stated that retaining the flexibility inherent in comprehensive discussions is entirely consistent with the U.S. government’s formal policy on negotiating bilateral aviation agreements. They explained that while the 1995 U.S. International Air Transportation Policy Statement commits DOT to remain open to agreements covering only air cargo services when circumstances warrant, the 1989 air cargo policy generally obligates the agency to retain flexibility in the interest of obtaining agreements that comport with the United States’ overall economic interests. According to another DOT official, DOT has no institutional interest in holding only comprehensive negotiations. Nevertheless, DOT officials said that comprehensive negotiations have usually proved to be the most effective way to adapt to evolving conditions during negotiations with most countries.

According to airline representatives and DOT and State Department officials, in some cases conducting negotiations dedicated solely to air cargo issues could foster the liberalization of air cargo services by allowing negotiators to focus on these issues. Some all-cargo airline representatives also believe that separate negotiations could prevent negotiators from forgoing agreement on cargo services because of disputes about passenger services. Finally, by negotiating cargo issues in advance of passenger issues, negotiators might develop broad areas of agreement and understanding in an otherwise restrictive relationship, creating a model for subsequent discussions of passenger issues. Despite the potential advantages, these experts point out that significant obstacles to the successful implementation of air cargo-only negotiations exist. According to several U.S.
aviation officials and all-cargo airline representatives, conducting separate all-cargo negotiations could focus officials’ attention on the operating requirements of air cargo services, such as traffic rights granting carriers maximum operating flexibility to enable them to take advantage of shifting trade flows. These include rights to carry freight to and beyond foreign countries and to alter flight routings according to market demand. They also include intermodal rights and the freedom to transfer freight between aircraft at foreign airports without restriction as to the size, number, or type of aircraft involved—so-called change-of-gauge rights. Finally, negotiators could give increased attention to the doing-business problems of air cargo carriers if discussions were separated. According to one airline representative, these problems often cannot be adequately addressed during comprehensive talks because of crowded negotiating agendas and limited time. Addressing cargo issues in advance of—and in isolation from—passenger issues could sometimes help create the momentum necessary to liberalize several bilateral relationships, according to some industry observers. Holding successful all-cargo talks in advance of more contentious discussions about passenger services, some observers explain, could create a climate of goodwill and an understanding that differences over passenger services could be resolved. These observers believe that this approach would foster liberalization much as did the deregulation of the domestic U.S. airline industry during the 1970s. The deregulation of domestic cargo services in 1977 led to the development of new service options for shippers, most prominently overnight express delivery, and stimulated dramatic growth in domestic cargo traffic. This growth partially contributed to the confidence that passenger markets could be deregulated the following year, according to these observers. Similarly, according to this point of view, a working demonstration of successfully liberalized international air cargo markets may encourage many of the United States’ foreign trading partners to negotiate for the same benefits in international passenger markets. This view, however, has yet to be proved. In contrast to such arguments for separate negotiations are obstacles suggesting that this approach may not be routinely practical or appropriate. First, most foreign governments have little incentive to conduct all-cargo negotiations because their countries do not have major international all-cargo carriers. Even though many scheduled foreign passenger/cargo airlines also operate cargo-only aircraft, many of these airlines still carry a significant amount of cargo in the holds of passenger aircraft. As a result, their market needs are defined primarily in terms of initiating or expanding passenger services, which are their primary source of revenue, according to DOT and State Department officials. When foreign officials negotiate, they often do so with the acknowledged goal of expanding their national carriers’ passenger services. In 1995, 75 foreign carriers from 44 countries operated all-cargo services to the United States. However, many of these carriers are small and their interests are considered secondary by foreign aviation officials, according to DOT officials and industry analysts. Only three foreign all-cargo airlines serving the United States—Cargolux, Nippon Cargo Airlines, and TAMPA—rank in the top 25 international airfreight carriers. 
Foreign negotiators, therefore, may find it difficult to bargain exclusively on behalf of small all-cargo carriers, seeking instead to gain cargo rights from the United States in the general course of comprehensive discussions. For example, a British government representative told us that while his country’s largest passenger/cargo airline, British Airways, carries significant amounts of cargo across the North Atlantic on board its passenger aircraft, its cargo revenue on these routes is largely a function of the frequency of its passenger flights between the United Kingdom and the United States.

A second obstacle to separate all-cargo talks is the possibility that they could reduce the flexibility of U.S. negotiators to obtain new rights for all-cargo and passenger/cargo airlines. In particular, DOT and State Department officials and passenger/cargo airline representatives believe that separating talks diminishes opportunities to exchange cargo rights for passenger rights, and vice versa. With comprehensive discussions, negotiators can seek the best overall deal, which might mean allowing more passenger flights for foreign carriers in exchange for increased flights by U.S. all-cargo carriers, according to these officials. DOT and State Department officials with whom we spoke urged adherence, in most cases, to the current framework for negotiating, which relies on comprehensive talks, with separate negotiations available as an alternative. According to these officials, the service gains available to U.S. all-cargo carriers will usually be greater when agreements arise from flexible, comprehensive talks. They cited as examples the agreements reached with Canada, Mexico, and several of the European countries with which the United States now has an “open skies” agreement. Moreover, according to the officials, the interests of large integrated all-cargo airlines are often dissimilar to those of smaller, traditional freight carriers. This diversity of interests suggests that cargo-only talks may not, in many cases, be more effective than comprehensive negotiations in meeting the needs of all members of the community of all-cargo airlines. Indeed, two of the all-cargo airlines that responded to our survey supported this assessment. These carriers expressed concern that the interests of the large integrated all-cargo airlines—Federal Express and United Parcel Service—would receive favored treatment in cargo-only negotiations.

Finally, according to DOT and State Department officials, the U.S. government would incur additional costs by negotiating passenger and cargo rights separately. Each round of negotiations requires advance preparation to identify goals and develop strategies to achieve them. Importantly, preparation also includes consultation with the affected parties, including carriers, airports, and local communities. Aviation negotiations can involve multiple rounds of talks conducted over several months and demand negotiators’ attention before, during, and after the actual talks. In addition, when the foreign government hosts the discussions, typically for every other round, both DOT and the State Department often incur significant travel costs. The U.S. negotiators that we spoke with are hesitant to pursue a policy of routinely separating passenger and cargo negotiations. They expressed concern that they would have insufficient time and funding to split each round of talks so that cargo issues and passenger issues would receive equal amounts of attention.
Air cargo talks with Japan, concluded in March 1996, illustrate both the advantages and disadvantages of negotiating exclusively for the expansion of cargo services. One major advantage, according to DOT and State Department officials, is that the negotiations addressed cargo issues on their own merits and were not overshadowed by the contentious passenger issues in the relationship. Under the terms of the U.S.-Japan agreement, the United States received Japan’s consent for an additional U.S. airline to begin all-cargo services to Japan; for United Parcel Service to expand its service to and beyond Japan; and for Federal Express, United Airlines, and Northwest Airlines to route their flights more flexibly.

However, the agreement also focuses attention on the difficulties inherent in concluding similar agreements with other countries. First, the United States and Japan were able to hold cargo negotiations because their relationship—unlike U.S. relationships with other countries—allows the cargo needs of each to be considered separately and distinctly from the passenger needs, according to DOT. Each country has at least one major all-cargo carrier, and each has passenger/cargo carriers that operate cargo-only aircraft on bilateral routes. Second, both the U.S. and the Japanese governments had concerns over the precedent that an agreement on cargo services could set for subsequent passenger talks. Japanese negotiators, in particular, did not wish to set a precedent in which the United States could regard expanded cargo rights as a precursor to similarly expanded passenger rights, according to State Department officials. Foreign negotiators representing other major U.S. trading partners are likely to express similar reservations.

With Japan, the United States originally sought an agreement that would allow all-cargo carriers the maximum flexibility to respond to business opportunities with little regulatory interference. During the discussions, U.S. negotiators argued that granting U.S. all-cargo carriers the right to carry freight to destinations beyond Japan is essentially a trade issue and that significant economic benefits would accrue to Japan from unreservedly allowing such flights. However, Japan did not accept this reasoning and limited the ability of U.S. all-cargo airlines to carry cargo originating in Japan to points beyond Japan. One U.S. airline representative expressed concern that continuing such limits on U.S. carriers’ right to serve destinations beyond Japan may have set an unwelcome precedent for passenger services.

Finally, concluding the U.S.-Japan agreement on all-cargo services has not proved to be a catalyst for accelerating progress on passenger service issues. In fact, the recent agreement on air cargo services has not prevented conflict over the pre-existing traffic rights of U.S. all-cargo airlines. The two countries resumed negotiations on passenger issues on April 29, 1996, but the talks have been at an impasse since then because of a dispute over Japan’s refusal to approve flights by two U.S. passenger/cargo airlines—United and Northwest—and Federal Express through Japan to other destinations in Asia. The United States believes these flights are authorized under current U.S.-Japan agreements.
On July 16, 1996, DOT proposed to prohibit Japan Air Lines from carrying cargo from points elsewhere in Asia on its scheduled all-cargo services through Japan into the United States unless the Japanese government approved Federal Express’s request. As of September 25, 1996, the negotiations had achieved little progress on these issues, and DOT had reaffirmed the U.S. intent to resolve outstanding disputes over the rights of U.S. carriers to operate flights beyond Japan before undertaking passenger negotiations over new opportunities.

Two modifications to the U.S. strategy have been under discussion within government and the industry. First, conducting multilateral negotiations has been offered as an approach that could create broad areas of agreement among countries and provide an incentive for countries with relatively restrictive aviation policies to liberalize them as part of a regional agreement. Second, continuing to allow carriers and other affected parties to directly observe discussions has been advocated as a means to help ensure that all parties have an opportunity to communicate their interests to U.S. negotiators. While each modification offers promise, each also raises problems.

According to DOT officials, conducting multilateral talks could, in principle, help create negotiating efficiencies by focusing federal negotiating resources on talks with several like-minded countries at one time and could promote liberalization on a large scale. DOT’s 1995 U.S. International Air Transportation Policy Statement identified the negotiation of such multilateral agreements as an option in obtaining further liberalization of U.S. aviation relations. Some DOT officials and industry experts believe that concluding a liberal multilateral agreement on cargo services might heighten foreign governments’ interest in liberalizing passenger services. By offering significantly expanded access to the vast U.S. market, such an approach could motivate countries with restrictive aviation policies to join their neighbors in concluding a relatively liberal agreement with the United States.

U.S. officials have attempted to gauge foreign interest in holding multilateral negotiations. In 1991, in 1994, and again in 1996, DOT and State Department negotiators held exploratory talks with representatives of the European Commission, the executive arm of the European Union (EU). During the earlier talks, U.S. and EU officials reached an understanding on a broad array of cargo issues, which included deregulating pricing, eliminating restrictions on the number of all-cargo airlines allowed to operate, allowing an unrestricted amount of cargo to be transported between the United States and the EU, and addressing a host of doing-business issues. Nonetheless, the Commission no longer supports holding multilateral talks on cargo services in advance of and in isolation from discussions on passenger issues, believing this approach to be counterproductive to its ultimate goal of negotiating air services between the United States and EU member states. The Commission embraces the concept of multilateral negotiations and has obtained approval from a majority of its member states to proceed with phased, exploratory talks with the United States. However, according to DOT officials, the Commission does not have the authority to negotiate traffic rights—a disabling limitation in their view.
DOT officials believe that there is interest in seeking air transport liberalization through regional associations, including those in Asia and Latin America. However, both U.S. and foreign officials said that none of these groups has yet achieved a consensus favoring such an approach. Formalizing and continuing a recent U.S. policy that allows “direct participation” by carriers in comprehensive negotiations could help ensure that agreements reflect all carriers’ needs and interests. While observers do not play a formal role in the negotiations, their presence allows them to state their case directly to DOT and State Department negotiators and to react immediately to any foreign country’s positions that might adversely affect their ability to serve markets in and beyond the country in question. According to a State Department official, one advantage to formalizing direct participation would be that “carriers couldn’t complain later that they were not part of the process.” However, DOT and State Department officials have three primary concerns. First, smaller affected parties could be disadvantaged in articulating their needs because they often would be unable to send a representative to negotiations. Large, resource-rich carriers could conceivably send a representative to every negotiation, while smaller carriers could not afford the considerable travel and other staff costs of doing so. Second, U.S. delegations composed of large numbers of U.S. airlines interested in serving the relevant market may intimidate foreign negotiating teams representing weak foreign airlines. Finally, large numbers of observers may discourage negotiators from openly discussing substantive matters, increasing the frequency of so-called chairmen’s meetings to resolve key issues. Such closed meetings could create an atmosphere of mistrust between the U.S. chairman and the observing parties.
Pursuant to a congressional request, GAO reviewed U.S. air cargo airlines' reported problems in doing business abroad, focusing on the: (1) nature of the airlines' problems; (2) actions the affected airlines and the Departments of Transportation (DOT) and State have taken to resolve these problems; (3) extent to which the U.S. government has addressed air cargo issues in policymaking and during bilateral aviation negotiations; and (4) possibilities for separating negotiations of air cargo services from broader negotiations that include passenger services. GAO found that: (1) the 22 U.S. all-cargo airlines responding to a survey reported a range of obstacles to doing business abroad which impair their competitiveness and reported that they experienced significant problems at 81 foreign airports; (2) the most pervasive problems were related to regulation by foreign governments and foreign aviation authorities, with most of these problems occurring at airports in Latin America or the Asia-Pacific region; (3) many of the carriers have attempted to resolve such problems themselves, although some have requested assistance from DOT or State, while others were unaware that assistance was available; (4) U.S. delegations have raised air cargo issues with more than three-quarters of the countries with which they have conducted bilateral talks since 1989; (5) restrictions persist in spite of the resulting expansion of opportunities for U.S. all-cargo carriers; and (6) 13 of the 22 airlines advocate separating negotiations of air-cargo rights from broader negotiations that also address passenger rights, but this approach may not be practical or appropriate on a regular basis.
The AAV is a tracked (non-wheeled) vehicle with the capability to self-deploy—or launch from ships (see figure 1). The AAV has a water speed of approximately six knots and is usually deployed from within sight of the shore, a factor that poses survivability risks in certain threat environments. According to USMC officials, the AAV has become increasingly difficult to maintain and sustain. As weapons technology and the nature of threats have evolved over the past four decades, the AAV is viewed as having limitations in water speed, land mobility, lethality, protection, and network capability. According to DOD, it is essential to modernize USMC’s ability to move personnel and equipment from ship to shore. In the last 15 years, USMC has undertaken a number of efforts to do this.

EFV: USMC began development of the EFV in 2000. The EFV was to travel at higher water speeds—around 20 knots—which would have allowed transporting ships to launch the EFV farther from shore than the AAVs it was to replace. However, following a 2007 breach of a statutory cost threshold, that program was restructured and subsequently, in 2011, canceled by DOD due to affordability concerns.

ACV: In 2011, the USMC completed initial acquisition documentation providing the performance requirements for a new replacement amphibious vehicle called the ACV. The ACV was expected to be self-deploying, with a water speed of 8 to 12 knots that would permit deployment beyond visual range of the shore but would not achieve high water speed. It was also expected to provide for sustained operations on shore with improved troop protection. However, USMC leadership then requested that an affordability analysis be completed that would explore the technical feasibility of integrating high water speed into ACV development. According to DOD officials, the analysis indicated that achieving high water speed was technically possible but required unacceptable tradeoffs as the program attempted to balance vehicle weight, capabilities, and cost. Meanwhile, the USMC retained a requirement to provide protected land mobility in response to the threat of improvised explosive devices—a requirement the AAV could not meet due to its underbody design. In 2014 we reported that, according to program officials, the program office was in the process of revising its ACV acquisition approach based on this affordability analysis.

ACV 1.1, 1.2, and 2.0: In 2014, the USMC revised its ACV acquisition approach, adopting a plan to develop the ACV in three increments. The first increment of ACV development—ACV 1.1—is planned to be a wheeled vehicle that would provide improved protected land mobility and limited amphibious capability. The ACV 1.1 is expected to be part of an amphibious assault through the use of surface connector craft to travel from ship to shore. Surface connectors are vessels that enable the transportation of military assets, including personnel, material, and equipment, from a sea base or ship to the shore. ACV 1.1, a successor to the previously suspended Marine Personnel Carrier program, is using prototypes, demonstration testing, and other study results from that program. DOD officials estimated that the past Marine Personnel Carrier design and the ACV 1.1 as currently envisioned are about 98 percent the same. Troop capacity—nine for the Marine Personnel Carrier and a threshold, or minimum, of 10 for the ACV 1.1—is the main difference between the two. Figure 2 provides a notional drawing of the ACV 1.1.
The second increment—ACV 1.2—adds two variants of the vehicle for other uses and aims to improve amphibious capability. Program officials anticipate that it will demonstrate amphibious capability that matches the AAV, including the ability to self-deploy and swim to shore. According to DOD officials, ACV 1.2 will be based on the results of ACV 1.1 testing, and it is anticipated that some 1.1s will be upgraded with ACV 1.2 modifications.

The third effort, referred to as ACV 2.0, focuses on technology exploration to attain high water speed—a critical capability, according to DOD officials. These technology exploration efforts are seeking design options that may enable high water speed capability without accruing unacceptable trade-offs in other capabilities, cost, or schedule. According to officials, ACV 2.0 is a conceptual placeholder for a future decision point when the Marine Corps plans to determine how to replace the AAV fleet, which is expected to occur in the mid-2020s. High water speed capability may ultimately be achieved through an amphibious vehicle or a surface connector craft.

Our prior work on best practices has found that successful programs take steps to gather knowledge that confirms that their technologies are mature, their designs are stable, and their production processes are in control. The knowledge-based acquisition framework involves achieving the right knowledge at the right time, enabling leadership to make informed decisions about when and how best to move into various acquisition phases. Successful product developers ensure a high level of knowledge is achieved at key junctures in development, characterized as knowledge points. Knowledge Point 1 falls early in the acquisition process and coincides with a program’s decision to begin development, referred to as Milestone B. At this knowledge point, best practices are to ensure a match between resources and requirements. Achieving a high level of technology maturity and preliminary system design backed by robust systems engineering is an important indicator of whether this match has been made. This means that the technologies needed to meet essential product requirements have been demonstrated to work in their intended environment. In addition, the developer has completed a preliminary design of the product that shows the design is feasible. Figure 3 identifies the ACV 1.1 acquisition’s status within the DOD acquisition process.

Our review of the available documents that have been prepared to inform the November 2015 decision to begin system development of ACV 1.1—including the acquisition strategy and an updated 2014 analysis of alternatives (AOA)—found that most of the ACV program’s acquisition activities to date reflect the use of best practices. The incremental approach to achieving full capability is itself consistent with best practices. The ACV 1.1 acquisition strategy minimizes program risk by using mature technology, competition, and fixed-price-type contracts when possible. In addition, our analysis of the 2014 AOA found that overall it met best practices. Going forward, however, some elements of the acquisition approach, for example, the program’s plan to hold a preliminary design review (PDR)—a technical review assessing the system design—after beginning development, do not align with best practices. While some aspects of this acquisition suggest lower levels of risk, these deviations could increase program risk.
GAO will continue to monitor this risk as the program moves forward.

The ACV 1.1 acquisition strategy prepared to inform the upcoming start of engineering and manufacturing development minimizes program risk by following best practices, such as using mature technology, competition, and fixed-price-type contracts when possible.

Technology maturity. The ACV program plans to utilize mature technology in ACV 1.1 development. According to acquisition best practices, demonstrating a high level of maturity before allowing new technologies into product development programs puts programs in a better position to succeed. To support a decision to begin development, a technology readiness assessment (TRA) was performed to assess the maturity of critical technologies to be integrated into the program. DOD defines critical technology elements as new or novel technology that a platform or system depends on to achieve successful development or production or to successfully meet a system operational threshold requirement. In a TRA, identified critical technologies are assessed against a technology readiness level (TRL) scale of 1 to 9. Specifically, a rating of TRL 1 demonstrates “basic principles observed and reported,” and TRL 9 demonstrates “actual system proven through successful mission operations.” (Demonstration in a relevant environment is TRL 6; demonstration in an operational environment is TRL 7.) Overall, the completed ACV 1.1 TRA assessed the program at TRL 7, indicating demonstration in an operational environment. This assessment was based on the non-developmental nature of the vehicles, the use of mature technology for modifications, and tests and demonstrations of prototype vehicles done under the Marine Personnel Carrier program. The TRA, however, identified adapting the Remote Weapon Station to the marine environment as a principal program risk because using the system under different operational conditions may have a significant impact on system reliability. While the program has identified additional risk mitigation strategies—including planned component testing during development and development of preventative maintenance procedures—this technology could entail a somewhat higher level of risk than the TRL level suggests and may require additional attention as development begins.

Competition. According to our prior work, competition is a critical tool for achieving the best return on the government’s investment. The ACV acquisition approach has fostered competition, both through competitive prototyping that took place prior to the start of development and through competition that continues through development until production. Specifically, before the Marine Personnel Carrier program was suspended, the government awarded a contract to test critical subsystems, including the engine, transmission, suspension, and hydraulic hardware systems. The government also awarded four contracts for system-level prototypes demonstrating the swim capability, personnel carry capability, and survivability of each company’s vehicle. The Under Secretary of Defense for Acquisition, Technology, and Logistics—the ACV Milestone Decision Authority—has certified to the congressional defense committees that the ACV program met the competitive prototyping requirement based on the work done under the Marine Personnel Carrier program.
In addition, after development begins, the program plans to award ACV 1.1 development contracts to two vendors, maintaining competition until they select one vendor at the start of production.

Contract strategy. When development begins, the ACV program plans to award hybrid contracts to each of the developers to be selected. According to program plans, each contract is to utilize three different pricing structures for different activities: fixed-price-incentive for ACV 1.1 vehicle development, firm-fixed-price for the incentive to deliver test vehicles early, and cost-plus-fixed-fee for test support and advanced capability improvements and studies. According to the Federal Acquisition Regulation, it is usually to the Government’s advantage for the contractor to assume substantial cost responsibility and an appropriate share of the cost risk; therefore, fixed-price-incentive contracts are preferred when contract costs and performance requirements are reasonably certain. Manufacturing the development vehicles is the largest anticipated portion of ACV development contract costs. According to the ACV 1.1 acquisition strategy, a fixed-price-incentive contract is considered the most appropriate contract type for the vehicle’s development because the vehicles themselves are non-developmental in nature but there is some risk related to the integration of selected systems, such as the Remote Weapon Station, and other modifications required to meet USMC requirements. Meanwhile, the strategy states that the delivery incentive is to be firm-fixed-price, as the fee is a set dollar amount based on how early the vehicles are delivered and is not subject to adjustment based on the vendor’s costs. Under cost-reimbursement contract types, such as a cost-plus-fixed-fee contract, the government bears the risk of increases in the cost of performance. Cost-reimbursement contract types are suitable when uncertainties in requirements or contract performance do not permit the use of fixed-price contract types. A cost-plus-fixed-fee structure is planned for test support before and after the start of production, vehicle transportation, and other test-related activities. According to program officials, the scope and nature of these activities are difficult to predict, making the cost-plus-fixed-fee structure appropriate. Officials also stated that the cost-plus-fixed-fee activities are expected to comprise about 11 percent of the total contract value.

Requirements and cost estimates. Additional key documents have been prepared, or are underway, in accordance with DOD policy. The ACV 1.1 Capabilities Development Document, providing the set of requirements for development, is tailored specifically for ACV 1.1. In accordance with DOD policy, the ACV 1.1 Capabilities Development Document was validated prior to the release of the ACV 1.1 request for proposal in March 2015. In addition, best practices and DOD policy call for the development of an independent cost estimate prior to the start of development. According to agency officials, the independent cost estimate is underway and will be prepared for the Milestone B decision. The acquisition strategy identifies no funding shortfalls for the program as of the fiscal year 2016 President’s budget submission.

Our assessment of the 2014 AOA found that overall it met best practices for AOAs and is, therefore, considered reliable.
An AOA is a key first step in the acquisition process intended to assess alternative solutions for addressing a validated need. AOAs are done or updated to support key acquisition decision points. The USMC completed an AOA update for ACV 1.1 in late 2014 to support the release of the ACV 1.1 request for proposal. Over the years, other AOAs have been completed for related acquisitions, including the EFV, the Marine Personnel Carrier, and the previous version of the ACV considered in 2012. These previous AOAs and other supporting studies comprise a body of work that has informed the most recent ACV AOA update as well as the ACV 1.1 acquisition as a whole.

AOAs can vary in quality, which can affect how they help position a program for success. We have previously identified best practices for the development of AOAs. Considered in the context of the related AOA body of work, the ACV AOA met 15 of the 22 AOA best practices, including ensuring that the AOA process was impartial and developing an AOA process plan. Further, four of the remaining best practices were substantially met, two were partially met, and one was minimally met. For example, best practices call for the documentation of all assumptions and constraints used in the analysis. We found that the 2014 AOA does not include a full list of assumptions and constraints, and any relevant assumptions or constraints from previous analyses were not updated or referenced in the new analysis. As a result, it could be difficult for decision makers to make comparisons and trade-offs between alternatives. Appendices I and II provide more information on the methodology used in this analysis, and appendix III provides the results of our AOA analysis in greater detail. DOD’s Cost Assessment and Program Evaluation staff also reviewed the 2014 AOA and found that it was sufficient. However, they identified a few areas of caution, including recommending additional testing of land mobility to further verify USMC assertions that the wheeled ACV 1.1 would have the same mobility in soft soil as tracked vehicles.

According to USMC officials, the ACV program is pursuing an aggressive schedule in order to obtain ACV 1.1 initial operational capability in fiscal year 2020. The program is scheduled to hold its PDR after development starts, a deviation from best practices. In addition, according to program officials, as a result of the aggressive acquisition schedule, the program plans on a higher level of concurrency between development testing and production than would take place under a more typical acquisition schedule. Under this aggressive schedule, congressional decision makers will likely be asked to approve funds to begin production based on little or no evidence from the testing of delivered ACV 1.1 prototypes. Some factors may mitigate the risk posed by this acceleration; for example, program officials have stated that all required testing will take place prior to the start of production. However, further attention may be warranted in our future reviews of the program’s schedule.

The ACV 1.1 program is planning to hold its PDR about 90 days after development begins and to combine its PDR and the critical design review (CDR) into one event. Best practices recommend that the PDR be held before development begins in order to increase the knowledge available to the agency when development starts, for example, increasing confidence that the design will meet the requirements established in the Capabilities Development Document.
Holding the PDR after development begins introduces some risk by postponing the attainment of knowledge and reducing the scheduled time available to address any design issues that may arise. In addition, it is a best practice to demonstrate design stability at the system-level CDR, completing at least 90 percent of engineering drawings at that time. Combining the PDR and CDR may limit the time available to the program to address any issues identified and ensure that sufficient knowledge is attained prior to the program moving forward. For example, in a 2006 report, we found that the EFV program’s CDR was held almost immediately after the start of development—similar to the approach for ACV 1.1—and before the system integration work had been completed. Testing of the early prototypes continued for three years into system development, well after the tests could inform the CDR decision. Best practices call for system integration work to be conducted before the CDR is held.

According to DOD officials, the ACV 1.1 PDR will be held after Milestone B because contracts are not planned to be awarded prior to that time. In addition, DOD officials stated that the technological maturity of ACV 1.1 reduces risk and permits both the waiver of the PDR requirement and the consolidation of the reviews. While the use of mature technology could suggest a reduced risk from this deferral, we believe that contracts could have been awarded earlier in the acquisition process in order to facilitate a PDR prior to development start.

The current ACV 1.1 program schedule demonstrates concurrency between testing and production that could represent increased program risk. According to agency officials, approximately one year of development testing will take place prior to the program’s production decision in order to assess production readiness. Testing will then continue for another 10 months after the start of production. The intent of developmental testing is to demonstrate the maturity of a design and to discover and fix design and performance problems before a system enters production. According to agency officials, the adoption of an accelerated fielding schedule accounts for the level of overlap between developmental testing and production. They stated that they plan to have completed all development testing and operational assessment required to support the production decision by the time that decision is made. DOD policy allows some degree of concurrency between initial production and developmental testing and, according to our prior work, some concurrency may be necessary when rapidly fielding urgently needed warfighter capabilities. However, our past work has also shown that beginning production before demonstrating that a design is mature and that a system will work as intended increases the risk of discovering deficiencies during production that could require substantial design changes and costly modifications to systems already built. A detailed test plan will not become available until Milestone B, as is expected for acquisition programs. When such a plan is available, we will further assess the risk presented by this approach.

Moreover, under the current ACV 1.1 program schedule, Congress will likely be called upon to provide production funding for ACV 1.1 based on little or no evidence from the testing of delivered ACV 1.1 prototypes. The program is scheduled to make a production decision, and select one vendor, in fiscal year 2018.
Under the normal budget process, the request for funds for that production would be submitted to Congress with the President's budget in February 2017, around the same time that the prototype ACV 1.1 vehicles are scheduled to be delivered. If the developmental testing schedule experiences delays and key tests are postponed until after the planned production decision, the program may face increased risk.

The success of the ACV acquisition strategy depends upon the attainment of improved amphibious capabilities over time. The first increment, ACV 1.1, is not expected to have ship to shore amphibious capability and thus is planned to use Navy surface connectors to travel from ship to shore. The USMC and the Navy have coordinated the planned operation of ACV 1.1 with surface connectors to ensure compatibility and availability. The ACV acquisition relies heavily upon realizing a fully amphibious ACV 1.2 that provides AAV-equivalent water mobility and the ability to self-deploy. However, the exact nature of ACV 1.2 and 2.0 is unknown at this time. Achieving the planned capabilities of future ACV increments is highly dependent upon ACV 1.1 attaining its planned amphibious capability.

While ACV 1.1 is expected to have shore to shore amphibious capability, which would enable the vehicle to cross rivers and inland waterways, the vehicle is also expected to rely on Navy surface connector craft for ship to shore transportation. Connectors have become increasingly important as USMC vehicles have grown in weight. According to USMC analysis, about 86 percent of USMC expeditionary force assets are too heavy or oversized for air transport and need to be transported by surface connectors. The ACV 1.1 requirements include transportability by currently available and planned Navy surface connectors. Because several surface connectors can transport the ACV 1.1, the selection of specific surface connectors is planned to be based on an evaluation of mission needs and connector capabilities. Some current and planned Navy surface connectors that could transport ACV 1.1 are described below. Appendix IV provides additional information on the key capabilities of these connectors.

Landing Craft Air Cushion (LCAC). The LCAC is a high-speed hovercraft that supports rapid movement from ship to shore, such as during an amphibious assault. The LCAC is one of the primary connectors providing ship to shore transportation of equipment, personnel, and vehicles. The LCAC, which can access about 70 percent of the world's beaches, is optimized for major combat operations and forcible entry. The Navy currently has a fleet of 72 LCACs, which have received upgrades through a service life extension program. The Navy also plans to provide additional LCAC maintenance until replacement craft are acquired.

Ship to Shore Connector (SSC). The Navy plans to replace each LCAC with an SSC. The SSC, similar in design to the LCAC, is planned to maintain or improve upon LCAC capabilities with an increased payload capacity, a longer service life, and the ability to operate in harsher marine environments. The SSC is planned to reach initial operational capability, with 6 craft, in 2020 and full operational capability in 2027.

Landing Craft Utility (LCU). The LCU is a utility connector that supports ship to shore movement in amphibious assaults and also participates in a variety of other missions. The LCU has a large range and payload capacity but operates at a slower speed than the LCAC.
According to Navy officials, the LCU can access about 17 percent of the world's beaches and stops at the water's edge in order to unload its cargo.

Surface Connector (X) Replacement (SC(X)R). According to Navy officials, the aging LCU craft are planned to be replaced by SC(X)R craft in order to maintain a total of 32 LCUs and SC(X)Rs. According to the Surface Connector Council, the SC(X)R is likely to be larger and to show improvements in materials, propulsion, maintainability, and habitability. Production of the SC(X)R is planned to begin in 2018.

Expeditionary Fast Transport (EPF). The EPF, formerly known as the Joint High Speed Vessel, is a commercial-based catamaran that provides heavy-lift, high-speed sealift mobility. The EPF uses a ramp system to allow vehicles to off-load at shipping ports or where developed infrastructure is unavailable (referred to as austere ports). The EPF is planned to reach full operational capability in 2019.

Figure 4 illustrates three examples of how various surface connectors could be used to transport ACV 1.1 from ship to shore. For example, ACVs could be loaded onto an Expeditionary Transfer Dock (ESD) and then onto LCACs or SSCs while the ESD maneuvers toward the shore. The LCACs or SSCs would then launch from the ESD and transport the ACVs to shore. The ACVs could also be off-loaded at an advanced base—such as an island located within the operational area—and then loaded onto an EPF for transport to a developed or austere port. Finally, the ACVs could be loaded directly from ships onto an LCU or SC(X)R and taken to shore. This graphic includes selected examples only and does not represent all possible transportation options.

SSC acquisition risks may have consequences for the employment of ACV 1.1. The Navy has determined that it requires a combined fleet of at least 72 operational LCACs and SSCs to support ship to shore transportation demands. However, the Navy previously anticipated a lack of available connectors from 2015 through 2024, with a maximum "gap," or shortage, of 15 craft in 2019. Navy officials said that this connector gap has been mitigated by the extension of the LCAC service life extension program and the acceleration of the SSC acquisition. In a previous assessment of the SSC program, we found that the Navy had identified three SSC technologies as potential risk areas for which it recommended further testing. According to officials, since that report, the Navy has completed additional testing of software, drivetrain components, and engine endurance to further develop these technologies and reduce their risk. Navy officials said the SSC program plans to continue testing these technologies and remains on schedule. However, the SSC program entered production in 2015, more than 2 years before the estimated delivery of the test vehicle. This concurrency of development and production creates a potential risk of schedule overruns if deficiencies in the design are not discovered until late in testing and retrofits are required for previously produced craft. Navy officials said that the LCAC service life could be further extended with additional sustainment funding in the event of SSC acquisition delays.

The USMC and the Navy regularly coordinate on ACV 1.1 to facilitate the future use of the surface connector fleet through the Joint Capabilities Integration and Development System (JCIDS), the Surface Connector Council, and other communications.

JCIDS.
The JCIDS process is a DOD-wide process to identify and assess capability needs and their associated performance criteria. The Capabilities Development Document for ACV 1.1 was developed as part of the JCIDS process. The document, among other things, identified key system attributes, key performance parameters, and design requirements for ACV 1.1 with input from the USMC, the Navy, and others. For example, it included design requirements that allow the SSC to transport two ACVs and that ensure ACVs can be transported by other connector craft as well.

Surface Connector Council and working group. The Surface Connector Council serves as a mechanism through which the USMC and the Navy coordinate activities related to surface connectors that are used for amphibious shipping. The Council has two co-chairs: the Director of the Navy's Expeditionary Warfare Division and the Director of the USMC Capabilities Development Directorate, who is also the Deputy Commandant for Combat Development and Integration. The Council's membership is drawn from several offices of both the Navy and the USMC. The Council is required to meet at least biannually but, according to Navy officials, in practice it generally meets quarterly. At these meetings, the Council has discussed ACV program risks, such as connector availability and the scarcity of space on connectors, and associated risk mitigation strategies, according to Navy officials. The Surface Connector Council also has a working-level forum, known as the Surface Connector Roundtable, which meets monthly, according to Navy officials.

Informal discussions. In addition to coordination through JCIDS and the Surface Connector Council, officials said that informal discussions between USMC and Navy officials occur frequently to coordinate the ACV and connector programs.

The exact nature of the ACV's future amphibious capability is not yet known. USMC officials are confident that ACV 1.1 will not only meet its minimum requirements for shore to shore swim capability but may exceed those requirements and be able to swim from ship to shore. Based on tests and demonstrations to date, program officials also expressed confidence that ACV 1.2 will build on the ACV 1.1 capabilities and have the ability to self-deploy from ships. However, according to DOD officials, the capabilities of ACV 1.2 are dependent upon the success of ACV 1.1 development. If ACV 1.1 does not demonstrate the expected amphibious capabilities, then more development than currently anticipated may be required for ACV 1.2 to achieve ship to shore amphibious capability, and greater effort may be needed to retrofit ACV 1.1 vehicles to achieve the same capabilities. Conversely, if ACV 1.1 demonstrates greater than expected amphibious capability, then the progression toward achieving the plans for ACV 1.2 may be easier. Program documentation and analysis to date have focused on developing the ACV 1.1 strategy and plans and on supporting ACV 1.1 decisions. According to DOD officials, the USMC has not yet determined whether the development of ACV 1.2 will be done through improvements within the same program or as a separate program from ACV 1.1. DOD officials stated that the development of ACV 1.1 and 1.2 amphibious capabilities is also expected to affect the nature of ACV 2.0.
According to DOD officials, with the ACV 2.0 decision, the ACV program expects to achieve high water speed, a long-standing goal and a significant increase from the current amphibious goals identified for ACV 1.1. The current USMC amphibious strategy plans for an evolving mix of ACVs and upgraded and legacy AAVs that together are to maintain the needed combination of capabilities at any one time. According to USMC officials, over time, the ACV program plans to replace portions of the AAV fleet with ACV increments as they become available. This USMC strategy, and the analysis that supports it, is based on the assumption that ACV 1.2 will reach a desired level of amphibious capability and that ACV 1.1 vehicles can be upgraded to that level. If those or other key capabilities cannot be achieved, however, it will be important for the USMC to revisit its strategy prior to making production decisions for ACV 1.1, particularly by addressing changes to its overall amphibious strategy and potentially updating its analysis of alternatives. In addition, when and how the USMC will achieve the amphibious capability envisioned for ACV 2.0 remains to be determined, according to DOD officials. We will continue to monitor these issues, along with the program's performance against best practices, as the program progresses toward the Milestone C production decision currently planned for the second quarter of fiscal year 2018.

We are not making any recommendations in this report.

DOD provided written comments on a draft of this report; the comments are reprinted in appendix V. In its comments, DOD stated that it believes its efforts on this program are aligned with our best practices and that our report appears to underestimate ACV 1.1's planned technical maturity and associated risks. DOD stated that the vehicle is beyond the traditional PDR and CDR level of maturity and that conducting a combined PDR and CDR is appropriate for the level of risk identified by the Program Manager. As we stated in this report, the program's plan to hold a PDR after beginning development does not align with best practices, and combining the PDR and CDR may limit the time available to the program to address any issues identified and to ensure that sufficient knowledge is attained before the program moves forward. Further, as we stated earlier, while some aspects of this acquisition do suggest lower levels of risk, these deviations could potentially increase program risk, and we will continue to monitor these risks as the program moves forward. DOD also provided technical comments, which were incorporated where appropriate.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretary of the Navy; and the Commandant of the Marine Corps. This report also is available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have any questions on the matters covered in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

Many guides have described an approach to analyses of alternatives (AOAs); however, no single set of practices for the AOA process has been broadly recognized by both government and private-sector entities.
GAO has identified 22 best practices for an AOA process by (1) compiling and reviewing commonly cited AOA policies and guidance used by different government and private-sector entities and (2) incorporating experts' comments on a draft set of practices to develop a final set. These practices can be applied to a wide range of activities in which an alternative must be selected from a set of possible options, as well as to a broad range of capability areas, projects, and programs. They can provide a framework to help ensure that entities consistently and reliably select the project alternative that best meets mission needs. The guidance below is meant as an overview of the key principles that lead to a successful AOA process, not as a "how to" guide with detailed instructions for each best practice identified.

The 22 best practices that GAO identified are grouped into the following five phases:

1. Initialize the AOA process: includes best practices that are applied before starting the process of identifying, analyzing, and selecting alternatives, such as determining the mission need and functional requirements, developing the study time frame, creating a study plan, and determining who conducts the analysis.

2. Identify alternatives: includes best practices that help ensure the alternatives to be analyzed are sufficient, diverse, and viable.

3. Analyze alternatives: includes best practices that compare the alternatives to be analyzed. The best practices in this category help ensure that the team conducting the analysis uses a standard, quantitative process to assess the alternatives.

4. Document and review the AOA process: includes best practices that are applied throughout the AOA process, such as documenting all steps taken to initialize, identify, and analyze alternatives and to select a preferred alternative in a single document.

5. Select a preferred alternative: includes a best practice that is applied by the decision maker to compare alternatives and to select a preferred alternative.

The five phases address the different themes of analysis necessary to complete the AOA process, from its first step (defining the mission needs and functional requirements) through its final step (selecting a preferred alternative). Three key entities are involved in the AOA process: the customer, the decision maker, and the AOA team. The customer is the program office, service, or agency that identifies a mission need (e.g., a credible gap between current capabilities and those required to meet the goals articulated in the strategic plan). The decision maker is the person or entity that signs off on the final decision and analysis documented in the AOA report, typically the program manager (or an alternate authority figure identified early in the AOA process), who selects the preferred alternative based on the established selection criteria. The AOA team is the group of subject matter experts who are involved in the day-to-day work of the AOA process and who develop the analysis that forms its foundation. Conforming to the 22 best practices helps ensure that the preferred alternative selected is the one that best meets the agency's mission needs. Not conforming to the best practices may lead to an unreliable AOA, leaving the customer without assurance that the preferred alternative best meets the mission needs.
Table 1 shows the 22 best practices and the five phases. Some best practices included in a phase can take place concurrently and do not have to follow the order presented in table 1. The phases themselves, however, should occur in sequence to prevent bias from entering the analysis and to avoid the risk that the AOA team will analyze alternatives that have not been defined. The exception is the document and review phase, which can be done at any stage of the AOA process. For example, best practice 5 (define selection criteria) can be done at the same time as best practice 6 (weight selection criteria). On the other hand, best practice 20 (ensure the AOA process is impartial) can be done at the end of every step or every phase to ensure the impartiality of the AOA as it progresses. The best practices represent an overall process that results in a reliable AOA that can be easily and clearly traced, replicated, and updated. Figure 5 shows the AOA process and how the steps in each phase are interrelated.

An important best practice is an independent review of the AOA process. The AOA process and its results should be validated by an organization independent of the program office and the project's chain of command to ensure that a high-quality AOA is developed, presented, and defended to management. This review verifies that the AOA adequately reflects the program's mission needs and provides a reasonable assessment of the costs and benefits associated with the alternatives. One reason to independently validate the AOA process is that independent reviewers typically rely less on assumptions alone and, therefore, tend to provide more realistic analyses. Moreover, independent reviewers are less likely to automatically accept unproven assumptions associated with anticipated savings. That is, they bring more objectivity to their analyses, providing a reality check of the AOA process that reduces the odds that management will invest in an unreasonable alternative.

To that end, we established four characteristics that identify a high-quality, reliable AOA process. These characteristics evaluate whether the AOA process is well-documented, comprehensive, unbiased, and credible.

"Well-documented" means that the AOA process is thoroughly described in a single document that includes all source data, clearly detailed methodologies, calculations, and results, and explains the selection criteria.

"Comprehensive" means that the AOA process ensures that the mission need is defined in a way that allows for a robust set of alternatives, that no alternatives are omitted, and that each alternative is examined thoroughly over the project's entire life cycle.

"Unbiased" means that the AOA process does not have a predisposition toward one alternative over another; it is based on traceable and verifiable information.

"Credible" means that the AOA process thoroughly discusses the limitations of the analyses resulting from the uncertainty that surrounds both the data and the assumptions for each alternative.

Table 2 shows the four characteristics and their relevant AOA best practices.

To determine how the ACV program's efforts compare with best practices, we reviewed program documentation and other materials for the ACV acquisition, including the acquisition strategy, technology readiness assessment, and the Capabilities Development Document.
We identified acquisition best practices based on our extensive body of work in that area and Department of Defense (DOD) guidance, and we used this information to analyze the proposed ACV acquisition approach and acquisition activities to date. We also reviewed our previous work on the ACV and EFV programs. In addition, we interviewed program and agency officials from the USMC's Advanced Amphibious Assault program office and Combat Development and Integration, Analysis Directorate; the Office of the Assistant Secretary of the Navy for Research, Development, and Acquisition; and the Office of the Secretary of Defense, Cost Assessment and Program Evaluation.

To determine the extent to which the 2014 ACV Analysis of Alternatives (AOA) demonstrated the use of best practices, we worked with USMC officials to identify the body of analyses that informed the 2014 AOA. Different pieces of each report or analysis in the full body of work were relevant to different best practices. Because the 2014 ACV AOA is part of a larger body of related work that informs the analysis, we then worked with GAO specialists to discuss the 22 AOA best practices and categorize each as either "individual" or "combined." Best practices labeled "individual" were assessed based only on the 2014 ACV Analysis of Alternatives final report. Best practices labeled "combined" were assessed against the full body of work that, according to USMC officials, has informed the analysis of alternatives process. We then compared the 22 best practices to the 2014 AOA or the full body of AOA analysis, as determined above.

We used a five-point scoring system to determine the extent to which the AOA conforms to best practices. To score each AOA process, (1) two GAO analysts separately examined the AOA documentation received from the agency and then agreed on a score for each of the 22 best practices, and then (2) a GAO AOA specialist independent of the engagement team reviewed the AOA documentation and the scores assigned by the analysts for accuracy and cross-checked the scores in all the analyses for consistency. We first used this scoring system to determine how well the AOA conformed to each best practice. We then used the average of the scores for the best practices in each of the four characteristics—well-documented, comprehensive, unbiased, and credible—to determine an overall score for each characteristic, as illustrated in the sketch below. We sent our draft analysis to DOD for review. DOD provided technical comments and additional documentation, which we incorporated to ensure that our analysis included all available information. We then used the same methodology and scoring process explained above to revise the analysis based on the technical comments and any additional evidence received. If the average score for each characteristic was "met" or "substantially met," we concluded that the AOA process conformed to best practices and therefore could be considered reliable.

To determine how the increments of ACV are to achieve amphibious capability, we reviewed program documentation from the ACV acquisition, including the acquisition strategy and the Concept of Employment, as well as program documentation for Navy surface connector programs, including the Ship to Shore Connector Capabilities Development Document and the Surface Connector Council charter. We also interviewed USMC officials from the Combat Development and Integration, Capabilities Development Directorate and Seabasing Integration Division, as well as U.S. Navy officials from the Naval Sea Systems Command.
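To make the scoring roll-up concrete, the following is a minimal sketch in Python of the computation described above. The five rating labels match those used in this report, but the numeric point values, the threshold value, the example groupings, and all names in the code (such as RATING_POINTS and characteristic_score) are our own illustrative assumptions rather than elements of the methodology documents; the sketch simply shows how per-practice ratings on a five-point scale could be averaged within each characteristic and compared against a "substantially met" threshold.

```python
# Illustrative sketch only: the point values, threshold, groupings, and
# example ratings below are assumptions for demonstration, not GAO data.

# Map each rating on the five-point scale to a numeric score (assumed 1-5).
RATING_POINTS = {
    "not met": 1,
    "minimally met": 2,
    "partially met": 3,
    "substantially met": 4,
    "met": 5,
}

# Hypothetical grouping of a few best practices under two characteristics.
CHARACTERISTICS = {
    "well-documented": ["document all assumptions", "develop AOA process plan"],
    "unbiased": ["ensure AOA process is impartial"],
}

# Hypothetical per-practice ratings agreed on by the two analysts.
practice_ratings = {
    "document all assumptions": "partially met",
    "develop AOA process plan": "met",
    "ensure AOA process is impartial": "met",
}

def characteristic_score(practices):
    """Average the numeric scores of the best practices in one characteristic."""
    points = [RATING_POINTS[practice_ratings[p]] for p in practices]
    return sum(points) / len(points)

# The AOA process is considered reliable if every characteristic's average
# reaches at least "substantially met" (assumed here to be a score of 4).
THRESHOLD = RATING_POINTS["substantially met"]

scores = {name: characteristic_score(ps) for name, ps in CHARACTERISTICS.items()}
reliable = all(score >= THRESHOLD for score in scores.values())

for name, score in scores.items():
    print(f"{name}: {score:.1f}")
print("AOA process considered reliable:", reliable)
```

Run as written, this example averages a "partially met" (3) and a "met" (5) rating to 4.0 for the well-documented characteristic, which meets the assumed threshold, so the sketch reports the process as reliable.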
To update and refine the AOA best practices identified in prior GAO work, we solicited comments from over 900 internal and external experts on how to improve the previous set of best practices. All comments and changes were vetted during three vetting sessions with internal GAO experts. The resulting changes include the consolidation of some best practices, which reduced their number from 24 to 22, and the establishment of the four characteristics that identify a high-quality, reliable AOA process.

Overall, DOD's ACV analysis of alternatives (AOA) met the best practices we identified. Table 3 below describes our analysis of DOD's AOA compared with best practices. Table 4 provides the average score of the best practices under each characteristic. See appendix I for an explanation of how individual best practices are grouped under each characteristic. Because the overall assessment ratings for each of the four characteristics are substantially met or met, we concluded that the AOA process conformed to best practices and can be considered reliable.

Key contributors to this report were Bruce H. Thomas, Assistant Director; Betsy Gregory-Hosler, Analyst-in-Charge; Zachary Sivo; Marie Ahearn; Brian Bothwell; Jennifer Echard; Kristine Hassinger; Katherine Lenane; Jennifer Leotta; David Richards; Karen Richey; Robert S. Swierczek; Hai Tran; and Ozzy Trevino.
The Marine Corps' ACV is intended to transport Marines from ship to shore and provide armored protection on land. It is intended to potentially replace all or a portion of the decades-old Assault Amphibious Vehicle (AAV) fleet and is expected to eventually offer increased amphibious capability and high water speed. The National Defense Authorization Act for Fiscal Year 2014 included a provision that GAO annually review and report on the ACV program until 2018. This report (1) provides an updated discussion of how the ACV program's efforts compare to acquisition best practices and (2) examines how the increments of ACV will achieve amphibious capability. To conduct this work, GAO reviewed program documentation and other materials for the ACV acquisition and Navy surface connector programs. GAO identified acquisition and analysis of alternatives best practices based on its prior body of work and DOD guidance. GAO also interviewed program and agency officials.

Most of the current activities of the U.S. Marine Corps' Amphibious Combat Vehicle (ACV) program have demonstrated the use of best practices, but plans for an accelerated acquisition schedule pose potential risks. As the program approaches the start of engineering and manufacturing development, it is seeking to rely on mature technologies that have been demonstrated to work in their intended environment, as well as fostering competition—a critical tool for achieving the best return on the government's investment. Further, GAO analyzed the ACV analysis of alternatives that the Marine Corps produced for the initial portion of the ACV development, finding that overall it met best practices by, for example, ensuring that the analysis of alternatives process was impartial. However, the Marine Corps is pursuing an accelerated program schedule that presents some risks, including plans to hold the preliminary design review after the start of development—a deviation from best practices that could postpone the attainment of information about whether the design performs as expected. Moreover, GAO believes that the level of planned concurrency—conducting development testing and production at the same time—could leave the program at greater risk of discovering deficiencies after some systems have already been built, potentially requiring costly modifications. Agency officials stated that mature technologies reduce risk and that, while some concurrency is planned, all required testing will be completed prior to the production decision. While some aspects of this acquisition do suggest lower levels of risk, these deviations could potentially increase program risk. GAO will continue to monitor this risk as the program moves forward.

The ACV program relies heavily on future plans to increase ACV amphibious capability gradually, in three planned increments known as ACV 1.1, 1.2, and 2.0, but exactly how this capability will be attained has not yet been determined.

ACV 1.1 – Although this increment is expected to have some amphibious capability, according to program documents, it is expected to rely on surface connector craft—vessels that enable the transportation of military assets from ship to shore. Marine Corps and U.S. Navy officials regularly coordinate ACV 1.1 plans to operate with the surface connector fleet through coordination mechanisms such as the Surface Connector Council.

ACV 1.2 – This increment is expected to have greater amphibious capability, including the ability to self-deploy from ships.
Based on demonstrations from related programs to date, program officials believe it will reach that capability but indicated that plans for ACV 1.2 are expected to depend on the success of ACV 1.1 development.

ACV 2.0 – This increment represents a future decision point at which the Marine Corps plans to determine how to replace the AAV fleet. The Marine Corps is currently exploring technologies that may enable high water speed—a significant increase from the amphibious goals identified for ACV 1.1. Therefore, how it will achieve the amphibious capability envisioned for ACV 2.0 is undetermined.

GAO is not making recommendations in this report. In commenting on a draft of this report, DOD stated that it believes its efforts are aligned with best practices and that GAO's report appears to underestimate ACV 1.1's planned technical maturity. GAO found that some program plans do not align with best practices and that, while some aspects of the acquisition do suggest lower levels of risk, these deviations could potentially increase program risk. GAO will continue to monitor these risks as the program moves forward.