Reservists are members of the seven reserve components, which provide trained and qualified persons available for active duty in the armed forces in time of war or national emergency. The Selected Reserve is the largest category of reservists and is designated as essential to wartime missions. The Selected Reserve is also the only category of reservists that is eligible for TRS. As of December 31, 2010, the Selected Reserve included 858,997 members dispersed among the seven reserve components, with about two-thirds belonging to the Army Reserve and the Army National Guard. See figure 1 for the number and percentage of Selected Reserve members within each reserve component. Additionally, about two-thirds of the Selected Reserve members are 35 years old or younger (64 percent) and about half are single (52 percent). (See fig. 2.) The NDAA for Fiscal Year 2005 authorized the TRS program and made TRICARE coverage available to certain members of the Selected Reserve. The program was subsequently expanded and restructured by the NDAAs for Fiscal Years 2006 and 2007--although additional program changes were made in subsequent years. In fiscal year 2005, to qualify for TRS, members of the Selected Reserve had to enter into an agreement with their respective reserve components to continue to serve in the Selected Reserve in exchange for TRS coverage, and they were given 1 year of TRS eligibility for every 90 days served in support of a contingency operation. The NDAA for Fiscal Year 2006, which became effective on October 1, 2006, expanded the program, and almost all members of the Selected Reserve and their dependents--regardless of their prior active duty service--had the option of purchasing TRICARE coverage through a monthly premium. 
The portion of the premium paid by members of the Selected Reserve and their dependents for TRS coverage varied based on certain qualifying conditions that had to be met, such as whether the member of the Selected Reserve also had access to an employer-sponsored health plan. The NDAA for Fiscal Year 2006 established two levels--which DOD called tiers--of qualification for TRS, in addition to the tier established by the NDAA for Fiscal Year 2005, with enrollees paying different portions of the premium based on the tier for which they qualified. The NDAA for Fiscal Year 2007 significantly restructured the TRS program by eliminating the three-tiered premium structure and establishing open enrollment for members of the Selected Reserve provided that they are not eligible for or currently enrolled in the FEHB Program. The act removed the requirement that members of the Selected Reserve sign service agreements to qualify for TRS. Instead, the act established that members of the Selected Reserve qualify for TRS for the duration of their service in the Selected Reserve. DOD implemented these changes on October 1, 2007. Generally, TRICARE provides its benefits through several options for its non-Medicare-eligible beneficiary population. These options vary according to TRICARE beneficiary enrollment requirements, the choices TRICARE beneficiaries have in selecting civilian and military treatment facility providers, and the amount TRICARE beneficiaries must contribute toward the cost of their care. Table 1 provides information about these options. Selected Reserve members have a cycle of coverage during which they are eligible for different TRICARE options based on their duty status--preactivation, active duty, deactivation, and inactive. 
During preactivation, when members of the Selected Reserve are notified that they will serve on active duty in support of a contingency operation in the near future, they and their families are eligible to enroll in TRICARE Prime, and therefore they do not need to purchase TRS coverage. This is commonly referred to as "early eligibility" and continues uninterrupted once members of the Selected Reserve begin active duty. While on active duty, members are required to enroll in TRICARE Prime. Similarly, during deactivation, for 180 days after returning from active duty in support of a contingency operation, members of the Selected Reserve are eligible for the Transitional Assistance Management Program, a program to ease the transition back to civilian life in which members and dependents can use the TRICARE Standard or Extra options. When members of the Selected Reserve return to inactive status, they can choose to purchase TRS coverage if eligible. As a result of the TRICARE coverage cycle and program eligibility requirements, TMA officials estimate that at any given time, fewer than half of the members of the Selected Reserve are qualified to purchase TRS. Currently, to qualify for TRS, a member of the Selected Reserve must not be eligible for the FEHB Program and must not have been notified that he or she will serve on active duty in support of a contingency operation, be serving on active duty, or have recently--that is, within 180 days--returned from active duty in support of a contingency operation. Of the more than 390,000 members eligible, about 67,000 members were enrolled in TRS as of December 31, 2010. (See fig. 3.) A number of different DOD entities have various responsibilities related to TRS. 
Within the Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Assistant Secretary of Defense for Reserve Affairs works with the seven reserve components to determine whether members of the Selected Reserve are eligible for TRS and to ensure that members have information about TRICARE, including TRS. Within TMA, the Warrior Support Branch is responsible for managing the TRS option, which includes developing policy and regulations. This office also works with TMA's Communication and Customer Service Division to develop educational materials for this program. The Assistant Secretary of Defense for Health Affairs oversees TMA and reports to the Under Secretary of Defense for Personnel and Readiness. TMA works with contractors to manage civilian health care and other services in each TRICARE region (North, South, and West). The contractors are required to establish and maintain sufficient networks of civilian providers within certain designated areas, called Prime Service Areas, to ensure access to civilian providers for all TRICARE beneficiaries, regardless of enrollment status or Medicare eligibility. They are also responsible for helping TRICARE beneficiaries locate providers and for informing and educating TRICARE beneficiaries and providers on all aspects of the TRICARE program, including TRS. TMA's TRICARE Regional Offices, located in each of the three TRICARE regions, are responsible for managing health care delivery for all TRICARE options in their respective geographic areas and overseeing the contractors, including monitoring network quality and adequacy, monitoring customer satisfaction outcomes, and coordinating appointment and referral management policies. DOD does not have reasonable assurance that members of the Selected Reserve are informed about TRS for several reasons. First, the reserve components do not have a centralized point of contact to ensure that members are educated about the program. 
Second, the contractors are challenged in their ability to educate the reserve component units in their respective regions because they do not have comprehensive information about the units in their areas of responsibility. And, finally, DOD cannot say with certainty whether Selected Reserve members are knowledgeable about TRS because the results of two surveys that gauged members' awareness of the program may not be representative of the Selected Reserve population because of low response rates. A 2007 policy from the Under Secretary of Defense for Personnel and Readiness designated the reserve components as having responsibility for providing information about TRS to members of the Selected Reserve at least once a year. When the policy was first issued, officials from the Office of Reserve Affairs--who have oversight responsibility for the reserve components--told us that they met with officials from each of the reserve components to discuss how the components would fulfill this responsibility. However, according to officials from the Office of Reserve Affairs, they have not met with the reserve components since 2008 to discuss how the components are fulfilling their TRS education responsibilities under the policy. These officials explained that they have experienced difficulties identifying a representative from each of the reserve components to attend meetings about TRS education. When we contacted officials from all seven reserve components to discuss TRS education, we had similar experiences. Three of the components had difficulties providing a point of contact. In fact, two of the components took several months to identify an official whom we could speak with about TRS education, and the other one had difficulties identifying someone who could answer our follow-up questions when our original point of contact was no longer available. Furthermore, officials from three of the seven components told us that they were not aware of this policy. 
Regardless of their knowledge of the 2007 policy, officials from all of the reserve components told us that education responsibilities are delegated to their unit commanders. These responsibilities include informing members about their health options, which would include TRS. All of the components provide various means of support to their unit commanders to help fulfill this responsibility. For example, three of the components provide information about TRICARE directly to their unit commanders or the commanders' designees through briefings. The four other components provide information to their unit commanders through other means, such as policy documents, Web sites, and newsletters. Additionally, while most of the components had someone designated to answer TRICARE benefit questions, only one of the reserve components had an official at the headquarters level designated as a central point of contact for TRICARE education, including TRS. This official told us that he was unaware of the specific 2007 TRS education policy; however, he said his responsibilities for TRS education include developing annual communication plans, providing briefings to unit commanders, and publishing articles in the Air Force magazine about TRS. Designating a point of contact is important because a key factor in meeting standards for internal control in federal agencies is defining and assigning key areas of authority and responsibility--such as a point of contact for a specific policy. Without a point of contact to ensure that this policy is implemented, the reserve components are running the risk that some of their Selected Reserve members may not be receiving information about the TRS program--especially since some of the reserve component officials we met with were unaware of the policy. The TRICARE contractors are required to provide an annual briefing about TRS to each reserve component unit in their regions, including both Reserve and National Guard units. 
All three contractors told us that they maintain education representatives who are responsible for educating members of the Selected Reserve on TRS. These representatives conduct unit outreach and provide information to members of the Selected Reserve at any time during predeployment and demobilization, at family events, and during drill weekends. The contractors use briefing materials maintained by TMA and posted on the TRICARE Web site. In addition to conducting briefings, the three contractors have increased their outreach efforts in various ways, including creating an online tutorial that explains TRS, mailing TRS information to Selected Reserve members, and working closely with Family Program coordinators to provide TRS information to family members. However, the contractors are challenged in their ability to meet their requirement for briefing all units annually. First, they typically provide briefings to units upon request because this approach is practical based upon units' schedules and availability. For example, officials from one contractor told us that even though they know when geographically dispersed units will be gathering in one location, these units have busy schedules and may not have time for the contractor to provide a briefing. Each contractor records the briefings that are requested, when the briefing requests were fulfilled and by whom, and any questions or concerns that resulted from the briefings. However, some unit commanders do not request briefings from the contractors. For example, officials with one reserve component told us that they do not rely on the contractor to brief units because they were unaware that the contractors provided this service. In addition, these officials as well as officials from another reserve component told us that they did not know if their unit commanders were aware that they could request briefings from the contractors. 
All of the contractors told us that they conduct outreach to offer information to some of the units that have not requested a briefing, including both calling units to offer a briefing and providing materials. They added that more outreach is conducted to National Guard units because they are able to obtain information about these units from state officials. The TRICARE Regional Offices also told us that they conduct outreach to units to let them know that the contractor is available to brief the units about TRS. However, outreach by the contractor and the TRICARE Regional Offices does not guarantee that a unit will request a briefing. Furthermore, while contractors are aware of some units in their regions, they do not have access to comprehensive lists of all reserve component units in their regions because the Web site links containing unit information that TMA originally provided to the contractors have become inactive. As a result, the contractors are not able to verify whether all units in their regions have received briefings. Officials from the Office of Reserve Affairs told us that reserve components report unit information to the Defense Manpower Data Center (DMDC), which maintains personnel information about all members of the military. However, these officials raised concerns about the accuracy of this information because it could be about 3 to 6 months old and may not be comprehensive. Officials at the Office of Reserve Affairs told us that the reserve components would likely have more up-to-date information about their units, as they are responsible for reporting this information to DMDC. However, officials from TMA, the TRICARE Regional Offices, and contractors also told us that a comprehensive list of units would be difficult to maintain because the unit structure changes frequently. 
Despite the challenges contractors face, officials with TMA's Warrior Support Branch told us that they are satisfied with the contractors' efforts to provide TRS briefings to the reserve component units in their regions. However, because officials do not know which units have been briefed on the program, there is a risk that some reserve component members are not receiving sufficient information on TRS and may not be taking advantage of coverage available to them. DOD has conducted two surveys that gauge whether members of the Selected Reserve are aware of TRS, among other issues. In 2008, TMA conducted the Focused Survey of TRICARE Reserve Select and Selected Reserve Military Health System Access and Satisfaction to better understand reserve component members' motivation for enrolling in TRS and to compare TRS enrollees' satisfaction with and access to health care services with that of other beneficiary groups. In reporting the results of this survey to Congress in February 2009, TMA stated that lack of awareness was an important factor in why eligible members of the Selected Reserve did not enroll in TRS. TMA also reported that less than half of the eligible Selected Reserve members who were not enrolled in TRS were aware of the program. However, the survey's response rate was almost 18 percent, and such a low response rate decreases the likelihood that the survey results were representative of the views and characteristics of the Selected Reserve population. According to the Office of Management and Budget's standards for statistical surveys, a nonresponse analysis is recommended for surveys with response rates lower than 80 percent to determine whether the responses are representative of the surveyed population. 
Accordingly, TMA conducted a nonresponse analysis to determine whether the survey responses it received were representative of the surveyed population, and the analysis identified substantial differences between the original respondents and the follow-up respondents. As a result of the differences found in the nonresponse analysis, TMA adjusted the statistical weighting techniques for nonresponse bias and applied the weights to the data before drawing conclusions and reporting the results. DMDC conducts a quarterly survey, called the Status of Forces Survey, which is directed to all members of the military services. DMDC conducts several versions of this survey, including a version for members of the reserve components. This survey focuses on different issues at different points in time. For example, every other year the survey includes questions on health benefits, including questions on whether members of the reserve components are aware of TRICARE, including TRS. In July 2010, we issued a report raising concerns about the reliability of DOD's Status of Forces Surveys because they generally have a 25 to 42 percent response rate, and DMDC has not been conducting nonresponse analyses to determine whether the surveys' results are representative of the target population. We recommended that DMDC develop and implement guidance both for conducting a nonresponse analysis and using the results of this analysis to inform DMDC's statistical weighting techniques, as part of the collection and analysis of the Status of Forces Survey results. DOD concurred with this recommendation, but as of January 2011, had not implemented it. DOD monitors access to civilian providers under TRS in conjunction with monitoring efforts related to the TRICARE Standard and Extra options. 
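The weighting adjustment described above can be illustrated with a minimal sketch. The demographic cells, counts, and simple cell-ratio scheme below are hypothetical illustrations of how nonresponse weighting works in general, not TMA's actual method or data:

```python
# Hypothetical nonresponse weighting: scale weights within each demographic
# cell so that weighted respondents reproduce known population counts.
# Cells and counts below are illustrative, not actual survey data.
population = {"age_under_35": 550_000, "age_35_and_over": 310_000}
respondents = {"age_under_35": 60_000, "age_35_and_over": 95_000}

# Adjustment weight per cell = population count / respondent count, so
# underrepresented cells (here, younger members) count more per response.
weights = {cell: population[cell] / respondents[cell] for cell in population}

for cell in population:
    # Weighted respondents now match the population total in every cell.
    assert round(weights[cell] * respondents[cell]) == population[cell]
```

In practice such adjustments are more elaborate (more cells, trimming of extreme weights), but the principle is the same: estimates computed with the adjusted weights are less biased when respondents differ systematically from nonrespondents along the cell variables.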
In addition, during the course of our review, TMA initiated additional efforts that specifically examine access to civilian providers for TRS beneficiaries and the Selected Reserve population, including mapping the locations of Selected Reserve members in relation to areas with TRICARE provider networks. Because TRS is the same benefit as the TRICARE Standard and Extra options, DOD monitors TRS beneficiaries' access to civilian providers as a part of monitoring access to civilian providers for beneficiaries who use TRICARE Standard and Extra. As we have recently reported, in the absence of access-to-care standards for these options, TMA has mainly used feedback mechanisms to gauge access to civilian providers for these beneficiaries. For example, in response to a mandate included in the NDAA for Fiscal Year 2008, DOD has completed 2 years of a multiyear survey of beneficiaries who use the TRICARE Standard, TRICARE Extra, and TRS options and 2 years of its second multiyear survey of civilian providers. Congress required that these surveys obtain information on access to care and that DOD give a high priority to locations having high concentrations of Selected Reserve members. In March 2010, we reported that TMA generally addressed the methodological requirements outlined in the mandate during the implementation of the first year of the multiyear surveys. While TMA did not give a high priority to locations with high concentrations of Selected Reserve members, TMA's methodological approach over the 4-year survey period will cover the entire United States, including areas with high concentrations of Selected Reserve members. In February 2010, TMA directed the TRICARE Regional Offices to monitor access to civilian providers for TRICARE Standard, TRICARE Extra, and TRS beneficiaries through the development of a model that can be used to identify geographic areas where beneficiaries may experience access problems. 
As of May 2010, each of the TRICARE Regional Offices had implemented an initial model appropriate to its region. These models include, for example, data on area populations, provider types, and potential provider shortages for the general population. Officials at each regional office said that their models are useful but noted that they are evolving and will be updated. To determine whether jointly monitoring access to civilian providers for TRS beneficiaries along with TRICARE Standard and Extra beneficiaries was reasonable, we asked TMA to perform an analysis of claims (for fiscal years 2008, 2009, and 2010) to identify differences in age demographics and health care utilization between these beneficiary groups. This analysis found that although the age demographics for these populations were different--more than half of the TRS beneficiaries were age 29 and under, while more than half of the TRICARE Standard and Extra beneficiaries were over 45--both groups otherwise shared similarities in their health care utilization. Both beneficiary groups had similar diagnoses, used the same types of specialty providers, and used similar proportions of mental health care, primary care, and specialty care. (See fig. 4.) Specifically: 
Seven of the top 10 diagnoses for both TRS and TRICARE Standard and Extra beneficiaries were the same. Three of these diagnoses--allergic rhinitis, joint disorder, and back disorder--made up more than 20 percent of claims for both beneficiary groups. 
The five provider specialties that filed the most claims for both beneficiary groups were the same--family practice, physical therapy, allergy, internal medicine, and pediatrics. Furthermore, the majority of claims filed for both beneficiary groups were filed by family practice providers. 
Both beneficiary groups had the same percentage of claims filed for mental health care and similar percentages for primary care and other specialty care. (See app. 
II for additional details on the results of this claims analysis.) Based on this analysis, jointly monitoring access for TRS beneficiaries and TRICARE Standard and Extra beneficiaries appears to be a reasonable approach. DOD has taken steps to evaluate access to civilian providers for the Selected Reserve population and TRS beneficiaries separately from other TRICARE beneficiaries. Specifically, during the course of our review, TMA initiated the following efforts: During the fall of 2010, TMA officials analyzed the locations of Selected Reserve members and their families, including TRS beneficiaries, to determine what percentage of them live within TRICARE's Prime Service Areas (areas in which the managed care contractors are required to establish and maintain sufficient networks of civilian providers). According to these data, as of August 31, 2010, over 80 percent of Selected Reserve members and their families lived in Prime Service Areas: 100 percent in the South region, which is all Prime Service Areas, and over 70 percent in the North and West regions. TMA officials told us that they are repeating the Focused Survey of TRICARE Reserve Select and Selected Reserve Military Health System Access and Satisfaction, which was first conducted in 2008. Using results from its first survey, TMA reported to Congress in February 2009 that members of the Selected Reserve who were enrolled in TRS were pleased with the access and quality of care under their plan. However, as we have noted, the response rate for this survey was almost 18 percent, although TMA took steps to adjust the data prior to reporting the results. Officials told us that the follow-up survey will focus on whether access to care for TRS beneficiaries has changed. Officials sent the survey instrument to participants in January 2011 and anticipate that results will be available during the summer of 2011. TRS is an important option for members of the Selected Reserve. 
However, educating this population about TRS has been challenging, and despite efforts by the reserve components and the contractors, some members of the Selected Reserve are likely still unaware of this option. Most of the reserve components lack centralized accountability for TRS education, making it unclear if all members are getting information about the program--a concern that is further exacerbated by the lack of awareness about the TRS education policy among officials from some of the reserve components. Additionally, the contractors' limitations in briefing all of the units in their regions about TRS make each component's need for a central point of contact more evident. Without centralized accountability, the reserve components do not have assurance that all members of the Selected Reserve who may need TRS have the information they need to take advantage of the health care options available to them. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Reserve Affairs to develop a policy that requires each reserve component to designate a centralized point of contact for TRS education, who will be accountable for ensuring that the reserve components are providing information about TRS to their Selected Reserve members annually. In establishing responsibilities for the centralized points of contact, DOD should explicitly task them with coordinating with their respective TRICARE Regional Offices to ensure that contractors are provided information on the number and location of reserve component units in their regions. In commenting on a draft of this report, DOD partially concurred with our recommendation. (DOD's comments are reprinted in app. III.) 
Specifically, DOD agreed that the Assistant Secretary of Defense for Reserve Affairs should develop a policy that requires each of the seven reserve components to designate a central point of contact for TRS education that will be accountable for providing information about TRS to their Selected Reserve members annually. However, DOD countered that each designee should coordinate the provision of reserve unit information through the TRICARE Regional Offices rather than communicating directly with the TRICARE contractors, noting that the TRICARE Regional Offices have oversight responsibility for the contractors in their respective regions. We understand the department's concern about coordinating contractor communications through the TRICARE Regional Offices, and we have modified our recommendation accordingly. DOD also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Defense and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. We asked the TRICARE Management Activity (TMA) to conduct an analysis of claims filed for TRICARE Reserve Select (TRS) beneficiaries and TRICARE Standard and Extra beneficiaries. We requested claims data for the most recent three complete fiscal years--2008, 2009, and 2010-- based on the fact that the program last experienced changes with eligibility and premiums in fiscal year 2007. For the purpose of this analysis, claims consist of all services provided by a professional in an office or other setting outside of an institution. 
Records of services rendered at a hospital or other institution were excluded from this analysis. In addition, records for medical supplies and from chiropractors and pharmacies were also excluded. We asked TMA to conduct the following comparative analyses for TRS beneficiaries and TRICARE Standard and Extra beneficiaries: 
1. Demographics, including age for each year and averaged over 3 years 
2. Percentage of claims filed for primary care, mental health, and other specialists each year for 3 years 
3. The top 10 procedures in ranking order made each year and the average over 3 years 
4. The top 10 primary diagnoses in ranking order made each year and the average over 3 years 
5. The top five provider specialties in ranking order visited each year and the average over 3 years 
6. Percentage of claims filed for the top five provider specialties and the average over 3 years 
To ensure that TMA's data were sufficiently reliable, we conducted data reliability assessments of the data sets that we used to assess their quality and methodological soundness. Our review consisted of (1) examining documents that described the respective data, (2) interviewing TMA officials about the data collection and analysis processes, and (3) interviewing TMA officials about internal controls in place to ensure that data are complete and accurate. We found that all of the data sets used in this report were sufficiently reliable for our purposes. However, we did not independently verify TMA's calculations. Tables 2 through 5 contain information on claims filed for TRICARE Reserve Select and TRICARE Standard and Extra beneficiaries. In addition to the contact named above, Bonnie Anderson, Assistant Director; Danielle Bernstein; Susannah Bloch; Ashley Dean; Lisa Motley; Jessica Smith; and Suzanne Worth made key contributions to this report.
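The ranking-and-overlap comparison at the heart of this claims analysis can be sketched in a few lines. The claim records and diagnosis names below are illustrative placeholders, not TMA's claims data, and the list is truncated to a top 3 for brevity:

```python
from collections import Counter

# Illustrative claim records (one diagnosis per claim); not actual TMA data.
trs_claims = (["allergic rhinitis"] * 9 + ["joint disorder"] * 7
              + ["back disorder"] * 6 + ["hypertension"] * 3)
standard_extra_claims = (["allergic rhinitis"] * 8 + ["back disorder"] * 7
                         + ["joint disorder"] * 5 + ["diabetes"] * 4)

def top_diagnoses(claims, n=3):
    """Return the n most frequent diagnoses, most common first."""
    return [dx for dx, _ in Counter(claims).most_common(n)]

# Diagnoses appearing in both groups' top lists, analogous to the report's
# finding that 7 of the top 10 diagnoses were shared between the groups.
shared = set(top_diagnoses(trs_claims)) & set(top_diagnoses(standard_extra_claims))
print(sorted(shared))  # → ['allergic rhinitis', 'back disorder', 'joint disorder']
```

The same ranking step applies to the procedure and provider-specialty comparisons; only the field being counted changes.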
TRICARE Reserve Select (TRS) provides certain members of the Selected Reserve--reservists considered essential to wartime missions--with the ability to purchase health care coverage under the Department of Defense's (DOD) TRICARE program after their active duty coverage expires. TRS is similar to TRICARE Standard, a fee-for-service option, and TRICARE Extra, a preferred provider option. The National Defense Authorization Act for Fiscal Year 2008 directed GAO to review TRS education and access to care for TRS beneficiaries. This report examines (1) how DOD ensures that members of the Selected Reserve are informed about TRS and (2) how DOD monitors and evaluates access to civilian providers for TRS beneficiaries. GAO reviewed and analyzed documents and evaluated an analysis of claims conducted by DOD. GAO also interviewed officials with the TRICARE Management Activity (TMA), the DOD entity responsible for managing TRICARE; the regional TRICARE contractors; the Office of Reserve Affairs; and the seven reserve components. DOD does not have reasonable assurance that Selected Reserve members are informed about TRS. A 2007 policy designated the reserve components as having responsibility for providing information about TRS to Selected Reserve members on an annual basis; however, officials from three of the seven components told GAO that they were unaware of this policy. Additionally, only one of the reserve components had a designated official at the headquarters level acting as a central point of contact for TRICARE education, including TRS. Without centralized responsibility for TRS education, the reserve components cannot ensure that all eligible Selected Reserve members are receiving information about the TRS program. 
Compounding this, the managed care support contractors that manage civilian health care are limited in their ability to educate all reserve component units in their regions as required by their contracts because they do not have access to comprehensive information about these units, and some units choose not to use the contractors to help educate their members about TRS. Nonetheless, DOD officials stated that they were satisfied with the contractors' efforts to educate units upon request and to conduct outreach. Lastly, it is difficult to determine whether Selected Reserve members are knowledgeable about TRS because the results of two DOD surveys that gauged members' awareness of the program may not be representative because of low response rates. Because TRS is the same benefit as the TRICARE Standard and Extra options, DOD monitors access to civilian providers for TRS beneficiaries in conjunction with TRICARE Standard and Extra beneficiaries. DOD has mainly used feedback mechanisms, such as surveys, to gauge access to civilian providers for these beneficiaries in the absence of access standards for these options. GAO found that jointly monitoring access for these two beneficiary groups is reasonable because a claims analysis showed that TRS beneficiaries and TRICARE Standard and Extra beneficiaries had similar health care utilization. Also, during the course of GAO's review, TMA initiated other efforts that specifically evaluated access to civilian providers for the Selected Reserve population and TRS beneficiaries, including mapping the locations of Selected Reserve members in relation to areas with TRICARE provider networks. GAO recommends that the Secretary of Defense direct the Assistant Secretary of Defense for Reserve Affairs to develop a policy requiring each reserve component to designate a centralized point of contact for TRS education. DOD partially concurred with this recommendation, citing a concern about regional coordination. 
GAO modified the recommendation.
Because no drug is absolutely safe, FDA approves a drug for marketing when the agency judges that its known benefits outweigh its known risks. After a drug is on the market, FDA continues to assess its risks and benefits. FDA reviews reports of adverse drug reactions (adverse events) related to the drug and information from clinical studies about the drug that are conducted by the drug's sponsor. FDA also reviews adverse events from studies that follow the use of drugs in ongoing medical care (observational studies) that are carried out by the drug's sponsor, FDA, or other researchers. If FDA has information that a drug on the market may pose a significant health risk to consumers, it weighs the effect of the adverse events against the benefit of the drug to determine what actions, if any, are warranted. The decision-making process for postmarket drug safety is complex, involving input from a variety of FDA staff and organizational units and information sources, but the central focus of the process is the iterative interaction between OND and ODS. OND is a much larger office than ODS. In fiscal year 2005, OND had 715 staff and expenditures of $110.6 million. More than half of OND's expenditures in fiscal year 2005, or $57.2 million, came from user fees paid by drug sponsors under the Prescription Drug User Fee Amendments of 2002. ODS had 106 staff in fiscal year 2005 and expenditures of $26.9 million, with $7.6 million from prescription drug user fees. After a drug is on the market, OND staff receive information about safety issues in several ways. First, OND staff receive notification of adverse event reports for drugs to which they are assigned and they review the periodic adverse event reports that are submitted by drug sponsors. Second, OND staff review safety information that is submitted to FDA when a sponsor seeks approval for a new use or formulation of a drug, and monitor completion of postmarket studies. 
When consulting with OND on a safety issue, ODS staff search for all relevant case reports of adverse events and assess them to determine whether or not the drug caused the adverse event and whether there are any common trends or risk factors. ODS staff might also use information from observational studies and drug use analyses to analyze the safety issue. When completed, ODS staff summarize their analysis in a written consult. According to FDA officials, OND staff within the review divisions usually decide what regulatory action should occur, if any, by considering the results of the safety analysis in the context of other factors such as the availability of other similar drugs and the severity of the condition the drug is designed to treat. Then, if necessary, OND staff make a decision about what action should be taken. Several CDER staff, including staff from OND and ODS, told us that most of the time there is agreement within FDA about what safety actions should be taken. At other times, however, OND and ODS staff disagree about whether the postmarket data are adequate to establish the existence of a safety problem or support a recommended regulatory action. In those cases, OND staff sometimes request additional analyses by ODS and sometimes there is involvement from other FDA organizations. In some cases, OND seeks the advice of FDA's scientific advisory committees, which are composed of experts and consumer representatives from outside FDA. In 2002, FDA established the Drug Safety and Risk Management Advisory Committee, 1 of the 16 human-drug-related scientific advisory committees, to specifically advise FDA on drug safety and risk management issues. The recommendations of the advisory committees do not bind the agency to any decision. FDA has the authority to withdraw the approval of a drug on the market for safety-related and other reasons, although it rarely does so. 
In almost all cases of drug withdrawals for safety reasons, the drug's sponsor has voluntarily removed the drug from the market. For example, in 2001 Baycol's sponsor voluntarily withdrew the drug from the market after meeting with FDA to discuss reports of adverse events, including some reports of fatalities. FDA does not have explicit authority to require that drug sponsors take other safety actions; however, when FDA identifies a potential problem, sponsors generally negotiate with FDA to develop a mutually agreeable remedy to avoid other regulatory action. Negotiations may result in revised drug labeling or restricted distribution. FDA has limited authority to require that sponsors conduct postmarket safety studies. In our March 2006 report, we found that FDA's postmarket drug safety decision-making process was limited by a lack of clarity, insufficient oversight by management, and data constraints. We observed that there was a lack of established criteria for determining what safety actions to take and when, and aspects of ODS's role in the process were unclear. A lack of communication between ODS and OND's review divisions and limited oversight of postmarket drug safety issues by ODS management hindered the decision-making process. FDA's decisions regarding postmarket drug safety have also been made more difficult by the constraints it faces in obtaining data. While acknowledging the complexity of the postmarket drug safety decision-making process, we found through our interviews with OND and ODS staff and in our case studies that the process lacked clarity about how drug safety decisions were made and about the role of ODS. If FDA had established criteria for determining what safety actions to take and when, then some of the disagreements we observed in our case studies might have been resolved more quickly. 
In the absence of established criteria, several FDA officials told us that decisions about safety actions were often based on the case-by-case judgments of the individuals reviewing the data. Our observations were consistent with two previous internal FDA reports on the agency's internal deliberations regarding Propulsid and the diabetes drug Rezulin. In those reviews FDA indicated that an absence of established criteria for determining what safety actions to take, and when to take them, posed a challenge for making postmarket drug safety decisions. We also found that ODS's role in scientific advisory committee meetings was unclear. According to the OND Director, OND is responsible for setting the agenda for the advisory committee meetings, with the exception of the Drug Safety and Risk Management Advisory Committee. This includes who is to present and what issues will be discussed by the advisory committees. For the advisory committees (other than the Drug Safety and Risk Management Advisory Committee) it was unclear when ODS staff would participate. A lack of communication between ODS and OND's review divisions and limited oversight of postmarket drug safety issues by ODS management also hindered the decision-making process. ODS and OND staff often described their relationship with each other as generally collaborative, with effective communication, but both ODS and OND staff told us that there had been communication problems on some occasions, and that this had been an ongoing concern. For example, according to some ODS staff, OND did not always adequately communicate the key question or point of interest to ODS when it requested a consult, and as ODS worked on the consult there was sometimes little interaction between the two offices. After a consult was completed and sent to OND, ODS staff reported that OND sometimes did not respond in a timely manner or at all. Several ODS staff characterized this as consults falling into a "black hole" or "abyss." 
OND's Director told us that OND staff probably do not "close the loop" in responding to ODS's consults, which includes explaining why certain ODS recommendations were not followed. In some cases CDER managers and OND staff criticized the methods used in ODS consults and told us that the consults were too lengthy and academic. ODS management had not effectively overseen postmarket drug safety issues, and as a result, it was unclear how FDA could know that important safety concerns had been addressed and resolved in a timely manner. A former ODS Director told us that the small size of ODS's management team presented a challenge for effective oversight of postmarket drug safety issues. Another problem was the lack of systematic information on drug safety issues. According to the ODS Director, ODS maintained a database of consults that provided some information about the consults that ODS staff conducted, but it did not include information about whether ODS staff made recommendations for safety actions and how the safety issues were handled and resolved, such as whether recommended safety actions were implemented by OND. Data constraints--such as weaknesses in data sources and FDA's limited ability to require certain studies and obtain additional data--have contributed to FDA's difficulty in making postmarket drug safety decisions. OND and ODS have used three different sources of data to make postmarket drug safety decisions, including adverse event reports, clinical trial studies, and observational studies. While data from each source have weaknesses that have contributed to the difficulty in making postmarket drug safety decisions, evidence from more than one source can help inform the postmarket decision-making process. The availability of these data sources has been constrained, however, because of FDA's limited authority to require drug sponsors to conduct postmarket studies and its resources. 
While decisions about postmarket drug safety have often been based on adverse event reports, FDA cannot establish the true frequency of adverse events in the population with data from adverse event reports. The inability to calculate the true frequency makes it hard to establish the magnitude of a safety problem, and comparisons of risks across similar drugs are difficult. In addition, it is difficult to attribute adverse events to particular drugs when there is a relatively high incidence rate in the population for the medical condition. It is also difficult to attribute adverse events to the use of particular drugs because data from adverse event reports may have been confounded by other factors, such as other drug exposures. FDA can also use available data from clinical trials and observational studies to support postmarket drug safety decisions. Although each source presents weaknesses that constrain the usefulness of the data provided, having data from more than one source can help improve FDA's decision-making ability. Clinical trials, in particular randomized clinical trials, are considered the "gold standard" for assessing evidence about efficacy and safety because they are considered the strongest method by which one can determine whether new drugs work. However, clinical trials also have weaknesses. Clinical trials typically have too few enrolled patients to detect serious adverse events associated with a drug that occur relatively infrequently in the population being studied. They are usually carried out on homogenous populations of patients that often do not reflect the types of patients who will actually take the drugs. For example, they do not often include those who have other medical problems or take other medications. In addition, clinical trials are often too short in duration to identify adverse events that may occur only after long use of the drug.
This is particularly important for drugs used to treat chronic conditions where patients are taking the medications for the long term. Observational studies, which use data obtained from population-based sources, can provide FDA with information about the population effect and risk associated with the use of a particular drug. We have found that FDA's access to postmarket clinical trial and observational data is limited by its authority and available resources. FDA does not have broad authority to require that a drug sponsor conduct an observational study or clinical trial for the purpose of investigating a specific postmarket safety concern. One senior FDA official and several outside drug safety experts told us that FDA needs greater authority to require such studies. Long-term clinical trials may be needed to answer safety questions about risks associated with the long-term use of drugs. For example, during a February 2005 scientific advisory committee meeting, some FDA staff and committee members indicated that there was a need for better information on the long-term use of anti-inflammatory drugs and discussed how a long-term trial might be designed to study the cardiovascular risks associated with the use of these drugs. Lacking specific authority to require drug sponsors to conduct postmarket studies, FDA has often relied on drug sponsors voluntarily agreeing to conduct these studies. But the postmarket studies that drug sponsors have agreed to conduct have not consistently been completed. One study estimated that the completion rate of postmarket studies, including those that sponsors had voluntarily agreed to conduct, rose from 17 percent in the mid-1980s to 24 percent between 1991 and 2003. FDA has little leverage to ensure that these studies are carried out. 
In terms of resource limitations, several FDA staff (including CDER managers) and outside drug safety experts told us that in the past ODS has not had enough resources for cooperative agreements to support its postmarket drug surveillance program. Under the cooperative agreement program, FDA collaborated with outside researchers in order to access a wide range of population-based data and conduct research on drug safety. Annual funding for this program was less than $1 million from fiscal year 2002 through fiscal year 2005. In 2006, FDA awarded four contracts for a total cost of $1.6 million per year to replace the cooperative agreements. Prior to the completion of our March 2006 report, FDA began several initiatives to improve its postmarket drug safety decision-making process. Most prominently, FDA commissioned the IOM to convene a committee of experts to assess the current system for evaluating postmarket drug safety, including FDA's oversight of postmarket safety and its processes. IOM issued its report in September 2006. FDA also had underway several organizational changes that we discussed in our 2006 report. For example, FDA established the Drug Safety Oversight Board to help provide oversight and advice to the CDER Director on the management of important safety issues. The board is involved with ensuring that broader safety issues, such as ongoing delays in changing a label, are effectively resolved. FDA also drafted a policy that was designed to ensure that all major postmarket safety recommendations would be discussed by involved OND and ODS managers, beginning at the division level, and documented. FDA implemented a pilot program for dispute resolution that is designed for individual CDER staff to have their views heard when they disagree with a decision that could have a significant negative effect on public health. 
Because the CDER Director is involved in determining whether the process will be initiated, appoints a panel chair to review the case, and makes the final decision on how the dispute should be resolved, we found that the pilot program does not offer CDER staff an independent forum for resolving disputes. FDA also began to explore ways to access additional data sources that it can obtain under its current authority, such as data on Medicare beneficiaries' experience with prescription drugs covered under the prescription drug benefit. Since our report, FDA has made efforts to improve its postmarket safety decision-making and oversight process. In its written response to the IOM recommendations, FDA agreed with the goal of many of the recommendations made by GAO and IOM. In that response, FDA stated that it would take steps to improve the "culture of safety" in CDER, reduce tension between preapproval and postapproval staff, clarify the roles and responsibilities of pre- and postmarket staff, and improve methods for resolving scientific disagreements. FDA has also begun several initiatives since our March 2006 report that we believe could address three of our four recommendations. Because none of these initiatives were fully implemented as of May 2007, it was too early to evaluate their effectiveness. To make the postmarket safety decision-making process clearer and more effective, we recommended that FDA revise and implement its draft policy on major postmarket drug safety decisions. CDER has made revisions to the draft policy, but has not yet finalized and implemented it. CDER's Associate Director for Safety Policy and Communication told us that the draft policy provides guidance for making major postmarket safety decisions, including identifying the decision-making officials for safety actions and ensuring that the views of involved FDA staff are documented. 
According to the Associate Director, the revised draft does not now discuss decisions for more limited safety actions, such as adding a boxed warning to a drug's label. As a result, fewer postmarket safety recommendations would be required to be discussed by involved OND and ODS managers than envisioned in the draft policy we reviewed for our 2006 report. Separately, FDA has instituted some procedures that are consistent with the goals of the draft policy. For example, ODS staff now participate in regular, bimonthly safety meetings with each of the review divisions in OND. To help resolve disagreements over safety decisions, we recommended that FDA improve CDER's dispute resolution process by revising the pilot program to increase its independence. FDA had not revised its pilot dispute resolution program as of May 2007, and FDA officials told us that the existing program had not been used by any CDER staff member. To make the postmarket safety decision-making process clearer, we recommended that FDA clarify ODS's role in FDA's scientific advisory committee meetings involving postmarket drug safety issues. According to an FDA official, the agency intends to draft, but had not yet drafted, a policy that will describe what safety information should be presented and how such information should be presented at scientific advisory committee meetings. The policy is also expected to clarify ODS's role in planning for, and participating in, meetings of FDA's scientific advisory committees. To help ensure that safety concerns were addressed and resolved in a timely manner, we recommended that FDA establish a mechanism for systematically tracking ODS's recommendations and subsequent safety actions. As of May 2007, FDA was in the process of implementing the Document Archiving, Reporting and Regulatory Tracking System (DARRTS) to track such information on postmarket drug safety issues. Among many other uses, DARRTS will track ODS's safety recommendations and the responses to them.
We also suggested in our report that Congress consider expanding FDA's authority to require drug sponsors to conduct postmarket studies in order to ensure that the agency has the necessary information, such as clinical trial and observational data, to make postmarket decisions. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact Marcia Crosse at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Martin T. Gahart, Assistant Director; Pamela Dooley; and Cathleen Hamann made key contributions to this statement. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2004, several high-profile drug safety cases raised concerns about the Food and Drug Administration's (FDA) ability to manage postmarket drug safety issues. In some cases there were disagreements within FDA about how to address these issues. GAO was asked to testify on FDA's oversight of drug safety. This testimony is based on Drug Safety: Improvement Needed in FDA's Postmarket Decision-making and Oversight Process, GAO-06-402 (Mar. 31, 2006). The report focused on the complex interaction between two offices within FDA that are involved in postmarket drug safety activities: the Office of New Drugs (OND), and the Office of Drug Safety (ODS). OND's primary responsibility is to review new drug applications, but it is also involved in monitoring the safety of marketed drugs. ODS is focused primarily on postmarket drug safety issues. ODS is now called the Office of Surveillance and Epidemiology. For its report, GAO reviewed FDA policies, interviewed FDA staff, and conducted case studies of four drugs with safety issues: Arava, Baycol, Bextra, and Propulsid. To gather information on FDA's initiatives since March 2006 to improve its decision-making process for this testimony, GAO interviewed FDA officials in February and March 2007, and received updated information from FDA in May 2007. In its March 2006 report, GAO found that FDA lacked clear and effective processes for making decisions about, and providing management oversight of, postmarket drug safety issues. There was a lack of clarity about how decisions were made and about organizational roles, insufficient oversight by management, and data constraints. GAO observed that there was a lack of criteria for determining what safety actions to take and when to take them. Insufficient communication between ODS and OND hindered the decision-making process. ODS management did not systematically track information about ongoing postmarket safety issues, including the recommendations that ODS staff made for safety actions. 
GAO also found that FDA faced data constraints that contributed to the difficulty in making postmarket safety decisions. GAO found that FDA's access to data was constrained by both its limited authority to require drug sponsors to conduct postmarket studies and its limited resources for acquiring data from other external sources. During the course of GAO's work for its March 2006 report, FDA began a variety of initiatives to improve its postmarket drug safety decision-making process, including the establishment of the Drug Safety Oversight Board. FDA also commissioned the Institute of Medicine to examine the drug safety system, including FDA's oversight of postmarket drug safety. GAO recommended in its March 2006 report that FDA take four steps to improve its decision-making process for postmarket safety. GAO recommended that FDA revise and implement its draft policy on the decision-making process for major postmarket safety actions, improve its process to resolve disagreements over safety decisions, clarify ODS's role in scientific advisory committees, and systematically track postmarket drug safety issues. FDA has initiatives underway and under consideration that, if implemented, could address three of GAO's four recommendations. In the 2006 report GAO also suggested that Congress consider expanding FDA's authority to require drug sponsors to conduct postmarket studies, as needed, to collect additional data on drug safety concerns.
Individuals needing long-term care have varying degrees of difficulty in performing some activities of daily living without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may also have trouble with instrumental activities of daily living, which include such tasks as preparing food, housekeeping, and handling finances. They may have a mental impairment, such as Alzheimer's disease, that necessitates supervision to avoid harming themselves or others or need assistance with tasks such as taking medications. Although a physical or mental disability may occur at any age, the older an individual becomes, the more likely it is that a disabling condition will develop or worsen. Assistance for such needs takes many forms and takes place in varied settings, including care in nursing homes or alternative community-based residential settings such as assisted living facilities. For individuals remaining in their homes, in-home care services or unpaid care from family members or other informal caregivers is most common. Approximately 64 percent of all elderly individuals with a disability relied exclusively on unpaid care from family or other informal caregivers; even among almost totally dependent elderly--those with difficulty performing five activities of daily living--about 41 percent relied entirely on unpaid care. Medicaid, the joint federal-state health-financing program for low-income individuals, continues to be the largest funding source for long-term care. In 2000, Medicaid paid 46 percent (about $63 billion) of the $137 billion spent on long-term care from all public and private sources. States share responsibility with the federal government for Medicaid, paying on average approximately 43 percent of total Medicaid costs. Within broad federal guidelines, states have considerable flexibility in determining who is eligible and what services to cover in their Medicaid program. 
Among long-term care services, states are required to cover nursing facilities and home health services for Medicaid beneficiaries. States also may choose to cover additional long-term care services that are not mandatory under federal standards, such as personal care services, private-duty nursing care, and rehabilitative services. For services that a state chooses to cover under its state Medicaid plan as approved by the Centers for Medicare & Medicaid Services (CMS), enrollment for those eligible cannot be limited but benefits may be. For example, states can limit the personal care service benefit through medical necessity requirements and utilization controls. States may also cover Medicaid home and community-based services (HCBS) through waivers of certain statutory requirements under section 1915(c) of the Social Security Act, thereby receiving greater flexibility in the provision of long-term care services. These waivers permit states to adopt a variety of strategies to control the cost and use of services. For example, states may obtain CMS approval to waive certain provisions of the Medicaid statute, such as the requirement that states make all services available to all eligible individuals statewide. With a waiver, states can target services to individuals on the basis of certain criteria such as disease, age, or geographic location. Further, states may limit the number of persons served to a specified target, requiring additional persons meeting eligibility and need criteria to be put on a waiting list. Limits may also be placed on the costs of services that will be covered by Medicaid. To obtain CMS approval for an HCBS waiver, states must demonstrate that the cost of the services to be provided under a waiver (plus other state Medicaid services) is no more than the cost of institutional care (plus any other Medicaid services provided to institutionalized individuals). 
These waivers permit states to cover a wide variety of nonmedical and social services and supports that allow people to remain at home or in the community, including personal care, personal emergency response systems, homemakers' assistance, chore assistance, adult day care, and other services. Medicare--the federal health financing program covering nearly 40 million Americans who are aged 65 or older, disabled, or have end-stage renal disease--primarily covers acute care, but it also pays for limited post-acute stays in skilled nursing facilities and home health care. Medicare spending accounted for 14 percent (about $19 billion) of total long-term care expenditures in 2000. A new home health prospective payment system was implemented in October 2000 that would allow a higher number of home health visits per user than under the previous interim payment system while also providing incentives to reward efficiency and control use of services. The number of home health visits declined from about 29 visits per episode immediately prior to the prospective payment system being implemented to 22 visits per episode during the first half of 2001. Most of the decline was in home health aide visits.

The four states we reviewed allocated different proportions of Medicaid long-term care expenditures for the elderly to federally required long-term care services, such as nursing facilities and home health, and to state optional home and community-based care, such as in-home personal support, adult day care, and care in alternate residential care settings. As the following examples illustrate, the states also differed in how they designed their home and community-based services, influencing the extent to which these services were available to elderly individuals with disabilities. New York spent $2,463 per person aged 65 or older in 1999 on Medicaid long-term care services for the elderly--much higher than the national average of $996.
While nursing home care represented 68 percent of New York's expenditures, New York also spent more than the national average on state optional long-term care services, such as personal support services. Because most home and community-based services in New York were covered as part of the state Medicaid plan, these services were largely available to all eligible Medicaid beneficiaries needing them without caps on the numbers of individuals served.

Louisiana spent $1,012 per person aged 65 or older, slightly higher than the national average of $996. Nursing home care accounted for 93 percent of Louisiana's expenditures, higher than the national average of 81 percent. Most home and community-based services available in Louisiana for the elderly and disabled were offered under HCBS waivers, and the state capped the dollar amount available per day for services and limited the number of recipients. For example, Louisiana's waiver that covered in-home personal care and other services had a $35 per day limit at the time of our work and served approximately 1,500 people in July 2002 with a waiting list of 5,000 people.

Kansas spent $935 per person aged 65 or older, slightly less than the national average. Most home and community-based services, including in-home care, adult day care, and respite services, were offered under HCBS waivers. As of June 2002, 6,300 Kansans were receiving these HCBS waiver services. However, the HCBS waiver services were not available to new recipients because Kansas initiated a waiting list for these services in April 2002, and 290 people were on the waiting list as of June 2002.

Oregon spent $604 on Medicaid long-term care services per elderly individual and, in contrast to the other states, spent a lower proportion on nursing facilities and a larger portion on other long-term care services such as care in alternative residential settings.
Oregon had HCBS waivers that covered in-home care, environmental modifications to homes, adult day care, and respite care. Oregon's waiver services did not have a waiting list and were available to elderly and disabled clients based on functional need, serving about 12,000 elderly and disabled individuals as of June 2002. Appendix I summarizes the home and community-based services available in the four states through their state Medicaid plans or HCBS waivers and whether the state had a waiting list for HCBS waiver services. Most often, the 16 Medicaid case managers we contacted in Kansas, Louisiana, New York, and Oregon offered care plans for our hypothetical individuals aimed at allowing them to remain in their homes. The number of hours of in-home care that the case managers offered and the types of residential care settings recommended depended in part on the availability of services and the amount of informal family care available. In a few situations, especially when the individual did not live with a family member who could provide additional support, case managers were concerned that the client would not be safe at home and recommended a nursing home or other residential care setting. The first hypothetical person we presented to case managers was an 86-year-old woman, whom we called "Abby," with debilitating arthritis who is chair bound and whose husband recently died. In most care plans, the case managers offered Abby in-home care. However, the number of offered hours depended on the availability of unpaid informal care from her family and varied among case managers. In the first scenario, Abby lives with her daughter who provides most of Abby's care but is overwhelmed by also caring for her own infant grandchild.
Case managers offered from 4.5 to 40 hours per week of in-home assistance with activities that she could not do on her own because of her debilitating arthritis, such as bathing, dressing, eating, using the toilet, and transferring from her wheelchair. One case manager recommended adult foster care for Abby under this scenario. In the second scenario, Abby lives with her 82-year-old sister who provides most of Abby's care, but the sister has limited strength, making her unable to provide all of Abby's care. Case managers offered Abby in-home care, ranging from 6 to 37 hours per week. One case manager also offered Abby 56 hours per week of adult day care. In the third scenario, Abby lives alone and her working daughter visits her once each morning to provide care for about 1 hour. The majority of case managers (12 of 16) offered from 12 to 49 hours per week of in-home care to Abby. The other four case managers recommended that she relocate to a nursing home or other residential care setting. The second hypothetical person was "Brian," a 70-year-old man, cognitively impaired by moderate Alzheimer's disease, who had just been released from a skilled nursing facility after recovering from a broken hip. The case managers usually offered in-home care so that Brian could remain at home if he lived with his wife to provide supervisory care. If he lived alone, most recommended that he move to another residential setting that would provide him with needed supervision. In the first scenario, Brian lives with his wife who provides most of his care and she is in fair health. All 16 case managers offered in-home care, ranging from 11 to 35 hours per week. Two case managers also offered adult day care in addition to or instead of in-home care. In the second scenario, Brian lives with his wife who provides some of his care and she is in poor health. All but one of the case managers offered in-home care, ranging from 6 to 35 hours per week.
One case manager recommended that Brian move to a residential care facility. In the third scenario, Brian lives alone because his wife has recently died. Concerned about his safety living at home alone or unable to provide a sufficient number of hours of in-home supervision, 13 of the case managers recommended that Brian move to a nursing home or alternate residential care setting. Two of the three case managers who had Brian remain at home offered around-the-clock in-home care--168 hours per week. Table 1 summarizes the care plans developed for Abby and Brian by the 16 case managers we contacted. In some situations, two case managers in the same locality offered notably different care plans. For example, across the eight localities where we interviewed case managers, when Abby lived alone, four case managers offered in-home care while their local counterpart recommended a nursing home or alternative residential setting. Local case managers made similarly divergent recommendations for in-home versus residential care three times when Brian lived alone and once each when Abby lived with her daughter and when Brian lived with his wife who was in poor health. Also, in a few cases, both case managers in the same locality offered in-home care but significantly different numbers of hours. For example, one case manager offered 42 hours per week of in-home care for Abby when she lived alone while another case manager in the same locality offered 15 hours per week of in-home care for this scenario. The home and community-based care that case managers offered to our hypothetical individuals sometimes differed due to state policies or practices that shaped the availability of their Medicaid-covered services. These included waiting lists for HCBS waiver services in Kansas and Louisiana, Louisiana's daily dollar cap on in-home care, and Kansas's state review policies for higher-cost care plans.
Also, case managers in Oregon recommended alternative residential care settings other than nursing homes, and case managers in Louisiana and New York typically considered Medicare home health care when determining the number of hours of Medicaid in-home care to offer. Neither of our hypothetical individuals would be able to immediately receive HCBS waiver services in Kansas and Louisiana due to a waiting list. As a result, they would often have fewer services offered to them--only those available through other state or federal programs such as those available under the Older Americans Act--until Medicaid HCBS waiver services became available. Alternatively, they could enter a nursing home. The average length of time individuals wait for Medicaid waiver services was not known in either state. However, one case manager in Louisiana estimated that elderly persons for whom he had developed care plans had spent about a year on the waiting list before receiving services. In Kansas, as of July 2002, no one had yet come off the waiting list that was instituted in April 2002. When case managers developed care plans based on HCBS waiver services for our hypothetical individuals, the number of hours of in-home care offered by case managers could be as much as 168 hours per week in New York and Oregon but was at most 24.5 hours per week in Kansas and 37 hours per week in Louisiana. Case managers in Louisiana also tended to offer nearly the same amount of in-home help even as the hypothetical scenarios changed. This may have been because they were trying to offer as many hours as they could under the cost limit even in the scenario with the most family support available. (See table 2.) Two states' caps or other practices may have limited the amount of Medicaid-covered in-home care that their case managers offered. For example, case managers in Louisiana tended to offer as many hours of care as they could under the state's $35 per day cost limit.
Therefore, as the amount of informal care changed in the different scenarios, the hours of in-home help offered in Louisiana did not change as much as they did in the other states. In Kansas, case managers often offered fewer hours of in-home care than were offered in other states, which may have been in part influenced by Kansas's supervisory review whereby more costly care plans were more extensively reviewed than lower cost care plans. A Kansas case manager also told us that offering fewer hours of care may reflect the case managers' sensitivity to the state's waiting list for HCBS services and an effort to serve more clients by keeping the cost per person low. In contrast, case managers in New York and Oregon did not have similar cost restrictions in offering in-home hours, with one case manager in each state offering as much as 24-hour-a-day care. When recommending that our hypothetical individuals would be better cared for in a residential care setting, case managers offered alternatives to nursing homes to varying degrees across the states. Case managers in Louisiana recommended nursing home care in three of the four care plans in which care in another residence was recommended for Abby or Brian. In contrast, case managers in Oregon never recommended nursing home care for our hypothetical individuals. Instead, case managers in Oregon exclusively recommended either adult foster care or an assisted living facility in the five care plans recommending care in another residence. It was also noteworthy that two case managers in Oregon recommended that either Abby or Brian obtain care in other residential care settings in a scenario when she or he lived with a family member, expressing concern that continuing to provide care to Abby or Brian would be detrimental to the family. Case managers in Kansas, Louisiana, and New York only recommended out-of-home placement for Abby or Brian in scenarios when they lived alone.
State differences also were evident in how case managers used adult day care to supplement in-home or other care. For example, across all care plans the case managers developed for Abby and Brian (24 care plans in each state), adult day care was offered four times in New York and Oregon and three times in Kansas. However, none of the care plans developed by case managers in Louisiana included adult day care because it was in a separate HCBS waiver, and individuals could not receive services through two different waivers. Case managers in New York and Louisiana also often considered the effect that the availability of Medicare home health services could have on Medicaid-covered in-home care. For example, one New York case manager noted that she would maximize the use of Medicare home health before using Medicaid home health or other services. Several of the case managers in New York included the amount of Medicare home health care available in their care plans, and these services offset some of the Medicaid services that would otherwise be offered. In Louisiana, where case managers faced a dollar cap on the amount of Medicaid in-home care hours they could provide, two case managers told us that they would include the additional care available under Medicare's home health benefit in their care plans, thereby increasing the number of total hours of care that Abby or Brian would have by 2 hours per week. While six Kansas and Oregon case managers also mentioned that they would refer Abby or Brian to a physician or visiting nurse to be assessed for potential Medicare home health coverage, they did not specifically include the availability of Medicare home health in the number of hours of care provided by their care plans. States have found that offering home and community-based services through their Medicaid programs can help low-income elderly individuals with disabilities remain in their homes or communities when they otherwise would be likely to go to a nursing home.
States differed, however, in how they designed their Medicaid programs to offer home and community-based long-term care options for elderly individuals and the level of resources they devoted to these services. As a result, as demonstrated by the care plans developed by case managers for our hypothetical elderly individuals in four states, the same individual with certain identified disabilities and needs would often receive different types and intensity of home and community-based care for his or her long-term care needs across states and even within the same community. These differences often stemmed from case managers' attempts to leverage the availability of both publicly financed long-term care services and the informal care and support provided to individuals by their own family members. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For future contacts regarding this testimony, please call Kathryn G. Allen at (202) 512-7118 or John E. Dicken at (202) 512-7043. Other individuals who made key contributions include JoAnne R. Bailey, Romy Gelb, and Miryam Frieder. Kansas, Louisiana, New York, and Oregon each offered home and community-based services through their state Medicaid plans or HCBS waivers. Kansas and Louisiana had waiting lists that generally made these services unavailable to new clients. Table 3 summarizes the home and community-based services available in the four states we reviewed and whether the states had a waiting list for HCBS waiver services.
As the baby boomers age, spending on long-term care for the elderly could quadruple by 2050. The growing demand for long-term care will put pressure on federal and state budgets because long-term care relies heavily on public financing, particularly Medicaid. Nursing home care traditionally has accounted for most Medicaid long-term care expenditures, but the high costs of such care and the preference of many individuals to stay in their own homes has led states to expand their Medicaid programs to provide coverage for home- and community-based long-term care. GAO found that a Medicaid-eligible elderly individual with the same disabling conditions, care needs, and availability of informal family support could find significant differences in the type and intensity of home and community-based services that would be offered for his or her care. These differences were due in part to the very nature of long-term care needs--which can involve physical or cognitive disabling conditions--and the lack of a consensus as to what services are needed to compensate for these disabilities and what balance should exist between publicly available and family-provided services. The differences in care plans were also due to decisions that states have made in designing their Medicaid long-term care programs and the resources devoted to them. The case managers GAO contacted generally offered care plans that relied on in-home services rather than other residential care settings. However, the in-home services offered varied considerably.
SB/SE was formed to address various issues affecting small business and self-employed taxpayers, such as filing tax returns and paying taxes. SB/SE's strategic goals include increasing compliance and also reducing burden among SB/SE taxpayers. As part of SB/SE, TEC is to use various strategies, including providing education, outreach, assistance, and other services, to support SB/SE taxpayers in understanding and complying with tax laws. IRS created TEC in response to concerns that IRS should better balance such services with its enforcement efforts. In serving taxpayers, TEC is to partner with government agencies, small business groups, tax practitioner groups, and other stakeholders that could advance its education and outreach efforts. To meet an overall goal of increasing voluntary compliance, TEC's four program goals or priorities are to combat abusive tax schemes, reduce taxpayer burden, promote electronic filing, and negotiate agreements with SB/SE taxpayers on specific ways to voluntarily comply with tax laws. Recent events underscore the importance of human capital management and strategic workforce planning. For example, we designated strategic human capital management as a governmentwide, high-risk area in January 2001, and it was also placed at the top of the President's Management Agenda in August 2001. In addition, OMB and OPM have made efforts to improve human capital management and strategic workforce planning. The goal of strategic workforce planning is to ensure that the right people with the right skills are in the right place at the right time. Agency approaches to workforce planning can vary with their particular needs and missions. Nevertheless, looking across existing successful public and private organizations, certain critical elements recur as part of a workforce plan and workforce planning process. 
Although fluid, this process starts with setting a strategic direction that includes program goals and strategies to achieve those goals and flows through the critical elements to evaluating the workforce plan. Figure 1 uses a simple model to show these critical elements and their relationships to the agency's overall strategic direction and goals. Before developing a workforce plan, an agency first needs to set a strategic direction and program goals. Setting a strategic direction and program goals is part of the general performance management principles that Congress expects agencies to follow under GPRA. A workforce plan should be developed and implemented to help fulfill the strategic direction and program goals. The critical elements of what this plan should include and how it should be developed follow. Involvement of management and employees: Involving various staff (from the top to the bottom) cuts across the other critical elements. Involving staff in all phases of workforce planning can help improve the quality of the plan because staff are directly involved with the daily operations. Further, vetting proposed workforce strategies to management and those most affected by those decisions can build support for the plan and facilitate obtaining the resources needed to implement the plan and meet program goals. Establishing a communication strategy that involves various staff can create shared expectations and a clear reporting process about the workforce plan. Workforce gap analysis: Analyzing whether gaps exist between the current and future workforce needed to meet program goals is critical to ensure proper staffing. The workforce plan should assess these gaps, to the extent practical, in a fact-based manner. The absence of fact-based analyses can undermine an agency's efforts to identify and respond to current and emerging challenges. 
Thus, the characteristics of the future workforce should be based on the specific skills and numbers of staff that will be needed to handle the expected workload. The analysis of the current workforce should identify how many staff members have those skills and how many are likely to remain with the agency over time given expected losses due to retirement and other attrition. The workforce gap analyses can help justify budget and staffing requests by connecting the program goals and strategies with the budget and staff resources needed to accomplish them. Workforce strategies to fill the gaps: Developing strategies to address any identified workforce gaps creates the road map to move from the current to the future workforce needed to achieve the program goals. Strategies can involve how the workforce is acquired, developed and trained, deployed, compensated, motivated, and retained. Agencies need to know their flexibilities and authorities when developing the strategies, and to communicate the strategies to all affected parties. Evaluation of and revisions to strategies: Evaluating the results of the workforce strategies and making any needed revisions helps to ensure that the strategies work as intended. A key step is developing performance measures as indicators of success in attaining human capital goals and program goals, both short- and long-term. Periodic measurement and evaluation provides data for identifying shortfalls and opportunities to revise workforce plans as necessary. For example, an evaluation may indicate whether the workforce plan adequately considered barriers to achieving the goals, such as insufficient resources to hire and train the full complement of staff identified as necessary by the workforce gap analysis. Across the critical elements of a workforce plan, data collection and analysis provide fundamental building blocks. Having reliable data is particularly important to doing the workforce gap analysis. 
Early development of the data provides a baseline by which agencies can identify current workforce problems. Regular updating of the data enables agencies to plan for improvements, manage changes in the programs and workforce, and track the effects of changes on achieving program goals. IRS issued an Internal Revenue Manual (IRM) section for internal review and comment in March 2003, and IRS expects to finalize it in June 2003. The section outlines a strategic workforce planning system and model, and discusses the roles and responsibilities of IRS and its divisions in this system. For example, IRS is to be responsible for developing the strategic workforce plan across IRS and for analyzing current and future workforce needs. The divisions are to be responsible for providing requested data to IRS's workforce planning office and for translating the IRS-wide plan into their operations. Thus, a strategic workforce plan for a unit within a division could be developed by IRS, the division, or the unit. If developed by the division or unit, the workforce plan is to be consistent with IRS-wide strategic and workforce plans. Our objective was to determine whether TEC has a workforce plan that conforms to the critical elements for what should be in a plan and how it should be developed and implemented.
To meet this objective, we reviewed human capital literature--including OPM's Human Capital Assessment and Accountability Framework--as well as workforce planning models at OPM, OMB, and IRS, among others; reviewed TIGTA and GAO reports on human capital and workforce planning; reviewed IRS and SB/SE documents on their strategic program plans, the plan that guided TEC's creation and initial staffing, and the annual TEC staffing plan as well as IRS's draft IRM section on strategic planning and workforce analyses (section 6.251) as of March 2003; and interviewed SB/SE and TEC officials on their goals, strategies, and staffing plans as well as IRS and SB/SE Workforce Council officials to determine their purposes, activities, time lines, and challenges. We conducted our work at IRS and SB/SE headquarters from February 2003 through April 2003 in accordance with generally accepted government auditing standards. We did not attempt to analyze the adequacy of any analyses done to develop a workforce plan for TEC or the program goals and strategies. The Commissioner of IRS provided comments on a draft of this report, which are discussed in the "Agency Comments and Our Evaluation" section and are reprinted in appendix I. Since its inception in October 2000, TEC has operated with short-term staffing plans that do not meet the critical elements of what a strategic workforce plan should include and how it should be developed. IRS and SB/SE are taking steps to develop a strategic workforce plan that will include TEC. However, questions remain about how the critical elements will be developed and implemented for TEC. TEC does not have a strategic workforce plan that includes the critical elements, such as analyses of the workforce gaps and strategies. Without such a workforce plan, TEC has less assurance that it has the necessary workforce to meet its current program goals and to manage changes in its programs and goals. 
IRS and SB/SE officials said that TEC does not have a strategic workforce plan because of the effort involved in creating the division and its units, such as TEC, to meet SB/SE taxpayer needs. These officials said this effort has been a significant undertaking, which delayed the workforce planning. SB/SE officials also said that they needed to have some experience with TEC as a new unit and some data on its new workforce before developing a strategic workforce plan for TEC. Since its inception, TEC has operated under two types of staffing plans that did not use the critical elements of a workforce plan. One plan was developed prior to TEC's creation in October 2000 to guide the hiring and allocation of 1,209 full-time positions for TEC. The other plan annually allocates the number of TEC staff to its various locations, functions (e.g., partnership outreach or marketing service), and four priorities (e.g., combat abusive tax schemes and promote electronic filing). Although both plans reflect analyses of the number of TEC staff by location, these plans did not address what a TEC workforce plan should include under the critical elements. For example, the plans did not identify any gaps in the workforce needed, any strategies to fill the gaps, or any measures for evaluation purposes. Recognizing the need for workforce planning, both IRS and SB/SE are developing strategic workforce plans and a planning process for TEC and other IRS entities that broadly reflect the critical elements. However, questions remain because of the lack of details on how any workforce plan for TEC will address the critical elements. IRS and SB/SE each convened workforce planning councils, consisting of executives and human capital managers, to oversee the development of a strategic workforce plan that would include TEC. IRS started its council in the fall of 2001 at the direction of the IRS commissioner.
SB/SE started its council in February 2003 to create a more detailed workforce plan for TEC and its other units than would be provided in the IRS-wide plan. Our review of IRS and SB/SE documents showed that they both intend to use the critical elements of strategic workforce planning. These documents include models and discussion that reference the critical elements. For example, these models refer to elements such as analyzing the gap in the workforce and developing strategies to reduce the workforce gap. Although IRS and SB/SE are taking steps to develop a strategic workforce plan for TEC, these steps have not yet produced enough details to specify how the critical elements will be developed and implemented for TEC. IRS and SB/SE officials said that they recognize the need to further define how the strategic workforce plan will be developed and implemented over time. For example, the degree to which top management and employees will be involved in developing and implementing the workforce plan for TEC is not yet clear. The draft IRM section refers to their involvement but does not provide details on the extent and nature of their involvement. As for identifying any workforce gaps at TEC, it is not clear what analyses will be done. As of April 2003, neither IRS nor SB/SE had analyzed the type of TEC workforce needed in the future to meet program goals or the skills of the current TEC workforce. Both types of analyses are needed to determine the gap between the current TEC workforce and the workforce needed in the future. Nor is it clear how and when these analyses will be done. SB/SE officials said that given resource limitations, they have not done the necessary workforce analyses for TEC or developed an implementation schedule for when the analyses would be done. As of April 2003, the analyses that IRS and SB/SE had done addressed other workforce issues.
While useful, the analyses do not address the TEC workforce gap in terms of the skills needed now or in the future to meet program goals, particularly newer ones such as promoting electronic filing or negotiating voluntary compliance agreements. For example, IRS has analyzed 12 mission-critical positions in terms of potential losses (e.g., retirement) from the current number of positions. These analyses have not focused on TEC because the analyses, as well as the eventual IRS-wide workforce plan, are intended to be done at a high level with minimal references to TEC. SB/SE asked officials in TEC and its other units in February 2003 to use a checklist to self-assess their current workforce and planning capabilities against OPM criteria. SB/SE has not indicated how it will verify and use the subjective check marks made by the officials to determine workforce gaps in TEC, particularly in skills needed. No analyses were provided to justify fiscal year 2004 plans to hire 250 additional TEC staff to combat abusive tax schemes while hiring no additional staff to address the three other TEC goals. IRS and SB/SE workforce officials had told us that the 250 staff estimate came from the budget and finance staff in SB/SE. In a subsequent meeting during May 2003, TEC and SB/SE officials said that IRS has decided against any staff expansion in TEC due to other budget considerations. Finishing the analyses of TEC workforce gaps is important for the rest of the workforce plan. The other two critical elements involving strategies and evaluation cannot be finished until IRS and SB/SE know the specific needs of the current and future TEC workforces. As IRS and SB/SE officials develop and implement a workforce plan for TEC, major challenges are likely to arise. For example, these officials cited the challenge of balancing daily operational demands with the capacity to forecast workforce needs in terms of staff numbers, skills, and locations.
Another challenge is gathering reliable data on the attrition, retirement, and skills of the current workforce to do analyses that are critical to workforce planning. IRS and SB/SE officials also pointed to budget fluctuations that could limit their strategies to close gaps in the workforce needed by TEC over time. For example, the budget may be insufficient to replace losses of TEC workforce skills due to retirement. Finally, they said that if the workforce plan could adversely affect current TEC employees, dealing with employee unions to address the concerns could be a challenge. We have reported on these and other challenges that any agency faces in doing successful workforce planning. As discussed in our previous reports, and echoed by OPM and OMB guidance, a strategic workforce plan enables an agency to identify gaps in its current and future needs, select strategies to fill the gaps, and evaluate the success of the plan to make revisions that may be needed to better meet program goals. Such a workforce plan does not yet exist for TEC. Without such a plan, TEC is less likely to have the right number of staff with the right skills in the right places at the right time to address its priorities. Further, it is difficult to justify budget and staffing requests if the workforce needs are not known. IRS and SB/SE have started taking steps to develop a strategic workforce plan for TEC based on the critical elements under OPM and OMB guidelines, and our guidelines for what a plan is to include and how it is to be developed and implemented. However, IRS and SB/SE have not yet identified many details on how the plan for TEC will incorporate the elements. Without these details, we cannot be certain that the critical elements will be used and contribute to the program goals. 
Given the uncertainty on how the workforce plan for TEC will be developed and implemented, we recommend that the Commissioner of Internal Revenue ensure that the workforce plan for TEC be developed in conformance with the critical elements for what a plan should include and how a plan should be developed and implemented. We requested comments on a draft of this report from IRS. The Commissioner of Internal Revenue provided written comments in a letter dated May 28, 2003. (See appendix I.) These comments neither explicitly agreed nor disagreed with our recommendation to ensure that a workforce plan for TEC is developed in conformance with the critical elements of what a plan should include and how it should be developed and implemented. The Commissioner did say that IRS strongly endorses the development of a strategic workforce plan and that IRS has made progress on this effort, listing eight steps that have been taken. The Commissioner also said that the steps were a set of integrated strategies that reflect IRS's commitment to improve its workforce planning efforts and that they addressed the issues raised in our report. To the extent that IRS had told us about how these steps contributed to a workforce plan for TEC, our report discusses them when we describe IRS's efforts to create such a plan using the critical elements. Although we believe that these steps are useful, we made our recommendation because we did not see enough details to be assured that a workforce plan for TEC would be sufficiently developed and implemented in accordance with the critical elements. We are encouraged that IRS strongly endorses development of a strategic workforce plan. We look forward to seeing a workforce plan for TEC. As we agreed with your staff, unless you publicly release the contents of this report earlier, we will not distribute it until 30 days after its issue date. 
At that time, we will send copies of this report to the Ranking Minority Member of the Senate Committee on Small Business and Entrepreneurship. We will also send copies to the Commissioner of Internal Revenue and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Thomas Short, Assistant Director. Other major contributors include Catherine Myrick and Grace Coleman. If you have any questions or would like additional information, please contact me at (202) 512-9110 or [email protected] or Thomas Short at (202) 512-9110 or [email protected]. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Strategic workforce planning helps ensure that agencies have the right people with the right skills in the right positions to carry out the agency mission both in the present and the future. The Internal Revenue Service's (IRS) Taxpayer Education and Communication (TEC) unit within its Small Business and Self-Employed Division assists some 45 million small business and self-employed taxpayers. Given the number of taxpayers it is to assist and changes in its priorities and strategies, GAO was asked to determine whether TEC has a workforce plan that conforms to critical elements for what should be in a plan and how it should be developed and implemented. Although it has existed for more than 2-and-a-half years, TEC does not have a strategic workforce plan that includes certain critical elements. For example, it has not identified gaps between the number, skills, and locations of its current workforce and the workforce it will need in the future, or the strategies to fill those gaps. Such a workforce plan for TEC could be developed by IRS, the Small Business and Self-Employed Division, and/or TEC. Small Business and Self-Employed Division officials said that TEC does not have a strategic workforce plan because they focused on creating the division and units such as TEC to begin addressing taxpayer needs, and because they first wanted to gain some experience with TEC as a new unit. IRS and the Small Business and Self-Employed Division are creating a process for developing a workforce plan for TEC that in broad terms would incorporate the critical elements common to workforce planning. However, it is not yet clear whether the workforce plan for TEC will be developed and implemented consistent with these critical elements. For example, IRS and the Small Business and Self-Employed Division have not analyzed the skills that the TEC workforce will need to meet its program goals or outlined the process and data to be used to do these analyses.
SSA is the largest operating division within HHS. As such, it accounts for approximately 65,000 FTE employees or about 51 percent of HHS' FTE positions. SSA's fiscal year 1995 budget of about $371 billion accounts for over one-half of HHS' total budget for that year. SSA administers three major federal programs and assists other federal and state agencies in administering a variety of other income security programs for individuals and families. The programs administered by SSA are the OASI program and the DI program--two social insurance programs authorized under Title II of the Social Security Act. SSA also administers the SSI program, a welfare program authorized under Title XVI, to provide benefits to the needy aged, blind, and disabled. SSA serves the public through a network that includes 1,300 field offices and a national toll-free telephone number. Under the Title II programs, over $300 billion in benefits were paid in 1993 to over 40 million eligible beneficiaries. About 95 percent of all jobs in the United States are covered by these insurance programs. SSA also performs a number of administrative functions to pay Social Security benefits. For example, it maintains earnings records for over 140 million U.S. workers, which are used to determine the dollar amount of their OASI and DI benefits. To do this, SSA collects annual wage reports from over 6 million employers. Since 1990, it has issued new Social Security numbers to an average of over 7 million people annually. Under Title XVI, SSA provides almost $22 billion in SSI benefits annually to about 6 million recipients. This program was established to provide cash assistance to low-income individuals who are age 65 or older or are blind or disabled. In the mid-1970s, SSI replaced the categorical programs for the needy aged, blind, and disabled that were administered by the states. SSA began as an independent agency with a mission of providing retirement benefits to the elderly. 
A three-member, independent Social Security Board was established in 1935 to administer the Social Security program. The Chairman of the Board reported directly to the President until July 1939, when the Board was placed under the newly established Federal Security Agency (FSA). At that time, the Social Security program was expanded to include Survivors Insurance, which paid monthly benefits to survivors of insured workers. In 1946, the Social Security Board was abolished, and its functions were transferred to the newly established Social Security Administration, still within FSA. The FSA administrator established the position of Commissioner to head SSA. In 1953, the FSA was abolished and its functions were transferred to the Department of Health, Education and Welfare (HEW). Moreover, the position of SSA Commissioner was designated as a presidential appointee requiring Senate confirmation. In 1956, the Social Security program was expanded to include the DI program to provide benefits to covered workers who became unable to work because of disability. In 1965, amendments to the Social Security Act increased SSA's scope and complexity by establishing the health insurance program known as Medicare. The purpose of Medicare was to help the qualified elderly and disabled pay their medical expenses. SSA administered the Medicare program for about 12 years before Medicare was transferred to a new division within HEW, the Health Care Financing Administration. Further amendments to the Social Security Act created the SSI program, effective in 1974. This program was designed to replace welfare programs for the aged, blind, and disabled administered by the states. The SSI program added substantially to SSA's responsibilities. The agency then had to deal directly with SSI clients, which entailed determining recipients' eligibility based on income and assets. SSA has remained a part of HHS (formerly HEW) since 1953. 
Since 1984, congressional committees responsible for overseeing SSA's activities have considered initiatives to make SSA an independent agency. While the reasons for independence have varied over the years, legislation seeking independence from HHS has been introduced in several sessions of the Congress. Concerns expressed in congressional hearings and reports of the past decade have focused on a variety of issues, including the need for (1) improved management and continuity of leadership at SSA; (2) greater public confidence about the long-term viability of Social Security benefits; and (3) removal of the program's policies and budgets from the influence of HHS, OMB, and the administration. Statements by committee chairmen have shown a desire to make SSA more accountable to the public for its actions and more responsive to the Congress' attempts to address SSA's management and policy concerns. The act requires the Secretary of HHS and the Commissioner of SSA to develop a written interagency transfer agreement, effective March 31, 1995, which specifies the personnel and other resources to be transferred to an independent SSA. Our review of the agreement and the supporting documentation shows that SSA and HHS have developed a reasonable methodology for, and progressed well toward implementing the transition. Specifically, we found that HHS and SSA have progressed well in (1) identifying and transferring personnel and other resources; (2) effecting organizational changes prompted by the transition; and (3) addressing changes to SSA's budget process, as called for in the act. The interagency agreement submitted by HHS and SSA to the Senate Committee on Finance and House Committee on Ways and Means on December 27, 1994, notes that all major transition tasks have been completed or are under way and that personnel and resource transfers will be completed on March 31, 1995. 
Elements of the agreement relating to transferring personnel, resources, and property appear to meet OMB guidelines for such transfers. Under the interagency agreement, the approximately 65,000 FTE employees currently under SSA will remain with the agency. In addition, about 1,143 HHS FTE personnel who provide support services to SSA are expected to be transferred. Of these, 478 will provide personnel administration services for SSA, 289 will provide legal support, and 263 will perform audit and investigative activities. The remaining 113 FTEs will provide other administrative support services for the agency. SSA expects to reimburse HHS for providing payroll and certain other support services to SSA on an interim basis. Similarly, HHS will reimburse SSA for providing certain services to Medicare recipients through its local offices and telephone service centers. In developing the agreement, HHS surveyed its division heads to identify the functions and FTEs currently supporting SSA. At the same time, SSA developed its own estimate of the number of HHS personnel performing work for it. To supplement these data, SSA also relied on its managers' assessments of the number of HHS FTEs currently supporting the agency. HHS and SSA then engaged in extensive negotiations to agree on the final number of FTEs to be transferred. Virtually all individuals have been identified, and HHS expects to issue final staff transfer notices by February 21, 1995. Personnel have been selected primarily on the basis of the percentage of their work spent on SSA activities. However, all employees had an opportunity to appeal the transfer actions and some volunteers were sought to obtain the proper skill levels. HHS and SSA have also agreed on nonpersonnel resources to be transferred, such as funds, computer equipment, and furniture. These decisions are contingent on the numbers and specific personnel transferring to SSA. 
HHS and SSA are required to prepare for OMB a written itemization of resources to be transferred. OMB officials told us that no problems have arisen and it expects to provide the certification necessary to complete resource transfers on March 31, 1995. SSA will make several organizational changes to be a fully functional independent agency. (See app. I for the SSA organization charts for before and after the transition.) We believe that SSA has reasonably planned for these changes. Our assessment of the transition activities, combined with the review of the SSA and HHS interagency agreement, indicates that the organizational changes should be completed on March 31, 1995. SSA plans to establish its own Office of the General Counsel and an Office of Inspector General. SSA's Office of the General Counsel will provide the necessary legal advice and litigation services for the programs administered by SSA. An acting General Counsel will be designated to head the office until a permanent appointment is made. The SSA Office of Inspector General will conduct audits and investigations of the agency's programs and operations. The Inspector General will report directly to the Commissioner to ensure objectivity and independence from internal agency pressures. The HHS Inspector General has agreed to act as SSA's Inspector General until a permanent Inspector General has been confirmed. SSA expects that additional organizational changes will occur in conjunction with the transition. For example, SSA plans to merge the functions of its Offices of Programs and Policy and External Affairs. The new Office of Programs, Policy, Evaluation and Communications will be responsible for research, policy analysis, and program evaluation. SSA's Office of Legislation and Congressional Affairs will also be repositioned to report directly to the Commissioner. This office will facilitate a working relationship between SSA and the executive and legislative branches. 
SSA plans to establish a Washington, D.C., office to facilitate a closer working relationship with the Congress and the executive branch. Staffing in the Washington office is estimated at 150 to 200 permanent employees, including the Commissioner, Principal Deputy Commissioner, legislative liaison staff, Inspector General, the General Counsel, and research and statistics personnel. The agency has defined needed space requirements and acquired temporary office space. It should obtain permanent space by early 1996. SSA also plans to decentralize and transfer more management authority from its headquarters to its regional offices. Following the transition, Regional Commissioners will have direct authority over public affairs and personnel administration in their respective regions. These functions are currently managed by SSA's headquarters or by HHS. SSA has also indicated that, where possible, some of the approximately 1,143 FTEs identified for transfer may be shifted to local offices and telephone service centers to strengthen service. Finally, SSA has confirmed that the newly established Social Security Advisory Board will spend a substantial amount of time in Washington, D.C., and members will maintain offices in both the Baltimore and Washington, D.C., locations. The seven-member board will advise the Commissioner, the President, and the Congress on SSA program policies. SSA officials told us that the Congress has appointed four members. However, the President has not yet appointed the three remaining members as required by the act. The act revises the process for submitting SSA's annual budget. The act states, "the Commissioner shall prepare an annual budget for the Administration which shall be submitted by the President to the Congress without revision, together with the President's annual budget for the Administration." 
Traditionally, agencies, including SSA, receive budget guidance from OMB beginning in April of each year and spend about 5 months preparing a budget proposal. This proposal is submitted in September to OMB, where it is reviewed for several months. OMB then requires agencies to revise their budget proposals by incorporating OMB decisions and changes. Once approved by OMB, agency budgets are transmitted to the Congress as part of the President's budget for executive agencies. The act does not restrict OMB from continuing to exercise its traditional budgetary oversight role, and our work has shown that both OMB and SSA officials do not envision any substantive change in SSA's budget process. Presumably, the new budget provision is intended to illuminate differences between the budget SSA proposes and the President's budget for the agency. However, the process allows for OMB's April guidance to influence SSA's September budget proposal. In its comments on this report, SSA agreed that OMB's influence would continue to be a factor in the preparation of its September budget. While SSA has progressed well toward completing the transition, the agency will continue to face significant challenges as an independent agency. Some of these include the long-range solvency of the Social Security trust funds, growing disability caseloads, and issues surrounding the increase in SSI caseloads. We have identified and documented these challenges in numerous reports, testimonies, and management reviews of SSA over the last several years. With the passage of legislation creating an independent SSA, it was expected that SSA would take a more active leadership role in addressing its major program challenges. Our work has also demonstrated the need for SSA to address program policy issues and to more aggressively manage its programs. This will be crucial for SSA as it assumes the functions currently provided by HHS. 
SSA's independence will heighten the need for it to work with the Congress in developing options for ensuring that revenues are adequate to make future Social Security benefit payments. As noted in our previous reports, this issue has troubled the agency for many years. The financial operations of SSA's insurance programs are supported by trust funds, which are credited with revenues derived from (1) payroll taxes on earned income and on self-employment earnings up to specified limits and (2) interest income from trust fund investments. Additional financing is provided from general revenues resulting from the taxation of Social Security benefits. To address financing issues, the Social Security Amendments of 1977 and 1983 moved the trust funds from a pay-as-you-go financing basis toward the accumulation of substantial temporary reserves. However, as we reported in 1989 and 1994, economic and demographic factors have slowed the growth of the trust fund reserves and brought the projected point of insolvency for both the OASI and DI trust funds closer than originally expected. SSA's Office of the Actuary confirmed that the OASI trust fund currently has reserves sufficient to pay annual benefits until the year 2030. The DI trust fund will have funds sufficient to pay annual benefits until the year 2015. In recent years, we have reported that SSA's DI program has experienced significant caseload increases, and backlogs have remained at unprecedented levels. Moreover, changes in the characteristics of new beneficiaries have accompanied this growth. The new beneficiaries' average age is generally decreasing and is now below 50. Also, mental impairment awards to younger workers increased by about 500 percent between 1982 and 1992, helping to lower the average age. These situations could mean that once on the rolls, these beneficiaries will receive benefits for a longer period of time than other beneficiaries. 
In addition, an increasing percentage of new beneficiaries receives very low benefits, which indicates that these beneficiaries had limited work histories and are unlikely to return to work. Program rolls have grown and changed for several reasons. Higher unemployment probably contributes to increasing applications, and policy changes have contributed to changes in the numbers and types of beneficiaries. However, SSA lacks adequate data on how many people in the population suffer from disabilities that might qualify them for benefits. As a result, SSA has limited ability to predict future growth and change in the rolls. SSA has undertaken initiatives to improve its disability application process to more efficiently handle caseloads and reduce backlogs. Implementing these initiatives will significantly challenge SSA because it requires fundamental changes in the way the agency does its work. Further, without additional information, neither SSA nor the Congress can be sure whether the current growth will continue. SSA faces the challenge of determining what actions are needed to better manage the program and whether some fundamentals of the program should be reexamined. As we reported in previous work, SSI benefit payments and caseloads have increased significantly over the past several years. From 1986 to 1994, SSI benefit payments for the aged, blind, and disabled increased by $13.5 billion, doubling in 7 years. Benefits for the disabled accounted for almost 100 percent of this increase. Three groups--disabled children, mentally disabled adults, and legal immigrants--significantly outpaced the growth of all other SSI recipients. As an independent agency, SSA faces the challenge of addressing congressional and public concerns about SSI program growth. HHS and SSA have developed an acceptable methodology for identifying the functions, personnel, and other resources to be transferred to the independent agency. 
They have also progressed well toward completing the initiatives necessary for SSA to be a fully functional independent agency on the effective date. However, independence alone will not resolve the problems identified in previous GAO reviews, and SSA will continue to face significant challenges beyond March 31, 1995. The elevation of SSA to an independent agency will create opportunities for the agency to take a leadership role in addressing some of the broader program policy issues and to reexamine its processes to determine how it can improve its effectiveness. We obtained official oral comments on this report from senior officials from SSA and HHS. These officials generally agreed with our findings and conclusions. They did offer some technical suggestions, which we have incorporated where appropriate in the report. We are sending copies of this report to the Secretary of HHS, the Commissioner of SSA, and other interested parties. Copies will also be made available to others upon request. If you or your staffs have any questions concerning this report, please call me on (202) 512-7215. Other major contributors are listed in appendix II. In addition to those named above, the following individuals made important contributions to this report: Leslie Aronovitz, Associate Director, Income Security Issues; Daniel Bertoni, Senior Evaluator; Mary Reich, Staff Attorney; Valerie Rogers, Evaluator; and Jacquelyn Stewart, Senior Evaluator.
Social Security: Rapid Rise in Children on SSI Disability Rolls Follows New Regulations (GAO/HEHS-94-225, Sept. 9, 1994).
Social Security: Trust Funds Can Be More Accurately Funded (GAO/HEHS-94-48, Sept. 2, 1994).
Social Security: New Continuing Disability Review Process Could Be Enhanced (GAO/HEHS-94-118, June 27, 1994). 
Social Security: Major Changes Needed for Disability Benefits for Addicts (GAO/HEHS-94-128, May 13, 1994).
Social Security: Disability Rolls Keep Growing, While Explanations Remain Elusive (GAO/HEHS-94-34, Feb. 8, 1994).
Social Security: Increasing Number of Disability Claims and Deteriorating Service (GAO/HRD-94-11, Nov. 10, 1993).
Social Security: Sustained Effort Needed to Improve Management and Prepare for the Future (GAO/HRD-94-22, Oct. 27, 1993).
Social Security: Telephone Busy Signal Rates at Local SSA Field Offices (GAO/HRD-93-49, Mar. 4, 1993).
Social Security: Reporting and Processing of Death Information Should Be Improved (GAO/HRD-92-88, Sept. 4, 1992).
Debt Management: More Aggressive Actions Needed to Reduce Billions in Overpayments (GAO/HRD-91-46, July 9, 1991).
Social Security Downsizing: Significant Savings But Some Service Quality and Operational Problems (GAO/HRD-91-63, Mar. 19, 1991).
Social Security: Status and Evaluation of Agency Management Improvement Initiatives (GAO/HRD-89-42, July 24, 1989).
Social Security: Staff Reductions and Service Quality (GAO/HRD-89-106BR, June 16, 1989).
Social Security Administration: Stable Leadership and Better Management Needed to Improve Effectiveness (GAO/HRD-87-39, Mar. 18, 1987). 
Pursuant to a legislative requirement, GAO: (1) evaluated the Social Security Administration's (SSA) and Department of Health and Human Services' (HHS) transition plans; and (2) identified some of the policy challenges SSA will face as an independent agency. GAO found that: (1) SSA and HHS have progressed towards the goal of SSA functioning as an independent agency; (2) HHS has successfully identified and transferred personnel and other resources to SSA; (3) there have been effective organizational changes prompted by the transition; (4) SSA and HHS have made changes to the SSA budget process, and SSA has initiated an effort to improve its claims processing function; (5) SSA and HHS have agreed that nonpersonnel transfers, such as funds, computer equipment, and furniture, will be dependent on personnel transfers to SSA; (6) SSA will maintain its own legal and auditing departments; and (7) SSA will establish a Washington, D.C., office in order to bring about a closer working relationship with Congress and the executive branch.
According to IAEA, between 1993 and 2006, there were 1,080 confirmed incidents of illicit trafficking and unauthorized activities involving nuclear and radiological materials worldwide. Eighteen of these cases involved weapons-usable material--plutonium and highly enriched uranium (HEU)--that could be used to produce a nuclear weapon. IAEA also reported that 124 cases involved materials that could be used to produce a device that uses conventional explosives with radioactive material (known as a "dirty bomb"). Some past confirmed incidents of illicit trafficking in HEU and plutonium involved seizures of kilogram quantities of weapons-usable nuclear material, but most have involved very small quantities. In some of these cases, it is possible that the seized material was a sample of larger quantities available for illegal purchase. IAEA concluded that these materials pose a continuous potential security threat to the international community, including the United States. Nuclear material could be smuggled into the United States in a variety of ways: hidden in a car, train, or ship; sent through the mail; carried in personal luggage through an airport; or walked across an unprotected border. In response to these threats, U.S. agencies, including DHS, DOD, DOE, and State, implemented programs to combat nuclear smuggling in foreign countries and the United States. DOD, DOE, and State fund, manage, and implement the global nuclear detection architecture's international programs. Many international detection programs were operating for several years before DNDO was created. For example, DOE's Materials Protection, Control, and Accounting program, initiated in 1995, provides support to the Russian Federation and other countries of concern to secure nuclear weapons and weapons material that may be at risk of theft or diversion. In addition, during the 1990s, the United States began deploying radiation detection equipment at borders in countries of the former Soviet Union. 
DOD's Cooperative Threat Reduction (CTR) program launched a variety of programs in the early 1990s to help address proliferation concerns in the former Soviet Union, including helping secure Russian nuclear weapons. Two other DOD programs have provided radiation portal monitors, handheld equipment, and radiation detection training to countries in the former Soviet Union and in Eastern Europe. Similarly, State programs have provided detection equipment and training to numerous countries. DHS, in conjunction with other federal, state, and local agencies, is responsible for combating nuclear smuggling in the United States and has provided radiation detection equipment, including portal monitors, personal radiation detectors (known as pagers), and radioactive isotope identifiers at U.S. ports of entry. All radiation detection devices have limitations in their ability to detect and identify nuclear material. Detecting attempted nuclear smuggling is difficult because there are many sources of radiation that are legal and not harmful when used as intended. These materials can trigger alarms--known as nuisance alarms--that may be indistinguishable in some cases from alarms that could sound in the event of a true case of nuclear smuggling. Nuisance alarms can be caused by patients who have recently had cancer treatments; a wide range of cargo with naturally occurring radiation (e.g., fertilizer, ceramics, and food products); and legitimate shipments of radiological sources for use in medicine and industry. In October 2005, a few months after its inception, DNDO completed its initial inventory of federal programs associated with detecting the illicit transport of radiological and nuclear materials. As part of this effort, DNDO defined the architecture's general approach: a multilayered detection framework of radiation detection equipment and interdiction activities to combat nuclear smuggling in foreign countries, at the U.S. border, and inside the United States. 
DNDO, in collaboration with other federal agencies, such as DOD, DOE, and State, analyzed the gaps in current planning and deployment strategies to determine the ability of individual layers of the architecture to successfully prevent illicit movement of radiological or nuclear materials or devices. DNDO identified several gap areas with respect to detecting potential nuclear smuggling, such as (1) land border crossings into the United States between formal points of entry, (2) small maritime craft (any vessel less than 300 gross tons) that enter the United States, and (3) international general aviation. In November 2006, DNDO completed a more detailed analysis of programs in the initial architecture. DNDO identified 72 programs across the federal government that focused on combating radiological and nuclear smuggling and on nuclear security, and it discussed these programs in depth by layer. The analysis also included a discussion of the current and anticipated budgets associated with each of these programs and each of the layers. In June 2008, DNDO released the Joint Annual Interagency Review of the Global Nuclear Detection Architecture. This report provides an updated analysis of the architecture by layer of defense and a discussion of the 74 programs now associated with each of the layers, as well as an estimate of the total budgets by layer. To address the gaps identified in the domestic portions of the architecture, DNDO has initiated pilot programs to address primary areas of concern or potential vulnerability. For example: For the land border in between ports of entry, DNDO and CBP are studying the feasibility of equipping CBP border patrol agents with portable radiological and nuclear detection equipment along the U.S. border. For small marine vessels, DNDO is working with the Coast Guard to develop and expand the coverage of radiological and nuclear detection capabilities that can be specifically applied in a maritime environment. 
For international general aviation, DNDO is working with CBP, the Transportation Security Administration, and other agencies to develop and implement radiological and nuclear detection capabilities to scan international general aviation flights to the United States for possible illicit radiological or nuclear materials. To date, we have received briefings on each of these programs from DNDO, but we have not yet fully reviewed how they are being implemented. We will examine each of these more closely during the course of our review. Our preliminary observation is that DNDO's pilot programs appear to be a step in the right direction for improving the current architecture. However, these efforts to address gaps are not being undertaken within the larger context of an overarching strategic plan. While each agency that has a role in the architecture may have its own planning documents, DNDO has not produced an overarching strategic plan that can guide its efforts to address the gaps and move to a more comprehensive global nuclear detection architecture. Our past work has discussed the importance of strategic planning. Specifically, we have reported that strategic plans should clearly define objectives to be accomplished, identify the roles and responsibilities for meeting each objective, ensure that the funding necessary to achieve the objectives is available, and employ monitoring mechanisms to determine progress and identify needed improvements. For example, such a plan would define how DNDO will achieve and monitor the goal of detecting the movement of radiological and nuclear materials through potential smuggling routes, such as small maritime craft or land borders in between ports of entry. Moreover, this plan would include agreed-upon processes and procedures to guide the improvement of the architecture and coordinate the activities of the participating agencies. 
DNDO and other agencies face a number of challenges in developing a global nuclear detection architecture, including (1) coordinating detection efforts across federal, state, and local agencies and with other nations, (2) dealing with the limitations of detection technology, and (3) managing the implementation of the architecture. Our past work on key aspects of international and domestic programs that are part of the architecture has identified a number of weaknesses. In order for the architecture to be effective, all of its parts need to be well thought out, managed, and coordinated. Because a chain is only as strong as its weakest link, limitations in any of the programs that constitute the architecture may ultimately limit its effectiveness. Specifically, in past work, we have identified the following difficulties that federal agencies have had in coordinating and implementing radiation detection efforts. We reported that DOD, DOE, and State had not coordinated their approaches to enhancing other countries' border crossings. Specifically, radiation portal monitors that State installed in more than 20 countries are less sophisticated than those DOD and DOE installed. As a result, some border crossings where U.S. agencies had installed radiation detection equipment were more vulnerable to nuclear smuggling than others. Since issuing our report, a governmentwide plan encompassing U.S. efforts to combat nuclear smuggling in other countries has been developed; duplicative programs have been consolidated; and coordination among the agencies, although still a concern, has improved. In 2005, we reported that there is no governmentwide guidance for border security programs that delineates agencies' roles and responsibilities, establishes regular information sharing, and defines procedures for resolving interagency disputes. In the absence of guidance for coordination, officials in some agencies questioned other agencies' roles and responsibilities.
More recently, in 2008, we found that levels of collaboration between U.S. and host government officials varied at some seaports participating in DHS's Container Security Initiative (CSI). In addition, we identified hurdles to cooperation between CSI teams and their counterparts in the host government, such as a host country's legal restrictions that CBP officials said prevent CSI teams from observing examinations. Furthermore, many international nuclear detection programs rely heavily on the host country to maintain and operate the equipment. We have reported that in some instances this reliance has been problematic. For example, about half of the portal monitors provided to one country in the former Soviet Union were never installed or were not operational. In addition, mobile vans equipped with radiation detection equipment furnished by State have limited usefulness because they cannot operate effectively in cold climates or are otherwise not suitable for conditions in some countries. Once the equipment is deployed, the United States has limited control over it, as we have previously reported. Specifically, once DOE finishes installing radiation equipment at a port and passes control of the equipment to the host government, the United States no longer controls the equipment's specific settings or its use by foreign customs officials. Settings can be changed, which may decrease the probability that the equipment will detect nuclear material. Within the U.S. borders, DNDO faces coordination challenges and will need to ensure that the problems with nuclear detection programs overseas are not repeated domestically. Many of the pilot programs DNDO is developing to address gaps in the architecture will rely heavily on other agencies to implement them. For example, DNDO is working closely with the Coast Guard and other federal agencies to implement DNDO's maritime initiatives to enhance detection of radiological and nuclear materials on small vessels.
However, maritime jurisdictional responsibilities and activities are shared among federal, state, regional, and local governments. As a result, DNDO will need to closely coordinate activities related to detecting radiological and nuclear materials with these entities, as well as ensure that users are adequately trained and technical support is available. DNDO officials told us they are closely coordinating with other agencies, and our work to assess this coordination is still underway. We will continue to explore these coordination activities and challenges as we continue our review. The ability to detect radiological and nuclear materials is a critical component of the global nuclear detection architecture; however, current technology may not be able to detect and identify all smuggled radiological and nuclear materials. In our past work, we found limitations with radiation detection equipment. For example, in a report on preventing nuclear smuggling, we found that a cargo container containing a radioactive source was not detected as it passed through radiation detection equipment that DOE had installed at a foreign seaport because the radiation emitted from the container was shielded by a large amount of scrap metal. Additionally, detecting actual cases of illicit trafficking in weapons-usable nuclear material is complicated: one of the materials of greatest concern in terms of proliferation--highly enriched uranium--is among the most difficult materials to detect because of its relatively low level of radioactivity. We reported that current portal monitors deployed at U.S. borders can detect the presence of radiation but cannot distinguish between harmless radiological materials, such as ceramic tiles, fertilizer, and bananas, and dangerous nuclear materials, such as plutonium and uranium. DNDO is currently testing a new generation of portal monitors. We have raised continuing concerns about DNDO's efforts to develop and test these advanced portal monitors.
We currently have additional work underway examining the current round of testing and expect to report on our findings in September 2008. Environmental conditions can affect radiation detection equipment's performance and sustainability, as we also have previously reported. For example, wind disturbances can vibrate the equipment and interfere with its ability to detect radiation. In addition, sea spray may corrode radiation detection equipment and its components that are operated in ports or near water. Its corrosive nature, combined with other conditions such as coral in the water, can accelerate the degradation of equipment. It is important to note that radiation detection equipment is only one of the tools that customs inspectors and border guards must use to combat nuclear smuggling. Combating nuclear smuggling requires an integrated approach that includes equipment, proper training, and intelligence gathering on smuggling operations. In the past, most known interdictions of weapons-usable nuclear materials have resulted from police investigations rather than from radiation detection equipment installed at border crossings. The task DNDO has been given--developing an architecture to keep radiological and nuclear materials from entering the country--is a complex and large undertaking. DNDO has been charged with developing an architecture that depends on programs implemented by other agencies. This lack of control over these programs poses a challenge for DNDO in ensuring that all individual programs within the global nuclear detection architecture will be effectively integrated. Moreover, implementing and sustaining the architecture requires adequate resources and capabilities to meet needed commitments. However, the majority of the employees in DNDO's architecture office are detailees on rotation from other federal agencies or are contractors. This type of staffing approach allows DNDO to tap into other agencies' expertise in radiological and nuclear detection.
However, officials told us that staff turnover may limit the retention and depth of institutional memory since detailees return to their home organizations after a relatively short time. In some cases, there have been delays in filling these vacancies. We will continue to examine this potential resource challenge as we complete our work. In spite of these challenges, DNDO's efforts to develop a global nuclear detection architecture have yielded some benefits, according to DOD, DOE, and State officials. For example, an official from the State Department told us that DNDO is working through State's Global Initiative to Combat Nuclear Terrorism to develop model guidelines that other nations can use to establish their own nuclear detection architectures and recently sponsored a related workshop. In addition, DOE officials said that DNDO's actions have helped broaden their perspective on the deployment of radiation detection equipment overseas. Previously, the U.S. government had been more focused on placing fixed detectors at particular sites, but as a result of DNDO's efforts to identify gaps in the global detection network, DOE has begun to work with law enforcement officials in other countries to improve detection capabilities for the land in between ports of entry. Finally, DNDO, DOD, DOE, and the Office of the Director of National Intelligence for Science and Technology are now formally collaborating on nuclear detection research and development and they have signed a memorandum of understanding (MOU) to guide these efforts. The MOU will integrate research and development programs by, for example, providing open access to research findings in order to leverage this knowledge and to reduce conflict between different agency programs. In addition, the MOU encourages joint funding of programs and projects and calls on the agencies to coordinate their research and development plans. 
In our ongoing work, we will examine DNDO's progress in carrying through on these initiatives. DNDO reported that approximately $2.8 billion was budgeted in fiscal year 2007 for 74 programs focused on preventing and detecting the illicit transport of radiological or nuclear materials. These programs were primarily administered by DHS, DOD, DOE, and State and spanned all layers of the global nuclear detection architecture. Specifically: $1.1 billion funded 28 programs focused on the international aspects of the architecture; $221 million funded 9 programs to support detection of radiological and nuclear material at the U.S. border; $918 million funded 16 programs dedicated to detecting and securing radiological or nuclear materials within the U.S. borders; and $577 million funded 34 cross-cutting programs that support many different layers of the architecture through, for example, research and development or technical support to users of the detection equipment. The fiscal year 2007 budget of $2.8 billion will not sustain the architecture over the long term because additional programs and equipment will be implemented to address the gaps. For example, this amount does not include the cost estimates related to acquiring and deploying the next generation of advanced portal monitors that are currently being tested. In addition, DNDO is just beginning new efforts to mitigate gaps in the architecture, and budget estimates for these activities are limited. We are in the process of reviewing this cost information and will provide more detailed analysis in our final report. DNDO has been given an important and complex task--develop a global nuclear detection architecture to combat nuclear smuggling and keep radiological and nuclear weapons or materials from entering the United States. This undertaking involves coordinating a vast array of programs and technological resources that are managed by many different agencies and span the globe.
Since its creation 3 years ago, DNDO has conceptually mapped the current architecture and identified how it would like the architecture to evolve in the near term. While DNDO's vision of a more comprehensive architecture is laudable, to achieve this goal, it will need to address a number of key challenges, including building close coordination and cooperation among the various agencies involved and developing and deploying more advanced radiation detection technology. Although DNDO has taken some steps to achieve these ends, it has not done so within the larger context of an overarching strategic plan with clearly established goals, responsibilities, priorities, resource needs, and mechanisms for assessing progress along the way. Developing and implementing a global nuclear detection architecture will likely take several years, cost billions of dollars, and rely on the expertise and resources of agencies and programs across the government. Moving forward, DNDO should work closely with its counterparts within DHS, as well as at other departments, to develop a comprehensive strategic plan that helps safeguard the investments made to date, more closely links future goals with the resources necessary to achieve those goals, and enhances the architecture's ability to operate in a more cohesive and integrated fashion. We recommend that the Secretary of Homeland Security, in coordination with the Secretary of Defense, the Secretary of Energy, and the Secretary of State, develop a strategic plan to guide the development of a more comprehensive global nuclear detection architecture. Such a plan should (1) clearly define objectives to be accomplished, (2) identify the roles and responsibilities for meeting each objective, (3) identify the funding necessary to achieve those objectives, and (4) employ monitoring mechanisms to determine programmatic progress and identify needed improvements. We provided a draft of the information in this testimony to DNDO.
DNDO provided oral comments on the draft, concurred with our recommendations, and provided technical comments, which we incorporated as appropriate. Mr. Chairman, this concludes my prepared statement. We will continue our review and plan to issue a report in early 2009. I would be pleased to answer any questions that you or other Members of the Committee have at this time. For further information on this testimony, please contact me at (202) 512- 3841 or [email protected]. Glen Levis, Assistant Director, Elizabeth Erdmann, Rachel Girshick, Sandra Kerr, and Tommy Williams made key contributions to this statement. Additional assistance was provided by Omari Norman and Carol Herrnstadt Shulman. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In April 2005, a Presidential Directive established the Domestic Nuclear Detection Office (DNDO) within the Department of Homeland Security to enhance and coordinate federal, state, and local efforts to combat nuclear smuggling domestically and abroad. DNDO was directed to develop, in coordination with the departments of Defense (DOD), Energy (DOE), and State (State), an enhanced global nuclear detection architecture--an integrated system of radiation detection equipment and interdiction activities. DNDO implements the domestic portion of the architecture, while DOD, DOE, and State are responsible for related programs outside the United States. This testimony provides preliminary observations based on ongoing work addressing (1) the status of DNDO's efforts to develop a global nuclear detection architecture, (2) the challenges DNDO and other federal agencies face in implementing the architecture, and (3) the costs of the programs that constitute the architecture. This statement draws on prior GAO reviews of programs constituting the architecture and GAO's work on strategic planning. According to GAO's preliminary work to date, DNDO has taken steps to develop a global nuclear detection architecture but lacks an overarching strategic plan to help guide how it will achieve a more comprehensive architecture. Specifically, DNDO has developed an initial architecture after coordinating with DOD, DOE, and State to identify 74 federal programs that combat smuggling of nuclear or radiological material. DNDO has also identified gaps in the architecture, such as land border crossings into the United States between formal points of entry, small maritime vessels, and international general aviation. Although DNDO has started to develop programs to address these gaps, it has not yet developed an overarching strategic plan to guide its transition from the initial architecture to a more comprehensive architecture.
For example, such a plan would define across the entire architecture how DNDO would achieve and monitor its goal of detecting the movement of radiological and nuclear materials through potential smuggling routes, such as small maritime craft or land borders in between points of entry. The plan would also define the steps and resources needed to achieve a more comprehensive architecture and provide metrics for measuring progress toward goals. DNDO and other federal agencies face a number of coordination, technological, and management challenges. First, prior GAO reports have demonstrated that U.S.-funded radiological detection programs overseas have proven problematic to implement and sustain and have not been effectively coordinated, although there have been some improvements in this area. Second, detection technology has limitations and cannot detect and identify all radiological and nuclear materials. For example, smugglers may be able to effectively mask or shield radiological materials so that they evade detection. Third, DNDO faces challenges in managing implementation of the architecture. DNDO has been charged with developing an architecture that depends on programs implemented by other agencies. This responsibility poses a challenge for DNDO in ensuring that the individual programs within the global architecture are effectively integrated and coordinated to maximize the detection and interdiction of radiological or nuclear material. According to DNDO, approximately $2.8 billion was budgeted in fiscal year 2007 for the 74 programs included in the global nuclear detection architecture. Of this $2.8 billion, $1.1 billion was budgeted for programs to combat nuclear smuggling internationally; $220 million was devoted to programs to support the detection of radiological and nuclear material at the U.S. 
border; $900 million funded security and detection activities within the United States; and approximately $575 million was used to fund a number of cross-cutting activities. The future costs for DNDO and other federal agencies to address the gaps identified in the initial architecture are not yet known or included in these amounts.
Any discussion of readiness measurement must start with SORTS. This automated system, which functions as the central listing for more than 9,000 military units, is the foundation of DOD's unit readiness assessment process and is a primary source of information used for reviews at the joint and strategic levels. The system's database indicates, at a selected point in time, the extent to which these units possess the required resources and training to undertake their wartime missions. Units regularly report this information using a rating system that comprises various indicators on the status of personnel, equipment, supplies, and training. SORTS is intended to enable the Joint Staff, the combatant commands, and the military services to, among other things, prepare lists of readily available units, assist in identifying or confirming major constraints on the employment of units, and confirm shortfalls and distribution problems with unit resources. Until the early 1990s, DOD defined "readiness" narrowly in terms of the ability of units to accomplish the missions for which they were designed, and SORTS was the only nonservice-specific system DOD had to measure readiness. Even today, SORTS remains an important component of readiness assessment in that data from the system is used extensively by the services to formulate a big-picture view of readiness. However, limitations to SORTS have been well documented for many years by various audit and oversight organizations. For example, prior reviews by our office and others have found: SORTS represents a snapshot in time and does not signal impending changes in readiness. SORTS relies on military judgment for certain ratings, including the commanders' overall rating of unit readiness. In some cases, SORTS ratings reflect a higher or lower rating than the reported analytical measures support. 
However, DOD officials view subjectivity in SORTS reports as a strength because the commanders' judgments provide professional military assessments of unit readiness. The officials also note that much of the information in the SORTS reports is objective and quantitative. The broad measurements that comprise SORTS ratings for resource availability may mislead managers because they are imprecise and therefore may mask underlying problems. For example, SORTS allows units to report the same capability rating for personnel strength even though their personnel strength may differ by 10 percent. SORTS data is maintained in multiple databases located at combatant commands, major commands, and service headquarters and is not synchronized across the databases. SORTS data may be out-of-date or nonexistent for some units registered in the database because reporting requirements are not enforced. Army SORTS procedures that require review of unit reports through the chain of command significantly delay the submission of SORTS data to the Joint Staff. DOD is taking actions to address some of these limitations. The Chairman of the Joint Chiefs of Staff was directed last year--in the Defense Planning Guidance--to develop a plan for improving DOD's readiness assessment system. Although it has yet to be approved, the Joint Staff plan calls for a phased improvement to the readiness assessment system, starting with upgrades to SORTS. During the first phase of the plan, the Joint Staff is addressing technical limitations of SORTS. One of the objectives, for instance, is to ensure that the data is synchronized DOD-wide across multiple databases. Future phases of the Joint Staff plan would link SORTS with other databases in a common computer environment to make readiness information more readily accessible to decisionmakers. In addition, the Joint Staff plan calls for upgrades to SORTS that will make the system easier to use. 
Separate from the Joint Staff plan, the services are developing or implementing software to automate the process of entering SORTS data at the unit level. These technical upgrades are aimed at improving the timeliness and accuracy of the SORTS database and, therefore, are positive steps. They, however, will not address some of the inherent limitations of the system. For instance, the upgrades will not address the inability of the system to signal impending changes in readiness. In addition, the upgrades will not address the lack of precision in reporting unit resources and training. Another step DOD has taken to improve its readiness assessment capability is to institute a process known as the Joint Monthly Readiness Review. The joint review was initiated toward the end of 1994 and has matured over the last year or so. It represents DOD's attempt to look beyond the traditional unit perspective provided by SORTS--although SORTS data continues to play an important role--and to introduce a joint component to readiness assessment. We believe the joint review process has several notable features. First, it brings together readiness assessments from a broad range of DOD organizations and elevates readiness concerns to senior military officials, including the Vice Chairman of the Joint Chiefs of Staff. Second, the joint review emphasizes current and near-term readiness and incorporates wartime scenarios based on actual war plans and existing resources. Third, it adds a joint perspective by incorporating readiness assessments from the combatant commands. The services and combat support agencies also conduct readiness assessments for the joint review. Fourth, the joint review is conducted on a recurring cycle--four times a year--that has helped to institutionalize the process of readiness assessment within DOD. Finally, the joint review includes procedures for tracking and addressing reported deficiencies. 
I would like to note, however, that the DOD components participating in the review are accorded flexibility in how they conduct their assessments. The 11 combatant commands, for instance, assess readiness in eight separate functional areas, such as mobility; infrastructure; and intelligence, surveillance, and reconnaissance. To do this, each command has been allowed to independently develop its own measures. In addition, the process depends heavily on the judgment of military commanders to formulate their assessments. Officials involved with the joint review view this subjectivity as a strength, not a weakness, of the process. They said readiness assessment is influenced by many factors, not all of which are readily measured by objective indicators. One consequence, however, is that the joint review cannot be used to make direct comparisons among the commands in the eight functional areas. We should also point out that the services, in conducting their portion of the joint review, depend extensively on SORTS data. As I mentioned earlier, SORTS has certain inherent limitations. DOD is required under 10 U.S.C. 482 to prepare a quarterly readiness report to Congress. Under this law, DOD must specifically describe (1) each readiness problem and deficiency identified, (2) planned remedial actions, and (3) the key indicators and other relevant information related to each identified problem and deficiency. In mandating the report, Congress hoped to enhance its oversight of military readiness. The first report was submitted to Congress in May 1996. DOD bases its quarterly reports on briefings to the Senior Readiness Oversight Council. The Council, comprising senior civilian and military leaders, meets monthly and is chaired by the Deputy Secretary of Defense. The briefings to the Council are summaries from the Joint Monthly Readiness Review. 
In addition, the Deputy Secretary of Defense periodically tasks the Joint Staff and the services to brief the Council on various readiness topics. From these briefings, the Joint Staff drafts the quarterly report. It is then reviewed within DOD before it is submitted to Congress. We recently reviewed several quarterly reports to determine whether they (1) accurately reflect readiness information briefed to the Council and (2) provide information needed for congressional oversight. Because minutes of the Council's meetings are not maintained, we do not know what was actually discussed. Lacking such records, we traced information in the quarterly readiness reports to the briefing documents prepared for the Council. Our analysis showed that the quarterly reports accurately reflected information from these briefings. In fact, the quarterly reports often described the issues using the same wording contained in the briefings to the Council. The briefings, as well as the quarterly reports, presented a highly aggregated view of readiness, focusing on generalized strategic concerns. They were not intended to and did not highlight problems at the individual combatant command or unit level. DOD officials offered this as an explanation for why visits to individual units may yield impressions of readiness that are not consistent with the quarterly reports. Our review also showed that the quarterly reports did not fulfill the legislative reporting requirements under 10 U.S.C. 482 because they lacked the specific detail on deficiencies and planned remedial actions needed for congressional oversight. Lacking such detail, the quarterly reports provided Congress with only a vague picture of DOD's readiness problems. For example, one report stated that Army personnel readiness was a problem, but it did not provide data on the numbers of personnel or units involved. Further, the report did not discuss how the deficiency affected the overall readiness of the units involved. 
Also, the quarterly reports we reviewed did not specifically describe planned remedial actions. Rather, they discussed remedial actions only in general terms, with few specific details, and provided little insight into how DOD planned to correct the problems. Congress has taken steps recently to expand the quarterly reporting requirements in 10 U.S.C. 482. Beginning in October 1998, DOD will be required to incorporate 19 additional readiness indicators in the quarterly reports. To understand the rationale for these additional indicators, it may be helpful to review their history. In 1994, we told this Subcommittee that SORTS did not provide all the information that military officials believed was needed for a comprehensive assessment of readiness. We reported on 26 indicators that were not in SORTS but that military commanders said were important for a comprehensive assessment of readiness. We recommended that the Secretary of Defense direct his office to determine which indicators were most relevant to building a comprehensive readiness system, develop criteria to evaluate the selected indicators, prescribe how often the indicators should be reported to supplement SORTS data, and ensure that comparable data be maintained by the services to facilitate trend analysis. DOD contracted the Logistics Management Institute (LMI) to study the indicators discussed in our report, and LMI found that 19 of them could be of high or medium value for monitoring critical aspects of readiness. The LMI study, issued in 1994, recommended that DOD (1) identify and assess other potential indicators of readiness, (2) determine the availability of data to monitor indicators selected, and (3) estimate benchmarks to assess the indicators. Although our study and the LMI study concluded that a broader range of readiness indicators was needed, both left open how DOD could best integrate additional measures into its readiness reporting. 
The 19 indicators that Congress is requiring DOD to include in its quarterly reports are very similar to those assessed in the LMI study. (See app. 1 for a list of the 19 indicators DOD is to include in the quarterly reports.) Last month, DOD provided Congress with an implementation plan for meeting the expanded reporting requirements for the quarterly report. We were asked to comment on this plan today. Of course, a thorough assessment of the additional readiness indicators will have to wait until DOD begins to incorporate them into the quarterly reports in October 1998. However, on the basis of our review of the implementation plan, we have several observations to make. Overall, the implementation plan could be enhanced if it identified the specific information to be provided and the analysis to be included. The plan appears to take a step backward from previous efforts to identify useful readiness indicators. In particular, the LMI study and subsequent efforts by the Office of the Secretary of Defense were more ambitious attempts to identify potentially useful readiness indicators for understanding, forecasting, and preventing readiness shortfalls. The current implementation plan, in contrast, was developed under the explicit assumption that existing data sources would be used and that no new reporting requirements would be created for personnel in the field. Further, the plan states that DOD will not provide data for 7 of the 19 indicators because either the data is already provided to Congress through other documents or there is no reasonable or accepted measurement. DOD officials, however, acknowledged that their plans will continue to evolve and said they will continue to work with this Subcommittee to ensure the quarterly report supports congressional oversight needs. Lastly, the plan does not present a clear picture of how the additional indicators will be incorporated into the quarterly report. 
For example, the plan is mostly silent on the nature and extent of analysis to be included and on the format for displaying the additional indicators. We also have concerns about how DOD plans to report specific indicators. For example: According to the plan, SORTS will be the source of data for 4 of the 19 indicators--personnel status, equipment availability, unit training and proficiency, and prepositioned equipment. By relying on SORTS, DOD may miss opportunities to provide a more comprehensive picture of readiness. For example, the LMI study points out that SORTS captures data only on major weapon systems and other critical equipment. That study found value in monitoring the availability of equipment not reported through SORTS. In all, the LMI study identified more than 100 potential data sources outside SORTS for 3 of these 4 indicators--personnel status, equipment availability, and unit training and proficiency. (The LMI study did not include prepositioned equipment as a separate indicator.) DOD states in its implementation plan that 2 of the 19 indicators--operations tempo (OPTEMPO) and training funding--are not relevant indicators of readiness. DOD further states that it will not include this data in its quarterly readiness reports because it is already provided to Congress in budget documents. However, the LMI study rated these two indicators as having a high value for monitoring readiness. The study stated, for instance, that "programmed OPTEMPO is a primary means of influencing multiple aspects of mid-term readiness" and that "a system for tracking the programming, budgeting, and execution of OPTEMPO would be a valuable management tool that may help to relate resources to readiness."
For the indicator showing equipment that is non-mission capable, the plan states that the percentage of equipment reported as non-mission capable for maintenance and non-mission capable for supply will provide insights into how parts availability, maintenance shortfalls, or funding shortfalls may be affecting equipment readiness. According to the plan, this data will be evaluated by examining current non-mission capable levels versus the unit standards. While this type of analysis could indicate a potential readiness problem if non-mission capable rates are increasing, it will not show why these rates are increasing. Thus, insights into equipment readiness will be limited. Mr. Chairman, there are two areas where we think DOD has an opportunity to take further actions to improve its readiness reporting. The first area concerns the level of detail included in the quarterly readiness reports to Congress. In a draft report we will issue later this month, we have recommended that the Secretary of Defense take steps to better fulfill the legislative reporting requirements under 10 U.S.C. 482 by providing (1) supporting data on key readiness deficiencies and (2) specific information on planned remedial actions in its quarterly readiness reports. As we discussed earlier, the quarterly reports we reviewed gave Congress only a vague picture of readiness. Adding more specific detail should enhance the effectiveness of the reports as a congressional oversight tool. DOD has concurred with our recommendation. The second area where DOD can improve its readiness reporting concerns DOD's plan to include additional readiness indicators in the quarterly report. The plan would benefit from the following changes: Include all 19 required indicators in the report. Make the report a stand-alone document by including data for all the indicators rather than referring to previously reported data. 
Further investigate sources of data outside SORTS, such as those reviewed in the LMI report, that could provide insight into the 19 readiness indicators. Develop a sample format showing how the 19 indicators will be displayed in the quarterly report. Provide further information on the nature and extent of analysis to be included with the indicators. DOD recognizes in its plan that the type and quality of information included in the quarterly reports may not meet congressional expectations and will likely evolve over time. In our view, it would make sense for DOD to correct known shortcomings to the current implementation plan and present an updated implementation plan to Congress prior to October 1998. Mr. Chairman, that concludes my prepared statement. We would be glad to respond to any questions you or other Members of the Subcommittee may have. The following are the additional indicators the Department of Defense is required, under 10 U.S.C. 482, to include in its quarterly reports to Congress beginning in October 1998.

1. Personnel status, including the extent to which members of the armed forces are serving in positions outside of their military occupational specialty, serving in grades other than the grades for which they are qualified, or both.
2. Historical data and projected trends in personnel strength and status.
3. Recruit quality.
4. Borrowed manpower.
5. Personnel stability.
6. Personnel morale.
7. Recruiting status.
8. Training unit readiness and proficiency.
9. Operations tempo.
10. Training funding.
11. Training commitments and deployments.
12. Deployed equipment.
13. Equipment availability.
14. Equipment that is not mission capable.
15. Age of equipment.
16. Condition of nonpacing items.
17. Maintenance backlog.
18. Availability of ordnance and spares.
19. Status of prepositioned equipment.
GAO discussed the Department of Defense's (DOD) process for assessing and reporting on military readiness, focusing on: (1) what corrective action DOD has taken to improve its readiness assessment system; (2) whether military readiness reports provided quarterly to Congress effectively support congressional oversight; and (3) whether further improvements are needed to DOD's process. GAO noted that: (1) over the last few years, DOD has taken action to improve readiness assessment; (2) DOD has made technical enhancements to the Status of Resources and Training System (SORTS)--the automated system it uses to assess readiness at the unit level; (3) DOD also has established two forums--the Joint Monthly Readiness Review and the Senior Readiness Oversight Council--for evaluating readiness from a joint and strategic perspective; (4) however, SORTS remains the basic building block for readiness assessment, and inherent limitations to this system, such as its inability to signal impending changes in readiness and its imprecise ratings for unit resources and training, may be reflected in reviews at the joint and strategic levels; (5) DOD's quarterly reports to Congress, which are based on information provided to the Senior Readiness Oversight Council, provide only a vague description of readiness deficiencies and planned remedial actions; consequently, in their present form they are not as effective as they could be as a congressional oversight tool; (6) DOD is required to expand on these reports beginning in October 1998 by adding indicators mandated by Congress; (7) GAO has concerns about DOD's current plans for implementing this expanded reporting requirement; (8) for example, current plans do not present a clear picture of how the additional readiness indicators will be incorporated into the quarterly report; (9) GAO's work has identified two areas in which DOD can improve its readiness reporting to Congress; (10) DOD should provide more specific descriptions and supporting
information for the key readiness deficiencies and planned remedial actions identified in its quarterly report; and (11) DOD can make improvements to its current plans for adding readiness indicators to the quarterly report.
In May 2009, the President announced the creation of a new Global Health Initiative (GHI) and proposed $63 billion in funding for all global health programs, including HIV/AIDS, malaria, tuberculosis, and maternal and child health, through 2014. According to the proposal, the majority of this funding--$51 billion, or 81 percent--is slated for global HIV/AIDS, tuberculosis, and malaria programs. For fiscal year 2009, State and USAID allocated about $7.3 billion for global health and child survival programs, including more than $5.6 billion for HIV/AIDS programs. For fiscal year 2010, State and USAID allocated approximately $7.8 billion for global health and child survival programs, including $5.7 billion for HIV/AIDS. For fiscal year 2011, the President proposed spending $8.5 billion on global health and child survival programs, including $5.9 billion for HIV/AIDS. In February 2010, the administration released a consultation document on GHI implementation, focusing on coordination and integration of global health programs, among other things, and setting targets for achieving health outcomes. The document also proposed selection of up to 20 countries--known as GHI Plus countries--that will receive additional funding and technical assistance under the GHI. Congress first authorized PEPFAR in 2003 and, in doing so, created within State a Coordinator of the U.S. Government Activities to Combat HIV/AIDS Globally, which State redesignated the Office of the U.S. Global AIDS Coordinator (OGAC). OGAC establishes overall PEPFAR policy and program strategies; coordinates PEPFAR programs; and allocates PEPFAR resources from the Global Health and Child Survival account to U.S. implementing agencies, including USAID and the Department of Health and Human Services' (HHS) CDC. USAID and CDC also receive direct appropriations to support global HIV/AIDS and other global health programs, such as tuberculosis, malaria, and support for maternal and child health. 
In fiscal years 2004 through 2008--the first 5 years of PEPFAR--the U.S. government directed more than $18 billion to PEPFAR implementing agencies and the Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund). In 2008, Congress reauthorized PEPFAR at $48 billion to continue and expand U.S.-funded HIV/AIDS and other programs through fiscal year 2013. Although PEPFAR initially targeted 15 countries, known as focus countries, since its establishment PEPFAR has made significant investments in 31 partner countries and 3 regions. Representatives of PEPFAR implementing agencies (country teams) jointly develop country operational plans (COP) for the 15 focus countries and an additional 16 nonfocus countries, as well as regional operational plans (ROP) for three regions, to document U.S. investments in, and anticipated results of, U.S.- funded programs to combat HIV/AIDS. The country teams submit the operational plans to OGAC for review and ultimate approval by the U.S. Global AIDS Coordinator. As such, these operational plans serve as the basis for approving annual U.S. bilateral HIV/AIDS funding, notifying Congress, and allocating and tracking budgets and targets. Some nonfocus countries receiving U.S. HIV/AIDS funding do not submit a PEPFAR operational plan; OGAC reviews and approves HIV/AIDS-related foreign assistance funding through foreign assistance operational plans. Table 1 shows the countries and regions that received U.S. foreign assistance for HIV/AIDS programs in fiscal years 2001-2008. In 2009, UNAIDS estimated that $7 billion would be needed in developing countries in 2010 to reach HIV/AIDS treatment and care program targets, which are generally defined as 80 percent of the target population requiring treatment. Sub-Saharan Africa makes up about half (49 percent) of estimated needs for all HIV/AIDS programs in developing countries. 
UNAIDS's estimate includes provision of ART, testing and counseling, treatment for opportunistic infections, nutritional support, laboratory testing, palliative care, and the cost of drug-supply logistics. The costs for CD4 blood tests are also included. In fiscal years 2006-09, PEPFAR funding for ART made up nearly half (46 percent) of PEPFAR's approved budget for prevention, treatment, and care programs. (See fig. 1.) ART funding generally comprised treatment services (about 55 percent of approved treatment funding); ARV drug procurement (about 32 percent of approved treatment funding); and laboratory infrastructure (about 13 percent of approved treatment funding). In 2008, OGAC reported that tentative approval of generic ARV drugs had generated significant savings for PEPFAR. As of September 2010, HHS's Food and Drug Administration had approved, or tentatively approved, 116 ARV formulations under its expedited review process, which allows all ARV drugs to be rapidly reviewed for quality standards and subsequently cleared for purchase under PEPFAR. According to PEPFAR's Five-Year Strategy, released in December 2009, PEPFAR plans to provide direct support for more than 4 million people on ART, more than doubling the number of people directly supported on treatment during the first 5 years of PEPFAR. The strategy seeks to focus PEPFAR support on specific individuals requiring ART by prioritizing individuals with CD4 cell counts under 200/mm3. In addition, in countries with high coverage rates that are expanding eligibility for treatment, PEPFAR will provide technical assistance and support for the overall treatment infrastructure. PEPFAR also will expand efforts to better link testing and counseling with treatment and care and, in conjunction with its prevention of mother-to-child transmission programs, will support expanded treatment to pregnant women.
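The treatment-budget composition above is a simple share calculation. A minimal sketch in Python, using the approximate percentages reported above; the $1,000 million total and the function name are hypothetical, for illustration only:

```python
# Approximate shares of PEPFAR's approved ART treatment budget, FY2006-09,
# as reported above. The $1,000M total below is hypothetical.
SHARES = {
    "treatment services": 0.55,
    "ARV drug procurement": 0.32,
    "laboratory infrastructure": 0.13,
}

def split_budget(total_millions):
    """Allocate a total treatment budget (in millions) across the reported shares."""
    return {k: round(total_millions * s, 1) for k, s in SHARES.items()}

print(split_budget(1000.0))
```

A quick sanity check is that the three shares sum to 1.0, so the allocations always exhaust the total.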
As we have previously reported, federal financial standards call on agencies to use costing methods in their planning to determine resources needed to evaluate program performance, among other things. Program managers should use costing information to improve the efficiency of programs. In addition, such information can be used by Congress to make decisions about allocating financial resources, authorizing and modifying programs, and evaluating program performance. In 2008, we found that PEPFAR country teams identified and analyzed program costs in varying ways, and we recommended that the Secretary of State direct OGAC to provide guidance to PEPFAR country teams on using costing information in their planning and budgeting. Overall, U.S. bilateral spending on global HIV/AIDS and other health programs generally increased in fiscal years 2001 through 2008, particularly for HIV/AIDS programs. From 2001 through 2003, U.S. bilateral spending on global HIV/AIDS rose, while spending on other global health programs dropped slightly. As would be expected given PEPFAR's significant investment, from fiscal years 2004 through 2008, U.S. bilateral HIV/AIDS spending showed the greatest increase in PEPFAR focus countries, relative to nonfocus countries and regions with PEPFAR operational plans and other countries receiving HIV/AIDS assistance. In addition, our analysis determined that U.S. spending for other health-related assistance also increased most for PEPFAR focus countries. Spending growth rates varied among three key regions--sub-Saharan Africa, Asia, and Latin America and the Caribbean--as did these regions' shares of bilateral HIV/AIDS and other health spending following establishment of PEPFAR. (See app. II for additional information on U.S.
bilateral foreign assistance spending on HIV/AIDS and other health programs in fiscal years 2001 through 2008.) Overall, U.S. bilateral foreign assistance spending on both global HIV/AIDS and other health programs increased in fiscal years 2001 through 2008. Although spending on other health programs decreased slightly from 2001 through 2003, U.S. spending on both HIV/AIDS and other health-related foreign assistance programs grew from 2004 through 2008, the first 5 years of PEPFAR. Annual growth in U.S. spending on global HIV/AIDS was more robust and consistent than annual growth for other global health spending (see table 2 and fig. 2).

2001-2003. Prior to the implementation of PEPFAR, U.S. bilateral spending on HIV/AIDS programs grew rapidly, while U.S. spending on other health programs fell slightly.

HIV/AIDS. The U.S. government spent less on global HIV/AIDS programs than on other health-related programs in fiscal years 2001-2003. However, spending on HIV/AIDS grew rapidly prior to implementation of PEPFAR.

Other health. U.S. spending on other health-related programs decreased from 2001 to 2003. However, total spending for these programs during this period was more than three times greater than the total for HIV/AIDS-related foreign assistance programs.

2004-2008. Following implementation of PEPFAR, U.S. bilateral spending on both global HIV/AIDS and other health-related programs increased overall, with more rapid and consistent growth in spending for HIV/AIDS programs.

HIV/AIDS. In fiscal year 2004, U.S. spending on HIV/AIDS programs was roughly equivalent to the total for the previous 3 years combined; in fiscal year 2008, annual U.S. spending on global HIV/AIDS programs was nearly three times the 2004 total. In addition, U.S. spending on HIV/AIDS programs in 2005 was, for the first time, higher than spending on other health programs. By 2008, almost twice as much was spent on HIV/AIDS programs as on other health programs.

Other health. Although U.S.
spending on other health programs also increased overall from fiscal year 2004 through 2008, annual spending was less consistent and decreased in 2006 and 2007. Our analysis shows differences in growth trends in U.S. bilateral spending on HIV/AIDS and other health programs before and after implementation of PEPFAR for three distinct groups of countries: PEPFAR focus countries, nonfocus countries and regions with PEPFAR operational plans, and all other countries receiving HIV/AIDS foreign assistance (i.e., nonfocus countries receiving HIV/AIDS assistance that do not submit PEPFAR operational plans to OGAC). In fiscal years 2001 through 2003, U.S. bilateral spending on global HIV/AIDS programs grew for countries in all three groups, while spending on other health programs increased at lower rates. From 2004 through 2008, the average annual growth rate in U.S. bilateral spending on global HIV/AIDS programs was, predictably, greatest in focus countries, as was spending on other health programs in these countries (see table 3). For the 15 countries that would become PEPFAR focus countries, U.S. bilateral spending on both HIV/AIDS and other health programs increased steadily from 2001 through 2003, with higher growth for HIV/AIDS spending. From 2004 through 2008, U.S. bilateral spending on global HIV/AIDS-related foreign assistance programs continued to increase significantly, while spending on other health programs grew modestly overall. From 2004 through 2008, total U.S. bilateral spending on HIV/AIDS-related foreign assistance programs in PEPFAR focus countries was more than seven times greater than spending on other health programs. (See fig. 3.) For the 16 nonfocus countries and three regions that eventually would submit operational plans to receive PEPFAR funding, U.S. bilateral spending on both HIV/AIDS and other health-related foreign assistance programs increased from 2001 through 2003 (see fig. 
4), but at lower rates and less consistently than for the focus countries. From 2001 through 2003, U.S. bilateral spending on other health-related foreign assistance programs was about three times greater than spending on HIV/AIDS programs in these countries and regions, although spending on HIV/AIDS programs grew more rapidly. From 2004 through 2008, U.S. bilateral spending on both global HIV/AIDS and other health programs increased overall, with greater spending on other health programs for the 5-year period. In all other countries that received some U.S. assistance for HIV/AIDS programs from 2001 through 2008 but did not submit PEPFAR operational plans--a total of 47 countries--U.S. bilateral spending on both HIV/AIDS and other health-related foreign assistance programs fluctuated from year to year but increased overall (see fig. 5). In addition, U.S. bilateral spending for other health programs greatly exceeded spending for HIV/AIDS programs both before and after the establishment of PEPFAR. From 2001 through 2003, U.S. bilateral spending on HIV/AIDS programs in these countries nearly quadrupled; spending on other health programs amounted to more than 12 times that for HIV/AIDS programs and increased slightly over the period. From 2004 through 2008, U.S. bilateral spending on other health programs continued to greatly exceed spending on HIV/AIDS-related programs in these countries; spending on both HIV/AIDS and other health programs fluctuated from year to year and grew at similar rates overall. In fiscal years 2001 through 2008, the majority of U.S. bilateral HIV/AIDS program spending was in sub-Saharan Africa, Asia, and Latin America and the Caribbean--three regions where the 15 PEPFAR focus countries and 14 of the 16 nonfocus countries with PEPFAR operational plans are located--with the greatest U.S. spending on global HIV/AIDS foreign assistance programs in sub-Saharan Africa. From 2004 through 2008, following the establishment of PEPFAR, the share of U.S. 
bilateral spending on other health programs directed to countries in sub-Saharan Africa and Latin America and the Caribbean declined, while the share of U.S. spending on other health programs in Asia and in other regions increased. (See fig. 6.) Average annual growth rates in spending on HIV/AIDS and other health programs also varied significantly across these three regions (see table 4: Average Annual Growth Rates for Global U.S. HIV/AIDS and Other Health-Related Foreign Assistance Spending, by Region, Fiscal Years 2001-2008). U.S. bilateral foreign assistance spending on HIV/AIDS programs in sub-Saharan Africa--which includes 12 of the 15 focus countries and 8 of the 16 nonfocus countries with PEPFAR operational plans--increased rapidly both before and after the establishment of PEPFAR. In 2003, U.S. bilateral spending on HIV/AIDS programs was nearly two times greater, and by 2008 was more than four times greater than spending on other health programs. U.S. bilateral spending on other health programs declined overall from 2001 to 2003 and remained steady from 2004 to 2007, but began to grow substantially in 2008. (See fig. 7.) U.S. bilateral foreign assistance spending on both HIV/AIDS and other health-related foreign assistance programs in Asia--where 1 of the 15 focus countries as well as 5 nonfocus countries and 1 region that submit PEPFAR operational plans are located--increased overall from 2001 to 2008. Overall bilateral spending on other health programs was three times larger than spending on HIV/AIDS programs throughout the period. (See fig. 8.) From 2001 through 2008, total U.S. bilateral foreign assistance spending on HIV/AIDS programs in Latin America and the Caribbean--where 2 of the 15 focus countries as well as a nonfocus country and two regions with PEPFAR operational plans are located--increased continuously. During this period, U.S.
bilateral spending on other health programs in these countries and regions fluctuated from year to year and declined overall. Bilateral spending on other health programs was consistently greater than spending on HIV/AIDS programs during this period; however, in 2008, annual spending on HIV/AIDS programs was nearly equal to spending for other health programs (see fig. 9). To inform policy and program decisions related, in part, to expanding efforts to provide ART in developing countries, OGAC, USAID, and UNAIDS have adopted three different models for ART cost analyses. OGAC uses the PEPFAR ART Costing Project Model (PACM) to estimate and track PEPFAR-supported ART costs in individual PEPFAR countries and across these countries. USAID and its partners use the HIV/AIDS Program Sustainability Analysis Tool (HAPSAT) to estimate resources needed to meet individual countries' ART goals, among other things. UNAIDS and USAID use a suite of models referred to as Spectrum to project ART costs in individual countries and globally. Table 5 provides information on the three costing models. For additional information on the components of these three models, see appendix III. Although the models have different purposes, a 2009 comparison study conducted by their developers found that the three models produced similar overall ART cost estimates given similar data inputs. According to the models' developers, data used for one model can be entered into another to generate cost estimates and projections. For example, cost data collected in Nigeria for use in HAPSAT were also used in PACM to inform PEPFAR global average treatment cost estimates. Such cost projections also can help decision makers to estimate the cost-related effects of policy and protocol changes, such as changes made in response to the World Health Organization's November 2009 recommendation that HIV patients initiate ART at an earlier stage of the disease's progression. 
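Although PACM, HAPSAT, and Spectrum differ in purpose and detail, each builds a total ART cost estimate from patient counts and per-patient drug and nondrug unit costs, with PACM and HAPSAT also addressing overhead. The following is a minimal generic sketch of that roll-up, not any model's actual implementation; the patient groups, unit costs, and 10 percent overhead rate are hypothetical:

```python
def total_art_cost(groups, overhead_rate=0.0):
    """Generic ART cost roll-up: sum over patient groups of
    patients x (drug + nondrug unit cost), plus a proportional
    overhead component. The overhead treatment here is a
    simplification; PACM and HAPSAT handle it in their own ways."""
    direct = sum(g["patients"] * (g["drug_cost"] + g["nondrug_cost"])
                 for g in groups)
    return direct * (1 + overhead_rate)

# Hypothetical groups, loosely echoing PACM's adult/pediatric and
# first-/second-line distinctions; unit costs are illustrative, not PEPFAR data.
groups = [
    {"patients": 10000, "drug_cost": 150.0, "nondrug_cost": 250.0},  # adult, first-line
    {"patients": 1000,  "drug_cost": 600.0, "nondrug_cost": 250.0},  # adult, second-line
]
print(round(total_art_cost(groups, overhead_rate=0.10)))
```

Because the models share this basic structure, data gathered for one (as with the Nigeria HAPSAT cost data reused in PACM) can feed another to generate comparable estimates.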
In coordination with HHS and USAID, State's OGAC reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of State, the Office of the Global AIDS Coordinator, USAID Office of HIV/AIDS, HHS Office of Global Health Affairs, and CDC Global AIDS Program. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Responding to legislative directives, this report examines U.S. bilateral foreign assistance spending on global HIV/AIDS and other health-related programs in fiscal years 2001-2008. The report also provides information on models used to estimate HIV treatment costs. To examine trends in U.S. bilateral spending on global HIV/AIDS- and other health-related foreign assistance programs, we analyzed data from the Foreign Assistance Database (FADB) provided by the U.S. Agency for International Development (USAID), interviewed State Department, USAID, and Health and Human Services (HHS) officials in Washington, D.C., and Centers for Disease Control and Prevention (CDC) officials in Atlanta. We also interviewed representatives of the Kaiser Family Foundation who have conducted similar research and analysis. We reviewed relevant articles and reports regarding international and U.S. global health assistance funding and examined relevant data on other donor and U.S. foreign assistance. Congress, U.S. agencies, and research organizations use varying definitions of global health programs, with inclusion of safe water and nutrition programs being one varying factor among definitions. 
Congress funds global health programs through a number of appropriations accounts: Foreign Operations; Labor, Education and Health; and Defense; and through several U.S. agencies. The State Department, USAID, and the HHS' CDC are the primary U.S. agencies receiving congressional appropriations to implement global health programs, including programs to combat HIV/AIDS. Through foreign operations accounts administered by USAID and State, Congress specifies support for five key global health programs: child survival and maternal health, vulnerable children, HIV/AIDS, other infectious diseases, and family planning and reproductive health. In addition, Congress specifies support for five key CDC global health programs: HIV/AIDS, malaria, global disease detection, immunizations, and other global health. CDC also allocates part of its tuberculosis and pandemic flu budget for international programs, and State and USAID may transfer funds to CDC for specific activities. In addition to these programs, USAID and CDC include other programs related to global health. For example, USAID reports specific nutrition and environmental health programs in its global health portfolio. Likewise, CDC also uses its resources to provide international technical assistance when requested, such as for disease outbreak response (e.g., pandemic influenza preparedness and prevention), or reproductive health. The Committee on the U.S. Commitment to Global Health at the Institute of Medicine (IOM) defined global health programs as those aimed at improving health for all people around the world by promoting wellness and eliminating avoidable disease, disability and death. 
According to the Organisation for Economic Cooperation and Development (OECD), global health includes the following components: health care; health infrastructure; nutrition; infectious disease control; health education; health personnel development; health sector policy, planning and programs; medical education, training and research; and medical services. In its report on donor funding for global health, the Kaiser Family Foundation combined data from four OECD categories to construct its definition of global health: health; population policies and programs and reproductive health (which includes HIV/AIDS and sexually transmitted diseases); water supply and sanitation; and other social infrastructure and services. For the purposes of this report, we defined U.S. global spending for HIV/AIDS programs as foreign assistance for activities related to HIV/AIDS control, including information, education, and communication; testing; prevention; treatment; and care. We defined U.S. spending for other health-related programs as foreign assistance for general and basic health and population and reproductive health policies and programs (except those related to HIV/AIDS). General and basic health includes health policy and administrative management, medical education and training, medical research, basic health care, basic health infrastructure, basic nutrition, infectious disease control, health education, and health personnel development. Population and reproductive health policies and programs include population policy and administrative management, reproductive health care, family planning, and personnel development for population and reproductive health. 
The specific analyses presented in this report examine disbursement levels and growth trends from fiscal years 2001 to 2008 for bilateral HIV/AIDS and other health-related foreign assistance programs by time period (pre-PEPFAR and first 5 years of PEPFAR for all countries); PEPFAR country status (focus countries with PEPFAR operational plans, nonfocus countries with PEPFAR country or regional operational plans, and other nonfocus countries receiving HIV/AIDS-related foreign assistance from 2001 to 2008); and region (sub-Saharan Africa, Latin America and the Caribbean, and Asia, which received the majority of U.S. spending on bilateral HIV/AIDS-related foreign assistance). We examined disbursements--amounts paid by federal agencies to liquidate government obligations--of U.S. bilateral foreign assistance for global HIV/AIDS and other health programs, because, unlike other data, disbursement data directly reflect the foreign assistance reaching partner countries. We used USAID's deflator to convert nominal dollar amounts to constant 2010 dollar amounts, which are appropriate for spending trend analysis. As such, it is important to remember that the disbursement figures for HIV/AIDS- and other health-related foreign assistance programs presented in this report differ from appropriation or commitment data, which may be reported elsewhere. Because we focused on bilateral disbursements, our analysis excludes U.S. contributions to the Global Fund to Fight AIDS, Tuberculosis and Malaria. In addition, about $4.7 billion and $3.3 billion in disbursements for HIV/AIDS programs and other health-related foreign assistance programs, respectively, from 2001 to 2008, were not specified for an individual country or region in the FADB. As such, our analysis of bilateral spending levels and growth trends by PEPFAR country status and geographical region excludes these disbursements.
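The deflation and trend analysis described above can be sketched in a few lines: deflate each year's nominal disbursement to constant 2010 dollars, then summarize annual growth. The deflator values and disbursement figures below are illustrative assumptions, not USAID's actual series, and the mean-of-annual-percentage-changes growth measure is one common convention rather than the report's documented formula:

```python
# Illustrative deflator indexed to 2010 (2010 = 1.00); NOT USAID's series.
DEFLATOR = {2004: 0.88, 2005: 0.90, 2006: 0.93, 2007: 0.95, 2008: 0.97}

def to_constant_2010(nominal, year):
    """Deflate a nominal disbursement for `year` into constant 2010 dollars."""
    return nominal / DEFLATOR[year]

def average_annual_growth(series):
    """Mean of year-over-year percentage changes (one common convention)."""
    rates = [(b - a) / a * 100 for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

# Hypothetical nominal disbursements (millions), FY2004-FY2008.
nominal = {2004: 2000.0, 2005: 3100.0, 2006: 4200.0, 2007: 4800.0, 2008: 5800.0}
constant = [to_constant_2010(v, y) for y, v in sorted(nominal.items())]
print(round(average_annual_growth(constant), 1))
```

Deflating first matters: because the deflator rises over time, growth measured in constant dollars is somewhat lower than growth in the nominal series.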
We assessed the reliability of disbursement data from the FADB and determined them to be sufficiently reliable for the purposes of reporting in this manner. In assessing the data, we interviewed USAID officials in charge of compiling and maintaining the FADB, reviewed the related documentation, and compared data to published data from other sources. We also determined that, in general, USAID takes steps to ensure the consistency and accuracy of the disbursements data reported by U.S. government agencies, including by verifying possible inconsistencies or anomalies in the data received, providing guidance and other communications to agencies about category definitions, and comparing the data to other data sources. Although we did not assess the reliability of the data for complex statistical analyses, we determined that the data did not allow the identification of causal relationships between funding levels over time or among relevant categories; as such, we did not attempt an empirical analysis of the impact of PEPFAR on other health funding. To describe models used to estimate the cost of providing antiretroviral therapy (ART), we interviewed State Office of the Global AIDS Coordinator, USAID and CDC officials in Washington, D.C., and Atlanta. We also interviewed Joint United Nations Programme on HIV/AIDS (UNAIDS) officials in Washington, D.C. and Geneva, Switzerland, as well as developers of the costing models. We analyzed user manuals and guides for these models, as well as spreadsheets and additional information and technical comments provided by the U.S. agencies and model developers. We reviewed relevant literature for information on ART costing models, as well as the Leadership Act and previous GAO work regarding requirements and importance of cost information for program decision making. For fiscal years 2001 to 2008, U.S. 
bilateral foreign assistance spending for HIV/AIDS-related health programs varied significantly by country for both the 15 PEPFAR focus countries and the 16 countries and three regions with PEPFAR operational plans. Table 6 presents U.S. bilateral foreign assistance spending in constant dollars, by country, on HIV/AIDS programs, for fiscal years 2001-2008. As noted in appendix I, we converted nominal dollar amounts to constant 2010 dollars, which are appropriate for analysis of trends in U.S. foreign assistance spending in global health, but do not represent in-year actual spending amounts. For fiscal years 2001 to 2008, U.S. bilateral foreign assistance spending for other health programs also varied significantly by country for both the 15 PEPFAR focus countries and the 16 countries and three regions with PEPFAR operational plans. Table 7 presents U.S. bilateral foreign assistance spending in constant dollars, by country, on other health-related (i.e., non-HIV/AIDS) programs, for fiscal years 2001-2008. As noted in appendix I, we converted nominal dollar amounts to constant 2010 dollars, which are appropriate for analysis of trends in U.S. foreign assistance spending in global health, but do not represent in-year actual spending amounts. To estimate total cost of ART, three key models--the PEPFAR ART Costing Project Model (PACM), HIV/AIDS Program Sustainability Analysis Tool (HAPSAT), and Spectrum--all consider the number of patients and various drug and nondrug cost estimates. PACM and HAPSAT also address overhead costs in total cost calculations. This appendix presents the specific drug and nondrug costs that each model considers in making estimates. PACM categorizes ART patients as adult or pediatric, new or established, receiving first- or second-line ARV drugs, receiving generic or innovator ARV drugs, and living in a low- or middle-income country. In addition, PACM considers the following cost categories: Drug costs. 
PACM categorizes ARV drug costs as generic or innovator and first- or second-line. For each of these categories, PACM accounts for costs associated with supply chain, wastage, inflation, and ARV buffer stock. Nondrug costs. PACM categorizes nondrug costs as recurrent and investment costs. Recurrent costs include personnel, utilities, building, lab supplies, other supplies, and other drugs; facility-level management and overhead costs are also captured. Investment costs include training, equipment, and construction. Overhead. PACM categorizes above-facility-level overhead costs as U.S. government, partner government, and implementing partner overhead, as well as U.S. government indirect support to partner governments (e.g., U.S. government support for system strengthening or capacity building of the national HIV/AIDS program). Table 8 summarizes how PACM categorizes numbers of patients and various unit costs to calculate the total cost of ART based on estimates of PEPFAR and non-PEPFAR shares of costs derived from PEPFAR-funded empirical studies. HAPSAT categorizes current ART patients as those receiving first- or second-line ARV drugs. In addition, HAPSAT considers the following cost categories: Drug costs. HAPSAT categorizes drug costs as first- or second-line ARV drugs. Nondrug costs. HAPSAT categorizes nondrug costs as labor (e.g., doctor, nurse, lab technician salaries) and laboratory costs. Overhead. HAPSAT categorizes overhead as administrative costs, drug supply chain, monitoring and evaluation, and training, based on country data. Overhead estimates are applied at both the facility and above-facility level. Table 9 summarizes how HAPSAT categorizes numbers of patients and various unit costs to calculate the total cost of ART. Spectrum categorizes current ART patients as adult or pediatric and receiving first- or second-line ARV drugs. In addition, Spectrum considers the following cost categories: Drug costs. 
Spectrum categorizes drug costs as first- or second-line ARV drugs. Nondrug costs. Spectrum categorizes nondrug costs as laboratory and service delivery (i.e., hospital and clinic stays). Service delivery costs include inpatient hospital and outpatient clinic costs. Table 10 summarizes how Spectrum categorizes numbers of patients and various unit costs to calculate the total cost of ART. In addition to the contact named above, Audrey Solis (Assistant Director), Todd M. Anderson, Diana Blumenfeld, Giulia Cangiano, Ming Chen, David Dornisch, Lorraine Ettaro, Etana Finkler, Kendall Helm, Heather Latta, Reid Lowe, Grace Lui, Jeff Miller, and Mark Needham made key contributions to this report. President's Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries' HIV/AIDS Strategies and Promote Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010. President's Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President's Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4, 2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 10, 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. 
Washington, D.C.: January 11, 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: July 12, 2004. Global Health: Global Fund to Fight AIDS, TB, and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.
U.S. funding for global HIV/AIDS and other health-related programs rose significantly from 2001 to 2008. The President's Emergency Plan for AIDS Relief (PEPFAR), reauthorized in 2008 at $48 billion through 2013, has made significant investments in support of prevention of HIV/AIDS as well as care and treatment for those affected by the disease in 31 partner countries and 3 regions. In May 2009, the President proposed spending $63 billion through 2014 on global health programs, including HIV/AIDS, under a new Global Health Initiative. The Office of the U.S. Global AIDS Coordinator (OGAC), at the Department of State (State), coordinates PEPFAR implementation. The Centers for Disease Control and Prevention (CDC) and the U.S. Agency for International Development (USAID), among other agencies, implement PEPFAR as well as other global health-related assistance programs, such as maternal and child health, infectious disease prevention, and malaria control. Responding to legislative directives, this report examines U.S. disbursements (referred to as spending) for global HIV/AIDS- and other health-related bilateral foreign assistance programs (including basic health and population and reproductive health programs) in fiscal years 2001-2008. The report also provides information on models used to estimate HIV treatment costs. GAO analyzed U.S. foreign assistance data, reviewed HIV treatment costing models and reports, and interviewed U.S. and UNAIDS officials. In fiscal years 2001-2008, bilateral U.S. spending for HIV/AIDS and other health-related programs increased overall, most significantly for HIV/AIDS. From 2001 to 2003--before the establishment of PEPFAR--U.S. spending on global HIV/AIDS programs rose while spending on other health programs dropped slightly. From fiscal years 2004 to 2008, HIV/AIDS spending grew steadily; other health-related spending also rose overall, despite declines in 2006 and 2007. As would be expected, U.S. 
bilateral HIV/AIDS spending showed the most increase in 15 countries--known as PEPFAR focus countries--relative to other countries receiving bilateral HIV/AIDS assistance from fiscal years 2004 through 2008. In addition, GAO's analysis showed that U.S. spending on other health-related bilateral foreign assistance also increased most for PEPFAR focus countries. Spending growth rates varied among three key regions--sub-Saharan Africa, Asia, and Latin America and the Caribbean--as did these regions' shares of HIV/AIDS and other health foreign assistance spending following establishment of PEPFAR. OGAC, USAID, and UNAIDS have adopted three different models to estimate and project antiretroviral therapy (ART) costs. The three models--respectively known as the PEPFAR ART Costing Project Model, the HIV/AIDS Program Sustainability Analysis Tool, and Spectrum--are intended to inform policy and program decisions related, in part, to expanding efforts to provide ART in developing countries.
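Although the three costing models categorize patients and costs differently, they share a common arithmetic core: multiply patient counts by per-patient drug and nondrug unit costs, then add overhead. A minimal sketch of that shared structure, using entirely hypothetical patient groups, unit costs, and overhead rate (none of these values come from the actual models):

```python
# Simplified sketch of the arithmetic shared by the PEPFAR ART Costing
# Project Model, HAPSAT, and Spectrum: patients x unit costs + overhead.
# All groups, unit costs, and the overhead rate are hypothetical.

patient_groups = [
    # (group label, patients, annual ARV drug cost per patient,
    #  annual nondrug cost per patient)
    ("adult_first_line",  10000, 150.0, 200.0),
    ("adult_second_line",  1000, 600.0, 250.0),
    ("pediatric",           500, 200.0, 220.0),
]
OVERHEAD_RATE = 0.15  # hypothetical above-facility overhead share

def total_art_cost(groups, overhead_rate):
    """Direct cost summed over groups, grossed up for overhead."""
    direct = sum(n * (drug + nondrug) for _, n, drug, nondrug in groups)
    return direct * (1 + overhead_rate)

print(total_art_cost(patient_groups, OVERHEAD_RATE))
```

The models' real differences lie in how finely they disaggregate these terms (e.g., PACM's generic/innovator drug split and investment costs, HAPSAT's country-specific overhead, Spectrum's service delivery costs), not in this basic structure.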
The Office of Acquisition and Materiel Management (OA&MM) is the principal office within VA headquarters responsible for supporting the agency's programs. OA&MM includes an Office of Acquisitions that, among other things, provides acquisition planning and support, helps develop statements of work, offers expertise in the areas of information technology and software acquisition, develops and implements acquisition policy, conducts business reviews, and issues warrants for contracting personnel. As of June 2005, the Office of Acquisitions was managing contracts valued at over $18 billion, including option years. In recent years, reports have cited inadequacies in the contracting practices at VA's Office of Acquisitions and also have identified actions needed to improve them. In fiscal year 2001, the VA IG issued a report that expressed significant concerns about the effectiveness of VA's acquisition system. As a result, the Secretary of Veterans Affairs established, in June 2001, a Procurement Reform Task Force to review VA's procurement system. The task force's May 2002 report set five major goals that it believed would improve VA's acquisition system: (1) leverage purchasing power, (2) standardize commodities, (3) obtain and improve comprehensive information, (4) improve organizational effectiveness, and (5) ensure a sufficient and talented workforce. Issues related to organizational and workforce effectiveness were at the center of the difficulties VA experienced implementing its Core Financial and Logistics System (CoreFLS). The VA IG and an independent consultant issued reports on CoreFLS in August 2004 and June 2004, respectively, and both noted that VA did not do an adequate job of managing and monitoring the CoreFLS contract and did not protect the interests of the government. Ultimately, the contract was canceled after VA had spent nearly $250 million over 5 years. 
In response to deficiencies noted in the CoreFLS reports, VA sought help to improve the quality, effectiveness, and efficiency of its acquisition function by requesting that NAVSUP perform an independent assessment of the Acquisition Operations Service (AOS). NAVSUP looked at three elements of the contracting process: management of the contracting function; contract planning and related functions; and special interest items such as information technology procurements, use of the federal supply schedule, and postaward contract management. In a September 2004 report, NAVSUP identified problems in all three elements. While VA agrees with the NAVSUP report's recommendations, limited progress has been made in implementing the seven key recommendations of the report. VA officials indicate that factors contributing to this limited progress include the absence of key personnel, a high turnover rate, and a heavy contracting workload. We found that VA has neither established schedules for completing action on the recommendations nor established a method to measure its progress. Until VA establishes well-defined procedures for completing action on the NAVSUP recommendations, the benefits of this study may not be fully realized. The status of the seven key recommendations we identified is summarized in Table 1: Action taken by VA on the seven key recommendations in the NAVSUP report has varied from no action, to initial steps, to more advanced efforts in specific areas. Long-term improvement plan. NAVSUP recommended that AOS develop a long-term approach to address improvements needed in key areas. VA acknowledges that establishing a long-term improvement plan is necessary to maintain its focus on the actions that will result in desired organizational and cultural changes. During the course of our review, however, we found that no action has been taken to develop a long-term improvement plan with established milestones for specific actions. Adequate management metrics. 
NAVSUP recommended that AOS develop metrics to effectively monitor VA's agencywide acquisition and procurement processes, resource needs, and employee productivity because it found that AOS was not receiving information needed to oversee the contracting function. VA officials agree that they need to have the ability to continuously and actively monitor acquisitions from the preaward to contract closeout stages to identify problem areas and trends. VA officials acknowledge that, without adequate metrics, its managers are unable to oversee operations and make long-term decisions about their organizations; customers cannot review the status of their requirements without direct contact with contracting officers; and contracting officers are hampered in their ability to view their current workload or quickly assess deadlines. During our review, VA officials stated that they intend to use a balanced scorecard approach for organizational metrics in the future. However, no steps had been taken to establish specific metrics at the time we completed our review. Strategic planning. NAVSUP recommended that AOS develop a supplement to the OA&MM strategic plan that includes operational-level goals to provide employees with a better understanding of their roles and how they contribute to the agency's strategic goals, objectives, and performance measures. VA officials indicated that progress on the strategic plan had been delayed because it will rely heavily on management metrics that will be identified as part of the effort to develop a balanced scorecard. With the right metrics in place, VA officials believe they will be in a much better position to supplement the strategic plan. VA had not revised the strategic plan by the time we finished our review. Process to review contract files at key acquisition milestones. NAVSUP recommended that AOS establish a contract review board to improve management of the agency's contract function. 
NAVSUP believed that a contracting review board composed of senior contracting officers would provide a mechanism to effectively review contracting actions at key acquisition milestones and provide needed overall management. To enhance these reviews, VA has prepared draft standard operating procedures on how contract files should be organized and documented. Final approval is pending. VA officials indicated, however, that no decisions have been made about how or when they will institute a contract review board as part of the agency's procurement policies and processes. Postaward contract management. NAVSUP recommended that the AOS contracting officers pay more attention to postaward contract management by developing a contract administration plan, participating in postaward reviews, conducting contracting officer technical representative reviews, and improving postaward file documentation. We found that VA has taken some action to address postaward contract management. For example, AOS is training a majority of its contracting specialists on the electronic contract management system. VA officials indicated that the electronic contract management system will help improve its postaward contract management capability. The electronic contract management system is a pilot effort that VA expects to be operational in early 2006. Also, final approval for a draft standard operating procedure for documenting significant postaward actions is pending. Customer relationships. NAVSUP reported that VA's ability to relate to its customers is at a low point and recommended VA take action to improve customer relations. 
Mechanisms that VA officials agreed are needed to improve customer relations include requiring that program reviews include both the customer and contracting personnel, greater use and marketing of the existing customer guide to customers and contracting communities, the establishment of a customer feedback mechanism such as satisfaction surveys, placing a customer section on the World Wide Web, and engaging in strategic acquisition planning with customer personnel. We noted that VA is taking some of the actions recommended by NAVSUP. For example, VA has established biweekly meetings with major customer groups, created customer-focused teams to work on specific projects, and nearly completed efforts to issue a comprehensive customer guide. Pending are efforts to include customers in the AOS review process and to develop a customer section on the web site. Employee morale. The NAVSUP report said that VA employee morale is at a low point and is having an impact on employee productivity. NAVSUP said that AOS needs to respond to its employee morale issue by addressing specific employee concerns related to workload distribution, strategic and acquisition planning, communication, and complaint resolution. VA has taken several actions related to employee morale. Workload distribution issues have been addressed by developing a workload and spreadsheet tracking system and removing restrictions on work schedules for employees at ranks of GS-15 and below. Strategic planning actions completed include the development of mission and vision statements by a cross section of VA personnel and collective involvement in approval of organizational restructuring efforts. Communication and complaint resolution issues are being resolved by facilitating a meeting between AOS management and employees to air concerns. Partially completed actions include the development of a new employee training module, including a comprehensive new employee orientation package. 
According to VA, new employee training includes the dissemination of draft standard operating procedures. VA is also in the process of developing an employee survey to measure overall employee satisfaction. Discussions with VA officials indicate that the agency believes its limited progress has largely been due to the absence of permanent leadership and insufficient staffing levels. Officials told us that the recommendations will be implemented once key officials are in place. For example, positions for two key VA acquisition managers--Associate Deputy Assistant Secretary for Acquisitions and the Director for AOS--were unfilled for about 25 months and 15 months, respectively. But during the course of our review these positions were filled. As of August 25, 2005, AOS has still not selected permanent personnel for 17 of its 62 positions. This includes two other key management positions-- the Deputy Director of Field Operations and the Deputy Director for VA Central Office Operations, both filled by people in an acting role. Supervisory leadership has also suffered as a consequence of understaffing, VA officials said. Four of the eight supervisory contract specialist positions are filled by people in an acting role. Critical nonsupervisory positions also have remained unfilled, with 11 contract specialists' positions vacant. The absence of contract specialists has largely been caused by a high turnover rate. According to VA officials, the high turnover rate can be attributed to a heavy contracting workload, as well as the other factors identified in the NAVSUP report. When asked, the VA officials we spoke with could not provide specific time frames for completing actions on the recommendations or a method to measure progress. We believe the lack of an implementation plan with time frames and milestones, as well as a way to measure progress, contributed to VA's limited progress in implementing the key NAVSUP recommendations. 
The seven key NAVSUP recommendations we identified have not been fully implemented. While some progress is being made, progress is lacking in those areas that we believe are critical to an efficient and effective acquisition process. If key recommendations for improvement are not adequately addressed, VA has no assurance that billions of its Office of Acquisitions contract dollars will be managed in an efficient and effective manner, or that it can protect the government's interest in providing veterans with high-quality products, services, and expertise in a timely fashion at a reasonable price. While personnel-related factors have contributed to VA's lack of progress, the absence of schedules for completion of actions and of metrics that could be used to determine agency progress is also an important factor. Current VA officials, even those in an acting capacity, can identify timetables for completing action on key NAVSUP recommendations and establish a means to determine progress. Without these elements of an action plan, the benefits envisioned by the study may not be fully realized. We recommend that the Secretary of Veterans Affairs direct the Deputy Assistant Secretary for Acquisition and Materiel Management to identify specific time frames and milestones for completing actions on the key NAVSUP recommendations, and establish a method to measure progress in implementing the recommendations. In commenting on a draft of this report, the Deputy Secretary of Veterans Affairs agreed with our conclusions and concurred with our recommendations. VA's written comments are included in appendix III. We will send copies of this report to the Honorable R. James Nicholson, Secretary of Veterans Affairs; appropriate congressional committees; and other interested parties. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Blake Ainsworth, Penny Berrier, William Bricking, Myra Watts Butler, Christina Cromley, Lisa Simon, Shannon Simpson, and Bob Swierczek. In September 2004, the Naval Supply Systems Command (NAVSUP) issued a report, Procurement Performance Management Assessment Program, on its review of the Department of Veterans Affairs, Office of Acquisition and Materiel Management, Acquisition Operations Service. The 24 recommendations contained in the NAVSUP report are listed in table 2 below. The first seven recommendations listed are the key recommendations we identified. To select the key recommendations from those identified in the NAVSUP September 2004 report, we focused on recommendations that, if successfully implemented, are likely to have the broadest and most significant impact on the Department of Veterans Affairs' (VA) operations. We chose recommendations that are crosscutting in nature. Accordingly, in many instances recommendations we did not identify as being key are nevertheless, we believe, covered to some extent by one or more of the key recommendations. In making our selections, we relied primarily on our professional judgment and the experience gained over many years in reviews of acquisition management issues governmentwide. In particular, we relied on the observations and guidance captured in a draft of a GAO report entitled Framework for Assessing the Acquisition Function at Federal Agencies. With this insight, we determined that 7 of the 24 NAVSUP recommendations were key. To identify the progress VA has made in implementing these seven key NAVSUP recommendations, we met with acquisition officials at VA's Office of Acquisition and Materiel Management (OA&MM). 
We also reviewed documents intended to demonstrate the status of VA's actions. In order to attain a broader view of VA acquisition issues, we identified and reviewed other VA and independent reports issued prior to the NAVSUP report. This included VA's Procurement Reform Task Force (May 2002) report, which recommended ways to improve procurement practices across VA, and reports by the VA Inspector General (August 2004) and Carnegie Mellon (June 2004) that noted contract management problems on a VA contract for the Core Financial and Logistics System (CoreFLS). We reviewed past and current policies, procedures, and internal controls associated with VA acquisition processes. We obtained statistics from OA&MM on the authorized size of the VA Acquisitions Operations Service (AOS) contracting workforce and positions that still need to be filled. We obtained data from the Federal Procurement Data System on what VA spent during fiscal year 2004 for products and services. Further, we obtained data from VA on the amount of contract dollars being managed by VA's Office of Acquisitions as of June 2005. We did not conduct an independent assessment of the state of the acquisition function at VA. We conducted our work from March to August 2005 in accordance with generally accepted government auditing standards.
The Department of Veterans Affairs (VA) is among the largest federal acquisition agencies, spending $7.3 billion on product and service acquisitions in 2004 alone. Recent reports by VA and other organizations identified weaknesses in the agency's acquisition function that could result in excess costs to the taxpayer. One report by the Naval Supply Systems Command (NAVSUP) made 24 recommendations to improve VA's acquisition function. VA has accepted these recommendations. GAO was asked to review the progress VA has made in implementing the key NAVSUP recommendations. GAO identified 7 of the 24 recommendations as key, based primarily on its professional judgment and prior experience. Progress made by the Department of Veterans Affairs in implementing the key recommendations from the NAVSUP report has been limited. In fact, a year after the report was issued, VA has not completed actions on any of the seven key recommendations GAO identified. While VA agrees implementation of the key recommendations is necessary, the steps it has taken range from no action to partial action. No action has been taken on three key recommendations: to develop a long-term improvement plan, adequate management metrics, and a supplement to the agency's strategic plan. No more than partial action has been taken on four others: establishment of a contract review board for reviewing files at key milestones along with improvement of postaward contract management, customer relationships, and employee morale. A lack of permanent leadership in key positions has contributed to the lack of further progress in revising acquisition policies, procedures, and management and oversight practices, according to VA officials. For example, two key VA acquisitions management positions were unfilled--one for 15 months and the other for 25 months. In addition, VA has neither set time frames for completing actions on the NAVSUP recommendations nor established a method to measure progress. 
Until VA establishes a process for completing action on the NAVSUP recommendations, the benefits of the study may not be fully realized.
The Department of Defense's 2001 Defense Planning Guidance tasked the Department of the Navy to conduct a comprehensive review to assess the feasibility of fully integrating Navy and Marine Corps aviation force structure to achieve both effectiveness and efficiency. The Department of the Navy narrowed the study to include only fixed-wing tactical aviation assets because of affordability concerns. Specifically, Navy officials were concerned that the projected procurement budget would not be sufficient to buy as many F/A-18E/Fs and Joint Strike Fighter aircraft as originally planned. The difference between the funding needed to support the Navy's original plan for procuring tactical aircraft and the Navy's projected procurement budget is shown in figure 1. Figure 1 shows that, starting in fiscal year 2005, the Navy's typical aviation allocation of $3,200 million per year would not be sufficient to support the previous procurement plan for F/A-18E/F and Joint Strike Fighter aircraft. In December 2001, the Chief of Naval Operations and the Commandant of the Marine Corps jointly commissioned a contractor to study the feasibility of integrating Naval tactical aviation. The study prompted a memorandum of agreement between the Navy and Marine Corps in August 2002 to integrate their tactical aviation assets and buy fewer aircraft than originally planned. The Plan proposes that the Navy and Marine Corps (1) merge operational concepts; (2) reduce the number of squadrons, aircraft per squadron, and backup aircraft; and (3) reduce the total number of aircraft to be procured in the future. The Department of the Navy anticipates that these changes will save approximately $28 billion in procurement costs over the next 18 years through fiscal year 2021. Operationally, the Navy and Marine Corps would increase the extent to which their tactical aviation units are used as a combined force for both services. 
Under the Plan, the Navy and Marine Corps would increase cross deployment of squadrons between the services and would further consolidate missions and operations through changes in aircrew training and the initiation of command-level officer exchanges. Under the Plan, the Marine Corps would increase the number of squadrons dedicated to carrier air wings, and the Navy would begin to dedicate squadrons to Marine Aircraft Wings. In 2003 the Marine Corps began to provide the Navy with the first of six additional dedicated squadrons to augment four squadrons already integrated into carrier air wings during the 1990s. As a result, each of the Navy's 10 active carrier air wings would ultimately include one Marine Corps squadron by 2012. Concurrently, the Navy would integrate three dedicated squadrons into Marine Aircraft Wings by 2008, primarily to support the Marine Corps Unit Deployment Program rotations to Japan. The first Navy squadron would deploy in support of Marine Corps operations in late fiscal year 2004, with other squadrons to follow in fiscal years 2007 and 2008. As part of the new operating concept, the Department of the Navy would satisfy both Navy and Marine Corps missions using either Navy or Marine Corps squadrons. Traditionally, the primary mission of Navy tactical aviation has been to provide long-range striking power from a carrier, while Marine Corps tactical aviation provided air support for ground forces. Navy and Marine Corps tactical aviation squadrons will retain their primary mission responsibilities, but units that integrate would additionally be responsible for training for and performing the required mission responsibilities of the other service. 
For example, if a Navy squadron were assigned to the Marine Corps Unit Deployment Program, its pilots would place more emphasis on training for close air support missions, and, similarly, Marine Corps pilots would place more emphasis on long-range strike missions before deploying with a carrier air wing. Moreover, Navy and Marine Corps officers would exchange command positions to further develop a more unified culture. For instance, a Marine Corps colonel would command a carrier air wing, while a Navy captain would command a Marine Aircraft Group. As indicated in table 1, the Department of the Navy would create a smaller tactical aviation force structure consisting of fewer squadrons, reduced numbers of aircraft per squadron, and fewer backup aircraft. The number of tactical aviation squadrons would decrease from 68 under the previous plan to 59 by 2012. To achieve this reduction of nine squadrons, the department would cancel plans to reestablish four active Navy squadrons as anticipated under its prior procurement plan, decommission one Marine Corps Reserve squadron as well as one Navy Reserve squadron in 2004, and decommission three active Navy squadrons. The first active squadron is scheduled to be decommissioned in fiscal year 2006; the two other squadrons are to be decommissioned from fiscal year 2010 through fiscal year 2012. Under the Plan, the number of aircraft assigned to some tactical aviation squadrons would be reduced. All Navy and Marine Corps F/A-18C squadrons that transition to the future Joint Strike Fighter aircraft would be reduced from 12 to 10 aircraft. In addition, Navy F/A-18F squadrons will be reduced from 14 to 12 aircraft. Furthermore, by 2006, aircraft assigned to the remaining two Navy and three Marine Corps Reserve squadrons would be reduced from 12 to 10. As a result of these reductions, the size of Navy air wings will transition from 46 to 44 aircraft beginning in 2004, as the Navy procures new aircraft. 
A notional air wing in the Navy's current force is made up of 46 aircraft comprising a combination of F/A-18C and F-14 squadrons. However, by 2016, carrier air wings would contain 44 aircraft made up of two squadrons of 10 Joint Strike Fighters, one squadron of 12 F/A-18E fighters, and one squadron of 12 F/A-18F fighters. The Department of the Navy's Plan would also reduce the number of backup aircraft to be procured from 745 (under the previous program) to 508, for a total reduction of 237 aircraft. Backup aircraft consist of those aircraft that are not primarily assigned to active or reserve squadrons. Specifically, backup aircraft are necessary to meet a variety of needs such as training new pilots; replacing aircraft that are either awaiting or undergoing depot-level repair; meeting research, development, and test and evaluation needs; attrition during peacetime or wartime operations; and meeting miscellaneous requirements, such as adversary training and the Blue Angels demonstration team. In implementing the Plan, the Department of the Navy expects to reduce the number of tactical aviation aircraft it will purchase by 497--from 1,637 to 1,140. As indicated in table 2, it plans to procure, respectively, 88 and 409 fewer F/A-18E/F and Joint Strike Fighter aircraft. Almost half (237, or 48 percent) of the expected reduction in aircraft procurement is attributable to the plan to have fewer backup aircraft. By reducing the total number of new tactical aviation aircraft to be procured, the Department of the Navy now expects that its new procurement program will cost about $64 billion, as compared with nearly $92 billion for the previously planned force, resulting in a savings of approximately $28 billion. The Department of the Navy based its conclusion that it could meet its operational requirements with a smaller force primarily on the results of a contractor study. 
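The procurement arithmetic above can be cross-checked with a short, purely illustrative calculation; all figures are those cited in this report, and the script itself is not part of the Plan or the contractor's study:

```python
# Cross-check of the Plan's procurement reductions, using figures cited above.
previous_total = 1637      # aircraft under the previous procurement plan
planned_total = 1140       # aircraft under the integration Plan
reduction = previous_total - planned_total          # 497 fewer aircraft

fa18ef_cut, jsf_cut = 88, 409                       # cuts by aircraft type (table 2)
assert fa18ef_cut + jsf_cut == reduction            # the type cuts reconcile

backup_cut = 745 - 508                              # 237 fewer backup aircraft
print(f"Backup share of the cut: {backup_cut / reduction:.0%}")  # about 48 percent

previous_cost, planned_cost = 92, 64                # program costs, $ billions
print(f"Expected savings: about ${previous_cost - planned_cost} billion")
```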
The contractor's analysis generally appeared reasonable because it assessed the relative capability of different tactical aviation force structures and included important assumptions about force structure, budget resources, and management efficiencies. However, from our review of the contractor's methodology and assumptions, we identified some limitations in its analysis that may understate the risk associated with implementing some aspects of the Plan. These limitations include (1) the contractor's decision to model only the carrier version of the Joint Strike Fighter despite the Marine Corps' plans to operate Short Takeoff and Vertical Landing aircraft on carriers, (2) the contractor's limited studies supporting recommended reductions in backup aircraft, and (3) the contractor's method for determining the aircraft capabilities used in the force analyses. The contractor modeled the effectiveness of the current force, the larger force that the Navy had previously planned to buy, and the study's recommended smaller force at three stages of a notional warfight. The warfight was based on a generic composite scenario that was developed with input from the Air Force and Army. It had previously been used by the Joint Strike Fighter Program Office to assess the effectiveness of a joint strike force in terms of phases of a warfight; geographical location of combat forces; the characteristics of targets, such as type and hardness; and whether targets are mobile. During the forward presence phase of the contractor's modeling scenario, one carrier battle group and one amphibious readiness group were deployed, and aircraft operated at a maximum distance of 400 nautical miles from the carrier. In the buildup phase, three carrier battle groups and three amphibious readiness groups were deployed in one theater, and aircraft operated at a maximum distance of 150 nautical miles. 
During the mature phase, eight carrier battle groups, eight amphibious readiness groups, and 75 percent of all other assets were deployed to land-based sites, and aircraft operated at a maximum distance of 150 nautical miles from the carrier. To measure combat effectiveness levels, the contractor methodically compared the estimated capabilities of the current force, the previously planned force, and the recommended force to hit targets and perform close air support. To determine the relative capabilities of each aircraft comprising these forces, the contractor convened a panel of experts who were familiar with planned capability and used official aircraft performance data to score the offensive and defensive capabilities of different aircraft across a range of missions performed during the three stages of the warfight. As indicated in figure 2, the experts determined that the Joint Strike Fighter, which is still in development, will be the most capable aircraft and assigned it a baseline score of 1 compared with the other aircraft. Figure 2 also shows that based on the capability scores assigned other aircraft, the Joint Strike Fighter is expected to be approximately nine times more capable than the AV-8B Harrier aircraft, about five times more capable than the F-14D and F/A-18 A+/C/D aircraft, three times more capable than the first version of the F/A-18 E/F aircraft, and 50 percent more capable than the second version of the F/A-18E/F. In addition, the contractor measured the percentage of units deployed in order to ensure that Navy and Marine Corps personnel tempo and operational tempo guidelines for peacetime were not exceeded. The study concluded that, because of the expected increase in the capabilities of F/A-18 E/F and the Joint Strike Fighter aircraft, both the previously planned force and the recommended new smaller force were more effective than today's force. 
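The "times more capable" ratios quoted above imply a set of relative capability scores, which can be sketched as follows; the derived scores are an illustration of the ratios in figure 2, not figures taken directly from the study:

```python
# Relative capability scores implied by the expert panel's ratings (figure 2).
# The Joint Strike Fighter is the baseline (score 1.0); the other scores are
# derived here from the "times more capable" ratios quoted in the report.
ratios = {  # Joint Strike Fighter capability relative to each aircraft
    "AV-8B Harrier": 9,
    "F-14D and F/A-18A+/C/D": 5,
    "F/A-18E/F (first version)": 3,
    "F/A-18E/F (second version)": 1.5,
}

scores = {"Joint Strike Fighter": 1.0}
scores.update({name: 1 / ratio for name, ratio in ratios.items()})

for name, score in scores.items():
    print(f"{name}: {score:.2f}")
```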
Furthermore, the new smaller force was just as effective in most instances as the previously planned force because the smaller force had enough aircraft to fully populate aircraft carrier flight decks and therefore did not cause a reduction in the number of targets that could be hit. However, the analysis showed that beginning in 2015, there would be a 10 percent reduction in effectiveness in close air support during the mature phase of a warfight because fewer squadrons and aircraft would be available to deploy to land bases. The analysis also showed that the smaller force stayed within personnel and operational tempo guidelines during peacetime. The contractor's analysis was based on three key assumptions that generally appeared to be reasonable and consistent with DOD plans. First, it assumed that the future naval force structure would include 12 carrier battle groups, supported by 1 reserve and 10 active carrier air wings, and 12 amphibious readiness groups. The 2001 Quadrennial Defense Review validated this naval force structure and judged that this force structure presented moderate operational risk in implementing the defense strategy. Second, it assumed that the Navy and Marine Corps' tactical aviation procurement budget would continue to be about $3.2 billion in fiscal year 2002 dollars annually through 2020. This was based on the Department of the Navy's determination that the tactical aviation procurement budget would continue to represent about 50 percent of the services' total aircraft procurement budget as it had in fiscal years 1995 to 2002. Third, it assumed that the Department of the Navy could reduce the number of backup aircraft it buys based on expected efficiencies in managing its backup aircraft inventory. Our analysis also showed, however, that certain limitations derived from the contractor's study could add risk to the expected effectiveness of the future smaller force. 
These limitations are the study's modeling assumption that the effectiveness of the Marine Corps' Short Takeoff and Vertical Landing version of the Joint Strike Fighter would be the same as the Navy's carrier version despite projected differences in their capability; the study's assumption that certain efficiencies in the management of backup aircraft could be realized, without documenting and providing supporting analyses substantiating how they would be achieved; and the study's process for assigning capability measures to aircraft which, because of its subjectivity, could result in an overestimation of the smaller force's effectiveness. The contractor's study assumed that all Joint Strike Fighters aboard Navy carriers, including those belonging to the Marine Corps, would have the performance characteristics of the carrier version of that aircraft. However, the Marine Corps plans to operate only the Short Takeoff and Vertical Landing version of the aircraft, which is projected to be significantly less capable than the carrier version in terms of range and payload (number of weapons it can carry). The Marine Corps believes this version is needed to satisfy its requirement to operate from austere land bases or amphibious ships in order to quickly support ground forces when needed. But the carrier version's unrefueled range and internal payload are expected to exceed those of the Short Takeoff and Vertical Landing version by approximately 50 and 100 percent, respectively. The contractor mitigated the differences in the two versions' capabilities by modeling a scenario whereby the aircraft would operate from carriers located 150 miles from the targets during the mature phase of the warfight--well within the range of the Marine Corps' version. 
By contrast, during Operation Iraqi Freedom, many of the targets struck from carriers would have been outside the range of the Short Takeoff and Vertical Landing version of the aircraft unless in-flight refueling was performed, thereby reducing its effectiveness. The study noted that because of the differences in performance, substitution of the Short Takeoff and Vertical Landing version for the carrier version would result in decreased effectiveness when the Short Takeoff and Vertical Landing version's performance parameters are exceeded. However, the study did not conduct additional analyses to quantify the impact of using Short Takeoff and Vertical Landing aircraft aboard carriers. Therefore, if the Plan is implemented whereby the Marine Corps operates the Short Takeoff and Vertical Landing version of the aircraft exclusively as one of four tactical aviation squadrons aboard each carrier, under a different scenario featuring a greater range to targets, the overall effectiveness of the tactical fighter group could be less than what the contractor's study predicted. Navy officials acknowledged that operating the Short Takeoff and Vertical Landing Joint Strike Fighter aircraft from carriers presents a number of challenges that the Navy expects to address as the aircraft progresses through development. The contractor's study recommended cutting 351 backup aircraft based on expected improvements and efficiencies in the Navy's management of such aircraft. The study identified three main factors prompting its conclusion that fewer backup aircraft would be needed. Actual historical attrition rates for F/A-18 aircraft, according to the Navy, suggest that the attrition rate for the F/A-18E/F and Joint Strike Fighter could be lower than expected. 
The Navy determined that attrition might be only 1 percent of total aircraft inventory, rather than the 1.5 and 1.3 percent rates assumed for the F/A-18E/F and Joint Strike Fighter, respectively, in the Navy's original procurement plan; thus, fewer attrition aircraft would suffice. Business practices for managing aircraft in the maintenance pipeline could be improved. According to the contractor, if Navy depots performed as much maintenance per day as Air Force depots, the Navy could reduce the number of aircraft in the maintenance pipeline; thus, fewer pipeline aircraft would suffice. Testing, evaluating, and aircrew training could become more efficient. According to the contractor's study, fewer aircraft would be needed to test and evaluate future technology improvements because the Navy and Marine Corps' two Joint Strike Fighter variants (the carrier and Short Takeoff and Vertical Landing versions) would have many common parts. In addition, advances in trainer technology and the greater sortie generation capability of the newer aircraft could enable them to achieve more training objectives in a single flight; thus, fewer training aircraft would suffice. Although the contractor recognized the potential of these efficiencies when recommending the reduction to the number of backup aircraft, it did not fully analyze the likelihood of achieving them. According to the contractor, it recommended the reduction based on limited analysis of the potential to reduce the number of attrition and maintenance pipeline aircraft. As a result, the contractor also recommended that the Department of the Navy study whether it could achieve the expected maintenance efficiencies by improving its depot operations. However, the department has not conducted such an assessment. The Department of the Navy considered the risk of cutting 351 aircraft too high and instead decided to cut only 237 backup aircraft--the number reflected in the Navy's Plan. 
Historically, the Navy's backup inventory has equaled approximately 95 percent of the number of combat aircraft. The contractor recommended that the Navy reduce its backup aircraft requirement to 62 percent of its planned inventory of combat aircraft. Concerned that this might be too drastic a cut, the Navy decided to use 80 percent when determining the number of backup aircraft in its Plan. Although the Plan's higher ratio of backup aircraft to combat aircraft will reduce operational risk by having more aircraft available for attrition and other purposes, the Navy's 80 percent factor was not based on a documented analysis. Navy officials noted that because of budget limitations, it would be difficult to purchase additional aircraft to support the smaller tactical aviation force in case some of the projected efficiencies are not realized. The contractor relied on aircraft capability scores assigned by a panel of experts as a basis for comparing the relative effectiveness of the aircraft and alternative force structures examined. The results showed that by 2020, the previously planned and new smaller force would be four times more effective at hitting targets than the current force. However, the panelists subjectively determined the capability scores from official aircraft performance parameters provided by the Navy. The contractor reportedly conducted a "sensitivity analysis" of the aircraft capability scores and found that changing the scores affected the forces' relative effectiveness. Since the contractor did not retain documentation of the analysis, we could not verify the quality of the scoring, nor attest that the relative effectiveness of the new force will be four times greater than the current force as the study reported. 
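The three backup-to-combat ratios discussed above can be reconstructed from the aircraft counts cited elsewhere in this report; note that the combat inventory used below is inferred as total planned aircraft minus planned backup aircraft, which is an assumption for illustration rather than a figure stated in the report:

```python
# Reconstruction of the backup-to-combat aircraft ratios discussed above.
# The combat inventory (632) is inferred, not taken directly from the report.
total_planned = 1140           # aircraft to be procured under the Plan
backup_plan = 745 - 237        # 508 backups after the Navy's chosen cut
combat = total_planned - backup_plan

backup_contractor = 745 - 351  # 394 backups under the contractor's deeper cut
print(f"Navy Plan ratio:  {backup_plan / combat:.0%}")        # about 80 percent
print(f"Contractor ratio: {backup_contractor / combat:.0%}")  # about 62 percent
```

That the inferred ratios land on the report's 80 and 62 percent figures suggests the combat-inventory assumption is consistent with the report's counts.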
Nevertheless, the contractor's acknowledgement that score variations could affect relative force effectiveness raises the possibility that the estimated increases in effectiveness, both for the previously planned force and for the recommended smaller force, might not be as high as the study concluded. Navy and Marine Corps officials agreed that gaining a significant increase in total capability was key to accepting a smaller, more capable tactical aviation force. However, if the capability of the recommended smaller force is significantly less than that indicated by the study, the smaller force's ability to meet both Navy and Marine Corps' mission requirements could be adversely affected. The Navy and Marine Corps took significantly different approaches toward the task of assessing and documenting their decisions on which reserve units to decommission. The Marine Corps used a well-documented process that clearly showed what criteria were applied to arrive at its decision, whereas the Navy's approach lacked clarity and supporting documentation about how different options were evaluated. DOD has not developed criteria to guide such decommissioning decisions. In a previous report, we reviewed the Air Force's decision to reduce and consolidate the B-1B bomber fleet and found that Air Force officials did not complete a formal comprehensive analysis of potential basing options in order to determine whether they were choosing the most cost-effective units to keep. We also stated that in the absence of standard guidance for analyzing basing alternatives, similar problems could occur in the future. In this instance, the absence of standard DOD guidance for analyzing and documenting decommissioning alternatives allowed the Navy to use a very informal and less transparent process to determine which reserve squadron to decommission in fiscal year 2004. 
The lack of a formal process could also hinder transparency in making such decisions in the future, which adversely affects Congress's ability to provide appropriate oversight. The Marine Corps established a team that conducted and documented a comprehensive review to support its decision about which Marine Corps Reserve squadron to decommission. In conducting its analysis, the Marine Corps assumed that (1) reserve assets that had not been decommissioned must be optimized for integration in future combat roles, (2) mission readiness and productivity are crucial, and (3) the political and legal ramifications of deactivating reserve units must be considered. The study team established a set of criteria consisting of personnel, operational, fiscal, logistical, and strategic factors and applied these criteria when evaluating each of the Marine Corps' four reserve squadrons. Table 3 identifies the selection criteria applied to each squadron. The study results were presented to the Marine Requirements Oversight Council for review and recommendation and to the Commandant of the Marine Corps. The Commandant decided in May 2004 to decommission reserve squadron VMFA-321 located at Andrews Air Force Base, Maryland, by September 2004. In December 2003, the Navy decided to decommission one of three Navy Reserve tactical aviation squadrons, VFA-203, located in Atlanta, Georgia. The Chief of Naval Reserve stated that the Navy used a variety of criteria in deciding which unit to decommission. These criteria included the squadrons' deployment history, the location of squadrons in relation to operating ranges, and the location of a reserve intermediate maintenance facility. Navy officials, however, could not provide documentation of the criteria or the analysis used to support its decision. Without such documentation to provide transparency to the Navy's process, we could not determine whether these criteria were systematically applied to each reserve squadron. 
Furthermore, we could not assess whether the Navy had systematically evaluated and compared other factors such as operational, personnel, and financial impacts for all Navy Reserve squadrons. Two other factors could adversely affect the successful implementation of the Plan and increase the risk level assumed at the time the contractor completed the study and the Navy and Marine Corps accepted the Plan. These factors are (1) uncertainty about requirements for readiness funding to support the tactical aviation force and (2) projected delays in fielding the Joint Strike Fighter aircraft that might cause the Department of the Navy not to implement the Plan as early as expected and might increase operations and maintenance costs. If these factors are not appropriately addressed, the Department of the Navy may not have sufficient funding to support the readiness levels required for the smaller force to meet the Navy and Marine Corps' missions, and the transition to the Plan's force might be more costly than anticipated. The contractor's study stated that because the Navy and the Marine Corps would have a combined smaller tactical aviation force under the Plan, the services' readiness accounts must be fully funded to ensure that the aircraft readiness levels are adequate to meet the mission needs of both services. Furthermore, the contractor recommended that the Navy conduct an analysis to determine the future readiness funding requirements and ensure that the Navy has a mechanism in place to fully fund the readiness accounts. So far, the Navy has not conducted this analysis, nor has it addressed how it will ensure that the readiness accounts will be fully funded because Navy officials noted that they consider future budget estimates to be adequate. 
However, a recent Congressional Research Service evaluation of the Plan noted that operations and maintenance costs have been growing in recent years for old aircraft and that new aircraft have sometimes, if not often, proved more expensive to maintain than planned. Furthermore, our analysis of budget data for fiscal years 2001-2003 indicates that the Department of the Navy's operations and maintenance costs averaged about $388 million more than what was requested for tactical aviation and other flight operations. Without a review of future readiness funding requirements, the Navy cannot be certain that sufficient funding will be available to maintain the readiness levels that will enable the smaller tactical aviation force to meet the mission needs of both the Navy and the Marine Corps. Delays in fielding the Joint Strike Fighter aircraft, both known and potential, could also affect the successful implementation of the Plan. As a result of engineering and weight problems in the development of the Joint Strike Fighter, the Navy and Marine Corps will begin receiving the aircraft at least 1 year later than expected. As noted in the Department of the Navy's most recent acquisition reports to Congress, the Navy has delayed fielding of the Short Takeoff and Vertical Landing version from 2010 to 2012 and of the Navy's carrier version from 2012 to 2013. Furthermore, in March 2004 we reported that numerous program risks and possible schedule variances could cause additional delays. Recent Joint Strike Fighter program cost increases could also delay the fielding of the aircraft. In DOD's December 31, 2003, procurement plan, the average unit cost of the aircraft increased from $69 million to $82 million. Assuming that the Department of the Navy procures the 680 Joint Strike Fighter aircraft as proposed under the Plan, the total procurement cost will be approximately $9 billion higher. 
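The approximately $9 billion figure follows directly from the unit-cost growth and the planned buy, as this illustrative back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the Joint Strike Fighter cost growth cited above.
old_unit_cost = 69   # average unit cost before December 2003, $ millions
new_unit_cost = 82   # average unit cost in the December 2003 plan, $ millions
navy_buy = 680       # Joint Strike Fighters proposed under the Plan

increase = (new_unit_cost - old_unit_cost) * navy_buy / 1000  # $ billions
print(f"Added procurement cost: about ${increase:.1f} billion")  # roughly $9 billion
```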
This increase in cost, when considered within the limits of the expected $3.2 billion annual procurement budget, will likely prevent the Department of the Navy from fielding the smaller but more effective tactical aviation force as early as expected. Additionally, these delays will oblige the Department of the Navy to operate legacy aircraft longer than expected, which could result in increased operations and maintenance costs. A potential increase in operations and maintenance costs makes it even more important for the Department of the Navy to conduct an analysis to determine its future readiness funding requirements. The contractor's study results provided the Department of the Navy with a reasonable basis for concluding that it could afford to buy a smaller but more capable force that would meet its future operating requirements by using fewer Navy and Marine Corps tactical aviation squadrons of more capable aircraft as a combined force and achieving efficiencies that allow it to reduce the number of backup aircraft needed. However, there are known management and funding risks to realizing the new smaller force's affordability and effectiveness. Until Navy management assesses the likelihood of achieving the projected lower attrition rates and the expected efficiencies in aircraft maintenance, test and evaluation, and training, the Navy runs the risk that the number of backup aircraft it plans to procure will not be adequate to support the smaller tactical aviation force, adding to concerns about the Plan's affordability. Furthermore, in the absence of clear DOD guidance citing consistent criteria and documentation requirements for supporting decisions that affect units, such as which Navy and Marine Corps Reserve squadrons to decommission, we remain concerned about the transparency of the process for reducing the force to those with oversight responsibility. 
The inconsistency in the Marine Corps' and Navy's approaches and supporting documentation confirms the value of such guidance to ensure clear consideration of the best alternative. Finally, until the Department of the Navy knows the readiness funding requirements for operating the new smaller force, it cannot be certain that it can maintain the readiness levels required to meet operational demands. Such an assessment of these requirements would provide a sound basis for seeking proper funding. To enhance the potential that the future Navy and Marine Corps integrated tactical aviation force will meet the mission needs of both services and ensure more transparency when making future decommissioning decisions, we recommend that the Secretary of Defense take the following three actions: direct the Secretary of the Navy to thoroughly assess all of the factors that provide the basis for the number of backup aircraft needed to support a smaller tactical aviation force under the plan to integrate Navy and Marine Corps tactical aviation forces; develop guidance that (1) identifies the criteria and methodology for analyzing future decisions about which units to decommission and (2) establishes requirements for documenting the process used and the analysis conducted; and direct the Secretary of the Navy to analyze future readiness funding requirements to support the tactical aviation integration plan and include required funding in future budget requests. In written comments on a draft of this report, the Director, Defense Systems, Office of the Under Secretary of Defense, stated that the department generally agreed with our recommendations and cited actions that it is taking. The department's comments are reprinted in their entirety in appendix I. 
In partially concurring with our first recommendation to thoroughly assess all of the factors that provide the basis for the number of backup aircraft, DOD stated that the Department of the Navy's Naval Air Systems Command would complete an effort to review all aircraft inventories to determine the optimum quantity required by July 2004. However, we were not able to evaluate the Navy's study because Navy officials have since told us that it will not be completed until late September or early October 2004. With regard to our second recommendation to develop guidance that would identify criteria and a methodology for analyzing future decommissioning decisions and require documenting the process, DOD stated that it would change Directive 5410.10, which covers the notification of inactivation or decommission of forces, and require it to contain the criteria and methodology used to make the force structure decision. While we agree that the new guidance, if followed, would disclose these aspects of the decision-making process, it does not appear sufficient to meet the need we identified for consistency and documentation to support force structure decisions. Therefore, we believe that DOD should take additional steps to meet the intent of our recommendation by developing consistent criteria and requiring documentation to ensure transparency for those providing oversight of such decisions in the future. In partially concurring with our third recommendation related to future readiness funding requirements, the Department of Defense stated that the Department of the Navy is currently developing analytical metrics that would provide a better understanding of how to fund readiness accounts to achieve a target readiness level. We support the development of validated metrics that would link the amount of funding to readiness levels because they would provide decision makers with assurance that sufficient funding would be provided. 
To determine how Navy and Marine Corps operational concepts, force structure, and procurement costs would change under the Plan, we obtained information about the Navy and Marine Corps' current roles and mission, force structure, and projected tactical aviation procurement programs and conducted a comparative analysis. We also met with Navy and Marine Corps officials at the headquarters and major command levels as well as Congressional Research Service officials to further understand and document the operational, force structure, and procurement cost changes expected if the Plan is implemented. To determine what methodology and assumptions the Navy and Marine Corps used to analyze the potential for integrating tactical aviation assets and any limitations that could affect the services' analysis, we analyzed numerous aspects of the contractor's study that provided the impetus for the Plan. Specifically, we met with the contractor officials of Whitney, Bradley & Brown, Inc., to gain first-hand knowledge of the model used to assess aircraft performance capability and the overall reasonableness of the study's methodology. We also reviewed the scenario and assessed the key analytical assumptions used in order to evaluate their possible impact on the implementation of the Plan. We examined operational and aircraft performance factors to determine the potential limitations that could affect the services' analysis. Additionally, we held discussions with officials at Navy and Marine Corps headquarters, Joint Forces Command, Naval Air Forces Pacific and Atlantic Commands, Marine Forces Atlantic Command, and the Air Combat Command to validate and clarify how the Plan would or would not affect the ability of tactical aviation forces to meet mission needs. 
To determine the process the Navy and Marine Corps used to assess which reserve squadrons should be decommissioned in fiscal year 2004, we obtained information from the Marine Corps Reserve Headquarters and the 4th Marine Air Wing showing a comparative analysis of Marine Corps Reserve squadrons. In the absence of comparable information from the Navy, we held discussions with the Chief of Naval Reserve and the Director, Navy Air Warfare, and visited the Naval Air Force Reserve Command to obtain information about the decision-making process for selecting the Navy reserve unit to be decommissioned. We also visited the Commander of the Navy Reserve Carrier Air Wing-20, along with four reserve squadrons, two each from the Navy and Marine Corps Reserves, to clarify and better understand their roles, missions, and overall value to the total force concept. To determine what other factors might affect the implementation of the Plan, we analyzed the contractor's study, Congressional Research Service reports, and prior GAO reports for potential effects that were not considered in the final results of the analysis. We discussed these factors with officials from Navy and Marine Corps headquarters as well as Naval Air Forces Pacific and Atlantic Commands and Marine Forces Atlantic Command to assess the impact of the Plan on day-to-day operations. We assessed the reliability of pertinent data about aircraft capability, force structure, and military operations contained in the contractor's study that supports the Plan by (1) reviewing with contractor officials the methodology used for the analysis; (2) reviewing the 2001 Quadrennial Defense Review, prior GAO reports, and service procurement and aircraft performance documents; and (3) conducting discussions with Navy and Marine Corps officials. We concluded that the data were sufficiently reliable for the purpose of this report. 
We performed our review from July 2003 through May 2004 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Navy; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested congressional committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-4402 if you or your staff have any questions concerning this report. Major contributors to this report are included in appendix I. In addition to those named above, Willie J. Cheely, Jr.; Kelly Baumgartner; W. William Russell, IV; Michael T. Dice; Cheryl A. Weissman; Katherine S. Lenane; and Monica L. Wolford also made significant contributions to this report.
The Fiscal Year 2004 Defense Appropriations Act and the Senate Report for the 2004 National Defense Authorization Act mandated that GAO examine the Navy and Marine Corps' Tactical Aviation Integration Plan. In response to these mandates, this report addresses (1) how Navy and Marine Corps operational concepts, force structure, and procurement costs would change; (2) the methodology and assumptions the services used to analyze the potential for integrating the forces; (3) the analytical process the services used to decide which reserve squadrons to decommission; and (4) other factors that might affect implementation of the Plan. Concerns about the affordability of their prior tactical aviation procurement plan prompted the Navy and Marine Corps to agree to a new Tactical Aviation Integration Plan. Under this Plan, the two services will perform their missions using fewer units of more capable aircraft, reducing total program aircraft procurement costs by $28 billion over the next 18 years. Operationally, the Navy and Marine Corps will increase the extent to which their tactical aviation units are used as a combined force to accomplish both services' missions. The Plan also reduces the services' tactical aviation force structure by decommissioning five squadrons, thus decreasing the number of Navy and Marine Corps squadrons to 59, and reduces the total number of aircraft they plan to buy from 1,637 to 1,140. The Department of the Navy based its conclusion that it could meet the Navy and Marine Corps' operational requirements with a smaller force primarily on the findings of a contractor study that evaluated the relative capability of different tactical aviation force structures. GAO's review of the contractor's methodology and assumptions about force structure, budget resources, and management efficiencies suggests that much of the analysis appears reasonable.
However, GAO noted that some limitations--including the lack of analytical support for reducing the number of backup aircraft--increase the risk that the smaller force will be less effective than expected. The Navy and Marine Corps each followed a different process in selecting a reserve squadron to decommission. The Marine Corps conducted a clear and well-documented analysis of the operational, fiscal, logistical, and personnel impacts of different options that appears to provide decision makers with a reasonable basis for selecting the Reserve unit to decommission. By contrast, the Navy selected its reserve squadron without clear criteria or a documented, comprehensive analysis, and thus with less transparency in its process. Two other factors that might affect successful implementation of the Plan are the potential unavailability of readiness funding and delays in fielding the new force. Although the contractor recommended that the Navy identify future readiness-funding requirements, to date, the Navy has not conducted this analysis. In addition, the Department of the Navy is experiencing engineering and weight problems in developing the Joint Strike Fighter that will cause it to be delayed until 2013, at least 1 year later than had been projected, and other high risks to the program remain. Because these delays will cause the Navy to operate legacy aircraft longer than expected, they might also increase operations and maintenance costs, making an analysis of future readiness funding requirements even more important.
Federal Medigap standards were first established by section 507 of the Social Security Disability Amendments of 1980 (P.L. 96-265), which added section 1882 to the Social Security Act (42 U.S.C. 1395ss). Section 1882 set forth federal requirements that insurers must meet for marketing policies as supplements to Medicare and established criminal penalties for marketing abuses. As originally enacted, one of the requirements was that policies had to be expected to return specified portions of premiums as benefits--60 percent for policies sold to individuals and 75 percent for those sold to groups. Insurers were considered to have met the loss ratio requirement if their actuarial estimates showed that their policies were expected to do so. Actual loss ratios did not have to be compared with the loss ratio standards. At that time, insurers generally reported loss ratio data to the states in aggregate--that is, a combined total for all policies sold in the state. If states had wanted to verify compliance, this reporting method would not have allowed them to do so for particular policies. In 1986, we reported that section 1882 had helped protect against substandard and overpriced policies. We also pointed out the problem of insurers reporting aggregate loss ratio data and that actual loss ratios were not compared with the standards to verify compliance. Section 221 of the Medicare Catastrophic Coverage Act of 1988 (P.L. 100-360) amended section 1882 to require that insurers report their actual loss ratios to the states. The Omnibus Budget Reconciliation Act (OBRA) of 1990 (P.L. 101-508) essentially required that Medigap policies be standardized and that a maximum of 10 different benefit packages would be allowed. The act also increased the loss ratio standard for individual policies to 65 percent for policies sold or issued after November 5, 1991.
The Social Security Amendments of 1994 (P.L. 103-432) extended the 65-percent standard, effective beginning in 1997, to policies issued before November 6, 1991. The 1990 amendments also required that insurers pay refunds or provide credits to policyholders when Medigap policies fail to meet loss ratio standards. As implemented in the NAIC model law and regulations, a cumulative 65-percent loss ratio for individual policies (75 percent for group policies) must be met over the life of a policy, which NAIC assumed to be 15 years. NAIC's methodology compares a policy's actual loss ratio for a given year with a benchmark (or target) ratio for that year, calculated using cumulative premium and claim experience. If a policy's actual loss ratio does not meet the benchmark ratio, the insurer must complete further calculations to determine whether a refund or credit is necessary to bring the loss ratio up to standard. Loss ratios on a calendar-year basis for an individual policy are expected to be 40 percent the first year, 55 percent the second year, and 65 percent the third year. Annual loss ratios would continue to increase until they reach 77 percent by the 12th year and remain at that level for the remainder of the 15-year period. This approach anticipates that the higher loss ratios in the third and later years would offset the lower loss ratios in the first 2 years. The methodology is designed to ensure a cumulative 65-percent loss ratio for individual policies by the end of the 15-year period. The same approach is used to ensure a 75-percent loss ratio by the end of the 15-year period for group policies. NAIC's methodology for determining whether a refund or credit is required also includes a tolerance adjustment based on the number of policyholders and the length of time they have held their policies. A policy loss ratio based on less than 500 life-years of exposure since inception is considered not credible, and no refund or credit is required.
After 10,000 life-years have accumulated, a policy is considered fully credible. According to an NAIC actuarial advisory group and several insurance regulators, this tolerance adjustment helps ensure that refunds or credits will not occur so often in the early years of policy experience that large premium increases will result in later years. An important factor in evaluating loss ratios is a policy's credibility--that is, whether enough people have been covered under the policy to make the loss ratio meaningful. We used two measures of credibility. First, to make the data in this report comparable with the data in our earlier reports, we used a threshold of $150,000 in premiums in a given year in a state. Information in this report on loss ratios that includes years before 1994 uses this measure. Second, we used a modification of NAIC's refund methodology, which, as discussed above, measures credibility by the number of policyholders and the number of years they have held their policies. We used this method to assess whether policies met the applicable loss ratio standards in 1994 and 1995. Another factor considered when interpreting loss ratios is the length of time a policy has been in force. The refund methodology for 1994 and 1995 indicates that Medigap loss ratios are expected to meet the federal standard after 3 years, which is the criterion we used. In the 1988-95 period, the Medigap insurance market grew from about $7 billion to over $12 billion (see fig. 1), but most of that growth had occurred by 1992. From 1988 through 1992, earned premiums increased by more than 50 percent; from 1992 through 1995, growth leveled off, with premiums averaging around $12 billion. In 1995, 352 insurance companies sold Medigap policies and collected premiums totaling $12.5 billion, with 33 companies each reporting premiums of over $100 million and accounting for almost 75 percent of the total (see app. II).
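The NAIC refund methodology described above can be sketched in code. The calendar-year benchmark schedule (40, 55, and 65 percent in years 1 through 3, rising to 77 percent by year 12) and the 500/10,000 life-year credibility bounds come from the text; the linear interpolation between anchor years and the size of the sliding tolerance adjustment are assumptions for illustration only, since the report does not give NAIC's exact formulas.

```python
# Illustrative sketch of the NAIC refund-calculation logic described above.
# Benchmark anchor years and credibility bounds are from the report; the
# interpolation and the tolerance values themselves are assumptions.

INDIVIDUAL_BENCHMARKS = {1: 0.40, 2: 0.55, 3: 0.65, 12: 0.77}  # annual targets

def benchmark_ratio(policy_year):
    """Annual target loss ratio for an individual policy (illustrative)."""
    if policy_year >= 12:
        return 0.77  # 77 percent from year 12 through year 15
    years = sorted(INDIVIDUAL_BENCHMARKS)
    # interpolate the rising schedule between the anchor years given in the text
    for lo, hi in zip(years, years[1:]):
        if lo <= policy_year <= hi:
            frac = (policy_year - lo) / (hi - lo)
            return INDIVIDUAL_BENCHMARKS[lo] + frac * (
                INDIVIDUAL_BENCHMARKS[hi] - INDIVIDUAL_BENCHMARKS[lo])
    return INDIVIDUAL_BENCHMARKS[years[0]]

def refund_required(claims, premiums, life_years, policy_year):
    """True if a refund or credit would be indicated under this simplified model."""
    if life_years < 500:          # experience not credible: no refund required
        return False
    actual = claims / premiums
    # hypothetical sliding tolerance: largest for barely credible policies,
    # shrinking to zero once fully credible at 10,000 life-years
    credibility = min(1.0, (life_years - 500) / (10_000 - 500))
    tolerance = 0.10 * (1.0 - credibility)   # assumed maximum of 10 points
    return (actual + tolerance) < benchmark_ratio(policy_year)
```

A policy with 400 life-years never owes a refund under this sketch, however low its loss ratio, while a fully credible third-year policy paying out only 30 cents per premium dollar falls well short of its 65-percent benchmark.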
The Prudential Insurance Company of America, which underwrote the policies sold through the American Association of Retired Persons (AARP), was the largest supplier of Medigap insurance with 23 percent of the market. The average Medigap loss ratio for all policies was about the same in 1995 (86 percent) as it was in 1988 (84 percent), but average loss ratios exhibited considerable variation, increasing in some years and decreasing in others. For example, average loss ratios increased in 1990 and 1991, followed by 2 years of declining ratios and then 2 years of increases. For the 8-year period, the average loss ratio was 81 percent, with a low of 76 percent in 1993 and a high of 86 percent in 1995. The average loss ratios for group policies have varied substantially, ranging from 80 percent in 1989 to 95 percent in 1995, while those for individual policies during the period have been more stable (see fig. 2). In 1995, states differed considerably in average loss ratios. Insurers doing business in Michigan had the highest average loss ratio (107 percent), followed by the District of Columbia (102 percent), Massachusetts (99 percent), Pennsylvania (97 percent), and Maine (96 percent). The five states with the lowest average loss ratios were Nebraska (73 percent), Minnesota (75 percent), Oregon (76 percent), Delaware (76 percent), and Montana (76 percent). Appendix III lists average loss ratios by state. Moreover, loss ratios varied among insurers within a state. In Michigan, for example, average loss ratios for insurers with premiums over $150,000 ranged from 59.3 percent to 132.7 percent and, in Montana, from 29.0 percent to 108.8 percent. In 1995, the average loss ratios for the 10 standardized Medigap plans--from the basic Plan A to the top-of-the-line Plan J--ranged from 73.8 percent for Plan G to 102.3 percent for Plan A. The most popular of the plans, Plan F, is more costly and returns less to policyholders in benefits than the nearly identical Plan C.
Plan F differs from Plan C only in its coverage of excess physicians' charges--the amounts doctors may bill patients above Medicare's allowed amount, which the law limits to no more than 15 percent. In 1995, Plan F had a nationwide average loss ratio of 75.5 percent; Plan C had an average loss ratio of 89.3 percent. Medicare data show that for over 95 percent of claims, physicians agree to accept Medicare's allowed amount, so insurers seldom have to pay for excess charges. Moreover, Plan F had an average loss ratio in 1995 lower than all other plans except Plan G. Appendix IV lists the average loss ratio experience for all 10 Medigap plans in 1995 by state. In 1994 and 1995, most Medigap policies that were at least 3 years old with premiums totaling $150,000 or more in the applicable state met the federal loss ratio standards. Premiums on credible policies issued 3 or more years earlier that failed to meet the minimum federal loss ratio standards increased from $320 million in 1991 to $1.2 billion in 1993. However, premiums for policies that failed to meet the standards decreased to $937 million in 1994 and to $522 million in 1995 (see fig. 3). Using information not previously available in the NAIC loss ratio data tape, we incorporated features of the refund methodology to evaluate the 1994 and 1995 loss ratio data for policies that were at least 3 years old. To estimate the number of policies and associated premiums with loss ratios below standards, we measured credibility using the number of covered lives by policy reported to NAIC. Under the refund methodology, experience of less than 500 life-years is not considered credible, but 10,000 life-years is considered fully credible. A tolerance adjustment is added to the actual loss ratio on a sliding scale for life-years falling between those two numbers.
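The 15-percent limiting charge discussed above can be expressed as a small calculation. The cap itself is from the text; the function names and dollar amounts are illustrative.

```python
# Excess physician charges under the 15-percent federal limiting charge
# described above. Function names and amounts are illustrative only.

LIMITING_CHARGE = 0.15  # law caps excess billing at 15% above the allowed amount

def max_patient_bill(medicare_allowed):
    """Most a physician may bill: Medicare's allowed amount plus 15 percent."""
    return medicare_allowed * (1 + LIMITING_CHARGE)

def excess_charge(billed, medicare_allowed):
    """Excess amount a supplemental plan covering excess charges (such as
    Plan F) could owe; zero when the physician accepts the allowed amount,
    as happens for over 95 percent of claims."""
    capped = min(billed, max_patient_bill(medicare_allowed))
    return max(0.0, capped - medicare_allowed)
```

For a $100 allowed amount, the most a physician may bill is $115, so an insurer's exposure for excess charges on that claim is at most $15--and, on over 95 percent of claims, nothing at all.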
In 1994, using covered lives as the measure of credibility, the actual or adjusted loss ratios of 256 of 2,670 policies did not meet the minimum loss ratio standards, and insurers earned $448 million in premiums on these policies. In 1995, the number of policies not meeting the standards was 141, or 4 percent of the total, and the premiums were $203 million. Appendixes V and VI identify the policies with loss ratios below the applicable standard, along with their premiums, benefit payments, and loss ratios. Appendix V lists individual policies, and appendix VI lists group policies. In both 1994 and 1995, more than 10,000 different Medigap policies, virtually all of which were standardized policies, were subject to the OBRA 1990 refund provision and were required to send refund calculation forms to state insurance commissioners. In those 2 years, a total of almost 14,000 policies had loss ratios below 65 percent for individual policies and below 75 percent for group policies. However, we identified only two policies that made refunds in 1995. One was a standardized policy sold in Iowa that refunded a total of about $19,000 to 148 policyholders. The other was a prestandardized plan sold in Virginia that refunded a total of about $2,000 to 76 policyholders. In follow-up contacts with 15 selected states, we identified only one policy sold in Illinois that refunded a total of about $123,000 to 3,075 policyholders for 1996. To determine why policies with loss ratios below the applicable standard in 1994 or 1995 did not have to make refunds, we selected a random sample of these policies with earned premiums under $1 million and asked the states, the District of Columbia, and Puerto Rico to send us copies of the refund calculation forms for the sample and for all policies with premiums over $1 million. All except Michigan responded.
From the information on these forms, we determined the reasons refunds were not required and projected the results to the universe (see table 1). About 97 percent of the policies below the loss ratio standards had earned premiums of less than $1 million. Refunds were not required for most policies because their experience, with fewer than 500 life-years since inception, was not considered credible. Most of the policies with earned premiums of $1 million or more did not have to pay refunds because, although their loss ratios in 1994 or 1995 were below standards, their cumulative loss ratio since inception was greater than the benchmark ratio for the year in question. The benchmark ratios were designed with certain assumptions about policy lapse rates and other factors to ensure that the cumulative loss ratio over 15 years was at least equal to the federal loss ratio standards. Because benefit payments are generally low in the first years, when policyholders are younger and healthier, and increase as they age, benchmark ratios are significantly below the loss ratio standards at first and gradually increase over the years. Because all of the policies subject to the refund provision in 1994 and 1995 were issued within the last 3 or 4 years, they had benchmark ratios below the loss ratio standards. In fact, in 1994 and 1995, the highest benchmark ratio for any policy was 58 percent; about 9 out of 10 policies had benchmark ratios under 50 percent. Millions of Medicare beneficiaries purchased Medigap policies, spending over $12 billion in 1995. Federal loss ratio standards and refund requirements are the main means of ensuring that Medigap policyholders receive value for their premium dollars. Medigap policies representing most of the premium dollars had loss ratios in 1994 and 1995 that were higher than federal law requires.
Most policies with loss ratios below standards in 1994 and 1995 were not considered credible and, thus, were not subject to the refund provision. The amount of premiums paid for policies with loss ratios below standards has declined substantially from 1993, the last year before the refund provision became effective. The primary reason for requiring refunds and credits is to give insurers incentives to meet loss ratio standards and thereby avoid possibly unfavorable public relations consequences. The relatively low amount of premiums for policies with loss ratios below the standards indicates that the incentive is working. In commenting on a draft of this report, NAIC officials offered some technical suggestions, which we incorporated where appropriate. We are sending copies of this report to the governor of each state, NAIC, and interested congressional committees. We will make copies available to others on request. If you have any questions about this report, please call me at (202) 512-7114. Other major contributors to this report are listed in appendix VII. We obtained from the National Association of Insurance Commissioners (NAIC) its computerized database of insurance companies' Medigap annual experience exhibits for 1994 and 1995, the latest available when we began our work. In 1994, earned premiums totaled $12.7 billion for all policies, and, in 1995, earned premiums totaled $12.5 billion. In the databases, we identified policies issued after 1991 and therefore subject to the Omnibus Budget Reconciliation Act of 1990 refund provision. We then identified those policies with loss ratios below the federal loss ratio standards. These policies had earned premiums of about $1.3 billion in 1994 and $0.7 billion in 1995. We did not test the accuracy of the 1994 database, but we did test the accuracy of the 1995 database and found it to be accurate. Moreover, our prior work has found these databases to be accurate.
To determine why policies with loss ratios below standards were not required to refund premiums or credits, we randomly selected a sample of policies with earned premiums of less than $1 million and selected all those with premiums of $1 million or more from the NAIC 1994 and 1995 databases. We asked state insurance commissioners and those for the District of Columbia and Puerto Rico to provide us with copies of all refund calculation forms that insurance companies filed with them for the related policies. All except Michigan responded. However, for about one-third of the policies, we received no refund calculation forms because states could not locate or did not receive the forms or the forms had been purged from the files. The data in the columns of table 1 (on page 10) covering policies with earned premiums under $1 million represent projections of our sample to the universe of policies in NAIC's databases for 1994 and 1995. Each estimate has a sampling error associated with it. The size of the sampling error reflects the precision of the estimate: the smaller the sampling error, the more precise the estimate. We computed sampling errors for table 1 at the 95-percent confidence level. This means that the chances are about 95 out of 100 that the actual number being estimated falls within the range defined by our estimate, plus or minus the sampling error. Table I.1 shows the sampling errors for table 1. Appendix II lists the insurers each reporting over $100 million in premiums, including Prudential Insurance Company of America; Bankers Life & Casualty Company; Empire Blue Cross & Blue Shield; Medical Service Association of Pennsylvania-Pennsylvania Blue Shield; Blue Cross & Blue Shield of Florida; Blue Cross & Blue Shield of Virginia; Blue Cross & Blue Shield of North Carolina, Inc.; Blue Cross & Blue Shield of New Jersey, Inc.; Mutual of Omaha Insurance Company; Anthem Insurance Companies, Inc.; Blue Cross & Blue Shield of Michigan; Blue Cross & Blue Shield of Alabama; Blue Cross & Blue Shield of Tennessee; and Blue Cross & Blue Shield of Connecticut, Inc.
The appendix II list continues with Standard Life & Accident Insurance Co.; Blue Cross & Blue Shield of Kansas, Inc.; Blue Cross of Western Pennsylvania; American Family Life Assurance Company of Columbus, Georgia; State Farm Mutual Automobile Insurance Company; Blue Cross & Blue Shield of Minnesota, Inc.; Arkansas Blue Cross & Blue Shield; and Southeastern Group, Inc. States had alternate Medigap standardized programs in effect before the federal legislation standardizing Medigap was enacted and have waivers from this requirement. [Appendixes V and VI table residue: policy form numbers with premiums, benefit payments, and actual or adjusted loss ratios; the detail is not recoverable here.] Thomas G. Dowdal, Assistant Director, (202) 512-6588; William A. Hamilton, Evaluator-in-Charge; Michael Piskai; Wayne J. Turowski.
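The sampling-error construction described above for table 1 (an estimate plus or minus its sampling error, at the 95-percent confidence level) can be sketched for a proportion estimated from a simple random sample. The report does not state the exact formula GAO used, so this standard normal-approximation version with a finite-population correction is an assumption, and the sample figures below are invented:

```python
import math

def sampling_error_95(sample_size, population_size, sample_proportion):
    """Approximate 95-percent sampling error (margin of error) for a proportion
    estimated from a simple random sample, with finite-population correction.
    A textbook formula, not necessarily the one GAO applied."""
    p = sample_proportion
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return 1.96 * standard_error * fpc

# hypothetical projection: 60 percent of a 400-policy sample, drawn from
# roughly 10,000 below-standard policies, lacked refund calculation forms
moe = sampling_error_95(400, 10_000, 0.60)
low, high = 0.60 - moe, 0.60 + moe   # ~95 chances in 100 the true rate is here
```

As the text notes, a smaller sampling error means a more precise estimate; quadrupling the sample size roughly halves the margin of error.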
Pursuant to a legislative requirement, GAO reviewed insurers' compliance with Medigap loss ratio standards, focusing on: (1) the overall Medigap market; (2) which Medigap policies had loss ratios below the standards in 1994 and 1995; and (3) which policies resulted in refunds or credits, or, if not, why. GAO noted that: (1) from 1988 through 1995, the Medigap insurance market grew from $7 billion to over $12 billion with most of the growth occurring before 1993; (2) during this 8-year period, loss ratios averaged 81 percent, with a high of 86 percent in 1995; (3) in 1994 and 1995, over 90 percent of the policies in force for 3 years or more, representing most of the premium dollars, met loss ratio standards; (4) premiums for policies with loss ratios below standards totaled $448 million in 1994 and $203 million in 1995; (5) loss ratios varied substantially among states, among different benefit packages, and among insurers; (6) although thousands of individual policy forms had loss ratios below standards, no refunds were required in 1994 and only two were required in 1995; (7) the refund provision did not apply because most of these policies' loss experience was based on too few policyholders to be considered credible under the National Association of Insurance Commissioners' (NAIC) refund calculation methodology; (8) a number of policies had a cumulative loss ratio--the factor used to measure compliance--above that required under NAIC's refund calculation method; and (9) a primary reason for requiring refunds was to give insurers an incentive for meeting loss ratio standards, and the high proportion of premium dollars for policies doing so indicates the incentive is working.
The Clean Air Act Amendments of 1990 required EPA to issue a series of new or stricter regulations to address some of the more serious air pollution problems, including acid rain, toxic air pollutants, motor vehicle emissions, and stratospheric ozone depletion. In view of the estimated billions of dollars in annual costs to implement these and other requirements, the Congress required EPA to report on the benefits and costs of the agency's regulatory actions under the 1990 amendments, as well as under previous amendments to the act. Specifically, section 812 of the 1990 amendments required EPA to (1) conduct an analysis of the overall impacts of the Clean Air Act on public health, the economy, and the environment; (2) report on the estimated benefits and costs of the regulations implemented under clean air legislation enacted prior to 1990; and (3) biennially update its estimates of the benefits and costs of the Clean Air Act, beginning in November 1992. In May 1996, EPA drafted a report that examined the benefits and costs of the 1970 and 1977 amendments to the act. EPA is currently in the process of compiling its first prospective study evaluating the benefits and costs of the 1990 amendments. Section 812 also required GAO to report on the benefits and costs of the regulations issued to meet the requirements of the 1990 amendments. In a February 1994 report, we described the methodologies that EPA had used and the progress that the agency was making. In addition, since 1971 a series of executive orders and directives by OMB have required EPA and other federal agencies to consider the benefits and costs associated with individual regulations. In February 1981, President Reagan issued Executive Order 12291, which required federal agencies, including EPA, to prepare regulatory impact analyses (RIAs) that identify the benefits, costs, and alternatives for all proposed and final major rules that the agencies issued.
Subsequently, in September 1993, President Clinton issued Executive Order 12866 replacing Executive Order 12291 and directing federal agencies, including EPA, to assess benefits, costs, and alternatives for all economically significant regulatory actions. OMB and EPA have developed guidelines for conducting the benefit-cost analyses required by these executive orders. While describing the components to be included in these analyses, the guidance affords EPA's program offices considerable flexibility in preparing RIAs. Specifically, EPA's guidance stipulates that the level and precision of analyses in RIAs depend on the quality of the data, scientific understanding of the problem to be addressed through regulation, resource constraints, and the specific requirements of the authorizing legislation. This guidance also states that the amount of information and sophistication required in benefit-cost analyses depend on the importance and complexity of the issue being considered. The recently enacted Small Business Regulatory Enforcement Fairness Act of 1996 provides that before a rule can take effect, the agency preparing it must submit to GAO and make available to the Congress, among other things, a complete copy of any cost-benefit analysis of the rule. This act also provides for congressional review of major rules issued by federal agencies, including EPA, and the potential disapproval of such rules by the enactment of a joint resolution. Eight of the RIAs that we examined did not clearly identify the values assigned for key economic assumptions, such as the discount rate and value of human life, used to assess the economic viability of the regulations. Furthermore, we found that in the RIAs that identified key economic assumptions the rationale for the values used was not always explained. 
While EPA's guidance suggests that RIAs account for uncertainties in such values by conducting sensitivity analyses that show how benefit-cost estimates vary depending on what values are assumed, many RIAs used only a single value and did not always provide a clear explanation. Appendix I summarizes the results of our examination of the economic assumptions used in the 23 RIAs. Five of the 23 RIAs did not indicate whether the estimated future benefits and costs were discounted. The discount rate can have a significant effect on the estimated impact of an environmental regulation. For example, most environmental regulations impose immediate costs, while the benefits are realized in the future. In such a case, a lower discount rate has a more positive effect on future benefits, thus enhancing the regulation's perceived value. Conversely, using a higher discount rate makes benefits that occur in the future appear less valuable. Not clearly indicating the discount rate used in benefit-cost analyses makes it more difficult for decisionmakers to assess the desirability of a proposed regulation. EPA's guidelines recognize that there may be uncertainties about which discount rates should be used. Moreover, EPA's Director of the Office of Economy and Environment stated that there are uncertainties associated with choosing discount rates for conducting benefit-cost analyses. As a result, EPA's guidance suggests the use of sensitivity analyses to show how benefit and cost estimates are affected by different discount rates. Of the 18 RIAs that clearly identified the discount rates used, 5 showed the sensitivity of their estimates to different rates ranging from 2 to 10 percent. Thirteen of the RIAs used a single rate. Although 14 RIAs indicated that the reduction in mortality was an expected benefit, five did not indicate the value placed on a human life. 
Of the nine RIAs that indicated the value placed on a human life, eight included sensitivity analyses to indicate how their benefit estimates were affected by different values assumed for a life. Assigning a relatively high value for human life can have a significant positive effect on estimated benefits. However, for the nine RIAs that assumed a value for a human life, the ranges used were not always explained. For example, one RIA assumed a value of human life that ranged from $1.6 million to $8.5 million, and another, prepared in the same year, assumed a value of human life that ranged from $3 million to $12 million. In both instances, the RIAs did not provide a clear explanation of the rationale for the values that were used. Because of the agency's concern about the use of different values for key assumptions and the extent to which sensitivity analyses were used to account for uncertainties about the appropriate values for these assumptions, EPA recently formed an Economic Analysis Consistency Task Group under the direction of the Regulatory Policy Council to develop information on the causes of inconsistencies in the agency's RIAs. The Council is chaired by EPA's Deputy Administrator. In addition, EPA officials explained that the authorizing legislation for some environmental regulations is often a key determinant in the thoroughness of the agency's benefit-cost analyses. For example, they said that health-based national ambient air quality standards issued by the agency are not based on costs or other economic considerations. However, costs may be considered when developing and implementing control strategies for these standards. Although benefit-cost analyses are completed for these regulations, they do not directly impact the regulatory decision-making process. Therefore, the level of analysis and the number of alternatives analyzed could be more limited. 
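The sensitivity analyses that EPA's guidance calls for can be illustrated with a small worked example. The figures below are hypothetical; the discount rates and value-of-life range simply echo the 2-to-10-percent and $1.6 million-to-$8.5 million ranges cited above. The point is how widely the present value of the same benefit stream swings with these two assumptions.

```python
# Illustrative sensitivity analysis: present value of a mortality-reduction
# benefit under different discount rates and value-of-life assumptions.
# All figures are hypothetical; the ranges echo those cited in the RIAs.

def present_value(annual_benefit, discount_rate, years):
    """Discount a constant annual benefit stream back to year zero."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

deaths_avoided_per_year = 100
years = 20

for value_of_life in (1.6e6, 8.5e6):   # dollars per statistical life
    for rate in (0.02, 0.10):           # real discount rate
        annual_benefit = deaths_avoided_per_year * value_of_life
        pv = present_value(annual_benefit, rate, years)
        print(f"Value of life ${value_of_life / 1e6:.1f}M, rate {rate:.0%}: "
              f"PV = ${pv / 1e9:.2f} billion")
```

Under these illustrative assumptions the estimated benefit ranges from roughly $1.4 billion (low value of life, 10 percent rate) to roughly $13.9 billion (high value of life, 2 percent rate), a tenfold spread driven entirely by the choice of assumptions, which is what makes disclosing them essential.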
Time constraints imposed by statutory and court-ordered deadlines and shortages of resources and staff also restrict EPA's ability to conduct comprehensive benefit-cost analyses. Given the limited resources and staff available for completing economic analyses, EPA officials stated that they assign a higher priority to benefit-cost analyses supporting regulations facing imminent deadlines, regulations expected to have greater economic impacts on society, and those for which the economic analysis has the highest potential to affect the regulatory alternative selected. OMB's and EPA's guidelines encourage EPA to quantify, to the extent feasible, all potential regulatory benefits and costs in monetary terms, but the guidance recognizes that assigning reliable monetary values to some benefits may be difficult, if not impossible. When benefits and costs cannot be described in monetary terms, the guidance recommends that RIAs include quantitative and qualitative information on the benefits and costs associated with the proposed regulations. The benefits mentioned in the guidance include reduced mortality, reduced morbidity, improved agricultural production, reduced damage to buildings and structures, improved recreational environments, improved aesthetics, and improvements in ecosystems. Our review of the 23 RIAs indicated that while all of them assigned dollar values to the costs of proposed regulations, only 11 assigned dollar values to estimated benefits. EPA acknowledges that assigning monetary values to projected benefits is more difficult than assigning values to the costs of regulatory actions. According to EPA officials, the uncertainty of the science and inadequacy of other data often prevent the agency from estimating dollar benefits.
For example, EPA's guidance recognizes that assigning a monetary value to reduced health risks, a potentially significant benefit, is difficult because of uncertainties about the precise relationship between different pollution levels and corresponding health effects and the appropriate monetary values to be assigned to reductions in mortality and reduced risks of individuals' experiencing serious illnesses. Estimating the monetary value of improvements in ecosystems, another potentially significant benefit, is even more complex. Although some RIAs did not assign dollar values to benefits, all 23 of the RIAs we examined contained other quantitative or qualitative information on the benefits of the proposed regulations. When benefits cannot be assigned dollar values, quantifying the benefits, such as a reduced incidence of deaths and illnesses, helps clarify the impact of proposed regulations. For example, an RIA for the National Recycling and Emissions Reduction Program's regulation estimated that 76,500 fewer cases of skin cancer and 1,400 fewer deaths from skin cancer would occur because of the regulation. Qualitative information is also helpful to decisionmakers because it gives them a more complete understanding of the overall benefits of regulations. Nineteen of the RIAs discussed qualitative benefits, such as increased crop yields, improvements in ecosystems, and reduced damage to buildings and other structures. Recognizing the difficulties associated with assigning dollar values to benefits, EPA's guidelines state that cost-effectiveness analyses can assist decisionmakers in comparing the desirability of various regulatory alternatives. We found that 20 of the RIAs we examined included the results of cost-effectiveness analyses, such as the cost per ton of reduced emissions. 
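A cost-effectiveness comparison of the kind found in 20 of the RIAs can be sketched in a few lines. The control options and figures below are hypothetical, not drawn from any RIA; they show only the mechanics of ranking alternatives by cost per ton of emissions reduced.

```python
# Illustrative cost-effectiveness comparison: annualized cost per ton of
# emissions reduced. All options and figures are hypothetical.

alternatives = {
    # name: (annualized cost in dollars, tons of emissions reduced per year)
    "Option A (work practice standards)": (12_000_000, 30_000),
    "Option B (add-on controls)":         (45_000_000, 90_000),
    "Option C (fuel switching)":          (80_000_000, 100_000),
}

for name, (cost, tons) in alternatives.items():
    print(f"{name}: ${cost / tons:,.0f} per ton reduced")
```

Here the work-practice option has the lowest cost per ton ($400, versus $500 and $800 for the others), but it also removes the least tonnage, which is why the guidelines treat cost-effectiveness as an aid to comparing alternatives rather than a substitute for full benefit-cost analysis.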
OMB's and EPA's guidelines require EPA to identify and discuss in RIAs the regulatory and nonregulatory alternatives for mitigating or eliminating the environmental problems being addressed and to provide the reasoning for selecting the proposed regulatory action over other alternatives. While EPA's guidance recommends that RIAs consider four major types of alternatives--voluntary actions, market-oriented approaches, regulatory approaches within the scope of the authorizing legislation, and regulatory actions initiated through other legislative authority--it states that the number and choice of alternatives to be selected for detailed benefit-cost analyses are matters of judgment. While it was not always clear how many alternatives or what types of alternatives were considered, our examination of the 23 RIAs indicated that 6 of them compared a single alternative, which was the regulatory action being proposed, to the baseline, which was the situation likely to occur in the absence of the regulation--the status quo. All other RIAs compared two or more alternatives to the baseline. Figure 1 shows the results of our examination of the number of alternatives that EPA considered in the 23 RIAs. A major goal of RIAs is to develop and organize information on benefits and costs to clarify trade-offs among alternatives. EPA's guidance states that RIAs should provide decisionmakers with a comprehensive assessment of the implications of alternatives. EPA officials acknowledged that it is not always clear in the RIAs which alternatives were actually analyzed. They stated that some alternatives are excluded before the benefit-cost analyses are performed because of noneconomic reasons, such as statutory language that precludes EPA from using certain approaches.
In our 1984 report, we recommended that future RIAs prominently include executive summaries that (1) clearly recognize all benefits and costs, even those that cannot be quantified; (2) identify a range of values for benefits and costs subject to uncertainty, as well as the sources of uncertainty; and (3) compare all feasible alternatives. While 13 of the 23 RIAs that we examined included executive summaries, some of these RIAs only briefly discussed the types of information that we recommended they contain. For example, the executive summary for the RIA on the regulation for national emissions standards for coke ovens contained a limited discussion of the uncertainties underlying the analysis, and the executive summary for the RIA on the operating permits program's regulation included only two sentences on the three alternatives that EPA considered. In contrast, the executive summary for the RIA supporting the regulation on phasing out ozone-depleting chemicals presented a relatively thorough discussion of the results of the benefit-cost analysis. For example, it included a range of cost estimates, qualitative and quantitative benefit estimates, discussions of scientific and economic uncertainties, and estimated benefits and costs for baseline conditions and three alternatives. The prominent display of this type of information in the executive summary makes it easier for decisionmakers to locate the information they need without searching through hundreds of pages in the body of the RIAs. EPA officials acknowledged that some of the RIAs did not include executive summaries and agreed that executive summaries that include information such as descriptions of the difficulties in assigning dollar values to benefits, uncertainties of the data, and regulatory alternatives are useful. 
However, they stated that time constraints and limited resources and staff often determine whether they prepare executive summaries and the amount of detail that is included when summaries are done. We believe that improvements in the presentation and clarity of information contained in EPA's RIAs would enhance their value to both agency decisionmakers and the Congress in assessing the benefits and costs of proposed regulations. EPA's guidelines state that the goal of RIAs is to provide decisionmakers with well-organized, easily understood information on the benefits and costs of major regulations and a comprehensive assessment of the implications of alternative regulatory actions. However, many of the RIAs we reviewed did not clearly identify key economic assumptions, the rationale for using these assumptions, the degree of uncertainty associated with both the data and the assumptions used, or the alternatives considered. Not clearly displaying this information makes it difficult for decisionmakers and the Congress to appreciate the range and significance of the benefit and cost estimates presented in these documents. To help EPA decisionmakers and the Congress better understand the implications of proposed regulatory actions, we recommend that the EPA Administrator ensure that RIAs identify the (1) value, or range of values, assigned to key assumptions, along with the rationale for the values selected; (2) sensitivity of benefit and cost estimates when there are major sources of uncertainty; and (3) alternatives considered, including those not subjected to benefit-cost analyses. We provided a draft of this report to EPA and OMB for review and comment. We obtained comments from EPA officials, including the Director, Office of Economy and Environment, and representatives of the Office of Air and Radiation. EPA officials stated that the information in the report was accurate and agreed with the recommendations in the report.
They provided specific comments on a number of issues, which we have incorporated into the report, including a clarification of the objectives of the Economic Analysis Consistency Task Group. According to EPA officials, this group is in the process of identifying key issues associated with benefit-cost analyses that offer the potential for greater consistency in the agency's RIAs. Among the issues being considered are the valuation of reductions in the risk of mortality, discount rates and baselines, intergenerational issues, and distribution effects. Additionally, they emphasized that greater consistency in addressing key issues in the RIAs would enhance their usefulness for EPA's decisionmakers. EPA views this as an ongoing process and anticipates that it will result in revisions to the agency's guidelines for preparing economic analyses. OMB did not provide comments on the draft report. We conducted our work from February 1996 through February 1997 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is contained in appendix II. We are sending copies of this report to the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. Copies are also available to others on request. Please call me at (202) 512-4907 if you or your staff have any questions. Major contributors to this report are listed in appendix III.

[Appendix I table: for each RIA, including the RIAs for the National Recycling and Emission Reduction Program and the RIA of Nitrogen Oxides Regulations (1993), the discount rates (percent) and value of life (dollars in millions) used. Table notes: These are real discount rates, which exclude the effects of inflation. Nine of these RIAs did not identify reduced mortality as a benefit associated with a proposed regulation; therefore, assigning a monetary value for a human life was not applicable.]
We examined 23 RIAs issued by the Office of Air and Radiation between November 1990, the effective date of the Clean Air Act Amendments of 1990, and December 1995. Eighteen of these RIAs supported regulations that were estimated to cost $100 million or more annually and therefore were considered economically significant. Five RIAs supported regulations that were considered major or significant by the Environmental Protection Agency (EPA) because of their potential impact on costs and prices for consumers, the international competitive position of U.S. firms, or the national energy strategy or because they were statutorily required by the 1990 amendments. To identify these RIAs, we interviewed officials from EPA's Office of Policy, Planning, and Evaluation and Office of Air and Radiation, which has four program offices--the offices of Air Quality Planning and Standards, Mobile Sources, Atmospheric Programs, and Radiation and Indoor Air--and examined EPA's database of completed RIAs. Although EPA's other program offices are also responsible for preparing RIAs, we limited our review to the RIAs prepared by the Office of Air and Radiation because this office is primarily responsible for implementing the requirements of the 1990 amendments. We reviewed Executive Orders 12866 and 12291 and EPA's and the Office of Management and Budget's guidance on the preparation of RIAs under these executive orders. From those documents, we identified the key components of RIAs and reviewed the 23 selected RIAs for their handling of these components. We also discussed issues affecting the clarity of RIAs with officials of the Office of Air and Radiation and Office of Policy, Planning, and Evaluation.

William F. McGee, Assistant Director
Charles W. Bausell, Jr., Adviser
Harry C. Everett, Evaluator-in-Charge
Kellie O. Schachle, Evaluator
Kathryn D. Snavely, Evaluator
Joseph L. Turlington, Evaluator
GAO reviewed the Environmental Protection Agency's (EPA) 23 regulatory impact analyses (RIA) supporting air quality regulations, focusing on whether the RIAs clearly describe: (1) key economic assumptions subject to uncertainty and the sensitivity of the results to these assumptions; (2) the extent to which benefits and costs were quantified for the proposed regulatory action; and (3) the extent to which alternative approaches were considered. GAO noted that: (1) while certain key economic assumptions, such as the discount rate and the value of human life, can have a significant impact on the results of benefit-cost analyses and are important to the regulations being proposed, eight of the RIAs did not identify one or more of these assumptions; (2) furthermore, in the RIAs that identified key economic assumptions the rationale for the values used was not always explained; (3) for example, one RIA assumed a value of life that ranged from $1.6 million to $8.5 million and another, prepared in the same year, assumed a value of life that ranged from $3 million to $12 million; (4) in neither instance did the RIAs provide a clear explanation of the rationale for the values that were selected; (5) even though EPA's guidance suggests that RIAs account for any uncertainties in the values of key assumptions by conducting sensitivity analyses, which show how benefit and cost estimates vary depending on what values are assumed, 13 RIAs used only a single discount rate; (6) all 23 RIAs assigned dollar values to the estimated costs of proposed regulations, however, 11 of the RIAs assigned dollar values to the estimated benefits; (7) according to EPA officials, assigning dollar values to potential benefits is difficult because of the uncertainty of scientific data and the lack of market data on some of these effects; (8) all of the RIAs contained some quantitative or qualitative information on the expected benefits, such as a reduced incidence of mortality and illness; (9) while the number and the types of alternatives considered in the 23 RIAs were not always clear, GAO's examination indicated that six of the RIAs compared a single alternative, which was the regulatory action being proposed, to the baseline, which was the situation likely to occur in the absence of regulation, the status quo; (10) the remainder compared two or more alternatives to the baseline; (11) resource constraints and the specific requirements of authorizing legislation, which sometimes limits EPA's options, were factors influencing the extent to which alternatives were considered; (12) ten of the RIAs GAO examined did not include executive summaries, even though these summaries can be a significant benefit to decisionmakers; and (13) EPA officials acknowledged that some of the RIAs did not include executive summaries and agreed that executive summaries, by providing easily accessible information, can be useful to decisionmakers.
OMB and Treasury have established a new Data Standards Committee that will be responsible for maintaining established standards and developing new data elements or data definitions that could affect more than one functional community (e.g., financial management, financial assistance, and procurement). Although this represents progress in responding to GAO's prior recommendation, more remains to be done to establish a data governance structure that is consistent with leading practices to ensure the integrity of data standards over time. Several data governance models exist that could inform OMB's and Treasury's efforts. Many of these models promote a common set of key practices that include establishing clear policies and procedures for developing, managing, and enforcing data standards. A common set of key practices endorsed by standard-setting organizations recommends that data governance structures include the key practices shown in the text box below. We have shared these key practices with OMB and Treasury.

1. Developing and approving data standards.
2. Managing, controlling, monitoring, and enforcing consistent application of data standards.
3. Making decisions about changes to existing data standards and resolving conflicts related to the application of data standards.
4. Obtaining input from stakeholders and involving them in key decisions, as appropriate.
5. Delineating roles and responsibilities for decision-making and accountability, including roles and responsibilities for stakeholder input on key decisions.

A robust, institutionalized data governance structure is important to provide consistent data management during times of change and transition. The transition to a new administration presents risks to implementing the DATA Act, including potential shifted priorities or loss of momentum.
The lack of a robust and institutionalized data governance structure for managing efforts going forward presents additional risks regarding the ability of agencies to meet their statutory deadlines in the event that priorities shift over time. In June 2016, OMB directed the 24 CFO Act agencies to update the initial DATA Act implementation plans that they submitted in response to OMB's May 2015 request. In reviewing the 24 CFO Act agencies' August 2016 implementation plan updates, we found that 19 of the 24 CFO Act agencies continue to face challenges implementing the DATA Act. We identified four overarching categories of challenges reported by these agencies that may impede their ability to effectively and efficiently implement the DATA Act: systems integration issues, lack of resources, evolving and complex reporting requirements, and inadequate guidance. To address these challenges, most agencies reported taking mitigating actions, such as making changes to internal policies and procedures, leveraging existing resources, utilizing external resources, and employing manual and temporary workarounds. However, the information reported by the CFO Act agencies in their implementation plan updates indicates that some agencies are at increased risk of not meeting the May 2017 reporting deadline because of these challenges. In addition, inspectors general for some agencies, such as the Departments of Labor and Housing and Urban Development, have issued readiness review reports indicating that their respective agencies are at risk of not meeting the reporting deadline. As discussed further below, the technical software requirements for agency reporting are still evolving, so any changes to the technical requirements over the next few months could also affect agencies' ability to meet the reporting deadline. 
In August 2016, in response to a prior GAO recommendation, OMB established procedures for reviewing and using agency implementation plan updates that include procedures for identifying ongoing challenges. According to the procedures document, OMB will also be monitoring progress toward the statutory deadline and setting up meetings with any of the 24 CFO Act agencies that OMB identifies as being at risk of not meeting the implementation deadline. In May 2016, in response to a prior GAO recommendation, OMB released additional guidance on reporting financial and award information required under the act to address potential clarity, consistency, and quality issues with the definitions of standardized data elements. While OMB's additional guidance addresses some of the limitations we have previously identified, it does not address all of the clarity issues. For example, we found that this policy guidance does not address the underlying source that can be used to verify the accuracy of non-financial procurement data or any source for data on assistance awards. In addition, in their implementation plan updates, 11 of the 24 CFO Act agencies reported ongoing challenges related to the timely issuance of, and ongoing changes to, OMB policy and Treasury guidance. Eight agencies reported that if policy or technical guidance continues to evolve or be delayed, the agencies' ability to comply with the May 2017 reporting deadline could be affected. In August 2016, OMB released additional draft guidance on how agencies should report financial information involving specific transactions, such as intragovernmental transfers, and how agency senior accountable officials should provide quality assurances for submitted data. 
OMB staff told us that this most recent policy guidance was drafted in response to questions and concerns reported by agencies in their implementation plan updates and in meetings with senior OMB and Treasury officials intended to assess agency implementation status. OMB staff told us that they received feedback from 30 different agencies and reviewed over 200 comments on the draft guidance. The final guidance was issued on November 4, 2016. Although OMB has made some progress with these efforts, other data definitions still lack the clarity needed to ensure that agencies report consistent and comparable data. These challenges, as well as the challenges identified by agencies, underscore the need for OMB and Treasury to fully address our prior recommendation to provide agencies with additional guidance to address potential clarity issues. We also noted in our report being released today that the late release of the schema version 1.0 may pose risks for implementation delays at some agencies. The schema version 1.0, released by Treasury on April 29, 2016, is intended to standardize the way financial assistance awards, contracts, and other financial data will be collected and reported under the DATA Act. A key component of the reporting framework laid out in the schema version 1.0 is the DATA Act Broker, a system to standardize data formatting and assist reporting agencies in validating their data prior to submitting them to Treasury. Treasury has been iteratively testing and developing the broker using what Treasury describes as an agile development process. On September 30, 2016, Treasury updated its version of the broker, which it stated was fully capable of performing the key functions of extracting and validating agency data. Treasury officials told us that although they plan to continue to refine the broker to improve its functionality and overall user experience, they have no plans to alter these key functions.
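The broker's core function, checking standardized data elements in agency submissions before they reach Treasury, can be illustrated with a minimal validation sketch. The element names and rules below are hypothetical stand-ins, not the actual schema version 1.0 definitions or the broker's real validation logic.

```python
# Minimal sketch of schema-style validation of an agency submission.
# Element names and rules are hypothetical, not the actual DATA Act
# schema; the real broker applies many more checks.

REQUIRED_ELEMENTS = {"awarding_agency_code", "federal_action_obligation",
                     "period_of_performance_start"}

def validate_record(record):
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    for element in REQUIRED_ELEMENTS - record.keys():
        errors.append(f"missing required element: {element}")
    obligation = record.get("federal_action_obligation")
    if obligation is not None and not isinstance(obligation, (int, float)):
        errors.append("federal_action_obligation must be numeric")
    return errors

record = {"awarding_agency_code": "097",
          "federal_action_obligation": 1_500_000,
          "period_of_performance_start": "2017-05-01"}
print(validate_record(record))  # a valid record yields no errors: []
```

Validating before submission, as the broker does, lets agencies catch missing or malformed elements early rather than after the data reach Treasury.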
Agencies have reported making progress creating their data submissions and testing them in the broker, but work remains to be done before actual reporting can begin. Some agencies reported in their implementation plan updates that they developed plans for interim solutions to construct these files until vendor-supplied software patches can be developed, tested, and configured that will extract data to help their clients develop files that comply with DATA Act requirements. However, some of these interim solutions rely on manual processing, which can be burdensome and increase the risk for errors. The Section 5 Pilot is designed to develop recommendations to reduce the reporting burden for federal funds recipients. It has two primary focus areas: federal grants and federal contracts (procurements). OMB partnered with the Department of Health and Human Services to design and implement the grants portion of the pilot and with the General Services Administration to implement the procurement portion. Our review of the revised design for both the grants and procurement portions of the pilot found that they partly met each of the leading practices for effective pilot design (shown in the text box below).

1. Establish well-defined, appropriate, clear, and measurable objectives.
2. Clearly articulate an assessment methodology and data gathering strategy that addresses all components of the pilot program and includes key features of a sound plan.
3. Identify criteria or standards for identifying lessons about the pilot to inform decisions about scalability and whether, how, and when to integrate pilot activities into overall efforts.
4. Develop a detailed data-analysis plan to track the pilot program's implementation and performance and evaluate the final results of the project and draw conclusions on whether, how, and when to integrate pilot activities into overall efforts.
5. Ensure appropriate two-way stakeholder communication and input at all stages of the pilot project, including design, implementation, data gathering, and assessment.

We also determined that the updated design for both portions of the Section 5 Pilot meets the statutory requirements for the pilot established under the DATA Act. Specifically, the DATA Act requires that the pilot program include the following design features: (1) collection of data during a 12-month reporting cycle; (2) a diverse group of federal award recipients and, to the extent practicable, recipients that receive federal awards from multiple programs across multiple agencies; and (3) a combination of federal contracts, grants, and subawards with an aggregate value between $1 billion and $2 billion. Although this represented significant progress since April 2016, we identified an area where further improvement is still needed. Specifically, the plan for the procurement portion of the pilot does not clearly describe and document how findings related to centralized certified payroll reporting will be more broadly applicable to the many other types of required procurement reporting. This is of particular concern given the diversity of federal procurement reporting requirements. Implementation of the grants portion of the pilot is currently under way, but the procurement portion is not scheduled to begin until early 2017. Department of Health and Human Services officials and OMB staff told us that they are recruiting participants and have begun administering data collection instruments for all components of the grants portion of the pilot. However, in late November 2016, OMB staff and General Services Administration officials informed us that they decided to delay further implementation of the procurement portion of the pilot in order to ensure that security procedures designed to protect personally identifiable information were in place.
As a result, General Services Administration officials expect to be able to begin collecting data through the centralized reporting portal sometime between late January 2017 and late February 2017. OMB staff stated that despite the delay, they still plan on collecting 12 months of data through the procurement pilot as required by the act. In our report being released today, we made a new recommendation to OMB that would help ensure that the procurement portion of the Section 5 Pilot better reflects leading practices for effective pilot design. In commenting on the report being released today, OMB neither agreed nor disagreed with the recommendation, but provided an overview of its implementation efforts since passage of the DATA Act. These efforts include issuing three memorandums providing implementation guidance to federal agencies, finalizing 57 data standards for use on USASpending.gov, establishing the Data Standards Committee to develop and maintain standards for federal spending, and developing and executing the Section 5 Pilot. OMB also noted that, along with Treasury, it met with each of the 24 CFO Act agencies to discuss the agency's implementation timeline, unique risks, and risk mitigation strategy and took action to address issues that may affect successful DATA Act implementation. According to OMB, as a result of these one-on-one meetings with agencies, OMB and Treasury learned that in spite of the challenges faced by the agencies, 19 of the 24 CFO Act agencies expect that they will fully meet the May 2017 deadline for DATA Act implementation. Treasury also provided comments on our report being released today. In its comments, Treasury provided an overview of the steps it has taken to implement the DATA Act's requirements and assist agencies in meeting the requirements under the act, including OMB's and Treasury's issuance of uniform data standards, technical requirements, and implementation guidance.
Treasury's response also noted that as a result of the aggressive implementation timelines specified in the act and the complexity associated with linking hundreds of disconnected data elements across the federal government, it made the decision to use an iterative approach to provide incremental technical guidance to agencies. Treasury noted, among other things, that this iterative approach enabled agencies and other key stakeholders to provide feedback and contribute to improving the technical guidance and the public website. Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Paula M. Rascona at (202) 512-9816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Michael LaForge (Assistant Director), Peter Del Toro (Assistant Director), Maria Belaval, Aaron Colsher, Kathleen Drennan, Thomas Hackney, Diane Morris, Katherine Morris, and Laura Pacheco.

DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016.

DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016.

DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016.

DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: April 19, 2016.

DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016.
DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016.

DATA Act: Progress Made in Initial Implementation but Challenges Must Be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015.

Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014.

Data Transparency: Oversight Needed to Address Underreporting and Inconsistencies on Federal Award Website. GAO-14-476. Washington, D.C.: June 30, 2014.

Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases. GAO-13-758. Washington, D.C.: September 12, 2013.

Government Transparency: Efforts to Improve Information on Federal Spending. GAO-12-913T. Washington, D.C.: July 18, 2012.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The DATA Act requires OMB and Treasury to establish government-wide data standards and requires federal agencies to begin reporting financial and payment data in accordance with these standards by May 2017. The act also requires establishment of a pilot program to develop recommendations for simplifying federal award reporting for grants and contracts. Consistent with GAO's mandate under the act, the report being released today is one in a series that GAO will provide to the Congress. This statement discusses steps taken by OMB, Treasury, and federal agencies to implement the act and highlights key findings and the recommendation from GAO's report (GAO-17-156). As part of this work, GAO reviewed DATA Act implementation plan updates and interviewed staff at OMB, Treasury, and other selected agencies. The Office of Management and Budget (OMB), the Department of the Treasury (Treasury), and federal agencies have taken steps to implement the Digital Accountability and Transparency Act of 2014 (DATA Act); however, more work is needed for effective implementation.

Data governance and the transition to a new administration. OMB and Treasury have established a new Data Standards Committee responsible for maintaining established standards and developing new data elements or data definitions. Although this represents progress, more remains to be done to establish a data governance structure that is consistent with leading practices to ensure the integrity of data standards over time. The transition to a new administration presents risks to implementing the DATA Act, including a potential shift in priorities. The lack of a robust and institutionalized data governance structure for managing efforts going forward also presents risks regarding the ability of agencies to meet the statutory deadlines in the event that priorities shift over time.

Implementation plan updates.
According to the 24 Chief Financial Officers (CFO) Act agencies' implementation plan updates, most of them continue to face challenges implementing the DATA Act. GAO identified four overarching categories of challenges reported by agencies that may impede their ability to effectively and efficiently implement the DATA Act: systems integration issues, lack of resources, evolving and complex reporting requirements, and inadequate guidance. To address these challenges, most agencies reported taking mitigating actions, such as making changes to internal policies and procedures, leveraging existing resources, and employing manual and temporary workarounds. However, the information reported by the CFO Act agencies in their implementation plan updates indicates that some agencies are at increased risk of not meeting the May 2017 reporting deadline because of these challenges. In addition, inspectors general for some agencies have issued readiness review reports indicating that their respective agencies are at risk of not meeting the reporting deadline.

Operationalizing data standards and technical specifications for data reporting. In November 2016, OMB issued additional guidance on how agencies should report financial information involving specific transactions, such as intragovernmental transfers, and how agency senior accountable officials should provide quality assurances for submitted data. Although OMB has made some progress with these efforts, other data definitions still lack clarity, which needs to be addressed to ensure that agencies report consistent and comparable data. In September 2016, Treasury updated its version of the DATA Act broker, which it stated was fully capable of performing the key functions of extracting and validating agency data. Treasury officials stated that although they plan to continue to refine the broker to improve its functionality and overall user experience, they have no plans to alter key functions.
Agencies have reported making progress creating their data submissions and testing them in the broker, but work remains before actual reporting can begin. Some agencies reported in their implementation plan updates that they developed plans for interim solutions, but some of these interim solutions rely on manual processing, which can be burdensome and increase the risk for errors.

Pilot to reduce recipient reporting burden. GAO's review of the revised design for both the grants and procurement portions of the pilot found that they partly met each of the leading practices for effective pilot design. Although this represented significant progress since April 2016, GAO identified an area where further improvement is still needed. Specifically, the plan for the procurement portion of the pilot does not clearly describe and document how findings related to centralized certified payroll reporting will be applicable to other types of required procurement reporting. Further, in November 2016, this portion of the pilot was delayed to ensure that security procedures were in place to protect personally identifiable information. In addition to prior recommendations that GAO has made, in its most recent report, GAO recommends that OMB take steps to help ensure that the procurement portion of the pilot better reflects leading practices for effective pilot design. OMB neither agreed nor disagreed with the recommendation.
Ships' crews are often able to complete voyage repairs while the ship or battle group is underway. According to Navy officials, because ships often include redundant systems, repairs can usually be undertaken without interrupting the ship's mission or be postponed until the ship reaches a repair facility or its home port. However, voyage repairs are occasionally beyond the capability of ships' crews to complete, and must be performed by an intermediate or depot-level ship repair activity. Historically, Navy ships home-ported in Guam were permitted by U.S. law to be overhauled, repaired, or maintained in shipyards outside the United States or Guam. However, the John Warner National Defense Authorization Act for Fiscal Year 2007 amended section 7310 of Title 10 of the U.S. Code to prohibit U.S. naval ships home-ported in Guam from being repaired in shipyards outside the United States or Guam, other than in the case of voyage repairs. Since the closure of the Navy Ship Repair Facility, Guam, the Navy and MSC have relied on four different sources to provide voyage repairs in Guam. First, the Navy submarine tender USS Frank Cable, which is a ship home-ported in Guam, has provided voyage repair capabilities for submarines when needed. Second, the Navy has relied on its Emergent Repair Facility to repair submarines by using a repair crew left behind from the USS Frank Cable when that ship is deployed. Third, fly-away teams from U.S. Naval shipyards have been sent to Guam to conduct voyage repairs when needed. Finally, the Navy has used its contract with Guam Shipyard for voyage repairs of both submarines and surface ships. Guam Shipyard has repaired most MSC ships operating around Guam and has assisted the Navy in completing voyage repairs on other ships and submarines. For example, Guam Shipyard assisted U.S. 
Naval shipyards with extensive voyage repairs on the USS San Francisco, a submarine that struck an undersea mountain, by providing dry-dock services and selected support services. Voyage repairs have averaged about 17 percent of the total annual workload performed at Guam Shipyard. While Guam Shipyard officials told us that the voyage repair work would not be sufficient to support its current infrastructure and personnel, in 2007 it won a competition for the overhaul of the USNS Bridge, an MSC Pacific fleet support vessel. Competitions for overhaul of other MSC ships operating near Guam are scheduled beyond 2008. While Guam Shipyard has been the only commercial shipyard capable of supporting Navy ship repair and overhaul requirements on Guam since 1998, a private ship repair provider new to Guam, Gulf Copper, has initiated ship repair operations there. Although the Navy had indicated in its 2007 report to Congress that additional voyage repairs could be addressed by the submarine tender USS Frank Cable's repair department, MSC has awarded contracts to both Guam Shipyard and Gulf Copper for voyage repairs that may be needed during fiscal year 2008. MSC awarded single-year contracts without renewal options, but MSC officials said that they plan similar contracts for 2009 that will include option years. Voyage repairs are unscheduled, and the capabilities required to address them cannot be precisely predicted. The Navy has not identified voyage ship repair requirements for 2012 and beyond for surface vessels operating at or near Guam, although some information is available on which to base estimated requirements to support planning efforts. Navy officials stated that requirements have not been developed for the following three reasons. First, the Navy has not fully identified its future Pacific force structure or finalized operational plans. Second, the Marine Corps' plans for additional vessels, if any, and operations at Guam are still evolving. 
Third, MSC projects making changes to its force structure for ships operating near Guam. However, some information is available that could enable the Navy to develop estimates of ship repair requirements. Estimation of requirements is a prerequisite for assessing each option's ability to address those requirements in a cost-effective and timely fashion. Without developing estimated repair requirements, the Navy cannot determine the best alternative among various potential sources of repair or support planning to provide needed maintenance capabilities. Navy officials stated that voyage ship repair requirements at Guam cannot be identified until its future force structure plans are finalized. The 2006 Quadrennial Defense Review indicated that the Navy plans to operate six aircraft carrier strike groups and 60 percent of its submarine force in the Pacific. Moreover, the service has plans for a 313-ship Navy, but it has not yet identified the specific ships that will comprise the force structure in the Pacific beyond 2012. Officials stated that operational plans will dictate the number and type of vessels that will visit Guam, but those plans are periodically adjusted due to changes in the global security environment. As a result, Navy officials stated that they cannot yet develop requirements for voyage ship repairs at Guam for 2012 and beyond. Similarly, the Marine Corps' plans for additional vessels in Guam have not been finalized, but conceptual plans for relocating Marines from Okinawa to Guam may include the home-porting of four new High-Speed Vessels and two new Littoral Combat Ships at Guam. In addition to the possibility of adding vessels, the Marine Corps' force relocation from Okinawa to Guam is expected to result in visits by amphibious vessels home-ported in Japan. These vessels are to deploy to Guam to support training exercises for the Marines stationed on Guam, and they may generate demands for voyage repairs during these operations.
MSC also expects changes to its force structure operating near Guam, but the timeline for these changes is uncertain. Current MSC vessels, such as ammunition ships and combat stores ships, are expected to be replaced by new dry cargo/ammunition ships on a one-for-one basis. MSC officials believe that these new vessels will require less maintenance than the vessels they replace, thus potentially reducing repair requirements. For example, these vessels use new technology, including propulsion and electrical systems that are thought to require less frequent maintenance and different repair capabilities. Guam's first new dry cargo/ammunition ship is to arrive on station sometime in 2008, but acquisition schedules for additional such ships indicate deployment delays. Delaying the arrival of the new ships will delay decommissioning of the older ships, thus raising questions about the need to continue existing levels of repair capabilities in the near term, as MSC believes the older ships may require more intensive maintenance. While the precise force structure requirements associated with the military buildup around Guam remain uncertain, the Navy has some information that can be used to identify estimated ship repair requirements. Specifically, the Navy knows the history of voyage repairs conducted on Guam; it can identify vessels likely to operate near Guam, based on planned force structure realignments in the 2006 Quadrennial Defense Review; and it can identify ship repair capabilities available at other strategic locations in the area, including Pearl Harbor and Yokosuka, Japan. Historical data are available showing voyage repairs that have been performed on surface vessels and submarines in Guam for at least the past 6 years, and could be used to estimate likely future repair requirements based on past experience. MSC recently used these data to formulate contracts awarded for providing voyage repairs on vessels operating at or near Guam for fiscal year 2008.
Table 1 shows the average number of man-days and the cost to complete voyage repairs from private sources on Guam for fiscal years 2002-2007. The Navy has identified some vessel assignments associated with the force structure changes identified in the 2006 Quadrennial Defense Review. Specifically, the Navy plans to replace the USS Kitty Hawk at its home port in Japan with the USS George Washington--a new, nuclear-powered aircraft carrier. Navy officials stated that operational plans for that carrier's strike group will include visits to Guam for periods of 2 to 3 weeks. Although the Navy has not identified the specific vessels that will make up the strike group, Navy officials know the types of vessels that are normally part of a strike group. Moreover, Navy vessels have operated in the Pacific for decades, and voyage repair experiences are readily available to the Navy through repair records, shipyard billing, or similar documents. Nonetheless, the Navy has not used these records to forecast estimated surface ship repair requirements for Guam beyond 2012. Further, extensive ship repair capabilities exist in other locations in the Pacific, such as Pearl Harbor. Given that future ship repair capabilities on Guam may need to support a larger number and different mix of ships, the Navy could use ship repair data from Pearl Harbor and other strategic forward-deployed locations--such as the Navy Ship Repair Facility, Yokosuka, Japan, and the facility that repairs the Navy amphibious ships that support the Marine Corps at Sasebo, Japan--to help it develop estimated voyage repair forecasts for Guam. DOD guidance requires that maintenance programs be clearly linked to strategic and contingency planning, and that a determination be made as to whether a specific industrial capability is required to meet DOD needs. This guidance calls for the Navy to follow industrial-based planning to ensure that required ship repair capabilities will be available when needed.
Specifically, DOD Directive 5000.60, "Defense Industrial Capabilities Assessments," requires that planning occur when a known or projected problem exists, or when there is a substantial risk that an essential capability may be lost. Such problems can consist of inadequate industrial capacity operated by a DOD entity or similar inadequate capabilities in the private sector. Estimation of requirements is a prerequisite for performing an assessment of the viability of each option available for addressing those requirements in a cost-effective and timely fashion. Although some information is available for developing estimated requirements, the Navy has not identified voyage surface ship repair requirements for 2012 and beyond for vessels operating near Guam. Without developing estimated repair requirements the Navy cannot determine the best alternative among various potential sources of repair or support planning to provide needed maintenance capabilities. While the Navy has not planned for meeting voyage repair requirements on Guam for 2012 and beyond, it has identified options for providing repairs, although some require long lead times to implement. However, by not performing timely planning the Navy risks not having a repair capability in place when needed, and as time passes, limits the options that may be available to it. Navy officials have stated that they do not intend to develop plans for a voyage ship repair capability on Guam until preparations for the 2012 budget cycle begin. However, in response to our inquiries, the Navy identified four potential options for meeting future voyage ship repair requirements on Guam and acknowledged that it cannot avoid doing some voyage repairs there. First, the Navy could use existing Navy-owned voyage repair capabilities in Guam, though these face certain limitations in their ability to take on additional voyage repairs.
Second, fly-away teams could be brought in from Navy-owned shipyards in the United States, and these teams would rely on facilities and infrastructure in place on Guam. Third, the Navy could develop a new repair facility, which would entail significant planning, repair of existing infrastructure, and possibly new military construction. Fourth, the Navy could contract out the work to either or both of the existing private ship repair providers or to any other contractor that might choose to locate at Guam. DOD guidance requires that a determination be made as to whether a specific industrial capability is required to meet DOD needs and that a selection be made for meeting those needs. Moreover, Navy officials acknowledge that if the option to expand existing Navy repair capabilities on Guam or establish new Navy repair capabilities were chosen, early identification of mission requirements would be needed to facilitate planning and budgeting of new or expanded Navy construction to ensure that a fully functioning Navy-owned ship repair facility would be operational in 2012. Existing Navy-owned capabilities in Guam are inadequate to address current voyage repair requirements for surface vessels and are unable to address additional voyage repair requirements without increased capabilities and capacity. First, the primary mission for the USS Frank Cable is to provide maintenance and support for the three fast attack submarines home-ported on Guam, and to address the needs of visiting submarines. At the time of our review, the submarine tender's repair crew was operating at full capacity in meeting its primary mission. As a result, the Navy contracted with Guam Shipyard to complete $1.2 million in voyage repairs on submarines between fiscal years 2002 and 2007, mostly to provide additional manpower to augment the submarine tender's repair crew.
Although the Navy has not developed voyage repair plans for surface ships, it has developed some plans for the provision of voyage and other repairs for submarines. For example, current plans will require the USS Frank Cable to provide support for the new guided missile submarine that will visit Guam for rotational crewing. Additionally, the Navy plans to use part of the repair crew from the USS Frank Cable to perform repair services for the submarine tender USS Emory S. Land, which will be stationed at Diego Garcia in the British Indian Ocean Territories. The repair crew on the USS Frank Cable will be increased by about 170 personnel to enable about 160 to rotate for workload assignments on the USS Emory S. Land, leaving no more than 10 repair personnel to take on additional work. As a result, according to Navy officials, it is unlikely that the USS Frank Cable could provide voyage repairs for surface vessels in Guam in the future without adding capability and capacity beyond the 170 additional personnel already planned. Second, the Emergent Repair Facility on Guam that supports submarines when the USS Frank Cable is away from port lacks the capability to meet surface voyage repair requirements. This facility is used by a stay-behind repair crew from the USS Frank Cable when that ship is away from its home port. According to Navy officials, the Emergent Repair Facility is not adequate even for its current role. Officials estimated that the Navy would need about $21 million to expand and equip the facility just to meet its current submarine mission requirements, without taking on additional voyage repairs for surface ships. For example, the facility has no communications capabilities; repair personnel must use personal cellular telephones for any necessary communications. Navy officials acknowledge that it would have to be expanded to meet any future surface voyage repair requirements. 
Moreover, larger vessels may be unable to approach the Emergent Repair Facility without conducting dredging operations and completing pier improvements. As a result the Emergent Repair Facility cannot be used to provide voyage repairs for surface vessels without considerable planning and capital investment. The effective use of fly-away teams from Navy-owned shipyards in the continental United States to perform voyage repairs at Guam depends on the ability of U.S. Naval shipyards to provide personnel to perform repairs without negatively impacting their own ongoing work, as well as on the adequacy of infrastructure and facilities available for their use in Guam. Further, U.S. Naval shipyards have not been provided with voyage repair estimates to conduct workload planning and determine their capacity to provide fly-away teams to Guam. The use of fly-away teams may not be practicable or cost-effective for performing large amounts of voyage repair work, because Navy-owned shipyards in the United States that provide fly-away teams are currently operating beyond their target capacities, although they anticipate having excess capacity in the coming years. However, deploying fly-away teams to Guam to meet large amounts of voyage repair requirements without advance planning could undermine scheduled maintenance at the U.S. Naval shipyards. Fly-away teams also need sufficient infrastructure and equipment at the location at which they will conduct voyage repairs. Because the USS Frank Cable and the Emergent Repair Facility both face limitations, fly-away teams that deploy to Guam cannot be assured that these facilities would be available to provide needed infrastructure or equipment. Without more clearly defined repair requirements and further examination of equipment and personnel necessary to meet those requirements, the viability of using fly-away teams to provide future voyage repairs is uncertain.
Building a new Navy depot-level repair capability would require years of planning and additional infrastructure, equipment, personnel, and funding. If the lease on the property at the former Naval Ship Repair Facility, Guam, is allowed to expire, establishing a new Navy-owned ship repair capability at that location would require the Navy to address infrastructure, equipment, and personnel requirements to create the capability needed to meet surface voyage repair requirements on Guam. The Navy would have to determine what capability is needed and then take action to acquire the equipment to provide that capability. Furthermore, infrastructure repairs may be needed to support work on Navy vessels. For example, according to Navy officials the typhoon moorings at Guam Shipyard may require repair. A new Navy depot-level ship repair capability in Guam would also require staffing by military and civilian personnel. Without a determination of equipment, infrastructure, personnel, and funding requirements for providing new surface ship repair capabilities, the Navy cannot know whether establishing a new ship repair capability in Guam is a viable option. Additionally, implementing this option would also require significant lead time. The Navy has not determined the extent to which it will rely on private-sector ship repair providers beyond 2012, when the lease on Navy property occupied by Guam Shipyard expires. While it is unclear what kind of private sector capability will be available beyond 2012, both private ship repair providers operating in Guam have been awarded 1-year contracts by MSC to provide selected voyage repairs to surface vessels operating at or near Guam for fiscal year 2008. According to MSC officials, new contracts are to be executed by the end of fiscal year 2008, and this contracting arrangement will include option years that address voyage repair requirements for MSC ships through 2012.
Guam Shipyard operates on Navy property located within Naval Base, Guam. Gulf Copper operates from approximately 700 feet of pier space at the commercial port opposite Navy property on Apra Harbor. It is possible that additional private ship repair providers may express interest in performing voyage repairs at Guam in the future, and that Guam Shipyard may continue operations at another location in Guam beyond 2012 when its lease on U.S. Navy property expires. Figure 1 depicts the physical locations of Guam Shipyard and Gulf Copper. The Joint Depot Maintenance Program provides guidance on selecting sources of maintenance and repair, and a DOD Handbook entitled Assessing Defense Industrial Capabilities provides a framework for coordinating analysis and determining the most cost- and time-effective options for meeting DOD needs. If the option selected by the Navy for providing ship repairs in Guam requires military construction, as may be the case if the Navy chooses to expand existing Navy-owned capabilities or to establish new Navy-owned capabilities, the military construction requirements would have to be included in the budgeting process for fiscal year 2010 in order for new facilities to be ready by October 2012. However, Navy officials have stated that they do not intend to develop plans for a voyage ship repair capability on Guam until preparations for the 2012 budget cycle begin. Without performing an assessment of the viability of each of the options for voyage repairs in a timely manner to support planning and budgeting of critical tasks, the Navy risks not having adequate voyage repair capabilities in place when needed to support operations in the Pacific Ocean, and as time passes, limits the options that could be available to it by 2012. The Navy has not effectively identified voyage repair requirements that are a prerequisite for selecting among the options to provide such capabilities on Guam. 
While the Navy does not fully know its voyage surface ship repair requirements near Guam for 2012 and beyond, it does possess data that could be used to estimate requirements. Namely, it could use existing ship repair experiences, projected requirements identified in the 2006 Quadrennial Defense Review, and information about repair capabilities maintained at other strategic locations to identify its ship repair requirements for Guam in the near term and to aid in developing a baseline forecast of repair capabilities it will need for 2012 and beyond. Moreover, the requirements determination process is a precursor to planning for the provision of ship repair capabilities and selecting an option to provide those capabilities, since a certain amount of lead time would be required to implement some of the options. Additionally, a decision about future industrial repair requirements should be an integral part of ongoing Guam infrastructure planning to support the transfer of Marines to Guam from Japan. However, the Navy has not developed such plans, nor has it assessed the challenges associated with the options identified, or selected an option to provide ship repair capabilities on Guam. Without identifying requirements, performing a risk-based assessment of the viability and costs of each of the options, selecting the best option or combination of options available, and then developing and implementing an action plan to address any challenges associated with the option or options selected, the Navy lacks reasonable assurance that it will have sufficient time to prepare the best option or combination of options for meeting future surface ship repair requirements on Guam beyond 2012. 
To ensure that adequate voyage repair capabilities are available for ships operating near Guam, and recognizing the lead time required to implement options, we recommend that the Secretary of Defense direct the Secretary of the Navy to estimate requirements for repairs for surface vessels operating at or near Guam based on data determined to be most appropriate by the Secretary of the Navy; assess the benefits and limitations of each of the options for providing repairs to ships operating near Guam, and perform an assessment of anticipated costs and risks associated with each option; and select the best option or combination of options for providing repair capabilities to support surface ships operating near Guam, and develop a plan and schedule for implementing a course of action to ensure that the required ship repair capability will be available by October 2012. In a written response to a draft of this report, DOD concurred with all of our recommendations with comments. The department's comments are reprinted in their entirety in appendix II. The department also provided several technical comments that have been incorporated as appropriate. With regard to our first recommendation for an assessment of requirements for repairs for surface vessels operating at or near Guam, the Navy responded that it has a methodology to determine annual emergent repair requirements by ship class and fleet--which includes voyage repair execution history as a subset--and that this requirement will be included in the future years defense plan, and that no further direction is necessary. While we acknowledge that the Navy looks at overall maintenance requirements as a part of the annual budget process, this process does not provide a detailed listing of specific capabilities required for voyage repairs at strategic locations, such as Guam beyond 2012. 
Given its unique location and the changing circumstances that will affect voyage repair requirements in and around that location, we continue to believe that a specific assessment of requirements for providing surface vessel voyage repairs in Guam represents a necessary baseline for planning for the provision of ship repair capabilities beyond 2012 and for the selection of an option or combination of options to provide those capabilities. In concurring with our second recommendation regarding the need for an assessment of the benefits and limitations of each of the options for providing repairs to ships operating near Guam, the department's response was that the Navy has already identified a plan for providing repair capabilities for ships operating near Guam and that the Navy has determined that establishing a new repair facility on Guam is not viable since the expenditure of funds to do this is not necessary. The department's response also noted that the Navy is already developing a military construction project to expand the existing repair capabilities on Guam in fiscal year 2010, that the Navy intends to continue the practice of utilizing repair teams from U.S. Naval shipyards and private shipyards as needed, and that the Navy intends to continue the practice of contracting voyage repair work to one or more private ship repair providers. The Navy may have determined that a new repair capability on Guam is not necessary, but much of the existing repair equipment currently used to support voyage repair on surface vessels--including a floating dry dock, a floating crane, and industrial equipment--is owned by Guam Shipyard and could potentially be removed at the conclusion of the existing lease if a new lease were not negotiated.
We continue to believe that it is essential that the department determine whether it will have continued need for expensive capital equipment such as the floating dry dock and crane, and whether the capability provided by such equipment will be available from the private sector. Finally, it is commendable that the Navy has a plan for providing ship repair capabilities on Guam and is moving forward to implement it. However, at the time of our exit briefing with the Navy in January, the Navy did not inform us of this plan. Moreover, Navy officials have told us that this plan was developed in February, subsequent to our exit briefing and in response to our recommendations. In concurring with our third recommendation regarding selection of the best option or combination of options for providing repair capabilities to support surface ships operating near Guam, the department stated again that the Navy's plan for providing repair capabilities to support surface ships operating near Guam has already been determined, and that direction from the Secretary of Defense to the Secretary of the Navy is not needed. The response also stated that committing the Navy to a lease agreement in 2008 for a capability in 2012 is premature. While we agree that committing the Navy to a lease in 2008 for a capability required in 2012 is premature, it is not premature to decide whether or not there will be an industrial activity--either owned and operated by the government or leased by a private contractor--within the Navy installation. The department stated in its response that the Navy intends to use private- sector capability, but it did not state whether that would be on the Navy installation on Guam. 
Given the detailed planning that is required to support the planned buildup of military personnel expected over the next few years in Guam, we believe it is essential that the Navy determine whether or not it expects to continue to have an industrial activity operating as a part of the Guam Master Plan, and that it determine what acreage this activity would occupy. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Additional contacts and staff acknowledgments are provided in appendix III. To determine the extent to which the Navy has identified future ship repair requirements for ships operating in the Guam area and assessed options to address those requirements, we reviewed documents related to ship maintenance. In addition, we interviewed officials responsible for force structure planning, contracting for repairs on vessels belonging to the U.S. Navy and Military Sealift Command, and performing repairs on vessels belonging to the Navy and Military Sealift Command on Guam, as well as at related organizations in Hawaii and on the west coast of the United States.
Specifically, we interviewed officials and analyzed documents related to ship repair requirements and the options proposed to meet them at the offices of the Chief of Naval Operations; the Commander, Pacific Fleet; the Commander, Marine Forces Pacific; the Commander, Naval Sea Systems Command; the Commander, Naval Forces Marianas; the Chief of Naval Installations; the Commander, Military Sealift Command; the Commander, Naval Facilities Pacific; and the Guam Economic Development and Commerce Authority. We also performed work at the offices of several private ship repair providers to determine the extent to which private-sector repair capabilities may be available on Guam in the future. We also examined Department of Defense (DOD) policy and Joint Guidance for providing maintenance and repair of DOD assets afloat. We performed our review from July 2007 to January 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Julia Denman, Assistant Director; Jeffrey Kans; Julia C. Matta; John E. Trubey; and Cheryl Weissman made major contributions to this report.
Unscheduled ship maintenance, known as voyage repairs, is a high priority for the U.S. Navy. Such repairs are sometimes beyond the capability of the ship's crew to perform; cannot be deferred; and must be made at a remote location. After the 1995 Base Realignment and Closure Commission recommended closing the former Naval Ship Repair Facility, Guam, the Navy leased the property at that facility to the Guam Economic Development and Commerce Authority, which sub-leased the property to a private shipyard. DOD has since begun planning for a military buildup on Guam. In January 2007 the Navy recommended allowing the private shipyard's lease on Navy land to expire in 2012. Consequently, the House Armed Services Committee asked GAO to determine the extent to which the Navy has (1) identified future ship repair requirements at Guam, and (2) identified and assessed options to address those requirements. GAO reviewed documents related to ship maintenance and interviewed officials affiliated with private contractors, the Guam government, the Marine Corps, Military Sealift Command, and the Navy in conducting this review. The Navy has not identified voyage surface ship repair requirements for 2012 and beyond for vessels operating near Guam, although some information is available on which to base estimated requirements for planning. Navy officials stated that they cannot estimate such requirements because the Navy expects to change its force structure, the Marine Corps has not finalized its plans for any additional vessels associated with the buildup, and Military Sealift Command expects changes to its force structure at Guam. Although the Navy, Marine Corps, and Military Sealift Command have not made final force structure decisions or operational plans for vessels operating at or near Guam, information is available to support an estimation of ship repair requirements as part of the multiyear planning and budgeting process. 
Specifically, the Navy (1) knows the history of voyage repairs conducted on Guam; (2) can identify vessels likely to operate near Guam based on planned force structure realignments in the 2006 Quadrennial Defense Review; and (3) can identify ship repair capabilities available at other strategic locations in the Pacific area, including Yokosuka, Japan. Developing requirements is a prerequisite for planning, and without developing estimated repair requirements the Navy cannot adequately evaluate options for meeting them. Navy officials identified potential options for providing repairs in Guam, but have not fully assessed their viability or identified time-critical planning tasks. According to Navy officials, once the Navy identifies voyage ship repair requirements for the Guam area, they will choose from four options or a combination of options for providing voyage repairs. First, the Navy could try to expand existing organic repair capabilities to conduct voyage repairs. However, the existing ship maintenance capabilities and facilities have little excess capacity without augmentation, limiting their ability to perform additional work. Second, the Navy could rely on repair teams flown in from naval shipyards in the United States. Third, the Navy could build a new Navy ship repair facility, though that could require years of planning and new funding. Fourth, the Navy could contract out work to either or both of the private ship repair providers now operating in Guam, or to any other private ship repair facility that might choose to locate in Guam. Three of these options might require building new facilities or expanding existing facilities. Officials said they would not begin planning until preparations begin for submissions to the President's budget for fiscal year 2012. However, lead time is required to perform planning tasks necessary to provide repair capabilities from the Navy's suggested options. 
Without assessing the viability of each option for voyage repairs in a timely manner, the Navy increases the risk that voyage repair capabilities for ships operating in the Pacific may not be available when needed, potentially undermining ships' ability to accomplish their missions.
Under Medicaid managed care, states contract with health plans and prospectively pay the plans a fixed monthly rate per enrollee to provide or arrange for most health services. These contracts are known as "risk" contracts because plans assume the risk for the cost of providing covered services. States' processes for developing rates may vary in a number of ways, including the type and time frames of data they use as the basis for setting rates, referred to as the base-year data, and what approach they use to negotiate rates with health plans. After rates are developed, an actuary certifies the rates as actuarially sound for a defined period of time, typically 1 year. In order to receive federal funds for its managed care program, a state is required to submit its rate-setting methodology and rates to CMS for review and approval. This review, completed by CMS regional office staff, is designed to ensure a state complies with federal regulatory requirements for setting actuarially sound rates. CMS published a final rule on June 14, 2002, outlining the agency's regulatory requirements for actuarially sound rates. These requirements largely focus on the process states must use in setting rates. For example, the regulations require states to document their rate-setting methodology and include an actuarial certification of rates. In addition, the regulations include a requirement that when states use data from health plans as the basis for rates they must have plan executives certify the accuracy and completeness of their data. The regulations do not include standards for the type, amount, or age of the data that states may use in setting rates. The regulations also do not include standards for the reasonableness or adequacy of rates. In the preamble to the final rule, CMS noted that health plans were better able to determine the reasonableness and adequacy of rates when deciding whether to contract with a state.
In July 2003, CMS finalized a detailed checklist that regional office staff could use when reviewing states' rate-setting submissions for compliance with the actuarial soundness requirements and that states and states' actuaries could use when developing rates. The checklist includes citations to, and a description of, each regulatory requirement; guidance on what constitutes state compliance with the requirement; and spaces for the CMS official to check whether each requirement was met and cite evidence from the state's submission for compliance with the requirement. The checklist also provides guidance on the level of review that should occur for different types of rate changes. When the state is developing a new rate, or using new actuarial techniques or data to change previously approved rates, the checklist indicates a full review should be done, which entails reviewing the state's submission for compliance with all of the requirements covered in the checklist. For adjustments to rates that were previously approved as meeting the regulations, the checklist indicates a partial review should be done; a partial review focuses on a few key requirements in the checklist, such as ensuring that the state has included a certification of rates from a qualified actuary. As of June 2010, CMS was in the process of revising the checklist. One of the planned changes was to emphasize the need for more complete encounter data because CMS officials indicated that the agency has determined that encounter data that do not include pricing information are not sufficient for setting rates. CMS expects to complete the checklist revisions by November 2010. (See table 1 for a summary of the sections in CMS's checklist.) According to CMS officials, the regional officials responsible for conducting rate-setting reviews may have a financial background, but are not actuaries. 
Officials also noted that CMS's OACT, which provides actuarial advice to other offices within CMS, is generally not involved with Medicaid rate-setting reviews. However, they indicated that when the CMS officials responsible for rate-setting reviews have concerns with a state's rate-setting methodology and cannot resolve those concerns with the state, they can contact OACT to request an independent review. CMS's regulations require that actuarially sound rates be developed in accordance with generally accepted actuarial principles and practices. There is no Actuarial Standard of Practice (ASOP) that applies to actuarial work performed to comply with CMS's regulations. However, in 2005, the American Academy of Actuaries published a practice note that provides nonbinding guidance on certifying Medicaid managed care rates. The practice note includes a proposed definition for "actuarial soundness," as there was no other working definition of the term that would be relevant to the actuary's role in certifying Medicaid managed care rates. Under the definition, rates are actuarially sound if, for the period of time covered by the certification, projected premiums provide for all "reasonable, appropriate, and attainable costs;" also under the definition, rates do not have to encompass all possible costs that any health plan might incur. The note emphasizes that the definition only applies to the certification of Medicaid managed care rates, and that it differs from the definition used when certifying a health plan's rates. The practice note also provides information on the actuary's role in assessing the quality of data used to set rates and refers the actuary to the ASOP on data quality for further guidance. The practice note explains that if the actuary is involved in developing the rate, then the actuary would consider all available data, including FFS data, Medicaid managed care encounter data, and Medicaid managed care financial reports and financial statements. 
The actuary would typically compare data sources for reasonableness and check for material differences when determining the preferred source or sources for the base-year data. The ASOP on data quality clarifies that while actuaries should generally review the data for reasonableness and consistency, they are not required to audit the data. The ASOP also explains that the accuracy and completeness of the data are the responsibility of those that provided them, namely the state or health plans. CMS has been inconsistent in its review of states' rate setting. In the six CMS regional offices we reviewed, CMS had not reviewed one state's rate setting for compliance with the actuarial soundness requirements and had not conducted a full review for another. We also identified a number of other inconsistencies in CMS's review of states' compliance with the actuarial soundness requirements. Variation in CMS regional offices' practices contributed to these inconsistencies in oversight. In the six CMS regional offices we reviewed, we found inconsistencies in CMS's review of states' rate setting, including significant gaps in the agency's oversight of two states' compliance with the actuarial soundness requirements. First, CMS had not reviewed one state's (Tennessee) rate setting for compliance with the actuarial soundness requirements or approved the state's rates. In 2007, Tennessee began transitioning its managed care program, which included all of the state's approximately 1 million Medicaid enrollees, to risk contracts that were subject to the actuarial soundness requirements. Since moving to risk contracts, the state submitted at least two actuarial reports to CMS's Atlanta regional office indicating the program change, but these documents did not trigger a CMS review. These reports did not include actuarial certifications, and Tennessee officials confirmed that the state's rates had not been certified by an actuary, which is a regulatory requirement.
As a result, according to CMS officials, Tennessee received, and is continuing to receive, approximately $5 billion a year in federal funds for rates that we determined had not been certified by an actuary or assessed by CMS for compliance with the requirements. Based on issues we raised during our review, CMS determined that Tennessee was not in compliance with the actuarial soundness requirements and, as of June 2010, was working to bring the state into compliance. Second, while CMS officials said that all states should have had a full review of rate setting after the actuarial soundness requirements became effective in August 2002, it appeared that CMS officials had not completed a full rate-setting review for Nebraska. CMS had no documentation of its last full review of Nebraska's rate setting, but officials believed that the last full review was completed in 2002. According to Nebraska officials, the state last made significant changes to its rate setting for the state fiscal year beginning in 2001, which according to criteria in CMS's checklist would have triggered a full CMS review. Based on what CMS and Nebraska officials told us, CMS's last full review was likely done before the actuarial requirements became effective. As a result, Nebraska received federal funds for more than 7 years for rates that may not have been in compliance with all of the actuarial soundness requirements. In addition to these gaps in oversight, we found inconsistencies in the reviews CMS completed. In instances when CMS did a full rate-setting review, it was unclear whether CMS consistently ensured that states met all of the actuarial soundness requirements. We found evidence that the rates in all 28 of the CMS files we reviewed were certified by a member of the American Academy of Actuaries, as is required by the regulations. 
However, the extent to which CMS ensured state compliance with other aspects of the actuarial soundness requirements--such as the requirement that rates be based only on services covered under the state's Medicaid plan or costs related to providing these services--was unclear. For example, in nearly a third of the files we reviewed, or 8 of 28 files, CMS officials did not use the rate-setting checklist to document their review; therefore we could not determine whether CMS ensured that states were in compliance with all of the requirements. In 17 of the 20 remaining files where the CMS official used the checklist, the official cited evidence of the state's compliance for some requirements, but not others. When officials did cite evidence, the evidence did not always appear to meet the requirements. For example, one of the requirements in the regulations is that states provide an assurance that rates are based only on services covered under the state's Medicaid plan or costs related to providing these services. Of the 19 files where CMS officials cited evidence of such an assurance, we were unable to locate the assurance in 2 of the files. Another requirement is that states include a comparison of expenditures under the previous year's rates to those projected under the proposed rates. In the 15 files where CMS cited evidence of the comparison of expenditures, we did not find a comparison that appeared to meet the requirement in 2 of the files. See table 2 for more information on the extent to which evidence was cited in the CMS files we reviewed. Finally, CMS did not consistently review states' rate setting for compliance with the actuarial soundness requirements prior to the new rates being implemented. In 20 of 28 files we reviewed, we found that CMS completed its review of rate setting after the state had begun implementing the proposed rates; that is, after the effective date of the proposed rates. 
CMS officials told us that a variety of factors could delay the approval of rates, including states submitting a request for approval after implementing the rates. CMS officials further explained that they did not consider a state to be out of compliance with the actuarial soundness requirements until the end of the federal fiscal year quarter in which the state implemented the unapproved rates. Of the 20 files where CMS approved rates after the state implemented them, 13 had rates that were approved more than 3 months after the state implemented the rates, which means that the rates were approved after the end of the quarter in which they were implemented. CMS officials confirmed that the agency generally continued to provide federal funds for the states' managed care contracts even in cases where the rates were not approved by the end of the quarter. According to CMS officials, if the state failed to gain CMS approval or had to lower the rates to achieve approval, then CMS would reduce future federal reimbursement to account for federal funds paid to states for rates that had not been approved. However, when CMS reviews states' rate setting after states have begun implementing the rates, its review may result in changes to states' rate-setting methodology; this could lead to retroactive changes, including reductions, in health plans' rates. The possibility of rates being decreased retroactively may make it difficult for health plans to assess the reasonableness and adequacy of rates when contracting with states, an assessment that CMS relies on as a check of states' rate setting. Variation in a number of regional office practices contributed to the inconsistency in CMS's oversight of states' rate setting.
Regional offices varied in the extent to which they tracked state compliance with the requirements, the extent to which they withheld federal funds, their criteria for doing full and partial reviews of rate setting, and what they considered to be sufficient evidence for meeting the requirements. Tracking compliance. Officials from all of the regional offices we spoke with told us that they tracked basic information regarding the status of the CMS review process, such as when a state's submission was received and when CMS's approval letter was issued. However, based on our interviews with CMS regional officials, we found that four of the six regional offices did not track information that would allow them to identify states that were not in compliance with actuarial soundness requirements, such as the beginning and end dates of the rates specified by the actuary in the certification. Officials from the remaining two regional offices, Kansas City and San Francisco, told us they tracked the effective dates of approved rates. Withholding funds. There was also variation among regional offices in the conditions that had to be met in order for states to receive federal funds. For example, officials from the San Francisco regional office told us that they did not release federal funds to states until the states' managed care contract and rates had been approved. Officials said that the office had withheld funds in several cases until the state demonstrated compliance with the requirements. For example, from October 2008 through April 2010, the San Francisco regional office reported withholding a total of $302.7 million in federal funding for Hawaii because the state's contracts and rates did not meet the actuarial soundness requirements. In contrast, officials we interviewed from the Atlanta regional office said that the office would release federal funds to a state even if the state's rates had not yet been approved by CMS. Criteria for full and partial reviews. 
CMS regional officials had different interpretations of when full versus partial reviews of rate setting were necessary. For example, officials from the New York regional office told us that they completed a full review for each rate-setting submission received, regardless of the changes made to rates or rate setting. In contrast, a Kansas City regional office official told us that she completed a partial review in cases where the state adjusted the rates but had not changed the data used as the basis for rates. Sufficient evidence for compliance. Regional office officials varied in how they determined sufficient evidence for state compliance with certain requirements. For example, for the requirement that rates are for Medicaid-eligible individuals covered under the contract, officials from the San Francisco regional office told us that, while they had verified information provided by states on the populations covered under the rates, they mainly looked for an assurance from the state that rates were for eligible populations. In contrast, a Kansas City regional office official explained that an assurance from the state alone would not be sufficient. Rather, the official would require evidence of the eligible populations included in, and excluded from, the rate-setting methodology. Other variations. Variations in other regional office practices may also have contributed to the inconsistency in CMS oversight. For example, management oversight of rate-setting reviews in regional offices varied. A Kansas City regional official who reviews states' rate setting told us that, prior to approving states' rates, she submitted memoranda outlining the impact of states' proposed rate changes and the rationale for recommending approval of the package to her regional office managers. 
In contrast, officials from the New York regional office told us that most officials responsible for reviewing and approving states' rate setting worked independently and managers did not review a completed checklist. Other variations in practices that may have had an effect on CMS oversight included differences in training and standard procedures for conducting and documenting reviews. As a result of our review, CMS took a number of steps that may address some of the variation in regional office practices. For example, officials from two regional offices told us that their offices were implementing new standard procedures to address inconsistencies in reviews identified through the course of GAO's work, and in December 2009 CMS began requiring that regional offices use the checklist in reviewing all states' rate-setting submissions and assure the central office of its use before approving a state's rates. However, as we reported above, variations existed even when the checklist was used, such as in the extent to which CMS officials using the checklist cited evidence of compliance for each of the actuarial soundness requirements. CMS's efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans, which did not provide the agency with sufficient information to ensure data quality. CMS regulations require states to describe the data used as the basis for rates and provide assurances from their actuaries that the data were appropriate for rate setting. The regulations also specify that states using data submitted by the health plans as the basis for rates must require executives from the health plans to attest that the data are accurate, complete, and truthful. The regulations do not include requirements for the type, amount, or age of data or standards for the reasonableness or adequacy of rates.
Additionally, CMS does not require states to submit documentation about the quality of the data used to set rates. In our interviews with regional office officials, we found that, when reviewing states' descriptions of the data used for rate setting, CMS officials focused primarily on ensuring the appropriateness of the data used by states to set rates rather than their reliability. This included reviewing the specific services and populations included in the base-year data or checking for assurances of appropriateness from the states' actuaries. CMS officials noted that if they had concerns with the quality of a state's data they would ask the state questions. None of the officials, however, reported taking any action beyond asking questions. With limited information on the quality of data used to set rates, CMS cannot ensure that states' managed care rates are appropriate and risks misspending billions of federal and state dollars. Actuarial certification does not ensure that the data used to set rates are reliable. In particular, 9 of the 28 files we reviewed included a disclaimer in the actuary's certification that if the data used were incomplete or inaccurate then the rates would need to be revised. Additionally, in more than half of the 28 files we reviewed, the actuaries noted that they did not audit or independently verify the data and relied on the state or health plans to ensure that the data were accurate and complete. Officials from three of the five health plans we spoke with raised concerns about the completeness of the encounter data used by states to set rates. Additionally, state auditors in Washington have raised concerns about the lack of monitoring of the accuracy of data used for rate setting. The auditors found that the state did not verify the accuracy of the data used as the basis for Medicaid managed care rates in fiscal years 2003 through 2007. 
The state auditor's report from fiscal year 2007 concluded that the risk of paying health plans inflated rates increased when the accuracy of data used to establish rates could not be reasonably assumed to be correct. States have information on the quality of data used for rate setting-- information that CMS could obtain. State officials we spoke with reported having information on, and efforts intended to ensure, the quality of the data used to set rates. For example, New Jersey officials told us that the state tested the reliability and accuracy of the health plan financial data used to set rates against encounter data and required health plans to have an independent auditor review selected portions of the financial data. Additionally, Arizona officials indicated that the state periodically completes validation studies of the state's encounter data in which they traced a sample of the encounters back to individuals' medical records. State officials indicated that CMS used to require the state to submit results of these studies as a condition of operating its managed care program. However, given the state's extensive experience with managed care, CMS no longer requires the state to submit these studies for all participating health plans. (See app. III for a summary of selected states' efforts intended to ensure data quality.) Without requiring and reviewing information on states' data quality efforts, CMS cannot ensure that these data are of sufficient quality to be used for setting rates. In addition to information from states, CMS conducts audits that could have provided CMS officials relevant information about the quality of the data used to set rates. For example, when describing the state's efforts to ensure the quality of data used to set rates, officials from South Carolina noted that CMS periodically reviews the state's FFS data through the Payment Error Rate Measurement (PERM) program. 
Error rates calculated using FFS and encounter data through the PERM program could provide CMS with insights regarding the quality of the data that some states use to set rates. In CMS's rate-setting review file for South Carolina, however, there was no discussion of PERM results by either the state or CMS. CMS central office officials confirmed that regional office staff do not consider the results of data studies, such as state validation or PERM program reports, when reviewing states' rate-setting submissions.

CMS also could have conducted or required periodic audits of the data used to set rates. In Medicare Advantage, which is Medicare's managed care program, CMS is required to conduct annual audits of the financial records of at least one-third of the organizations participating in the program. For Medicaid, however, CMS had not conducted any recent audits or studies of states' rate setting, including the quality of data used. Specifically, officials in all six of the regional offices we spoke with told us that they had not performed any audits or special studies of states' rate setting. Officials from CMS's central office were also not aware of any recent audits or studies done by the four other regional offices. In addition, officials from CMS's central office told us that they could only recall one instance, in the nearly 8 years since the regulations were issued, where OACT arranged for an independent assessment of a state's rate setting; that assessment was done more than 2 years ago.

The statutory and regulatory requirements for actuarially sound rates are key safeguards in efforts to ensure that federal spending for Medicaid managed care programs is appropriate, which could help avoid significant overpayments and reduce incentives to underserve or deny enrollees' access to needed care.
CMS, however, has been inconsistent in ensuring that states are complying with the actuarial soundness requirements and does not have sufficient efforts in place to ensure that states are using reliable data to set rates. During the course of our work, CMS took steps to address some of the variation in regional office practices that contributed to inconsistencies in overseeing state compliance, such as requiring regional office officials to use the checklist in reviewing all states' rate-setting submissions. While these are positive steps, they do not address all of the variations in regional office practices that contributed to inconsistencies in CMS's oversight of rate setting. For example, these steps do not address variations in tracking state compliance, which may have led to CMS's failure to review Tennessee's rates for compliance with the actuarial soundness requirements. Additionally, the steps taken do not address the variation in what evidence CMS officials considered sufficient for compliance, how officials used the checklist to document their reviews, and what conditions were necessary for federal funds to be released.

CMS also does not have sufficient efforts in place to ensure the quality of the data states used to set rates, relying on assurances from states without considering any other available information on the quality of the data used. By relying on assurances alone, the agency risks reimbursing states for rates that may be inflated or inadequate. As a result of the weaknesses in CMS's oversight, billions of dollars in federal funds were paid to one state for rates that were not certified by an actuary, and billions more may be at risk of being paid to other states for rates that are not in compliance with the actuarial soundness requirements or are based on inappropriate and unreliable data.
Given the complexity of overseeing states' unique and varied Medicaid programs, it is appropriate that CMS would allow for flexibility in states' rate setting and would expect states to have the primary responsibility for ensuring the quality of the data used to set rates. However, CMS needs to ensure that all states' rate setting complies with all of the actuarial soundness requirements and needs to have safeguards in place to ensure that states' data quality efforts are sufficient. Improvements to CMS's oversight of states' rate setting will become increasingly important as coverage under Medicaid expands to new populations that states may not have experience serving, and for which states may have no data on which to base rates.

To improve oversight of states' Medicaid managed care rate setting, we recommend that the Administrator of CMS take three actions.

To improve consistency in the oversight of states' compliance with the Medicaid managed care actuarial soundness requirements, we recommend that the Administrator of CMS:

- implement a mechanism for tracking state compliance, including tracking the effective dates of approved rates; and
- clarify guidance for CMS officials on conducting rate-setting reviews. Areas for clarification could include identifying what evidence is sufficient to demonstrate state compliance with the requirements, the conditions necessary for federal funds to be released, and how officials should document their reviews.

To better ensure the quality of the data states use in setting Medicaid managed care rates, we recommend that the Administrator of CMS make use of information on data quality in overseeing states' rate setting.
CMS could, among other things:

- require states to provide CMS with a description of the actions taken to ensure the quality of the data used in setting rates and the results of those actions;
- consider relevant audits and studies of data quality done by others when reviewing rate setting; and
- conduct or require periodic audits or studies of the data states use to set rates.

We provided a draft of this report to HHS for its review and comment. HHS concurred with all three of our recommendations, and commented that it appreciated our efforts to highlight improvements that CMS can make in its oversight of states' compliance with Medicaid managed care actuarial soundness requirements, as well as its focus on the quality of data used to set managed care rates. Moreover, HHS noted that CMS has identified many of the same issues. (See app. IV for a copy of HHS's comments.)

HHS agreed with our two recommendations related to improving the consistency of CMS's oversight, namely that CMS implement a mechanism for tracking state compliance with the actuarial soundness requirements and clarify guidance for CMS officials on conducting rate-setting reviews. HHS noted that CMS has established a managed care oversight team to develop and implement a number of improvements in its managed care oversight, some of which will address our recommendations. These improvements included CMS's plans to develop standard operating protocols for the review and approval of Medicaid managed care rates and provide comprehensive training to CMS staff on all aspects of the new process and requirements. As CMS implements efforts aimed at improving its oversight, we reiterate the need to implement a mechanism for tracking state compliance with actuarial soundness requirements, including the effective dates of rates. HHS also agreed with our recommendation that CMS make use of information on data quality in overseeing states' rate setting.
In commenting on our finding related to CMS's limited efforts to ensure data quality, HHS noted that a number of requirements within PPACA will give CMS additional authority and responsibility for acquiring and utilizing Medicaid program data. In response to our recommendation, HHS noted that, as part of a broader effort to redesign how it collects Medicaid data, CMS will be setting standards for the type and frequency of managed care data submissions by states. HHS commented that with more complete data at its disposal, CMS will be able to better assess the underlying quality of data submissions and, thus, better execute its oversight and monitoring responsibilities. CMS should use these assessments and other available information when overseeing states' rate setting. Finally, HHS provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Administrator of CMS and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

To assess the Centers for Medicare & Medicaid Services' (CMS) oversight of states' compliance with the Medicaid managed care actuarial soundness requirements, we conducted a structured review of CMS files from 6 of the 10 CMS regional offices. We selected CMS regional offices that:

- represented at least 5 of the 10 CMS regional offices,
- collectively had oversight responsibility for at least 65 percent of the 34 states with comprehensive Medicaid managed care programs, and
- were geographically diverse and oversaw states with Medicaid managed care programs ranging in size.
The six regional offices that we selected for our review had oversight responsibility for 26 of the 34 states (or 76 percent) with comprehensive Medicaid managed care programs. According to information from CMS, these 26 states accounted for about 85 percent of Medicaid managed care enrollment nationally in 2008 and state program size ranged from 8 percent of Medicaid enrollees in Illinois to 100 percent in Tennessee. (See table 3.) We conducted a structured review of a selection of files from the six CMS regional offices. Specifically, we reviewed the files for CMS's rate-setting reviews of the most recently approved contract for each state's comprehensive managed care program, or, for states with multiyear contracts, the file for the most recent full review of rate setting completed as of October 31, 2009. Several states in the selected regions had multiple comprehensive managed care programs that had separate contracts and rate-setting processes each subject to CMS review and approval. For states that had two programs, we selected the file for the program CMS officials indicated was the largest, as defined by the number of enrollees and estimated expenditures. For the states that had more than two programs, we selected the files for the two largest programs. For 2 of the 26 states overseen by the six regional offices (Nebraska and Tennessee), CMS had not done a review that met our criteria, so we did not review a file for those states. In total, we reviewed 28 files, which covered 24 states, 4 of which had two or more programs for which CMS did separate reviews. (See table 4.) As part of our file review, we assessed the degree to which CMS documented its review. Specifically, we determined whether the CMS official completed CMS's checklist--a tool CMS developed for regional office staff to use when reviewing states' rate-setting submissions for compliance with the actuarial soundness requirements. 
For those files where the CMS official did not complete the checklist and provided no other documentation of the review, we did no further assessment of CMS's review. For the files where the CMS official completed the checklist, we assessed the extent to which CMS ensured that the state complied with the actuarial soundness requirements. To do this, we identified several requirements of the regulations, including that rates were certified by a qualified actuary, that rates were based on covered services for eligible individuals, and that the state documented any adjustments to the base year data. For these requirements, we assessed whether (1) CMS documented that the state met the requirement, (2) CMS cited evidence for the assessment that the state was in compliance, and (3) the cited evidence was consistent with the guidance in CMS's checklist. Additionally, as part of our review, we summarized descriptive elements of states' rate setting and rates. For example, we documented the types of data used as the basis for rates and how the state's rates changed from the prior year. To ensure the accuracy of the information collected as part of our structured review of the files, we conducted independent verifications of each review.

To describe state views of the Centers for Medicare & Medicaid Services' (CMS) oversight of state compliance with the Medicaid managed care actuarial soundness requirements and state efforts to ensure the quality of the data used to set rates, we selected 11 of the 34 states with comprehensive Medicaid managed care programs and interviewed officials from those states' programs. We selected states that:

- varied in the size of their Medicaid managed care programs, as defined by the numbers of managed care enrollees, the proportion of states' Medicaid population that were in managed care, and the number of MCOs participating in the program; and
- overlapped with the oversight responsibilities of the six selected CMS regional offices.
Table 5 provides information about the selected states. The 11 states we interviewed used a combination of approaches intended to ensure the quality of the data used in Medicaid managed care rate setting. These included front-end efforts intended to prevent errors in data reported by providers and health plans, reconciliation methods to help ensure the reliability and appropriateness of reported data, and in-depth reviews that identified and addressed issues of ongoing concern. See table 6 for a summary of the selected states' efforts intended to ensure data quality. In addition to the contact named above, Michelle Rosenberg, Assistant Director; Joseph Applebaum, Chief Actuary; Susan Barnidge; William A. Crafton; Drew Long; Kevin Milne; and Dawn D. Nelson made key contributions to this report.
Medicaid managed care rates are required to be actuarially sound. A state is required to submit its rate-setting methodology, including a description of the data used, to the Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) for approval. The Children's Health Insurance Program Reauthorization Act of 2009 required GAO to examine the extent to which states' rates are actuarially sound. GAO assessed CMS oversight of states' compliance with the actuarial soundness requirements and efforts to ensure the quality of data used to set rates. GAO reviewed documents, including rate-setting review files, from 6 of CMS's 10 regional offices. The selected offices oversaw 26 of the 34 states with comprehensive managed care programs; the states' programs varied in size and accounted for over 85 percent of managed care enrollment. GAO interviewed CMS officials and Medicaid officials from 11 states that were chosen based in part on variation in program size and geography.

CMS has been inconsistent in reviewing states' rate setting for compliance with the Medicaid managed care actuarial soundness requirements, which specify that rates must be developed in accordance with actuarial principles, appropriate for the population and services, and certified by actuaries. Variation in CMS regional office practices contributed to this inconsistency in oversight. For example, GAO found significant gaps in CMS's oversight of two states. First, the agency had not reviewed Tennessee's rate setting for multiple years and only determined that the state was not in compliance with the requirements through the course of GAO's work. According to CMS officials, Tennessee received approximately $5 billion a year in federal funds for rates that GAO determined had not been certified by an actuary, which is a regulatory requirement.
Second, CMS had not completed a full review of Nebraska's rate setting since the actuarial soundness requirements became effective, and therefore may have provided federal funds for rates that were not in compliance with all of the requirements. Variation in a number of CMS regional office practices contributed to these gaps and other inconsistencies in the agency's oversight of states' rate setting. For example, regional offices varied in the extent to which they tracked state compliance with the actuarial soundness requirements, their interpretations of how extensive a review of a state's rate setting was needed, and their determinations regarding sufficient evidence for meeting the actuarial soundness requirements. As a result of GAO's review, CMS took a number of steps that may address some of the variation that contributed to inconsistent oversight, such as requiring regional office officials to use a detailed checklist when reviewing states' rate setting. However, additional steps are necessary to prevent further gaps in oversight and additional federal funds from being paid for rates that are not in compliance with the actuarial soundness requirements.

CMS's efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans--efforts that did not provide the agency with enough information to ensure the quality of the data used. CMS's regulations do not include standards for the type, amount, or age of the data used to set rates, and states are not required to report to CMS on the quality of the data. When reviewing states' descriptions of the data used to set rates, CMS officials focused primarily on the appropriateness of the data rather than their reliability. With limited information on data quality, CMS cannot ensure that states' managed care rates are appropriate, which places billions of federal and state dollars at risk for misspending.
States and other sources have information on the quality of data used for rate setting--information that CMS could obtain. In addition, CMS could conduct or require periodic audits of data used to set rates; CMS is required to conduct such audits for the Medicare managed care program.

GAO recommends that CMS implement a mechanism to track state compliance with the requirements, clarify guidance on rate-setting reviews, and make use of information on data quality in overseeing states' rate setting. HHS agreed with GAO's recommendations and described initiatives underway that are aimed at improving CMS's oversight.
The CH-53K helicopter mission is to provide combat assault transport of heavy weapons, equipment, and supplies from sea to support Marine Corps operations ashore. The CH-53K is a new-build design evolution of the existing CH-53E and is expected to maintain the same shipboard footprint, while providing significant lift, reliability, maintainability, and cost-of-ownership improvements. Its major improvements include upgraded engines, redesigned gearboxes, composite rotor blades and rotor system improvements, fly-by-wire flight controls, a fully integrated glass cockpit, improved cargo handling and capacity, and survivability and force protection enhancements. It is expected to be able to transport external loads totaling 27,000 pounds over a range of 110 nautical miles under high-hot conditions without refueling and to fulfill land- and sea-based heavy-lift requirements.

Sikorsky was awarded a sole-source contract to develop the CH-53K helicopter because, according to the program office, as the developer of the CH-53E, it is the only known qualified source with the ability to design, develop, and produce the required CH-53 variant. The program entered the system development and demonstration phase of the acquisition process in December 2005 and a $3 billion development contract was awarded to Sikorsky in April 2006. Beginning in 2006, the program experienced schedule delays that resulted in cost increases to the development contract. As a result of the schedule delays and cost growth, in 2009 the program office reported a cost and schedule deviation to its original cost and acquisition program baselines to OSD. However, these increases were not significant enough to incur what is commonly referred to as a Nunn-McCurdy breach. In July 2010, the CH-53K program completed what it deemed a successful critical design review (CDR), signaling that it had a stable design and could begin building developmental test aircraft.
The program began building the first of five developmental test aircraft in early 2011, plans to make a decision to enter low-rate initial production (LRIP) in 2015, and plans to achieve an initial operational capability (IOC) in 2018.

Primarily because of decisions to increase the number of aircraft and other issues, the CH-53K program has experienced approximately $6.8 billion in cost growth and a nearly 3-year delay from original schedule estimates for achieving IOC. The program started development before determining how to achieve requirements within program constraints, which led to cost growth and schedule delays and resulted in the program delaying its preliminary design review to September 2008, nearly 3 years after development start. In addition, the program received permission to defer three performance capabilities and relax two technical metrics associated with operating and support costs--which we believe are sound acquisition decisions--and will deliver the initial capability to the warfighter in 2018, almost 3 years later than originally planned. In the end, delayed delivery will require the Marine Corps to rely longer on legacy aircraft that are more costly to operate and maintain, less reliable, and less capable of performing the same mission.

The CH-53K program's estimates of cost, schedule, and quantity have significantly grown since development started in December 2005. The Marine Corps now plans to buy a total of 200 CH-53K helicopters for an estimated $25.5 billion, a 36 percent increase over its original estimates. The majority of this increase is due to added quantities. The program's schedule delays have increased the development cost estimate by over $1.7 billion, or more than 39 percent. In 2008, the Marine Corps directed the program to increase its total quantity estimate from 156 to 200 aircraft to support an increase in strength from 174,000 to 202,000 Marines.
In February 2011, the Secretary of Defense testified that the number of Marine Corps troops may decrease by up to 20,000 Marines beginning in fiscal year 2015. The Marine Corps has assessed the required quantity of aircraft and determined that the requirement for 200 aircraft remains valid despite the proposed manpower decrease. Primarily as a result of the aircraft quantity increase, the program's procurement cost estimate has also increased by over $5 billion, or 35 percent, from nearly $14.4 billion to over $19.4 billion. The program's average procurement unit cost has increased 4.8 percent. In addition, the program's schedule delays have delayed its ability to achieve IOC until 2018, nearly 3 years later than originally planned. Table 1 compares the program's original baseline estimates of cost, quantity, and major schedule events to current program estimates. The program started development before determining how to achieve requirements within program constraints, which led to cost growth and schedule delays. The CH-53K program originally scheduled its preliminary design review for June 2007, a year and a half after the program began development, and later delayed it to September 2008, nearly 3 years after development start. We have reported that performing systems engineering reviews--including a system requirements review, system functional review, and preliminary design review--before a program is initiated and a business case is set is critical to ensuring that a program's requirements are defined and feasible and that the design can meet those requirements within cost, schedule, and other system constraints. Problems with systems engineering began immediately within the program because the program and Sikorsky disagreed on what systems engineering tasks needed to be accomplished. As a result, the bulk of the program's systems engineering problems related to derived requirements. 
According to an OSD official, the contractor did not account for total design workload, technical reviews, and development efforts. For example, the program experienced problems defining software specifications for its Avionics Management System. While Marine Corps officials commented that requirements are often difficult to define early in the engineering process and changes are expected during design maturation, they noted that in this case the use of a firm fixed-price contract with the subcontractor made it difficult to facilitate changes. As a result, completing this task took longer than the program had estimated and the program's CDR was delayed. In another example, the program has a requirement that the CH-53K be transportable by C-5 aircraft. As with the CH-53E, because of its size, the CH-53K's rotor and main gearbox will be removed from the aircraft's body in order to fit within the height requirements of a C-5. The program office interpreted this as requiring that each CH-53K be shipped in its entirety on a single C-5 aircraft, including the removed rotor and gearbox. However, the contractor interpreted the requirement differently and proposed shipping all rotors and main gearboxes in another C-5 separate from the CH-53K body. Program officials did not accept this interpretation of the requirement and required the contractor to propose a solution in which each CH-53K aircraft would be shipped and arrive in its entirety in a single C-5 aircraft. Marine Corps officials commented that even though this requirement was interpreted differently, it was identified early in the systems engineering process and addressed. The program office and contractor underestimated the time it would take to hire its workforce, and delays in awarding subcontracts made it difficult for the program to complete design tasks and maintain its schedule. 
According to an OSD official, while the program officially began development in December 2005, the development contract was not awarded until 4 months later--in April 2006--delaying development start. According to program officials, budget-driven hiring restrictions for government personnel, which included ceilings on the number of government personnel who could be assigned to the program management office, affected the program's ability to hire its workforce at the time the program was initiated. Similarly, program officials told us that the contractor underestimated the amount of time required to locate, recruit, train, and assign qualified personnel to the program. The contractor was also late in awarding contracts to its major subcontractors. To mitigate the risk of production cost growth, the contractor established long-term production agreements with its subcontractors. According to program officials, in these agreements subcontractors committed in advance to pricing arrangements for the production of parts and spares. While the contractor used this strategy to reduce program risk, it resulted in a delay and the major subcontracts were awarded later than needed to maintain the program's initially planned schedule. In 2010, the CH-53K program received approval from the Joint Requirements Oversight Council (JROC) to defer three performance capabilities that make up a portion of the Net-Ready key performance parameter, and from the Marine Corps to relax two maintenance-based technical performance metrics--both of which we believe are sound acquisition decisions. The Department of Defense's (DOD) decision to defer three performance capabilities was based on consultation among JROC, Headquarters U.S. Marine Corps, Chief of Naval Operations staff, and the program office in 2008, which prompted the CH-53K program office to review the program's requirements and identify potential areas in which to decrease costs. 
As part of that review, the program office identified several areas where costs could be deferred without decreasing capability, including three communications-related performance capabilities--Link-16, Variable Message Format, and Mode V software--that constituted part of the Net-Ready key performance parameter. Program officials estimated that this will result in over $100 million in cost deferral. Program officials explained that these software capabilities were not removed from the program's road map, but rather have been deferred until after IOC. Originally, the program's Operational Requirements Document called for all three capabilities to be fully integrated in fiscal year 2015. However, one of the capabilities must now be fully integrated no later than 6 months after IOC, which is currently scheduled to occur in 2018, and the other two capabilities must be fully integrated within 2 years of IOC. Program officials stated that deferment of these capabilities will not affect aircraft interoperability. Two technical performance metrics were changed because, according to program officials, meeting the original maintenance-based technical performance requirements for Mean Time To Repair and Mean Corrective Maintenance Time for Operational Mission Failures was not cost effective. For example, the CH-53K's rotor blades are designed to have a two-piece design featuring a removable tip. However, the curing time to adhere the blade tip to the blade was driving up the time it would take to remove and replace the blade tip. The contractor proposed meeting the original requirement by moving to a one-piece blade; however, this would increase the program's operating and support costs by approximately $99 per flight hour and increase the logistical footprint of the helicopter.
As a result, the program sought and received approval to relax the performance metric associated with replacing the blade tip instead of investing the financial resources necessary to obtain the original metrics or moving to a one-piece blade. Because of a nearly 3-year delay in initial delivery of the CH-53K, program officials estimated that it will cost approximately $927 million more to continue to maintain the CH-53E legacy system. Initial delivery of the CH-53K to the warfighter is currently scheduled for 2018, a delay of almost 3 years that will require the Marine Corps to rely on legacy aircraft that are less reliable, more costly to operate and maintain, and less capable of performing the same mission. This delay, coupled with an increased demand for the CH-53E in foreign theaters, led the Marine Corps to pull all available assets from retirement for either reentry into service or to be used for spare parts. Continued reliance on the CH-53E will be costly, as it is one of the most expensive helicopters to maintain in the Marine Corps's fleet. For example, the drive train of the CH-53E costs approximately $3,000 per flight hour to maintain. In contrast, the program estimates that the drive train for the CH-53K--its largest dynamic system--will cost only $1,000 per flight hour to maintain. In addition, the CH-53K is expected to have improved reliability and maintainability over the CH-53E legacy system. For example, the CH-53K's engine has 60 percent fewer parts than that of the CH-53E, which the program office believes will result in a more reliable engine that is easier and less costly to maintain. In addition, the CH-53K incorporates an aluminum gearbox casing, which will decrease the need for replacement resulting from corrosion. Delayed delivery of the CH-53K will also affect the ability of the Marine Corps to carry out future missions that cannot be performed by the CH-53E.
For example, the CH-53E can carry 15,000 pounds internally compared to 30,000 pounds for the CH-53K. While the CH-53K is expected to carry up to 27,000 pounds externally for 110 nautical miles at 91.5°F at an altitude of 3,000 feet--a Navy operational requirement for high-hot conditions--the CH-53E can only carry just over 8,000 pounds under the same conditions. The increased lift capability of the CH-53K during these conditions may enable it to carry the current and incoming inventory of up-armored vehicles, which are much heavier than their less-armored predecessors. For example, the up-armoring of wheeled military vehicles, such as the High Mobility Multi-purpose Wheeled Vehicle, and the introduction of the Joint Light Tactical Vehicle have resulted in a military inventory with weights that are beyond the weight limits of the CH-53E. According to program officials, without the addition of the CH-53K, the Marine Corps will soon no longer be able to carry and deliver the military's new inventory of wheeled vehicles in high-hot conditions. Figure 1 compares the capabilities and characteristics of the CH-53E and CH-53K.

The combination of the increase in the quantity of heavy-lift helicopters required to support Marine troop levels and the delayed delivery of the CH-53K to the warfighter has created a requirement gap for heavy-lift helicopters of nearly 50 helicopters (nearly 25 percent) over the next 7 years and represents an operational risk to the warfighter. However, the Marine Corps stated that it is accepting significant risk with the heavy-lift shortfall and will continue to operate under this gap until the CH-53K becomes available. Figure 2, which shows the required aircraft quantities, the current CH-53 series helicopter force structure, and planned CH-53K production, illustrates the operational risk.

The CH-53K program has made progress addressing the difficulties it faced early in system development.
The program held CDR in July 2010, demonstrating that it has the potential to move forward successfully. The program has also adopted mitigation strategies to address future program risk. The program's new strategy, as outlined in the President's fiscal year 2012 budget, lengthens the development schedule, increases development funding, and delays the production decision by 1 year. However, while the program's new acquisition strategy increases development time to mitigate risk, some testing and production activities remain concurrent, which could result in costly retrofits if problems are discovered during testing. The CH-53K program has taken several steps to address some of the shortfalls that the program experienced early in development. For example, the program has addressed its cost growth by revising its cost estimate to align with the current schedule. The program's 2011 budget request fully funded the development program to its revised estimate. The program addressed its early staffing issues by increasing staffing levels beginning in January 2009 and maintained those levels through completion of CDR. In addition, the program delayed technical reviews until it was prepared to move forward, thereby becoming more of an event-driven rather than a schedule-driven program. An event-driven approach enables developers to be reasonably certain that their products are more likely to meet established cost, schedule, and performance baselines. For instance, the program delayed CDR--a vehicle for making the determination that a product's design is stable and capable of meeting its performance requirements--until all subsystem design reviews were held and more than 90 percent of engineering designs had been released. In July 2010, the program completed system integration--a period when individual components of a system are brought together--culminating with the program's CDR. 
With completion of CDR, the program has demonstrated that the CH-53K design is stable--an indication that it is appropriate to proceed into fabrication, demonstration, and testing and that it is expected that the program can meet stated performance requirements within cost and schedule. At the time CDR was held, the program had released 93 percent of its engineering drawings, exceeding the best practice standard for the completion of system integration. According to best practices, a high percentage of design drawings--at least 90 percent--should be completed and released to manufacturing at CDR. Additionally, the program office stated that all 29 major subsystem design reviews were held prior to the start of CDR, and that coded software delivery was ahead of schedule. In the end, the Technical Review Board, the approving authority for CDR, determined that the program was ready to transition to system demonstration--a period when the system as a whole demonstrates its reliability as well as its ability to work in the intended environment--and identified seven action items, none of which were determined by the program office to be critical. The program has also adopted several mitigation strategies to address future program risk. The program has established weight improvement plans to address risks associated with any potential weight increases and has been able to locate areas where weight reductions can be made. For example, the program worked with the subcontractor responsible for designing and manufacturing the floor of the CH-53K to find areas to reduce weight. The program has also created several working groups to reduce risk to the overall capabilities of the CH-53K. For example, the Capabilities Integrated Product Team, which meets on a monthly basis, was developed to focus on risk relating to the program's requirements. This team comprises officials from the program office; Headquarters U.S. 
Marine Corps; Marine Corps Combat Development Command; Chief of Naval Operations staff; the Navy's Commander, Operational Test and Evaluation Force, staff; the operational testing squadron; and the developmental testing squadron. Its members work with the program office to identify, clarify, and resolve mission-related issues and program requirements. In addition, the program holds integrating design reviews every 6 months, freezing the working design in order to hold a system-level review and manage design risk. The CH-53K program's schedule contains overlap, or concurrency, between testing and production. The stated rationale for concurrency is to introduce systems in a timelier manner or to fulfill an urgent need, to avoid technology obsolescence, to maintain an efficient industrial development/production workforce, or a combination of these. While some concurrency may be beneficial to efficiently transition from development to production, there is also risk in concurrency. Any changes in design and manufacturing that require modifications to delivered aircraft or to tooling and manufacturing processes would result in increased costs and delays in getting capabilities to the warfighter. In the past, we have reported a number of examples of the adverse consequences of concurrent testing and delivery of systems and how concurrency can place significant investment at risk and increases the chances that costly design changes will surface during later testing. The CH-53K program's original schedule contained concurrency between testing and aircraft production. In 2009, reflecting the early difficulties experienced in development, the CH-53K program revised its cost and schedule estimates. This revised schedule would have reduced the program's level of concurrency. 
For example, while the original program schedule called for developmental testing to be ongoing during the production of all three lots of LRIP, the schedule resulting from the 2009 adjustments called for developmental testing to be ongoing during the first two lots of LRIP. However, the program had concerns that this schedule's allowance of approximately 2 years between final delivery of developmental test aircraft and the beginning of LRIP would create a production gap that could be costly. As a result, the program office was considering accelerating procurement funds in an effort to begin production 1 year earlier than planned and minimize breaks in production. This consideration was negated, however, as a result of a funding cut that the program sustained in the process of formulating the President's fiscal year 2012 budget. In February 2011, the President's fiscal year 2012 budget was released and outlined changes to the program's budget and schedule. According to a program official, the program's requested budget was reduced by approximately $30.5 million in fiscal year 2012 (and a total of $94.6 million between fiscal year 2010 and fiscal year 2015)--funds to be applied to other DOD priorities. The President's budget reports that while the CH- 53K program was fully funded to the OSD Cost Assessment and Program Evaluation Office estimate in the President's fiscal year 2011 budget, the funding adjustments made to the program in the President's fiscal year 2012 budget would result in a net increase of $69 million to the development cost estimate and a schedule delay of approximately 7 months. The new schedule results in later delivery of developmental test aircraft and delays some testing. As a result, according to program officials, the production gap issue has been addressed. Another result, though, is that the program's new schedule maintains a level of concurrency similar to that of the original schedule. 
Program officials have conceded that concurrency exists within their program, but state that this concurrency will reduce the operational risk of further delaying IOC. In commenting on the risks of concurrency, Marine Corps officials noted that the time allotted prior to the start of production and the small quantity of LRIP planned reduce the risks of costly retrofits resulting from issues identified during developmental test. Figure 3 compares the CH-53K program's original and new schedules. [Figure 3: Original schedule--LRIP Lot 1 (6), LRIP Lot 2 (9), LRIP Lot 3 (14), full rate production (FRP) Lots 4-9 (127); new schedule--LRIP Lot 1 (6), LRIP Lot 2 (9), LRIP Lot 3 (14), FRP (171).] As the CH-53K program moves forward, it is important that further cost growth and schedule delays are mitigated. The CH-53K program's new acquisition strategy addresses previous programmatic issues that led to early development cost growth and schedule delays. DOD provided technical comments on the information in this report, which GAO incorporated as appropriate, but declined to provide additional comments. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretary of the Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix II. 
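The lot quantities in figure 3 can be cross-checked against the aircraft totals reported elsewhere in this report. A quick arithmetic sketch (lot quantities from the report; the code itself is only illustrative):

```python
# Aircraft quantities per production lot for the CH-53K program's
# original and new schedules, as shown in figure 3.
original_lots = [6, 9, 14, 127]  # LRIP Lots 1-3, then FRP Lots 4-9
new_lots = [6, 9, 14, 171]       # LRIP Lots 1-3, then FRP

# The lot totals match the quantities before and after the Marine
# Corps-directed increase from 156 to 200 aircraft.
print(sum(original_lots))  # 156
print(sum(new_lots))       # 200
```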
To determine how the CH-53K's estimates of cost, schedule, and quantity have changed since the program began development, we received briefings by program and contractor officials and reviewed budget documents, annual Selected Acquisition Reports, monthly status reports, performance indicators, and other data. We compared reported progress with the program of record and previous years' data, identified changes in cost and schedule, and obtained officials' reasons for these changes. We interviewed officials from the CH-53K program and the Department of Defense (DOD) to obtain their views on progress, ongoing concerns, and actions taken to address them. To identify the CH-53K's current acquisition strategy and determine how this strategy will meet current program targets as well as the warfighter's needs, we reviewed the program's acquisition schedule and other program documents, such as Selected Acquisition Reports and test plans. We analyzed the retirement schedule of the legacy CH-53E fleet and discussed the impact of these retirements on the Marine Corps's heavy-lift requirement with appropriate officials. To identify the CH-53K program's current acquisition strategy and to determine how the program plans to meet its new targets and still meet the needs of the warfighter, we obtained from the program--through program documents--the program's revised acquisition plans. In performing our work, we obtained documents, data, and other information and met with CH-53K program officials at Patuxent River, Maryland, and the prime contractor, Sikorsky Aircraft Corporation, at Stratford, Connecticut. We met with officials from Headquarters Marine Corps, the Office of the Chief of Naval Operations, and the Office of the Secretary of Defense's Cost Assessment and Program Evaluation Office at the Pentagon, Arlington, Virginia. 
We interviewed officials from the Office of Director of Defense Research and Engineering and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Office of Developmental Testing and Evaluation, in Arlington, Virginia. We also met with officials from the Defense Contract Management Agency who were responsible for the CH-53K program at Stratford, Connecticut. We drew on prior GAO work related to acquisition best practices and reviewed analyses and assessments done by DOD. To assess the reliability of DOD's cost, schedule, and performance data for the CH-53K program, we talked with knowledgeable agency officials about the processes and practices used to generate the data. We determined that the data we used were sufficiently reliable for the purpose of this report. We conducted this performance audit from February 2010 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff members made key contributions to this report: Bruce Thomas, Assistant Director; Noah Bleicher; Marvin Bonner; Laura Greifner; Laura Jezewski; and Robert Miller.
The United States Marine Corps is facing a critical shortage of heavy-lift aircraft. In addition, current weapon systems are heavier than their predecessors, further challenging the Marine Corps's current CH-53E heavy-lift helicopters. To address the emerging heavy-lift requirements, the Marine Corps initiated the CH-53K Heavy Lift Replacement program, which has experienced significant cost increases and schedule delays since entering development in 2005. This report (1) determines how the CH-53K's estimates of cost, schedule, and quantity have changed since the program began development and the impact of these changes and (2) determines how the CH-53K's current acquisition strategy will meet current program targets as well as the warfighter's needs. To address these objectives, GAO analyzed the program's budget, schedules, acquisition reports, and other documents and interviewed officials from the program office, the prime contractor's office, the Marine Corps, the Defense Contract Management Agency, and the Office of the Secretary of Defense. The CH-53K helicopter's mission is to provide combat assault transport of heavy weapons, equipment, and supplies from sea to support Marine Corps operations ashore. Since the program began development in December 2005, its total cost estimate has grown by almost $6.8 billion, from nearly $18.8 billion to over $25.5 billion, as a result of a Marine Corps-directed quantity increase from 156 to 200 aircraft and schedule delays. The majority of the program's total cost growth is due to the added quantities. Development cost growth and schedule delays resulted from beginning development before determining how to achieve requirements within program constraints, from miscommunication between the program office and the prime contractor about systems engineering tasks, and from late staffing by both the program office and the contractor. 
The program has also deferred three performance capabilities and relaxed two maintenance-based technical performance metrics in an effort to defer cost. Delivery of the CH-53K to the warfighter is currently scheduled for 2018--a delay of almost 3 years. The CH-53K program has made progress addressing the difficulties it faced early in system development. It held a successful critical design review in July 2010 and has adopted mitigation strategies to address future program risk. The program's new strategy, as outlined in the President's fiscal year 2012 budget, lengthens the development schedule, increases development funding, and delays the production decision. However, adjustments made to the budget submitted to Congress reduce the program's fiscal year 2012 development funding by $30.5 million (and by a total of $94.6 million between fiscal years 2010 and 2015). According to information contained in the budget, this reduction would result in additional schedule delays to the program of approximately 7 months and a net increase of $69 million to the total development cost estimate. The CH-53K program's new acquisition strategy addresses previous programmatic issues that led to early development cost growth and schedule delays.
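The cost and quantity figures above imply only a modest rise in average cost per aircraft. A rough check using the rounded numbers in this report (the per-unit figures are my arithmetic, not values reported by the program):

```python
# Rounded figures from the report.
old_cost_b, new_cost_b = 18.8, 25.5   # total cost estimate, $ billions
old_qty, new_qty = 156, 200           # aircraft quantities

growth_b = new_cost_b - old_cost_b    # ~$6.7B; the report says "almost $6.8 billion"
                                      # (the gap reflects rounding of the inputs)
per_unit_old_m = old_cost_b * 1000 / old_qty   # average cost per aircraft, $ millions
per_unit_new_m = new_cost_b * 1000 / new_qty

print(round(growth_b, 1), round(per_unit_old_m, 1), round(per_unit_new_m, 1))
# 6.7 120.5 127.5
```

The comparison shows that most of the growth tracks the added quantities, consistent with the report's attribution, rather than a large jump in per-aircraft cost.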
The South Florida ecosystem covers about 18,000 square miles in 16 counties. It extends from the Kissimmee Chain of Lakes south of Orlando to Lake Okeechobee, and continues south past the Florida Bay to the reefs southwest of the Florida Keys. The ecosystem is in jeopardy today because of past efforts that diverted water from the Everglades to control flooding and to supply water for urban and agricultural development. The Central and Southern Florida project, a large-scale water control project begun in the late 1940s, constructed more than 1,700 miles of canals and levees and over 200 water control structures that drain an average of 1.7 billion gallons of water per day into the Atlantic Ocean and the Gulf of Mexico. This construction resulted in insufficient water for the natural system and for the growing population, along with degraded water quality. Today, the Everglades has been reduced to half its original size and the ecosystem continues to deteriorate because of the alteration of the water flow, impacts of agricultural and industrial activities, and increasing urbanization. In response to growing signs of ecosystem deterioration, federal agencies established the South Florida Ecosystem Restoration Task Force in 1993 to coordinate ongoing federal restoration activities. The Water Resources Development Act of 1996 formalized the Task Force and expanded its membership to include state, local, and tribal representatives, and charged it with coordinating and facilitating efforts to restore the ecosystem. The Task Force, which is chaired by the Secretary of the Department of the Interior, consists of 14 members representing 7 federal agencies, 2 American Indian tribes, and 5 state or local governments. To accomplish the restoration, the Task Force established the following three goals: Get the water right. The purpose of this goal is to deliver the right amount of water, of the right quality, to the right places, at the right times. 
However, restoring a more natural water flow to the ecosystem while providing adequate water supplies and controlling floods will require efforts to expand the ecosystem's freshwater supply and improve the delivery of water to natural areas. Natural areas of the ecosystem are made up of federal and state lands, and coastal waters, estuaries, bays, and islands. Restore, preserve, and protect natural habitats and species. To restore lost and altered habitats and recover the endangered or threatened species native to these habitats, the federal and state governments will have to acquire lands and reconnect natural habitats that have become disconnected through growth and development, and halt the spread of invasive species. Foster compatibility of the built and natural systems. To achieve the long-term sustainability of the ecosystem, the restoration effort has the goal of maintaining the quality of life in urban areas while ensuring that (1) development practices limit habitat fragmentation and support conservation and (2) traditional industries, such as agriculture, fishing, and manufacturing, continue to be supported and do not damage the ecosystem. The centerpiece for achieving the goal to get the water right is the Comprehensive Everglades Restoration Plan (CERP), approved by the Congress in the Water Resources Development Act of 2000 (WRDA 2000). CERP is one of the most ambitious restoration efforts the federal government has ever undertaken. It currently encompasses 60 individual projects that will be designed and implemented over approximately 40 years. These projects are intended to increase the water available for the natural areas by capturing much of the water that is currently being diverted, storing the water in many different reservoirs and storage wells, and releasing it when it is needed. The cost of implementing CERP will be shared equally between the federal government and the state of Florida and will be carried out primarily by the U.S. 
Army Corps of Engineers (the Corps) and the South Florida Water Management District (SFWMD), which is the state authority that manages water resources for South Florida. After the Corps and SFWMD complete the initial planning and design for individual CERP projects, they must submit the proposed projects to the Congress to obtain authorization and funding for construction. In addition to the CERP projects, another 162 projects are also part of the overall restoration effort. Twenty-eight of these projects, when completed, will serve as the foundation for many of the CERP projects and are intended to restore a more natural water flow to Everglades National Park and improve water quality in the ecosystem. Nearly all of these "CERP-related" projects were already being designed or implemented by federal and state agencies, such as the Department of the Interior and SFWMD, in 2000 when the Congress approved CERP. The remaining 134 projects include a variety of efforts that will, among other things, expand wildlife refuges, eradicate invasive species, and restore wildlife habitat, and are being implemented by a number of federal, state, and tribal agencies, such as the U.S. Fish and Wildlife Service, the Florida Department of Environmental Protection (FDEP), and the Seminole Tribe of Florida. Because these projects were not authorized as part of CERP and do not serve as CERP's foundation, we refer to them as "non-CERP" projects. Success in completing the restoration effort and achieving the expected benefits for the ecosystem as quickly as possible and in the most cost-effective manner depends on the order, or sequencing, in which many of the 222 projects will be designed and completed. Appropriate sequencing is also important to ensure that interdependencies among restoration projects are not ignored. 
For example, projects that will construct water storage facilities and stormwater treatment areas need to be completed before undertaking projects that remove levees and restore a more natural water flow to the ecosystem. Recognizing the threats that Everglades National Park was facing, in 1993, UNESCO's World Heritage Committee (WHC) included the Park on its List of World Heritage in Danger. This list includes cultural or natural properties that are facing serious and specific threats such as those caused by large-scale public or private projects or rapid urbanization; the outbreak or the threat of an armed conflict; calamities and cataclysms; and changes in water levels, floods, and tidal waves. The Park's inclusion on the list resulted from five specific threats: (1) urban encroachment; (2) agricultural fertilizer pollution; (3) mercury contamination of fish and wildlife; (4) lowered water levels due to flood control measures; and (5) damage from Hurricane Andrew, which struck the south Florida peninsula in 1992 with winds exceeding 164 miles per hour. In 2006, WHC adopted a set of benchmarks that, when met, would lead to the Park's removal from the list. According to Park and WHC documents, nine projects that are part of the overall restoration effort will contribute to the achievement of these benchmarks. Forty-three of the 222 projects that constitute the South Florida ecosystem restoration effort have been completed, while the remaining projects are currently being implemented or are either in design, being planned, or have not yet started. Table 1 shows the status of the 222 restoration projects. Completed Restoration Projects -- Although 43 of the 222 projects have been completed since the beginning of the restoration effort, this total is far short of the 91 projects that the agencies reported would be completed by 2006. Nine projects were completed before 2000 when the strategy to restore the ecosystem was set. 
These projects are expected to provide benefits primarily in the area of habitat acquisition and improvement. Thirty-four projects were completed between 2000 and 2006. The primary purposes of these projects range from the construction of stormwater treatment areas, to the acquisition or improvement of land for habitat, to the drafting of water supply plans. Ongoing Restoration Projects -- Of the 107 projects currently being implemented, 7 are CERP projects, 10 are CERP-related projects, and 90 are non-CERP projects. Five of the seven CERP projects are being built by the state in advance of the Corps' completion of the necessary project implementation reports and submission of them to the Congress for authorization and appropriations. Nonetheless, some of the CERP projects currently in implementation are significantly behind schedule. For example, four of the seven CERP projects in implementation were originally scheduled for completion between November 2002 and September 2006, but instead will be completed up to 6 years behind their original schedule because it has taken the Corps longer than originally anticipated to design and obtain approval for these projects. Overall, 19 of the 107 projects currently being implemented have expected completion dates by 2010. Most of the remaining 88 projects are non-CERP habitat acquisition and improvement projects that have no firm end date because the land will be acquired from willing sellers as it becomes available. Projects Not Yet Implemented -- Of the 72 restoration projects not yet implemented--in design, in planning, or not yet started--53 are CERP projects that are expected to be completed over the next 30 years and will provide important benefits such as improved water flow, additional water for restoration as well as other water-related needs. In contrast, the other 19 projects include 3 CERP-related and 16 non-CERP projects, which are expected to be completed by or before 2013. 
Consequently, the full environmental benefits for the South Florida ecosystem restoration that the CERP projects were intended to provide will not be realized for several decades. Several of the CERP projects in design, in planning, or not yet begun were originally planned for completion between December 2001 and December 2005, but instead will be completed from 2 to 6 years behind their original schedule. According to agency officials, CERP project delays have occurred for the following reasons: It took longer than expected to develop the appropriate policy, guidance, and regulations that WRDA 2000 requires for the CERP effort. Some delays were caused by the need to modify the conceptual design of some projects to comply with the requirements of WRDA 2000's savings clause. According to this clause, CERP projects cannot transfer or eliminate existing sources of water unless an alternate source of comparable quantity and quality is provided, and they cannot reduce existing levels of flood protection. Progress was limited by the availability of less federal funding than expected and a lack of congressional authorization for some of the projects. The extensive modeling that accompanies the design and implementation of each project, in addition to the "cumbersome" project review process, may also have contributed to delays, as may the stakeholder comment, dispute resolution, and consensus-building that occur at each stage of a project. Delays have occurred in completing the CERP-related Modified Water Deliveries to Everglades National Park (Mod Waters) project, which is a major building block for CERP. These delays, in turn, have delayed CERP implementation. Given the continuing delays in implementing critical CERP projects, the state has begun expediting the design and construction of some of these projects with its own resources. 
The state's effort, known as Acceler8, includes most of the CERP projects that were among WRDA 2000's 10 initially authorized projects, whose costs were to be shared by the federal government and the state. According to Florida officials, by advancing the design and construction of these projects with its own funds, the state hopes to more quickly realize restoration benefits for both the natural and human environments and to jump-start the overall CERP effort once the Congress begins to authorize individual projects. The Acceler8 projects include seven that are affiliated with CERP and an eighth that expands existing stormwater treatment areas. The state expects to spend more than $1.5 billion to design and construct these projects by 2011. Most of the restoration projects that would help Everglades National Park achieve the WHC's benchmarks for removing the Park from its list of world heritage sites in danger have not been completed. According to Park and WHC documents, nine restoration projects were key to meeting these benchmarks. Table 2 lists the nine projects, the type of project, implementation status, and expected completion date. As table 2 shows, only one of the nine projects has been completed; four projects are ongoing and will not be completed until at least 2012; and four projects are still in planning and design and are not expected to be completed until some time between 2015 and 2035. In February 2007, the United States prepared a status report for the WHC on the progress made in achieving the benchmarks that the committee had established for the Park in 2006. 
Based on its review of this progress report, at a benchmarks meeting on April 2-3, 2007, the WHC's draft decision was to retain Everglades National Park on the list of world heritage sites in danger and to recommend that the United States continue its commitment to the restoration and conservation of the Park and provide the required financial resources for the full implementation of the activities associated with CERP. WHC's draft decision also requested that the United States provide an updated report by February 1, 2008, on the progress made toward implementation of the corrective measures. However, at the WHC session held between June 23 and July 2, 2007, the WHC decided to remove the Park from the list of world heritage sites in danger and commended the United States for the progress made in implementing corrective measures. In its final decision, the WHC encouraged the United States to continue its commitment to the restoration and provide the required financial resources for the full implementation of the activities associated with CERP. It is unclear from the WHC final decision document whether any additional or new information was provided to the committee that led to its final decision. No overall sequencing criteria guide the implementation of the 222 projects that comprise the South Florida ecosystem restoration effort. For the 60 CERP projects, there are clearly defined criteria to be considered in determining the scheduling and sequencing of projects. However, the Corps has not fully applied these criteria when making CERP project sequencing decisions because it lacked key data such as updated environmental benefits data and interim goals. As a result, the Corps primarily relied on technical interdependencies and availability of funding as the criteria for making sequencing decisions. 
The Corps has recently started to revisit priorities for CERP projects and alter project schedules that were established in 2005 (this process is referred to as CERP-reset). However, because the Corps continues to lack certain key data for making sequencing decisions, the revised plan, when completed, will also not fully adhere to the criteria. Although CERP-related projects provide the foundation for many CERP projects, there are no established criteria for determining their implementation schedule, and their estimated start and completion dates largely depend upon when and if the implementing agency will have sufficient funding to implement the project. For example, the construction of the Mod Waters project has been delayed several times since 1997 because, among other things, Interior did not receive enough funding to complete the construction of this project. This project is expected to restore natural hydrologic conditions across 190,000 acres of habitat in Everglades National Park and assist in the recovery of threatened and endangered plants and wildlife. The completion date for the Mod Waters project has slipped again, and the project is now not expected to be completed until 2011. Because completion of this project is critical to the implementation of other CERP projects, such as the Water Conservation Area 3 Decompartmentalization and Sheetflow Enhancement (Decomp) project--a project that many agency officials consider key to restoring the natural system--these delays will have a ripple effect on the completion date of that project as well. Similarly, for non-CERP projects, agencies reported that they do not have any sequencing criteria; instead, they decide on the scheduling and timing of these projects primarily if and when funding becomes available. 
For example, Florida has a land acquisition program to acquire lands for conservation and habitat preservation throughout the state, including for some non-CERP projects that are part of the South Florida ecosystem restoration effort. State officials have identified lands and added them to a list of priority projects proposed for acquisition throughout the state. However, whether these lands will be acquired for non-CERP projects depends on whether funding is available in the annual budget, whether there are willing sellers, and whether the land is affordable given the available funding. Because the correct sequencing of CERP projects is essential to the overall success of the restoration effort, we recommended that the Corps obtain the data that it needs to ensure that all required sequencing criteria are considered and then comprehensively reassess its sequencing decisions to ensure that CERP projects have been appropriately sequenced to maximize the achievement of restoration goals. The agency agreed with our recommendation. From fiscal year 1999 through fiscal year 2006, federal and state agencies participating in the restoration of the South Florida ecosystem provided $7.1 billion for the effort. Of this total, federal agencies provided $2.3 billion and Florida provided $4.8 billion. Two agencies--the Corps and Interior--provided over 80 percent of the federal contribution. As figure 1 shows, federal and state agencies allocated the largest portion of the $7.1 billion to non-CERP projects for fiscal years 1999 through 2006. While federal agencies and Florida provided about $2.3 billion during fiscal years 1999 through 2006 for CERP projects, this amount was about $1.2 billion less than they had estimated needing for these projects over this period. This was because the federal contribution was $1.4 billion less than expected.
This shortfall occurred primarily because CERP projects did not receive the congressional authorization and appropriations that the agencies had expected. In contrast, Florida provided a total of $2 billion over the period, exceeding its expected contribution to CERP by $250 million and therefore making up some of the federal funding shortfall. Additionally, between July 31, 2000, and June 30, 2006, the total estimated cost for the South Florida ecosystem restoration grew from $15.4 billion to $19.7 billion, or by 28 percent. A significant part of this increase can be attributed to CERP projects; for these projects, costs increased from $8.8 billion to $10.1 billion. This increase represents nearly 31 percent of the increase in the total estimated cost for the restoration. Agency officials reported that costs have increased for the restoration effort primarily because of inflation, increased land and construction costs, and changes in the scope of work. Furthermore, the costs of restoring the South Florida ecosystem are likely to continue to increase for the following reasons: Estimated costs for some of the projects are not known or fully known because they are still in the design and planning stage. For example, the total costs for one project that we examined--the Site 1 Impoundment project--grew by almost $36 million, from about $46 million to about $81 million, after the design phase was completed. If other CERP projects, for which initial planning and design have not yet been completed, also experience similar increases in project costs, then the estimated total costs of not only CERP but the overall restoration effort will grow significantly. The full cost of acquiring land for the restoration effort is not known. Land costs for 56 non-CERP land projects, expected to total 862,796 acres, have not yet been reported. According to state officials, Florida land prices are escalating rapidly, owing primarily to development pressures.
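The percentage figures above follow directly from the reported dollar totals; a minimal Python sketch of the arithmetic (all dollar figures are taken from this statement, and the small differences from the reported percentages reflect rounding):

```python
# Growth in the total estimated restoration cost, July 2000 to June 2006
# (figures in billions of dollars, as reported in the testimony).
total_2000 = 15.4
total_2006 = 19.7
growth = total_2006 - total_2000              # $4.3 billion
pct_growth = growth / total_2000 * 100        # ~27.9%, reported as 28 percent

# CERP projects' share of that growth.
cerp_2000 = 8.8
cerp_2006 = 10.1
cerp_share = (cerp_2006 - cerp_2000) / growth * 100   # ~30%, reported as nearly 31 percent

print(f"Total growth: ${growth:.1f}B ({pct_growth:.1f}%)")
print(f"CERP share of total growth: {cerp_share:.1f}%")
```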
Consequently, future project costs are likely to rise with higher land costs. Similarly, while land acquisition costs for CERP projects are included as part of the total estimated project costs, thus far the state has acquired only 54 percent of the land needed for CERP projects, at a cost of $1.4 billion. An additional 178,000 acres have yet to be acquired; the cost of these purchases is not yet known and is therefore not fully reflected in the cost of CERP and overall restoration costs. The cost of using new technologies for the restoration effort is unknown. The Congress authorized pilot projects in 1999 and 2000 to determine the feasibility of applying certain new technologies for storing water, managing seepage, and reusing treated wastewater. While the pilot projects have been authorized, the cost to construct or implement projects based on the results of the pilot projects is not yet known. In conclusion, Mr. Chairman, our review of the South Florida ecosystem restoration effort shows that some progress has been made in moving the restoration forward. However, the achievement of the overall goals of the restoration, and ultimately improvements in the ecological condition of Everglades National Park, depends on the effective implementation of key projects that have not progressed as quickly as was expected. Moreover, the shortfall in federal funding has contributed to some of these delays; at the same time, the costs of the restoration continue to increase and, we believe, could rise significantly higher than the current estimate of almost $20 billion. In light of these concerns, we believe that restoring the South Florida ecosystem and Everglades National Park will continue to be a significant challenge for the foreseeable future. This concludes our prepared statement. We would be happy to respond to any questions you may have. If you have any questions about this statement, please contact Anu K. Mittal at (202) 512-3841 or [email protected].
Other contributors to this statement include Sherry McDonald (Assistant Director) and Kevin Bray. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The South Florida ecosystem covers about 18,000 square miles and is home to the Everglades, one of the world's unique environmental resources. Historic efforts to redirect the flow of water through the ecosystem have jeopardized its health and reduced the Everglades to about half of its original size. In 1993, the United Nations Educational, Scientific, and Cultural Organization's World Heritage Committee (WHC) added Everglades National Park (Park) to its List of World Heritage in Danger sites. In 2000, a strategy to restore the ecosystem was set; the effort was expected to take at least 40 years and cost $15.4 billion. It comprises 222 projects, including 60 key projects known as the Comprehensive Everglades Restoration Plan (CERP), to be undertaken by a multiagency partnership. This testimony is based on GAO's May 2007 report, South Florida Ecosystem: Restoration Is Moving Forward, but Is Facing Significant Delays, Implementation Challenges, and Rising Costs, and a review of WHC decision documents relating to the Park's listing. This statement addresses (1) the status of projects implemented, (2) the status of projects key to improving the health of the Park, (3) project sequencing factors, and (4) the funding provided for the effort and the extent to which costs have increased. Of the restoration effort's 222 projects, 43 have been completed, 107 are being implemented, and 72 are in design, in planning, or not yet started. The completed and ongoing projects will provide improved water quality and water flow within the ecosystem and additional habitat for wildlife. According to restoration officials, significant progress has been made in acquiring land, constructing water quality projects, and restoring a natural water flow to the Kissimmee River--the headwater of the ecosystem.
However, the 60 CERP projects, which are the most critical to the restoration's overall success, are among those that are currently being designed, planned, or have not yet started. Some of these projects are behind schedule by up to 6 years. Florida recently began expediting the design and construction of eight key projects, with the hope that they would immediately benefit the environment, enhance flood control, and increase water supply, thus providing further momentum to the restoration. In 2006, the WHC adopted several key benchmarks that, if met, would facilitate removal of Everglades National Park from its List of World Heritage in Danger sites. As noted by the WHC, achievement of these benchmarks was linked to the implementation of nine key restoration projects. However, only one of these projects has been completed, four are currently being implemented, and four are currently being designed. Moreover, the benefits of these projects will not be available for many years because most of the projects are scheduled for completion between 2011 and 2035. There are no overarching sequencing criteria that restoration officials use when making implementation decisions for all 222 projects that make up the restoration effort. Instead, decisions for 162 projects are driven largely by the availability of funds. There are regulatory criteria to ensure that the goals and purposes of the 60 CERP projects are achieved in a cost-effective manner. However, the 2005 sequencing plan developed for these projects is not consistent with the criteria because some of the data needed to fully apply these criteria were not available. Therefore, there is little assurance that the plan will be effective. GAO recommended that the agencies obtain the needed data and then comprehensively reassess the sequencing of the CERP projects.
From fiscal years 1999 through 2006, the federal government contributed $2.3 billion and Florida contributed $4.8 billion, for a total of about $7.1 billion for the restoration. However, federal funding was about $1.4 billion short of the funds originally projected for this period. In addition, the total estimated costs for the restoration have increased by 28 percent--from $15.4 billion in 2000 to $19.7 billion in 2006--because of project scope changes, increased construction costs, and higher land costs. More importantly, these cost estimates do not represent the true costs of the overall restoration effort because they do not include all cost components for a number of projects.
While high-speed passenger rail has been in operation in Europe and Asia for several decades, it is in its relative infancy in the United States. The Passenger Rail Investment and Improvement Act of 2008 (PRIIA) called for development of high-speed rail corridors in the United States and led to establishment of the HSIPR program. FRA administers the HSIPR program as a discretionary grant program to states and others. This program was appropriated $8 billion in funding from the American Recovery and Reinvestment Act (Recovery Act) in 2009 and an additional $2.5 billion in funding from the fiscal year 2010 DOT Appropriations Act. According to FRA, as of October 2012, about $9.9 billion has been obligated for 150 projects. The California high-speed rail project is the largest recipient of HSIPR funds, with approximately $3.5 billion (about 35 percent of program funds obligated). We have previously reported on high-speed rail and the HSIPR program. For example, in March 2009 we reported on the challenges associated with developing and financing high-speed rail projects. These included securing the up-front investments for such projects and sustaining public and political support and stakeholder consensus. We concluded that whether any high-speed rail proposals are eventually built hinges on addressing the funding, public support, and other challenges facing these projects. In June 2010, we reported that states would be the primary recipients of Recovery Act funds for high-speed rail, but many states did not have rail plans that would, among other things, establish strategies and priorities for rail investments in a particular state. California's high-speed rail project is poised to be the first rail line in the United States designed to operate at speeds greater than 150 miles per hour. The planned 520-mile line will operate between San Francisco and Los Angeles at speeds up to 220 miles per hour (see fig. 1).
At an estimated cost of $68.4 billion, it is also one of the largest transportation infrastructure projects in the nation's history. The project's planning began in 1996 when the Authority was created but began in earnest after initial funding was approved in 2008 with the passage of Proposition 1A, which authorized $9.95 billion in state bond funding for construction of the high-speed rail system and improvements to connections (see fig. 2). Construction is expected to occur in phases, beginning with the 130-mile first construction segment from just north of Fresno, California, to just north of Bakersfield, California. In July 2012, the California legislature appropriated $4.7 billion in state bond funds. The process of acquiring property for the right-of-way and construction is expected to begin soon. Requests for proposals for selecting construction contractors and for right-of-way acquisition services were issued in March and September 2012, respectively. According to the Authority, a design-build contract for the first construction segment is expected to be awarded in June 2013, with construction potentially commencing no earlier than summer 2013. The project underwent substantial revision earlier this year after the Authority issued its November 2011 draft business plan in response to the initial high cost and other criticisms. Most significantly, the Authority scaled back its plans to build dedicated high-speed rail lines over the route's entire length. Instead, the April 2012 revised business plan adopted a "blended" system in which high-speed rail service would be provided over a mix of dedicated high-speed lines and existing and upgraded local rail infrastructure (primarily at the bookends of the system on the San Francisco peninsula and in the Los Angeles basin). This change was made, in part, to respond to criticism that the cost of the full-build system contained in the November 2011 draft business plan--$98.5 billion--was too high.
The revised cost in the April 2012 plan was $68.4 billion. In addition, the ridership and revenue forecasts in the April 2012 revised business plan reflected a wider uncertainty range than the forecast presented in the November 2011 plan. For example, in the November 2011 draft business plan, the Authority estimated 2030 ridership to be between 14.4 million and 21.3 million passengers and annual revenues of the high-speed rail system to be between $1.05 billion and $1.56 billion. This range increased in the April 2012 revised business plan, to between 16.1 million and 26.8 million passengers and annual revenues of between $1.06 billion and $1.81 billion. The Authority attributed the increase in the uncertainty range to additional conservatism in the low ridership estimate and attributed the ridership changes to several factors, such as the adoption of the blended approach, which, among other things, allows one-seat service from San Francisco to Los Angeles to begin sooner than under the original full-build approach. However, over time, ridership forecasts under the blended approach are lower than under the original full-build approach. To date, the state of California and the federal government have committed funding to the project. In July 2012, the California state legislature appropriated approximately $4.7 billion in Proposition 1A bond funds, including $2.6 billion for construction of the high-speed rail project and $1.1 billion for upgrades in the bookends. The federal government has also obligated $3.3 billion in HSIPR grant funds. Most of the HSIPR money awarded to the project was appropriated under the Recovery Act and, in accordance with governing grant agreements, must be expended by September 30, 2017. In addition, approximately $945 million in fiscal year 2010 funding was awarded to the project by FRA and is to remain available until expended.
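One way to sanity-check the forecast ranges is the implied average revenue per rider; this short sketch derives it from the April 2012 figures (an illustrative calculation of ours, not a figure published in the plan):

```python
# 2030 forecasts from the April 2012 revised business plan (per the testimony).
riders_low, riders_high = 16.1e6, 26.8e6       # annual passengers
revenue_low, revenue_high = 1.06e9, 1.81e9     # annual revenue, dollars

# Implied average revenue per rider at each end of the forecast range.
fare_at_low = revenue_low / riders_low         # ~$66 per passenger
fare_at_high = revenue_high / riders_high      # ~$68 per passenger

print(f"Implied revenue per rider: ${fare_at_low:.0f} (low) to ${fare_at_high:.0f} (high)")
```

The two ends of the range imply similar revenue per rider, which suggests the ridership and revenue ranges were scaled consistently.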
The Authority estimates that the high-speed rail project in California will cost $68.4 billion to construct and hundreds of millions of dollars to operate and maintain annually. Since the project is relying on significant investments of state and federal funds--and, ultimately, private funds--it is vital that the Authority, FRA, and Congress be able to rely on these estimates for the project's funding and oversight (see table 1 below for a summary of the sources of funding). GAO's Cost Guide identifies best practices that help ensure that a cost estimate is comprehensive, accurate, well documented, and credible. A comprehensive cost estimate ensures that costs are neither omitted nor double counted. An accurate cost estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs. A well-documented estimate is thoroughly documented, including source data and significance, clearly detailed calculations and results, and explanations for choosing a particular method or reference. A credible estimate discusses any limitations of the analysis from uncertainty or biases surrounding data or assumptions. These four characteristics help minimize the risk of cost overruns, missed deadlines, and unmet performance targets. Our past work on high-speed rail projects around the world has shown that projects' costs tend to be underestimated. As such, it is important to acknowledge the potential for this bias and ensure that cost estimates are as reliable as possible. Based on our ongoing review, we have found that the Authority's cost estimates exhibit both strengths and weaknesses. The quality of any cost estimate can always be improved as more information becomes available, and, based in part on evaluations from the Peer Review Group, the Authority is taking some steps to improve the cost estimates that will be provided in the 2014 business plan.
The Authority followed best practices in the Cost Guide to ensure comprehensiveness, but also exhibited some shortcomings. The cost estimates include the major components of the project's construction and operating costs. The construction cost estimate is based on detailed construction unit costs that are, in certain cases, more detailed than the cost categories required by FRA in its grant applications. However, the operating costs were not as detailed as the capital costs, as over half of the operating costs are captured in a single category called Train Operations and Maintenance. In addition, the Authority did not clearly describe certain assumptions underlying both cost estimates. For example, Authority officials told us that the California project will rely on proven high-speed rail technology from systems in other countries, but it is not clear if the cost estimates were adjusted to account for any challenges in applying the technology in California. The Authority took a number of steps to develop accurate cost estimates consistent with best practices in the Cost Guide. The estimates have been updated to reflect the new "blended" system which will rely, in part, on existing rail infrastructure; they are based on a dataset of costs to construct comparable infrastructure projects; they contain few, if any, mathematical errors; and they have been adjusted for inflation. For example, the Authority's contractor used a construction industry database of project costs supplemented with actual bid-price data from similar infrastructure projects. However, the cost estimates used in the April 2012 revised business plan do not represent final design and route alignments, and the estimates will change as the project moves into construction and operation. The Authority did not produce a risk and uncertainty analysis of its cost estimates that would help anticipate the impact of these changes. 
The Cost Guide recommends conducting a risk and uncertainty analysis to determine the primary risk factors and assess the likelihood that they may occur, helping to ensure that the estimate is neither overly conservative nor optimistic. The Authority followed some, but not all, best practices in the Cost Guide to ensure that the cost estimate is well documented. In many cases, the methodologies used to derive the construction cost estimates were well documented, but in other cases the documentation was more limited. For example, while track infrastructure costs were thoroughly documented, costs for other elements, such as stations and trains, were supported with little detail or no documentation. Additionally, in some cases where the methodologies were documented, we were unable to trace the estimates back to their source data and recreate the estimates using the stated methodology. For example, we were unable to identify how the operating costs from analogous high-speed rail projects were adjusted for the California project. The Authority took some steps consistent with our Cost Guide to ensure the cost estimates' credibility, but it did not follow some best practices. To make cost estimates credible, GAO's Cost Guide recommends testing such estimates with a sensitivity analysis (making changes in key cost inputs), conducting a risk and uncertainty analysis (discussed above), and obtaining an independent cost estimate from an unaffiliated party to see how outside estimates compare to the original estimates. While the Authority performed a sensitivity analysis for the first 30 miles of construction and an independent cost estimate for the first 185 miles of construction in the Central Valley, neither covered the entire Los Angeles to San Francisco project.
For the operating-cost estimate, the Authority conducted a sensitivity test under various ridership scenarios; however, this test was designed to measure the ability of the system to cover operating costs with ticket revenues, not to determine the potential risk factors that may affect the operating-cost estimate itself. The Authority also did not compare its operating-cost estimate to an independent cost estimate. Finally, as noted above, the Authority did not perform a risk and uncertainty analysis, which would improve the estimates' credibility by identifying a range of potential costs and indicating the degree of confidence decision makers can place in the cost estimates. The Authority is taking steps to improve its cost estimates. To make its operating-cost estimate more comprehensive and better documented, the Authority has contracted with the International Union of Railways to evaluate the existing methodology and data and help refine its estimates. In addition, to improve the construction cost estimates, the Authority will have the opportunity to validate and enhance, if necessary, the accuracy of its cost estimates once actual construction package contracts are awarded for the initial construction in the Central Valley in 2013. The bids for the first 30-mile construction package are due in January 2013 and will provide a check on how well the Authority has estimated the costs for this work, as well as provide more information on potential risks that cost estimates of future segments may encounter. In addition to challenges in developing reliable cost estimates, the California high-speed rail project also faces other challenges. These include obtaining project funding beyond the first construction segment, continuing to refine ridership and revenue estimates beyond the current forecasts, and addressing the potential increased risks to project schedules from legal challenges associated with environmental reviews and right-of-way acquisitions.
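The risk and uncertainty analysis that the Cost Guide recommends can be illustrated in miniature: treat each major cost element as a range rather than a point estimate, simulate the total many times, and report percentiles. The element names and dollar ranges below are invented for illustration only; they are not the Authority's figures.

```python
import random

random.seed(0)

# Hypothetical cost elements in billions of dollars: (low, most likely, high).
# These names and ranges are invented for illustration only.
elements = {
    "guideway and track": (25.0, 30.0, 40.0),
    "stations":           (2.0, 3.0, 5.0),
    "trainsets":          (3.0, 4.0, 6.0),
    "right-of-way":       (3.0, 3.5, 4.5),
}

def simulated_total() -> float:
    """Draw one total cost, sampling each element from a triangular distribution."""
    return sum(random.triangular(low, high, mode) for low, mode, high in elements.values())

totals = sorted(simulated_total() for _ in range(10_000))
p10, p50, p90 = (totals[int(len(totals) * p)] for p in (0.10, 0.50, 0.90))
print(f"P10 ${p10:.1f}B   P50 ${p50:.1f}B   P90 ${p90:.1f}B")
```

The spread between the P10 and P90 totals is the kind of confidence range such an analysis would give decision makers, rather than a single point estimate.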
One of the biggest challenges facing California's high-speed rail project is securing funding beyond the first construction segment. While the Authority has secured $11.5 billion from federal and state sources for project construction, almost $57 billion in funding remains unsecured. A summary of funding secured to date can be found in table 1. As with other large transportation infrastructure projects, including high-speed rail projects in other countries, the Authority is relying primarily on public financial support, with $55 billion, or 81 percent of the total construction cost, expected to come from state and federal sources. A summary of the Authority's funding plan can be found in table 2. Of the total $55 billion in state and federal funding, about $38.7 billion is uncommitted federal funds, an average of over $2.5 billion per year over the next 15 years. Most of the remaining funding is from unidentified private investment once the system is operational--a model that has been used in other countries, such as for the High Speed One line in the United Kingdom. As a result of the funding challenge, the Authority is taking a phased approach--building segments as funding is available. However, given that the HSIPR grant program has not received funding for the last 2 fiscal years and that future funding proposals will likely be met with continued concern about federal spending, the largest block of expected funds is uncertain. The Authority has identified revenues from California's newly implemented emissions cap-and-trade program in the event other funding is not made available, but according to state officials, the amounts and the authority to use these funds are not yet established. Developing reliable ridership and revenue forecasts is difficult in almost every circumstance and for a variety of reasons. Chief among these are (1) limited data and information, (2) risks of inaccurate assumptions, and (3) variation in accepted forecasting methods.
Although forecasting the future is inherently risky, reliable ridership and revenue forecasts are still critical components in estimating the economic viability of a high-speed rail project and in determining what project modifications, if any, may be needed. For example, the financial viability of California's high-speed rail project depends on generating sufficient ridership to cover its operating expenses. Ridership and revenue forecasts enable policymakers and private entities to make informed decisions on policies related to the proposed high-speed rail system and to determine the risks associated with a high-speed rail project when making investment decisions. Addressing these challenges will be important for the Authority as it works toward updating its ridership and revenue forecasts for the 2014 business plan. Limited data and information, especially early in a project before specific service characteristics are known, make developing reliable ridership and revenue forecasts difficult. And to the extent early stage data and information are available, they need to be updated to reflect changes in the economy, project scope, and consumer preferences. For example, in developing the ridership and revenue forecasts for the April 2012 revised business plan, the Authority updated several assumptions and inputs used to develop the initial ridership and revenue forecasts that were presented in the November 2011 draft business plan. Authority officials said this update was done, in part, to build in additional conservatism in the ridership forecasts, in particular in the low scenario, and to avoid optimism bias. Among other updates, the Authority revised model assumptions to reflect changes in current and anticipated future conditions for airfares and airline service frequencies, decreases in gasoline price forecasts, and anticipated declines in the growth rates for population, number of households, and employment. 
Peer review groups, such as the Ridership and Revenue Peer Review Panel (Panel) established by the Authority, and academic reviewers have examined the Authority's ridership and revenue forecast methodology. These reviewers recommended additional improvements to the model going forward. For example, in developing the forecasts used for the April 2012 revised business plan, the Authority relied on data from a 2005 survey that was conducted at airports, rail stations, and by telephone from August to November 2005. In a May 2012 report to the Authority, the Panel pointed out limitations with this data source and recommended that new data be collected to supplement the existing data for model enhancement purposes. Authority officials stated that they are currently developing a new revealed-preference and stated-preference survey to update the 2005 survey data and that they plan to begin collecting this new survey data in December 2012. Portions of the new 2012 data will be used to re-estimate and re-calibrate the ridership model to develop updated ridership and revenue forecasts for the 2014 business plan. The Authority also plans to develop a new version of the model that will make full use of the new 2012 survey data; however, the new model is not expected to be developed in time for the 2014 business plan. It will be important to complete these future model improvements as the project is developed. Risks of inaccurate forecasts are a recurring challenge for sponsors of the project. Research on ridership and revenue forecasts for rail infrastructure projects has shown that ridership forecasts are often overestimated and that actual ridership is likely to be lower. For example, a recent study examined a sample of 62 rail projects and found that for 53 of them, the demand forecasts were overestimated and actual demand was lower than forecasted demand.
According to the Authority, the ridership and revenue forecasts in its April 2012 revised business plan include a wider range of values and lower values than earlier forecasts, to help mitigate the risk of optimism bias. In addition, the Authority performed a sensitivity analysis of an extreme downside scenario to test the ridership and revenue implications of a series of downside events coinciding, such as increased average rail-travel time from Merced to the San Fernando Valley and lower auto-operating costs. Based on this analysis, the Authority determined that an extreme downside scenario would be expected to reduce ridership and revenue forecasts by 27 percent and 28 percent, respectively, below the low forecasts in the April 2012 revised business plan. According to the Authority, these forecasts would still be sufficient to cover the Authority's estimated operating costs and would not require a public operating subsidy. Authority officials stated that they intend to conduct additional sensitivity analyses going forward. Finally, accepted forecasting methods vary, and FRA has not established guidance on acceptable approaches to the development of reliable ridership and revenue forecasts. Industry standards vary, and FRA has established minimal requirements and guidance related to the information HSIPR grant applicants must provide regarding forecasts. As we have previously reported, different ridership-forecasting methods may yield diverse and therefore uncertain results. As such, we have recommended that the Secretary of Transportation develop guidance and methods for ensuring the reliability of ridership forecasts. Similarly, the DOT OIG has also recommended that FRA develop specific and detailed guidance for the preparation of HSIPR ridership and revenue forecasts.
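The reported downside reductions can be applied to the low forecasts directly; a brief sketch (the forecast values come from this statement, while the resulting downside numbers are our arithmetic, not the Authority's published values):

```python
# Low 2030 forecasts from the April 2012 revised business plan (per the testimony).
riders_low = 16.1e6      # annual passengers
revenue_low = 1.06e9     # annual revenue, dollars

# Extreme downside scenario: reductions the Authority reported.
downside_riders = riders_low * (1 - 0.27)     # ~11.8 million passengers
downside_revenue = revenue_low * (1 - 0.28)   # ~$0.76 billion

print(f"Downside ridership: {downside_riders / 1e6:.1f} million")
print(f"Downside revenue: ${downside_revenue / 1e9:.2f} billion")
```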
Various agencies and transportation experts have identified certain components of the ridership- and revenue-forecasting process that affect results more than others and that are necessary for developing reasonable forecasts. Among others, key components include processes for developing trip tables, developing a mode-choice model, conducting sensitivity analyses, and conducting validation testing. The Authority addressed each of these key components in developing the ridership and revenue forecasts for the April 2012 revised business plan. While addressing these components does not assure that ridership and revenue forecasts are accurate, it does provide greater assurance that the Authority's processes for developing these forecasts are reasonable. In our ongoing review of the California high-speed rail project, we are evaluating the extent to which the Authority's ridership and revenue forecasts followed best practices when completing each of these tasks. We will present the results of our assessment of the Authority's process in our 2013 report on this subject. Among the other challenges facing the project, which may increase the risk of project delays, are potential legal challenges associated with environmental laws. Under the National Environmental Policy Act (NEPA) and the California Environmental Quality Act (CEQA), government agencies funding a project with significant environmental effects are required to prepare environmental impact statements or reports (EIS/EIR) that describe these impacts. Under CEQA, an EIR must also include mitigation measures to minimize significant effects on the environment. The Authority is taking a phased approach to comply with NEPA and CEQA by developing EIS/EIRs both for the project as a whole and for particular portions of the project.
To date, program-level EIS/EIRs have been prepared for the project as a whole (August 2005) and for the Bay Area to Central Valley segment (initially certified by the Authority in July 2008, with a revised final EIS/EIR issued in April 2012). A project-level EIS/EIR has been prepared for the Merced-to-Fresno portion of the project (issued April 2012), and a draft EIS/EIR has been prepared for the Fresno-to-Bakersfield portion of the project (initial draft issued in August 2011 and revised final issued July 2012). Environmental concerns have been the subject of legal challenges. For example, a lawsuit was filed in October 2010 against the Authority challenging its decision to approve the Bay Area to Central Valley segment on the grounds that the EIR was inadequate. Several lawsuits have been filed, and these cases are still pending. The project also faces the potential challenge of acquiring rights-of-way. Timely right-of-way acquisition will be critical since some properties will be in priority construction zones. Property to be acquired will include homes, businesses, and farmland. Not having the needed right-of-way could cause delays as well as add to project costs. Acquisition of right-of-way will begin with the first construction segment, which has been subdivided into four design-build construction packages. There are approximately 1,100 parcels to be acquired for this segment, all of which are in California's Central Valley. In September 2012, the Authority issued a Request for Proposals to obtain the services of one or more contractors to provide right-of-way and real property services. The Authority estimated in its April 2012 revised business plan that the purchase or lease of real estate for the phase I blended system will cost between $3.6 billion and $3.9 billion (in 2011 dollars). 
According to the Authority, the schedule for right-of-way acquisition will be phased, based on construction priorities, with delivery of all required parcels in the Central Valley no later than spring 2016. Acquisition is anticipated to begin in February 2013. The timely acquisition of rights-of-way may be affected by at-risk properties--that is, those properties that the Authority considers at risk for timely delivery to design-build contractors for construction. There could be a significant number of at-risk properties. For example, Authority officials told us there are about 400 parcels in the first construction package, about 200 of which are in priority construction zones. Of these, about 100 parcels (50 percent) are considered to be potentially at risk for timely delivery. Since right-of-way acquisition has not yet begun, the extent to which at-risk properties will ultimately affect project schedules or costs is not known. However, there may be an increased risk given the initial high percentage of at-risk parcels. Chairman Mica, Ranking Member Rahall, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Committee may have at this time. For future questions about this statement, please contact Susan Fleming, Director, Physical Infrastructure, at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Paul Aussendorf (Assistant Director), Russell Burnett, Delwen Jones, Richard Jorgenson, Jason Lee, James Manzo, Maria Mercado, Josh Ormond, Paul Revesz, Max Sawicky, Maria Wallace, and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The California high-speed rail project is the single largest recipient of federal funding from the Federal Railroad Administration's (FRA) High Speed Intercity Passenger Rail (HSIPR) grant program. The 520-mile project (see map) would link San Francisco to Los Angeles at an estimated cost of $68.4 billion. Thus far, FRA has awarded $3.5 billion to the California project. The Authority will have to continue to rely on significant public-sector funding, in addition to private funding, through the project's anticipated completion date in 2028. This testimony is based primarily on GAO's ongoing review of the California high-speed rail project and discusses GAO's preliminary assessment of (1) the reliability of the project's cost estimates developed by the Authority and (2) key challenges facing the project. As part of this review, we obtained documents from and conducted interviews with Authority officials, its contractors, and other state officials. GAO analyzed the extent to which project cost estimates adhered to best practices contained in GAO's Cost Estimating and Assessment Guide (Cost Guide), which identifies industry best practices for ensuring that cost estimates are comprehensive, accurate, well documented, and credible--the four principal characteristics of a reliable cost estimate. GAO also reviewed project finance plans as outlined in the Authority's April 2012 revised business plan. To identify key challenges, GAO reviewed pertinent legislation, federal guidelines, and best practices related to ridership and revenue forecasting, and interviewed, among others, federal, state, and local officials associated with the project. Based on an initial evaluation of the California High Speed Rail Authority's (Authority) cost estimates, GAO found that they exhibit certain strengths and weaknesses when compared to best practices in GAO's Cost Guide. Adherence to the Cost Guide reduces the risk of cost overruns and missed deadlines. 
GAO's preliminary evaluation indicates that the cost estimates are comprehensive in that they include major components of construction and operating costs. However, they are not based on a complete set of assumptions, such as how the Authority expects to adapt existing high-speed rail technology to the project in California. The cost estimates are accurate in that they are based on the most recent project scope, include an inflation adjustment, and contain few mathematical errors. And while the cost estimates' methodologies are generally documented, in some cases GAO was unable to trace the final cost estimate back to its source documentation and could not verify how certain cost components, such as stations and trains, were calculated. Finally, the Authority evaluated the credibility of its estimates by performing both a sensitivity analysis (assessing changes in key cost inputs) and an independent cost estimate, but these tests did not encompass the entire cost estimate for the project. For example, the sensitivity analysis of the construction cost estimate was limited to 30 miles of the first construction segment. The Authority also did not conduct a risk and uncertainty analysis to determine the likelihood that the estimates would be met. The Authority is currently taking some steps to improve its cost estimates. The California high-speed rail project faces many challenges. Chief among these is obtaining project funding beyond the first 130-mile construction segment. While the Authority has secured $11.5 billion from federal and state sources, it needs almost $57 billion more. Moreover, the HSIPR grant program has not received federal funding for the last 2 fiscal years, and future federal funding is uncertain. The Authority is also challenged to improve its ridership and revenue forecasts. Factors such as limited data and information make developing such forecasts difficult. 
Finally, the environmental review process and acquisition of necessary rights-of-way for construction could increase the risk of the project's falling behind schedule and increasing costs.
Task Force Hawk deployed to Albania in April 1999 as part of Operation Allied Force. Originally, the task force was to deploy to the Former Yugoslav Republic of Macedonia. However, the government of Macedonia would not allow combat operations to be conducted from its territory. The United States subsequently obtained approval from the government of Albania to use its territory to base Task Force Hawk and conduct combat operations. (See fig. 1.) Albania did not have any previously established U.S. military base camps as Macedonia did and was not viewed as having a stable security environment. According to Army officials, the size of the task force had to be increased to provide more engineering capability to build operating facilities and provide force protection. The task force was a unique Army organization. It was composed of 1 attack helicopter battalion with 24 Apache attack helicopters; 1 Corps aviation brigade with 31 support helicopters; 1 Multiple Launch Rocket System battalion with 27 launchers; a ground maneuver element for force protection; and other headquarters and support forces. (See fig. 2 for a picture of an Apache helicopter.) It ultimately totaled about 5,100 personnel. Its planned mission was to conduct deep attacks against Serbian military and militia forces operating in Kosovo using Apache helicopters and Multiple Launch Rocket Systems. The task force deployed to Albania and trained for the mission but was not ordered into combat. Ultimately, its focus changed to using its radar systems to locate enemy forces for targeting by other aircraft. Additionally, the task force assumed responsibility for the protection of all U.S. forces operating out of Tirana Airfield, its staging base, which included Air Force personnel providing humanitarian assistance to Kosovo refugees. 
Concerned about the combat readiness of Apache helicopters and their experience in Task Force Hawk, the House Armed Services Committee's Subcommittee on Readiness held a hearing on July 1, 1999. That hearing focused on pilot shortages, the lack of pilot proficiency, and unit combat training. In addition, it discussed equipment that was not fully fielded at the time of the operation, such as aircraft survivability equipment and communication equipment. Our work was designed to address other matters associated with Task Force Hawk and how the services plan to resolve them for future operations. Doctrine is the fundamental principle by which the military services guide their actions in support of national objectives. It provides guidance for planning and conducting military operations. In the Army, doctrine is communicated in a variety of ways, including manuals, handbooks, and training. Joint doctrine, which applies to the coordinated use of two or more of the military services, is similarly communicated. Doctrine provides commanders with a framework for conducting operations while allowing flexibility to adapt operations to specific circumstances. According to Army and Joint Staff doctrine officials, the concept of operation that was planned to be used by Task Force Hawk, the use of Apache helicopters for a deep attack mission as part of an air campaign, fell within established Army and joint doctrine. Typically, attack helicopters are used in conjunction with Army ground forces to engage massed formations of enemy armor. They were used in this manner in the Gulf War. In the Kosovo air campaign, Task Force Hawk's planned deep attacks differed in that they were intended to be part of an air campaign, not an Army-led combined arms land campaign. Additionally, the aircraft's planned attacks principally would have engaged widely dispersed and camouflaged enemy ground forces instead of massed formations. 
According to Army doctrine officials, doctrine is broad and flexible enough to allow a combatant commander to employ his assets in the manner that was planned for the task force. However, Army officials agree that this planned usage differed from the employment typically envisaged in Army doctrine. Furthermore, Army officials said that the Task Force Hawk experience was not something the Army routinely trained for and was considered to be an atypical operation. Although Task Force Hawk's mission and operations were consistent with both Army and joint doctrine in the broadest sense, changes to doctrine at both the Army and joint levels are being made that will address some of the operation's lessons learned. A total of 19 Army doctrine publications will be developed or modified to better address the experience gained from Task Force Hawk. Examples of new or revised doctrine include a new handbook on deep operations; an update to the Army's keystone warfighting doctrinal publication on conducting campaigns, major operations, battles, engagements, and operations other than war; and an update to the Army aviation brigade field manual that expands the role of aviation brigades and task forces with a heavier emphasis on tactics, techniques, and procedures for task force, combined arms, and joint operations. Modifications to Army doctrine are being made as part of the ongoing, established process for reviewing and revising doctrinal publications. A total of five joint doctrine publications will be developed or modified based at least in part on the Task Force Hawk experience. A new joint publication is being developed to cover the role of the Joint Force Land Component Commander, detailing that commander's responsibilities in both "supported" and "supporting" roles. (See our discussion of this role in the Joint Operations section of this report.) 
Updates to four remaining joint publications, including close air support and fire support, will be made during the normal 21-month joint doctrine publication and review cycle. The Army has a large effort underway to collect and resolve lessons learned pertaining to Task Force Hawk. A total of 146 Task Force Hawk lessons learned were collected from three different sources. The U.S. Army Europe developed 64 lessons and forwarded them to the Army's Deputy Chief of Staff for Operations and Plans for remedial action. The Army's Training and Doctrine Command developed a listing of 76 lessons and has assigned them to their different proponent schools for remedial action. Hundreds of joint action items were collected at the European Command on Operation Allied Force and forwarded to the Joint Warfighting Center. Of these items, six were specifically associated with Task Force Hawk and were sent to the Joint Staff for remedial action. We analyzed the 146 Task Force Hawk lessons and determined that a number of them submitted by different organizations were the same. Of the 76 lessons raised by the Training and Doctrine Command, 38 were similar to those submitted by U.S. Army Europe. Of the six European Command lessons, we determined that one was similar to an issue submitted by U.S. Army Europe. Deleting the 39 duplicates resulted in a total of 107 unique lessons submitted for remedial action. We categorized the 107 lessons into five broad themes that in our judgment characterize the type of needed remedial action. The five themes are as follows. The need for revisions to Army and joint doctrine, as discussed earlier. We identified 19 such lessons. See appendix I. Improvements in command, control, communications, computers, and intelligence (C4I) equipment or procedures. We identified 20 such lessons. See appendix II. Areas needing additional training. We identified 30 such lessons. See appendix III. The need for additional capability in areas other than C4I. 
We identified 24 such lessons. See appendix IV. Potential force structure changes. We identified 14 such lessons. See appendix V. We determined the status of each of the 107 lessons learned as of January 2001. We did not evaluate the merit of the actions proposed or completed. We placed them into one of two status categories: Recommended for closure: We placed 47 items in this category. However, there are varying degrees of closure within this category. First, there are items that specifically have had actions completed, such as procuring night vision goggles for Apache pilots. According to Army officials, the goggles have been procured and fielded. Twenty-three of the 47 lessons fell into this subgroup. Second, there are lessons that have had actions taken, but will require a long lead-time for implementation, such as the procurement of survival radios and a deployable flight mission rehearsal system for aviation units. For example, while approval for the survival radios has been obtained, they will not begin fielding until fiscal year 2003. In addition, the Army has recommended an interim fix for a mission rehearsal system, but it is costly. The far-term solution is the joint mission planning system, which will not be fielded until 2007. Fifteen of the 47 lessons fell into this subgroup. Finally, there are items that Army officials are recommending for closure because, upon further review, they determined the lessons should not have been submitted or events have overtaken the initial lesson and they are no longer applicable. The remaining nine lessons fell into this subgroup. Lessons learned that were recommended for closure are indicated as such in appendixes I-V. In progress: We placed 60 lessons in this category. These items are still considered open issues by the Army officials tracking Task Force Hawk lessons learned and they have been assigned to responsible bodies for resolution. 
Seventeen of the 60 in-progress lessons reside with the Department of the Army--Headquarters, 10 with the Joint Staff or Joint Forces Command, 27 with the Army's Training and Doctrine Command, and 6 with U.S. Army Europe. Many issues remain open because they require efforts that are being incorporated into much larger overall Army projects, such as transformation or Flight School XXI, that will require a much longer time frame to implement. Other lessons learned remain open because efforts to address them are just beginning. Lessons learned where solutions are in progress are indicated as such in appendixes I-V. Figure 3 shows the 107 lessons learned by category and by status grouping. The Commanding General of U.S. Army Europe has emphasized the need to capitalize on the lessons learned from Kosovo and to focus on partnership with the Air Force. He is personally involved with the lessons learned process and considers the process and follow-up a personal commitment to U.S. Army Europe soldiers. During our visit to U.S. Air Forces in Europe, we were told that their commanding general has also placed a high priority on working together with the Army to address the lessons learned in conducting joint operations. While both commands have taken steps to resolve the issues, some of the remedial actions will require years to complete. In addition, over time the services assign new commanders and reassign the current commanders. We reported in 1999 that while the Army had established a program to validate that remedial actions on past lessons learned were implemented, the program has not been very successful. Two key themes emerged from the lessons learned. One was the need for the Army and the Air Force to work together more effectively in joint operations. The other was the interoperability of the two services' command, control, communications, computers, and intelligence equipment. 
The Task Force Hawk experience highlighted difficulties in several areas pertaining to how the Army operates in a joint environment. One area was determining the most appropriate structure for integrating Army elements into a joint task force. Doctrine typically calls for a Joint Force Land Component Commander or an Army Force Commander to be a part of a joint task force with responsibility for overseeing ground elements during an operation. The command structure for the U.S. component of Operation Allied Force did not have a Joint Force Land Component Commander. Both Army officials and the Joint Task Force Commander in retrospect believe that this may have initially made it more difficult to integrate the Army into the existing joint task force structure. The lack of an Army Force Commander and his associated staff created difficulties in campaign planning because the traditional links with other joint task force elements were initially missing. These links would normally function as a liaison between service elements and coordinate planning efforts. Over time, an ad hoc structure had to be developed and links established. The Army has conducted a study to develop a higher headquarters design that would enable it to provide for a senior Army commander in a future Joint Task Force involving a relatively small Army force. This senior commander would be responsible for providing command, control, communications, computers, and intelligence capability to the joint task force. The study itself is complete, but testing of the design in an exercise is not scheduled until February 2002. A second area that the Army had difficulty with during its mission training was including its aircraft in the overall planning document that controls air attack assets. The plan, called an air tasking order, assigns daily targets or missions to subordinate units or forces. 
Air Force officials in Europe told us that they had difficulty integrating the Army's attack helicopters into the air tasking order. According to U.S. Army Europe officials, there were no formalized procedures for how to include Army aviation into this planning document and they had little or no training on how to perform this function. The Army and the Air Force in Europe are developing joint tactics, techniques, and procedures for integrating Army assets into the air tasking order and are beginning to include this process in their joint exercises. A third area that the Army and the Air Force had difficulty with was targeting. As previously discussed, once the decision was made that Task Force Hawk would not conduct deep attacks, its resources were used to locate targets for the Air Force. According to U.S. Army Europe documentation, Army analysts in Europe had little or no training in joint targeting and analyzing targets in a limited air campaign. As a result, in the early days of the Army targeting role, mobile targets nominated by the Army did not meet Operation Allied Force criteria being used by the Air Force for verifying that targets were legitimate and, therefore, were not attacked. As the operation progressed, the two services learned each other's procedures and criteria and worked together better. The Army and the Air Force in Europe are now formalizing the process used and are developing tactics, techniques, and procedures for attacking such targets and sharing intelligence. They are including these new processes in their joint exercises. The second major theme that emerged from the lessons learned was the interoperability of the command, control, communications, computers, and intelligence equipment. The Army is transitioning from a variety of battlefield command systems that it has used for years to a digitized suite of systems called the Army Battlefield Command system. 
During Operation Allied Force, Army elements used a variety of older and newer battlefield command systems that were not always interoperable with each other. The mission planning and targeting system used by the Apache unit in Albania during Task Force Hawk was one of the older systems and was not compatible with the system being used by the Army team that provided liaison with the Air Force at the air operations center. The Army liaison team used the new suite of Army digitized systems that will ultimately be provided to all Army combat forces. However, at the time of Task Force Hawk, the suite of systems was not fully fielded and not all the deployed personnel were trained on the new systems. Consequently, the Apache unit in Albania used the older systems, making it difficult to communicate with the liaison team and requiring manual rather than electronic transfer of data. The older mission planning and targeting system used by the Apache unit in Albania was also not compatible with the Air Force system. The Air Force has a single digital battlefield command system. The Apache unit in Albania, using its older equipment, could not readily share data directly with the Air Force. In addition, the intelligence system being used by the Army at the unit level and at the liaison level could not directly exchange information with the Air Force. As was the case within the Army, personnel had to manually transfer data. This was time-consuming and introduced the potential for transcription errors. The Army is continuing to field the new suite of systems. We have previously reported that the schedules for fielding these systems have slipped and the Army in Europe is not scheduled to receive the complete suite of new systems before 2005. When it is eventually fielded, this new suite of systems is expected to reduce, if not eliminate, the incompatibility between the Army's and the Air Force's systems. The commanding generals of the U.S. Army and U.S. 
Air Forces in Europe have made resolving the lessons learned identified from Task Force Hawk a high priority. They have already made progress in taking remedial action on a number of the lessons. However, many of the lessons will require a significant amount of time, sometimes years, for implementation. In addition, over time senior military leadership changes and we have found in the past that the Army has not been very successful in ensuring that remedial actions are brought to closure. To ensure that the Army maintains the momentum to take actions to resolve Task Force Hawk lessons learned, the Congress may want to consider requiring the Army to report on remedial actions taken to implement Task Force Hawk lessons. This could be in the form of periodic progress reports or another appropriate reporting approach that would meet congressional oversight needs. To determine how Task Force Hawk's concept of operation compared to existing Army and joint doctrine, we reviewed Army and Joint Staff doctrine publications and were briefed on existing deep attack doctrine at the Army's Training and Doctrine Command and the Army's Aviation School. We then compared this information to Task Force Hawk's concept of operation. We discussed which doctrine publications would be revised based on the Task Force Hawk experience with officials at the Army's Training and Doctrine Command and the Joint Warfighting Center. To determine the number of Task Force Hawk lessons learned, we collected and reviewed Army lessons learned from the Army's Deputy Chief of Staff for Operations and Plans, the Army's Training and Doctrine Command, and the Center for Army Lessons Learned. We collected and reviewed joint lessons learned at the Office of the Joint Chiefs of Staff and the Joint Warfighting Center. 
To obtain an understanding of the lessons and their status, we discussed them with individuals directly involved with the Task Force Hawk operation or those directly involved in addressing the individual lessons. We discussed the lessons with individuals at the Army's Aviation School, the Army's Artillery School, U.S. Army Europe, U.S. Air Forces in Europe, and the U.S. European Command. To determine how well the Army and the Air Force worked together in Operation Allied Force, we collected documentation on joint operations and interoperability of equipment and interviewed personnel at the U.S. European Command, U.S. Army Europe, and U.S. Air Forces in Europe. We conducted our review from June 2000 through January 2001 in accordance with generally accepted government auditing standards. We reviewed the information in this report with Department of Defense (DOD) officials and made changes where appropriate. DOD officials agreed with the facts in this report. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Greg Dahlberg, Acting Secretary of the Army; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. If you have any questions, please call me at (757) 552-8100. Key contributors to this report were Steve Sternlieb, Laura Durland, and Frank Smith. Training and Doctrine Command (TRADOC): Review Field Manual (FM) 100-17--Mobilization, Deployment, Redeployment and Demobilization--to ensure that it meets the requirements of a strategically responsive Army. Review FM 100-17 for joint doctrine disconnects and implement the required changes to the pertinent field manuals. Review FM 100-17 and FM 100-17-4 to make sure the responsibilities of the major commands are adequately discussed. Conduct a mission analysis to determine if doctrine supports the goal of sustaining overmatch capabilities across the spectrum of conflict. 
Determine the operational impact of the Roberts Amendment, which prohibits use of funds for the deployment of U.S. armed forces to Yugoslavia, Albania, and Macedonia without congressional consultation, on alliance and coalition warfare. Recommended closed but requiring a long implementation period: Revise publication FM 100-6, entitled Information Operations. Accelerate the implementation of doctrine and associated tactics, techniques, and procedures related to the FM 3-13 action plan. Peace support operations doctrine needs to be updated and more fully developed. General support aviation doctrine and tactics, techniques, and procedures need to be developed and/or updated. There is no available mission-training plan for the Tactical Terminal Control System. Aviation war-fighting doctrine for unmanned aerial vehicle employment with Army aviation is needed. Review the need to develop multi-service tactics, techniques, and procedures for Army aviation to support other services or functional components. Refine doctrine to enable better integration of Army units into joint command and control architecture. Develop joint tactics, techniques, and procedures for the employment of aircraft survivability equipment. Revise publication FM 100-5, entitled Operations. Headquarters, Department of the Army (HQDA): Revise publication FM 100-1, entitled The Army. Revise doctrine to include the use of echelons above division elements in the deep attack mission. The All Source Analysis System, which gathers and fuses battlefield information to produce a correlated threat picture, is incompatible with other systems. Accelerate the timetable for fielding the next generation digital series of communications equipment. A 10-year fielding cycle is too slow. Improved survival radios are needed for aviation units. Upgrade Army aircraft communications capabilities to include satellite communication capabilities. 
The Army requires an airborne battlefield command and control center to conduct deep attack missions over extended distances. Joint intelligence tactics, techniques, and procedures are lacking. Joint analysis is lacking. The primary problem in joint intelligence operations is a lack of service/joint interoperability of intelligence systems. Additional facilities and capabilities to increase bandwidth within the intelligence and signal communities are needed. Joint intelligence, doctrine, and training need to be better coordinated and integrated. Second generation forward-looking infrared sensors are needed. The Dual Datalink, which supports intelligence operations, must be replaced. The Army space support team needs improved technologies, including a direct satellite downlink capability, to provide satellite imagery to the warfighter. Command, control, communications, computers, and intelligence operations, organizations, and materiel for the Army in a supporting role need to be analyzed. (TRADOC has expanded this single issue to 32 separate issues.) Determine the appropriate design and augmentation required to enable a division or corps to act as an Army Force Commander, which would provide command, control, communications, computers, and intelligence to the forces. The current Battle Command Training Program fails to adequately address the joint/combined operational environment of current and future contingencies. Increased individual, crew, and junior leader development training is needed. Platoon Leader/Company Commander certification and training are inadequate as currently executed. Increase the level of survival, evasion, resistance, and escape training. A joint/combined multinational training event is required. Increased officer, noncommissioned officer, and advanced individual training is needed. Revise training to ensure new Apache helicopter pilots are basic mission qualified. There is a need for signal intelligence survey teams in the Army. 
- Fully fund ammunition requirements for appropriate aviator training, to include advanced gunnery.
- Provide a realistic radar threat generator for flight training; the current system replicates only a minimal number of threat systems.
- U.S. Army Europe needs to continue efforts to remove, extend, or modify the current night flight, frequency management, and radar utilization restrictions in Germany to support training.
- Simplify procedures for obtaining identification friend or foe interrogation training.
- Require and resource for each attack squadron a complete Combat Maneuver Training Center force-on-force rotation.
- Emphasize how the major commands fit into the Joint Deployment Process.
- The services need to continually reinforce and train on joint deep operations in order to maximize warfighting capabilities.
- Integrate high gross weight operations and complex terrain training into simulation mission scenarios.
- Utilize simulation to drive training scenarios.
- The aviation mission planning system's rehearsal tool for individual and crew utilization does not meet training requirements.
- Review and ensure applicability of digitized systems.
- Develop a deployment training exercise with the objectives of understanding the deployment process and developing synchronized movement plans.
- The Army needs to continue to support and deploy systems, such as the Deployable Weather Satellite Workstation, that autonomously process weather satellite imagery and data.

Recommended closed but requiring a long implementation period:
- Field a deployable flight mission rehearsal system.
- Field a night vision system compatible with nuclear, biological, and chemical masks.
- Develop and field a new time-phased force and deployment data system.
- Upgrade Army aviation mission simulators.
- Procure and field the aviation combined-arms training suite into brigade and below training.
- Develop, resource, train, and sustain a combat search and rescue capability.
- The Apache helicopter requires extended-range/self-deployment fuel tanks that are crashworthy.
- Upgrade Army aviation aircraft survivability equipment.
- Modify the Apache Longbow to meet specific theater requirements, to include better night vision systems, more powerful engines, increased communications, and better aircraft survivability equipment.
- The Army requires a self-contained lethal and non-lethal joint suppression of enemy air defenses capability.
- Field additional tactical engagement simulation systems to the Combat Maneuver Training Center, in addition to what is currently funded for the Apache Longbow.
- Fund the Apache helicopter self-deployment capability, to include instrument flight rules capability and an approved global positioning system.
- Fund the procurement of aviation life support equipment for over-water operations.
- The closed-loop facility at Ramstein, Germany, requires additional equipment for major strategic air deployments.
- U.S. Army Europe requires an alternate strategic deployment airfield.
- Fund Robertson fuel tanks and rotor blade anti-ice/de-ice capability.
- Continue research and development of imagery transmission systems.
The Army deployed its team, called Task Force Hawk, to participate in a Kosovo combat operation known as Operation Allied Force. This report (1) examines how Task Force Hawk's concept of operation compared to Army and joint doctrine, (2) reviews the lessons learned identified from the operation and determines the status of actions to address those lessons, and (3) examines the extent to which the Army and the Air Force were able to operate together as a joint force. GAO concludes that Task Force Hawk's deep attacks against Serbian forces in Kosovo were consistent with doctrine, but were not typical in that the task force was supporting an air campaign rather than performing its more traditional role of being used in conjunction with Army ground forces to engage massed formations of enemy armor. The Army identified 107 items that require remedial action. As of January 2001, 47 of the 107 items had been recommended for closure. Action is in process for the remaining 60 lessons. Finally, the Army and the Air Force experienced significant problems in their ability to work together jointly and in the interoperability of the command, control, communications, computers, and intelligence equipment used during the operation. The Army is working on both issues aggressively. However, it will take time for results to be seen.
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to discuss the subject of internal control. Its importance cannot be overstated, especially in the large, complex operating environment of the federal government. Internal control is the first line of defense against fraud, waste, and abuse and helps to ensure that an entity's mission is achieved in the most effective and efficient manner. Although the subject of internal control usually surfaces for discussion after improprieties or inefficiencies are found, good managers are always aware of and seek ways to help improve operations through effective internal control. As you requested, my testimony today will discuss the following questions: (1) What is internal control? (2) Why is it important? and (3) What happens when it breaks down? Internal control has been defined as "the plan of organization and methods and procedures adopted by management to ensure that resource use is consistent with laws, regulations, and policies; that resources are safeguarded against waste, loss, and misuse; and that reliable data are obtained, maintained, and fairly disclosed in reports." Internal control should not be looked upon as separate, specialized systems within an agency. Rather, internal control should be recognized as an integral part of each system that management uses to regulate and guide its operations. Internal control is synonymous with management control in that the broad objectives of internal control cover all aspects of agency operations. Although ultimate responsibility for good internal control rests with management, all employees have a role in the effective operation of internal control that has been set by management. All internal controls have objectives and techniques: in practice, internal control starts with defining entitywide objectives and then more specific objectives throughout the various levels in the entity, and techniques are then implemented to achieve those objectives.
In its simplest form, internal control is practiced by citizens in the daily routine of everyday life. For example, when you leave your home and lock the door or when you lock your car at the mall or on a street, you are practicing a form of internal control. The objective is to protect your assets against undesired access, and your technique is to physically secure your assets with locks. In another routine, when you write a check, you record the check in the ledger or on your personal computer. The objective is to control the money in your checking account by knowing the balance. The technique is to document the check amount and the balance. Periodically, you compare the checking account transactions and balances you have recorded with the bank statement. Your objective is to ensure the accuracy of your records to avoid costly mistakes. Your technique is to perform the reconciliation. These same types of concepts form the basis for internal control in business operations and the operation of government. The nature of their operations is, of course, significantly larger and more complex, as is the inherent risk of ensuring that assets are safeguarded, laws and regulations are complied with, and data used for decision-making and reporting are reliable. To focus the discussion on objectives and techniques, the acquisition, receipt, use, and disposal of property, such as computer equipment, can illustrate the practice of internal control in the operation of government activities. Internal control at the activity level, such as procuring equipment, should be preceded, at a higher organizational level, by policy and planning control objectives and control techniques that govern overall agency operations in achieving mission objectives. Examples of high-level control objectives that logically follow a pattern include the following:

- The mission of the agency should be set in accordance with laws, regulations, and administration and management policy.
- Agency components should be defined in accordance with the overall mission of the agency.
- Missions of the agency and components should be documented and communicated to agency personnel.
- Plans and budgets should be developed in accordance with the missions of the agency and its components.
- Policies and procedures should be defined and communicated to achieve the objectives defined in plans and budgets.
- Authorizations should be in accordance with policies and procedures.
- Systems of monitoring and reporting the results of agency activities should be defined.
- Transactions should be classified or coded to permit the preparation of reports to meet management's needs and other reporting requirements.
- Access to assets should be permitted only in accordance with laws, regulations, and management's policy.

Examples of control techniques to help achieve these objectives include the following:

- agency and component mission statements approved by management and its legal counsel;
- training of personnel in mission and objectives;
- long- and short-range plans developed related to budgets;
- monitoring of results against plans and budgets;
- policies and procedures defined and communicated to all levels of the organization and periodically reviewed and revised based on internal reviews;
- authorizations defined, controls set to ensure authorizations are made, and classifications of accounts set to permit the capture and reporting of data to prepare required reports; and
- physical restrictions on access to assets and records, and training in security provided to employees.

The policy and planning control objectives and techniques provide a framework to conduct agency operations and to account for resources and results.
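The checkbook reconciliation routine described earlier is, at bottom, a comparison of two records of the same transactions. A minimal sketch in Python (the data, record layout, and function name are illustrative assumptions, not drawn from the testimony):

```python
def reconcile(register, statement):
    """Compare a personal check register against a bank statement.

    register and statement are dicts mapping check number -> amount.
    Returns (uncleared, mismatched): checks not yet on the statement,
    and checks whose recorded amount differs from the bank's record.
    """
    uncleared = {num: amt for num, amt in register.items()
                 if num not in statement}
    mismatched = {num: (amt, statement[num])
                  for num, amt in register.items()
                  if num in statement and amt != statement[num]}
    return uncleared, mismatched

# Illustrative data: check 103 has not cleared; check 102 was misrecorded.
register = {101: 50.00, 102: 19.95, 103: 200.00}
statement = {101: 50.00, 102: 91.95}
uncleared, mismatched = reconcile(register, statement)
print(uncleared)    # {103: 200.0}
print(mismatched)   # {102: (19.95, 91.95)}
```

The objective (accurate records) and the technique (periodic comparison and resolution of differences) are the same whether the asset is a personal checking account or an agency appropriation.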
Without that framework, administration and legislative goals may not be achieved; laws and regulations may be violated; operations may not be effective and efficient and may be misdirected; unauthorized activities may occur; inaccurate reports to management and others may occur; fraud, waste, and abuse are more likely to occur and be concealed; assets may be stolen or lost; and ultimately the agency is in danger of not achieving its mission. Within that framework, controls over specific activities are needed to achieve intended results. The procurement and management of computer equipment is an example of such a specific activity. Objectives and techniques should be established for each activity's specific control. As examples of control objectives, vendors should be approved in accordance with laws, regulations, and management's policy, as should the types, quantities, and approved purchase prices of computer equipment. As examples of related control techniques, criteria for approving vendors should be established, approved vendor master files should be controlled, and purchases should be governed by criteria such as obtaining competitive bids and setting specifications for the equipment to be procured. Likewise, control objectives should be set for the receiving process. For example, only equipment that meets contract or purchase order terms should be accepted, and equipment accepted should be accurately and promptly reported. Related control techniques include (1) detailed comparison of equipment received to a copy of the purchase order, (2) prenumbered controlled receiving documents that are accounted for, and (3) maintenance of receiving logs. Throughout the purchasing and receiving of equipment there needs to be appropriate separation of duties and interface with the accounting function to achieve funds control, timely payments, and inventorying and control of equipment received. Equipment received should be safeguarded to prevent unauthorized access and use.
For example, in addition to physical security, equipment should be tagged with identification numbers and placed into inventory records. Equipment placed into service should be issued only to authorized users, and records of the issuances should be maintained to achieve accountability. Further, physical inventories should be taken periodically and compared with inventory records. Differences between counts and records should be resolved in a timely manner and appropriate corrective actions taken. Also, equipment should be retired from use in accordance with management's policies, including establishing appropriate safeguards to prevent information that may be stored in the equipment from being disclosed without authorization. Even well-designed internal control cannot provide absolute assurance, however, because errors may result from misunderstanding, fatigue, or carelessness. Also, procedures whose effectiveness depends on segregation of duties can be circumvented by collusion. Similarly, management authorizations may be ineffective against errors or fraud perpetrated by management. In addition, the standard of reasonable assurance recognizes that the cost of internal control should not exceed the benefit derived. Reasonable assurance equates to a satisfactory level of confidence under given considerations of costs, benefits, and risks. The cost of fraud, waste, and abuse cannot always be measured in dollars and cents. Such improper activities erode public confidence in the government's ability to efficiently and effectively manage its programs. Managers at a number of federal government agencies are faced with tight budgets and fewer personnel. In such an environment, related operating factors, such as executive and middle management turnover and the diversity and complexity of government operations, can provide a fertile environment for internal control weaknesses and the resulting undesired consequences. It has been almost 50 years since the Congress formally recognized the importance of internal control.
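The periodic comparison of physical counts with inventory records described above can be sketched in a few lines of Python (the asset tags, record layout, and function name are assumptions for illustration):

```python
def inventory_differences(records, counts):
    """Compare property records against a physical count.

    records: dict mapping asset tag -> quantity per the property records.
    counts:  dict mapping asset tag -> quantity actually counted.
    Returns a sorted list of (tag, recorded, counted) for every
    difference, including assets on the records but not found during
    the count, and counted assets missing from the records.
    """
    tags = set(records) | set(counts)
    return [(tag, records.get(tag, 0), counts.get(tag, 0))
            for tag in sorted(tags)
            if records.get(tag, 0) != counts.get(tag, 0)]

# Illustrative data: one possible loss (PC-0002), one asset the records
# never captured (PC-0004), and one recorded asset not found (PC-0003).
records = {"PC-0001": 1, "PC-0002": 1, "PC-0003": 1}
counts = {"PC-0001": 1, "PC-0002": 0, "PC-0004": 1}
for tag, recorded, counted in inventory_differences(records, counts):
    print(tag, recorded, counted)
```

Each item the function returns is a difference to be investigated and resolved in a timely manner, as the standard requires; the comparison itself is the control technique, and resolution of the differences is the corrective action.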
The Accounting and Auditing Act of 1950 required, among other things, that agency heads establish and maintain effective internal controls over all funds, property, and other assets for which an agency is responsible. However, the ensuing years up through the 1970s saw the government experience a crisis of poor controls. To help restore confidence in government and to improve operations, the Congress passed the Federal Managers' Financial Integrity Act of 1982. The Integrity Act required, among other items, that:

- we establish internal control standards that agencies are required to adhere to;
- the Office of Management and Budget (OMB) issue guidelines for agencies to follow in annually assessing their internal controls;
- agencies annually evaluate their internal controls and prepare a statement to the President and the Congress on whether their internal controls comply with the standards issued by GAO; and
- agency reports include material internal control weaknesses identified and plans for correcting the weaknesses.

OMB has issued agency guidance that sets forth the requirements for establishing, periodically assessing, correcting, and reporting on controls required by the Integrity Act. Regarding the identification and reporting of deficiencies, OMB's guidance states that "a deficiency should be reported if it is or should be of interest to the next level of management. Agency employees and managers generally report deficiencies to the next supervisory level, which allows the chain of command structure to determine the relative importance of each deficiency."
The guidance further states that "a deficiency that the agency head determines to be significant enough to be reported outside the agency (i.e., included in the annual Integrity Act report to the President and the Congress) shall be considered a 'material weakness.'" The guidance encourages reporting of deficiencies by recognizing that such reporting reflects positively on the agency's commitment to recognizing and addressing management problems and, conversely, that failing to report a known deficiency reflects adversely on the agency. The specific internal control standards include separation of duties between authorizing, processing, recording, and reviewing transactions; qualified and continuous supervision to ensure that control objectives are achieved; and limiting access to resources and records to authorized persons to provide accountability for the custody and use of resources. Finally, the audit resolution standard requires managers to promptly evaluate findings, determine proper resolution, and establish corrective action or otherwise resolve audit findings. Attachment I provides a complete definition of the standards, and Standards for Internal Controls in the Federal Government provides additional explanation of them. In addition, auditors performing audits under the Chief Financial Officers Act report whether each agency is maintaining financial management systems that comply substantially with federal financial management systems requirements, federal accounting standards, and the government's standard general ledger at the transaction level. Our report, The Statutory Framework for Performance-Based Management and Accountability (GAO/AIMD-98-52, January 28, 1998), provides more detailed information on the purpose, requirements, and implementation status of these acts. That report also refers to a number of other critically important statutes that address debt collection, credit reform, prompt pay, inspectors general, and information resources management.
Although these acts address specific problem areas, sound internal controls are an essential factor in the success of these statutes. For example, the Results Act focuses on results through strategic and annual planning and performance reporting. Sound internal control is critical to effectively and efficiently achieving management's plans and to obtaining accurate data to support performance measures. Weak internal controls pose a significant risk to government agencies. History has shown that serious neglect will result in losses to the government that can total millions, and even billions, of dollars over time. As previously mentioned, the resulting loss of confidence in government can be equally serious. Although examples of poor internal controls could be drawn from many federal programs, three key areas illustrate the extent of the problems--health care, banking, and property. The Department of Health and Human Services Inspector General reported this past year that out of $163.6 billion in processed fee-for-service payments reported by the Health Care Financing Administration (HCFA) during fiscal year 1996--the latest year for which reliable numbers were available--an estimated $23.2 billion, or about 14 percent of the total payments, were improper. Consequently, the Inspector General recommended that HCFA implement internal controls designed to detect and prevent improper payments to correct four weaknesses: (1) insufficient or no documentation supporting claims, (2) medical necessity not established, (3) incorrect classification (called coding) of information, and (4) payment for unsubstantiated or unallowable services. During the 1980s, the savings and loan industry experienced severe financial losses. Extremely high interest rates caused institutions to pay high costs for deposits and other funds while earning low yields on their long-term portfolios.
Many institutions took inappropriate or risky approaches in attempting to increase their capital. These approaches included accounting methods to artificially inflate the institutions' capital position and diversifying their investments into potentially more profitable, but riskier, activities. The profitability of many of these investments depended heavily on continued inflation in real estate values to make them economically viable. In many cases, weak internal controls at these institutions and noncompliance with laws and regulations increased the risk of these activities and contributed significantly to the ultimate failure of over 700 institutions. This crisis cost the taxpayers hundreds of billions of dollars. Making profitable loans is the heart of a successful savings and loan institution. Boards of directors and senior management did not actively monitor the loan award and administrative processes to ensure that excessive risks in making loans were not taken. In fact, excessive risk-taking in making loans was encouraged, resulting in a lack of effective monitoring of loan performance that allowed poorly performing loans to continue to deteriorate. Also, loan documentation was a frequent problem that further evidenced weak internal supervision of loan officers and created difficulties in valuing and selling loans after the institutions failed. In the property area, we have similarly reported that government property was not made available for reuse or effectively controlled against misuse or theft. More recently, we reported that breakdowns exist in the Department of Defense's (DOD) ability to protect its assets from fraud, waste, and abuse. We disclosed that the Army did not have accurate records for its reported $30 billion in real property or the $8.5 billion reported as government-furnished property in the hands of contractors.
Further, we reported that pervasive weaknesses in DOD's general computer controls place it at risk of improper modification; theft; inappropriate disclosure; and destruction of sensitive personnel, payroll, disbursement, or inventory information. In 1990, we began a special effort to review and report on the federal program areas our work had identified as high risk because of vulnerabilities to waste, fraud, abuse, and mismanagement. This effort brought a much-needed central focus on problems that were costing the government billions of dollars. Our most recent high-risk series focuses on six categories of high risk: (1) providing for accountability and cost-effective management of defense programs, (2) ensuring that all revenues are collected and accounted for, (3) obtaining an adequate return on multibillion dollar investments in information technology, (4) controlling fraud, waste, and abuse in benefit programs, (5) minimizing loan program losses, and (6) improving management of federal contracts at civilian agencies. See attachment II for a listing of the high-risk reports and our most recent reports and testimony on the Year 2000 computing crisis. In conclusion, effective internal controls are essential to achieving agency missions and the results intended by the Congress and the administration and reasonably expected by the taxpayers. The lack of consistently effective internal controls has plagued the government for decades. Legislation has been enacted to provide a framework for performance-based management and accountability. Effective internal controls are an essential component of the success of that legislation. However, no system of internal control is perfect, and controls may need to be revised as agency missions and service delivery change to meet new expectations. Management and employees should focus not necessarily on more controls, but on more effective controls. Mr.
Chairman, this concludes my statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time.

Internal control standards define the minimum level of quality acceptable for internal control systems to operate and constitute the criteria against which systems are to be evaluated. These internal control standards apply to all operations and administrative functions but are not intended to limit or interfere with duly granted authority related to the development of legislation, rule making, or other discretionary policy-making in an agency.

General standards:

1. Reasonable Assurance: Internal control systems are to provide reasonable assurance that the objectives of the systems will be accomplished.
2. Supportive Attitude: Managers and employees are to maintain and demonstrate a positive and supportive attitude toward internal controls at all times.
3. Competent Personnel: Managers and employees are to have personal and professional integrity and are to maintain a level of competence that allows them to accomplish their assigned duties and understand the importance of developing and implementing good internal controls.
4. Control Objectives: Internal control objectives are to be identified or developed for each agency activity and are to be logical, applicable, and reasonably complete.
5. Control Techniques: Internal control techniques are to be effective and efficient in accomplishing their internal control objectives.

Specific standards:

1. Documentation: Internal control systems and all transactions and other significant events are to be clearly documented, and the documentation is to be readily available for examination.
2. Recording of Transactions and Events: Transactions and other significant events are to be promptly recorded and properly classified.
3. Execution of Transactions and Events: Transactions and other significant events are to be authorized and executed only by persons acting within the scope of their authority.
4. Separation of Duties: Key duties and responsibilities in authorizing, processing, recording, and reviewing transactions should be separated among individuals.
5. Supervision: Qualified and continuous supervision is to be provided to ensure that internal control objectives are achieved.
6. Access to and Accountability for Resources: Access to resources and records is to be limited to authorized individuals, and accountability for the custody and use of resources is to be assigned and maintained. Periodic comparison shall be made of the resources with the recorded accountability to determine whether the two agree. The frequency of the comparison shall be a function of the vulnerability of the asset.

Audit resolution standard:

Prompt Resolution of Audit Findings: Managers are to (1) promptly evaluate findings and recommendations reported by auditors, (2) determine proper actions in response to audit findings and recommendations, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention.

High-Risk Series: An Overview (GAO/HR-97-1, February 1997).
High-Risk Series: Quick Reference Guide (GAO/HR-97-2, February 1997).
High-Risk Series: Defense Financial Management (GAO/HR-97-3, February 1997).
High-Risk Series: Defense Contract Management (GAO/HR-97-4, February 1997).
High-Risk Series: Defense Inventory Management (GAO/HR-97-5, February 1997).
High-Risk Series: Defense Weapons Systems Acquisition (GAO/HR-97-6, February 1997).
High-Risk Series: Defense Infrastructure (GAO/HR-97-7, February 1997).
High-Risk Series: IRS Management (GAO/HR-97-8, February 1997).
High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
High-Risk Series: Medicare (GAO/HR-97-10, February 1997).
High-Risk Series: Student Financial Aid (GAO/HR-97-11, February 1997).
High-Risk Series: Department of Housing and Urban Development (GAO/HR-97-12, February 1997).
High-Risk Series: Department of Energy Contract Management (GAO/HR-97-13, February 1997).
High-Risk Series: Superfund Program Management (GAO/HR-97-14, February 1997).
High-Risk Program: Information on Selected High-Risk Areas (GAO/HR-97-30, May 1997).
Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10-1.19, Exposure Draft, March 1998).
Year 2000 Readiness: NRC's Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998).
Year 2000 Computing Crisis: Federal Deposit Insurance Corporation's Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998).
Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998).
FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998).
Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998).
Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems' Year 2000 Problem (GAO/T-AIMD-98-48, January 7, 1998).
Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997).
Year 2000 Computing Crisis: National Credit Union Administration's Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997).
Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/T-AIMD-98-6, October 22, 1997).
Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997).
Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997).
Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Compliance (GAO/T-AIMD-97-174, September 25, 1997).
Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997).
Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997).
Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997).
Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997).
Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997).
Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997).
Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997).
Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997).
Veterans Affairs Computer Systems: Risks of VBA's Year 2000 Efforts (GAO/AIMD-97-79, May 30, 1997).
Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997).
Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997).
Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997).
Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997).
Pursuant to a congressional request, GAO discussed the subject of internal control, focusing on: (1) what internal control is; (2) its importance; and (3) what happens when it breaks down. GAO noted that: (1) internal control is concerned with stewardship and accountability of resources consumed while striving to accomplish an agency's mission with effective results; (2) although ultimate responsibility for internal controls rests with management, all employees have a role in the effective operation of internal controls established by management; (3) effective internal control provides reasonable, not absolute, assurance that an agency's activities are being accomplished in accordance with its control objectives; (4) internal control helps management achieve the mission of the agency and prevent or detect improper activities; (5) the cost of fraud cannot always be measured in dollars; (6) in 1982, Congress passed the Federal Managers' Financial Integrity Act requiring: (a) agencies to annually evaluate their internal controls; (b) GAO to issue internal controls standards; and (c) the Office of Management and Budget to issue guidelines for agencies to follow in assessing their internal controls; (7) more recently, Congress has enacted a number of statutes to provide a framework for performance-based management and accountability; (8) weak internal controls pose a significant risk to the government--losses in the millions, or even billions, of dollars can and do occur; (9) GAO and others have reported that weak internal controls over safeguarding and accounting for government property are a serious continuing problem; and (10) GAO's 1997 high-risk series identifies major areas of government operations where the risks of losses to the government are high and where achieving program goals is jeopardized.
Currently, DOD has five major unmanned aircraft systems in use: the Air Force's Predator A and Global Hawk, the Marine Corps' Pioneer, and the Army's Hunter and Shadow. The services also have developmental efforts underway, for example, the Air Force's Predator B, the Army and Navy's vertical take-off and landing system, and the Army's Warrior. Overall, DOD now has about 250 unmanned aircraft in inventory and plans to increase its inventory to 675 by 2010 and 1,400 by 2015. The 2006 Quadrennial Defense Review reached a number of decisions that would further expand investments in unmanned systems, including accelerating production of Predator and Global Hawk. It also established a plan to develop a new land-based, long-range strike capability by 2018 and set a goal that about 45 percent of the future long-range strike force be unmanned. DOD expects unmanned aircraft systems to transform the battlespace with innovative tactics, techniques, and procedures as well as take on the so-called "dull, dirty, and dangerous missions" without putting pilots in harm's way. Potential missions for unmanned systems have expanded from the original focus on intelligence, surveillance, and reconnaissance to limited tactical strike capabilities. Projected plans call for unmanned aircraft systems to perform persistent ground attack, electronic warfare, and suppression of enemy air defenses. Unmanned aircraft fly at altitudes ranging from below 10,000 feet up to 50,000 feet and are typically characterized by approximate altitude--"low altitude" if operating at 10,000 feet or less, "medium altitude" if flying above 10,000 but below 35,000 feet, and "high altitude" if operating above 35,000 feet. The Army classifies Warrior as a medium-altitude system, in the same category as its Hunter system, its Warrior prototype known as I-GNAT, and the Air Force's Predator A. The Air Force's Predator B is expected to operate at both medium and high altitudes. 
The Warrior as envisioned by the Army shares some similarities with the Air Force's Predator A and B models. First, all three systems share the same contractor, General Atomics. Second, Predator A and Warrior are expected to be somewhat similar in physical characteristics. In particular, the build of the main fuselage, the location of fuel bays, and design of the tailspar are alike. According to Army program officials, the Predator B and Warrior are expected to share the same flight controls and avionics. Predator A and Warrior are anticipated to perform some similar missions, including reconnaissance, surveillance, and target acquisition and attack. The development of the Warrior program began in late 2001 when the Army started defining requirements for a successor to its Hunter system. In September 2004, the Army released a request for a "systems capabilities demonstration" so that companies could demonstrate the capabilities of their existing aircraft. In December 2004, the Army awarded demonstration contracts worth $250,000 each to two contractors, Northrop Grumman and General Atomics. Subsequently, the Army evaluated, among other things, the demonstrated capabilities of the competitors' existing aircraft in relation to Warrior technical requirements. The Army did not perform a formal analysis of the alternatives comparing expected capabilities of Warrior with current capabilities offered by existing systems; rather, its rationale was that the Warrior is needed near-term for commanders' missions, and it considered this competition to be a rigorous analysis of available alternatives. Based on the competition, the Army concluded that General Atomics' proposal (based on Warrior) provided the best value solution. In August 2005, the Army awarded the system development and demonstration (SDD) contract to General Atomics. The contract is a cost plus incentive fee contract with an award fee feature. 
It has a base value of about $194 million, with approximately another $15 million available to the contractor in the form of incentive fees, and about an additional $12 million available as award fees. The time line in figure 1 illustrates the sequence of past and planned events for the Warrior program. The Army plans for a full Warrior system to entail 12 aircraft as well as 5 ground control stations, 5 ground data terminals, 1 satellite communication ground data terminal, 12 air data terminals/air data relays, 6 airborne satellite communication terminals, 2 tactical automatic take-off and landing systems, 2 portable ground control stations, 2 portable ground data terminals and associated ground support equipment. The Army expects to buy 1 developmental system with 17 aircraft and 11 complete production systems with a total of 132 production aircraft through 2015. However, the Army has not yet decided on the number of systems it might buy beyond that date. The Army is employing an evolutionary acquisition strategy to produce Warrior. The Army expects the current Warrior program of record to provide for immediate warfighting needs and plans to build on the capabilities of this increment as evolving technology allows. The Army has an operational requirement, approved by the Joint Requirements Oversight Council, for an unmanned aircraft system dedicated to direct operational control by Army field commanders. The Army has determined that the Warrior was the best option available to meet this operational requirement. Army program officials believe that the Predator is operationally and technically mismatched with Army needs. The Army expects Warrior to offer key technical features that will better meet Army operational needs than Predator A. According to the Army, the Predator is operationally mismatched with its division-level needs. 
Army program officials noted that one of the Army's current operational difficulties with Predator is that frontline commanders cannot directly task the system for support during tactical engagements. Rather, Predator control is allocated to Theater and Joint Task Force Commands, and the system's mission is to satisfy strategic intelligence, reconnaissance, and surveillance needs as well as joint needs. Army programmatic and requirements documents maintain that Army division commanders in the field need direct control of a tactical unmanned aircraft asset capable of satisfying operational requirements for dedicated intelligence, surveillance, and reconnaissance, communications relay, teaming with other Army assets, and target acquisition and attack. Army program officials also indicated that Predator's time is apportioned among various users, and the Army typically does not receive a large portion of that time. According to Warrior program documents, the Army has historically been able to draw only limited operational support from theater assets such as Predator. For example, a program office briefing noted that overall Iraq theater-level support was neither consistent nor responsive to Army needs, and that division level support was often denied or cancelled entirely. The briefing also said that the shortfall was expected to continue, even with the addition of more Predators and Global Hawks. Army program officials also told us that they expect Warrior to enhance overall force capability in ways that Predator cannot. Specifically, the Army expects Warrior to support teaming with Army aviation assets and aid these assets in conducting missions that commanders were previously reluctant to task to manned platforms. Under this teaming concept, manned assets, including the Apache helicopter, Army Airspace Command and Control system, and Aerial Common Sensor, would work jointly with Warrior to enhance target acquisition and attack capabilities. 
The Army plans for the manned platforms to not only receive data and video communications from Warrior but also control its payloads and flight. The Army also plans to configure Warrior for interoperability with the Army One System Ground Control Station, an Army-wide common ground control network for unmanned aircraft systems. According to Army documents, Warrior's incorporation into this network will better support the Army ground commander by allowing control of Warrior aircraft to be handed off among ground stations, provide better battlefield coverage for Joint Forces, and ensure common operator training among unmanned aircraft systems, including the Army's Warrior, Shadow, and Hunter and Marine Corps' unmanned aircraft systems. Additionally, Army program officials pointed out that Warrior will be physically controlled by an enlisted soldier deployed in the theater where Warrior is being used. They contrast this with Predator, which is typically flown from a location within the continental United States by a pilot trained to fly manned aircraft. The Army believes that the Warrior design will offer key technical features to address Army operational requirements and maintains that these features will better meet its operational needs than those found on Predator A. The technical features include: multi-role tactical common data link, ethernet, heavy fuel engine, automatic take-off and landing system, more weapons, interoperability with Army One System Ground Control Station, and dual-redundant avionics. Table 1 shows the respective purpose of each technical feature, describes whether or not a particular feature is planned for Warrior and exists now on Predator A, and provides the Army's assessment of operational impact provided by each feature. 
A February 2006 Warrior program office comparison of costs for Warrior and Predator A projects that Warrior's unit cost will be $4.4 million for each aircraft, including its sensors, satellite communications, and Hellfire launchers and associated electronics. The cost comparison indicates that Predator A's unit cost for the same elements is $4.8 million. Although the Air Force's Predator B is planned to be more capable than Warrior in such areas as physical size and payload and weapons capacity, the Warrior program office estimates that it will have a unit cost of $9.0 million--about double the anticipated cost for Warrior. The Army's cost estimates for the Warrior are, of course, predicated on Army plans for successful development and testing. In terms of technology maturity, design stability, and a realistic schedule, the Army has not yet established a sound, knowledge-based acquisition strategy for Warrior that is consistent with best practices for successful acquisition. Warrior is expected to rely on critical technologies that were not mature at the time of the system development and demonstration contract award in August 2005 and were still not mature in March 2006. Furthermore, it appears that the Army may be unable to complete development of these technologies and achieve overall design stability by the time of the design readiness review scheduled for July 2006. Moreover, the Warrior schedule is very aggressive and overlaps technology development, product development, testing, and production. For example, the Army plans to consider awarding a contract for procurement of long-lead items at a time when it is still unclear if Warrior will be technologically mature and have a stable design. Such concurrency adds more risk, including the potential for costly design changes after production begins, to the already compressed schedule. 
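The program office's unit-cost comparison above can be sketched as a small calculation. This is illustrative only: the dollar figures are the February 2006 program office estimates quoted in this report, and the comparison function is our own construct, not an official DOD costing method.

```python
# Unit costs (in $ millions) per aircraft, including sensors, satellite
# communications, and Hellfire launchers and associated electronics,
# as estimated by the Warrior program office in February 2006.
UNIT_COST_MILLIONS = {
    "Warrior": 4.4,
    "Predator A": 4.8,   # same cost elements as Warrior
    "Predator B": 9.0,   # more capable, roughly double Warrior's cost
}

def savings_vs(baseline: str, alternative: str) -> float:
    """Per-aircraft savings (in $ millions) of `alternative` over `baseline`."""
    return UNIT_COST_MILLIONS[baseline] - UNIT_COST_MILLIONS[alternative]

print(round(savings_vs("Predator A", "Warrior"), 1))  # 0.4 ($400,000 per aircraft)
```

The projected savings depend, as the report notes, on the Army's plans for successful development and testing actually holding.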
In the last several years, we have undertaken a best practices body of work on how leading developers in industry and government use a knowledge-based approach to develop high-quality products on time and within budget. A knowledge-based approach to product development employs a process wherein a high level of knowledge about critical facets of a product is achieved at key junctures known as "knowledge points." This event-driven approach, where each point builds on knowledge attained in the previous point, enables developers to be reasonably certain that their products are more likely to meet established cost, schedule, and performance baselines. A key to such successful product development is an acquisition strategy that matches requirements to resources and includes, among other elements, a high level of technology maturity in the product at the start of system development and demonstration, design maturity at the system's design readiness review usually held about halfway through the system's development phase, and adequate time to deliver the product. Achieving a high level of technology maturity at the start of system development is an important indicator that a match has been made between the customer's requirements and the product developer's resources in terms of knowledge, money, and time. This means that the technologies needed to meet essential requirements--known as "critical technologies"--have been demonstrated to work in their intended environment. Our best practices work has shown that technology readiness levels (TRL) can be used to assess the maturity of individual technologies and that a TRL of 7--demonstration of a technology in an operational environment--is the level that constitutes a low risk for starting a product development program. 
As identified by the Army, the Warrior program contains four critical technologies: (1) ethernet, (2) multi-role tactical common data link, (3) heavy fuel engine, and (4) automatic take-off and landing system. Two of the four critical technologies--ethernet and data link--were not mature at the time the Army awarded the Warrior system development and demonstration contract in August 2005, and in early 2006 remained immature at TRLs of 4. Army program officials told us that they project the ethernet to be at TRL 6 and the data link at TRL 5 or 6 by the time of the design readiness review scheduled for July 2006. However, it is not certain that these two technologies will be as mature at design readiness review as the Army anticipates. Army program officials indicated that the data link hardware is still in development and expect its integration with other Warrior components to be a challenge. As such, they rated data link integration status as a moderate risk to the Warrior program. While they stated that use of the ethernet has been demonstrated on Army helicopters and should not be a technical integration challenge, the officials also said that neither the ethernet nor the specific data link technologies to be used on Warrior have been integrated previously onto an unmanned aircraft platform. Further, if the technologies are demonstrated at TRL 6 by design readiness review, they will meet DOD's standard for maturity (demonstration in a relevant environment) but not the best practices maturity standard of TRL 7 (demonstration in an operational environment). The Army has technologies in place as backups for the data link and ethernet, but these technologies would result in a less capable system than the Army originally planned. According to Army program officials, there are several potential backups for the data link that could be used on the Warrior aircraft. Among the backups they cited is the same data link used on Predator A: analog C-band. 
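The maturity assessment above can be expressed as a simple check against the two thresholds in play. This is an illustrative sketch only: the TRL values mirror those reported for early 2006, and the helper function is our own construct, not a GAO or Army tool.

```python
BEST_PRACTICE_TRL = 7  # demonstrated in an operational environment (GAO best practice)
DOD_STANDARD_TRL = 6   # demonstrated in a relevant environment (DOD standard)

# Warrior's four critical technologies and their TRLs as of early 2006.
warrior_critical_tech = {
    "ethernet": 4,
    "multi-role tactical common data link": 4,
    "heavy fuel engine": 9,
    "automatic take-off and landing system": 7,
}

def immature(techs: dict, threshold: int = BEST_PRACTICE_TRL) -> list:
    """Return, sorted by name, the technologies below the maturity threshold."""
    return sorted(name for name, trl in techs.items() if trl < threshold)

print(immature(warrior_critical_tech))
# ['ethernet', 'multi-role tactical common data link']
```

Even against DOD's lower TRL 6 standard, the same two technologies fall short as of early 2006; the Army's projections would close that gap only by the July 2006 design readiness review, and only to TRL 5 or 6.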
However, as we noted in a report last year, C-band is congested, suffers from resulting delays in data transmission and relay, and the Department of Defense has established a goal of moving Predator payloads from this data link. Similarly, the other data link backups cited by the officials either had slower data transmission rates or also were not yet mature. Program officials indicated that the backup for the ethernet is normal ground station control of the on-board communication among such components as the payloads, avionics, and weapons. While they stated that there would be no major performance penalty if the backup was used, they did note that the ethernet would significantly improve ease of integrating payloads and of integrating with other Army assets that might need control of a Warrior payload to support missions. The other two critical technologies, the automatic take-off and landing system and the heavy fuel engine, are mature at respective TRLs of 7 and 9. Nevertheless, some program risk is associated with these technologies as well. The contractor has never fielded an automatic take-off and landing component on an unmanned aircraft system. Army program officials told us that they are confident about the take-off and landing system because a similar landing system had been fielded on the Shadow unmanned aircraft, but they also indicated that the take-off component has not been fielded on an unmanned aircraft. The officials also expressed confidence in the heavy fuel engine because it is certified by the U.S. Federal Aviation Administration and is in use on civilian manned aircraft. However, like the complete take-off and landing system, it has not previously been integrated onto an unmanned aircraft. Best practices for successful acquisition call for a program's design stability to be demonstrated by having at least 90 percent of engineering drawings completed and released to manufacturing at the time of the design readiness review. 
If a product's design is not stable as demonstrated by meeting this best practice, the product may not meet customer requirements and cost and schedule targets. For example, as we reported previously, the Army's Shadow unmanned aircraft system did not meet best practices criteria because it had only 67 percent of its design drawings completed when the system entered low-rate production. Subsequent testing revealed examples of design immaturity, especially relating to system reliability, and ultimately the Army delayed Shadow's full-rate production by about 6 months. The Warrior program also faces increased risk if design drawings do not meet standards for best acquisition practices. The Warrior program office projects that Warrior's design will be stable and that 85 percent of drawings will have been completed and released to manufacturing by the time of the design readiness review in July 2006. However, it seems uncertain whether the Warrior program will meet this projection because percentages of drawings complete for some sub-components were still quite low in early 2006 and, in some cases, have declined since the system development and demonstration contract award. For example, according to an Army program official, the percentage of completed design drawings for the aircraft and ground control equipment dropped after contract award because the Army made modifications to the planned aircraft and also decided that it needed a larger transport vehicle for the Warrior's ground control equipment. The Warrior program appears driven largely by schedule rather than the attainment of event-driven knowledge points that would separate technology development from product development. The latter approach is characteristic of both best practices and DOD's own acquisition policy. Warrior's schedule is compressed and aggressive and includes concurrency among technology development, product development, testing, and production. 
Concurrency--the overlapping of technology and product development, testing, and production schedules--is risky because it can lead to design changes that can be costly and delay delivery of a useable capability to the warfighter if testing shows design changes are necessary to achieve expected system performance. As shown in figure 2, the Warrior schedule overlaps technology development, product development, testing, and production. The following examples highlight some of the concurrency issues within the Warrior program: Thirty-two months have been allotted from the system development and demonstration contract award in August 2005 to the low-rate production decision in April 2008. Out of that, 10 months--from July 2006 to May 2007--are set aside for integrating system components (including all four critical technologies) into the aircraft. Two of these technologies are not yet mature (as of early 2006); none of the specific technologies planned for use on Warrior has previously been fully integrated onto an unmanned aircraft. The Army's plan to continue integration through May 2007 would seem to undermine the design stability expected to be achieved at the July 2006 design readiness review. Ideally, system integration is complete by that time. Delivery of 17 developmental aircraft is to take place within a 12-month period from April 2007 to April 2008, and the Army plans for them to undergo developmental testing as they are delivered. It is unclear whether all components will be fully integrated for this testing, but the results of some tests should be available when the Army considers approval of long-lead items for the first lot of low-rate initial production in August 2007. The Army is requesting about $31 million in fiscal 2007 to procure long-lead items, including items associated with the automatic take-off and landing system, heavy fuel engine assembly, and ground control. 
Prior to the planned approval of the first lot in fiscal 2008, the developmental aircraft will be evaluated in a limited user test. The Warrior program office acknowledges that the schedule is high-risk. Additionally, according to Army program officials, both the program office and contractor recognize that there are areas of moderate to high risk within the program, including integration of the tactical common data link as well as timely availability of a modified Hellfire missile and synthetic aperture radar used for visibility in poor atmospheric conditions. Army program officials told us that they are trying to manage Warrior as more of a knowledge-based, event-driven rather than schedule-driven program. As an example, they stated that the contractor is currently building two off-contract aircraft to help mitigate risk by proving out design, development, and manufacturing. However, they also told us that these two aircraft would not include the tactical common data link, Hellfire missile, synthetic aperture radar, or satellite communications used for relay purposes. They noted that some of these items are still in development and so are not expected to be available, but they do plan for the two aircraft to have the ethernet, heavy fuel engine, and automatic take-off and landing system. In concept, the Army has determined that the Warrior will meet its operational requirements better than available alternatives such as the Predator. In practice, however, the Warrior might very well encounter cost, schedule, and performance problems that would hinder it from attaining the Army's goals. Half of its critical technologies are not yet mature, and its design is not yet stable. Compounding this, its aggressive schedule features extensive concurrency among technology development and demonstration, design integration, system demonstration and test, and production, leaving little time to resolve technology maturity and design stability issues by testing. 
If the Warrior program continues forward prior to attaining adequate technology and design, it may well produce underperforming Warrior aircraft that will not meet program specifications. The program may then experience delays in schedule and increased costs. The next key program event with significant financial implications is the scheduled approval of long-lead items for the initial lot of Warrior low-rate initial production in August 2007. That will be the first use of procurement funding for Warrior. We believe that is a key point at which the Army needs to demonstrate that the Warrior program is knowledge-based and better aligned to meet program goals within available resources than it currently appears. We recommend that the Army not approve long-lead items for Warrior low-rate initial production until it can clearly demonstrate that the program is proceeding based on accumulated knowledge and not a predetermined schedule. In particular, we recommend that, prior to approving the Warrior long-lead items for low-rate initial production, the Secretary of the Army require that critical Warrior technologies are fully mature and demonstrated; Warrior design integration is complete and at least 90 percent of design drawings are completed and released to manufacturing; and fully-integrated Warrior developmental aircraft are fabricated and involved in development testing. DOD provided us with written comments on a draft of this report. The comments are reprinted in Appendix I. DOD concurred with one part of our recommendation but not with the other two parts. DOD also provided technical comments, which we incorporated where appropriate. DOD concurred with the part of our recommendation that it should seek to have at least 90 percent of design drawings completed and released to manufacturing prior to procuring long-lead items for Warrior's low-rate initial production. 
However, DOD also said that the decision to procure long-lead items will not be based solely on the percentage of drawings completed, but also on the schedule impact of unreleased drawings. DOD did not concur with the rest of our recommendation that, prior to approval of long-lead items for Warrior's low-rate initial production, the Secretary of the Army needed to ensure (a) critical Warrior technologies are fully mature and demonstrated and (b) fully-integrated Warrior developmental aircraft are fabricated and involved in development testing. Although DOD agreed that two critical technologies are less mature than the others within the Warrior system, it also stated that these technologies are at the correct levels to proceed with integration. However, the Warrior program is nearing the end of integration and is about to begin system demonstration, signified by the July 2006 design readiness review. In that review, the design is set to guide the building of developmental aircraft for testing. These developmental aircraft will be used to demonstrate the design in the latter half of System Development and Demonstration. While DOD stated that risk mitigation steps are in place, including possible use of back-up technologies, if either of the two critical technologies is not ready for integration, the decisions on whether to use back-up technologies in the design would ideally have been made by the design readiness review. Even if the two critical technologies mature by that point, they would still have to be integrated into the design, as would the back-up technologies if DOD chose to use those instead. To the extent that technology maturation and integration extend beyond the design readiness review, the program will incur the risk of integrating the design at the same time it is attempting to build developmental aircraft to demonstrate the design. 
Our recommendation to make the technology decision before committing to long-lead items provides a reasonable precaution against letting the technology risks proceed further into the demonstration of the developmental aircraft and into the purchase of production items. Making the technology decision as early as possible is particularly important given that the program schedule allows no more than a year to demonstrate the design with the developmental aircraft before committing to production. Our past work has shown that increased costs and schedule slippages may accrue to programs that are still maturing technologies well into system development when they should be focused on stabilizing system design and preparing for production. With regard to the part of our recommendation that fully integrated developmental aircraft are fabricated and involved in developmental testing prior to approval of long-lead items, DOD indicated that modeling and simulation, block upgrades, early operational deployments, and early testing will enable the Department to mitigate design and performance risks while remaining on schedule. While we agree that these activities help reduce risk, the most effective way to reduce risk is to verify the design through testing of fully-integrated developmental aircraft before committing to production. Our recommendation underscores the value of conducting such testing, which can still be done if technology decisions are made early. Our work over the past several years has shown that a knowledge-based acquisition strategy consistent with best practices can lead to successful outcomes. Specifically, proceeding without mature technologies and a stable design can lead to costly design changes after production is underway and negatively impact funding in other Department programs, ultimately affecting DOD's ability to respond to other warfighter needs. 
To address the first objective, to identify the requirements that led to the Army's decision to acquire Warrior, we reviewed Army operational requirements, acquisition strategy, and other programmatic documents and briefings. We did not assess the validity of the Army's requirements for Warrior. We also reviewed the process the Army used in selecting Warrior. In comparing Warrior to existing unmanned systems in the inventory, we limited our review to comparable medium-altitude systems within the military services. To assess differences in operational capabilities for Warrior and Predator, we reviewed operations-related documents for Predator A and B. We also reviewed critical technologies as well as other key technical features of the respective systems that highlighted differences in Warrior and Predator A capabilities. To address the second objective, whether the Army established a sound acquisition strategy for Warrior, we reviewed planning, budget, and programmatic documents. We also utilized GAO's "Methodology for Assessing Risks on Major Weapon System Programs" to assess the Army's acquisition strategy with respect to best practices criteria. The methodology is derived from the best practices and experiences of leading commercial firms and successful defense acquisition programs. We also used this methodology to review risks within the Warrior program, but we did not focus our assessment on all risk areas the Army and Warrior contractor identified within the program. Instead, we focused on those risk areas that seemed most critical to the overall soundness of the Army's acquisition strategy. To achieve both objectives, we interviewed Army officials and obtained their views of the Army's requirements and soundness of the Army's acquisition strategy. We also incorporated information on Warrior from GAO's recent Assessments of Major Weapon Programs. 
We performed our review from September 2005 to April 2006 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, and the Secretary of the Air Force, and interested congressional committees. We will also make copies available to others upon request. Additionally, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me on 202-512-7773. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were William R. Graveline, Tana Davis, and Beverly Breen.
Through 2011, the Department of Defense (DOD) plans to spend $20 billion on unmanned aircraft systems, including the Army's "Warrior." Because of congressional concerns that some systems have been more costly and taken more time to produce than predicted, GAO reviewed the Warrior program. This report (1) describes the Army's requirements underlying its decision to acquire Warrior instead of existing systems such as the Air Force's Predator, and (2) assesses whether the Army has established a sound acquisition strategy for the Warrior program. The Army determined the Warrior is its best option for an unmanned aircraft system directly controlled by field commanders, compared with existing systems such as the Air Force's Predator A. The Army believes that using the Warrior will improve force capability through teaming with other Army assets; using common ground control equipment; and allowing soldiers in the field to operate it. Warrior's key technical features include a heavy fuel engine; automatic take-off and landing system; faster tactical common data link; ethernet; greater carrying capacity for weapons; and avionics with enhanced reliability. The Army projects that Warrior will offer some cost savings over Predator A. In terms of technology maturity, design stability, and a realistic schedule, the Army has not yet established a sound, knowledge-based acquisition strategy for Warrior. Two of the Warrior's four critical technologies were immature at the contract award for system development and demonstration and remained so in early 2006, and the mature technologies still have some risk associated with them because neither has previously been fully integrated onto an unmanned aircraft. The Warrior schedule allows 32 months from award of the development and demonstration contract to the initial production decision. Achieving this schedule will require concurrency of technology and product development, testing, and production. 
Once developmental aircraft are available for testing, the Army plans to fund procurement of long-lead items in August 2007. Experience shows that these concurrencies can result in design changes during production that can prevent delivery of a system within projected cost and schedule. The Warrior program faces these same risks.
Before enactment of the Employee Retirement Income Security Act of 1974, few rules governed the funding of defined benefit pension plans, and participants had no guarantees that they would receive the benefits promised. Among other things, ERISA established rules for funding defined benefit pension plans and created the PBGC to protect the benefits of plan participants in the event that plan sponsors could not meet the benefit obligations under their plans. More than 34 million workers and retirees in about 30,000 single-employer defined benefit plans rely on PBGC to protect their pension benefits. PBGC finances the liabilities of underfunded terminated plans partially through premiums paid by plan sponsors. Currently, plan sponsors pay a flat-rate premium of $19 per participant per year; in addition, some plan sponsors pay a variable-rate premium, which was added in 1987, to provide an incentive for sponsors to better fund their plans. For each $1,000 of unfunded vested benefits, plan sponsors pay a premium of $9. In fiscal year 2004, PBGC received nearly $1.5 billion in premiums, including more than $800 million in variable rate premiums, but paid out more than $3 billion in benefits to plan participants or their beneficiaries. The single-employer program has had an accumulated deficit--that is, program assets have been less than the present value of benefits and other obligations--for much of its existence. (See fig. 1.) In fiscal year 1996, the program had its first accumulated surplus, and by fiscal year 2000, the accumulated surplus had increased to about $10 billion, in 2002 dollars. However, the program's finances reversed direction in 2001, and at the end of fiscal year 2002, its accumulated deficit was about $3.6 billion. In July 2003, we designated the single-employer insurance program as "high risk," given its deteriorating financial condition and the long-term vulnerabilities of the program. 
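The premium arithmetic described above is simple to state: a flat-rate premium of $19 per participant plus, for some sponsors, $9 per $1,000 of unfunded vested benefits. A minimal sketch of that calculation follows; the function name and example plan are hypothetical, not an official PBGC calculator.

```python
def pbgc_annual_premium(participants, unfunded_vested_benefits,
                        flat_rate=19.0, variable_rate_per_1000=9.0):
    """Estimate a sponsor's annual PBGC single-employer premium.

    Rates reflect those cited in this testimony: a flat-rate premium
    of $19 per participant and a variable-rate premium of $9 per
    $1,000 of unfunded vested benefits. Illustrative only.
    """
    flat = participants * flat_rate
    variable = (unfunded_vested_benefits / 1000.0) * variable_rate_per_1000
    return flat + variable

# A hypothetical plan with 10,000 participants and $50 million in
# unfunded vested benefits:
#   flat:     10,000 x $19           = $190,000
#   variable: 50,000 x $9 per $1,000 = $450,000
premium = pbgc_annual_premium(10_000, 50_000_000)
print(premium)  # 640000.0
```

A fully funded plan pays only the flat-rate portion, which is why the structure shifts costs from underfunded to well-funded plans only through the variable-rate component.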
In fiscal year 2004, PBGC's single-employer pension insurance program incurred a net loss of $12.1 billion and its accumulated deficit increased to $23.3 billion, up from $11.2 billion a year earlier. Furthermore, PBGC estimated that total underfunding in single-employer plans exceeded $450 billion, as of the end of fiscal year 2004. Existing laws governing pension funding and premiums have not protected PBGC from accumulating a significant long-term deficit and have not limited PBGC's exposure to moral hazard from the companies whose pension plans it insures. The pension funding rules, under ERISA and the IRC, were not designed to ensure that plans have the means to meet their benefit obligations in the event that plan sponsors run into financial distress. Meanwhile, in the aggregate, premiums paid by plan sponsors under the pension insurance system have not adequately reflected the financial risk to which PBGC is exposed. Accordingly, defined benefit plan sponsors, acting rationally and within the rules, have been able to turn significantly underfunded plans over to PBGC, thus creating PBGC's current deficit. Earlier this year, the Administration released a proposal that aims to address many of the structural problems that PBGC faces by calling for changes in the funding rules and premium structure, among other things. Meanwhile, employers who responsibly manage their defined benefit pension plans are concerned about their exposure to additional funding and premium uncertainties. As the PBGC takeovers of severely underfunded plans suggest, the IRC minimum funding rules have not been designed to ensure that plan sponsors contribute enough to their plans to pay all the retirement benefits promised to date. The amount of contributions required under IRC minimum funding rules is generally the amount needed to fund that year's "normal cost" - benefits earned during that year plus that year's portion of other liabilities that are amortized over a period of years. 
Also, the rules require the sponsor to make an additional contribution if the plan is underfunded to a specified extent as defined in the law. However, sponsors of underfunded plans may sometimes avoid or reduce minimum funding contributions if they have earned funding credits as a result of favorable experience, such as contributing more than the minimum in the past. For example, contributions beyond the minimum may be recognized as a funding credit. These credits are not measured at their market value and accrue interest each year, according to the plan's long-term expected rate of return on assets. If the market value of the assets falls below the credited amount, and the plan is terminated, the assets in the plan will not suffice to pay the plan's promised benefits. Thus, some very large and significantly underfunded plans have been able to remain in compliance with the current funding rules while making little or no contributions in the years prior to termination (e.g., Bethlehem Steel). Further, under current funding rules, plan sponsors can increase plan benefits for underfunded plans, even in some cases where the plans are less than 60 percent funded. This may create an incentive for financially troubled sponsors to increase pension benefits, possibly in lieu of wage increases, even if their plans have insufficient funding to pay current benefit levels. Thus, plan sponsors and employees that agree to benefit increases from underfunded plans as a sponsor is approaching bankruptcy can essentially transfer this additional liability to PBGC, potentially exacerbating the agency's financial condition. In addition, many defined benefit plans offer employees "shutdown benefits," which provide employees additional benefits, such as significant early retirement benefit subsidies in the event of a plant shutdown or permanent layoff. 
In general, plant shutdowns are inherently unpredictable, so it is difficult to recognize the costs of shutdown benefits in advance, and current law does not require plans to fund the cost of benefits arising from unpredictable contingent events before those events occur. Under current law, PBGC is responsible for at least a portion of any benefit increases, including shutdown benefits, even if the benefit was added to the plan within 5 years of plan termination. However, many of these provisions were included in plans years ago. As a result, shutdown benefits pose a problem for PBGC not only because they can dramatically and suddenly increase plan liabilities without adequate funding safeguards, but also because the related additional benefit payments drain plan assets. Finally, because many plans allow lump sum distributions, plan participants in an underfunded plan may have incentives to request such distributions. For example, where participants believe that the PBGC guarantee may not cover their full benefits, many eligible participants may elect to retire and take all or part of their benefits in a lump sum rather than as lifetime annuity payments, in order to maximize the value of their accrued benefits. In some cases, this may create a "run on the bank," exacerbating the possibility of the plan's insolvency as assets are liquidated more quickly than expected, potentially leaving fewer assets to pay benefits for other participants. PBGC's current premium structure does not properly reflect risks to the insurance program. The current premium structure relies heavily on flat-rate premiums that, since they are unrelated to risk, result in large cost shifting from financially troubled companies with underfunded plans to healthy companies with well-funded plans. PBGC also charges plan sponsors a variable-rate premium based on the plan's level of underfunding.
However, these premiums do not consider other relevant risk factors, such as the economic strength of the sponsor, plan asset investment strategies, the plan's benefit structure, or the plan's demographic profile. PBGC currently operates more like a social insurance program, since it must cover all eligible plans regardless of their financial condition or the risks they pose to the solvency of the insurance program. In addition to facing firm-specific risk that an individual underfunded plan may terminate, PBGC faces market risk that a poor economy may lead to widespread underfunded terminations during the same period, potentially causing very large losses for PBGC. Similarly, PBGC may face risk from insuring plans concentrated in vulnerable industries affected by certain macroeconomic forces such as deregulation and globalization that have played a role in multiple bankruptcies over a short time period, as happened in the airline and steel industries. One study estimates that the overall premiums collected by PBGC amount to about 50 percent of what a private insurer would charge because its premiums do not adequately account for these market risks. Others note that it would be hard to determine the market rate premium for insuring private pension plans because private insurers would probably refuse to insure poorly funded plans sponsored by weak companies. Despite a series of reforms over the years, current pension funding and insurance laws create incentives for financially troubled firms to use PBGC in ways that Congress did not intend when it formed the agency in 1974. PBGC was established to pay the pension benefits of participants in the event that an employer could not. As pension policy has developed, however, firms with underfunded pension plans may come to view PBGC coverage as a fallback, or "put option," for financial assistance.
The very presence of PBGC insurance may create certain perverse incentives that represent moral hazard--struggling plan sponsors may place other financial priorities above "funding up" their pension plans because they know PBGC will pay guaranteed benefits. Firms may even have an incentive to seek Chapter 11 bankruptcy in order to escape their pension obligations. As a result, once a plan sponsor with an underfunded pension plan experiences financial difficulty, existing incentives may exacerbate the funding shortfall for PBGC while also affecting the competitive balance within an industry. This should not be the role for the pension insurance system. This moral hazard has the potential to escalate, with the initial bankruptcy of firms with underfunded plans creating a vicious cycle of bankruptcies and terminations. Firms with onerous pension obligations and strained finances could see PBGC as a means of shedding these liabilities, thereby providing them with a competitive advantage over firms that deliver on their pension commitments. This would also potentially subject PBGC to a series of terminations of underfunded plans in the same industry, as we have already seen with the steel and airline industries in the past 20 years. In addition, current pension funding and pension accounting rules may also encourage plans to invest in riskier assets to benefit from higher expected long-term rates of return. In determining funding requirements, a higher expected rate of return on pension assets means that the plan needs to hold fewer assets in order to meet its future benefit obligations. And under current accounting rules, the greater the expected rate of return on plan assets, the greater the plan sponsor's operating earnings and net income. However, with higher expected rates of return comes greater risk of investment loss, which is not reflected in the pension insurance program's premium structure. 
Investments in riskier assets with higher expected rates of return may allow financially weak plan sponsors and their plan participants to benefit from the upside of large positive returns on pension plan assets without being truly exposed to the risk of losses. The benefits of plan participants are guaranteed by PBGC, and weak plan sponsors that enter bankruptcy can often have their plans taken over by PBGC. Earlier this year, the Administration released a proposal for strengthening funding of single-employer pension plans. The Administration's proposal focuses on three areas: reforming the funding rules to ensure pension promises are kept by improving incentives for funding plans adequately; improving disclosure to workers, investors, and regulators about pension plan status; and adjusting premiums to better reflect a plan's risk and ensure the pension insurance system's financial solvency. Among other things, the proposal would require all underfunded plans to pay risk-based premiums and it would empower PBGC's board to adjust the risk-based premium rate periodically so that premium revenue is sufficient to cover expected losses and to improve PBGC's financial condition. Employer groups have expressed concern about their exposure to additional funding and premium uncertainties and have claimed that the Administration's proposal may strengthen PBGC's financial condition at the expense of defined benefit plan sponsors. For example, one organization has stated that in its view, the current proposal would result in fewer defined benefit plans, lower benefits, and more pressures on troubled companies. PBGC has proactively attempted to forecast and mitigate the risks that it faces. The Pension Insurance Modeling System (PIMS), created by PBGC to forecast claim risk, has projected a high probability of future deficits for the agency. However, the accuracy of the projections produced by the model is unclear. 
Also, through its Early Warning Program, PBGC negotiates with companies that have underfunded pension plans and that engage in business transactions that could adversely affect their pensions. Over the years, these negotiations have directly led to billions of dollars of pension plan contributions and other protections by the plan sponsors. Moreover, PBGC has begun an initiative called the Office of Risk Assessment that combines aspects of both PIMS and the Early Warning Program and will enable the agency to better quantitatively analyze claim risks associated with individual plan sponsors. PBGC has also changed its investment strategy and decreased its equity exposure to better shield itself from market risks. However, despite these efforts, the agency, unlike other federal insurance programs, ultimately lacks the authority to effectively protect itself, such as by adjusting premiums according to the risks it faces. Over the long term, many variables, such as interest rates and equity returns, affect the level of PBGC claims. Moreover, large claims from a small number of bankruptcies constitute a majority of the risk that PBGC faces. Consequently, PBGC created the Pension Insurance Modeling System--a stochastic simulation model that quantifies risk and exposure for the agency over the long run. PIMS simulates the flows of claims that could develop under thousands of combinations of various macroeconomic and company and plan-specific data. In lieu of predicting future bankruptcies, PIMS is designed to generate probabilities for future claims. In recent annual reports, PBGC has discussed the methodologies used to develop PIMS. Furthermore, as far back as 1998, PBGC has reported PIMS results that forecast the possibility of large deficits for the agency. For example, at fiscal year end 2003--the most recent year for which PBGC has released an annual report--the model's simulations forecasted about an 80 percent probability of deficit by the year 2013. 
This included a 10 percent probability of the deficit reaching $49 billion within this time frame. These forecasts, made at the end of fiscal year 2003, did not include the $14.7 billion in losses that PBGC experienced from terminated plans in fiscal year 2004. Therefore, PIMS appears to have understated the extent of PBGC's long-term deficit, given that by the end of fiscal year 2004, the agency's cumulative deficit had already grown to $23.3 billion. The extent to which PIMS can accurately assess future claims is unclear. There is simply too much uncertainty about the future, with respect both to the performance of the economy and of companies that sponsor defined benefit pension plans. It is difficult to accurately forecast which industries and companies will face economic pressures resulting in bankruptcies and PBGC claims. Furthermore, because PBGC's risk lies primarily in a relatively small number of large plans, the failure or survival of any single large plan may lead to significant variance between PBGC's actual claims and the projected claims reported by PBGC in its annual reports. Academic papers report varying rates of success in predicting bankruptcy with various models that measure companies' cash flows or financial ratios, such as asset-to-liability ratios. One paper we reviewed reports that one model succeeded at a rate of 96 percent in predicting bankruptcies 1 year in advance and a rate of 70 percent for predicting bankruptcies 5 years in advance. However, another paper concludes that no single bankruptcy prediction model proposed in the existing literature is entirely satisfactory at differentiating between bankrupt and nonbankrupt firms and that none of the models can reliably predict bankruptcy more than 2 years in advance. PBGC's Early Warning Program is designed to ensure that pensions are protected by negotiating agreements with certain companies engaging in business transactions or events that could adversely affect their pension plans. 
Companies of particular interest to the PBGC are those that are financially troubled, have underfunded pension plans, and are engaged in transactions such as restructurings, leveraged buyouts, spin-offs, and payments of extraordinary dividends, to name a few. The Early Warning Program proactively monitors financial information services and news databases to identify these potentially risky transactions in a timely fashion. If PBGC, after completing an extensive screening process, concludes that a transaction could result in a significant loss to the pension plan, the agency will seek to negotiate with the company to obtain protections for the plan. The Early Warning Program thus raises awareness of pension underfunding, may change corporate behavior, and may allow PBGC to prevent losses before they occur. Under the program, PBGC currently monitors about 3,200 pension plans covering about 29 million participants. Since 1992, the program has protected over 2 million pension plan participants through about 100 settlement agreements valued at over $18 billion (one settlement accounted for about $10 billion). Some recent representative cases include the 2004 settlement with Invensys that provided for over $175 million of additional cash contributions to the pension plan and the 2005 agreement with Crown Petroleum whereby the plan has been assumed by a financially sound parent company and $45 million of additional cash will be contributed to the pension plan. PBGC has recently undertaken an initiative to create an Office of Risk Assessment, which will focus on improving the agency's ability to quantitatively model individual firms' claim potential. According to PBGC, neither PIMS nor the Early Warning Program provides this information. For example, PIMS projects systemwide surpluses and deficits and is not designed to predict specific company results. Meanwhile, the Early Warning Program targets specific companies, but in a manner that is qualitative in nature. 
The Office of Risk Assessment, however, will attempt to combine the concepts of both tools and better attempt to quantitatively analyze the claim risk associated with individual companies. PBGC has consulted with other federal agencies, such as the Federal Deposit Insurance Corporation (FDIC), that have implemented similar approaches for assessing risk. In March 2003, FDIC established a Risk Analysis Center. Guided by FDIC's National Risk Committee, which is composed of senior managers, the center is intended to "monitor and analyze economic, financial, regulatory and supervisory trends, and their potential implications for the continued financial health of the banking industry and the deposit insurance funds." The center does so by bringing together FDIC bank examiners, economists, financial analysts, resolutions and receiverships specialists, and other staff members. These members represent several FDIC organizational units and use information from a variety of sources, including bank examinations and internal and external research. According to FDIC, the center serves as a clearinghouse for information, including monitoring and analyzing economic and financial developments and informing FDIC management and staff of these developments. FDIC officials believe that the center enables them to be proactive in identifying industry trends and developing comprehensive solutions to address significant risks to the banking industry. In early 2004, PBGC adopted a new investment strategy to better manage its approximately $40 billion in assets. Although many factors that affect PBGC's financial health are beyond the agency's control, a well-crafted investment strategy is one of the few tools PBGC has to proactively manage the financial risks facing the pension insurance program. Under the new investment policy, PBGC is decreasing its asset allocation in equities from 37 percent as of fiscal year end 2003 to within a range of 15 to 25 percent. 
Since many of the pension plans that PBGC insures are already heavily invested in equities, some pension and investment experts have said that the agency can create more financial stability by establishing an asset allocation that can hedge against losses in the equity markets. The equity exposure reduction ensures that PBGC's own financial condition will not deteriorate to the same degree as the assets in the pension plans it insures. However, PBGC continues to benefit when equity markets rise because the plans it insures will rise in value. In addition, PBGC claims that this strategy moves the agency closer to the asset mix typically associated with private sector annuity providers. However, it is too soon to tell what effects this new investment strategy will have on PBGC's long-term financial condition. Although PIMS and the Early Warning Program help PBGC assess and manage risk to some extent, PBGC lacks the regulatory authority available to other federal insurance programs, such as the FDIC, to effectively protect itself from risk. Whereas PBGC's premiums are determined by statute, Congress provided FDIC the flexibility to set premiums and adjust them every 6 months based on its analysis of risk to the deposit insurance system. Furthermore, FDIC can reject applications to insure deposits at depository institutions when it determines that a depository institution carries too much risk to the Bank Insurance Fund. By contrast, PBGC must insure all plans eligible for PBGC's insurance coverage. Last, FDIC may issue formal and informal enforcement actions for deposit institutions with significant weaknesses or those operating in a deteriorated financial condition. When necessary, the FDIC may oversee the re-capitalization, merger, closure, or other resolution of the institution. By contrast, PBGC is limited to taking over a plan in poor financial condition to prevent it from accruing additional liabilities.
PBGC has no authority to seize assets of the plan sponsor, who is responsible for adequately funding the plan. The current financial challenges facing the PBGC reflect, in part, the significant changes that have taken place in employer-sponsored pensions since the passage of ERISA in 1974. Given the decline in defined benefit plans over the last two decades, it is time to make changes in the rules governing the defined benefit system and reexamine PBGC's role as an insurer. In recent years an irreconcilable tension has arisen between PBGC's role as a social insurance program and its mandate to remain financially self-sufficient. Unless something reverses the decline in defined benefit pension coverage, PBGC may have a shrinking plan and participant base to support the program in the future and may face the likelihood of a participant base concentrated in certain potentially more vulnerable industries. In this regard, effectively addressing the uncertainties associated with cash balance and other hybrid pension plans may serve to help slow the decline in defined benefit plans. One of the underlying assumptions of the current insurance program has been that there would be a financially stable and growing defined benefit system. However, the current financial condition of PBGC and the plans that it insures threaten the retirement security of millions of Americans because termination of severely underfunded plans can significantly reduce the benefits participants receive. It also poses risks to the general taxpaying public, who ultimately could be made responsible for paying benefits that PBGC is unable to afford. To help PBGC manage the risks to which it is exposed, Congress may wish to grant PBGC additional authorities to set premiums or limit the guarantees on the benefits it pays to those plans it assumes. 
However, these changes would not be sufficient in themselves because the primary threat to PBGC and the defined benefit pension system lies in the failure of the funding rules to ensure that retirement benefit obligations are adequately funded. In any event, any legislative changes to address the challenges facing PBGC should provide plan sponsors with incentives to increase plan funding, improve the transparency of the plan's financial information, and provide a means to hold sponsors accountable for funding their plans adequately. However, policymakers must also be careful to balance the need for changes in the current funding rules and premium structure with the possibility that any changes could expedite the exit of healthy plan sponsors from the defined benefit system while contributing to the collapse of firms with significantly underfunded plans. The long-term financial health of PBGC and its ability to protect workers' pensions is inextricably bound to the underlying change in the nature of the risk that it insures, and implicitly to the prospective health of the defined benefit system. Options that serve to revitalize the defined benefit system could stabilize PBGC's financial situation, although such options may be effective only over the long term. Our greater challenge is to fundamentally consider the manner in which the federal government protects the defined benefit pensions of workers in this increasingly risky environment. We look forward to working with Congress on this crucial subject. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the Subcommittee may have. For further information, please contact Barbara Bovbjerg at (202) 512-7215 or George Scott at (202) 512-5932. Other individuals making key contributions to this testimony included David Eisenstadt, Benjamin Federlein, and Joseph Applebaum. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
More than 34 million workers and retirees in about 30,000 single-employer defined benefit plans rely on a federal insurance program managed by the Pension Benefit Guaranty Corporation (PBGC) to protect their pension benefits. However, the insurance program's long-term viability is in doubt, and in July 2003 we placed the single-employer insurance program on our high-risk list of federal programs with significant vulnerabilities. In fiscal year 2004, PBGC's single-employer pension insurance program incurred a net loss of $12.1 billion, and the program's accumulated deficit increased to $23.3 billion from $11.2 billion a year earlier. Further, PBGC estimated that underfunding in single-employer plans exceeded $450 billion as of the end of fiscal year 2004. This testimony provides GAO's observations on (1) some of the structural problems that limit PBGC's ability to protect itself from risk and (2) steps PBGC has taken to forecast and manage the risks that it faces. Existing laws governing pension funding and premiums have not protected PBGC from accumulating a significant long-term deficit and have exposed PBGC to "moral hazard" from the companies whose pension plans it insures. The pension funding rules, under the Employee Retirement Income Security Act (ERISA) and the Internal Revenue Code (IRC), were not designed to ensure that plans have the means to meet their benefit obligations in the event that plan sponsors run into financial distress. Meanwhile, in the aggregate, premiums paid by plan sponsors under the pension insurance system have not adequately reflected the financial risk to which PBGC is exposed. Accordingly, PBGC faces moral hazard, and defined benefit plan sponsors, acting rationally and within the rules, have been able to turn significantly underfunded plans over to PBGC, thus creating PBGC's current deficit. Despite the challenges it faces, PBGC has proactively attempted to forecast and mitigate its risks. 
The Pension Insurance Modeling System, created by the PBGC to forecast claim risk, has projected a high probability of future deficits for the agency. However, the accuracy of the projections produced by the model is unclear. Through its Early Warning Program, PBGC negotiates with companies that have underfunded pension plans and that engage in business transactions that could adversely affect their pensions. Over the years, these negotiations have directly led to billions of dollars of pension plan contributions and other protections by the plan sponsors. Moreover, PBGC has changed its investment strategy and decreased its equity exposure to better shield itself from market risks. However, despite these efforts, the agency ultimately lacks the authority, unlike other federal insurance programs, to effectively protect itself.
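The actual Pension Insurance Modeling System simulates claim flows under thousands of combinations of macroeconomic and plan-specific conditions. The basic idea of stochastic deficit forecasting can be sketched in miniature as follows. The starting position ($23.3 billion accumulated deficit) and annual premium income (roughly $1.5 billion) come from figures cited in this testimony; the claims distribution, its volatility, and the function itself are entirely hypothetical simplifications.

```python
import random

def simulate_deficit_probability(n_scenarios=10_000, years=10,
                                 annual_premium_income=1.5,   # $ billions; cited FY2004 figure
                                 mean_annual_claims=1.2,      # $ billions; hypothetical
                                 claims_volatility=2.0,       # $ billions; hypothetical
                                 starting_position=-23.3,     # FY2004 accumulated deficit
                                 seed=0):
    """Toy PIMS-style Monte Carlo: estimate the probability that the
    program remains in deficit after a given horizon. Each scenario
    draws random annual claims and nets them against premium income.
    Illustrative only; not the PIMS methodology."""
    rng = random.Random(seed)
    in_deficit = 0
    for _ in range(n_scenarios):
        position = starting_position
        for _ in range(years):
            # Claims cannot be negative; draw from a (hypothetical) normal.
            claims = max(0.0, rng.gauss(mean_annual_claims, claims_volatility))
            position += annual_premium_income - claims
        if position < 0:
            in_deficit += 1
    return in_deficit / n_scenarios

prob = simulate_deficit_probability()
print(f"Estimated probability of deficit after 10 years: {prob:.0%}")
```

Even this toy model shows why PBGC's reported forecasts were pessimistic: premium income near claims, combined with a large starting deficit, leaves almost no simulated path that returns to surplus within a decade.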
Currently located within the Department of the Treasury, the CDFI Fund was authorized in 1994 and has received appropriations totaling $225 million through fiscal year 1998. The 1995 Rescissions Act limited the Fund to 10 full-time equivalent (FTE) staff for fiscal years 1995 and 1996, but for fiscal year 1998, the Fund has an FTE ceiling of 35 staff. As of May 8, 1998, the Fund had 27 full-time and 2 part-time staff. The Fund's overall performance is subject to the Results Act. This act seeks to improve the management of federal programs and their effectiveness and efficiency by establishing a system for agencies to set goals for performance and measure the results. Under the act, federal agencies must develop a strategic plan that covers a period of at least 5 years and includes a mission statement, long-term general goals, and strategies for reaching those goals. Agencies must report annually on the extent to which they are meeting their annual performance goals and identify the actions needed to reach or modify the goals they have not met. The Fund completed its final plan in September 1997 and is currently considering revisions to that plan. While the CDFI Fund has established a system for measuring awardees' performance in the CDFI program, this system emphasizes activities over accomplishments and does not always include measures for key aspects of goals. In addition, baseline information that was available to the Fund seldom appears in the Fund's performance measurement schedule. A more comprehensive performance measurement system would provide better indicators for monitoring and evaluating the program's results. The CDFI Fund's progress in developing performance goals and measures for awardees in the CDFI program is mixed. On the one hand, the Fund has entered into assistance agreements with most of the 1996 awardees. 
As the CDFI Act requires, these assistance agreements include performance measures that (1) the Fund negotiated with the awardees and (2) are generally based on the awardees' business plans. On the other hand, the Fund's performance goals and measures fall somewhat short of the standards for performance measures established in the Results Act. Although awardees' assistance agreements are not subject to the Results Act, the act establishes performance measurement standards for the federal government, including the CDFI Fund. In the absence of specific guidance on performance measures in the CDFI Act, we drew on the Results Act's standards for discussion purposes. The assistance agreements called for under the CDFI Act require awardees to comply with multiple provisions, including the accomplishment of agreed-upon levels of performance by the final evaluation date, typically 5 years in the future. As of January 1998, the Fund had entered into assistance agreements with 26 of the 31 awardees for 1996. We found, on the basis of our six case studies, that the Fund had negotiated performance goals that met the statutory requirements and established goals for awardees that match the Fund's intended purpose, extensively involved the awardees in crafting their planned performance, and produced a flexible schedule for designing goals and measures. According to the Results Act, both activity measures, such as the number of loans made, and accomplishment measures, such as the number of new low-income homeowners, are useful measures. However, the act regards accomplishment measures as more effective indicators of a program's results because such measures identify the impact of the activities performed. Our survey of CDFIs nationwide, including the 1996 awardees, and our review of six case study awardees' business plans showed that CDFIs use both types of measures to assess their progress toward meeting their goals. 
Yet our review of the 1996 awardees' assistance agreements revealed a far greater use of activity measures. As a result, the assistance agreements focus primarily on what the awardees will do, rather than on how their activities will affect the distressed communities. According to most of the case study awardees, difficulties in isolating and measuring the results of community development efforts and concerns about the effects of factors outside the awardees' control inhibited the awardees' use of accomplishment measures. According to the Results Act, goals and measures should be related and clear. We found that most of the goals and measures were related; however, in some agreements, the measures did not address all key aspects of the goals. Finally, under the Results Act, clarity in performance measurement is best achieved through the use of specific units, well-defined terms, and baseline and target values and dates. While the measures in the agreements included most of these elements, they generally lacked baseline values and dates. Fund officials told us that they used baseline values and dates in negotiating the performance measures, but this information did not appear in the assistance agreements themselves. Therefore, without information contained in awardees' files, it is difficult to determine the level of increase or contribution the investment is intended to achieve. Refining the awardees' goals and measures to meet the Results Act will facilitate the Fund's assessment of the awardees' progress over time. The Fund is taking steps to avoid some of the initial shortcomings in future agreements and is seeking to enhance its expertise and staffing. 
Although the Fund has developed reporting requirements for awardees to collect information for monitoring their performance, it lacks documented postaward monitoring procedures for assessing their compliance with their assistance agreements, determining the need for corrective actions, and verifying the accuracy of the information collected. In addition, the Fund has not yet established procedures for evaluating the impact of awardees' activities. The effectiveness of the Fund's monitoring and evaluation systems will depend, in large part, on the quality of the information being collected through the required reports and the Fund's assessment of awardees' compliance and the impact of awardees' activities. Primarily because of statutorily imposed staffing restrictions in fiscal years 1995 and 1996 and subsequent departmental hiring restrictions, the Fund has had a limited number of staff to develop and implement its monitoring and evaluation systems. In fiscal year 1998, it began to hire management and professional staff to develop monitoring and evaluation policies and procedures. The Fund has established quarterly and annual reporting requirements for awardees in their assistance agreements. Each awardee is to describe its progress toward its performance goals, demonstrate its financial soundness, and maintain appropriate financial information. However, according to an independent audit recently completed by KPMG Peat Marwick, the Fund lacks formal, documented postaward monitoring procedures to guide Fund staff in their oversight of awardees' activities. In addition, Fund officials indicated that they had not yet established a system to verify information submitted by awardees through the reporting processes. Fund staff told us that they had not developed postaward monitoring procedures because of the CDFI program's initial staffing limits. 
Now that additional staff are in place, they have begun to focus their attention on monitoring issues, including those identified by KPMG Peat Marwick. The CDFI statute also specifies that the Fund is to annually evaluate and report on the activities carried out by the Fund and the awardees. According to the Conference Report for the statute, the annual reports are to analyze the leveraging of private assistance with federal funds and determine the impact of spending resources on the program's investment areas, targeted populations, and qualified distressed communities. To date, the Fund has published two annual reports, the second of which contains an estimate of the private funding leveraged by the CDFI funding. This estimate is based on discussions with CDFIs and CDFI trade association representatives, not on financial data collected from the awardees. Anecdotal information from three of our six case study awardees indicates that the CDFI funding has assisted them in leveraging private funding. One awardee estimated that the Fund's award generated more than three times its value in private investment. In part because it has been only 15 months since the Fund made its first investment in a CDFI, information on performance in the CDFI program is not yet available for a comprehensive evaluation of the program's impact, such as the Conference Report envisions. The two annual reports include anecdotes about individuals served by awardees and general descriptions of awardees' financial services and initiatives, but they do not evaluate the impact of the program on its investment areas, targeted populations, and qualified distressed communities. Satisfying this requirement will entail substantial research and analysis, as well as expertise in evaluation and time for the program's results to unfold. Fund officials have acknowledged that their evaluation efforts must be enhanced, and they have planned or taken actions toward improvement. 
For instance, the Fund has developed preliminary program evaluation options, begun hiring staff to conduct or supervise the research and evaluations, and revised the assistance agreements for the 1997 awardees to require that they annually submit a report to assist the Fund in evaluating the program's impact. However, because the Fund has not yet finished hiring its research and evaluation staff, it has not yet reached a final decision on what information it will require from the awardees to evaluate the program's impact. The Fund also has to determine how it will integrate the results of awardees' reported performance measurement or recent findings from related research into its evaluation plans. As to be expected, reports of accomplishments in the CDFI program are limited and preliminary. Because most CDFIs signed their assistance agreements between March 1997 and October 1997, the Fund has just begun to receive the required quarterly reports, and neither the Fund nor we have verified the information in them. Through February 1998, the Fund had received 41 quarterly reports from 19 CDFIs, including community development banks, community development credit unions, nonprofit loan funds, microenterprise loan funds, and community development venture capital funds. The different types of CDFIs support a variety of activities, whose results will be measured against different types of performance measures. Given the variety of performance measures for the different types of CDFIs, it is difficult to summarize the performance reported by the 19 CDFIs. To illustrate cumulative activity in the program to date, we compiled the data reported for the two most common measures--the total number of loans for both general and specific purposes and the total dollar value of these loans. According to these data, the 19 CDFIs made over 1,300 loans totaling about $52 million. 
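The compilation described above--summing only the two measures common to all CDFI types across heterogeneous quarterly reports--can be sketched as follows. The individual records below are hypothetical illustrations (chosen so the totals match the roughly 1,300 loans and $52 million cited in the testimony); the actual quarterly-report data are not reproduced here.

```python
# Hypothetical quarterly-report records from different CDFI types. Only the
# two measures common to all types--loan count and loan dollar value--are
# aggregated, mirroring the compilation described in the testimony.
# These records are invented for illustration.
reports = [
    {"cdfi": "community development bank", "loans": 410, "loan_dollars": 18_200_000},
    {"cdfi": "nonprofit loan fund",        "loans": 650, "loan_dollars": 21_500_000},
    {"cdfi": "microenterprise loan fund",  "loans": 240, "loan_dollars": 12_300_000},
]

total_loans = sum(r["loans"] for r in reports)
total_dollars = sum(r["loan_dollars"] for r in reports)
print(f"{total_loans} loans totaling ${total_dollars / 1e6:.0f} million")
# → 1300 loans totaling $52 million
```

Because performance measures differ across CDFI types, only measures reported by every type can be meaningfully summed this way; type-specific measures would have to be reported separately.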
In addition, the CDFIs reported providing consumer counseling and technical training to 480 individuals or businesses. In the BEA program, as of January 1998, about 58 percent of the banks had completed the activities for which they received the awards and the Fund had disbursed almost 80 percent of the $13.1 million awarded in fiscal year 1996. Despite this level of activity, the impact of the program on banks' investments in distressed communities is difficult to assess. Our case studies of five awardees and interviews with Fund officials indicate that although the BEA awards encouraged some banks to increase their investments, other regulatory or economic incentives were equally or more important for other banks. In addition, more complete data on some banks' investments are needed to guarantee that the increases in investments in distressed areas rewarded by the BEA program are not being offset by decreases in other investments in these distressed areas. The Fund has tried to measure the program's impact by estimating the private investments leveraged through the BEA awards. However, this estimate includes banks' existing, as well as increased, investments in distressed areas. Furthermore, the Fund cannot be assured that the banks' increased investments remain in place because it does not require banks to report any material changes in these investments. Although the CDFI statute does not require awardees to reinvest their awards in community development, banks have reported to the Fund that they have done so, thereby furthering the BEA program's objectives, according to the Fund. Finally, the Fund does not have a postaward evaluation system for assessing the impact of the program's investments. Our analysis indicated that the impact of the BEA award varied at our five case study banks. One bank reported that it would not have made an investment in a CDFI without the prospect of receiving an award from the Fund. 
In addition, a CDFI Fund official told us that some CDFIs marketed the prospect of receiving a BEA award as an incentive for banks to invest in them. We found, however, that the prospect of an award did not influence other banks' investment activity. For example, two banks received awards totaling over $324,000 for increased investments they had made or agreed to make before the fiscal year 1996 awards were made. Banks have multiple incentives for investing in CDFIs and distressed areas. Therefore, it is difficult to isolate the impact of the BEA award from the effects of other incentives; however, the receipt of a BEA award is predicated on a bank's increasing investments in community development. Discussions with our five case study banks indicated, however, that regulatory and economic incentives have a greater influence on these banks' investments than the prospect of a BEA award. A reason that the banks frequently cited for investing in CDFIs and distressed areas was the need to comply with the Community Reinvestment Act (CRA). Economic considerations also motivated the banks. One bank said that such investments lay the groundwork for developing new markets, while other banks said that the investments help them maintain market share in areas targeted by the BEA program and compete with other banks in these areas. Two banks cited improved community relations as reasons for their investments. Some banks indicated that, compared with these regulatory and economic incentives, the BEA award provides a limited incentive, especially since it is relatively small and comes after a bank has already made at least an initial investment. According to Fund officials, a small portion of the 1996 awardees do not maintain the geographic data needed to determine whether any new investments in distressed areas are coming at the expense of other investments--particularly agricultural, consumer, and small business loans--in such areas. 
Concerned about the validity of the net increases in investments in distressed areas reported by awardees, the Fund required the 1996 awardees that did not maintain such data to certify that, to the best of their knowledge, they had not decreased investments in distressed areas that were not linked to their BEA award. While most banks maintain the data needed to track their investments by census tract and can thus link their investments with distressed areas, a few do not do so for all types of investments. In an attempt to measure an impact of the BEA program, the Fund has reported that awards of $13.1 million in 1996 leveraged over $125 million in private investment--a leveraging ratio of almost 10 to 1. This estimate includes banks' existing investments in CDFIs and direct investments in distressed areas. When we included only the banks' new direct investments, we calculated a leveraging ratio of 7 to 1. The Fund does not require awardees to notify the Fund of material changes in their investments after awards have been made. Therefore, it does not know how long investments made under the program remain in place. We found, for example, that a CDFI in which one of our case study banks had invested was dissolved several months after the bank received a BEA award. The CDFI later repaid a portion of the bank's total investment. Because the Fund does not require banks to report their postaward activity, the Fund was not aware of this situation until we brought it to the attention of Fund officials. After hearing of the situation, a Fund official contacted the awardee and learned that the awardee plans to reinvest the funds in another CDFI. Even though this case has been resolved, Fund officials do not have a mechanism for determining whether investments made under the program remain in place. 
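The leveraging arithmetic above can be laid out explicitly. The award and total-investment figures are those cited in the testimony; the dollar amount of new direct investments is a hypothetical value implied by GAO's recalculated 7-to-1 ratio, not a figure reported by the Fund.

```python
# Leveraging ratios for the fiscal year 1996 BEA awards, in millions of
# dollars. The awards and total private investment figures come from the
# testimony; new_direct_private is a hypothetical amount implied by GAO's
# 7-to-1 recalculation, used here for illustration.
awards = 13.1                 # fiscal year 1996 BEA awards
total_private = 125.0         # Fund's estimate: existing plus new investments
new_direct_private = 91.7     # hypothetical: new direct investments only

fund_ratio = total_private / awards        # counts existing investments too
gao_ratio = new_direct_private / awards    # counts only new direct investments

print(f"Fund's reported ratio: {fund_ratio:.1f} to 1")   # prints 9.5 to 1 ("almost 10 to 1")
print(f"GAO's recalculated ratio: {gao_ratio:.1f} to 1") # prints 7.0 to 1
```

The gap between the two ratios illustrates why including preexisting investments in a leveraging estimate overstates the amount of private investment that the awards themselves induced.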
The CDFI statute does not require awardees to reinvest their awards in community development; however, awardees have reported to the Fund, and we found through our case studies, that many of them are reinvesting at least a portion of their awards in community development. Reinvestment in community development is consistent with the goals of the BEA program. While the Fund initially established reporting requirements for the 1996 awardees designed to assess the impact of their investments in CDFIs and distressed communities, it discontinued these requirements in 1997 when it found that the accomplishments reported by awardees could not be linked to outcomes in their communities. As a result, the Fund has no system in place for determining the program's impact. As previously noted, accomplishments in community development are difficult to isolate and measure. For example, the effects of investment in community development may not be readily distinguishable from other influences and may not be observable for many years. Nevertheless, the banks we visited are using a variety of measures to assess the effects of their investments, some of which track accomplishments. Such measures include loan repayment rates and reports on the occupancy rates and financial performance of housing projects financed by the banks. However, the awardees are no longer required to report this information to the Fund. The CDFI Fund has more work to do before its strategic plan can fulfill the requirements of the Results Act. Though the plan covers the six basic elements required by the Results Act, these elements are generally not as specific, clear, and well linked as the act prescribes. However, the Fund is not unique in struggling to develop its strategic plan. We have found that federal agencies generally require sustained effort to develop the dynamic strategic planning processes envisioned by the Results Act. 
Difficulties that the Fund has encountered--in setting clear and specific strategic and performance goals, coordinating cross-cutting programs, and ensuring the capacity to gather and use performance and cost data--have faced many other federal agencies. Under the Results Act, an agency's strategic plan must contain (1) a comprehensive mission statement; (2) agencywide strategic goals and objectives for all major functions and operations; (3) strategies, skills, and technologies and the various resources needed to achieve the goals and objectives; (4) a relationship between the strategic goals and objectives and the annual performance goals; (5) an identification of key factors, external to the agency and beyond its control, that could significantly affect the achievement of the strategic goals and objectives; and (6) a description of how program evaluations were used to establish or revise strategic goals and objectives and a schedule for future program evaluations. The Office of Management and Budget (OMB) has provided agencies with additional guidance on developing their strategic plans. In its strategic plan, the Fund states that its mission is "to promote economic revitalization and community development through investment in and assistance to community development financial institutions (CDFIs) and through encouraging insured depository institutions to increase lending, financial services and technical assistance within distressed communities and to invest in CDFIs." Overall, the Fund's mission statement generally meets the requirements established in the Results Act by explicitly referring to the Fund's statutory objectives and indicating how these objectives are to be achieved through two core programs. Each agency's strategic plan is to set out strategic goals and objectives that delineate the agency's approach to carrying out its mission. The Fund's strategic plan contains 5 goals and 13 objectives, with each objective clearly related to a specific goal. 
However, OMB's guidance suggests that strategic goals and objectives be stated in a manner that allows a future assessment to determine whether they were or are being achieved. Because none of the 5 goals (e.g. to strengthen and expand the national network of CDFIs) and 13 objectives (e.g. increase the number of organizations in training programs) in the strategic plan include baseline dates and values, deadlines, and targets, the Fund's goals and objectives do not meet this criterion. The act also requires that an agency's strategic plan describe how the agency's goals and objectives are to be achieved. OMB's guidance suggests that this description address the skills and technologies, as well as the human, capital, information, and other resources, needed to achieve strategic goals and objectives. The Fund's plan shows mixed results in meeting these requirements. On the positive side, it clearly lists strategies for accomplishing each goal and objective--establishing better linkages than the strategic plans of agencies that simply listed objectives and strategies in groups. On the other hand, the strategies themselves consist entirely of one-line statements. Because they generally lack detail, most are too vague or general to permit an assessment of whether their accomplishment will help achieve the plan's strategic goals and objectives. For example, it is unclear how the strategy of "emphasizing high quality standards in implementing the CDFI program" will specifically address the objective of "strengthening and expanding the national network of CDFIs." The Fund's strategic plan lists 22 performance goals, which are clearly linked to specific strategic goals. However, the performance goals, like the Fund's strategic goals and objectives, generally lack sufficient specificity, as well as baseline and end values. These details would make the performance goals more tangible and measurable. 
For example, one performance goal is to "increase the number of applicants in the BEA program." This goal would be more useful if it specified the baseline number of applicants and projected an increase over a specified period of time. Also, some performance goals are stated more as strategies than as desired results. For example, it is not readily apparent how the performance goal of proposing legislative improvements to the BEA program will support the related strategic goal of encouraging investments in CDFIs by insured depository institutions. The Fund's strategic plan only partially meets the requirement of the Results Act and of OMB's guidance that it describe key factors external to the Fund and beyond its control that could significantly affect the achievement of its objectives. While the plan briefly discusses external factors that could materially affect the Fund's performance, such as "national and regional economic trends," these factors are not linked to specific strategic goals or objectives. The Results Act defines program evaluations as assessments, through objective measurement and objective analysis, of the manner and extent to which federal programs achieve intended objectives. Although the Fund's plan does discuss various evaluation options, it does not discuss the role of program evaluations in either setting or measuring progress against all strategic goals. Also, the list of evaluation options does not describe the general scope or methodology for the evaluations, identify the key issues to be addressed, or indicate when the evaluations will occur. Our review of the Fund's strategic plan also identified other areas that could be improved. For instance, OMB's guidance on the Results Act directs that federal programs contributing to the same or similar outcomes should be coordinated to ensure that their goals are consistent and their efforts mutually reinforcing. 
The Fund's strategic plan does not explicitly address the relationship of the Fund's activities to similar activities in other agencies or indicate whether or how the Fund coordinated with other agencies in developing its strategic plan. Also, the capacity of the Fund to provide reliable information on the achievement of its strategic objectives is, at this point, somewhat unclear. Specifically, the Fund has not developed its strategic plan sufficiently to identify the types and the sources of data needed to evaluate its progress in achieving its strategic objectives. Moreover, according to a study prepared by KPMG Peat Marwick, the Fund has yet to set up a formal system, including procedures, to evaluate, continuously monitor, and improve the effectiveness of the management controls associated with the Fund's programs. In closing, Mr. Chairman, our preliminary review has identified several opportunities for the Fund to improve the effectiveness of the CDFI and BEA programs and of its strategic planning effort. In our view, these opportunities exist, in part, because the Fund is new and is experiencing the typical growing pains associated with setting up an agency--particularly one that has the relatively complex and long-term mission of promoting economic revitalization and community development in low-income communities. In addition, staffing limitations have delayed the development of monitoring and evaluation systems. Recently, however, the Fund has hired several senior staff--including a director; two deputy directors, one of whom also serves as the chief financial officer; an awards manager; a financial manager; and program managers--and is reportedly close to hiring an evaluations director. While it is too early to assess the impact of filling these positions, the new managers have initiated actions to improve the programs and the strategic plan. 
Our report may include recommendations or options to further improve the operations of the CDFI Fund. We provided a copy of a draft of this testimony to the Fund for its review and comment. The Fund generally agreed with the facts presented and offered several clarifying comments, which we incorporated. We performed this review from September 1997 through May 1998 in accordance with generally accepted government auditing standards. Mr. Chairman, this concludes our testimony. We would be pleased to answer any questions that you or Members of the Committee may have at this time.
GAO discussed the results of its ongoing review of the administration of the Community Development Financial Institutions (CDFI) Fund, focusing on the first year's performance of the CDFI and the Bank Enterprise Award (BEA) programs and opportunities for improving their effectiveness. GAO noted that: (1) as of January 1998, the Fund had entered into assistance agreements with 26 of the 31 CDFIs that received awards in 1996; (2) these agreements include performance goals and measures that were based on the business plans submitted by awardees in their application packages and negotiated between the Fund and the awardees, as the CDFI Act requires; (3) GAO found that the performance measures in the assistance agreements generally assess activities rather than the accomplishments reflecting the activities' results; (4) according to Fund officials and CDFIs in GAO's case studies, this emphasis on activity measures is due, in part, to difficulties in isolating and assessing the results of community development initiatives, which may not be observable for many years and may be subject to factors outside the awardees' control; (5) GAO further found that although the performance measures in the assistance agreements are generally related to specific goals, they do not always address the key aspects of the goals, and most assistance agreements lack baseline data that would facilitate tracking progress over time; (6) although the Fund has disbursed about 80 percent of the fiscal year 1996 BEA award funds, it is difficult to determine the extent to which the program has encouraged the 38 awardees to increase their investments in distressed communities; (7) GAO's case studies of five awardees and interviews with Fund officials indicate that although the prospect of receiving a BEA award prompted some banks to increase their investments, it had little or no effect on other banks; (8) GAO found that, in general, other regulatory or economic incentives exerted a stronger 
influence on banks' investments than the BEA award; (9) in addition, some banks do not collect all of the data on their activities needed to guarantee that increases in investments under the BEA program are not being offset by decreases in other investments in these distressed areas; (10) the CDFI Fund's strategic plan contains all of the elements required by the Government Performance and Results Act and the Office of Management and Budget's (OMB) associated guidance, but these elements generally lack the clarity, specificity, and linkage with one another that the act envisioned; and (11) although the plan identifies key external factors that could affect the Fund's mission, it does not relate these factors to the Fund's strategic goals and objectives and does not indicate how the Fund will take the factors into account when assessing awardees' progress toward goals.
FDA's mission is to protect the public health by ensuring the safety and effectiveness of human drugs marketed in the United States. The agency's responsibilities begin years before a drug is marketed and continue after a drug's approval. FDA oversees the drug development process. Among other things, FDA reviews drug sponsors' proposals for conducting clinical trials, assesses drug sponsors' applications for the approval of new drugs, and publishes guidance for industry on various topics. Once drugs are marketed in the United States, FDA has the responsibility to continue to monitor their safety and efficacy and to enforce drug sponsors' compliance with applicable laws and regulations. FDA also annually publishes a list of drugs approved for sale within the United States, the Approved Drug Products with Therapeutic Equivalence Evaluations, also known as the Orange Book. In addition, since February 2005, FDA has provided updates via the Electronic Orange Book on brand-name drug approvals the month they are approved and on generic drug approvals daily. FDA's Center for Drug Evaluation and Research is responsible for ensuring the safety and efficacy of drugs. Within this center, the Office of New Drugs is responsible for reviewing new drug applications (NDA), while the Office of Generic Drugs is responsible for reviewing applications for generic drugs, which are abbreviated new drug applications (ANDA). NDAs and ANDAs must be submitted by sponsors and approved by FDA before a new brand-name or generic drug can be marketed in the United States. As part of the approval process, FDA reviews proposed labeling for both brand-name and generic drugs; a drug cannot be marketed without an FDA-approved label. Among other things, a drug's label contains information for health care providers and specifically cites the conditions and populations the drug has been approved to treat, as well as effective doses of the drug. 
Sponsors of both new brand-name and generic drugs are required to submit annual reports to FDA that include, for example, updates about the safety and effectiveness of their drugs; these annual reports are one way FDA monitors the safety and efficacy of drugs once they are available for sale. Manufacturers may submit an ANDA to FDA to seek approval to market a generic version of a drug after the period of exclusivity and any patents for the brand-name drug expire. FDAAA contained three provisions related to antibiotic effectiveness and innovation, each of which required FDA to take certain actions. One provision required FDA to identify breakpoints "where such information is reasonably available," to periodically update them, and to make these up-to-date breakpoints publicly available within 30 days of identifying or updating them. A second provision extended the duration of market exclusivity from 3 years to 5 years for new drugs that meet certain detailed, scientific criteria: the application must be for a new drug consisting of a single enantiomer of a previously approved racemic drug, must be submitted for approval in a different therapeutic category than the previously approved drug, and must meet certain other requirements. FDAAA specified that FDA use the therapeutic categories established by the United States Pharmacopeia--specifically, the categories developed by this organization that were in effect on the date of the enactment of FDAAA--to determine whether an application has been submitted in a different therapeutic category than the previously approved drug. The provision applies to new drugs of any type that meet the criteria, not just antibiotics. Pub. L. No. 110-85, § 1113, 121 Stat. 823, 976-77 (2007). A third provision required FDA to convene a public meeting to discuss which serious and life-threatening infectious diseases potentially qualify for existing grants and incentives, such as those available under the Orphan Drug Act, that are intended to counter some of the business risks a drug sponsor must undertake when developing antibiotics. 
For example, the Orphan Drug Act provides incentives including a 7-year period of marketing exclusivity to sponsors of approved orphan drugs, a tax credit of 50 percent of the cost of conducting human clinical testing, research grants for clinical testing of new therapies to treat orphan diseases, and exemption from the fees that are typically charged when sponsors submit NDAs for FDA's review. Sponsors may also be eligible for a faster review of their applications for market approval. Sponsors of all drugs are required to keep the information on their drug labels accurate. Unlike labels for most other types of drugs, labels for antibiotics contain breakpoints. These breakpoints may continue to change over time, and the sponsors of antibiotics are tasked with the additional responsibility of maintaining up-to-date breakpoints on labels. Although sponsors are required to maintain up-to-date breakpoints on their labels, FDA has acknowledged that many antibiotics are labeled with outdated breakpoints. Outdated breakpoints can result in health care providers unknowingly selecting ineffective treatments, which can also contribute to additional bacterial resistance to antibiotics. Monitoring breakpoints on labels and keeping them up to date can be a challenging process. The most accurate way to monitor and determine if a breakpoint on a label is up to date is to conduct both clinical trials and laboratory studies, but these can be difficult and expensive and may not be appropriate in all circumstances. For example, clinical trials require the enrollment of large numbers of patients, which may be difficult to achieve, to ensure an understanding of a drug's safety and effectiveness against specific bacteria. Enrollment may also be difficult for clinical trials involving antibiotic-resistant bacteria. 
In clinical trials for a new cancer drug, for example, researchers can target the drug to a patient population with a specific type of cancer; such targeting may not be possible for antibacterial drugs. There are no rapid diagnostic tests available to help a researcher identify patients with antibiotic-resistant infections who would be eligible for such trials. Laboratory studies, such as susceptibility testing, can be less costly than clinical trials; however, they still require significant microbiology expertise. Susceptibility testing reveals an antibiotic's breakpoint--that is, its ability to kill or inhibit the growth of a specific bacterial pathogen. As such, the results of such tests can provide a sponsor with some data to help update its antibiotic label with more accurate information. Guidelines for developing appropriate susceptibility tests are available from standards-setting organizations, such as the Clinical and Laboratory Standards Institute. Sponsors may obtain information from such organizations to help them conduct susceptibility tests for their antibiotics or otherwise determine if the breakpoints on their antibiotic labels are up to date. According to FDA officials, much of this information is available free online and at conferences. When new information becomes available that may cause the label to become inaccurate, false, or misleading--such as information on increased bacterial resistance to antibiotics--drug sponsors are responsible for updating their drug labels. Label changes of this type require FDA's approval. A sponsor must submit an application supplement to FDA with evidence to support the need for a label change. A sponsor's responsibility for maintaining a drug's label persists throughout the life cycle of the drug--that is, from the time the drug is first approved until FDA withdraws its approval of the drug. 
A drug is not considered withdrawn until FDA publishes a Federal Register notice officially announcing its withdrawal. A sponsor may also decide to discontinue manufacturing a drug without withdrawal. Sponsors that decide to discontinue marketing a drug are still responsible for maintaining accurate labels. Unlike a drug that is withdrawn, a discontinued drug for which approval has not been withdrawn is one that the sponsor has stopped marketing, but that it may resume marketing without obtaining permission to do so from FDA. Discontinued drugs are identified as such in the discontinued section of the Orange Book. Federal regulations allow an ANDA's label to differ from the label of the corresponding reference-listed drug in certain ways, such as manufacturer name or expiration date. See 21 C.F.R. § 314.94(a)(8)(iv) (2011). Otherwise, generic drugs' labels are expected to match the label of the corresponding reference-listed drug, so sponsors of generic drugs are responsible for incorporating changes made to the reference-listed drug's label, such as up-to-date breakpoints, into their generic drugs' labels. A drug maintains its reference-listed drug designation until its approval is withdrawn or FDA finds that a discontinued reference-listed drug was withdrawn from the market for safety or effectiveness reasons. In either of these cases, FDA will designate a different drug as the reference-listed drug and publish this change in the Orange Book. FDA will generally designate the generic version of the drug with the largest market share as the new reference-listed drug. In this case, the labels of other generic versions of the drug will be expected to follow the label of the newly designated generic, reference-listed drug. FDA has not taken sufficient steps to implement the FDAAA provision regarding preserving antibiotic effectiveness by ensuring that antibiotic labels contain up-to-date breakpoints. In 2008 FDA requested that sponsors respond to the agency regarding whether their antibiotics' labels included up-to-date breakpoints, but FDA has not yet confirmed whether the majority of these labels are accurate. 
FDA also took the step of issuing guidance in 2009 on sponsors' responsibility to maintain up-to- date breakpoints on their antibiotics' labels, but the agency has not been systematically tracking sponsors' responsiveness. Although FDA has taken steps to update breakpoint information on antibiotic labels, as of November 2011, it has not confirmed that the information is up to date for most reference-listed antibiotics. As one step in FDA's efforts to implement the FDAAA provision regarding antibiotic effectiveness, FDA identified 210 antibiotics and, in January and February 2008, sent letters to the sponsors of these drugs reminding them of the importance of regularly updating the breakpoints on their antibiotic labels. In addition, the letters requested that sponsors evaluate and maintain the currency of breakpoints included on their labels and within 30 days submit evidence to FDA showing that the breakpoints were either current or needed revision. Sponsors that could not submit this evidence within 30 days were advised to provide the agency with a timetable for when they expected to respond with this information. If sponsors determined that their antibiotic labels needed revision, the agency's letter instructed them to submit a label supplement. FDA's letters also highlighted to sponsors that all subsequent annual reports should include an evaluation of these breakpoints and document the status of any needed changes to the antibiotic label. As of November 2011, over 3.5 years after FDA sent its letters, 146, or 70 percent, of the 210 antibiotics are still labeled with breakpoints that have not been updated or confirmed to be up to date. 
For 78 of the 146 antibiotics, FDA has not yet received a submission regarding the currency of the breakpoints; for 12 of the antibiotics, the sponsors' submissions are pending FDA review; and for 56 of the antibiotics, FDA determined that the sponsors' submission was inaccurate or incomplete and therefore requested a revision or additional information. Thus far, FDA has determined that 64, or 30 percent, of the 210 antibiotics have up-to-date breakpoints (see fig. 1). (See app. II for more details on the status of the labels of the 210 antibiotics.) One reason so many antibiotics still have breakpoints that FDA has not confirmed to be up to date is that many sponsors have not fulfilled the responsibilities outlined in FDA's 2008 letters. FDA officials stated that the agency has followed up with sponsors that had not responded at all to the 2008 letters; however, it did not begin to do so until 2010--2 years after it asked sponsors to respond within 30 days--and two sponsors have still not informed FDA when they intend to submit the requested information. FDA officials told us that they routinely monitor the status of all requested submissions that they have not yet received. In particular, they told us that they have contacted sponsors to set time frames for submitting the requested information, and that they follow up with sponsors that do not submit information within the time frames established. FDA has not pursued regulatory action against any of these sponsors. FDA officials stated that the agency could take regulatory action against a sponsor whose label contained outdated breakpoints, as federal regulations require all sponsors of drugs to maintain accurate labels. However, the officials added that in order for FDA to take regulatory action against a sponsor, FDA would first have to be able to prove that the breakpoint on the antibiotic label was not up to date. 
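The status breakdown above can be cross-checked with simple arithmetic; the variable names in this sketch are ours, not FDA's:

```python
# Arithmetic check of the reported status of the 210 antibiotics
# FDA wrote to in 2008 (figures as of November 2011).
no_submission = 78         # no submission received on breakpoint currency
pending_review = 12        # submissions awaiting FDA review
needs_revision = 56        # submissions found inaccurate or incomplete
confirmed_up_to_date = 64  # breakpoints confirmed current by FDA

unconfirmed = no_submission + pending_review + needs_revision
total = unconfirmed + confirmed_up_to_date

print(unconfirmed)                       # 146 antibiotics not yet confirmed
print(total)                             # 210 antibiotics in all
print(round(100 * unconfirmed / total))  # about 70 percent unconfirmed
```

The 146 unconfirmed antibiotics are thus roughly 70 percent of the 210, matching the report's figures.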
Another reason many antibiotics still have breakpoints on their labels that FDA has not confirmed to be up to date is that FDA faced difficulty in keeping up with the workload that resulted from sponsors' breakpoint submissions. According to FDA officials, it should take 1 to 3 months for the agency to review such submissions when staff are available and the submissions include all of the necessary information. However, it took FDA longer than a year to review many of the submissions it received, and as of November 2011, FDA still had a backlog of five submissions from 2008. FDA officials identified four factors that have contributed to the lengthy time between when the agency received a submission and when it completed its review. First, FDA officials explained that the submissions sent in response to the agency's 2008 letter generated a larger number of supplements than normal, adding significantly to FDA's existing workload of label supplements. Second, some of the submissions required significantly more resources to review than typical label supplements, because of challenging scientific issues or difficulties obtaining data. Third, some of the sponsors' submissions were inaccurate or did not include all necessary information. Fourth, FDA staff spent a significant amount of time answering questions from sponsors, tracking responses, and following up when needed. Some of the sponsors we obtained comments from expressed frustration at how long it took FDA to review their submissions, especially given that bacterial resistance to antibiotics is not static and breakpoints may continue to change over time. Specifically, 3 of the 26 sponsors we obtained comments from stated that they are concerned that the breakpoints they submitted may be outdated by the time FDA completes its review. 
One of these sponsors told us that it was advised by FDA to refrain from submitting new information before the agency completed its review of the sponsor's previously submitted label supplement. According to the sponsor, FDA officials said that providing new information would result in the sponsor's submission going to the end of FDA's review queue. While the fact that breakpoints on the labels of 146 antibiotics may not be up to date is troubling, there are additional reasons for concern. First, nearly all of these 146 antibiotics are reference-listed drugs--thus, in addition to the labels of these drugs, the labels of the generic antibiotics that follow the labels of the reference-listed antibiotics are also uncertain. Second, because bacterial resistance to antibiotics is not static, some of the breakpoints for the 64 antibiotics that FDA has confirmed through its review as up to date may have since become out of date. Third, FDA's list of 210 drugs did not include a complete list of all the antibiotics for which sponsors are responsible for evaluating and maintaining the breakpoints on their labels. For example, FDA did not include any brand-name drugs that were discontinued at the time the agency compiled its list, and also did not include some antibiotics that were reference-listed drugs at that time. FDA officials were unsure how many antibiotics were omitted, but estimated that the number was low. Given the uncertainty surrounding the 146 antibiotics whose breakpoints have not yet been confirmed as well as the antibiotics omitted from FDA's 2008 request to sponsors, more than two-thirds of reference-listed antibiotic labels may contain out-of-date breakpoints. 
Another step FDA took to implement the FDAAA provision regarding preserving the effectiveness of antibiotics was to issue guidance that reminded sponsors of the requirement to maintain accurate labels, and thus, their responsibility to keep information about breakpoints up to date. FDA officials stated that, in part because the agency received questions in response to its 2008 letters, officials determined that it would be useful to issue guidance. FDA first issued draft guidance in June 2008 and finalized it a year later, in June 2009. The guidance specified that the sponsors of brand-name and generic antibiotics that are designated as reference-listed drugs are responsible for evaluating the breakpoints on their labels at least annually and should include this evaluation in the sponsor's annual report to FDA. When we asked for clarification as to whether the guidance language limited this responsibility to the sponsors of those brand-name antibiotics that are reference listed, FDA officials told us that the guidance applied to sponsors of all brand-name antibiotics--both those that were and were not reference listed, including those that are discontinued--as well as sponsors of reference-listed, generic antibiotics. The guidance also described approaches sponsors could take to determine up-to-date breakpoints for their antibiotics. While FDA's 2008 letters to certain sponsors communicated much of the same information, FDA's guidance was the first time that FDA specified (1) which sponsors are responsible for evaluating their breakpoints, including that this responsibility applied to sponsors of generic, reference-listed antibiotics, and (2) the frequency with which sponsors needed to perform these evaluations. FDA has not been systematically tracking whether sponsors have been responsive to the guidance. Specifically, FDA does not know what percentage of antibiotic annual reports have included an evaluation of breakpoints. 
At our request, FDA reviewed a small sample of annual reports, and this review suggested that sponsors' responsiveness to the annual reporting responsibility is low. FDA reviewed the most recent annual reports for 19 of the 64 antibiotics that FDA confirmed to be labeled with up-to-date breakpoints after receiving a response to the agency's 2008 letters; specifically, FDA looked at the subset of the 64 antibiotics that were also brand-name drugs and for which the sponsor had submitted its most recent annual report electronically. FDA found that 10 of the 19, or just over half, of these annual reports included an evaluation of the antibiotics' breakpoints. Because this sample was drawn from a relatively responsive group of antibiotics--that is, those for which a sponsor already responded to FDA's 2008 letter with a submission regarding the currency of their breakpoints--the overall rate for all antibiotics is likely even lower. Three of the 19 antibiotics in FDA's sample had annual reports that noted that a label supplement was recently approved but had not been implemented in time to be reflected in the report. Because bacterial resistance to antibiotics is not static, sponsors that do not follow the guidance by evaluating their breakpoints on a regular basis and sharing the results of their evaluation with FDA are unlikely to be able to maintain accurate labels. FDA officials stated that they plan to track compliance with the guidance in one of the agency's drug databases by January 1, 2012. FDA plans to have all annual reports for antibiotics reviewed by FDA microbiologists, who will use a standardized form to document the assessment of the antibiotics' breakpoints. In addition, the agency plans to track in an FDA database whether the annual report included an evaluation of the antibiotics' breakpoints. FDA plans to follow up with sponsors that do not include a complete evaluation of antibiotic breakpoints in their annual reports to inform them about what information was missing. 
Some sponsors, particularly sponsors of generic, reference-listed antibiotics, may not be following FDA's guidance because they are confused as to whether the responsibility to evaluate and maintain up-to- date breakpoints on their labels, as described in the guidance, applies to them. Fifteen sponsors we obtained comments from manufactured at least one generic, reference-listed antibiotic--all were responsible for evaluating and maintaining their breakpoints. Of these 15, 7 sponsors expressed some form of confusion regarding their responsibility. Five of these 7 sponsors stated that their strategy for ensuring that the breakpoints on their generic antibiotic labels were up to date was to follow the breakpoints on the label of the corresponding brand-name drug. Two of the 5 were even more specific and added that their generic antibiotics were only designated reference-listed drugs "by default" and that their strategy was to follow the label of the brand-name drug--even if the brand-name drug was discontinued. One other sponsor was unsure whether any of its generic antibiotics were reference-listed drugs or what implications such a designation would have. A seventh sponsor understood the responsibilities associated with having a generic antibiotic that was designated a reference-listed drug, but was under the impression that its generic antibiotic was not a reference-listed drug. FDA officials told us that it is a sponsor's responsibility to routinely monitor FDA's Orange Book to determine if any of its drugs become designated a reference-listed drug. However, FDA's June 2009 guidance is silent on sponsors' responsibility to consistently monitor the Orange Book to determine if one of their drugs has become, or ceases to be, a reference-listed drug. The officials acknowledged that there is no process or mechanism for notifying sponsors when one of their drugs becomes, or is no longer, a reference-listed drug. 
The guidance was also not explicit about FDA's view that the responsibility described in the guidance applied to sponsors of discontinued brand-name antibiotics as well. The guidance also explained that FDA intended to comply with FDAAA's requirement that it identify, periodically update, and make publicly available up-to-date breakpoints by using two approaches. First, the guidance explained that the agency would review breakpoints referenced in the labeling of individual drug products and post any approved labels on the Internet. FDA officials told us that this is the approach FDA has thus far used to make up-to-date breakpoints publicly available. Second, FDA's guidance also stated that it would, when appropriate, recognize standards used to determine breakpoints from one or more standards-setting organizations and publish these in the Federal Register. FDA has not yet used this approach and did not mention a specific plan or timetable to do so. FDA officials told us that publishing this information in the Federal Register could make the review process quicker as sponsors would then have ready access to standards already recognized by FDA. For example, publishing this information may be helpful for some sponsors, such as those that do not have the microbiology expertise to update their own breakpoints. While FDA officials said that they have been making updated breakpoints publicly available, the agency's guidance regarding these alternative approaches may be causing confusion among some sponsors that are anticipating the publication of breakpoints from standards-setting organizations in the Federal Register. This was the case for one sponsor we obtained comments from, which stopped purchasing data from a standards-setting organization because it believed FDA would be publishing recognized standards in the Federal Register. 
The FDAAA provision that grants extended market exclusivity has not resulted in any sponsors submitting NDAs for antibiotics that qualify for this exclusivity. Additionally, as required by FDAAA, FDA held a public meeting to discuss incentives, such as those available under the Orphan Drug Act, to encourage antibiotic innovation. However, no changes were made to the availability of current incentives nor were any new incentives established following the public meeting. To date, drug sponsors, including those we received comments from, have not submitted any NDAs for antibiotics as a result of the FDAAA provision granting additional market exclusivity for new drugs containing single enantiomers of previously approved racemic drugs. According to FDA officials, they have received very few inquiries regarding this provision and as of November 2011, no NDAs for antibiotics have been submitted that would qualify for this exclusivity. FDA officials noted that because it is a narrowly targeted provision, they are unsure if any existing racemic drug could qualify. None of the drug sponsors from which we obtained comments said that this FDAAA provision provided a sufficient incentive to develop a new antibiotic of this type. FDA officials stated that it was unlikely that this provision would have an impact on antibiotic innovation. The officials stated that the requirement that the single enantiomer of the approved drug be in a separate therapeutic category would be challenging for antibiotic sponsors to meet. The officials noted that this market exclusivity was not limited to antibiotics. One drug sponsor we spoke with stated that it is pursuing this market exclusivity for a drug that is not an antibiotic. The lack of NDAs for antibiotics submitted in response to this FDAAA provision is consistent with the overall trend in the approval of innovative antibiotic NDAs. 
The number of annual approvals of antibiotic new molecular entities (NME) from 2001 through 2010 has not changed significantly since the passage of FDAAA. Specifically, the annual number of antibiotic NME approvals was two or fewer for the years prior to, and one or fewer for the years following, the enactment of FDAAA. Because drug development is a lengthy process--sponsors spend, on average, 15 years developing a new drug--it may be too early to ascertain the full impact of FDAAA on antibiotic innovation. However, the extended exclusivity provided for in FDAAA is only available to sponsors submitting qualifying NDAs before October 1, 2012. As required by FDAAA, FDA held a public meeting on April 28, 2008, to explore whether and how existing incentives and potential new incentives could be applied to promote the development of antibiotics as well as to discuss whether infectious diseases may qualify for grants or other incentives that may promote innovation. The meeting provided an opportunity to gather input from stakeholders and address their concerns. However, although potential new incentives and changes to current ones were suggested at the meeting, many of these suggestions--such as tax incentives and extended market exclusivities--would require a statutory change. One of the discussion topics at the public meeting related to the circumstances under which antibiotics could qualify for incentives provided under the Orphan Drug Act, which is intended to stimulate the development of drugs for rare diseases--conditions that affect fewer than 200,000 people in the United States. Following the public meeting, FDA responded in writing to an inquiry from one stakeholder to clarify that an antibiotic could qualify for an orphan drug designation when the drug's use is restricted to the treatment of a small population of patients with an infection caused by a specific pathogen. Our examination of FDA data suggests that orphan drug designation is not common for antibiotics. 
These data show that the annual number of antibiotics that received an orphan drug designation from 2001 to 2007--when FDAAA was enacted--was three drugs or fewer each year. The number of antibiotics that received orphan drug designation following FDAAA's enactment in 2007 has remained constant at this rate through 2010. Additionally, not all antibiotics that have been awarded orphan drug designation have been or will apply to be approved for marketing. Of the 15 antibiotics that received an orphan drug designation from 2001 through 2010, only 1 was approved for marketing as of November 2011. In addition to discussing the applicability of the Orphan Drug Act, the agency gathered input during the public meeting from drug sponsors and other parties--such as those in academia and professional associations--on serious and life-threatening infectious diseases, antibiotic resistance, and incentives for antibiotic innovation. The incentives mentioned as useful mechanisms to encourage the innovation and marketing of antibiotics were both financial and regulatory in nature and are summarized in table 1. The growing public health threat associated with bacterial resistance to antibiotics makes the development of new antibiotics critical. Although FDAAA contained a provision to encourage the development of certain antibiotics, no sponsor has submitted an application for a new drug that meets the law's specific criteria. FDAAA also recognized that up-to-date breakpoints are vital to preserving the effectiveness of antibiotics. Antibiotic labels containing out-of-date breakpoints can lead clinicians to choose less effective treatments and provide additional opportunities for bacteria to develop resistance. Out-of-date breakpoints on labels of reference-listed antibiotics also have a ripple effect on the accuracy of the labels of other antibiotics because other sponsors must match the labels of the corresponding reference-listed drugs. 
However, more than 4 years after FDAAA's enactment, there continues to be uncertainty about the accuracy of the labels of more than two-thirds of reference-listed antibiotics, as well as those of the generic antibiotics that are required to follow these drugs' labels. The steps FDA has taken since the enactment of FDAAA have been insufficient to ensure that all antibiotics have up-to-date breakpoints on their labels. The agency has acted with neither decisiveness nor a sense of urgency. First, FDA has not yet completed reviewing the submissions it received in response to its 2008 request, and many sponsors still have not provided FDA with needed information. Further, FDA officials told us that they sent letters to sponsors of 210 antibiotics. These sponsors were responsible for evaluating and maintaining, and if necessary, updating the breakpoints on their labels; however, FDA's request was not made to all the antibiotic sponsors that held this responsibility. While the agency did follow up with sponsors, this was not done in a timely manner. FDA's review of sponsors' submissions has also been time-consuming; given that sponsors are expected to provide information on these breakpoints annually, it is unclear how the agency plans to keep up with this workload if sponsors' fulfillment of this responsibility improves. Second, FDA's issuance of guidance to specify the responsibilities of antibiotics' sponsors to evaluate breakpoints appears to have been unsuccessful at encouraging all sponsors to fulfill these responsibilities. The comments we received from drug sponsors indicate that some antibiotic sponsors remain confused about this responsibility--either because they did not know that their antibiotics were reference-listed drugs or because they interpreted the June 2009 FDA guidance differently than FDA intended. 
Without formal notification that their antibiotics have been designated as reference-listed drugs and a clarification of their responsibilities, sponsors may continue to be unaware of, or have differing interpretations of, a responsibility that ultimately helps preserve antibiotic effectiveness. The pace of FDA's actions--many of which remain incomplete--means that the majority of antibiotics we examined may have out-of-date breakpoints on their labels, which could result in the prescription of ineffective treatments by health care providers and further contribute to antibiotic resistance. This requires concerted action on the part of the agency to help preserve the effectiveness of currently available antibiotics. We recommend that the Commissioner of FDA take the following six actions to help ensure that antibiotics are accurately labeled: expeditiously review sponsors' submissions regarding the breakpoints on their antibiotics' labels; take steps to obtain breakpoint information from sponsors that have not yet submitted breakpoint information in response to the 2008 letters sent by the agency; ensure that all sponsors responsible for the annual review of breakpoints on their antibiotics' labels--including sponsors of discontinued brand-name antibiotics and of reference-listed antibiotics designated since 2008--have been reminded of their responsibility to evaluate and maintain up-to-date breakpoints; establish a process to track sponsors' submissions of breakpoint information included in their annual reports to ensure that such information is submitted to FDA and reviewed by the agency in a timely manner; notify sponsors when one of their drugs becomes or ceases to be a reference-listed drug; and clarify or provide new guidance on which antibiotic sponsors are responsible for annually evaluating and maintaining up-to-date breakpoints on drug labels. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix IV. 
In its comments, HHS acknowledged the importance of updating antibacterial breakpoints and said that FDA is committed to ensuring that breakpoint information on drug labels is up to date. Although HHS did not specifically indicate whether it agreed with our recommendations, the agency stated that it will consider all of them as it continues to improve its processes to ensure that antibacterial drug labels contain up-to-date breakpoint information. HHS also stated that FDA has already taken steps to expedite the review of sponsor submissions regarding updated breakpoint information, which is consistent with our recommendations. In addition, HHS expressed concern that our report did not fully capture the challenges associated with updating the labels of antibacterial drugs. HHS summarized the approach FDA used to address the provision in FDAAA related to antibiotic effectiveness and highlighted the challenges sponsors face in obtaining currently relevant and adequate scientific data to assess antibiotic breakpoints. However, we believe that our report accurately describes the same actions that HHS outlined in its comments. Similarly, we believe that our report acknowledges the challenges surrounding sponsors' responsibility to maintain up-to-date breakpoints. We recognize that these challenges pose difficulties for both sponsors and FDA. However, FDA is ultimately responsible for ensuring that drugs, including antibiotics, are safe and effective. Despite the agency's efforts, 4 years have elapsed since FDA first began contacting drug sponsors regarding the accuracy of the breakpoints on 210 of their antibiotics' labels. Yet there continues to be uncertainty about the accuracy of the labels for two-thirds of these drugs. Given the serious threat to public health posed by antibiotic resistance, we believe that it is important that our recommendations are implemented, in order to help preserve the effectiveness of these critical drugs. 
Finally, HHS provided us with new information, reporting that as of December 12, 2011, the labeling for 66 antibacterial drugs has been updated or found to be correct. This is an increase of 2 antibacterial drugs, up from the 64 antibacterial drugs that are cited in our report. We include this information here, but did not revise our report, as HHS did not provide a complete update regarding all of the 210 antibiotics discussed in this report. HHS also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. As one step in FDA's efforts to implement the provision in the Food and Drug Administration Amendments Act of 2007 regarding antibiotic effectiveness, FDA identified 210 antibiotics for which sponsors were responsible for evaluating and maintaining and, if necessary, updating the breakpoints on their antibiotics' labels. In January and February of 2008, FDA sent letters to the sponsors of these drugs reminding them of the importance of regularly updating the breakpoints on their antibiotic labels. In addition, the letters requested that sponsors evaluate and maintain the currency of breakpoints included on their labels and within 30 days submit evidence to FDA showing that the breakpoints were either current or needed revision. Of the 210 antibiotics, 126 were brand-name antibiotics and 84 were generic antibiotics, manufactured by 39 different sponsors. 
Table 2 identifies these 39 sponsors and whether the sponsor held a brand-name antibiotic, a generic antibiotic, or both. [Table 2: for each sponsor, the number of antibiotics held under new drug applications (NDA) and abbreviated new drug applications (ANDA).] Appendix III: Timeline of FDA Implementation of Certain Food and Drug Administration Amendments Act Provisions. See FDA, Guidance for Industry: Updating Labeling for Susceptibility Test Information in Systemic Antibacterial Drug Products and Antimicrobial Susceptibility Testing Devices (June 2009). In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Alison Binkowski; Ashley R. Dixon; Cathleen Hamann; Lisa Motley; Patricia Roy; Laurie F. Thurber; and Jocelyn Yin made key contributions to this report.
Antibiotics are critical drugs that have saved millions of lives. Growing bacterial resistance to existing drugs and the fact that few new drugs are in development are public health concerns. The Food and Drug Administration Amendments Act of 2007 (FDAAA) required the Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), to identify, periodically update, and make publicly available up-to-date breakpoints, the concentrations at which bacteria are categorized as susceptible to an antibiotic. Breakpoints are a required part of an antibiotic's label and are used by providers to determine appropriate treatments. FDAAA provided a financial incentive for antibiotic innovation and required FDA to hold a public meeting on antibiotic incentives and innovation. FDAAA directed GAO to report on the impact of these provisions on new drugs. This report (1) assesses FDA's efforts to help preserve antibiotic effectiveness by ensuring breakpoints on labels are up to date and (2) examines the impact of the antibiotic innovation provisions. GAO examined FDA data, guidance, and other documents; interviewed FDA officials; and obtained information from drug sponsors, such as manufacturers, that market antibiotics. FDA has not taken sufficient steps to ensure that antibiotic labels contain up-to-date breakpoints. FDA designates certain drugs as "reference-listed drugs" and the sponsors of these drugs play an important role in ensuring the accuracy of drug labels. Reference-listed drugs are approved drug products to which generic versions are compared. As of November 2011, FDA had not yet confirmed whether the breakpoints on the majority of reference-listed antibiotics labels were up to date. FDA contacted sponsors of 210 antibiotics in early 2008 to remind sponsors of the importance of maintaining their labels and requested that they assess whether the breakpoints on their drugs' labels were up to date. 
Sponsors were asked to submit evidence to FDA showing that the breakpoints were either current or needed revision. As of November 2011, over 3.5 years after FDA contacted sponsors, the agency had not yet confirmed whether the breakpoints on the labels of 70 percent, or 146 of the 210 antibiotics, were up to date. FDA has not ensured that sponsors have fulfilled the responsibilities outlined in the early 2008 letters. For those submissions FDA has received, it has often taken over a year for FDA to complete its review. Officials attributed this delay to reviewers' workload, challenging scientific issues or difficulties in obtaining needed data, and incomplete submissions. FDA also issued guidance to clarify sponsors' responsibility to evaluate and maintain up-to-date breakpoints. The guidance reminded sponsors that they are required to maintain accurate labels and stated that certain sponsors should submit an evaluation of breakpoints on their antibiotic labels to FDA annually. However, FDA has not been systematically tracking whether sponsors are providing these annual updates. Some sponsors remain confused about their responsibility to evaluate and maintain up-to-date breakpoints. At GAO's request, FDA reviewed a small sample of annual reports and determined that few sponsors appear to be responsive to the guidance. The FDAAA provisions related to antibiotic innovation have not resulted in the submission of new drug applications for antibiotics. FDAAA extended the period of time that sponsors of new drugs that meet certain criteria have exclusive right to market the drug. According to FDA officials, the agency has received very few inquiries regarding this provision and, as of November 2011, no new drug applications for antibiotics have been submitted that would qualify for this exclusivity. None of the drug sponsors GAO received comments from said that this provision provided sufficient incentive to develop a new antibiotic of this type. 
FDAAA also required that FDA hold a public meeting to discuss whether and how existing or potential incentives could be applied to promote the development of antibiotics. Both financial and regulatory incentives were discussed at FDA's 2008 meeting, including tax incentives for research and development and providing greater regulatory clarity during the drug approval process. GAO recommends that the Commissioner of FDA take steps to help ensure antibiotic labels contain up-to-date information, such as by expediting the agency's review of breakpoint submissions. HHS said it will consider implementing GAO's recommendations.
The Army classifies its vehicles on the basis of such factors as function and physical characteristics. For example, tracked vehicles (Abrams Tanks and Bradley Fighting Vehicles) are classified as Army combat vehicles; wheeled vehicles (trucks, automobiles, cycles, and buses) are classified as Army motor vehicles. Within the Army motor vehicle grouping, vehicles are further separated into tactical and non-tactical categories and within the tactical grouping, into light, medium, and heavy classifications based primarily on vehicle weight. The M939 series trucks are accounted for as part of the Army motor vehicle's medium tactical fleet. The Army reviews operational requirements for its vehicle fleet in an effort to improve readiness. From January 1983 through October 1993, the Army upgraded its 5-ton medium tactical fleet by purchasing about 34,900 M939s to replace aging and obsolete trucks. The new truck, designed to operate on and off road, maintained the basic design of its predecessors but came equipped with such first-time standard equipment as air-brakes and automatic transmissions. At present, the Army has three variations and nearly 40 different models of the M939 in its inventory. Depending on the model, the truck performs multiple duties that include hauling cargo, collecting refuse, transporting troops, and operating as a tractor or wrecker. The last M939s were fielded in late 1993. Should vehicles or equipment prove dangerous or unsafe to operate, the Army Safety Center, Transportation School and Center, and Tank-Automotive and Armaments Command (TACOM) are responsible for identifying problems and disseminating information. Among other duties, the commands collect and evaluate information from accident investigations and field reports. They also issue Army-wide safety alerts, precautionary messages, and other information warning of identified dangers with equipment and vehicles. 
Our two analyses and the analysis conducted by the Army Safety Center all involved comparisons of different types of accident data collected over different time frames. Nevertheless, all of the analyses showed that the M939 had a higher accident rate than each type of comparison vehicle. In our first analysis, we reviewed data from January 1987 through June 1998 and compared selected M939 accident statistics with those of the rest of the Army motor vehicle fleet. We reviewed the accident categories in terms of "fatal accidents," defined as any accident event in which at least one occupant of an Army motor vehicle died; "occupant deaths," defined as the total number of Army motor vehicle occupants killed; "rollovers," defined as any vehicle that did not remain upright as the result of an accident; and "rollover deaths," defined as those occurring to occupants of Army motor vehicles that rolled over as a result of an accident. In analyzing this selected accident information compiled by the Army Safety Center, we found the frequency of M939 accidents high in each instance. For the 11-1/2 year period reviewed, the M939 series truck inventory averaged 26,991, or about 9 percent of the average annual Army motor vehicle inventory of about 314,000 vehicles, and accounted for about 15 percent of the total Army motor vehicle accidents. Appendix I shows the actual figures by year, 1987-1998. Our comparison of M939 accident statistics with accident statistics for the rest of the Army motor vehicle fleet showed that the M939 accounted for about 34 percent of all Army motor vehicle fatal accident events, and 34 percent of all Army motor vehicle occupant deaths. Comparative rollover statistics revealed much the same. The M939 rollovers accounted for 17 percent of the total Army motor vehicle rollovers, and 44 percent of the total Army motor vehicle rollover fatalities. Figure 2 shows these accident statistics. 
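The fleet-share comparison above can be sketched numerically. This is an illustrative calculation using only the averages quoted in the text (an average of 26,991 M939s in a fleet of about 314,000 Army motor vehicles); the over-representation ratios are derived here for illustration and are not figures stated in the report.

```python
# Illustrative cross-check of the fleet-share figures quoted above.
# Inputs are the averages cited in the text; the over-representation
# ratios are derived for illustration only.
FLEET_SHARE = 26_991 / 314_000  # avg M939 inventory / avg Army motor vehicle fleet

def overrepresentation(outcome_share: float) -> float:
    """How many times the M939's share of an outcome exceeds its fleet share."""
    return outcome_share / FLEET_SHARE

print(f"fleet share: {FLEET_SHARE:.1%}")                     # ~8.6% ("about 9 percent")
print(f"all accidents: {overrepresentation(0.15):.1f}x")     # ~1.7x its fleet share
print(f"fatal accidents: {overrepresentation(0.34):.1f}x")   # ~4.0x
print(f"rollover deaths: {overrepresentation(0.44):.1f}x")   # ~5.1x
```

Put differently, a truck type holding roughly 9 percent of the fleet accounted for fatal-accident and rollover-death shares four to five times larger than its share of vehicles.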
In our second analysis, we used Department of Transportation published data for years 1987-1996 and compared the accident rate for M939s with the rate for single-unit medium and heavy commercial trucks (which are physically similar to M939s). According to an agency official, the Department of Transportation defines "fatal crashes" as any event in which someone is killed in a crash--vehicle occupant or otherwise--and "truck occupant fatalities" as a fatality of an occupant of a single-unit truck. These comparisons revealed that the accident rates for the M939 were substantially higher than those found for the commercial trucks. However, Army officials point out that commercial trucks are driven almost exclusively on paved roads; the M939 is driven on both paved and unpaved roads. We found that over the 10-year period, 1987-1996, the frequency rates of fatal crashes per million miles driven for M939s averaged about seven times higher than those for commercial trucks. The M939 accident rate ranged from a high of 12 to a low of 3 times higher than the commercial truck rate. In 1988, the M939's accident rate was 0.23 and the commercial truck rate was 0.02--about 12 times higher; and in 1992, the M939 accident rate was 0.056 and the commercial truck rate was 0.018--about 3 times higher. Figure 3 shows these statistics. We also found that, over this same 10-year period, the M939 occupant fatality rate averaged about 30 times higher than those for commercial trucks. The M939 occupant fatality rate ranged from a high of 59 to a low of 13 times higher than the commercial truck rate. In 1995, the M939 occupant fatality rate was 0.165 and the commercial truck rate was 0.0028--about 59 times higher; and in 1989, the M939 rate was 0.046 and the commercial truck rate was 0.0035--about 13 times higher. Figure 4 shows these statistics. The Army Safety Center's analysis reviewed accident data from October 1990 through June 1998. 
In this analysis, the accident rate of the M939 was compared with accident rates for another series of trucks--the M34/M35 series 2-1/2 ton truck. Army officials advised us that this truck was most comparable with the M939. The analysis reviewed accidents categorized as Class A mishaps. Army Regulation 385-40 defines a "Class A" mishap as an accident in which total property damage equals $1 million or more; an Army aircraft or missile is destroyed, missing, or abandoned; or an injury or occupational illness results in a fatality or permanent total disability. Because an M939 costs significantly less than $1 million, almost all Class A mishaps involving an M939 are so classified because they result in a death or permanent total disability. The Army Safety Center's analysis found accident rates for M939s to be higher than those of the comparison vehicles. The analysis showed M939 Class A mishap frequency rates per million miles driven to be 3 to 21 times higher than those of similar M34/M35 series 2-1/2 ton trucks. For example, the 1995 Class A mishap rate for the M939 was 0.21 and for the 2-1/2 ton M34/35s, it was 0.01 per million miles driven--about a 21-fold difference. Figure 5 shows this comparison. The Army has initiated a program to improve the M939's safety performance and, according to TACOM estimates, plans to spend around $234 million for various modifications. Most of the modifications are the direct result of corrective actions suggested in studies. These studies focused on identifying root causes of M939 accidents based on information contained in accident investigation reports. On the basis of the studies' findings, the Army concluded that the overall truck design was sound but that some modifications were necessary to improve the truck's safety performance. Planned modifications include $120 million for upgrading the trucks' tires, altering brake proportioning specifications, and adding anti-lock brake kits. 
Other modifications include $114 million to install cabs equipped with rollover crush protection systems and improve accelerator linkage. The modifications, for the most part, will be completed by 2005, with the M939s remaining in service during the process. To identify possible mechanical problems or performance limitations contributing to M939 accidents, the Army conducted two studies and a computer-simulated modeling analysis. Although M939 trucks have been in service since 1983, Army Safety Center personnel stated that no aberrant accident statistics appeared before early 1992. However, during 1990-91, with the increased operating tempo associated with Desert Shield/Desert Storm, there was an increase in fatal accidents and deaths attributable to M939s. In August 1992, TACOM issued Safety of Use Message 92-20 discussing M939 performance limitations. This message warned of the truck's sensitive braking system--specifically that, when the truck is lightly loaded and on wet pavement, aggressive braking could cause rear wheel lockup, engine stall-out, power steering inoperability, and uncontrolled skidding. The Army began taking a closer look at the M939's accident history after circulating Safety of Use Message 92-20. Between 1993 and 1995, TACOM, the Army Safety Center, and the Army Transportation School and Center initiated a review of M939 accident reports and began putting together evidence that validated the need for the studies. Also, in an effort to reduce the number and severity of M939 accidents, the Army issued Ground Precautionary Message 96-04 in December 1995, limiting M939s to maximum speeds of 40 miles per hour on highway and secondary roads and 35 miles per hour over cross-country roads. Between September 1995 and June 1997, TACOM conducted two studies and a computer simulation analysis. The studies, among other things, recreated and analyzed repetitive events cited in many accident investigation reports and discussed in Safety of Use Message 92-20. 
The two studies and modeling analysis focused on tire and air brake performance under various conditions. On the basis of the project's findings, TACOM concluded that the overall truck design was sound and that nothing was significantly different between the M939 and its commercial counterparts produced during the same time period. However, the studies found that improvements to some vehicle subsystems would enhance the truck's safety performance. The tire study, completed in October 1996, together with other information relating to M939 usage, confirmed that the M939s were being used on-road more than originally planned. The original intent was for M939s to be driven on-road 20 percent and off-road 80 percent of the time. In some Army units, especially reserve units, this no longer held true. Some units were using the M939s on-road as much as 80 to 90 percent of the time. The truck's original tire, designed for maximum efficiency during off-road usage, performed less efficiently on-road, especially during inclement weather. The increased on-road usage raised the probability of the M939's being involved in an accident. On the basis of this scenario, TACOM tested several different tire designs, seeking to improve on-road traction under all environmental conditions while retaining required off-road capabilities. The study recommended that all M939s be equipped with radial tires. The brake study, completed in June 1997, concluded that the air brake system may lock up more quickly than drivers expect, especially when the vehicle is lightly loaded. In tests, the Army found that aggressively applied pressure to the brake pedal caused the sequence of events found in many accident reports: wheel lockup, engine stall-out, loss of power steering, and uncontrolled skidding, often culminating in rollover. The probability of spin-out and rollover increased on wet or inclined surfaces. 
To lessen the likelihood of wheel lockup and the resulting chain of events, the study suggested (1) modification of all brake proportioning systems and (2) installation of anti-lock braking kits. The modeling analysis used computer technology to recreate the truck's probable behavioral characteristics in a simulated environment and also to validate conditions being tested in the studies. According to TACOM officials, the modeling results correlated with actual testing results compiled during the tire and brake studies. Besides the recommended improvements from the studies, the Army identified others it considered necessary. The Army decided to replace M939 cabs when they wore out with ones outfitted with a rollover crush protection system and also to modify the accelerator pedal resistance on the A2 variant of the M939. Both TACOM and Army Safety Center personnel stated that installation of the reinforced cab rollover crush protection system, while not an industry standard or required by law, would better protect M939 occupants in the event of a rollover. According to TACOM officials, the scheduled M939 modifications will cost around $234 million. The Army estimates that tire upgrades, brake proportioning, and anti-lock brake system improvements will cost about $120 million or about $3,800 per truck; adding cab rollover protection and modifying the A2's accelerator linkage will cost another $114 million or an additional $3,600 per truck. With respect to the current schedule for completing M939 modifications, brake proportioning and accelerator linkage equipment modifications will be completed by the end of fiscal year 1999; all remaining modifications, except for cab replacement, are scheduled for completion around 2005. Because the truck cabs will be replaced as they wear out, a precise schedule for completing this modification cannot be estimated at this time. 
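The per-truck cost figures above can be cross-checked with a quick calculation. This is a minimal sketch, assuming the roughly 31,800-truck fleet that TACOM reported for June 1998; the helper name is illustrative.

```python
# Rough cross-check of the per-truck modification costs cited above,
# assuming TACOM's reported June 1998 fleet of about 31,800 trucks.
FLEET_SIZE = 31_800

def per_truck(total_cost: int) -> int:
    """Approximate cost per truck, rounded to the nearest $100."""
    return round(total_cost / FLEET_SIZE / 100) * 100

print(per_truck(120_000_000))  # tire/brake-proportioning/anti-lock package: 3800
print(per_truck(114_000_000))  # cab rollover protection/accelerator package: 3600
```

Dividing each package's total by the fleet size reproduces the report's "about $3,800" and "about $3,600" per-truck figures.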
Even though some of the M939s have been in service for 15 years, the decision to spend $234 million on modifications and equipment upgrades is based on the need to improve the vehicles' safety because the Army expects these trucks to be in service for at least 30 years. According to TACOM, the June 1998 M939 inventory was around 31,800 trucks. All M939s will be equipped with radial tires, reproportioned brakes, anti-lock brake kits, and reinforced replacement cabs. However, the accelerator linkage improvements are needed only on the 16,800 A2-variant trucks. Table 1 shows the schedule for the planned modifications. Although most scheduled modifications will not be completed until fiscal year 2005 or later, TACOM and Army Safety Center personnel noted that accident rates have declined significantly since the reduced speed limits were instituted by the December 1995 precautionary message. Figure 6 shows the drop in the number of mishaps since 1995. Army officials believe the modifications being made to the M939s will improve their safety performance and reduce severe accidents, rollovers, and fatalities. In written comments on a draft of this report (see app. III), DOD stated that it concurred with this report and noted that the report accurately describes problems the Army found to be causing M939 accidents. To analyze the accident history of the M939 series 5-ton tactical vehicle, we obtained specific information from the Army Safety Center, Fort Rucker, Alabama; TACOM, Warren, Michigan; the Department of Transportation, Federal Highway Administration, Washington, D.C.; and the Department of the Army, Washington, D.C. To identify any accident anomalies associated with the M939s, we conducted two analyses and reviewed another conducted by the Army Safety Center. Our first analysis compared selected M939 accident statistics with similar information for the overall Army motor vehicle fleet (of which M939s are a subset). 
Our second analysis compared M939 accident statistics per million miles driven to Department of Transportation accident statistics for comparable commercial trucks. The Army Safety Center study we reviewed compared various M939 accident frequency rates per million miles driven with rates for comparable military tactical trucks. The number of years used in each comparison varied on the basis of the data available. Army motor vehicle fleet to M939 comparisons did not include events prior to 1987 because some accident statistics were not readily available. Our comparison of rates of M939 fatal accident events and vehicle occupant fatalities with rates for corresponding commercial sector trucks was limited to 1987-1996 due to the unavailability of accident data for commercial sector vehicles after 1996. Lastly, the Army Safety Center study comparing M939 Class A accident rates with rates for other similar Army tactical vehicles only included events occurring between October 1990 and June 1998. The extent to which other factors, such as human error, driver training, and off-road versus on-road usage, may have contributed to disparate accident rates was beyond the scope of this review. To assess Army initiatives directed at identifying M939 performance, mechanical, or systemic problems and limitations, as well as recommended corrective actions, we obtained or reviewed relevant Army studies. We also interviewed officials at the Army Safety Center and TACOM about these studies but did not assess or validate the findings, estimated costs, or recommendations resulting from these studies. Although we worked with personnel from the Army Safety Center, TACOM, Department of Transportation, and the Department of the Army during data gathering and reviewed those results for reasonableness, accuracy, and completeness, we did not validate the accuracy of accident statistics contained in various databases or other published information. 
However, this data is used to support the management information needs of both internal and external customers and is periodically reviewed internally by each organization for accuracy, completeness, and validity. We conducted our review from July 1998 through February 1999 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable William Cohen, Secretary of Defense; the Honorable Louis Caldera, Secretary of the Army; and interested congressional committees. Copies will also be made available to other interested parties upon request. Please contact me on (202) 512-5140 should you or your staff have any questions concerning this report. Major contributors to this report were Carol R. Schuster; Reginald L. Furr, Jr.; Kevin C. Handley; and Gerald L. Winterlin. [Appendix table: annual occupant fatalities and millions of miles driven, by year, 1987-1996.]
Pursuant to a congressional request, GAO reviewed the Army's M939 series 5-ton tactical cargo truck, focusing on the: (1) extent to which accidents involving the truck have occurred; and (2) results of Army studies on the truck's design and its plans to address any identified deficiencies. GAO noted that: (1) GAO's analyses and an Army analysis indicate a higher rate of accidents involving the M939 series 5-ton tactical cargo truck than other comparison vehicles; (2) GAO's analysis of January 1987 through June 1998 accident data showed that, while M939s made up an average of about 9 percent of the Army motor vehicle fleet during that time, about 34 percent of the fleet's accidents resulting in fatalities of vehicle occupants involved these trucks; (3) 44 percent of accidents that involved a rollover and resulted in fatalities of vehicle occupants involved the M939; (4) GAO's comparison of Department of Transportation accident statistics and M939 accident statistics showed that over a 10-year period, the fatality rate for occupants of the M939 averaged about 30 times higher than the fatality rate for occupants of comparably sized commercial trucks; (5) an Army Safety Center analysis found that the chance of a fatality in a M939 was 3 to 21 times higher than in other similar military trucks in the Army motor vehicle fleet--the M34/M35 series 2 1/2 ton trucks; (6) the Army plans to spend an estimated $234 million on various modifications to improve the M939's safety and operational performance; (7) based on the results of studies into the root causes of M939 accidents, the Army concluded that the overall truck design was sound, but some modifications were necessary; (8) the Army plans to use the $234 million to add anti-lock brake kits, alter brake proportioning specifications, upgrade the truck's tires, install cab rollover crush protection, and modify the accelerator linkage; (9) most modifications will be completed by 2005; and (10) the M939s will remain in service 
as these modifications are made.
Foreign nationals who wish to come to the United States on a temporary basis and are not citizens of countries that participate in the Visa Waiver Program must generally obtain an NIV. U.S. law provides for the temporary admission of various categories of foreign nationals, who are known as nonimmigrants. Nonimmigrants include a wide range of visitors, such as tourists, foreign students, diplomats, and temporary workers who are admitted for a designated period of time and a specific purpose. There are dozens of specific types of NIVs that nonimmigrants can obtain for tourism, business, student, temporary worker, and other purposes. State manages the application process for these visas, as well as the consular officer corps and its functions, at over 220 visa-issuing posts overseas. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews; collection of biometrics (fingerprints and full-face photographs); cross-referencing an applicant's name and biometrics against multiple databases maintained by the U.S. government; and in-person interviews. Personal interviews with consular officers are required by law for most foreign nationals seeking NIVs. For an overview of the visa process, see figure 1. DHS sets visa policy, in consultation with State, and Commerce oversees the creation and implementation of strategies to promote tourism in the United States, such as the National Travel and Tourism Strategy called for in E.O. 13597. We have previously reported on visa delays at overseas posts: In April 2006, we testified that, of nine posts with wait times in excess of 90 days in February 2006, six were in Brazil, India, and Mexico. In July 2007, we reported that 20 posts said they experienced maximum monthly wait times in excess of 90 days at least once over the past year. More recently, State has reported long interview wait times in Brazil and China. 
For example, in June 2010, NIV interview wait times reached 100 days at the U.S. Embassy in Beijing, China, and in August 2011, interview wait times reached 143 days at the U.S. Consulate in Rio de Janeiro, Brazil. Following the rise of interview wait times at many posts, and especially in Brazil and China, President Obama issued E.O. 13597 in January 2012 to improve visa processing and travel promotion while continuing to protect U.S. national security. E.O. 13597 contained multiple goals for State and DHS for processing visitors to the United States, including the following: Ensure that 80 percent of NIV applicants worldwide are interviewed within 3 weeks of receipt of application. Increase NIV processing capacity in Brazil and China by 40 percent over the next year. In March 2012, State and DHS released an implementation plan for E.O. 13597 that outlined the measures each agency planned to undertake to meet the goals of the Executive Order. Subsequently, in August 2012, State and DHS issued a progress report on E.O. 13597 stating the progress made in meeting the goals of the Executive Order and the plans for continued efforts to improve a foreign visitor's experience in traveling to the United States. State's Bureau of Consular Affairs, as well as consular management officials and consular officers at the four posts we visited, reported that increased staffing levels, policy changes, and organizational reforms implemented since 2012 have all contributed to increasing NIV processing capacity, reducing NIV interview wait times worldwide. For calculating NIV interview wait times, we used data from State on applications for visas for tourism and business purposes (B visas) and did not include other NIV categories. According to State's Bureau of Consular Affairs, the past hiring of additional staff through various authorities and temporary assignments of consular officers during periods of high NIV demand contributed to meeting E.O. 
13597's goals of expanding NIV processing capacity and reducing worldwide wait times, particularly at U.S. posts in Brazil, China, India, and Mexico. Increase in consular officers: According to State officials, from fiscal year 2012 through 2014, State "surged" the number of consular officers deployed worldwide from 1,636 to 1,883 to help address increasing demand for NIVs, an increase of 15 percent over 3 years. In response to E.O. 13597, State increased the number of deployed consular officers between January 19, 2012 (the date of E.O. 13597), and January 19, 2013, from 50 to 111 in Brazil, and 103 to 150 in China, a 122 and 46 percent increase, respectively (see fig. 2 for additional information on consular staffing increases in Brazil and China). As a result, State met its goal of increasing its NIV processing capacity in Brazil and China by 40 percent within a year of the issuance of E.O. 13597. Limited noncareer appointments: In fiscal year 2012, State's Bureau of Consular Affairs launched the limited noncareer appointment (LNA) pilot program to quickly deploy language-qualified staff to posts facing an increase in NIV demand and workload. The first cohort of LNAs--who are hired on a temporary basis for up to 5 years for specific, time-bound purposes--included 19 Portuguese speakers for Brazil and 24 Mandarin speakers for China who were part of the increased number of consular officers deployed to posts noted above. In fiscal year 2013, State expanded the LNA program to include Spanish speakers. As of August 2015, State had hired 95 LNAs for Brazil, China, Colombia, the Dominican Republic, Ecuador, and Mexico. Temporary assignment of consular officers: State utilizes the temporary redeployment of Foreign Service officers and LNAs to address staffing gaps and increases in NIV demand. 
Between October 2011 and July 2012, State assigned, on temporary duty, 220 consular officers to Brazil and 48 consular officers to China as part of its effort to reallocate resources to posts experiencing high NIV demand. State continues to use this method to respond to increases in NIV demand. For example, during the first quarter of fiscal year 2015, India experienced a surge in NIV demand that pushed NIV interview wait times over 21 days at three posts. To alleviate the situation, consular managers in India sent officers from other posts to the U.S. Consulate in Mumbai, which was experiencing higher wait times, allowing the U.S. Mission in India to reduce average wait times to approximately 10 days by the end of December 2014. According to State officials, policy changes have also helped to reduce NIV interview wait times at posts, including the expansion of the Interview Waiver Program (IWP) for NIVs and extending the validity of some NIVs. Expansion of interview waiver program: The IWP allows posts to waive the in-person NIV interview requirements for defined categories of "low-risk" applicants or applicants renewing an NIV for some visa categories. In 2012, the IWP for the U.S. Mission in Brazil was expanded to include first-time applicants under the age of 16 or over the age of 66. This expansion allowed the U.S. Mission in Brazil to conduct additional walk-in NIV interviews by excusing first-time NIV applicants that State considers to be low-risk, as well as renewal applicants, from presenting themselves at post for an interview. According to State officials, discussions with DHS are underway to further expand the IWP. Extending the validity period of visas: In accordance with federal law, State has extended the validity period of some visas in some countries, reducing the frequency with which a holder of a U.S. NIV would be required to apply for a renewal. (The visa validity period is the length of time the holder of a U.S.
NIV is permitted to travel to a port of entry in the United States.) In November 2014, the United States and the People's Republic of China reciprocally increased the validity periods of multiple-entry business and tourist visas issued to each other's citizens for up to 10 years. The change in policy was intended to support improved trade, investment, and business by facilitating travel between the two countries. Furthermore, the extension of visa validity periods, according to State officials, is also expected to reduce the number of visas requiring adjudication over the long term at posts in China. State's Bureau of Consular Affairs has adopted several organizational reforms to improve its NIV processing efficiency. These include contracting out some administrative support duties; establishing leadership and management practices to better guide consular officers; and opening additional consulates and redesigning consular sections at posts to expand NIV processing capacity in certain countries. Contracting for administrative support duties: The use of a worldwide support services contract has enabled posts to outsource certain administrative activities related to visa processing that would otherwise be handled by consular personnel. This effort, according to State officials, allows consular officers more time to focus on visa adjudication and therefore improves their productivity. The contract provides support services for visa operations at U.S. embassies and consulates, including NIV interview appointment scheduling and fee collection services. Contractors have opened 29 off-site locations in six countries to collect biometric data of NIV applicants, which are then forwarded to the post for processing and security screening prior to an applicant's scheduled interview. Before the implementation of the contract in fiscal year 2011, biometric information could be collected at the post only when the applicant appeared for his or her interview.
Consular officials we spoke with in Brazil and India stated that off-site biometric collection has made the NIV process more efficient. Leadership and management changes: In 2012, State's Bureau of Consular Affairs launched the 1CA office to help further develop a culture of leadership, management, and innovation under budget austerity and increasing NIV demand. At three of the four posts we visited, embassy officials told us that 1CA tools and resources have helped management at post identify and develop solutions to delays in NIV processing, which they said has contributed to State's ability to reduce NIV interview wait times. For example, the U.S. Embassy in Mexico City is using 1CA to map out NIV processing steps to identify and develop solutions to existing bottlenecks. According to consular managers at post, the process maps allow managers to graphically view the various NIV processing steps and identify where improvements can be implemented. The solutions developed from the 1CA mapping exercise have allowed the post to conduct a larger number of NIV interviews each day. In addition, the 1CA office is in the process of developing meaningful metrics, beyond NIV interview wait times, to provide consular managers with the data to improve performance. Opening additional consulates and redesigning consular sections: Since the issuance of E.O. 13597, State has expanded the number of interview windows at posts in Brazil and China and developed plans to open two additional consulates in Brazil and add visa services to the existing U.S. consulate in Wuhan, China, to help absorb increases in NIV demand. Additionally, at all four posts we visited, State officials told us that they have, to varying degrees, redesigned the responsibilities and location of their consular staff to improve the efficiency of their operations.
For example, in China, India, and Mexico, officials reported that they have individualized the tasks that are performed at each interview window to reduce the time an applicant spends at post and streamline NIV processing. Additionally, at the U.S. Embassy in Beijing, each interview window within the consular section is assigned to conduct a discrete task in the NIV adjudication process. These tasks include checking in applicants and confirming their identity, collecting biometric data, and adjudicating NIVs at separate windows (see fig. 3 for a photograph of the NIV applicant area at the U.S. Embassy in Beijing, China). Transfer of NIV adjudications: State has redistributed IWP adjudications within the same country to posts experiencing low NIV demand and has created an IWP adjudication section in the United States to better leverage NIV processing resources. Several missions we visited transfer IWP adjudications from a post experiencing high demand to a post experiencing low demand. For example, from February 2014 to April 2015, consular managers in the U.S. Mission in Mexico electronically transferred 44,240 IWP cases from the U.S. Consulate in Guadalajara to the U.S. Consulates in Ciudad Juarez, Matamoros, and Nogales. According to officials, the electronic transfer of the IWP adjudications allowed the U.S. Consulate in Guadalajara to keep NIV interview wait times under 21 days. Additionally, in May 2015, State's Bureau of Consular Affairs created an IWP remote processing unit in the United States to support the U.S. Mission in China. According to State officials, the output of the unit is currently over 1,000 IWP cases per day, and when fully staffed with 30 consular officers by December 2015, the unit will be able to process up to 3,000 cases per day. According to State officials, efforts the Bureau of Consular Affairs has implemented since the issuance of E.O. 13597 have reduced NIV interview wait times worldwide, including in Brazil and China.
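The remote processing unit figures reported above imply an average per-officer adjudication rate. The following back-of-the-envelope sketch derives it; the per-officer rate is our inference from the report's figures, not a number State itself published:

```python
def cases_per_officer(total_cases_per_day: int, officers: int) -> float:
    """Average daily IWP caseload per consular officer."""
    return total_cases_per_day / officers

# At full staffing, State expects the unit to handle up to 3,000 IWP
# cases per day with 30 consular officers -- an average of about
# 100 cases per officer per day.
rate = cases_per_officer(3_000, 30)
print(rate)  # 100.0
```

The same division applied to the unit's current output (over 1,000 cases per day) suggests the full-staffing target assumes roughly the same per-officer rate at three times the headcount.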
According to State data, even as NIV demand has increased, State has seen NIV interview wait times generally decline. Specifically, as figure 4 shows, since July 2012, at least 80 percent of B visa applicants worldwide have been able to obtain an interview within 3 weeks of their application. This indicates that the goal of E.O. 13597 is, so far, being met. B visa interview wait times have also decreased even as NIV workloads have increased in Brazil and China, two countries that have historically experienced long interview wait times for NIV applicants. For example, B visa interview wait times decreased from an average high of 114 days in August 2011 to 2 days in September 2012 for posts in Brazil, and from an average high of 50 days in June 2011 to 2 days in February 2014 for posts in China (see fig. 5 for additional average wait times at posts in India and Mexico). Between January 2010 and December 2014, State reported that NIV workloads from Brazil and China increased by 161 percent and 88 percent, respectively. State projects that the number of NIV applicants will rise worldwide from 12.4 million in fiscal year 2014 to 18.0 million in fiscal year 2019, an increase of 45 percent. Although NIV demand generally fluctuates and undergoes significant increases and decreases from outside factors--such as shifts in the world economy and events like the September 2001 terrorist attacks--the demand is generally trending upward, and has been for the past 40 years (see fig. 6). According to State's projections, NIV applications from the East Asia and Pacific region and the South and Central Asia region will increase by about 98 and 91 percent, respectively, from fiscal year 2014 to fiscal year 2019. The Western Hemisphere region is expected to receive approximately 6.9 million applicants by fiscal year 2019, an increase of approximately 30 percent from fiscal year 2014 (see fig. 7). State has underestimated growth in NIV demand in past projections.
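The growth figures cited above reduce to simple percentage-change arithmetic. A minimal sketch checking them follows; the helper function is illustrative only and is not part of State's projection methodology:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Worldwide NIV applicants (millions): 12.4 in FY2014 -> 18.0 in FY2019,
# matching the report's stated increase of 45 percent.
print(round(pct_change(12.4, 18.0)))  # 45

# The reported 161 percent workload growth for Brazil (Jan 2010-Dec 2014)
# means demand grew to 2.61 times its 2010 level.
growth_factor = 1 + 161 / 100  # 2.61
```

Reading a percentage increase as a multiplier in this way (161 percent growth = 2.61x) is also how the later comparison of actual demand against the 2005 study's projections can be interpreted.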
In 2005, State contracted with an independent consulting firm to project growth in NIV applicant volume through 2020. As of 2014, 13 of the 18 countries included in this study had exceeded their 2014 NIV demand projections. The study also underestimated the sharp escalation of NIV demand in Brazil and China. By 2014, Brazil's demand had already exceeded the study's projection for NIV applicants in 2020 by over 104 percent, and in the same year, China's demand was over 57 percent higher than the study's 2020 projection for that country. These increases in demand resulted in longer NIV interview wait times between 2006 and 2011 in Brazil and China. As we have previously reported, increases in NIV demand have historically impacted State's ability to efficiently process visas. Expected increases in NIV demand will further strain State's NIV processing capacity, particularly because staffing levels are not anticipated to rise significantly through fiscal year 2016. Consular officers in 8 of the 11 focus groups and consular management officials at posts in Beijing, Mexico City, and New Delhi told us that current efforts to reduce NIV interview wait times are not sustainable if demand for NIVs continues to increase at expected rates. A consular management official at one post noted that efforts such as staff increases have been a "temporary fix" but are not a long-term solution to their high volume of NIV applicants. Staffing levels cannot be increased indefinitely due to factors such as hiring restrictions, staffing limitations established by host governments, and physical workspace constraints. For example, according to State officials, State is currently hiring to meet vacancies caused by attrition and is expected to increase the number of consular officers by only 57 in fiscal year 2015, a 3 percent increase, and does not plan to increase the number of consular officers in fiscal year 2016. State officials told us that they do not expect significant increases in staffing levels beyond 2016.
According to State officials, staffing limitations established by host governments are also a barrier to State's Bureau of Consular Affairs' staffing efforts. For example, the Indian government has currently restricted the number of staff the United States can employ at consulates and embassies. Physical capacity limitations, such as insufficient interview windows for visa adjudication, are also a concern for efforts to increase staffing. According to State officials, efforts implemented since E.O. 13597 have collectively reduced NIV interview wait times. However, the effectiveness of each individual effort remains unclear due to a lack of evaluation. According to GAO's Standards for Internal Control in the Federal Government, internal controls should provide reasonable assurance that the objectives of an agency are being achieved to ensure the effectiveness and efficiency of operations, including the use of the agency's resources. Furthermore, State's evaluation policy emphasizes the importance of evaluations for bureaus to improve their programs and management processes to inform decision makers about current and future activities. The evaluation findings, according to State's policy, are to then be utilized for making decisions about policy and the delivery of services. State officials acknowledged that they had not completed any systematic evaluations of their efforts to reduce NIV interview wait times because they are not currently collecting reliable data. For example, State officials reported that the expansion of the IWP in Brazil has significantly increased their NIV processing capacity and has helped them reach the NIV interview wait times goals of E.O. 13597. However, due to an absence of data, State could not determine how many more cases were adjudicated via the IWP after its expansion and also could not quantify the impact of the expansion on reducing NIV interview wait times in Brazil. 
Instead, State officials said they relied on the reduction in NIV interview appointment wait times as a general indication that the efforts are working. Furthermore, projected increases in NIV demand and the goals specified in E.O. 13597 heighten the importance and potential impact of State's efforts to ensure that resources are effectively targeted. A systematic evaluation of efforts by State to reduce NIV interview wait times would provide a clear indication of the efforts that yield the greatest impact on NIV processing efficiency and could assist the agency in continuing to meet the goals of E.O. 13597. Such evaluations would help State allocate resources to those efforts that provide the most impact in efficiently and effectively achieving its objectives. Without such evaluations, State's ability to direct resources to those activities that offer the greatest likelihood of success for continuing to meet the goals of E.O. 13597 is at risk. State officials acknowledged that an evaluation of their efforts to improve NIV processing capacity would be helpful for future decision making. Consular officers and managers at posts we visited identified current information technology (IT) systems as one of the most significant challenges to the efficient processing of NIVs. Consular officers in all 11 focus groups we conducted across the four posts we visited stated that problems with the Consular Consolidated Database (CCD) and the NIV system create significant obstacles for consular officers in the processing of NIVs. Specifically, consular officers and managers at posts stated that frequent NIV system outages and failures (where the system stops working) at individual posts, worldwide system outages of CCD, and IT systems that are not user friendly negatively affected their ability to process NIVs.
NIV system outages and failures at posts: Consular officers we spoke with in Beijing, Mexico City, New Delhi, and Sao Paulo explained that the NIV system regularly stops working. This reduces the number of adjudications that can be completed in a day (whether at the interview window or, for an IWP applicant, at an officer's desk). Notably, consular officers in 4 of the 11 focus groups reported having to stop work or re-adjudicate NIV applications as a result of these NIV system failures. In fact, during our visit to the U.S. Embassy in New Delhi in March 2015, a local NIV outage occurred, affecting consular officers' ability to conduct adjudications. In January 2015, officers in Bogota, Guadalajara, Monterrey, and Moscow--among the top 15 posts with the highest NIV applicant volume in 2014--experienced severe NIV performance issues--specifically an inability to perform background check queries against databases. Worldwide outages and operational issues of CCD: Since July 2014, two worldwide outages of CCD have impaired the ability of posts to process NIV applications. On June 9, 2015, an outage affected the ability of posts to run checks of biometric data, thus halting most visa printing along with other services offered at posts. According to State officials, the outage affected every post worldwide for 10 days. The system was gradually repaired, but it was not fully restored at all posts until June 29, 2015, exacerbating already increased NIV interview wait times at some posts during the summer high-demand season. According to State notices, another significant outage of CCD occurred on July 20, 2014, slowing NIV processing worldwide until September 5, 2014, when CCD returned to full operational capacity. State estimated that from the start of operational issues on July 20 through late July, State issued approximately 220,000 NIVs globally--about half of the NIVs State anticipated issuing during that period.
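The scale of the July 2014 outage can be expressed with simple arithmetic. In this sketch, the anticipated-volume figure is derived from the report's "about half" statement rather than stated directly by State:

```python
def anticipated_volume(issued: int, fraction_of_anticipated: float) -> float:
    """Back out anticipated issuance from actual issuance and the
    fraction of anticipated volume that was achieved."""
    return issued / fraction_of_anticipated

# State issued ~220,000 NIVs during the early outage period, about half
# of what it anticipated, implying an anticipated volume of roughly
# 440,000 NIVs for that period.
print(round(anticipated_volume(220_000, 0.5)))  # 440000
```

The same relation can be applied to any post-level figure reported as a fraction of expected workload, which is why we express it as a function rather than a single calculation.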
According to officials in State's Bureau of Consular Affairs, Office of Consular Systems and Technology (CST), who are responsible for operating and maintaining CCD and the NIV system, consular officers were still able to collect NIV applicant information during that period; however, processing of applications was significantly delayed with an almost 2-week backlog of NIVs. In the U.S. Consulate in Sao Paulo, a consular management official reported that due to this outage, the post had a backlog of about 30,000 NIV applications, or approximately 9 days' worth of NIV interviews during peak season. Consular officers in 8 out of the 11 focus groups we conducted identified a lengthy CCD outage as a challenge to the efficient processing of NIVs. IT systems are not user friendly: In 9 out of 11 focus groups, consular officers described the IT systems for NIV processing as not user friendly. Officers in our focus groups explained that some aspects of the system hinder their ability to quickly and efficiently process NIVs. These aspects include a lack of integration among the databases needed for NIV adjudications, the need for manual scanning of documentation provided by an applicant, and an absence of standard keyboard shortcuts across all IT applications that would allow users to quickly copy information when processing NIV applications for related applicants, to avoid having to enter data multiple times. Some consular officers in our focus groups stated that they could adjudicate more NIVs in a day if the IT systems were less cumbersome and more user friendly. Consular officers in Beijing and Mexico City and consular management at one post indicated that the NIV system appeared to be designed without consideration for the needs of a high volume post, which include efficiently processing a large number of applications per adjudicator each day. 
According to consular officers, the system is poor at handling today's high levels of demand because it was originally designed in the mid-1990s. Consular officers in Sao Paulo stated that under current IT systems and programs, the post may not be able to process the larger volumes that State projects it will receive in the future. State, recognizing the limits of its current consular IT systems, initiated the development of a new IT platform. State is developing a new system, referred to as "ConsularOne," to modernize 92 applications that include systems such as CCD and the NIV system. According to State, ConsularOne will be implemented in six phases, starting with passport renewal systems and, in phase five, capabilities associated with adjudicating and issuing visas (referred to as non-citizen services). However, CST officials have yet to formally commit to when the capabilities associated with non-citizen services are to be implemented. According to a preliminary CST schedule, the enhanced capabilities associated with processing NIVs are not scheduled for completion until October 2019. Given this timeline, according to State officials, enhancements to existing IT systems are necessary and are being planned. Although consular officers and managers we spoke with identified CCD and the NIV system as one of the most significant challenges to the efficient processing of NIVs, State does not systematically measure end user (i.e., consular officers) satisfaction. We have previously reported that in order for IT organizations to be successful, they should measure the satisfaction of their users and take steps to improve it. The Software Engineering Institute's IDEAL model is a recognized approach for managing efforts to make system improvements. According to this model, user satisfaction should be collected and used to help guide improvement efforts through a written plan.
With such an approach, IT improvement resources can be invested in a manner that provides optimal results. Although State is in the process of upgrading and enhancing CCD and the NIV system, State officials told us that they do not systematically measure user satisfaction with their IT systems and do not have a written plan for improving satisfaction. According to CST officials, consular officers may voluntarily submit requests to CST for proposed IT system enhancements. Additionally, State officials noted that an IT stakeholder group comprising officials in State's Bureau of Consular Affairs regularly meets to identify and prioritize IT resources and can convey end user concerns for the system. However, State has not collected comprehensive data regarding end user satisfaction and developed a plan to help guide its current improvement efforts. Furthermore, consular officers continued to express concerns with the functionality of the IT systems, and some officers noted that enhancements to date have not been sufficient to address the largest problems they encounter with the systems. Given consular officers' reliance on IT services provided by CST, as well as the feedback we received from focus groups, it is critical that State identify and implement feedback from end users in a disciplined and structured fashion for current and any future IT upgrades. Without a systematic approach to measure end user satisfaction, CST may not be able to adequately ensure that it is investing its resources on improvement efforts that will improve performance of its current and future IT systems for end users. Travel and tourism are important contributors to U.S. economic growth and job creation. According to Commerce, international travelers contributed $220.6 billion to the economy and supported 1.1 million jobs in 2014. 
Processing visas for such travelers as efficiently and effectively as possible without compromising our national security is critical to maintaining a competitive and secure travel and tourism industry in the United States. Although State has historically struggled with the task of maintaining reasonable wait times for NIV interviews, it has undertaken a number of efforts in recent years that have yielded substantial progress in reducing such waits. Significant projected increases in NIV demand coupled with consular hiring constraints and other challenges could hinder State's ability to sustain this progress in the future--especially in countries where the demand for visas is expected to rise the highest. These challenges heighten the importance of systematically evaluating the cost and impact of the multiple measures State has taken to reduce interview wait times in recent years and leveraging that knowledge in future decision making. Without this, State's ability to direct resources to those activities that offer the greatest likelihood of success is limited. Moreover, State's future capacity to cope with rising NIV demand will be challenged by inefficiencies in its visa processing technology; consular officers and management officials at the posts we visited pointed to cumbersome user procedures and frequent system failures as enormous obstacles to efficient NIV processing. State's Bureau of Consular Affairs recognizes these problems and plans a number of system enhancements; however, the bureau does not systematically collect input from consular officers to help guide and prioritize these planned upgrades. Without a systematic effort to gain the input of those who employ these systems on a daily basis, State cannot be assured that it is investing its resources in a way that will optimize the performance of these systems for current and future users. 
To further improve State's processing of nonimmigrant visas, we recommend that the Secretary of State take the following two actions: 1. Evaluate the relative impact of efforts undertaken to reduce nonimmigrant visa interview wait times to help managers make informed future resource decisions. 2. Document a plan for obtaining end user (i.e., consular officers) input to help improve end user satisfaction and prioritize enhancements to information technology systems. We provided a draft of this report for review and comment to State, Commerce, and DHS. We received written comments from State, which are reprinted in appendix II. State agreed with both of our recommendations and highlighted a number of actions it is taking or plans to take to implement them. Commerce and DHS did not provide written comments on the report. State and DHS provided a number of technical comments, which we have incorporated throughout the report, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of State, the Secretary of Commerce, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report reviews Department of State's (State) nonimmigrant visa (NIV) processing operations and provides an update on the status of the goals in Executive Order (E.O.) 13597. 
Specifically, this report examines (1) the efforts State has undertaken to expand capacity and reduce NIV applicants' interview wait times and the reported results to date, and (2) the challenges that impact State's ability to efficiently process NIVs. To accomplish our objectives, we reviewed relevant State and Department of Homeland Security (DHS) documents, and interviewed State, DHS, and Department of Commerce (Commerce) officials. In addition, we observed consular operations and interviewed U.S. government officials at four posts--the U.S. Embassy in Beijing, China; the U.S. Embassy in New Delhi, India; the U.S. Embassy in Mexico City, Mexico; and the U.S. Consulate in Sao Paulo, Brazil. For our site visits, we selected posts that (1) were in countries specifically mentioned in E.O. 13597, (2) experienced NIV interview wait time problems previously, or (3) were in countries that have the highest levels of U.S. NIV demand in the world. During these visits, we observed visa operations; interviewed consular staff and embassy management about NIV adjudication policies, procedures, and resources; conducted focus groups with consular officers; and reviewed documents and data. Our selection of posts was not intended to provide a generalizable sample but allowed us to observe consular operations at some of the highest NIV demand posts worldwide. To determine the efforts State has undertaken to expand capacity and reduce NIV applicants' interview wait times, we reviewed relevant documents and interviewed officials from State and DHS. To determine the reported results of those efforts, we collected and analyzed data on NIV processing capacity and NIV interview wait times worldwide from January 2011 until July 2015 and compared them to the goals outlined in E.O. 13597 and reviewed documentation provided by State on their efficiency efforts. 
For NIV interview wait time data, we focused our analysis on B visas and not on other NIV categories because this is how State measures visa wait times against the goals specified in E.O. 13597, and because B visas represent most NIVs. For example, B visas accounted for 79 percent of all NIVs processed in fiscal year 2014. To determine the reliability of State's data on NIV wait times for applicant interviews, we reviewed the department's procedures for capturing these data, interviewed the officials in Washington, D.C., who monitor and report these data, and examined data that were provided to us electronically. In addition, we interviewed the corresponding officials from our visits to select posts overseas and in Washington, D.C., who input and use the NIV interview wait time data. While some posts occasionally did not update their NIV wait time data on a weekly basis, we found the data to be sufficiently reliable for the purposes of determining the percentage of posts that were below the 3-week NIV interview wait time threshold established by E.O. 13597. To determine the challenges that impact State's ability to efficiently process NIVs, we reviewed relevant documents, including State planning and NIV demand projections, interviewed State, DHS, and Commerce officials in Washington, D.C., including officials from State's Office of Inspector General, and conducted focus groups with consular officers. We also reviewed State's documentation on its information technology systems, including the Consular Consolidated Database, the NIV system, and the development plans for the ConsularOne system. To determine the reliability of State's NIV applicant projections, we reviewed the department's projections and interviewed the officials who develop the projections. We found the data to be sufficiently reliable for the purposes of providing a baseline for possible NIV demand through 2019.
To balance the views of State management and obtain perspectives of consular officers on State's NIV processing, we conducted 11 focus group meetings with randomly selected entry-level consular officers that conduct NIV interviews and adjudications at the four posts we visited. These meetings involved structured small-group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. Consistent with typical focus group methodologies, our design included multiple groups with varying characteristics but some similarity in experience and responsibility. Most groups involved 6 to 10 participants. Discussions were structured, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. Our overall objective in using a focus group approach was to obtain the views, insights, and feelings of entry-level consular officers on issues related to their workload, the NIV process, and challenges they face as consular officers conducting NIV applicant interviews and adjudications. We assured participants of the anonymity of their responses, promising that their names would not be directly linked to their responses. We also conducted one pretest focus group and made some revisions to the focus group guide accordingly. Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the focus group participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. 
The projectability of the information produced by our focus groups is limited for several reasons. First, the information includes only the responses of entry-level consular officers from the 11 selected groups. Second, participants were asked questions about their specific experiences with the NIV process and challenges they face as consular officers conducting NIV applicant interviews and adjudications. Other entry-level consular officers who did not participate in our focus groups or were located at different posts may have had different experiences. Because of these limitations, we did not rely entirely on focus groups but rather used several different methodologies to corroborate and support our conclusions. We conducted this performance audit from September 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual mentioned above, Godwin Agbara (Assistant Director, International Affairs and Trade), Kathryn Bernet (Assistant Director, Homeland Security and Justice), Nicholas Marinos (Assistant Director, Information Technology), Ashley Alley, Juan P. Avila, Justin Fisher, Kaelin Kuhn, Jill Lacey, Christopher J. Mulkins, and Jasmine Senior made key contributions to this report. Technical assistance was provided by Karen Deans, Katherine Forsyth, Kara Marshall, and Tina Cheng.
International travel and tourism contributed $220 billion to the U.S. economy and supported 1.1 million jobs in 2014, according to the Department of Commerce. A portion of those travelers to the United States were required to obtain an NIV. After a period in which travelers experienced extensive waits in obtaining a required interview for an NIV in 2011, the President issued E.O. 13597 in 2012 to improve visa and foreign visitor processing, while continuing to protect U.S. national security. The E.O. set goals for State to increase NIV processing capacity in Brazil and China and reduce NIV interview wait times for applicants worldwide. This report examines (1) efforts State has undertaken to expand capacity and reduce NIV applicants' interview wait times and the reported results to date and (2) challenges that impact State's ability to efficiently process NIVs. GAO analyzed State's historical and forecast NIV data and interviewed State officials in Washington, D.C., and consular officers and management in Brazil, China, India, and Mexico. These countries represent the four highest demand countries for U.S. NIVs. Since 2012, the Department of State (State) has undertaken several efforts to increase nonimmigrant visa (NIV) processing capacity and decrease applicant interview wait times. Specifically, it has increased consular staffing levels and implemented policy and management changes, such as contracting out administrative support services. According to State officials, these efforts have allowed State to meet the goals of Executive Order (E.O.) 13597 of increasing its NIV processing capacity by 40 percent in Brazil and China within 1 year and ensuring that 80 percent of worldwide NIV applicants are able to schedule an interview within 3 weeks of State receiving their application. Specifically, State increased the number of consular officers in Brazil and China by 122 and 46 percent, respectively, within a year of the issuance of E.O. 13597. 
Additionally, according to State data, since July 2012, at least 80 percent of worldwide applicants seeking a tourist visa have been able to schedule an interview within 3 weeks. Two key challenges--rising NIV demand and problems with NIV information technology (IT) systems--could affect State's ability to sustain the lower NIV interview wait times. First, State projects the number of NIV applicants to rise worldwide from 12.4 million in fiscal year 2014 to 18.0 million in fiscal year 2019, an increase of 45 percent (see figure). Given this projected NIV demand and budgetary limits on State's ability to hire more consular officers at posts, State must find ways to achieve additional NIV processing efficiencies or risk being unable to meet the goals of E.O. 13597 in the future. Though State's evaluation policy stresses that it is important for bureaus to evaluate management processes to improve their effectiveness and inform planning, State has not evaluated the relative effectiveness of its various efforts to improve NIV processing. Without conducting a systematic evaluation, State cannot determine which of its efforts have had the greatest impact on NIV processing efficiency. Second, consular officers in focus groups expressed concern about their ability to efficiently conduct adjudications given State's current IT systems. While State is currently enhancing its IT systems, it does not systematically collect information on end user (i.e., consular officer) satisfaction to help plan and guide its improvements, as leading practices would recommend. Without this information, it is unclear if these enhancements will address consular officers' concerns, such as having to enter the same data multiple times, and enable them to achieve increased NIV processing efficiency in the future. 
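The projected demand growth cited above is simple to verify. A minimal check, using the applicant figures stated in the report (the percentage is just the relative change between the two fiscal years):

```python
# Quick check of the growth figure cited above: State projects NIV
# applicants to rise from 12.4 million (fiscal year 2014) to 18.0 million
# (fiscal year 2019). Both figures are taken from the report.
start, end = 12.4, 18.0                      # millions of NIV applicants
growth = (end - start) / start
print(f"Projected increase: {growth:.0%}")   # → Projected increase: 45%
```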
To improve State's ability to process NIVs, while maintaining a high level of security to protect our borders, GAO is recommending that State (1) evaluate the relative impact of efforts undertaken to improve NIV processing and (2) document a plan for obtaining input from end users (consular officers) to help improve their satisfaction and prioritize enhancements to IT systems. State concurred with both recommendations.
PEBES legislation required SSA to begin sending benefit estimate statements to workers aged 60 and older in fiscal year 1995 and to those turning 60 during each fiscal year from 1996 through 1999; starting in fiscal year 2000, SSA must send the PEBES annually to almost every worker aged 25 and older. However, to better manage the expected workload, SSA officials are sending the PEBES to many workers ahead of schedule. As a result, most workers aged 40 and older--about 65 million--will have received their first statement by the end of 1998. The PEBES was conceived as a means to inform the public about the benefits available under the Old Age and Survivors Insurance (OASI) and the Disability Insurance (DI) programs, which together are commonly known as "Social Security." These programs provide monthly cash benefits to retired and disabled workers and their dependents and survivors. The benefit amounts are based primarily on a worker's earnings. By providing individual workers with a listing of their yearly earnings on record at SSA and estimates of the benefits they may receive, SSA hopes to better ensure that its earnings records are complete and accurate and to assist workers in planning for their financial future. As a result of profound demographic changes--such as the aging of the baby boom generation and increasing life expectancy--Social Security's yearly expenditures are expected to exceed its yearly tax revenue beginning in 2013. Without corrective legislation, the trust funds are expected to be depleted by 2032, leaving insufficient funds to pay the current level of benefits. As a result of the financial problems facing the program, a national debate on how to ensure Social Security's solvency has begun and will likely intensify. Ensuring long-term solvency within the current program structure will require either increasing revenues or reducing expenditures, or some combination of both. 
This could be achieved through a variety of methods, such as raising the retirement age, reducing inflation adjustments, increasing payroll taxes, and investing trust fund reserves in securities with potentially higher yields than the U.S. Treasury securities currently held by the trust funds. Some options for change, however, would fundamentally alter the program structure by setting up individual retirement savings accounts managed by the government or personal security accounts managed through the private sector. Both of these options would permit investments in potentially higher yielding securities. Proponents of adding rates of return to the PEBES believe these rates would provide individuals with information on the current program and enable them to compare their rate of return for Social Security with rates for other investments. Analysts disagree about whether it is appropriate to use rates of return to evaluate the Social Security program and the options for reform. Furthermore, using rates of return for Social Security presents a number of difficulties. Estimates would be based on a variety of assumptions, such as how long the worker is expected to live after retirement, and other decisions, such as whether to include disability benefits. These uncertainties and how they affect individual rates of return would need to be explained. Also, comparing rates of return for Social Security with rates for private market investments presents a variety of difficulties, such as how to handle transaction costs and the differences in the level of risk, which also need to be accounted for in the comparison. Social Security contributions are mostly used to pay benefits to current beneficiaries and are not deposited in interest-bearing accounts for individual workers. In fact, benefit payments to any given individual are derived from a formula that does not use interest rates or the amount of contributions. 
Still, the benefits workers will eventually receive reflect a rate of return they implicitly receive on their contributions. This rate of return is equal to the average interest rate workers would have to receive on their contributions in order to pay for all the benefits they will receive from Social Security. As part of the Social Security reform debate, some analysts contend that comparing rates of return for Social Security with rates for the private market will help individuals understand that they could have potentially higher retirement incomes with a new system of individual retirement savings accounts. Moreover, they believe that the new system would produce real additions to national saving. In turn, new saving would generate economic and productivity growth that yields real returns to society and to consumers. They assert that Social Security, in contrast, only transfers income from taxpayers to beneficiaries, detracts from saving and long-term economic growth, and produces no real economic returns. Other analysts, however, contend that the rate of return concept should not be applied to Social Security for various reasons. First, they observe that Social Security is a social insurance program that helps protect workers and retirees against a variety of risks over which they often have little control, such as the performance of the economy and inflation. For example, the Social Security program is designed to help ensure that low-wage earners have adequate income in their retirement. Second, some analysts observe that Social Security simply transfers money from taxpayers to beneficiaries and is not designed to provide returns on contributions. Third, some analysts believe that the full value of the program cannot be determined solely by comparing monetary benefits and contributions. 
For example, individuals benefit from Social Security in other, more general ways through reductions in poverty and being relieved of providing for their parents and other beneficiaries through some other means. Rate of return estimates will vary according to what contributions and benefits they include. Moreover, actual rates of return for individuals will differ substantially from estimates due to the uncertainty of several factors, such as how long they will live, how much they will earn, and what size families they will have. To be clearly understood, rate of return estimates for Social Security need an explanation of how they are calculated and how uncertain the estimates are. Estimates of rates of return on contributions need to be clear about which benefits are included. For example, Social Security provides benefit payments to many individuals other than retired workers. In 1996, retired workers accounted for 61 percent of all Social Security beneficiaries, and they received 68 percent of the benefits. Other beneficiaries include disabled workers, survivors of deceased workers, and spouses and children of retired and disabled workers. If the calculations include the full range of benefits provided by the Social Security program, rather than retirement benefits alone, then the calculations would also need to include the full range of contributions made for those benefits. Conversely, if the calculations include only the retirement portion of the benefits, then the contributions would need to be reduced accordingly. Social Security contributions are made by employers as well as employees. Currently, both the individual and the employer pay a 6.2 percent tax on covered earnings for OASI and DI combined. Most rate of return estimates prepared by analysts include both the employer and employee shares; however, some analysts believe the employer contributions should not be included. 
Analysts using both employer and employee contributions argue that employees ultimately pay the employer share because employers pay lower wages than they would if the employer contribution did not exist. Furthermore, estimates that leave out the employer contributions reflect the full benefits but not the full costs of providing those benefits. A number of other issues affect benefits, contributions, or both and would need to be disclosed with the rate of return estimate. For example, Social Security benefits are automatically adjusted for inflation. In addition, even if the disability benefits and corresponding contributions are not included in the return estimates, OASI benefits provided for families of workers who die before retirement should be included. Finally, how much individuals contribute and how much they receive in benefits depend on when they retire and whether they continue to work while receiving benefits; this could be addressed by assuming a standard retirement age. Many factors that would be included in rate of return estimates for Social Security are subject to considerable uncertainty, and these uncertainties mean that the actual rates of return that individuals receive could vary substantially from their estimates. Such factors include how long individuals will live, how much they will earn in the future, whether their contributions will also entitle their spouses or children to benefits, and what changes the Congress may make to contribution and benefit levels. These uncertainties suggest that individual estimates would be very rough and might best be described as a range of rates. The literature examining rates of return almost always discusses them in the context of the reform debate and, therefore, examines average rates for large groups of people with similar characteristics, such as birth year, income level, and gender. 
Such average group rates can be estimated with a reasonable degree of accuracy and precision, but an individual's actual experience may be dramatically different. Rate of return estimates depend fundamentally on individual earnings histories, which are used to project workers' future earnings, calculate their benefits, and estimate the amount of their contributions. Because rate of return estimates for Social Security rely on projected earnings, they are inherently uncertain. In addition, younger workers' rates of return would be even more uncertain since they have more years for which earnings need to be projected. Under the current program structure, rate of return estimates would also need to reflect additional benefits provided by workers' contributions. Their contributions not only entitle workers to retirement benefits but also entitle their spouses and children to survivor and dependent benefits. However, SSA's records do not include information on whether a worker has a spouse or children unless and until such dependents apply for benefits based on the worker's record. Moreover, neither SSA nor the workers can be certain who will have spouses, children, or survivors who might collect benefits based on the workers' earnings records and how long their dependents will collect these benefits. In addition, in many families, both the husband and wife work and one may be "dually entitled" to benefits based on both workers' records. Individuals are entitled to receive either a benefit based on their own earnings or a benefit equal to 50 percent of the benefit calculated from their spouse's record, whichever is greater. As a result of this benefit option, a dually entitled couple's rate of return on their contributions is generally different than their individual rates. However, SSA has no way to connect a working couple's two individual earnings records until one applies for benefits based on the other's records. 
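The implicit rate of return and the dual-entitlement rule described above can be illustrated with a small, purely hypothetical calculation. This is a sketch assuming flat covered earnings, the combined 12.4 percent (6.2 percent employee plus 6.2 percent employer) contribution rate mentioned in the report, and invented benefit amounts; none of the dollar figures are actual SSA values:

```python
def irr(cash_flows, lo=-0.5, hi=1.0, iters=100):
    """Rate r at which the discounted cash flows sum to zero, found by
    bisection. Flows are indexed from year 0; negative values are
    contributions, positive values are benefits."""
    def npv(r):
        return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows))
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if npv(lo) * npv(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def monthly_benefit(own, spouse):
    """Dual entitlement, as described in the report: the greater of one's
    own benefit or 50 percent of the benefit from the spouse's record."""
    return max(own, 0.5 * spouse)

# Hypothetical career: 40 working years followed by 20 retirement years.
earnings = 40_000.0              # flat annual covered earnings (invented)
contribution = 0.124 * earnings  # 6.2% employee + 6.2% employer shares
benefit = 18_000.0               # annual retirement benefit (invented)
flows = [-contribution] * 40 + [benefit] * 20

rate = irr(flows)
print(f"Implicit rate of return: {rate:.2%}")  # roughly 2% under these assumptions

# Dual entitlement: own benefit of $600/month vs. half of spouse's $1,400/month.
print(monthly_benefit(600, 1_400))             # → 700.0
```

Because the cash-flow stream has a single sign change (contributions first, benefits later), the internal rate of return is unique, which is what makes the "average interest rate" interpretation in the report well defined.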
While some analysts have sought to compare rate of return estimates for Social Security with rates of return for private market investments--such as stocks, bonds, or savings accounts--these comparisons are not as straightforward as they first appear. Explanations would be needed to understand a number of important factors, including whether the rates of return incorporated the transaction and administrative costs for investments or annuities, the differences in risk associated with Social Security and private investments, and the questions of how to treat the costs of the benefits promised under the current system when switching to any other retirement system. Under typical Social Security privatization proposals, individual retirement savings accounts would offer workers the potential to receive higher rates of return on private investments than their Social Security contributions implicitly receive. However, private investments would entail a variety of transaction and administrative costs of their own, and these would vary depending on the nature of the proposal. For example, stockbrokers charge commissions for making trades, and mutual fund managers are compensated for managing the funds. Reflected in such costs are marketing and advertising expenses incurred as money managers and brokers compete for investors' business. In contrast, SSA does not maintain actual accounts for each individual but rather keeps records of earnings. Administrative costs for Social Security's OASI program are less than 1 percent of annual program revenues. Accurate rate of return comparisons would need to look at the rates after adjusting for expenses. Accurate rate of return comparisons also need to take into account the differences in risk associated with those rates. Over long periods of time, riskier private market investments, such as stocks, on average earn higher rates of return than less risky ones, such as government bonds. 
The riskier the investment, the greater the variation in possible investment earnings. By the same token, the riskier the investment made with retirement savings, the greater the variation in possible retirement incomes. Finally, if rates of return for Social Security are compared with rates for alternative reform proposals, the comparisons should indicate whether the rates for the alternatives take into account the costs of the benefits promised under the current Social Security program. Any rate of return comparisons should include these transition costs and not be limited to the return on private investments. The PEBES aims to provide information about the complex programs and benefits available through the Social Security program; however, the current statement is already lengthy and difficult to understand. Adding a rate of return, along with the corresponding narrative that would be needed to understand all of the underlying assumptions and uncertainties, would further complicate PEBES' message. In addition, placing rate of return information on the statement would add significantly to SSA's workload, according to SSA officials. Although the PEBES is intended to be a tool for communicating with the public, we raised concerns about the usefulness of the statement in a 1996 report. We reported that although the public feels the statement can be a valuable tool for retirement planning, the current PEBES provides too much information and fails to communicate clearly the information its readers need to understand SSA's current programs and benefits. Comments from SSA's public focus groups, SSA employees, and benefit experts indicate that the statement contains too much information. For example, SSA reported in a 1994 focus group summary that younger workers aged 25 to 35 wanted a more simplified, one-page statement with their estimated benefits and contributions. 
In addition, SSA telephone representatives said that they believe most people calling in with questions have read only the section of the statement that provides the benefit estimates. Since the PEBES addresses complex programs and issues, explaining these points in straightforward language can be challenging. Although SSA officials told us they attempt to use simple language geared for a seventh-grade reading level, feedback from the public and SSA staff indicates that readers are confused by several important explanations. For example, the public frequently asks about PEBES' explanation of family benefits. Family benefits are difficult to calculate and explain because the amounts are based on information from both spouses' records and SSA does not maintain information that links individuals' records with those of their spouses. In addition, many people ask for clarification on certain terms used in the statement and on how their benefit estimates are calculated. Based on our recommendation, SSA is working on simplifying the PEBES. Agency officials are currently testing four alternative versions of the statement, and they plan to use the redesigned version of the PEBES for the fiscal year 2000 mailings. For rate of return information on the PEBES to be understood, SSA would need to (1) decide how much information to provide and (2) explain it in simple straightforward language--language that could be easily understood by the diverse population of workers slated to receive the statement. SSA would first need to define rate of return and explain that individuals' rates could vary substantially from the estimates. In addition, readers would need to be cautioned that changes in the Social Security program due to long-term financing problems could affect their rates of return. Furthermore, SSA would need to explain the factors included in the calculation and all the underlying assumptions and uncertainties. 
As discussed previously, these would include the amounts that were used for the worker's future earnings, whether the estimate includes the disability contributions and potential disability benefits, whether the employer's contributions were included along with the worker's, the worker's expected retirement age, the worker's life expectancy after retirement, and how the estimate would vary if the worker's spouse or children qualify for benefits on the worker's record. The PEBES currently addresses how the benefit estimates treat some of these factors--future earnings, retirement ages, and family benefits. However, rate of return estimates are even more sensitive to these issues than benefit estimates; therefore, they would require further explanation. For example, the PEBES currently explains that the worker's future earnings are projected to remain the same as the latest earnings on record. A rate of return estimate based on a steady level of earnings would be different from one in which the earnings vary. In addition, since the PEBES provides benefit estimates at three retirement ages, the statement would need to explain which of the three ages was used for the individual's rate of return estimate. Finally, the statement's complicated discussion of family benefits, which explains that the amount of these benefits is dependent on the worker's benefit and the number of people in the family who would receive benefits, would need to be expanded. The explanation would need to indicate whether the individual's rate of return estimate incorporates any family benefits and what effect family benefits would have on the individual's rate of return. Along with the explanations needed for the rate of return itself, PEBES recipients would need to be cautioned regarding the limitations of comparing a rate of return on Social Security with rates for alternative investments. 
Before making comparisons, recipients would need to know that the rate of return presented on their PEBES may need to be adjusted for other factors. As discussed earlier, these factors would include the difference in administrative costs of the alternative investments, the difference in the level of risk associated with the alternative investments, and how the costs of the benefits promised under the current program are treated. Furthermore, according to SSA, placing rate of return information on the PEBES would add significantly to workloads across the agency. For example, officials stated that they would expect the volume of calls about the rate of return information to dramatically increase their workload. Staff would need training to be prepared to respond to inquiries regarding the individual rates of return as well as how the rates compare with those for other investments. In addition, SSA officials said significantly changing the PEBES would be difficult to do in a timely manner. If individualized rates of return were to be added, SSA would need time to prepare the calculation, develop the explanations that would be needed to accompany the rates, test the new statement, make programming changes, and renegotiate the PEBES printing and mailing contract. Given the disagreement over whether it is appropriate to apply the rate of return concept to the Social Security program and the number of assumptions that must be factored into such an estimate, it would be especially important to fully explain how the rate was calculated and how uncertain the estimate could be. However, it has already been difficult to develop a PEBES that provides readily understandable information on the existing programs and benefits alone. Adding rate of return information could significantly increase the statement's length and undermine SSA's current efforts to shorten and simplify it. 
Given the detailed explanations that would be needed along with the estimates, adding rate of return information to the PEBES would most likely complicate an already complex statement. We obtained comments on a draft of this report from SSA. SSA agreed with our overall conclusions and said the report reflects the difficulties the agency would face in placing understandable rate of return information on the PEBES. In addition, SSA pointed out that it is working hard to make the information currently provided in the PEBES easy for readers to understand and use and agreed that adding rate of return information would increase the complexity of the statement. Finally, SSA provided technical comments, which we incorporated in this report where appropriate. SSA's general and technical comments are reprinted in appendix II. We are sending copies of this report to the Commissioner of Social Security. Copies will also be made available to others on request. If you or your staff have any questions concerning this report, please call me or Kay E. Brown, Assistant Director, on (202) 512-7125. Other major contributors to this report include R. Elizabeth Jones, Evaluator-in-Charge, and Kenneth C. Stockbridge, Senior Evaluator.
Community policing is a philosophy under which local police departments develop strategies to address the causes of and reduce the fear of crime through problemsolving tactics and community-police partnerships. According to the COPS Office program regulations, there is no one approach to community policing implementation. However, community policing programs do stress three principles that make them different from traditional law enforcement programs: (1) prevention, (2) problemsolving, and (3) partnerships (see app. II). Community policing emphasizes the importance of police-citizen cooperation to control crime, maintain order, and improve the quality of life in communities. The police and community members are active partners in defining the problems that need to be addressed, the tactics to be used in addressing them, and the measurement of the success of the efforts. The practice of community policing, which emerged in the 1970s, was developed at the street level by rank-and-file police officers. Justice supported community policing and predecessor programs for more than 15 years before the current COPS grant program was authorized. Previous projects noted by Justice officials as forerunners to the funding of community policing included Weed and Seed, which was a community-based strategy to "weed out" violent crime, gang activities, and drugs and to "seed in" neighborhood revitalization. House and Senate conferees, in their joint statement explaining actions taken on the Community Policing Act, emphasized their support of grants for community policing. The conferees noted that the involvement of community members in public safety projects significantly assisted in preventing and controlling crime and violence. As shown in table 1, $5.2 billion was authorized for the COPS grant program from its inception in fiscal year 1995 to the end of fiscal year 1997, of which $4.1 billion was appropriated over this period.
The Community Policing Act does not target grants to law enforcement agencies on the basis of which agency has the greatest need for assistance. Rather, agencies are required to demonstrate a public safety need and an inability to address this need without a grant. Grantees are also required to contribute 25 percent of the costs of the program, project, or activity funded by the grant, unless the Attorney General waives the matching requirement. According to Justice officials, the basis for waiver of the matching requirements is extraordinary local fiscal hardship. In one of our previous reports, we reviewed alternative strategies, including targeting, for increasing the fiscal impact of federal grants. We noted that federal grants have been established to achieve a variety of goals. If the desired goal is to target fiscal relief to areas experiencing greater fiscal stress, grant allocation formulas could be changed to include a combination of factors that allocate a larger share of federal aid to those states with relatively greater program needs and fewer resources. The Community Policing Act also requires that grants be used to supplement, not supplant, state and local funds. To prevent supplanting, grantees must devote resources to law enforcement beyond those resources that would have been available without a COPS grant. In general, grantees are expected to use the hiring grants to increase the number of funded sworn officers above the number on board in October 1994, when the program began. Grantees are required to have plans to assume a progressively larger share of the cost over time, looking toward keeping the increased hiring levels by using state and local funds after the expiration of the federal grant program at the end of fiscal year 2000. Assessing whether supplanting has taken place in the community policing grant program was outside the scope of our review. 
However, in our previously mentioned report on grant design, our synthesis of literature on the fiscal impact of grants suggested that each additional federal grant dollar results in about 40 cents of added spending on the aided activity. This means that the fiscal impact of the remaining 60 cents is to free up state or local funds that otherwise would have been spent on that activity for other programs or tax relief. Monitoring is an important tool for Justice to use in ensuring that law enforcement jurisdictions funded by COPS grants comply with federal program requirements. The Community Policing Act requires that each COPS Office program, project, or activity contain a monitoring component developed pursuant to guidelines established by the Attorney General. In addition, the COPS program regulations specify that each grant is to contain a monitoring component, including periodic financial and programmatic reporting and, in appropriate circumstances, on-site reviews. The regulations state that the guidelines for monitoring are to be issued by the COPS Office. COPS Office grant-monitoring activities during the first 2-1/2 years of the program were limited. Final COPS Office monitoring guidance had not been issued as of June 1997. Information on activities and accomplishments for COPS-funded programs was not consistently collected or reviewed. Site visits and telephone monitoring by grant advisers did not systematically take place. COPS Office officials said that monitoring efforts were limited due to a lack of grant adviser staff and an early program focus on processing applications to get officers on the street. According to a COPS Office official, as of July 1997, the COPS Office had about 155 total staff positions, up from about 130 positions that it had when the office was established. Seventy of these positions were for grant administration, including processing grant applications, responding to questions from grantees, and monitoring grantee performance. 
The remaining positions were for staff who worked in various other areas, including training; technical assistance; administration; and public, intergovernmental, and congressional liaison. In January 1997, the COPS Office began taking steps to increase the level of its monitoring. It developed monitoring guidelines, revised reporting forms, piloted on-site monitoring visits, and initiated telephone monitoring of grantees' activities. As of July 1997, a COPS Office official said that the office had funding authorization to increase its staff to 186 positions, and it was in the process of hiring up to this level. In commenting on our draft report, COPS officials also noted that they were recruiting for more than 30 staff positions in a new monitoring component to be exclusively devoted to overseeing grant compliance activities. COPS Office officials also said that some efforts were under way to review compliance with requirements of the Community Policing Act that grants be used to supplement, not supplant, local funding. In previous work, we reported that enforcing such provisions of grant programs was difficult for federal agencies due to problems in ascertaining state and local spending intentions. According to the COPS Office Assistant Director of Grant Administration, the COPS Office's approach to achieving compliance with the nonsupplantation provision was to receive accounts of potential violations from grantees or other sources and then to work with grantees to bring them into compliance, not to abruptly terminate grants or otherwise penalize grantees. COPS Office grant advisers attempted to work with grantees to develop mutually acceptable plans for corrective actions. Although the COPS Office did not do proactive investigations of potential supplanting, its three-person legal staff reviewed cases referred to it by grant advisers, grantees, and other sources. 
COPS Office officials said that they also expected that referrals to Justice's Legal Division will result from planned monitoring activities. Of the 506 inquiries that required follow-up by the Legal Division as of December 1996, about 70 percent involved potential supplanting. In addition, Justice's Inspector General began a review in fiscal year 1997 that was to assess, among other things, how COPS grant funds were used, including whether supplanting occurred. In the course of this review, the Inspector General planned to complete 50 audits of grantees by the end of fiscal year 1997. The Office of Justice Programs also conducted financial monitoring of COPS grants, which officials said is to include review of financial documents and visits to 160 sites by the end of fiscal year 1997. In April 1997, COPS Office officials said that they were discussing ways to encourage grantees to sustain hiring levels achieved under the grants, in light of the language of the Community Policing Act regarding the continuation of these increased hiring levels after the conclusion of federal support. The COPS Office officials also noted in commenting on our draft report that they had sent fact sheets to all grantees explaining the legal requirements for maintaining hiring levels. However, the COPS Office Director also noted that the statute needed to be further defined and that communities could not be expected to maintain hiring levels indefinitely. A reasonable period for retaining the officers funded by the COPS grants had not been determined. Law enforcement agencies in small communities were awarded most of the COPS grants. As shown in figure 1, 6,588 grants--49 percent of the total 13,396 grants awarded--were awarded to law enforcement agencies serving communities with populations of fewer than 10,000. Eighty-three percent--11,173 grants--of the total grants awarded went to agencies serving populations of fewer than 50,000. 
Large cities--with populations of over 1 million--were awarded only about 1 percent of the grants, but these grants made up over 23 percent--about $612 million--of the total grant dollars awarded. About 50 percent of the grant funds were awarded to law enforcement agencies serving populations of 150,000 or less, and about 50 percent of the grant funds were awarded to law enforcement agencies serving populations exceeding 150,000, as the Community Policing Act required. As shown in figure 2, agencies serving populations of fewer than 50,000 also received about 38 percent of the total grant dollars--over $1 billion. In commenting on our draft report, the COPS Office noted that these distributions were not surprising given that the vast majority of police departments nationwide are also relatively small. The COPS Office also noted that the Community Policing Act requires that the level of assistance given to large and small agencies be equal. As of the end of fiscal year 1996, after 2 years of operation, the COPS Office had issued award letters to 8,803 communities for 13,396 grants totaling about $2.6 billion. Eighty-six percent of these grant dollars were to be used to hire additional law enforcement officers. MORE program grant funds were to be used to buy new technology and equipment, hire support personnel, and/or pay law enforcement officers overtime. Other grant funds were to be used to train officers in community policing and to develop innovative prevention programs, including domestic violence prevention, youth firearms reduction, and antigang initiatives. The Community Policing Act specifies that no more than 20 percent of the funds available for COPS grants in fiscal years 1995 and 1996 and no more than 10 percent of available funds in fiscal years 1997 through 2000 were to be used for MORE program grants. Table 2 shows the number and amount of the COPS grants (awarded in fiscal years 1995 and 1996) by the type of grant. 
Figure 3 shows the distribution of community policing grant dollars awarded by each state and Washington, D.C. Our survey results showed that in fiscal years 1995 and 1996, grantees were awarded an estimated $286 million (plus or minus 3 percent) in MORE program funds to use for purchases of technology and equipment, hiring of support personnel, and/or payment of law enforcement officers' overtime. We estimated that, as of the end of fiscal year 1996, 61 percent of the funds actually spent had gone to hire civilian personnel. According to our survey, MORE grantees had spent an estimated $90.1 million in fiscal years 1995 and 1996, a little less than one-third of the $286 million in MORE funds they were awarded. Overall, we estimated that about 61 percent of the MORE program grant funds spent during the first 2 years of the program was to hire civilian personnel. About 31 percent of the funds went for the purchase of technology and/or equipment, primarily computers, and about 8 percent was spent on overtime for law enforcement officers. Figure 4 shows how these funds were spent; civilian personnel hiring, at $55.8 million, was the largest category. Time savings achieved through MORE program grant awards were to be applied to community policing. Allowable technology and equipment purchases were generally computer hardware or software. Some technology/equipment items, such as police cars, weapons, radios, radar guns, uniforms, and office equipment--such as fax machines and copiers--could not be purchased with the grant funds. Additional support resources for some positions, such as community service technicians, dispatchers, and clerks, were allowable. Law enforcement officers' overtime was to be applied to community policing activities. Overtime was not funded for the 1996 application year. Distributions of MORE program grant expenditures were heavily influenced by the expenditures of one large jurisdiction, the New York City Police Department.
This police department was awarded about one-third of the total amount of MORE grant funds awarded and had spent about one-half of all MORE grant funds expended nationwide. About 86 percent of the money that the department spent, or $38.7 million, was for the hiring of civilian personnel. Excluding the New York City Police Department's expenditures, the highest percentage of expenditures went for purchases of technology and/or equipment, which represented about 48 percent of the MORE program grant spending by all other grantees. Table 3 shows the percentages of MORE grant funds expended for all survey respondents, the New York City Police Department, and all other survey respondents after excluding the New York City Police Department. In commenting on our draft report, COPS officials noted that nearly two-thirds of the MORE program funds awarded nationwide were for purchases of technology and/or equipment. The officials believed that significant local procurement delays may explain our finding that most expenditures through fiscal year 1996 were for civilian personnel hiring. We asked survey respondents to calculate the number of officer full-time-equivalent positions that their agency had redeployed to community policing as a result of MORE program grant funds spent in fiscal years 1995 and 1996. The respondents were asked to do these calculations using instructions provided to them in the original MORE program grant application package. (See p. 18 for a discussion of how these calculations were to be made.) We estimated that nearly 4,800 (plus or minus 9 percent) officer full-time-equivalent positions had been redeployed. Of these, about 40 percent of the positions were redeployed as a result of technology and/or equipment purchases, about 48 percent of the positions were attributable to hiring civilian personnel, and about 12 percent of the positions were a result of law enforcement officers' overtime. 
The total full-time-equivalent positions were associated with an estimated $82 million, or about 91 percent of the MORE program grant funds spent, because some survey respondents reported that they were not able to calculate positions redeployed to community policing. The most common reasons the respondents cited for not being able to do so were that equipment that had been purchased had not yet been installed, and/or that it was too early in the implementation process to make calculations of time savings. We estimated, based on our mail survey responses, that about 2,400 full-time civilian personnel were hired with MORE program funds spent in fiscal years 1995 and 1996. The most frequently reported technology or equipment purchases were mobile data computers or laptops, personal computers, other computer hardware, and crime analysis computer software. As of June 1997, a total of 30,155 law enforcement officer positions funded by COPS grants were estimated by the COPS Office to be on the street. COPS Office estimates of the numbers of new community policing officers on the street were based on three funding sources: (1) officers on board as a result of COPS hiring grants; (2) officers redeployed to community policing as a result of time savings achieved through technology and equipment purchases, hiring of civilian personnel, and/or law enforcement officers' overtime funded by the MORE grant program; and (3) officers funded under the Police Hiring Supplement Program, which was in place before the COPS grant program. According to COPS Office officials, the office's first systematic attempt to estimate the progress toward the goal of 100,000 new community policing officers on the street was a telephone survey of grantees done between September and December 1996. COPS Office staff contacted 8,360 grantees to inquire about their progress in hiring officers and getting them on the street.
According to a COPS Office official, a follow-up survey, which estimated 30,155 law enforcement officer positions to be on the street, was done between late March and June 1997. The official said that this survey was contracted out because the earlier in-house survey had been extremely time consuming. The official said that, as of May 1997, the office was in the process of selecting a contractor to do three additional surveys during fiscal year 1998. In addition to collecting data through telephone surveys on the numbers of new community policing officers hired with hiring grants, the COPS Office reviewed information provided by grantees on officers redeployed to community policing as a result of time savings achieved by MORE program grants. To receive MORE program grants, applicants are required to calculate the time savings that would result from the grants and apply the time to community policing activities. To assist applicants in doing these calculations, the COPS Office provided examples in the grant application package. "Hessville is a rural department with 20 sworn law enforcement officers. Officers in the Hessville Police Department spend an average of three hours each per shift typing reports by hand at the station. Based on information collected from similar agencies that have moved to an automated field-report-writing system, the department determines that if all of the patrol cars are equipped with laptop computers, the same tasks will take the officers only two hours each per shift to complete--a savings of one hour per officer, per shift. "On any given day, 10 officers in the Hessville Police Department will use the four laptop computers being requested (some laptops will be reused by officers on different shifts) to complete paperwork in their patrol cars.
Since each officer is expected to save an hour of time each day as a result of using the computers, 10 hours of sworn officer time will be saved by the agency each day, which would equal approximately 1.3 FTEs (full time equivalents) of redeployment over the course of one year, using a standard of 1,824 hours (228 days) for an FTE." The COPS Office also counted toward the 100,000-officers goal 2,000 positions funded under the Police Hiring Supplement Program, which was administered by another Justice component before the COPS grants program was established. An official said that a policy decision had been made early in the establishment of the COPS Office to include these positions in the count. Special law enforcement agencies, such as those serving Native American communities, universities and colleges, and mass transit passengers, were awarded 329 hiring grants in fiscal years 1995 and 1996. This number was less than 3 percent of the 11,434 hiring grants awarded during the 2-year period. We reviewed application files for 293 of these grants and found that almost 80 percent were awarded to Native American police departments and university or college law enforcement agencies. Other special agencies included mass transit, public housing, and school police. The COPS Office also considered new police departments as special agencies. The awards to special agencies averaged about $291,000 per grant. The 293 special agency grantees applied most frequently to use officers hired with the COPS funds to (1) write strategic plans for community policing, (2) provide community policing training for citizens and/or law enforcement officers, (3) meet regularly with community groups, and (4) develop neighborhood watch programs and antiviolence programs. We provided a draft of this report for comment to the Attorney General and received comments from the Director of the COPS Office. The comments are reprinted in appendix III. 
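The redeployment arithmetic in the Hessville application example quoted above can be expressed as a short calculation. The sketch below is our illustration, not a COPS Office tool; the inputs (10 officers, 1 hour saved per officer per day, and the 1,824-hour FTE standard) come directly from the quoted example.

```python
# Illustration of the MORE grant redeployment calculation from the
# Hessville example (our sketch, not an official COPS Office program).

HOURS_PER_FTE = 1824  # COPS standard: 228 days of 8 hours each
DAYS_PER_YEAR = 228

def redeployed_ftes(officers_per_day, hours_saved_per_officer_per_day):
    """Officer full-time equivalents redeployed to community policing."""
    hours_saved_per_year = (officers_per_day
                            * hours_saved_per_officer_per_day
                            * DAYS_PER_YEAR)
    return hours_saved_per_year / HOURS_PER_FTE

# Hessville: 10 officers each save 1 hour per day using the laptops.
ftes = redeployed_ftes(10, 1)
print(ftes)  # 1.25, which the application example rounds to "approximately 1.3"
```

Grantees reported the result of this calculation to the COPS Office, which counted the redeployed FTEs toward the 100,000-officers goal.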
The COPS Office also provided some additional information and oral technical comments. The COPS Office generally agreed with the information we presented and provided updates on the progress of the office on some of the issues addressed in the report. These comments are incorporated in the report where appropriate. We are sending copies of this report to the Ranking Minority Members of your Committee and Subcommittee and other interested parties. We will also make copies available to others on request. The major contributors to this report are listed in appendix IV. Please feel free to call me at (202) 512-3610 if you have questions or need additional information. To determine grant program design features in the Public Safety Partnership and Community Policing Act of 1994, we reviewed the act and its legislative history and discussed the results of our review with COPS Office officials. To determine how the COPS Office monitored the use of grants it awarded, we reviewed documentation on monitoring procedures and interviewed officials about actions taken and planned. To determine how COPS grants were distributed nationwide, we obtained COPS Office data files on all grants awarded in fiscal years 1995 and 1996, and we analyzed the distributions by grant type; by population size reported to the COPS Office; by recipient jurisdictions according to COPS data; and by state. The data reflect the number of grants for which applicants have been advised that they will receive funding and for which they have received estimated award amounts. They do not reflect dollar amounts of funds obligated by the COPS Office or actually spent by agencies that received the grants. To determine how law enforcement agencies used grants under the MORE program, we surveyed by mail a stratified, random sample of 415 out of a total of 1,524 agencies that had been awarded MORE grants as of September 30, 1996. 
Using COPS Office application data, we stratified the grant recipients into four population categories, according to the population of the jurisdiction served, and six total MORE grant award amount groups. The population categories were: fewer than 50,000; 50,000 to fewer than 100,000; 100,000 to fewer than 500,000; and 500,000 and over. The MORE grant award amount categories were: less than $10,000; $10,000 to less than $25,000; $25,000 to less than $50,000; $50,000 to less than $75,000; $75,000 to less than $150,000; and $150,000 or more. Regardless of population size, we selected all agencies that had accepted grants of $150,000 or more. We received usable responses from 366, or 88 percent, of our contacts with the sample of 415 agencies. All survey results were weighted to represent the total population of 1,524 MORE program grant recipients. Our questionnaire asked agencies to provide the following information as of September 30, 1996: (1) the total amount of MORE program grant funds accepted; (2) the categories under which grant funds were spent--technology and/or equipment, civilian personnel, or law enforcement officer overtime; (3) the types of technology and equipment purchases made or contracted to make; (4) the types of civilian personnel hired; and (5) the number of officer positions redeployed to community policing, according to calculations of time savings achieved through MORE program grant spending. We pretested the questionnaire by telephone with officials from judgmentally selected MORE program grant recipients, and we revised the questionnaires on the basis of this input. To the extent practical, we attempted to verify the completeness and accuracy of the survey responses. We contacted respondents to obtain answers to questions that were not completed and to resolve apparent inconsistencies between answers to different questions.
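The weighting described above follows the standard stratified-sample design: each respondent's answers are scaled by the ratio of grantees to respondents in its stratum. The sketch below is our illustration with hypothetical stratum counts, not GAO's actual figures; the real strata were the population-by-award-amount cells just described, weighted to the 1,524 grantees.

```python
# Illustration of stratified survey weighting (our sketch; the stratum
# counts below are hypothetical, not GAO's actual survey data).

def weighted_total(strata):
    """Estimate a population total from per-stratum respondent sums.

    strata: iterable of (grantees_in_stratum, respondents, respondent_sum).
    Each respondent stands for grantees_in_stratum / respondents grantees.
    """
    return sum(pop / resp * total for pop, resp, total in strata)

# Hypothetical two-stratum example: a sampled small-award stratum and a
# $150,000-and-over stratum selected with certainty (weight of 1).
strata = [
    (1000, 250, 10_000_000.0),  # 250 of 1,000 grantees responded
    (100, 100, 40_000_000.0),   # certainty stratum: all 100 responded
]
print(weighted_total(strata))  # 80000000.0 (10M scaled by 4, plus 40M)
```

Selecting every large-award agency with certainty, as the survey did, gives those agencies a weight of 1, so their large dollar amounts enter the estimates exactly rather than being extrapolated.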
To determine the process the COPS Office used to calculate the number of officers on the street, we interviewed officials and reviewed documentation on how calculations were made. To describe funding distributions and uses of COPS hiring grants in special law enforcement agencies, we used a data collection instrument to review the COPS Office's grant application files of hiring grants accepted by special law enforcement agencies. We reviewed 293 of the 329 (89 percent) hiring grants that were awarded to special agencies in fiscal years 1995 and 1996, according to COPS Office data. The 36 files that we did not review were in use by COPS Office staff at the time we did our work. We looked at how community policing was implemented in six locations that had received COPS grants. The locations we visited were Los Angeles, Los Angeles County, and Oxnard, CA; Prince George's County, MD; St. Petersburg, FL; and Window Rock, AZ (Navajo Nation). These locations were judgmentally selected to include four city or county police departments and two special law enforcement agencies. The departments we visited were in varying stages of implementing community policing activities. They served communities with populations ranging from 155,000 to over 1 million. Table II.1 provides additional information about the locations we visited. In each law enforcement jurisdiction, we did structured interviews with the police chief or community policing coordinator, a panel of community policing officers, and representatives of local government agencies and community groups involved in community policing projects. We discussed community policing projects and asked interviewees to characterize the level of support by their organization for community policing and to discuss what they viewed as major successes and limitations of community policing for their communities. Table II.2 lists the interviewees by job title. 
Los Angeles County, CA:
Chief, Metropolitan Transit Authority (MTA) Police Department; Panel of community policing officers, MTA Police Department; Senior Code Law Enforcement Officer, City of Lawndale; Probation Officer, County of Los Angeles; Project Director, Esteele Van Meter Multi-Purpose Center; Assistant Principal, Manchester Elementary School (MTA officers work with students on campus)

Oxnard, CA:
Police Chief, Oxnard Police Department; Panel of community policing officers, Oxnard Police Department; Assistant City Manager, City of Oxnard; Chair, Inter-Neighborhood Community Committee (liaison between neighborhood councils and city departments); Marketing Director, AT&T; President, Channel Islands National Bank; President, Colonial Coalition Against Alcohol and Drugs; Executive Director, El Concilio (Latino multiservice nonprofit); Coordinator, Interface Children and Family Services; Director, Instructional Support Services at the Oxnard High School District; Member, Sea Air Neighborhood Watch

Prince George's County, MD:
Community Policing Director, Prince George's County Police Department; Panel of community policing officers, Prince George's County Police Department; Public Safety Director, Prince George's County; Prince George's County Multi-Agency Services Team (county agencies and the police address crime concerns in communities); Chair, Public Safety Issues, Interfaith Action Committee (consortium of churches involved in social service issues); Vice President, Government Affairs, Apartment and Building Owners Association; Resident Manager, Whitfield Towne Apartments

St. Petersburg, FL:
Chief and Director of Special Projects, St. Petersburg Police Department; Panel of community policing officers, St. Petersburg Police Department; Neighborhood Partnership Director, Office of the Mayor; Executive Director and staff, St. Petersburg Housing Authority; Administrator and staff, St. Petersburg Department of Leisure Services; Chief, St. Petersburg Fire Department; Executive Director and staff, Center Against Spouse Abuse; Coordinators, Black on Black Crime Prevention Program and Intervention Program, Pinellas County Urban League; Director, Criminal Justice Administration, Operations Parental Awareness and Responsibility (PAR), Inc.

Window Rock, AZ (Navajo Nation)

Six law enforcement agencies we visited--three city police departments, one county police department, a Native American police department, and a mass transit police department--had a variety of community policing projects under way. The projects illustrated three key principles of community policing identified by the COPS Office: prevention, problemsolving, and partnerships. Representatives of community groups and other local government agencies working with the police on community policing activities were generally supportive of the community policing concept. Table II.3 provides examples of community policing projects in these locations. The projects ranged from starting 18 community advisory boards in neighborhoods throughout a major city to curbing drug activity by working with the resident manager and residents of an apartment complex. In Los Angeles, the police department established 18 Community Police Advisory Boards. Each board consisted of 25 volunteers whose roles were to advise and inform area commanding officers of community concerns (e.g., enforcement of curfew laws and education on domestic violence). Each board used community and police support to address the problems that had been identified. Interviewees said the boards had been effective in helping the police to build trust, involve citizens, solve problems, and reduce citizens' fear of crime. The transit authority was part of a task force that addressed problems associated with loitering and drinking by day laborers on railroad property.
Using community policing techniques such as problem identification and specific actions, such as clearing shrubs, painting over graffiti, and securing railroad ties that were being used to build tents for shelter, the task force resolved the problems. At the Oxnard, CA, Police Department, "Street Beat" was an award-winning cable television series sponsored by local businesses and the cable company. Interviewees said the weekly series had been one of the department's most effective community policing tools. Over 500 programs had been aired since 1985. "Street Beat" offered crime prevention tips and encouraged citizens to participate in all of the department's community policing activities. Over 300 departments contacted the Oxnard Police Department for information on replicating the television series in their cities. Citizens, the resident manager, and a community policing officer worked to remove drug dealers from an apartment complex. The community policing officer used several successful tactics, including citing suspected drug dealers, most of whom were not residents, for trespassing and taking photographs of them. Citizens formed a coalition that met with the community policing officer in her on-site office, thereby increasing the willingness of residents to come forward with information on illegal activities. Some disorderly tenants were evicted. The resident manager estimated that drug dealing at the complex was reduced by 90 percent. Community policing helped to improve relations between police officers and the residents of a shelter run by the Center Against Spouse Abuse. Interviewees said that the shelter had a policy, until about 1992, that police could not enter the property. Residents were distrustful of the police. Some had negative experiences when officers went to their homes to investigate complaints of abuse. For example, residents reported that officers failed to make arrests when injunctions were violated.
Since the inception of community policing, interviewees said that officers were more sensitive to victims when they investigated spouse abuse cases. Officers visited the shelter to discuss victims' rights, and residents were favorably impressed by their openness. The community policing officer in the neighborhood was praised by the shelter director for his responsiveness. On two occasions, he responded quickly to service calls, arresting a trespasser and assisting a suicidal resident. A police official noted that the department was in the early development phase of community policing, attempting to demonstrate a few successful projects that could be used in locations throughout the over 26,000-square-mile reservation. One interviewee said that gang activity was partially a result of teens having nothing to do on the reservation. A community policing project had officers working with youth groups to develop positive activities and encourage participation by organizing a blood drive, sponsoring youth athletic teams, and recruiting young people to help elderly citizens. Another community policing project was the development of a computer database on gang activities and membership. We asked interviewees representing community groups and local government agencies participating in community policing activities to characterize the level of support their organization had for community policing in their neighborhoods. Thirty-two of the 39 interviewees said that they were supportive of their local community policing programs. Seven other interviewees offered no specific response to this question, except to say that they felt it was too early in their implementation of community policing to make assessments. We also asked interviewees representing law enforcement agencies, community groups, and local government agencies what they felt were the major successes and limitations of community policing. 
Responses on community policing successes emphasized improved relationships between the police and residents and improvements in the quality of life for residents of some neighborhoods. Responses on limitations emphasized that there was not enough funding and that performance by some individual community policing officers was disappointing. Summaries of several responses on the major successes of community policing were the following: "I have seen a big turnaround in some apartment complexes. The entire atmosphere of these places has changed. People are outside. Children are playing. This is due to efforts of community policing officers to get drug buyers and sellers off of the properties." (A community group representative.) "There have been big-time changes here as a result of community policing. The police have developed a much higher level of trust from public housing residents than existed before. Residents will work with the police now and provide them with information. In this public housing complex, the sense of safety and security has increased. Before the community policing officers were on patrol, residents did not want to walk past the basketball courts into the community center. That is not a problem any longer. The police worked with the Department of Parks and Recreation to improve lighting and redesign a center entrance. We are now offering a well-attended course on computers at the center. People are enjoying the parks. They are even on the tennis courts. Our community policing officer has been successful in working with problem families and the housing authority staff. We provide referrals, counseling, and other resources. We have either helped families address their problems or had them evicted from our units. There are many individual success stories of young people developing better self-esteem and hygiene as a result of interacting with the community policing officer." (A housing authority director.) 
"Community policing has changed how we practice law enforcement in a substantial way. We applied community policing strategies to a distressed neighborhood plagued by crime. The area had prostitution and drug dealing, and service calls to the police were high. We worked with residents and landlords to improve the situation. Closer relationships developed, and we began working on crime prevention with community groups, schools, and parents. Property managers provided better lighting for their property, cut their weeds, and screened tenants more carefully." (A community policing officer.) Summaries of several responses on major limitations to community policing were: "Community policing is working here, but we still have a long way to go. The challenge for the department is to convince the force that community policing is not a fad and is not a select group of officers doing touchy/feely work, but that it is a philosophy for the whole department. I think we need to reengineer the entire police department structure to fully integrate community policing into the community. I don't believe we have decentralized the department enough. For example, I think detectives should be out in the community with community policing officers, instead of at police headquarters. They should know the people in the areas to which they are assigned." (A director of public safety.) "We don't have 'Officer Friendly' yet, even though overall attitudes have improved. The concept is good. The limitations are in the individuals doing the work. Some are good. Some are not." (A community group member.) "Some residents have an unrealistic expectation of what community policing can do and what it cannot do. The majority of calls for service involve social problems. Some residents expect the police to solve all their social problems, such as unemployment and mediating family and neighbor disputes." (A local government official.) 
Janet Fong, Senior Evaluator Lisa Shibata, Evaluator The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Department of Justice Office of Community Oriented Policing Services (COPS) grant program, focusing on: (1) Justice's implementation of the Community Policing Act with special attention to statutory requirements for implementing the COPS grants; (2) how COPS monitored the use of grants it awarded; (3) the distribution of COPS grants nationwide by population size of jurisdiction served, by type of grant, and by state; (4) how law enforcement agencies used grants under the COPS Making Officer Redeployment Effective (MORE) grant program; (5) the process the COPS office used to calculate the number of officers on the street; and (6) the funding distributions and uses of COPS hiring grants by special law enforcement agencies. GAO noted that: (1) under the Community Policing Act, grants are generally available to any law enforcement agency that can demonstrate a public safety need; demonstrate an inability to address the need without a grant; and, in most instances, contribute a 25-percent match of the federal share of the grant; (2) to achieve the goal of increasing the number of community policing officers, the law required that grants be used to supplement, not supplant, state and local funds; (3) the COPS Office provided limited monitoring of the grants during the period GAO reviewed; however, the office was taking steps to increase its level of monitoring; (4) about 50 percent of the grant funds were awarded to law enforcement agencies serving populations of 150,000 or less, and about 50 percent of the grant funds were awarded to law enforcement agencies serving populations exceeding 150,000, as the Community Policing Act required; (5) about $286 million, or 11 percent of the total grant dollars awarded in fiscal years (FY) 1995 and 1996, were awarded under the MORE grant program; (6) according to the results of a survey GAO did of a representative national sample of those receiving grants under the COPS MORE grant 
program in FY 1995 and 1996, grantees had spent an estimated $90.1 million, or a little less than one-third of the funds they were awarded; (7) they spent about 61 percent of these funds to hire civilian personnel, about 31 percent to purchase technology or equipment, and about 8 percent on overtime payments for law enforcement officers; (8) the distributions of MORE program grant expenditures were heavily influenced by the expenditures of the New York City Police Department, which spent about one-half of all the MORE program grant funds expended nationwide; (9) to calculate its progress toward achieving the goal of 100,000 new community policing officers on the street as a result of its grants, the COPS Office did telephone surveys of grantees; (10) as of June 1997, the COPS Office estimated that a total of 30,155 law enforcement officer positions funded by COPS grants were on the street; (11) according to the results of GAO's review of COPS Office files, special law enforcement agencies were awarded 329 community policing hiring grants in FY 1995 and 1996--less than 3 percent of the total hiring grants awarded; and (12) special agency grantees applied most frequently to use officers hired with the COPS funds to write strategic plans, work with community groups, and provide community policing training to officers and citizens.
In 1979, the Office of Management and Budget's (OMB) Office of Federal Procurement Policy (OFPP) issued Policy Letter No. 79-1 to guide federal agencies in implementing Public Law 95-507. The letter provides uniform policy guidance to federal agencies regarding the organization and functions of OSDBUs. In September 1994, the President signed Executive Order No. 12928, entitled "Promoting Procurement With Small Businesses Owned and Controlled by Socially and Economically Disadvantaged Individuals, Historically Black Colleges and Universities, and Minority Institutions." The order mandates that, unless prohibited by law, the OSDBU director should be responsible to and report directly to the agency head or the agency's deputy as required by the Small Business Act. The order also mandates that federal agencies comply with the requirements of OFPP's policy letter, unless prohibited by law. Of the eight federal agencies we reviewed, only DLA's and DOE's OSDBU heads do not report to the appropriate agency official. The conference report accompanying Public Law 95-507 states that each federal agency having procurement powers must establish an Office of Small Business Utilization to be directed by an employee of that agency, who would report directly to the head of the agency or the agency's second-ranking official. Also, OFPP's policy letter defines the agency's deputy as the second-ranking official within the agency. Furthermore, in a June 1994 memorandum to federal agencies, the OMB Director defines the agency's deputy as the second-in-command. The OSDBU directors in the Departments of the Army, the Navy, and the Air Force; NASA; and GSA report to either the agency head or the agency's deputy. The Army OSDBU director reports to the Secretary of the Army (the agency head), while the Navy and Air Force OSDBU directors report to the Under Secretary of the Navy and the Under Secretary of the Air Force, respectively (the agencies' second-ranking officials). 
The NASA OSDBU director reports to the NASA Administrator (the agency head). At GSA, the OSDBU director reports directly to the GSA Deputy Administrator (the agency's second-in-command). In 1988, Public Law 100-656 amended the Small Business Act, allowing the Secretary of Defense to designate the official to whom the OSDBU director should report. Currently, DOD's OSDBU director reports to the Under Secretary of Defense for Acquisition and Technology, who is the Secretary's designee. The OSDBU directors at DLA and DOE report to officials other than the agency head or agency's deputy. While each agency explained its rationale, we do not believe that in either agency the OSDBU director reports to the appropriate official, as defined by Public Law 95-507. DLA's OSDBU director reports to the agency's Deputy Director for Acquisition. As shown in figure 1, the Deputy Director for Acquisition is neither the agency head nor the agency's deputy. According to DLA's Deputy General Counsel (Acquisition), the agency's rationale for the above reporting arrangement is that the Deputy Director for Acquisition is considered to be the agency's deputy for all matters relating to acquisition. We do not agree with DLA's rationale. In our view and as shown by the agency's organizational chart, the Principal Deputy Director is the agency's second-in-command. In addition, the existing reporting arrangement at DLA could potentially impair the achievement of the act's objectives. As the House Committee on Small Business observed in a 1987 report, having the OSDBU director report to an individual who has responsibility for the functions that the director is intended to monitor (procurement) could lessen the director's effectiveness. DLA officials neither agreed nor disagreed with our position that the OSDBU's reporting level was not in compliance with Public Law 95-507. 
However, in March 1995, on the basis of questions raised during our review, DLA's Deputy General Counsel (Acquisition) said that DLA will take steps to reorganize so that the OSDBU director reports to either the agency head or the agency's deputy. As shown in figure 2, the head of Energy's OSDBU, whose title is Deputy Director, reports to the Director of the Office of Economic Impact and Diversity. The Director reports directly to the Secretary of Energy but is neither the agency head nor the agency's second-in-command. Figure 2 reflects DOE's January 1995 reorganization. Prior to the reorganization, the title of the head of the OSDBU was Director, and that official reported to the Director of Economic Impact and Diversity. In response to our inquiry concerning the rationale for that arrangement, DOE said that the Department of Energy Organization Act (42 U.S.C. 7253) gives the Secretary broad authority to organize the Department and that Public Law 95-507 was not intended to supersede or amend the Organization Act. In response to a congressional request, in 1993 OMB surveyed federal agencies to determine the organizational reporting levels of their OSDBU directors. The OMB survey included four of the agencies we reviewed: DOD, DOE, GSA, and NASA. According to the OFPP Deputy Administrator for Procurement Law and Legislation, DOE was not in compliance with the statute because the OSDBU director did not report to the agency head or the agency's deputy. In a June 9, 1994, memorandum, OMB's Director emphasized that federal agencies must comply with the law and policy regarding the OSDBU's organizational reporting level. Furthermore, in a 1987 report, we stated that DOE's rationale based on the Organization Act does not give the Secretary the authority to alter or abridge the requirements of the Small Business Act. 
We recommended that the Secretary of Energy require the head of the OSDBU to be responsible only to, and report directly to, the Secretary or Deputy Secretary of Energy. DOE officials neither agreed nor disagreed with our position that the OSDBU's reporting level was not in compliance with Public Law 95-507. However, in March 1995, DOE officials--including the Director, Office of Economic Impact and Diversity, and the Assistant General Counsel for General Law--told us that the agency recognizes that it must comply with Executive Order 12928 (which mandates that, unless prohibited by law, the OSDBU director should be responsible to and report directly to the agency head or the agency's deputy). DOE officials told us that they are currently developing a reorganization plan. DOE's Assistant General Counsel for General Law said that it is uncertain when or how the reorganization will be accomplished because of a need to reconcile the responsibilities of the OSDBU with DOE's statutorily mandated Office of Minority Economic Impact. All eight OSDBUs we examined conduct activities consistent with the requirements of Public Law 95-507 and OFPP's Policy Letter 79-1 for assisting small and disadvantaged businesses in obtaining federal contracts. These activities include (1) developing the agency's small business contracting and subcontracting goals, (2) sponsoring and/or participating in small business outreach efforts, (3) serving as an interagency liaison for procurement activities relating to small businesses and small disadvantaged businesses, and (4) supervising and training employees involved with the agency's small business activities. Officials at several OSDBUs also cited examples of special initiatives undertaken to help meet their agency's contracting goals. As noted above, the Energy OSDBU head reports to the Director of the Office of Economic Impact and Diversity. 
Because the Diversity Office has broad responsibility for formulating and monitoring the implementation of policies for the agency's small business, disadvantaged business, and women-owned business programs, many activities are conducted jointly with the OSDBU. For simplicity, in the following sections, we characterize these as the OSDBU's activities. The Small Business Act and OFPP's Policy Letter require OSDBU directors to consult with the Small Business Administration (SBA) on establishing contracting goals for small and small disadvantaged businesses. At GSA and NASA, the OSDBU directors and SBA establish goals setting out the percentage of prime contracts and subcontracts that will be awarded to small businesses, small disadvantaged businesses, and women-owned businesses. For DOD, the OSDBU director negotiates DOD-wide prime contracting and subcontracting goals, which incorporate the goals for the component agencies such as DLA and the Departments of the Army, Navy, and Air Force. For DOE, the Office of Economic Impact and Diversity has assumed the responsibility for negotiating the agency's contracting and subcontracting goals. The process of setting goals begins with OSDBU representatives providing SBA officials with estimates of the total dollar amounts of (1) all prime contracts the agencies anticipate awarding during the forthcoming fiscal year and (2) subcontracts to be awarded by the agencies' prime contractors. The agencies express the goals in terms of the percentages of the total contract and subcontract dollars to be awarded to small and small disadvantaged businesses. In formulating goals and tracking the agencies' progress or achievement toward the goals, the OSDBUs also look at the number of contracts awarded and their dollar values. OFPP's policy letter requires OSDBUs to conduct outreach efforts to provide information to small and disadvantaged businesses. 
For example, OSDBU's outreach may consist of sponsoring or participating in seminars or conferences on contracting opportunities and providing materials describing how to do business with the agencies. OSDBU officials at each of the eight agencies told us that they had sponsored or cosponsored conferences or seminars for small businesses during fiscal years 1993 and 1994. In addition, all eight agencies told us that their staffs had attended numerous conferences or seminars sponsored by other government agencies or private organizations. OFPP's policy letter also requires the OSDBU directors to serve as interagency liaisons for all small business matters. Officials of each of the OSDBUs we reviewed serve in this capacity. For example, in response to the Federal Acquisition Streamlining Act of 1994, OSDBU officials at five of the eight agencies are participating in an interagency group that is drafting revisions to the Federal Acquisition Regulations pertaining to small businesses. Generally, the OSDBUs also serve as their agency's point of contact for small businesses. All eight of the agencies provide information to individual small businesses in response to inquiries about doing business with them. For example, the information provided includes forecasts of agencies' acquisitions, contracting procedures, and required forms. Under Public Law 95-507 and/or OFPP's policy letter, OSDBUs are responsible for supervising and training agency employees in contracting and subcontracting with small businesses. The OSDBUs we reviewed had activities designed to accomplish this requirement. These activities include conducting annual or semiannual training sessions for small business specialists and issuing agency regulations concerning small business procurement matters. Officials of each of the eight OSDBUs said that they have initiated efforts to help meet their agency's contracting goals. 
In particular, the GSA and Air Force OSDBUs cited examples that illustrate these efforts. Furthermore, officials of small and minority business associations cited the NASA OSDBU as a model for other federal OSDBUs because of its initiatives to help meet the agency's goals. GSA's OSDBU, in conjunction with the agency's Office of Training and Compliance, has established the Procurement Preference Goaling Program. The program is designed to assist small disadvantaged businesses and women-owned businesses in four industries--travel, manufacturing, automobile sales, and construction--where these businesses have done less well in obtaining federal contracts. For example, the program includes the following: Developing a list of minority- and women-owned automobile dealerships in various geographic areas that can supply a portion of GSA's automobile fleet purchases. For about 80 percent of the agency's automobile purchases, the volume of cars required can only be obtained directly from one of the big three automakers. The remaining 20 percent--about $217 million in fiscal year 1993--is small enough that the agency can procure the automobiles from individual dealerships, according to GSA's OSDBU director. Working with SBA on a pilot project to identify zones where contracts for travel services could be set aside for SBA's 8(a) program participants. The OSDBU and SBA are currently planning to sponsor a large conference in New Orleans, Louisiana, to solicit applications from 8(a) firms in the travel services field. Compiling lists of small businesses, small disadvantaged businesses, and women-owned businesses that manufacture various goods that the Federal Emergency Management Agency may need during disasters. Attempting to increase construction subcontracting opportunities for small disadvantaged businesses and small women-owned businesses by implementing the Courthouse/Federal Buildings Pilot Program. 
Under this program, GSA identifies new federal construction projects with an estimated cost of over $50 million and makes special efforts to include small businesses, small disadvantaged businesses, and small women-owned businesses as subcontractors. (GSA has identified one such project in 10 of its 11 regions; no project qualified in one region.) As part of this pilot program, one of the Deputy Directors will be directly involved in the projects and will meet with potential contractors and agency field staff before contracts are issued to ensure that specific language concerning subcontracting is included in solicitations and bid offerings. Also, the prime contractor will be required to report to GSA--on a monthly basis during the procurement phase and quarterly thereafter--on the utilization of the targeted small businesses. According to the Deputy Director, as of February 1995, although the pilot had not yet been formally approved by GSA, two projects--the Tampa Courthouse and the Kansas City Courthouse--were in the initial process stage. The Air Force OSDBU initiated the Small Business and Historically Black Colleges and Universities/Minority Institutions Strategic Planning Workshop in fiscal year 1992. The purpose of the workshop is to increase the participation in the Air Force's procurement by establishing contracting goals for small business, small disadvantaged business, and minority educational institutions. The workshop is unique for three reasons: (1) The process of goal setting begins 6 months earlier than in other agencies, (2) the OSDBU and field officials meet for a week to develop the goals, and (3) the Air Force develops a set of goals explicitly based on an increased level of effort by agency contracting officials to provide opportunities to small and disadvantaged businesses. The OSDBU also has a project called the East St. Louis Initiative, under which the Air Force OSDBU is working with the city of East St. 
Louis, Illinois, to help bring contracts to small disadvantaged businesses and jobs to the mostly minority residents. Under this initiative, the Air Force is in a partnership with a national organization called Access America and identifies Air Force contracts that it can obtain to bring manufacturing jobs to this economically depressed area. Access America has obtained a grant to train between 1,100 and 1,500 residents of East St. Louis in aircraft maintenance and aerospace technology. With support from the Air Force Secretary and Chief of Staff, the OSDBU director has assembled a Business Education Team from field and headquarters contracting activities. The team conducts seminars that provide small businesses and small disadvantaged businesses with information on doing business with the Air Force. NASA is required by law to award, to the fullest extent possible, at least 8 percent of the annual total value of its contracts and subcontracts to small businesses or other organizations owned or controlled by socially and economically disadvantaged individuals, including (1) women-owned businesses, (2) historically black colleges and universities, and (3) minority educational associations. NASA targeted fiscal year 1994 to meet the goal. The agency awarded 8.5 percent of its fiscal year 1993 contracting budget to small disadvantaged businesses, and in fiscal year 1994 it awarded 9.9 percent. NASA OSDBU officials attributed the agency's success to the office's six-point plan--a strategy for achieving and maintaining compliance with the law's requirements. 
The six points include requiring NASA's top officials--Center Directors and Associate Administrators--to develop a plan for meeting their portion of the agency's 8-percent goal; requiring the concurrence of the NASA Chief of Staff when consolidating prime contracts that would reduce awards to small disadvantaged businesses; requiring Associate Administrators to take steps to increase subcontracting to small disadvantaged businesses by NASA's top 100 prime contractors and report these steps to the OSDBU; requiring each NASA center to identify two non-8(a) procurement requirements, of significant dollar value, that could be awarded to small disadvantaged businesses in fiscal year 1993; developing an awards program for technical small business and contracting personnel for their efforts in helping to achieve NASA's 8-percent goal; and challenging NASA's Jet Propulsion Laboratory to double its subcontracting in fiscal year 1993. Also, at the urging of its OSDBU, NASA requires that the OSDBU director review all procurement proposals with an estimated value over $25 million for large contracting activities and $10 million for smaller contracting activities, in order to establish a goal for the portion to be subcontracted to small businesses. NASA also established criteria for assessing top-level managers' assistance to small and disadvantaged businesses. Fiscal year 1993 was the first year the OSDBU provided input for top-level managers' performance assessment. NASA also has several efforts aimed specifically at high-tech small or minority-owned businesses. In cooperation with SBA and the UNISPHERE Institute, the OSDBU assists firms that have participated in SBA's 8(a) program to find international venture partners. The UNISPHERE program helps these firms expand their technical and financial capabilities, thus increasing their ability to compete for NASA contracts. 
In addition, the OSDBU's New England Outreach Office identifies high-tech minority businesses that are capable of working on NASA contracts and subcontracts. Furthermore, the OSDBU has initiated the High-Tech Small Disadvantaged Business Forum, which permits small disadvantaged businesses to make presentations on their technical capabilities to NASA headquarters and field officials. In fiscal year 1994, 70 percent of the NASA contracts awarded to small disadvantaged businesses were awarded to high-tech firms. The organizational reporting levels of the OSDBU directors at the Defense Logistics Agency and the Department of Energy do not comply with the requirements of Public Law 95-507. By reporting to officials other than the agency head or the agency's deputy, the OSDBU directors at these agencies may not have access to officials at a high enough level to maximize their effectiveness in assisting small and disadvantaged businesses. Following our review, DLA's Deputy General Counsel for Acquisition said that the agency will take steps to reorganize so that the OSDBU director reports to either the agency head or the agency's second-ranking official. DOE's Director, Office of Economic Impact and Diversity, and the Assistant General Counsel for General Law told us that their agency would comply with Executive Order 12928. However, the Assistant General Counsel for General Law said that it is uncertain when or how the reorganization will be accomplished because of a need to reconcile the responsibilities of the OSDBU with another statutorily mandated office. We discussed a draft of this report with the OSDBU directors or their designees and staff at each of the eight agencies we reviewed. In addition, we discussed matters related to the OSDBUs' reporting levels with DLA's Deputy General Counsel for Acquisition and with DOE's Assistant General Counsel for General Law. All of the officials generally agreed with the facts presented. 
We have incorporated the officials' comments where appropriate. To attain our objectives, we reviewed the Small Business Act, Public Law 95-507, OFPP's Policy Letter 79-1, and Executive Order No. 12928. We interviewed the directors and other officials of the OSDBU at each of the eight agencies. To obtain the views of small businesses and small disadvantaged businesses concerning OSDBUs' activities, we also interviewed representatives from two small business associations: the National Minority Suppliers Development Council, Inc., and the National Association of Small Businesses. To determine the reporting levels of the OSDBU directors, we reviewed organizational charts and identified the officials providing performance ratings. In those cases in which the OSDBU varied from the statutory requirement, we obtained the rationale from the agency's OSDBU and Office of General Counsel officials. To determine what activities the OSDBUs conducted to assist small businesses and small disadvantaged businesses, we reviewed the OSDBUs' function statements and obtained documentation related to specific activities. We conducted our review from April 1994 through March 1995 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and to other interested parties. We will also make copies available to others on request. Should you or your staff have any questions, you may reach me at (202) 512-7631. Major contributors to this report are listed in appendix I. John T. McGrail, Attorney Advisor The first copy of each GAO report and testimony is free. Additional copies are $2 each. 
Pursuant to a congressional request, GAO reviewed the Office of Small and Disadvantaged Business Utilization (OSDBU) at eight federal agencies, focusing on: (1) whether OSDBU directors report to the required agency official and, if not, the rationale for the deviation; and (2) OSDBU activities to assist small and disadvantaged businesses (SDB) in obtaining federal contracts. GAO found that: (1) except at the Defense Logistics Agency (DLA) and the Department of Energy (DOE), OSDBU directors report to the appropriate agency official as required; (2) the DLA OSDBU director reports to the Deputy Director for Acquisition, since that official is responsible for all contracting matters; (3) DLA plans to have its OSDBU director report either to the DLA head or the DLA principal deputy under its reorganization plan; (4) DOE maintains that its authorizing legislation enables the Secretary of Energy to use discretion in determining the OSDBU director's reporting level; (5) DOE plans to comply with federal OSDBU reporting requirements; (6) OSDBU activities are consistent with legal requirements for assisting SDB in obtaining federal contracts; (7) these activities include developing contracting goals, sponsoring or participating in outreach efforts, being an interagency liaison for small business procurement activities, and supervising and training agency staff who work with small businesses; and (8) several agency OSDBUs have undertaken additional initiatives to promote SDB participation in federal contracting.
LTCI helps pay for the costs associated with long-term care services, which can be expensive. However, the number of LTCI policies sold has been relatively small--about 9 million as of the end of 2002, the most recent year of data available. To receive benefits under an LTCI policy, the consumer must not only obtain the covered services, but must also meet what are commonly referred to as benefit triggers. Most policies provide benefits under two circumstances: (1) the consumer cannot perform a certain number of activities of daily living (ADL)--such as bathing, dressing, and eating--without assistance, or (2) the consumer requires supervision because of a cognitive impairment. In addition, benefit payments do not begin until the policyholder has met the benefit triggers for the length of his or her elimination period. Elimination periods establish the amount of time a policyholder must receive services before his or her insurance will begin making payments, for example, 30 or 90 days. Determining whether a consumer has met the benefit triggers can be complex and companies' processes for doing so vary. In the event that a consumer's claim for benefits is denied, the consumer generally can appeal to the insurance company. If the company upholds the denial, the consumer can file a complaint with the state insurance department or can seek adjudication through the courts. Many factors affect LTCI premium rates, including the benefits covered and the age and health status of the applicant. For example, companies typically charge higher premiums for comprehensive coverage as compared to policies without such coverage, and consumers pay higher premiums the higher the daily benefit amount and the shorter the elimination period. Similarly, premiums typically are more expensive the older the policyholder is at the time of purchase. 
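The benefit-trigger and elimination-period rules described above amount to a simple eligibility test. The sketch below is illustrative only, with hypothetical function and parameter names; the two-ADL threshold and the six-item ADL list follow the HIPAA trigger discussed later in this statement:

```python
# Illustrative sketch of LTCI benefit eligibility; names are hypothetical.
ADLS = {"bathing", "dressing", "eating", "toileting", "transferring", "continence"}

def benefits_payable(adls_needing_assistance: set,
                     cognitive_impairment: bool,
                     days_triggers_met: int,
                     elimination_period_days: int = 90) -> bool:
    # Trigger: the consumer cannot perform enough ADLs without assistance,
    # or requires supervision because of a cognitive impairment.
    trigger_met = (len(adls_needing_assistance & ADLS) >= 2
                   or cognitive_impairment)
    # Payments begin only after the triggers have been met for the
    # length of the elimination period (e.g., 30 or 90 days).
    return trigger_met and days_triggers_met >= elimination_period_days

print(benefits_payable({"bathing", "eating"}, False, 120))  # True
print(benefits_payable({"bathing", "eating"}, False, 45))   # False
```

Actual benefit determinations are more complex, as the statement notes, and company processes vary; this only captures the basic structure of the test.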
Company assumptions about interest rates on invested assets, mortality rates, morbidity rates, and lapse rates--the share of policyholders expected to drop their policies over time--also affect premium rates. A key feature of LTCI is that premium rates are designed--though not guaranteed--to remain level over time. While under most states' laws insurance companies cannot increase premiums for a single consumer because of individual circumstances, such as age or health, companies can increase premiums for entire classes of individuals, such as all consumers with the same policy, if new data indicate that expected claims payments will exceed the class's accumulated premiums and expected investment returns. Setting LTCI premium rates at an adequate level to cover future costs has been a challenge for some companies. Because LTCI is a relatively new product, companies lacked and may continue to lack sufficient data to accurately estimate the revenue needed to cover costs. For example, lapse rates have proven lower than companies anticipated in initial pricing, which increased the number of people likely to submit claims. As a result, many policies were priced too low and premiums subsequently had to be increased, leading some consumers to cancel coverage. Oversight of the LTCI industry is largely the responsibility of states. Through laws and regulations, states establish standards governing LTCI and give state insurance departments the authority to enforce those standards. Many states' laws and regulations reflect standards set out in model laws and regulations developed by NAIC. These models are intended to assist states in formulating their laws and policies to regulate insurance, but states can choose to adopt them or not. Beyond implementing pertinent laws and regulations, state regulators perform a variety of oversight tasks that are intended to protect consumers from unfair practices. 
These activities include reviewing policy rates and forms to ensure that they are consistent with state laws and regulations; conducting market conduct examinations, in which an examiner visits a company to evaluate its practices and procedures and checks them against information in the company's files; and responding to consumer complaints. Although oversight of the LTCI industry is largely the responsibility of states, the federal government also plays a role in the oversight of LTCI. For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established federal standards that specify the conditions under which LTCI benefits and premiums can receive favorable federal income tax treatment. Under HIPAA, a tax-qualified policy must cover individuals certified as needing substantial assistance with at least two of the six ADLs for at least 90 days due to a loss of functional capacity, having a similar level of disability, or requiring substantial supervision because of a severe cognitive impairment. Tax-qualified policies under HIPAA must also comply with certain provisions of the NAIC LTCI model act and regulation in effect as of January 1993. The Department of the Treasury, specifically the Internal Revenue Service (IRS), issued regulations in 1998 implementing some of the HIPAA standards. However, according to IRS officials, the agency generally relies on states to ensure that policies marketed as tax qualified meet HIPAA requirements. In 2002, 90 percent of LTCI policies sold were marketed as tax qualified. In recent years, many states have made efforts to improve oversight of rate setting, though some consumers remain more likely to experience rate increases than others. Since 2000, NAIC estimates that more than half of all states have adopted new rate setting standards. 
States that adopted new standards generally moved from a single standard focused on ensuring that rates were not set too high to more comprehensive standards designed primarily to enhance rate stability and provide increased protections for consumers. The more comprehensive standards were based on changes made to NAIC's LTCI model regulation in 2000. While regulators in most of the 10 states we reviewed told us that they expect these more comprehensive standards will be successful, they noted that more time is needed to know how well the standards will work. Regulators from the states in our review also use other standards or practices to oversee rate setting, several of which are intended to keep premium rates more stable. Despite states implementing more comprehensive standards and using other oversight efforts intended to enhance rate stability, some consumers may remain more likely to experience rate increases than others. Specifically, consumers may face more risk of a rate increase depending on when they purchased their policy, from which company their policy was purchased, and which state is reviewing a proposed rate increase on their policy. Since 2000, NAIC estimates that more than half of states nationwide have adopted new rate setting standards for LTCI. States that adopted new standards generally moved from the use of a single standard designed to ensure that premiums were not set too high to the use of more comprehensive standards designed to enhance rate stability and provide other protections for consumers. Prior to 2000, most states used a single, numerical standard when reviewing premium rates. This standard--called the loss ratio--was included in NAIC's LTCI model regulation. For all policies where initial rates were subject to this loss ratio standard, proposed rate increases are subject to the same standard. 
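The loss ratio standard referred to above is a single numerical test: projected claims costs divided by projected premium income must meet a state-set minimum (60 percent under NAIC's pre-2000 LTCI model regulation). A minimal sketch, with hypothetical names and illustrative dollar amounts:

```python
def loss_ratio(expected_claims: float, earned_premiums: float) -> float:
    """Ratio of projected claims costs to projected premium income."""
    return expected_claims / earned_premiums

def meets_standard(expected_claims: float, earned_premiums: float,
                   minimum: float = 0.60) -> bool:
    # A filing passes if the ratio of claims to premiums is at least
    # the state's minimum loss ratio (the 60% floor is illustrative).
    return loss_ratio(expected_claims, earned_premiums) >= minimum

# Hypothetical block of policies: $6.5M in expected claims against
# $10M in lifetime premiums gives a ratio of 0.65, passing a 60% floor.
print(meets_standard(6_500_000, 10_000_000))  # True
```

Note that the check is only a floor: a filing whose premiums are set far too low, so that the ratio is well above the minimum, still passes, which reflects the weakness NAIC identified in this standard.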
While the loss ratio standard was designed to ensure that premium rates were not set too high in relation to expected claims costs, over time NAIC identified two key weaknesses in the standard. First, the standard does not prevent premium rates from being set too low to cover the costs of claims over the life of the policy. Second, the standard provides no disincentive for companies to raise rates, and leaves room for companies to gain financially from premium increases. In identifying these two weaknesses, NAIC noted that there have been cases where, under the loss ratio, initial premium rates proved inadequate, resulting in large rate increases and significant loss of LTCI coverage from consumers allowing their policies to lapse. To address the weaknesses in the loss ratio standard as well as to respond to the growing number of premium increases occurring for LTCI policies, NAIC developed new, more comprehensive model rate setting standards in 2000. These more comprehensive standards were designed to accomplish several goals, including improving rate stability. Among other things, the standards established more rigorous requirements companies must meet when setting initial LTCI rates and rate increases, which several state regulators told us may result in higher, but more stable, premium rates over the long term. The more comprehensive standards were also designed to inform consumers about the potential for rate increases and provide protections for consumers facing rate increases. Table 1 describes selected rate setting standards added to NAIC's LTCI model regulation in 2000 and the purpose of each standard in more detail. Although a growing number of consumers will be protected by the more comprehensive standards going forward, as of 2006 many consumers had policies that were not protected by these standards. 
Following the revisions to NAIC's LTCI model in 2000, many states began to replace their loss ratio standard with more comprehensive rate setting standards based on NAIC's changes. NAIC estimates that by 2006 more than half of states nationwide had adopted the more comprehensive standards. However, many consumers have policies not protected by the more comprehensive standards, either because they live in states that have not adopted these standards or because they bought policies issued prior to implementation of these standards. For example, as of December 2006, according to our analysis of NAIC and industry information, at least 30 percent of policies in force were issued in states that had not adopted the more comprehensive rate setting standards. Further, in states that have adopted the more comprehensive standards, many policies in force were likely to have been issued before states began adopting these standards in the early 2000s. Regulators from most of the 10 states in our review said that they expect the rate setting standards added to NAIC's model regulation in 2000 will improve rate stability and provide increased protections for consumers, though regulators also recognized that it is too soon to determine the effectiveness of the standards. Some regulators explained that it might be as much as a decade before they are able to assess the effectiveness of these standards. Regulators from 1 state explained that rate increases on LTCI policies sold in the 1980s did not begin until the late 1990s, when consumers began claiming benefits and companies were faced with the costs of paying their claims. Further, though the more comprehensive standards aim to enhance rate stability, LTCI is still a relatively young product, and initial rates continue to be based on assumptions that may eventually require revision. 
State regulators from the 10 states in our review use other standards--beyond those included in NAIC's LTCI model regulation--or practices to oversee rate setting, including several that are intended to enhance rate stability. Regulators from 3 of the states in our review told us that their state has standards intended to enhance the reliability of data used to justify rate increases, and regulators from 2 states told us that they have standards to limit the extent to which LTCI rates can increase. Beyond implementing rate setting standards, regulators from all 10 states in our review use their authority to review rates to reduce the size of rate increases or to phase in rate increases over multiple years. While state regulators work to reduce the effect of rate increases on consumers, regulators from 6 states explained that increases can be necessary to maintain companies' financial solvency. Although some states are working to improve oversight of rate setting and to help ensure LTCI rate stability by adopting the more comprehensive standards and through other efforts, there are other reasons why some consumers may remain more likely to experience rate increases than others. In particular, consumers who purchased policies when there were more limited data available to inform pricing assumptions may continue to experience rate increases. Regulators from 7 states in our review told us that rate increases are mainly affecting consumers with older policies. For example, regulators from 1 state told us that there are not as many rate increases proposed for policies issued after the mid-1990s. Regulators in 5 states explained that incorrect pricing assumptions on older policies are largely responsible for rate increases. Consumers' likelihood of experiencing a rate increase also may depend on the company from which they bought their policy. 
In our review of national data on rate increases by four judgmentally selected companies that together represented 36 percent of the LTCI market in 2006, we found variation in the extent to which they have implemented increases. For example, one company that has been selling LTCI for 30 years has increased rates on multiple policies since 1995, with many of the increases ranging from 30 to 50 percent. Another company that has been in the market since the mid-1980s has increased rates on multiple policies since 1991, with increases approved on one policy totaling 70 percent. In contrast, officials from a third company that has been selling LTCI since 1975 told us that the company was implementing its first increase as of February 2008. The company reported that this increase, affecting a number of policies, will range from a more modest 8 to 12 percent. Another company that also instituted only one rate increase explained that in cases where initial pricing assumptions were wrong, the company has been willing to accept lower profit margins rather than increase rates. While past rate increases do not necessarily increase the likelihood of future rate increases, they do provide consumers with information on a company's record in having stable premiums. Finally, consumers in some states may be more likely to experience rate increases than those in other states, which officials from two companies noted may raise equity concerns. Of the six companies we spoke with, officials from every company that has instituted a rate increase told us that there is variation in the extent to which states approve proposed rate increases. For example, officials from one company told us that when requesting rate increases they have seen some states deny a request and other states approve an 80 percent increase on the same rate request with the same data supporting it. 
While some consumers may face higher increases than others, company officials also told us that they provide options to all consumers facing a rate increase, such as the option to reduce their benefits to avoid all or part of a rate increase. Our review of data on state approvals of rate increases requested by one LTCI company operating nationwide also indicated that consumers in some states may be more likely to experience rate increases. Specifically, since 1995 one company has requested over 30 increases, each of which affected consumers in 30 or more states. While the majority of states approved the full amounts requested in these cases, there was notable variation across states in 18 of the 20 cases in which the request was for an increase of over 15 percent. For example, for one policy, the company requested a 50 percent increase in 46 states, including the District of Columbia. Of those 46 states, over one quarter (14 states) either did not approve the rate increase request (2 states) or approved less than the 50 percent requested (12 states), with amounts approved ranging from 15 to 45 percent. The remaining 32 states approved the full amount requested, though at least 4 of these states phased in the amount by approving smaller rate increases over 2 years. (See fig. 1.) Variation in state approval of rate increase requests may have significant implications for consumers. In the above example, if the initial, annual premium for the policy was, for example, $2,000, consumers would see their annual premium rise by $1,000 in Colorado, a state that approved the full increase requested; increase by only $300 in New York, where a 15 percent increase was approved; and stay level in Connecticut, where the increase was not approved. 
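The dollar effects in the example above follow directly from applying each state's approved percentage to the initial annual premium. A minimal sketch of that arithmetic, using the figures from the example (names are illustrative):

```python
def increased_premium(annual_premium: float, approved_pct: float) -> float:
    """Annual premium after a state-approved percentage increase."""
    return annual_premium * (1 + approved_pct / 100)

base = 2_000  # initial annual premium from the example above
# Approved increases from the example: 50% (Colorado), 15% (New York),
# and 0% (Connecticut, where the request was not approved).
for state, pct in [("Colorado", 50), ("New York", 15), ("Connecticut", 0)]:
    print(f"{state}: ${increased_premium(base, pct):,.0f}")
# Colorado: $3,000   New York: $2,300   Connecticut: $2,000
```

On a $2,000 premium, the gap between a fully approved 50 percent increase and a denial is $1,000 per year, which is why variation in state approvals matters to consumers.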
Although state regulators in our 10-state review told us that most rate increases have occurred for policies subject to the loss ratio standard, variation in state approval of proposed rate increases may continue for policies protected by the more comprehensive standards. States may implement the standards differently, and other oversight efforts, such as the extent to which states work with companies, also affect approval of increases. The 10 states in our review have standards established by law and regulations for governing claims settlement practices. The majority of the standards, some of which apply specifically to LTCI and others that apply more broadly to various insurance products, are designed to ensure that claims settlement practices are conducted in a timely manner. Specifically, the standards are designed to ensure the timely investigation and payment of claims and prompt communication with consumers about claims. In addition to these timeliness standards, states have established other standards, such as requirements for how companies are to make benefit determinations. While the 10 states we reviewed all have standards governing claims settlement practices, the states vary in the specific standards they have adopted as well as in how they define timeliness. For example, 1 state does not have a standard that requires companies to pay claims in a timely manner. For the 9 states that do have a standard, the definition of "timely" the states use varies notably--from 5 days to 45 days, with 2 states not specifying a time frame. In addition, federal laws governing tax-qualified policies do not address the timely investigation and payment of claims or prompt communication with consumers about claims. The absence of certain standards and the variation in states' definitions of "timely" may leave consumers in some states less protected from, for example, delays in payment than consumers in other states. 
(See table 2 for key claims settlement standards adopted by the 10 states in our review and examples of the variation in standards.) The states in our review primarily use two ways to monitor companies' compliance with claims settlement standards. One way the states monitor compliance is by reviewing consumer complaints on a case-by-case basis and in the aggregate to identify trends in company practices. When responding to complaints on a case-by-case basis, regulators in some states told us that they determine whether they can work with the consumer and the company to resolve the complaint or determine whether there has been a violation of claims settlement standards that requires further action. Regulators from four states also told us that they regularly review complaint data to identify trends in company practices over time or across companies, including practices that may violate claims settlement standards. Three of these states review these data as part of broader analyses of the LTCI market during which they also review, for example, financial data and information on companies' claims settlement practices. However, regulators in three states noted that a challenge in using complaint data to identify trends is the small number of LTCI consumer complaints that their state receives. For example, information on complaints provided by one state shows that the state received only 54 LTCI complaints in 2007, and only 20 were related to claims settlement issues. State regulators told us that they expect the number of complaints to increase in the future as more consumers begin claiming benefits. The second way that states monitor company compliance with claims settlement standards is by using market conduct examinations. 
These examinations may be regularly scheduled or, if regulators find patterns in consumer complaints about a company, they may initiate an examination, which generally includes a review of the company's files for evidence of violations of claims settlement standards. Some states also coordinate market conduct examinations with other states--efforts known as multistate examinations--during which all participating states examine the claims settlement practices of designated companies. If state regulators identify violations of claims settlement standards during market conduct examinations, they may take enforcement actions, such as imposing fines or suspending the company's license. As of March 2008, 4 of the 10 states in our review reported taking enforcement actions against LTCI companies for violating claims settlement standards and 7 reported having ongoing examinations into companies' claims settlement practices. In addition to their efforts to monitor compliance with claims settlement standards, regulators from six of the states in our review reported that their state is considering or may consider adopting additional consumer protections related to claims settlement. The additional protection most frequently considered by the state regulators we interviewed is the inclusion of an independent review process, which would allow consumers appealing LTCI claims denials to have their issue reviewed by a third party independent from their insurance company without having to engage in legal action. Also, a group of representatives from NAIC member states was formed in March 2008 to consider whether to recommend developing provisions to include an independent review process in the NAIC LTCI models. Such an addition may be useful, as regulators from three states told us that they lack the authority to resolve complaints involving a question of fact, for example, when the consumer and company disagree on a factual matter regarding a consumer's eligibility for benefits. 
Further, there is some evidence to suggest that, due to errors or incomplete information, companies frequently overturn LTCI denials during the appeals process. Specifically, data provided by four companies we contacted showed that the average percentage of denials overturned was 20 percent in 2006, ranging from 7 percent in one company to 34 percent in another. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the committee may have. For future contacts regarding this statement, please contact John E. Dicken at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Kristi Peterson, Assistant Director; Krister Friday; and Rachel Moskowitz made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As the baby boom generation ages, the demand for long-term care services is likely to grow and could strain state and federal resources. The increased use of long-term care insurance (LTCI) may be a way of reducing the share of long-term care paid by state and federal governments. Oversight of LTCI is primarily the responsibility of states, but over the past 12 years, there have been federal efforts to increase the use of LTCI while also ensuring that consumers purchasing LTCI are adequately protected. Despite this oversight, concerns have been raised about both premium increases and denials of claims that may leave consumers without LTCI coverage when they begin needing care. This statement focuses on oversight of the LTCI industry's (1) rate setting practices and (2) claims settlement practices. This statement is based on findings from GAO's June 2008 report entitled Long-Term Care Insurance: Oversight of Rate Setting and Claims Settlement Practices (GAO-08-712). For that report, GAO reviewed information from the National Association of Insurance Commissioners (NAIC) on all states' rate setting standards. GAO also completed 10 state case studies on oversight of rate setting and claims settlement practices, which included structured reviews of state laws and regulations, interviews with state regulators, and reviews of state complaint information. GAO also reviewed national data on rate increases implemented by companies. Many states have made efforts to improve oversight of rate setting, though some consumers remain more likely to experience rate increases than others. NAIC estimates that since 2000 more than half of states nationwide have adopted new rate setting standards. States that adopted new standards generally moved from a single standard that was intended to prevent premium rates from being set too high to more comprehensive standards intended to enhance rate stability and provide other protections for consumers. 
Although a growing number of consumers will be protected by the more comprehensive standards going forward, as of 2006 many consumers had policies not protected by these standards. Regulators in most of the 10 states GAO reviewed said that they think the more comprehensive standards will be effective, but that more time is needed to know how well the standards will work. State regulators in GAO's review also use other standards or practices to oversee rate setting, several of which are intended to keep premium rates more stable. Despite state oversight efforts, some consumers remain more likely to experience rate increases than others. Specifically, consumers may face more risk of a rate increase depending on when they purchased their policy, from which company their policy was purchased, and which state is reviewing a proposed rate increase on their policy. Regulators in the 10 states GAO reviewed oversee claims settlement practices by monitoring consumer complaints and conducting examinations in an effort to ensure that companies are complying with standards. Claims settlement standards in these states largely focus on timely investigation and payment of claims and prompt communication with consumers, but the standards adopted and how states define timeliness vary notably across the states. Regulators told GAO that reviewing consumer complaints is one of the primary methods for monitoring companies' compliance with state standards. In addition to monitoring complaints, these regulators also said that they use examinations of company practices to identify any violations in standards that may require further action. Finally, state regulators in 6 of the 10 states in GAO's review reported that their states are considering additional protections related to claims settlement. For example, regulators in several states said that their states were considering an independent review process for consumers appealing claims denials. 
Such an addition may be useful as some regulators said that they lack authority to resolve complaints where, for example, the company and consumer disagree on a factual matter, such as a consumer's eligibility for benefits. In commenting on a draft of GAO's report issued on June 30, 2008, NAIC compiled comments from its member states. Member states said that the report was accurate but seemed to critique certain aspects of state regulation, including differences among states, and make an argument for certain reforms. The draft reported differences in states' oversight without making any conclusions or recommendations.
In 1982, the Congress enacted the Veterans' Administration and Department of Defense Health Resources Sharing and Emergency Operations Act (Public Law 97-174) to promote greater sharing of health care resources and thus achieve greater efficiencies in the DOD and VA health care systems. One of the main objectives of this legislation was to reduce the costs of operating those systems by minimizing duplication and underuse of health care resources. Under this legislation, DOD and VA entered into health care resource-sharing agreements, which allowed active-duty and eligible former service members to receive care in VA hospitals and vice versa. However, the legislation did not provide for the use of CHAMPUS funds to reimburse VA under sharing agreements, nor did it permit VA to treat dependents of active-duty and eligible former members. In a 1988 GAO report, we recommended that the Congress enact legislation specifically authorizing (1) the use of CHAMPUS funds to purchase care for CHAMPUS beneficiaries from VA medical centers and (2) the treatment of all categories of dependents at VA hospitals. Legislation accomplishing these two purposes was passed in 1989 and 1992, respectively. Under health resource-sharing agreements using CHAMPUS funds, CHAMPUS beneficiaries can receive services from VA in noncatchment areas through authority provided in sharing agreements between DOD and VA headquarters officials and in catchment areas through local agreements between military hospital commanders and VA medical center directors, subject to headquarters approval. These agreements offer DOD the potential for (1) saving CHAMPUS funds, because DOD will reimburse VA less than what it pays the private sector for similar services, and (2) improving access to services for its beneficiaries. VA can benefit by using the extra revenue generated from CHAMPUS funds to improve services to veterans. 
The information we developed for this report came from three sources: (1) a review of sharing legislation; (2) an examination of the various drafts of the CHAMPUS/Asheville VAMC sharing agreement, the DOD/VA memorandum of understanding, and related documents; and (3) discussions with DOD and VA officials responsible for the sharing program. The discussions focused on the reasons for delays in developing CHAMPUS/VA sharing agreements and in using CHAMPUS funds for sharing agreements between military hospitals and VA hospitals. We performed this work at the Office of the Assistant Secretary of Defense (Health Affairs) and VA headquarters in Washington, D.C.; the U.S. Army Medical Command (a component of the Army Surgeon General's office) in San Antonio, Texas; CHAMPUS headquarters in Aurora, Colorado; and the Asheville VAMC (because it was negotiating the first CHAMPUS/VA sharing agreement). We supplemented these visits with telephone discussions with officials from the Air Force Surgeon General's office and the Navy Bureau of Medicine (Surgeon General's office) in Washington, D.C. We did our work from August 1993 to September 1994 in accordance with generally accepted government auditing standards. Differences between DOD and VA over provisions of a memorandum of understanding and the CHAMPUS/Asheville VAMC sharing agreement prevented CHAMPUS beneficiaries from receiving services in VA hospitals in noncatchment areas through the use of CHAMPUS funds. The differences over sharing provisions arose shortly after the passage of the 1989 legislation authorizing the use of CHAMPUS funds for treatment in VA hospitals and they continued throughout most of 1993. Due in large part to the intervention of the Chairman, House Committee on Veterans' Affairs in October 1993, DOD and VA resolved their differences. 
Both parties signed (1) a sharing agreement in December 1993 to treat CHAMPUS-eligible beneficiaries in the Asheville VAMC and (2) a memorandum of understanding in February 1994 providing an overall framework for future CHAMPUS/VA health care resource-sharing agreements. The differences between DOD and VA centered mainly on whether VA's hospitals would be treated more as military hospitals or as CHAMPUS civilian providers. These differences led to many revisions of the agreement. More specifically, according to VA officials, DOD wanted VA hospitals to follow CHAMPUS procedures for seeking reimbursement by filing claims with CHAMPUS fiscal intermediaries and collecting copayments and deductibles from beneficiaries. Also, DOD wanted to use its own payment methodology, the diagnosis related group system, for reimbursing VA hospitals for the care they provided. Further, DOD wanted VA to adhere to CHAMPUS standards for utilization review and quality assurance. VA, on the other hand, wanted its hospitals to be treated as military hospitals, which have no copayments and deductibles. VA also wanted to bill the military services directly and not use fiscal intermediaries, and it wanted to bill CHAMPUS on a per diem system rather than the diagnosis related group system. In addition, VA wanted to use its own utilization management and quality review systems. During 1993, the two agencies exchanged several proposals, and, at one point, it appeared that they had reached an agreement. In fact, representatives from the Asheville VAMC and DOD signed a sharing agreement in July 1993. However, DOD subsequently rescinded the agreement because, according to DOD health officials, the person signing for DOD did not have the authority to do so. It was not until the Chairman, House Committee on Veterans' Affairs, called a meeting of DOD and VA officials in October 1993 and expressed frustration with the delays that any substantive progress occurred. 
By December 23, 1993, both DOD and VA had signed the CHAMPUS/Asheville VAMC sharing agreement, and the Asheville VAMC began treating CHAMPUS patients in February 1994. Under the agreement, the Asheville VAMC is treated as a CHAMPUS provider instead of a direct care provider; it collects CHAMPUS copayments and deductibles, and it bills through CHAMPUS fiscal intermediaries. CHAMPUS reimburses claims submitted by the Asheville VAMC for hospital inpatient charges at a 5-percent discount off the amount payable to civilian providers under the CHAMPUS diagnosis related group-based payment system; it will reimburse professional services claims at a 5-percent discount off the CHAMPUS maximum allowable charge. Although the Asheville VAMC will maintain a utilization review and quality assurance system, it will also be subject to CHAMPUS utilization review and quality assurance requirements. By February 3, 1994, both DOD and VA had signed a memorandum of understanding establishing a general policy and framework for subsequent CHAMPUS/VA health care resource-sharing agreements. To date, however, neither DOD nor VA has conducted a systemwide search to identify noncatchment areas with VA hospitals where sharing agreements can be implemented. Although a July 1994 VA directive encouraged its medical centers to take advantage of the opportunity to treat CHAMPUS beneficiaries, DOD officials told us that they will wait and see how the CHAMPUS/Asheville VAMC agreement fares before entering into additional sharing agreements. As of July 1994, DOD and VA were also developing a memorandum of understanding to establish policies and guidelines for VA to provide services to CHAMPUS beneficiaries in areas of the country where DOD has contracted with private companies to manage CHAMPUS beneficiaries' health care. This particular memorandum of understanding would permit DOD contractors to contract with VA health care facilities. 
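As a rough illustration of the reimbursement terms described above for the Asheville agreement, the following sketch applies the 5-percent discount to a claim; only the discount rate comes from the agreement, and the dollar amount is hypothetical:

```python
def discounted_reimbursement(amount_payable, discount=0.05):
    """Amount CHAMPUS reimburses: the amount otherwise payable to a
    civilian provider, less the agreed discount (5 percent for the
    Asheville VAMC agreement)."""
    return round(amount_payable * (1 - discount), 2)

# Hypothetical DRG-based inpatient amount payable to a civilian provider
print(discounted_reimbursement(10000.00))  # 9500.0

# Hypothetical professional services charge at the CHAMPUS maximum
# allowable charge
print(discounted_reimbursement(200.00))  # 190.0
```

The same formula applies to both inpatient claims (off the diagnosis related group-based amount) and professional services claims (off the CHAMPUS maximum allowable charge).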
VA signed the memorandum of understanding in May 1994 and sent it to DOD for review. As of July 1994, the Office of the Assistant Secretary of Defense (Health Affairs) was reviewing it. In addition to the delay in implementing CHAMPUS/VA sharing agreements in noncatchment areas, such as Asheville, North Carolina, military hospital commanders in DOD catchment areas have not proposed using CHAMPUS funds for sharing agreements between their hospitals and VA hospitals. The commanders have not proposed using CHAMPUS funds for buying VA services through sharing agreements because they have been unclear about the interagency sharing program and their roles and authorities under it. The military services allocate CHAMPUS funds to military hospital commanders who are responsible for managing the care of all CHAMPUS beneficiaries in their catchment areas. The Army began allocating CHAMPUS funds to its hospitals in fiscal year 1992 and, in fiscal year 1993, expanded the allocations to all its U.S. hospitals except for three in California and one in Hawaii. In fiscal year 1994, Army hospitals were allocated about $540 million in CHAMPUS funds. The Air Force and Navy began allocating CHAMPUS funds to their hospitals in fiscal year 1994 when the Air Force allocated $476 million and the Navy allocated $356 million. Hospital commanders may use these funds to enhance and expand services available to CHAMPUS beneficiaries in their hospitals or to purchase services from outside providers, including sharing with VA. The intent is to use CHAMPUS money in the most cost-effective manner. However, all three services told us that their hospital commanders have not used any CHAMPUS funds for sharing agreements with VA. Further, as in noncatchment areas, DOD and VA have not done a comprehensive search of locations where sharing agreements using CHAMPUS funds can be implemented. 
Officials from the military services and the Office of the Assistant Secretary of Defense (Health Affairs) stated that military hospital commanders have the authority to submit proposals for using CHAMPUS funds for sharing agreements between their hospitals and VA hospitals if they so choose. However, these officials also said that, while no restrictions exist against using CHAMPUS funds for such sharing, neither do instructions exist for using CHAMPUS funds for such sharing. Further, these officials stated that military hospital commanders do not understand that they can propose using CHAMPUS funds for sharing agreements. Both DOD and VA can benefit from sharing agreements between CHAMPUS and VA hospitals and also between military and VA hospitals. Implementation of the sharing agreements, however, was delayed by the inability of DOD and VA officials to agree on sharing provisions and procedures. Also, DOD and VA have not engaged in a systemwide identification of sharing opportunities using CHAMPUS funds. With the overall memorandum of understanding in place and the first CHAMPUS/VA sharing agreement signed, the necessary structure now exists for further sharing agreements. To take advantage of sharing benefits, we believe DOD must make its hospital commanders more aware of their authority to propose using CHAMPUS funds to buy VA services. Additionally, DOD should provide guidance to military hospital commanders on how to develop and implement sharing agreements. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) and the military services to fully inform and explain to military hospital commanders the authority to propose using CHAMPUS funds for sharing agreements with VA and their roles and authorities under this program, to provide specific instructions on developing and implementing such agreements, and to identify sharing opportunities in which CHAMPUS funds can be used to buy available VA services. 
Similarly, we recommend that the Secretary of Veterans Affairs direct VA medical center directors to actively identify available VA services that may be candidates for sharing agreements with DOD and to communicate such information to the relevant DOD hospital commander. DOD and VA provided written comments on a draft of this report (apps. I and II). DOD agrees that the sharing of health care resources between DOD and VA is a worthwhile approach that can result in overall efficiencies for both agencies. DOD does not agree, however, that disagreements between DOD and VA have delayed the implementation of sharing agreements. Following are other DOD comments:
- The progress of the Asheville agreement will be reviewed, and possible additional sharing opportunities will be discussed, in October 1994 by the VA/DOD Health Care Resources Sharing Policy and Operations Subcommittee;
- Guidance is being developed for issuance to the military services to evaluate the possibility and feasibility of using and sharing medical resources when it is cost-effective to do so; and
- A new DOD Instruction on the VA/DOD Health Care Resources Sharing Program is being developed, and its issuance is anticipated by the end of fiscal year 1995.

In our view, the disagreements between DOD and VA did delay the implementation of sharing agreements using CHAMPUS funds. These disagreements, as described in our report, are well documented and did not get resolved until after the Chairman of the House Committee on Veterans' Affairs intervened. We believe that the DOD actions listed above are good steps. However, until they are fully implemented, we believe our recommendations remain valid. To date, neither military hospital commanders nor regional lead agents have been actively pursuing sharing agreements because, as they stated to us, they are uncertain about their roles and authorities under the CHAMPUS sharing program. 
They believe they need guidance on the requirements pertaining to CHAMPUS sharing agreements. VA agreed with our overall conclusion that VA and DOD would benefit from sharing agreements using CHAMPUS funds. However, VA disagreed with our draft report recommendation that the VA Secretary direct VA medical center directors to identify sharing agreements in which CHAMPUS funds can be used to buy available VA services. In VA's view, it should be DOD's--not VA's--responsibility to prioritize the needs of CHAMPUS beneficiaries. Further, VA stated that its July 1994 policy directive strongly encourages its medical centers to take advantage of the opportunity to treat CHAMPUS beneficiaries under sharing authority in situations where capacity is available and service to veterans can be enhanced. We recognize that DOD has responsibility for determining CHAMPUS priorities and needs. Similarly, we recognize that the recent VA policy directive is a strong positive indicator of its commitment toward encouraging sharing with DOD using CHAMPUS funds. The intent of our recommendation was to have medical center directors actively identify services that are available to DOD and to communicate such information to the relevant DOD hospital commander. We have clarified our recommendation along these lines. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Secretary of Defense; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. If you have any questions concerning the contents of this report, please call me at (202) 512-7101. Other major contributors to this report were Stephen P. Backhus, Assistant Director, Robert P. Pickering, Senior Analyst, and Donald C. Hahn, Advisor. 
Pursuant to a congressional request, GAO reviewed the extent to which Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) funds are being used for health care resource-sharing agreements between the Departments of Veterans Affairs (VA) and Defense (DOD). GAO found that: (1) in February 1994, after nearly 3 years of negotiation, VA and DOD agreed on a framework for VA to treat CHAMPUS-eligible beneficiaries and receive reimbursement from CHAMPUS funds; (2) implementation of CHAMPUS/VA sharing agreements has been delayed because of disagreements between DOD and VA over VA hospital requirements; (3) neither DOD nor VA has conducted a systemwide search to identify additional opportunities for sharing agreements; (4) potential sharing opportunities have been missed because DOD hospital commanders have not used CHAMPUS funds for sharing agreements between their hospitals and VA hospitals and are unclear about their authority to do so; and (5) DOD needs to clarify the authority of DOD hospital commanders to propose sharing agreements using CHAMPUS funds, and it needs to provide instructions on developing and implementing such agreements.
The mission of NWS--an agency within the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA)--is to provide weather, water, and climate forecasts and warnings for the United States, its territories, and its adjacent waters and oceans, in order to protect life and property and to enhance the national economy. NWS is the official source of aviation- and marine-related weather forecasts and warnings, as well as warnings about life-threatening weather situations. In the 1980s and 1990s, NWS undertook a nationwide modernization program to develop new systems and technologies and to consolidate its field office structure. The goals of the modernization program were to achieve more uniform weather services across the nation, improve forecasts, provide more reliable detection and prediction of severe weather and flooding, permit more cost-effective operations, and achieve higher productivity. The weather observing systems (including radars, satellites, and ground-based sensors) and data processing systems that currently support NWS operations were developed and deployed under the modernization program. During this period, NWS consolidated over 250 large and small weather service offices into the office structure currently in use. The coordinated activities of weather facilities throughout the United States allow NWS to deliver a broad spectrum of climate, weather, water, and space weather services. These facilities include weather forecast offices, river forecast centers, national centers, and aviation center weather service units. The functions of these facilities are described below.
- 122 weather forecast offices are responsible for providing a wide variety of weather, water, and climate services for their local county warning areas, including advisories, warnings, and forecasts (see fig. 1 for the current location of weather forecast offices).
- 13 river forecast centers provide river, stream, and reservoir information to a wide variety of government and commercial users as well as to local weather forecast offices for use in flood forecasts and warnings.
- 9 national centers constitute the National Centers for Environmental Prediction, which provide nationwide computer model output and manual forecast information to all NWS field offices and to a wide variety of government and commercial users. These centers include the Environmental Modeling Center, Storm Prediction Center, Tropical Prediction Center, Climate Prediction Center, Aviation Weather Center, and Space Environment Center, among others.
- 21 aviation center weather service units, which are co-located with key Federal Aviation Administration (FAA) air traffic control centers across the nation, provide meteorological support to air traffic controllers.

To fulfill its mission, NWS relies on a national infrastructure of systems and technologies to gather and process data from the land, sea, and air. NWS collects data from many sources, including ground-based Automated Surface Observing Systems (ASOS), Next Generation Weather Radars (NEXRAD), and operational environmental satellites. These data are integrated by advanced data processing workstations--called Advanced Weather Interactive Processing Systems (AWIPS)--used by meteorologists to issue local forecasts and warnings. The data are also fed into sophisticated computer models running on high-speed supercomputers, which are then used to help develop forecasts and warnings. Figure 2 depicts the integration of the various systems and technologies and is followed by a description of each. NEXRAD is a Doppler radar system that detects, tracks, and determines the intensity of storms and other areas of precipitation, determines wind velocities in and around detected storm events, and generates data and imagery to help forecasters distinguish hazards such as severe thunderstorms and tornadoes. 
It also provides information about heavy precipitation that leads to warnings about flash floods and heavy snow. The NEXRAD network provides data to other government and commercial users and to the general public via the Internet. The NEXRAD network is made up of 158 operational radars and 8 nonoperational radars that are used for training and testing. Of these, NWS operates 120 radars, the Air Force operates 26 radars, and the FAA operates 12 radars. These radars are located throughout the continental United States and in 17 locations outside the continental United States. Figure 3 shows a NEXRAD radar tower. ASOS is a system of sensors, computers, display units, and communications equipment that automates the ground-based observation and dissemination of weather information nationwide. This system collects data on temperature and dew point, visibility, wind direction and speed, pressure, cloud height and amount, and types and amounts of precipitation. ASOS supports weather forecast activities and aviation operations, as well as the needs of research communities that study weather, water, and climate. Figure 4 is a picture of the system, while figure 5 depicts a configuration of ASOS sensors and describes their functions. There are currently 1,002 ASOS units deployed across the United States, with NWS, FAA, and the Department of Defense (DOD) operating 313, 571, and 118 units, respectively. Although NWS does not own or operate satellites, geostationary and polar- orbiting environmental satellite programs are key sources of data for its operations. NOAA manages the Geostationary Operational Environmental Satellite (GOES) system and the Polar-orbiting Operational Environmental Satellite (POES) system. In addition, DOD operates a different polar satellite program called the Defense Meteorological Satellite Program (DMSP). 
These satellite systems continuously collect environmental data about the Earth's atmosphere, surface, cloud cover, and electromagnetic environment. These data are used by meteorologists to develop weather forecasts and other services, and are critical to the early and reliable prediction of severe storms, such as tornadoes and hurricanes. Geostationary satellites orbit above the Earth's surface at the same speed as the Earth rotates, so that each satellite remains over the same location on Earth. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 6). To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years. Three satellites, GOES-10, GOES-11, and GOES-12, are currently in orbit. Both GOES-10 and GOES-12 are operational satellites, while GOES-11 is in an on-orbit storage mode, serving as a backup for the other two satellites should they experience any degradation in service. The first in the next series of satellites, GOES-13, was launched in May 2006, and the others in the series, GOES-O and GOES-P, are planned for launch over the next few years. In addition, NOAA is planning a future generation of satellites, known as the GOES-R series, which are planned for launch beginning in 2014. Unlike the GOES satellites, which maintain a fixed position above the Earth, polar satellites constantly circle the Earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the Earth rotates beneath it, each satellite views the entire Earth's surface twice a day. Currently, there are four operational polar-orbiting satellites--two are POES satellites and two are DMSP satellites. These satellites are positioned so that they can observe the Earth in early morning, morning, and afternoon polar orbits. 
Together, they ensure that for any region of the Earth, the data are generally no more than 6 hours old. Figure 7 illustrates the current configuration of operational polar satellites. NOAA and DOD plan to continue to launch remaining satellites in the POES and DMSP programs, with final launches scheduled for 2007 and 2011, respectively. In addition, NOAA, DOD, and the National Aeronautics and Space Administration are planning to replace the POES and DMSP systems with a state-of-the-art environment monitoring satellite system called the National Polar-orbiting Operational Environmental Satellite System (NPOESS). In recent years, we reported on a variety of issues affecting this major system acquisition. AWIPS is a computer system that integrates and displays all hydrometeorological data at NWS field offices. This system integrates data from NEXRAD, ASOS, GOES, and other sources to produce rich graphical displays to aid forecaster analysis and decision making. AWIPS is used to disseminate weather information to the national centers, weather offices, the media, and other federal, state, and local government agencies. NWS deployed hardware and software for this system to weather forecast offices, river forecast centers, and national centers throughout the United States between 1996 and 1999. As a software-intensive system, AWIPS regularly receives software upgrades called "builds." The most recent build, called Operational Build 6, is currently being deployed. NWS officials estimated that the nationwide deployment of this build should be completed by July 2006. Figure 8 shows a standard AWIPS workstation. Numerical models are advanced software programs that assimilate data from satellites and ground-based observing systems and provide short- and long-term weather pattern predictions. Meteorologists typically use a combination of models and their own experience to develop local forecasts and warnings. 
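The polar-orbit coverage figures described earlier can be sanity-checked with some back-of-the-envelope arithmetic. This is a simplification that assumes the eight daily global views are spread roughly evenly in time; the actual constellation geometry is more uneven, which is why the stated bound is the looser 6 hours:

```python
satellites = 4                   # 2 POES + 2 DMSP, per the text
views_per_satellite_per_day = 2  # each views the whole Earth twice a day
orbits_per_day = 14              # about 14 orbits per satellite per day

orbital_period_hours = 24 / orbits_per_day
global_views_per_day = satellites * views_per_satellite_per_day
average_gap_hours = 24 / global_views_per_day

print(round(orbital_period_hours, 1))  # 1.7 hours per orbit
print(average_gap_hours)               # 3.0 hours between global views
```

An average gap of about 3 hours between global views is consistent with data for any region being, in the worst case, no more than roughly 6 hours old.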
Numerical weather models are also a critical source for forecasting weather up to 7 days in advance and forecasting long-term climate changes. One of NWS's National Centers for Environmental Prediction, the Environmental Modeling Center, is the primary developer of these models within NWS and is responsible for making new and improved models available to regional forecasters via the AWIPS system. Figure 9 depicts model output as shown on an AWIPS workstation. NWS leases high-performance supercomputers to execute numerical calculations supporting weather prediction and climate modeling. In 2002, NWS awarded a $227 million contract to lease high-performance supercomputers to run its environmental models from 2002 through September 2011. Included in this contract are an operational supercomputer used to run numerical weather models, an identical backup supercomputer located at a different site, and a research and development supercomputer on which researchers can test out new analyses and models. The supercomputer lease contract allows NWS to exercise options to upgrade the processing capabilities of the operational supercomputer. During the 1990s, we issued a series of reports on NWS modernization systems and made recommendations to improve them. For example, early in the AWIPS acquisition, we reported that the respective roles and responsibilities of the contractor and government were not clear and that a structured system development environment had not been established. We made recommendations to correct these shortfalls before the system design was approved. We also reported that the ASOS system was not meeting specifications or user needs, and recommended that NWS define and prioritize system corrections and enhancements. On NEXRAD, we reported that selected units were falling short of availability requirements and recommended that NWS analyze and monitor system availability on a site-specific basis and correct any shortfalls. 
Because of such concerns, we identified NWS modernization as a high-risk information technology initiative in 1995, 1997, and 1999. NWS took a number of actions to address our recommendations and to resolve system risks. For example, NWS enhanced its AWIPS system development processes, prioritized its ASOS enhancements, and improved the availability of its NEXRAD systems. In 2001, because of NWS's progress in addressing key concerns and in deploying and using the AWIPS system--the final component of its modernization program--we removed the modernization from our high-risk list. In accordance with federal legislation requiring federal managers to focus more directly on program results, NWS established short- and long-term performance goals and regularly tracks its actual performance in meeting these goals. Specifically, NWS established 14 different performance measures--such as lead time for flash floods and false-alarm rates for tornado warnings. It also established 5-year goals for improving its performance in each of the 14 performance measures through 2011. For example, the agency plans to increase its lead time on tornado warnings from 13 minutes in 2005 to 15 minutes in 2011. Table 1 identifies NWS's 14 performance measures, selected goals, and performance against those goals, when available. Appendix II provides additional information on NWS's performance goals. NWS periodically adjusts its performance goals as its assumptions change. After reviewing actual results from previous fiscal years and its assumptions about the future, in January 2006, NWS adjusted eight of its 5-year performance goals to make more realistic predictions for performance for the next several years. Specifically, NWS made six performance goals less stringent and two goals more stringent. 
The six goals that were made less stringent--and the reasons for the changes--are the following:
- Tornado warning lead time: NWS changed its 2011 goal from 17 to 15 minutes of warning because of delays in deploying new technologies on NEXRAD radars and a lack of access to FAA radar data.
- Tornado warning false-alarm rate: NWS changed its 2011 goal from a 70 to 74 percent false-alarm rate for the same reasons listed above.
- Flash flood warning accuracy: NWS changed its 2011 goal from 91 to 90 percent accuracy after delays on two different systems in 2004, 2005, and 2006.
- Marine wind speed accuracy: NWS changed its 2011 goal from 67 to 59 percent accuracy after experiencing the delay of marine models and datasets, a deficiency of shallow water wave guidance, and a reduction in funds for training.
- Marine wave height accuracy: NWS changed its 2011 goal from 77 to 69 percent accuracy for the same reasons identified above for marine wind speed accuracy.
- Aviation instrument flight rule ceiling/visibility: NWS changed its goal from 48 to 47 percent accuracy in 2006 because of a system delay and a reduction in funds for training. Goals for 2007 through 2011 remained the same.

Additionally, the following two goals were made more stringent:
- Aviation instrument flight rule ceiling/visibility false-alarm rate: NWS reduced its expected false-alarm rate from 68 percent to 65 percent for 2006 because of better than anticipated results from the AWIPS aviation forecast preparation system and an aviation learning training course. Goals for the remaining years in the 5-year plan, 2007 to 2011, remained the same.
- Hurricane track forecasts: NWS changed its 2011 hurricane track forecast goal from 123 to 106 nautical miles after trends in observed data from 1987 to 2004 showed that this measure was improving more quickly than expected.

NWS is positioning itself to provide better service through system and technology upgrades. 
Over the next few years, the agency plans to upgrade and improve its systems, predictive weather models, and computational abilities, and it appropriately links these upgrades to its performance goals. For example, planned improvements in NEXRAD technology are expected to help improve the lead times for tornado warnings, while AWIPS software enhancements are expected to help improve the accuracy of marine weather forecasts. The agency anticipates continued steady improvement in its forecast accuracy as it obtains better observation data, as computational resources are increased, and as scientists are better able to implement advanced modeling and data assimilation techniques. Over the next few years, NWS has plans to spend over $315 million to upgrade its systems, models, and computational abilities. Some planned upgrades are to maintain the weather system infrastructure (either to replace obsolete and difficult-to-maintain parts or to refresh aging hardware and workstations), while others are to take advantage of new technologies. Often, the infrastructure upgrades allow NWS to take advantage of newer technologies. For example, the replacement of an aging and proprietary NEXRAD subsystem is expected to allow the agency to implement enhancements in image resolution. Key planned upgrades for each of NWS's major systems and technologies are listed below. NWS has initiated two major NEXRAD improvements. It is currently replacing an outdated subsystem--the radar data acquisition subsystem-- with current hardware that is compliant with open system standards. This new hardware is expected to enable important software upgrades. In addition, NWS plans to add a new technology called dual polarization to this subsystem, which will provide more accurate rainfall estimates and differentiate various forms of precipitation. Table 2 shows the details of these two projects. NWS has seven ongoing and planned improvements for its ASOS system (see table 3). 
Many of these improvements are to replace aging parts and are expected to make the system more reliable and maintainable. Key subsystem replacements--including the all-weather precipitation accumulation gauge--are also expected to result in more accurate measurements. Selected AWIPS system components have become obsolete, and NWS is replacing these components. In 2001, NWS began to migrate the existing Unix-based systems to a Linux system to reduce its dependence on any particular hardware platform. NWS expects this project, combined with upgraded information technology, to delay the need for a major information technology replacement. Table 4 shows planned improvements for the AWIPS system. NWS plans to continue to improve its modeling capabilities by (1) better assimilating data from improved observation systems such as ASOS, NEXRAD, and environmental satellites; (2) developing and implementing an advanced global forecasting model (called the Weather Research and Forecast model) to allow forecasters to look at a larger domain area; (3) implementing a hurricane weather research forecast model; and (4) improving ensemble modeling, which involves running a single model multiple times with slight variations on a variable to get a probability that a given forecast is likely to occur. NWS expects to spend approximately $12.7 million in fiscal year 2006 to improve its weather and real-time ocean models. NWS is planning to exercise an option within its existing supercomputer lease to upgrade its computing capabilities to allow more advanced numerical weather and climate prediction modeling. In accordance with federal legislation and policy, NWS's planned upgrades to its systems and technologies are expected to result in improved service. The Government Performance and Results Act calls for federal managers to develop strategic performance goals and to focus program activities on obtaining results. 
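The ensemble modeling technique described above lends itself to a brief illustration. The toy model and function names below are hypothetical and deliberately simplified--real NWS ensembles run full numerical weather prediction models on supercomputers--but the sketch captures the core idea: perturb an input slightly, rerun the same model many times, and treat the fraction of runs in which an event occurs as its probability.

```python
import random

def toy_forecast_model(initial_temp, hours=24):
    """Stand-in 'model': march a temperature forward with a fixed
    0.1-degree-per-hour trend. Real forecast models are far richer."""
    temp = initial_temp
    for _ in range(hours):
        temp += 0.1
    return temp

def ensemble_probability(observed_temp, threshold, members=100, spread=0.5, seed=42):
    """Run the same model 'members' times, each from a slightly
    perturbed initial condition, and return the fraction of runs in
    which the forecast exceeds 'threshold' -- the ensemble's
    probability estimate for that event."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(members):
        perturbed = observed_temp + rng.uniform(-spread, spread)
        if toy_forecast_model(perturbed) > threshold:
            exceed += 1
    return exceed / members

prob = ensemble_probability(observed_temp=20.0, threshold=22.0)
print(f"Estimated probability the forecast exceeds 22.0 degrees: {prob:.2f}")
```

Because small input perturbations stand in for observational uncertainty, the spread of ensemble outcomes conveys how confident forecasters can be in a given prediction, which a single deterministic run cannot.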
Also, the Office of Management and Budget (OMB) requires agencies to justify major investments by showing how they support performance goals. NOAA and NWS implement the act and OMB guidance by requiring project officials to describe how planned system and technology upgrades are linked to the agency's programmatic priorities and performance measures. Further, in its annual performance plans, NOAA reports on expected NWS service improvements and identifies the technologies and systems that are expected to help improve them. NWS service improvements are often expected through a combination of system and technology improvements. For example, NWS expects to reduce its average error in forecasting a hurricane's path by approximately 20 nautical miles between 2005 and 2011 through a combination of upgrades to observation systems, better hurricane forecast models, enhancements to the computer infrastructure, and research that will be transferred to NWS forecast operations. Also, NWS expects tornado warning lead times to increase from 13 to 15 minutes by the end of fiscal year 2008 after NWS completes retrofits to the NEXRAD systems, realizes the benefits of AWIPS software enhancements, and implements new training techniques. Table 5 provides a summary of how system upgrades are expected to result in service improvements. NWS provides employee training courses that are expected to help improve forecast service performance, but the agency's process for selecting this training lacks sufficient oversight. Each year, NWS identifies its training needs and develops this training in order to enhance its services. NWS develops an annual training and education plan identifying planned training, how this training supports key criteria, and associated costs for the upcoming year. To develop the annual plan, program area teams, with representatives from NWS headquarters and field offices, prioritize and submit training recommendations. 
Each submission identifies how the training will support up to eight different criteria-- including the course's effect on NWS forecasting performance measures, NOAA strategic goals, ensuring operational continuity, and providing customer outreach. These submissions are screened by a training and education team, and depending on available resources, selected for development (if not pre-existing) and implementation. The planned training courses are then delivered through a variety of means, including courses at the NWS training center, online training, and training at local forecast offices. In its 2006 training process, 25 program area teams identified 134 training needs, such as training on how to more effectively use AWIPS, training on an advanced weather simulator, and training on maintaining ASOS systems. Given an expected funding level of $6.1 million, the training and education team then selected 68 of these training needs for implementation. NWS later identified another 5 training needs and allocated an additional $1.25 million to its training budget. In total, NWS funded 73 of 139 training courses. The majority of planned training courses demonstrate a clear link to expected forecasting service improvements. For example, NWS developed a weather event simulator to help forecasters improve their tornado warning lead times. In addition, AWIPS-related training courses are expected to help improve each of the agency's 14 forecasting performance measures by teaching forecasters advanced techniques in using the integrated data processing workstations. However, NWS's process for selecting which training courses to implement lacks sufficient oversight. In justifying training courses, program officials routinely link proposed courses to NWS forecast performance measures. 
Specifically, in 2006, 131 of the 134 original training needs were linked to expectations for improved forecasting performance--including training on cardiopulmonary resuscitation, spill prevention, leadership, systems security, and equal employment opportunity/diversity. The training selection process did not validate or question that these courses would improve tornado warning lead times or hurricane warning accuracy. Although these courses are important and likely justifiable on other bases, the overuse of this justification undermines the distinctions among training courses and the credibility of the course selection process. Additionally, because the training selection process does not clearly distinguish among courses, it is difficult to determine whether sufficient funds are dedicated to the courses that are expected to improve performance. NWS training officials acknowledged that some of the course justifications seem questionable and that more needs to be done to strengthen the training selection process to ensure oversight of the justification and prioritization process. They noted that the training division plans to improve the training selection process over the next few years by adding a more systematic worker-focused assessment of training needs, better prioritizing strategic and organizational needs, and initiating post-implementation reviews. However, until NWS establishes a training selection process that uses reliable justification and results in understandable decisions, NWS risks selecting courses that do not most effectively support its training goals. NWS plans to develop a prototype of a new concept of operations--an effort that could affect its national office configuration, including the location and functions of its offices nationwide. However, NWS has yet to determine many details about the impact of any proposed changes on NWS forecast services, staffing, and budget.
Further, NWS has not yet identified key activities, timelines, or measures for evaluating the concept of operations prototype. As a result, it is not evident that NWS will collect the information it needs on the impact and benefits of any office restructuring in order to make sound and cost-effective decisions. According to agency officials, over the last several years, NWS's corporate board noted that the constrained budget, high labor costs, difficulty in training and developing its employees, and a lack of flexibility in how the agency was operating were making it more difficult for the agency to continue to perform its mission. In August 2005, the board chartered a working group to evaluate the roles, responsibilities, and functions of weather offices nationwide and to make a proposal for a new concept of operations. The group was given a set of guiding principles, including that the proposed concept should (1) be cost effective, (2) ensure that there would be no degradation of service, (3) ensure that weather services nationwide were equitable, and (4) not reduce the number of forecast offices nationwide. In addition, the working group was instructed not to address grade structure, staffing levels, office sizes, or overall organizational chart structure. The group gathered input from various agency stakeholders and other partners within NOAA and considered multiple alternatives. It dismissed all but one of the alternative concepts because the others were not consistent with the guiding principles. In its December 2005 proposal, the working group proposed a "clustered peer" office plan designed to redistribute some functions among various offices, particularly when there is a high-intensity weather event. An agency official explained that each weather forecast office currently has a fixed geographic area for which it provides forecasts.
If a severe weather event occurs, forecast offices ask their staff to work overtime so that there are enough personnel available to do both the normal forecasting work and the watches and warnings required by the severe event. If a local office becomes unable to provide forecast and warning functions, an adjacent office will temporarily assume those duties by calling in extra personnel to handle the workload of both offices. Alternatively, under a clustered peer office structure, several offices with the same type of weather and warning responsibilities, climate, and customers would be grouped in a cluster. Offices within a cluster would share the workload associated with routine services, such as 7-day forecasts. During a high-impact weather event--such as a severe storm, flood, or wildfire--the offices would redistribute the workload to allow the impacted office to focus solely on the event, while the other offices in the cluster would pick up the impacted office's routine services. In this way, peer offices could help supplement staffing needs and the workload across multiple offices could be more efficiently balanced. After receiving this proposal, the NWS corporate board chartered another team to develop a prototype of the clustered peer idea to evaluate the benefits of this approach. The team plans to recommend the scope of the prototype and select several weather offices for the prototype demonstration by the end of September 2006. It also plans to conduct the prototype demonstration in fiscal years 2007 and 2008. Initial prototype results are due in fiscal year 2009. Many details about the impact of the changes on NWS forecast services, staffing, and budget have yet to be determined. 
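The workload redistribution at the heart of the clustered peer concept can be sketched in a few lines. Everything below is hypothetical--office names, task labels, and the round-robin reassignment rule are illustrative choices, not drawn from NWS's plans--but it shows how peers in a cluster could absorb an impacted office's routine services:

```python
# Hypothetical sketch of the "clustered peer" workload idea; office
# names and task labels are illustrative, not actual NWS offices.

def redistribute(cluster, routine_tasks, impacted=None):
    """Copy each office's routine assignments; if one office is hit by
    a high-impact event, spread its routine work round-robin across the
    peer offices so it can focus solely on the event."""
    assignments = {office: list(tasks) for office, tasks in routine_tasks.items()}
    if impacted is not None and impacted in assignments:
        displaced = assignments.pop(impacted)
        peers = [office for office in cluster if office != impacted]
        for i, task in enumerate(displaced):
            assignments[peers[i % len(peers)]].append(task)
        assignments[impacted] = ["event watches/warnings"]
    return assignments

cluster = ["Office A", "Office B", "Office C"]
routine = {office: [f"{office} 7-day forecast"] for office in cluster}

normal = redistribute(cluster, routine)  # quiet weather: no change
storm = redistribute(cluster, routine, impacted="Office A")
```

In the quiet-weather case every office keeps its own routine forecast; in the storm case the impacted office carries only event duties while its peers pick up its 7-day forecast, which is the efficiency the proposal describes.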
Sound decision making on moving forward with a new concept of operations will require data on the relative costs, benefits, and impacts of such a change, but at this time the implications of NWS's revised concept of operations on staffing, budget, and forecasting services are unknown. The charter for the team developing the prototype for the new concept of operations calls for it to identify metrics for evaluating the prototype and to define mechanisms for obtaining customer feedback. However, the team has not yet established a plan or timeline for developing these metrics or mechanisms. Further, it is not yet evident that these metrics will include the relative costs, benefits, or impacts of this change or which customers will be offered the opportunity to provide feedback. This is not consistent with the last time NWS undertook a major change to its concept of operations--during its modernization in the mid-1990s. During that effort, the agency developed a detailed process for identifying impacts and ensuring that there would be no degradation of service (see app. III for a summary of this prior process). Until it establishes plans, timelines, and metrics for evaluating its prototype of a revised concept of operations, NWS is not able to ensure that it is on track to gather the information it needs to fully evaluate the merits of the revised concept of operations and to make sound and informed decisions on a new office configuration. NWS is appropriately positioning itself to improve its forecasting services by upgrading its systems and technologies and by developing training to enhance the performance of its professional staff. Over the next few years, NWS expects to improve all of its 14 performance measures--ranging from seasonal temperature forecasts, to severe weather warnings, to specialized aviation and marine weather warnings. 
However, it is not clear that NWS is consistently choosing the best training courses to improve its performance because the training selection process does not rigorously review the training justifications. Recognizing that high labor costs, difficulty in training and developing its employees, and a constrained budget environment make it difficult to fulfill its mission, NWS is evaluating changes to its office structure and operations in order to achieve greater productivity and efficiency. It plans to develop a prototype of a new concept of operations that entails sharing responsibilities among a cluster of offices. Because it is early in the prototype process, the implications of these plans on staffing, budget, and forecasting services are unknown at this time. However, NWS does not yet have detailed plans, timelines, or measures for assessing the prototype. As a result, NWS risks not gathering the information it needs to make an informed decision in moving forward with a new office operational structure. To improve NWS's ability to achieve planned service improvements, we recommend that the Secretary of Commerce direct the Assistant Administrator for Weather Services to take the following three actions: require training officials to validate the accuracy of training justifications; establish key activities, timelines, and measures for evaluating the "clustered peer" office structure prototype before beginning the prototype; and ensure that plans for evaluating the prototype address the impact of any changes on budget, staffing, and services. We received written comments on a draft of this report from the Department of Commerce (see app. IV). In the department's response, the Deputy Secretary of Commerce agreed with our recommendations and identified plans for implementing them. Specifically, the department noted that it plans to revise its training process to ensure limited training resources continue to target improvements in NWS performance. 
The department also noted that the concept of operations working team is developing a plan for the prototype and stated that this plan will include the items we recommended. The department also provided technical corrections, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce, the Director of the Office of Management and Budget, and other interested congressional committees. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9286 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were (1) to evaluate the National Weather Service's (NWS) efforts to achieve improvements in the delivery of its services through upgrades to its systems, models, and computational abilities; (2) to assess the agency's plans to achieve improvements in the delivery of its services through the training and professional development of its employees; and (3) to evaluate the agency's plans for revising its nationwide office configuration and the implications of these plans on local forecasting services, staffing, and budgets. To evaluate NWS's efforts to achieve service improvements through system and technology upgrades, we reviewed the agency's system development plans and discussed system-specific plans with NWS program officials. We assessed system-specific documentation justifying system upgrades to evaluate whether these upgrades were linked to anticipated improvements in performance goals. We also evaluated NWS performance goals and identified the extent to which anticipated service improvements were tied to system and technology upgrades.
We interviewed National Oceanic and Atmospheric Administration (NOAA) and NWS officials to obtain clarification on agency plans and goals. To assess NWS's plans for achieving service improvements through the training and professional development of its employees, we reviewed NWS policies and plans for training and professional development. We reviewed the agency's service performance goals and assessed the link between those goals and planned and expected training and professional development activities. We also interviewed NWS officials responsible for training and professional development activities. To evaluate the status and potential impact of any plans to revise the national office configuration, we assessed studies of options for changing the NWS concept of operations. We also reviewed the charter for the prototype and interviewed key NWS officials to determine the possible effect of these plans on local forecasting services, staffing, and budgets and to identify plans for determining the implications of changing to a new concept of operations. We performed our work at NWS headquarters in the Washington, D.C., metropolitan area, and at geographically diverse NOAA and NWS weather forecast offices in Denver and in Tampa, and at the NWS National Hurricane Center in Miami. We performed our work from October 2005 to June 2006 in accordance with generally accepted government auditing standards. Hurricane track forecast accuracy is a measure of the difference between the projected locations of the center of storms and the actual locations, in nautical miles, for the Atlantic Basin. In the 1980s, NWS began a nationwide modernization program to upgrade weather observing systems such as satellites and radars, to design and develop advanced computer workstations for forecasters, and to reorganize its field office structure.
The goals of the modernization were to achieve more uniform weather services across the nation, improve forecasting, provide more reliable detection and prediction of severe weather and flooding, achieve higher productivity, and permit more cost-effective operations through staff and office reductions. NWS's plans for revising its office structure were governed by the Weather Service Modernization Act, which required that, prior to closing a field office, the Secretary of Commerce certify that there was no degradation of service. NWS developed a plan for complying with the law. To identify community concerns regarding modernization changes and to study the potential for degradation of service, the Department of Commerce published a notice in the Federal Register requesting comments on service areas where it was believed that services could be degraded by planned modernization changes. The department also contracted for an independent assessment by the National Research Council on whether weather services would be degraded by the proposed changes. As part of this assessment, the contractor developed criteria to identify whether service would be degraded in certain areas of concern. The department then applied these criteria to areas of concern to determine whether services would be degraded or not. Before closing any office, the Secretary of Commerce certified that services would not be degraded. David A. Powner, (202) 512-9286 or [email protected]. In addition to the contact named above, William Carrigg, Barbara Collier, Neil Doherty, Kathleen S. Lovett, Colleen Phillips, Karen Talley, and Jessica Waselkow made key contributions to this report.
To provide accurate and timely weather forecasts, the National Weather Service (NWS) uses systems, technologies, and manual processes to collect, process, and disseminate weather data to its nationwide network of field offices and centers. After completing a major modernization program in the 1990s, NWS is seeking to upgrade its systems with the goal of improving its forecasting abilities, and it is considering changing how its nationwide office structure operates in order to enhance efficiency. GAO was asked to (1) evaluate NWS's efforts to achieve improvements in the delivery of its services through system and technology upgrades, (2) assess agency plans to achieve service improvements through training its employees, and (3) evaluate agency plans to revise its nationwide office configuration and the implications of these plans on local forecasting services, staffing, and budgets. NWS is positioning itself to provide better service through over $315 million in planned upgrades to its systems and technologies. In annual plans, the agency links expected improvements in its service performance measures with the technologies and systems expected to improve them. For example, NWS expects to reduce the average error in its forecasts of hurricane paths by approximately 20 nautical miles between 2005 and 2011 through a combination of upgrades to observation systems, better hurricane forecast models, enhancements to the computer infrastructure, and research that will be transferred to forecast operations. Also, NWS expects to increase tornado warning lead times from 13 to 15 minutes by the end of fiscal year 2008 after the agency completes an upgrade to its radar system and realizes benefits from software improvements to its forecaster workstations. NWS also provides training courses for its employees to help improve its forecasting services, but the agency's process for selecting training lacks sufficient oversight. 
Program officials propose and justify training needs on the basis of up to eight different criteria--including whether a course is expected to improve NWS forecasting performance measures, support customer outreach, or increase scientific awareness. Many of these course justifications appropriately demonstrate support for improved forecasting performance. For example, training on how to more effectively use forecaster workstations is expected to help improve tornado and hurricane warnings. However, in justifying training courses, program officials routinely link courses to NWS forecasting performance measures. For example, in 2006, almost all training needs were linked to expectations for improved performance--including training on cardiopulmonary resuscitation, spill prevention, and systems security. The training selection process did not validate or question how these courses could help improve weather forecasts. Overuse of this justification undermines the distinctions among different training courses and the credibility of the course selection process. Additionally, because the training selection process does not clearly distinguish among courses, it is difficult to determine whether sufficient funds are dedicated to the courses that are expected to improve performance. To improve its efficiency, NWS plans to develop a prototype of a new concept of operations, an effort that could affect its national office configuration, including the location and functions of its offices nationwide. However, many details about the impact of any proposed changes on NWS forecast services, staffing, and budget have yet to be determined. Further, the agency has not yet determined key activities, timelines, or measures for evaluating the prototype of the new office operational structure. As a result, it is not evident that NWS will collect the information it needs on the impact and benefits of any office restructuring in order to make sound and cost-effective decisions.
The basic process by which all federal agencies typically develop and issue regulations is set forth in the Administrative Procedure Act (APA), and is generally known as the rulemaking process. Rulemaking at most regulatory agencies follows the APA's informal rulemaking process, also known as "notice and comment" rulemaking, which generally requires agencies to publish a notice of proposed rulemaking in the Federal Register, provide interested persons an opportunity to comment on the proposed regulation, and publish the final regulation, among other things. Under the APA, a person adversely affected by an agency's notice and comment rulemaking is generally entitled to judicial review of that new rule, and a court may invalidate the regulation if it finds it to be "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law," sometimes referred to as the arbitrary and capricious test. In addition to the requirements of the APA, federal agencies typically must comply with requirements imposed by certain other statutes and executive orders. In accordance with various presidential executive orders, agencies work closely with staff from the Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs, who review draft regulations and other significant regulatory actions prior to publication. Many of the requirements that affect OSHA standard setting were established in 1980 or later. The process OSHA uses to develop and issue standards is spelled out in the OSH Act. Section 6(b) of the act specifies the procedures OSHA must use to promulgate, modify, or revoke its standards. These procedures include publishing the proposed rule in the Federal Register, providing interested persons an opportunity to comment, and holding a public hearing upon request.
Section 6(a) of the act directed the Secretary of Labor (through OSHA) to adopt any national consensus standards or established federal standards as safety and health standards within 2 years of the date the OSH Act went into effect, without following the procedures set forth in section 6(b) or the APA. The vast majority of these standards have not changed since originally adopted, despite significant advances in technology, equipment, and machinery over the past several decades. In leading the agency's standard-setting process, staff from OSHA's Directorate of Standards and Guidance, in collaboration with staff from other Labor offices, explore the appropriateness and feasibility of developing standards to address workplace hazards that are not covered by existing standards. According to an OSHA official, once OSHA initiates such an effort, an interdisciplinary team typically composed of at least five staff focuses on that issue. We analyzed the 58 significant health and safety standards OSHA issued between 1981 and 2010 and found that the time frames for developing and issuing them averaged about 93 months (7 years, 9 months), and ranged from 15 months to about 19 years (see table 1). During this period, OSHA staff also worked to develop standards that have not yet been finalized. For example, according to agency officials, OSHA staff have been working on developing a silica standard since 1997, a beryllium standard since 2000, and a standard on walking and working surfaces since 2003. For a depiction of the timelines for safety and health standards issued between 1981 and 2010, see appendix I. These analyses are necessary because the Supreme Court has held that the OSH Act requires that standards be both technologically and economically feasible. Am. Textile Mfrs. Inst. v. Donovan, 452 U.S. 490, 513 n.31 (1981).
According to agency officials, the small business panel process takes about 8 months of work, and OSHA is one of only three federal agencies that is subject to this requirement. Experts and agency officials also told us that changing priorities are a factor that affects the time frames for developing and issuing standards, explaining that priorities may change as a result of changes within OSHA, Labor, Congress, or the presidential administration. Some agency officials and experts told us such changes often cause delays in the process of setting standards. For example, some experts noted that the agency's intense focus on publishing an ergonomics rule in the 1990s took attention away from several other standards that previously had been a priority. The standard of judicial review that applies to OSHA standards if they are challenged in court also affects OSHA's time frames because it requires more robust research and analysis than the standard that applies to many other agencies' regulations, according to some experts and agency officials. Instead of the arbitrary and capricious test provided for under the APA, the OSH Act directs courts to review OSHA's standards using a more stringent legal standard: it provides that a standard shall be upheld if supported by "substantial evidence in the record considered as a whole." According to OSHA officials, this more stringent standard (known as the "substantial evidence" standard) requires a higher level of scrutiny by the courts and, as a result, OSHA staff must conduct a large volume of detailed research in order to understand all industrial processes involved in the hazard being regulated, and to ensure that a given hazard control would be feasible for each process.
According to OSHA officials and experts, two additional factors result in an extensive amount of work for the agency in developing standards: Substantial data challenges, which stem from a dearth of available scientific data for some hazards and having to review and evaluate scientific studies, among other sources. In addition, according to agency officials, certain court decisions interpreting the OSH Act require rigorous support for the need for and feasibility of standards. 29 U.S.C. § 655(f). An example of one such decision cited by agency officials is a 1980 Supreme Court case, which resulted in OSHA having to conduct quantitative risk assessments for each health standard and ensure that these assessments are supported by substantial evidence. Response to adverse court decisions. Several experts with whom we spoke observed that adverse court decisions have contributed to an institutional culture in the agency of trying to make OSHA standards impervious to future adverse decisions. However, agency officials said that, in general, OSHA does not try to make a standard "bulletproof" because, while OSHA tries to avoid lawsuits that might ultimately invalidate the standard, the agency is frequently sued. For example, in the "benzene decision," the Supreme Court invalidated OSHA's revised standard for benzene because the agency failed to make a determination that benzene posed a "significant risk" of material health impairment under workplace conditions permitted by the current standard. Another example is a 1992 decision in which a U.S. Court of Appeals struck down an OSHA health standard that would have set or updated the permissible exposure limit for over 400 air contaminants. OSHA has not issued any emergency temporary standards in nearly 30 years, citing, among other reasons, legal and logistical challenges.
OSHA officials noted that the emergency temporary standard authority remains available, but the legal requirements to issue such a standard--demonstrating that workers are exposed to grave danger and establishing that an emergency temporary standard is necessary to protect workers from that grave danger--are difficult to meet. Similarly difficult to meet, according to officials, is the requirement that an emergency temporary standard must be replaced within 6 months by a permanent standard issued using the process specified in section 6(b) of the OSH Act. OSHA uses enforcement and education as alternatives to issuing emergency temporary standards to respond relatively quickly to urgent workplace hazards. OSHA officials consider their enforcement and education activities complementary. In its enforcement efforts to address urgent hazards, OSHA uses the general duty clause of the OSH Act, which requires employers to provide a workplace free from recognized hazards that are causing, or are likely to cause, death or serious physical harm to their employees. 29 U.S.C. § 654(a)(1). Under the general duty clause, OSHA has the authority to issue citations to employers even in the absence of a specific standard under certain circumstances. Along with its enforcement and standard-setting activities, OSHA also educates employers and workers to promote voluntary protective measures against urgent hazards. OSHA's education efforts include on-site consultations and publishing health and safety information on urgent hazards. For example, if its inspectors discover a particular hazard, OSHA may send letters to all employers where the hazard is likely to be present to inform them about the hazard and their responsibility to protect their workers. Although the rulemaking experiences of EPA and MSHA shed some light on OSHA's challenges, their statutory frameworks and resources differ too markedly for them to be models for OSHA's standard-setting process.
For example, EPA is directed to regulate certain sources of specified air pollutants and review its existing regulations within specific time frames under section 112 of the Clean Air Act, which EPA officials told us gives the agency clear requirements and statutory deadlines for regulating hazardous air pollutants. MSHA benefits from a narrower scope of authority than OSHA and has more specialized expertise as a result of its more limited jurisdiction and frequent on-site presence at mines. Officials at MSHA, OSHA, and Labor noted that this is very different from OSHA, which oversees a vast array of workplaces and types of industries and must often supplement the agency's inside knowledge by conducting site visits. Agency officials and occupational safety and health experts shared their understanding of the challenges facing OSHA and offered ideas for improving the agency's standard-setting process. Some of these ideas would involve substantial procedural changes that may be beyond the scope of OSHA's authority and would require amending existing laws, including the OSH Act. Improve coordination with other agencies: Experts and agency officials noted that OSHA has not fully leveraged available expertise at other federal agencies, especially NIOSH, in developing and issuing its standards. OSHA officials said the agency considers NIOSH's input on an ad hoc basis, but OSHA staff do not routinely work closely with NIOSH staff to analyze risks of occupational hazards. They stated that collaborating with NIOSH on risk assessments, and generally in a more systematic way, could reduce the time it takes to develop a standard by several months, thus facilitating OSHA's standard-setting process. Expand use of voluntary consensus standards: According to OSHA officials, many OSHA standards incorporate or reference outdated consensus standards, which could leave workers exposed to hazards that are insufficiently addressed by OSHA standards based on out-of-date technology or processes.
Experts suggested that Congress pass new legislation that would allow OSHA, through a single rulemaking effort, to revise standards for a group of health hazards using current industry voluntary consensus standards, eliminating the requirement for the agency to follow the standard-setting provisions of section 6(b) of the OSH Act or the APA. One potential disadvantage of this proposal is that any abbreviation of the regulatory process could also result in standards that fail to reflect relevant stakeholder concerns and, for example, impose unnecessarily burdensome requirements on employers. Impose statutory deadlines: OSHA officials indicated that it can be difficult to prioritize standards due to the agency's numerous and sometimes competing goals. In the past, having a statutory deadline, combined with relief from procedural requirements, resulted in OSHA issuing standards more quickly. However, some legal scholars have noted that curtailing the current rulemaking process required by the APA may result in fewer opportunities for public input and possibly decrease the quality of the standard. Also, officials from MSHA told us that, while statutory deadlines make its priorities clear, this is sometimes to the detriment of other issues that must be set aside in the meantime. Change the standard of judicial review: Experts and agency officials suggested that OSHA's substantial evidence standard of judicial review be replaced with the arbitrary and capricious standard, which would be more consistent with the standard applied to other federal regulatory agencies. The Administrative Conference of the United States has recommended that Congress amend laws that mandate use of the substantial evidence standard, in part because it can be unnecessarily burdensome for agencies. As a result, changing the standard of review to "arbitrary and capricious" could reduce the agency's evidentiary burden.
However, if Congress has concerns about OSHA's current regulatory power, it may prefer to keep the current standard of review. Allow alternatives for supporting feasibility: Experts suggested that OSHA minimize on-site visits--a time-consuming requirement for analyzing the technological and economic feasibility of new or updated standards--by using surveys or basing its analyses on industry best practices. One limitation to surveying worksites is that, according to OSHA officials, in-person site visits are imperative for gathering sufficient data in support of most health standards. Basing feasibility analyses on industry best practices would require a statutory change, as one expert noted, and would still require OSHA to determine feasibility on an industry-by-industry basis. Adopt a priority-setting process: Experts suggested that OSHA develop a priority-setting process for addressing hazards, and as GAO has reported, such a process could lead to improved program results. OSHA attempted such a process in the past, which allowed the agency to articulate its highest priorities for addressing occupational hazards. Reestablishing such a process may improve transparency for stakeholders and facilitate OSHA management's ability to plan its staffing and budgetary needs. However, it may not immediately address OSHA's challenges in expeditiously setting standards because such a process could take time and would require commitment from agency management. The process for developing new and updated safety and health standards for occupational hazards is a lengthy one and can result in periods when there are insufficient protections for workers. Nevertheless, any streamlining of the current process must guarantee sufficient stakeholder input to ensure that the quality of standards does not suffer.
Additional procedural requirements established since 1980 by Congress and various executive orders have increased opportunities for stakeholder input in the regulatory process and required agencies to evaluate and explain the need for regulations, but they have also resulted in a more protracted rulemaking process for OSHA and other regulatory agencies. Ideas for changes to the regulatory process must weigh the benefits of addressing hazards more quickly against a potential increase in the regulatory burden imposed on the regulated community. Most methods for streamlining that have been suggested by experts and agency officials are largely outside of OSHA's authority because many procedural requirements are established by federal statute or executive order. However, OSHA can coordinate more routinely with NIOSH on risk assessments and other analyses required to support the need for standards, saving OSHA time and expense. In our report being released today, we recommend that OSHA and NIOSH more consistently collaborate on researching occupational hazards so that OSHA can more effectively leverage NIOSH expertise in its standard-setting process. Both agencies agreed with this recommendation. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For questions about this testimony, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Gretta L. Goodwin, Assistant Director; Susan Aschoff; Tim Bober; Anna Bonelli; Sarah Cornetto; Jessica Gray; and Sara Pelton. The following two figures (fig. 2 and fig. 3) depict a timeline for each of the 58 significant safety and health standards OSHA issued between 1981 and 2010.
This testimony discusses the challenges the Department of Labor's (Labor) Occupational Safety and Health Administration (OSHA) faces in developing and issuing safety and health standards. Workplace safety and health standards are designed to help protect over 130 million public and private sector workers from hazards at more than 8 million worksites in the United States, and have been credited with helping prevent thousands of work-related deaths, injuries, and illnesses. However, questions have been raised concerning whether the agency's approach to developing standards is overly cautious, resulting in too few standards being issued. Others counter that the process is intentionally deliberative to balance protections provided for workers with the compliance burden imposed on employers. Over the past 30 years, various presidential executive orders and federal laws have added new procedural requirements for regulatory agencies, resulting in multiple and sometimes lengthy steps OSHA and other agencies must follow. The remarks today are based on findings from our report, which is being released today, entitled "Workplace Safety and Health: Multiple Challenges Lengthen OSHA's Standard Setting." For this report, we were asked to review: (1) the time taken by OSHA to develop and issue occupational safety and health standards and the key factors that affect these time frames, (2) alternatives to the typical standard-setting process that are available for OSHA to address urgent hazards, (3) whether rulemaking at other regulatory agencies offers insight into OSHA's challenges with setting standards, and (4) ideas that have been suggested by occupational safety and health experts for improving the process. In summary, we found that, between 1981 and 2010, the time it took OSHA to develop and issue safety and health standards ranged from 15 months to 19 years and averaged more than 7 years. 
Experts and agency officials cited several factors that contribute to the lengthy time frames for developing and issuing standards, including increased procedural requirements, shifting priorities, and a rigorous standard of judicial review. We also found that, in addition to using the typical standard-setting process, OSHA can address urgent hazards by issuing emergency temporary standards, although the agency has not used this authority since 1983 because of the difficulty it has faced in compiling the evidence necessary to meet the statutory requirements. Instead, OSHA focuses on enforcement activities--such as enforcing the general requirement of the Occupational Safety and Health Act of 1970 (OSH Act) that employers provide a workplace free from recognized hazards--and educating employers and workers about urgent hazards. Experiences of other federal agencies that regulate public or worker health hazards offered limited insight into the challenges OSHA faces in setting standards. For example, EPA officials pointed to certain requirements of the Clean Air Act to set and regularly review standards for specified air pollutants that have facilitated the agency's standard-setting efforts. In contrast, the OSH Act does not require OSHA to periodically review its standards. Also, MSHA officials noted that their standard-setting process benefits from both the in-house knowledge of its inspectors, who inspect every mine at least twice yearly, and a dedicated mine safety research group within the National Institute for Occupational Safety and Health (NIOSH), a federal research agency that makes recommendations on occupational safety and health. OSHA must instead rely on time-consuming site visits to obtain information on hazards and has not consistently coordinated with NIOSH to assess occupational hazards. Finally, experts and agency officials identified several ideas that could improve OSHA's standard-setting process. 
In our report being released today, we draw upon one of these ideas and recommend that OSHA and NIOSH more consistently collaborate on researching occupational hazards so that OSHA can more effectively leverage NIOSH expertise in its standard-setting process.
Federal funding for highways is provided to the states mostly through a series of grant programs collectively known as the Federal-Aid Highway Program. Periodically, Congress enacts multiyear legislation that authorizes the nation's surface transportation programs. In 2005, Congress enacted SAFETEA-LU, which authorized $197.5 billion for the Federal-Aid Highway Program from fiscal years 2005 through 2009. In a joint federal-state partnership, FHWA, within the Department of Transportation (DOT), administers the Federal-Aid Highway Program and distributes most funds to the states through annual apportionments established by statutory formulas. Once FHWA apportions these funds, the funds are available for states to obligate for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes, as well as for other purposes authorized in law. The amount of federal funding made available for highways was substantial--from $34.4 to $43.0 billion per year for fiscal years 2005 through 2009. The Highway Trust Fund was instituted by Congress in 1956 to construct the Interstate Highway System, which is currently 47,000 miles in length. The Highway Trust Fund holds certain excise taxes collected on motor fuels and truck-related taxes, including taxes on gasoline, diesel fuel, gasohol, and other fuels; truck tires and truck sales; and heavy vehicle use. In 1983, the fund was divided into the Highway Account and the Mass Transit Account. More than 80 percent of the total fund is the Highway Account, including a majority of the fuel taxes as well as all truck-related taxes (see fig. 1). Most Highway Account funds (about 83 percent) were apportioned to states across 13 formula programs during the 4 years of the SAFETEA-LU period for which data are available. Included among these 13 programs are 6 "core" highway programs (see table 1).
In addition to formula programs, for the portion of the SAFETEA-LU period for which final data are available: Congress directly allocated about 8 percent of Highway Account funds to state departments of transportation through congressionally directed High Priority Projects. The remaining funds, about 9 percent of the total, represent dozens of other authorized programs allocated to state DOTs, congressionally directed projects other than High Priority Projects, administrative expenses, and funding provided to states by other DOT agencies, such as the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration (see fig. 2). Some of the apportioned programs use states' contributions to the Highway Account of the Highway Trust Fund as a factor in determining program funding levels for each state. Because the Department of the Treasury (Treasury) collects fuel taxes from a small number of corporations located in a relatively small number of places--not from states--FHWA has to estimate the fuel tax contributions made to the fund by users in each state. Likewise, FHWA must estimate the state of origin of various truck taxes. FHWA calculates motor fuel-related contributions based on estimates of the gallons of fuel used on highways in each state. To do so, FHWA relies on data gathered from state revenue agencies and summary tax data available from Treasury as part of the estimation process (see app. II). Because the collection and estimation process takes place over several years (see fig. 3), the data used to calculate the formula are 2 years old. For example, the data used to apportion funding to states in fiscal year 2009 were based on estimated collections attributable to each state in fiscal year 2007. By the early 1980s, construction of the Interstate Highway System was nearing completion, and a larger portion of the funds from the Highway Trust Fund were being authorized for non-Interstate programs.
The Surface Transportation Assistance Act of 1982 provided, for the first time, that each state would receive, for certain programs, a "minimum allocation" of 85 percent of its share of estimated tax payments to the Highway Account of the Highway Trust Fund. This approach was largely retained when Congress reauthorized the program in 1987. The Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) raised the minimum allocation to 90 percent. The Transportation Equity Act for the 21st Century (TEA-21) of 1998 guaranteed each state a specific share of the total program (defined as all apportioned programs plus High Priority Projects), a minimum 90.5 percent share of contributions. It also introduced rate-of-return considerations into funds states received for the Interstate Maintenance, National Highway System, and Surface Transportation Programs. In 2005, Congress implemented through SAFETEA-LU the Equity Bonus Program, which was designed to bring all states up to a guaranteed rate of return of 92 percent by fiscal year 2008. For the time period for which final data are available, fiscal years 2005 through 2008, our analysis shows that every state but one received more funding for highway programs than users contributed to the Highway Account (see fig. 4). The only exception, Texas, received about $1.00 (99.7 cents) for each dollar contributed. Among other states, this ranged from a low of $1.02 for both Arizona and Indiana to a high of $5.63 for the District of Columbia. In addition, all states, including Texas, received more in funding than their highway users contributed during both fiscal years 2007 and 2008. In effect, almost every state was a donee state during the first 4 years of SAFETEA-LU. This occurred because overall, more funding was authorized and apportioned than was collected from highway users. The account was supplemented by general funds from the Treasury.
Our rate-of-return analysis has two notable features: It compares funding states received from the Highway Trust Fund Highway Account with the dollars estimated to have been collected in each state and contributed by each state's highway users into the Highway Account in that same year. For example, for fiscal year 2008, it compares the highway funds states received in fiscal year 2008 with the amount collected and contributed in that fiscal year--data that did not become available until December 2009. Because of the 2-year lag (see fig. 3), fiscal year 2008 is the latest year for which these data are available. Thus, the final year of the original SAFETEA-LU authorization period, fiscal year 2009, is not included. Unlike other calculations used to apportion certain funds discussed further in this report, this analysis includes all funding provided to the states from the Highway Account, including (1) funds apportioned by formula, (2) High Priority Projects, and (3) other authorized programs, including safety program funding provided to states by other DOT agencies such as the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration (see fig. 2 for a breakdown of these funds). Using the above methodology, our analysis shows that states generally received more than their highway users contributed. However, other calculations, as described below, provide different results. Because there are different methods of calculating a rate of return, and the method used affects the results, confusion can result over whether a state is a donor or donee. A state can appear to be a donor using one type of calculation and a donee using a different type. A second way to calculate rate of return is to apply the same dollar return calculation method, but use contribution data that are available at the time funds are apportioned to the states. This calculation method indicates that all states were donees.
The data used to calculate the rate of return per dollar contributed differ from our preceding analysis in two ways: As shown in figure 3, this method uses 2-year-old data on contributions for apportionments, due to the time lag between when the Treasury collects fuel and truck excise taxes and when funds are apportioned. It uses a subset of Federal-Aid Highway programs, including both programs apportioned to states by formula and High Priority Projects. However, it does not include other allocated highway programs or other funding states receive from other DOT agencies, such as the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration (see fig. 2). Using this approach, every state received more in funding from the Highway Account of the Highway Trust Fund than its users contributed for the SAFETEA-LU period. The rate of return ranged from a low of $1.04 per dollar for 16 states, including Texas, to a high of $5.26 per dollar for the District of Columbia, as shown in figure 5. This calculation results in states generally having a lower dollar rate of return than our calculation using same-year data (see fig. 4). A third calculation, based on a state's "relative share"--the amount a state receives relative to other states instead of an absolute, dollar rate of return--results in both donor and donee states. Congress defined this method in SAFETEA-LU as the one FHWA uses for calculating rates of return for the purpose of apportioning highway funding to the states. In order to calculate this rate of return, FHWA must determine what proportion of the total national contributions came from highway users in each state. The state's share of contributions into the Highway Account of the Highway Trust Fund is then used to calculate a relative rate of return--how the proportion of each state's contribution compares to the proportion of funds the state received.
A comparison of the relative rate of return on states' contributions showed 28 donor states, receiving less than a 100 percent relative rate of return, and 23 donee states, receiving more than a 100 percent relative rate of return (see fig. 6). States' relative rates of return ranged from a low of 91.3 percent for 12 states to a high of 461 percent for the District of Columbia. Like the return-per-dollar analysis in figure 5, this calculation includes only formula funds and High Priority Projects allocated to states, and excludes other DOT authorized programs allocated to states (see fig. 2). The difference between a state's absolute and relative rate of return can create confusion because the share calculation is sometimes mistakenly referred to as "cents on the dollar." Using the relative share method of calculation will result in some states being "winners" and other states being "losers." If one state receives a higher proportion of highway funds than its highway users contributed, another state must receive a lower proportion than it contributed. The only way to avoid this is for every state to get back exactly the same proportion that it contributed, which is impractical because estimated state contribution shares are not known until 2 years after the apportionments and allocations. Furthermore, because more funding has recently been apportioned and allocated from the Highway Account than is being contributed by highway users, a state can receive more than it contributes to the Highway Trust Fund Highway Account, making it a donee under its rate of return per dollar, but a donor under its relative share rate of return. California provides a useful example of this. From fiscal years 2005 through 2008, using same-year contributions and funding across all Highway Trust Fund Highway Account allocations and apportionments, California received $1.16 for each dollar contributed. This analysis shows California as a donee state (see table 2).
Alternatively, when calculating a dollar rate of return over the full SAFETEA-LU period (fiscal years 2005 through 2009) using state contribution estimates available at the time of apportionment (fiscal years 2003 through 2007, as shown in fig. 3) and including only programs covered in rate-of-return adjustments, California remains a donee state, but received $1.04 for each dollar contributed. In contrast, using the relative share approach for the fiscal year 2005 through 2009 period, California received 91 percent of the share its highway users contributed in federal highway-related taxes, which would make it a donor state. A fourth method for calculating a state's rate of return is possible, but not normally calculated by FHWA. It involves evaluating the relative share as described above, but using same-year comparison data. Again, because of the time lag required to estimate state highway user contributions to the Highway Account, such analysis is possible only 2 years after FHWA calculates apportionments for states. Our analysis using this approach results in yet another set of rate-of-return answers. For example, using available data from fiscal years 2005 to 2008, the relative rate of return for California becomes 97 percent, rather than 91 percent. When this analysis is applied to all states, a state may change its donor/donee status. For example, Minnesota, Nebraska, and Oklahoma appear both as donor and donee states, depending on the calculation method. This comparison of the relative rate of return on states' contributions showed 27 states receiving less than a 100 percent relative rate of return, and 24 states receiving more than a 100 percent relative rate of return. Table 3 shows the results for all four methods described and the wide variation of states' rates of return based on the method used. Since 1982, Congress has attempted to address states' concerns regarding the rate of return on highway users' contributions to the Highway Trust Fund.
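The donor/donee ambiguity described above comes down to two different formulas applied to the same underlying figures. The following sketch contrasts the absolute dollar rate of return with the relative-share rate of return; the state names and dollar amounts are hypothetical placeholders, not actual FHWA estimates, and the real calculations also differ in which programs and which years of data they include.

```python
# Sketch of the two basic rate-of-return formulas discussed in the report.
# All figures are hypothetical, not actual FHWA contribution estimates.

contributions = {"State A": 100.0, "State B": 50.0}   # $ billions paid into the Highway Account
funding       = {"State A": 110.0, "State B": 70.0}   # $ billions received from the Highway Account

total_contrib = sum(contributions.values())   # 150.0
total_funding = sum(funding.values())         # 180.0 (exceeds contributions, as under SAFETEA-LU)

for state in contributions:
    # Absolute (dollar) rate of return: funding received per dollar contributed.
    dollar_ror = funding[state] / contributions[state]
    # Relative share: the state's share of funding divided by its share of contributions.
    share_ror = (funding[state] / total_funding) / (contributions[state] / total_contrib)
    print(f"{state}: ${dollar_ror:.2f} per dollar, {share_ror:.0%} relative share")
```

With these numbers, State A receives $1.10 per dollar contributed (a donee in absolute terms) yet only about a 92 percent relative share (a donor in relative terms), because total funding exceeds total contributions. This mirrors the California example: when more is paid out than is collected, every state's dollar return can exceed $1.00 even though, by construction, relative shares above and below 100 percent must offset each other.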
In 2005, Congress enacted in SAFETEA-LU the Equity Bonus Program, designed to bring all states up to a "guaranteed" rate of return. The Equity Bonus is calculated from a subset of Federal-Aid Highway programs, which include 12 formula programs, plus High Priority Projects designated by Congress. In brief, since SAFETEA-LU, the Equity Bonus allocates sufficient funds to ensure that each state receives a minimum return of 90.5 percent for fiscal years 2005-2006, 91.5 percent for fiscal year 2007, and 92 percent for fiscal years 2008-2009 for the included programs. The Equity Bonus provides more funds to states than any other individual Federal-Aid Highway formula program. Over SAFETEA-LU's initial 5-year authorization period, the Equity Bonus provided $44 billion to the states, while the second largest formula program, the Surface Transportation Program, provided $32.5 billion. Each year about $2.6 billion stays as Equity Bonus program funds and may be used for any purpose eligible under the Surface Transportation Program. Any additional Equity Bonus funds are added to the apportionments of the six "core" federal-aid highway formula programs: the Interstate Maintenance, National Highway System, Surface Transportation, Congestion Mitigation and Air Quality, Highway Bridge, and Highway Safety Improvement programs. States are frequently able to transfer a portion of their funds among the core programs, making the distribution of funding among the core programs less significant than it might otherwise be. States may qualify for Equity Bonus funding by meeting any of three criteria (see fig. 7). A state that meets more than one criterion receives funding under whichever provision provides it the greatest amount of funding. FHWA conducts Equity Bonus calculations annually. For the first criterion, the guaranteed relative rate of return, for fiscal year 2005 all states were guaranteed at least 90.5 percent of their share of estimated contributions.
The guaranteed rate increased over time, rising to 92 percent in fiscal year 2009. The second criterion, the guaranteed increase over average annual Transportation Equity Act for the 21st Century (TEA-21) funding, also varied by year, rising from 117 percent in fiscal year 2005 to 121 percent for fiscal year 2009. The number of states qualifying under the first two provisions can vary from year to year. For the third criterion, a guarantee to "hold harmless" states that had certain qualifying characteristics at the time SAFETEA-LU was enacted, 27 states had at least one of these characteristics. A number of these states had more than one of these characteristics. Forty-seven states received Equity Bonus funding every year during the SAFETEA-LU period. However, the District of Columbia, Rhode Island, and Vermont each had at least 1 year in which they did not receive Equity Bonus funding because they did not need it to reach the funding level specified under the three provisions. Maine was the only state that did not receive an Equity Bonus in any year. Half of all states received a significant increase in their overall Federal-Aid Highway Program funding--at least 25 percent over their core funding. Each state's percent increase in its overall funding total for apportioned programs and High Priority Projects for fiscal years 2005 through 2009 due to Equity Bonus funding is shown in figure 8. Additional factors affect the relationship between contributions to the Highway Trust Fund and the funding states receive. These include (1) the infusion of significant amounts of general revenues into the Highway Trust Fund, (2) the challenge of factoring performance and accountability for results into transportation investment decisions, and (3) the long-term sustainability of existing mechanisms and the challenges associated with developing new approaches to funding the nation's transportation system.
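The "greatest of three guarantees" structure of the Equity Bonus can be sketched as a simple maximum over the three criteria. In this sketch, the criterion-1 and criterion-2 percentages for fiscal years 2005 and 2009 come from the report; the intermediate-year percentages are illustrative interpolations, and the input figures, function name, and parameters are hypothetical placeholders rather than FHWA's actual formula, which involves additional program-level detail.

```python
# Sketch of the Equity Bonus qualification logic: a state is brought up to the
# highest funding level produced by the three SAFETEA-LU criteria.
# Endpoint percentages are from the report; intermediate years are assumptions.

def equity_bonus_target(fiscal_year, contribution_share, total_program,
                        avg_tea21_funding, hold_harmless_floor=0.0):
    """Return the minimum funding level (in $ billions) a state is guaranteed."""
    # Criterion 1: guaranteed relative rate of return, rising over time
    # (90.5% for FY2005-2006, 91.5% for FY2007, 92% for FY2008-2009).
    guaranteed_share = {2005: 0.905, 2006: 0.905, 2007: 0.915,
                        2008: 0.92, 2009: 0.92}[fiscal_year]
    c1 = guaranteed_share * contribution_share * total_program

    # Criterion 2: guaranteed growth over average annual TEA-21 funding
    # (117% in FY2005 rising to 121% in FY2009; middle years interpolated here).
    growth = {2005: 1.17, 2006: 1.18, 2007: 1.19,
              2008: 1.20, 2009: 1.21}[fiscal_year]
    c2 = growth * avg_tea21_funding

    # Criterion 3: "hold harmless" floor for states with qualifying
    # characteristics at the time SAFETEA-LU was enacted.
    c3 = hold_harmless_floor

    # The state is funded under whichever provision yields the most.
    return max(c1, c2, c3)

# Example: a hypothetical FY2009 state with a 2% contribution share, a $40B
# total covered program, and $0.6B average annual TEA-21 funding.
print(equity_bonus_target(2009, 0.02, 40.0, 0.6))  # criterion 1 dominates here
```

A state whose criterion-1 amount already exceeds the other two receives no extra funds from criteria 2 and 3, which is consistent with the report's observation that a few states (e.g., Maine) received no Equity Bonus in some or all years because they did not need it to reach the guaranteed levels.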
First, the infusion of significant amounts of general revenues into the Highway Trust Fund Highway Account breaks the link between highway taxes and highway funding. The rate-of-return approach was designed to ensure that, consistent with the user-pay system, wherein the costs of building and maintaining the system are borne by those who benefit, users receive a fair return on their investment to the extent possible. However, in fiscal year 2008 the Highway Trust Fund held insufficient amounts to sustain the authorized level of funding and, partly as a result, we placed it on our list of high-risk programs. To cover the shortfall, from fiscal years 2008 through 2010 Congress transferred a total of $34.5 billion in additional general revenues into the Highway Trust Fund, including $29.7 billion into the Highway Account. This means that, to a large extent, funding has shifted away from the contributions of highway users, breaking the link between highway taxes paid and benefits received by users. Furthermore, the infusion of a significant amount of general fund revenues complicates rate-of-return analysis because the current method of calculating contributions does not account for states' general revenue contributions. For many states, the shares of Highway Trust Fund contributions and general revenue contributions differ; as a result, state-based contributions to all the funding in the Trust Fund are no longer clear. In addition, since March 2009, the American Recovery and Reinvestment Act of 2009 apportioned an additional $26.7 billion to the states for highways--a significant augmentation of federal highway spending that was funded with general revenues. Second, using rate of return as a major factor in determining federal highway funding levels is at odds with reexamining and restructuring federal surface transportation programs so that performance and accountability for results is factored into transportation investment decisions.
As we have reported, for many surface transportation programs, goals are numerous and conflicting, and the federal role in achieving the goals is not clear. Many of these programs have no relationship to the performance of either the transportation system or of the grantees receiving federal funds and do not use the best tools and approaches to ensure effective investment decisions. Our previous work has outlined the need to create well-defined goals based on identified areas of federal interest and a clearly defined federal role in relation to other levels of government. We have suggested that where the federal interest is less evident, state and local governments could assume more responsibility, and some functions could potentially be assumed by the states or other levels of government. Furthermore, incorporating performance and accountability for results into transportation funding decisions is critical to improving results. However, the current approach presents challenges. The Federal-Aid Highway Program, in particular, distributes funding through a complicated process in which the underlying data and factors are ultimately not meaningful because they are overridden by other provisions designed to yield a largely predetermined outcome--that of returning revenues to their state of origin. Moreover, once the funds are apportioned, states have considerable flexibility to reallocate them among highway and transit programs. As we have reported, this flexibility, coupled with a rate-of-return orientation, essentially means that the Federal-Aid Highway Program functions, to some extent, as a cash transfer, general purpose grant program. This approach poses considerable challenges to introducing performance orientation and accountability for results into highway investment decisions.
For three highway programs that were designed to meet national and regional transportation priorities, we have recommended that Congress consider a competitive, criteria-based process for distributing federal funds. Finally, using rate of return as a major factor in determining federal highway funding levels poses problems because funding the nation's transportation system through taxes on motor vehicle fuels is likely to be unsustainable in the longer term. Receipts for the Highway Trust Fund derived from motor fuel taxes have declined in purchasing power, in part because the federal gasoline tax rate has not increased since 1993. In fiscal year 2008 (the last year for which data are available) total contributions to the Highway Account of the Highway Trust Fund decreased by more than $3.5 billion from fiscal year 2007, the first year of decrease during the SAFETEA-LU period. Over the long term, vehicles will become more fuel efficient and increasingly run on alternative fuels--for example, higher fuel economy standards were enacted in 2010. As such, fuel taxes may not be a sustainable source of transportation funding. Furthermore, transportation experts have noted that transportation policy needs to recognize emerging national and global challenges, such as reducing the nation's dependence on imported fuel and minimizing the effect of transportation systems on the global climate. A fund that relies on increasing the use of motor fuels to remain solvent might not be compatible with the strategies that may be required to address these challenges. In the near future, policy discussions will need to consider what the most adequate and appropriate transportation financing systems will be and whether or not the current system continues to make sense. 
The National Surface Transportation Infrastructure Financing Commission--created by SAFETEA-LU to, among other things, explore alternative funding mechanisms for surface transportation--identified and evaluated numerous revenue sources for surface transportation programs in its February 2009 report, including alternative approaches to the fuel tax, mileage-based user fees, and freight-related charges. The report also discussed using general revenues to finance transportation investment but concluded that it was a weak option in terms of economic efficiency and other factors, and recommended that new sources of revenue to support transportation be explored. These new sources of revenue may or may not lend themselves to a rate-of-return approach. We provided a draft of this report to DOT for review and comment. DOT provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

To determine the amount of revenue states contributed to the Highway Trust Fund Highway Account compared with the funding they received during the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) period, we completed four analyses using Federal Highway Administration (FHWA) data. We met with FHWA and other DOT officials to discuss availability of data and appropriate methodologies. 
We used FHWA estimates of payments made into the Highway Account of the Highway Trust Fund, by state, and the actual total apportionments and allocations made from the fund, by state. This is sometimes referred to as a "dollar-in, dollar-out" analysis. Because the contribution data take about 2 years for FHWA to compile, for our analyses we used data for 4 of the 5 years of the SAFETEA-LU period, 2005 through 2008, as data for 2009 were not yet available. The source data are published annually in Highway Statistics and commonly referred to as table FE-221, titled "Comparison of Federal Highway Trust Fund Highway Account Receipts Attributable to the States and Federal-Aid Apportionments and Allocations from the Highway Account." FHWA officials confirmed that it contains the best estimate of state contributions and also contains the total apportionments and allocations received by states from the Highway Account of the fund. We did not independently review FHWA's process for estimating state highway users' contributions into the Highway Trust Fund. However, we have reviewed this process in the past, and FHWA officials verified that they have made changes to the process as a result of that review. In addition, we did not attribute any prior balances in the Highway Trust Fund back to states of origin because these funds are not directly tied to any specific year or state. We only examined the fiscal year 2005 through 2008 period; other time periods could provide a different result. We performed alternative analyses to demonstrate that different methodologies provide different answers to the question of how the contributions of states' highway users compared to the funding states received. Using the same data as described above, we performed a "relative share" analysis, which compared each state's estimated proportion of the total contributions to the Highway Account to each state's proportion of total Federal-Aid Highway funding. 
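The difference between the two methodologies described above can be illustrated with a small sketch. The state names and dollar amounts below are invented for illustration only; they are not FHWA data.

```python
# Illustrative comparison of the two methodologies described above.
# State names and dollar amounts are invented, not FHWA data.
contributions = {"A": 900, "B": 600, "C": 500}   # $ millions paid into the Highway Account
funding       = {"A": 950, "B": 700, "C": 550}   # $ millions apportioned/allocated

total_in  = sum(contributions.values())   # 2,000
total_out = sum(funding.values())         # 2,200

for state in contributions:
    # "Dollar-in, dollar-out": absolute dollars received per dollar contributed.
    dollar_ratio = funding[state] / contributions[state]
    # "Relative share": the state's share of all funding minus its share of
    # all contributions (negative means a relatively lower share).
    share_gap = funding[state] / total_out - contributions[state] / total_in
    print(f"{state}: ratio={dollar_ratio:.3f}, share gap={share_gap:+.4f}")
```

In this invented example, state A receives more dollars than its users paid in (a "donee" under the dollar-in, dollar-out view) yet receives a relatively lower share of total funding than its share of total contributions (a "donor" under the relative-share view), which is the ambiguity the alternative analyses were designed to show.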
We also examined how states fared using FHWA's approach for determining the Equity Bonus Program funding apportionments. We performed this analysis to show the outcomes for states based on the information available at the time the Equity Bonus Program apportionments are made. The Equity Bonus Program amounts are calculated using the statutory formulas for a subset of Federal-Aid Highway Programs. These include all programs apportioned by formula plus the allocated High Priority Projects. FHWA uses the most current contribution data available at the time it does its estimates. However, as explained above, the time lag for developing these data is about 2 years. Therefore, we applied the contribution data for 2003 through 2007 to the funding data for 2005 through 2009, the full SAFETEA-LU period. For these data, we (1) analyzed the total estimated contributions by state divided by the total funding received by state--the dollar-in, dollar-out methodology--and (2) compared the share of contributions to the share of payments received for each state. We obtained data from the FHWA Office of Budget for the analysis of state dollar-in, dollar-out outcomes and state relative share data for the Equity Bonus Program. We completed our analyses across the total years of the SAFETEA-LU period, 2005 through 2009. We interviewed FHWA officials and obtained additional information from FHWA on the steps taken to ensure data reliability and determined the data were sufficiently reliable for the purposes of this report. To determine the provisions in place during the SAFETEA-LU period to address rate-of-return issues across states and how they affected the highway funding states received, we reviewed the SAFETEA-LU legislation and reports by the Congressional Research Service (CRS) and FHWA. We also spoke with FHWA and DOT officials to get their perspectives. 
We also conducted an analysis of FHWA data on the Equity Bonus Program provisions, which were created explicitly to address the rate-of-return issues across states. Our analysis compared funding levels distributed to states via apportionment programs and High Priority Projects before and after Equity Bonus Program provisions were applied, and calculated the percentage increase each state received as a result of the Equity Bonus. To determine what additional factors affected the relationship between contributions to the Highway Trust Fund and the funding states receive, we reviewed GAO reports on federal surface transportation programs and the Highway Trust Fund, as well as CRS and FHWA reports, and the report of the National Surface Transportation Infrastructure Financing Commission. In addition, we reviewed FHWA data on the status of the Highway Account of the Highway Trust Fund. We also met with officials from the Department of Transportation's Office of Budget and Programs and FHWA to obtain their perspectives on the issue. Currently, FHWA estimates state-based contributions to the Highway Account of the Highway Trust Fund through a process that includes data collection, adjustment, verification, and final calculation of the states' highway users' contributions. FHWA first collects monthly motor fuel use data and related annual state tax data from state departments of revenue. FHWA then adjusts states' data by applying its own models using federal and other data to establish data consistency among the states. FHWA provides feedback to the states on these adjustments and estimates through FHWA Division Offices. Finally, FHWA applies each state's highway users' estimated share of highway fuel usage to total taxes collected nationally to arrive at a state's contribution to the Highway Trust Fund. We did not assess the effectiveness of FHWA's process for estimating the amount of tax funds attributed to each state for this report. 
According to FHWA officials, data from state revenue agencies are more reliable and comprehensive than vehicle miles traveled data, so FHWA uses state tax information to calculate state contributions. States submit regular reports to FHWA, including a monthly report on motor-fuel consumption due 90 days after month's end, and an annual motor-fuel tax receipts report due 90 days after calendar year's end. States have a wide variety of fuel tracking and reporting methods, so FHWA adjusts the data to achieve uniformity. FHWA analyzes and adjusts fuel usage data, such as off-highway use related to agriculture, construction, industrial, marine, rail, aviation, and off-road recreational usage. It also analyzes and adjusts use data based on public-sector use, including federal civilian, and state, county, and municipal use. FHWA headquarters and Division Offices also work together to communicate with state departments of revenue during the attribution estimation process. According to FHWA officials, each year FHWA headquarters issues a memo prompting its Division Offices to have each state conduct a final review of the motor fuel gallons reported by their respective states. FHWA Division Offices also are required to assess their state's motor fuel use and highway tax receipt process at least once every 3 years to determine if states are complying with FHWA guidance on motor fuel data collection. Once the data are finalized, FHWA applies each state's estimated share of taxed highway fuel use to the total taxes collected to arrive at a state's contribution in the following manner. Finalized estimations of gallons of fuel used on highways in two categories--gasoline and special fuels--allow FHWA to calculate each state's share of the total on-highway fuel usage. The shares of fuel use for each state are applied to the total amount of taxes collected by the Department of the Treasury in each of the 10 categories of highway excise tax. 
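The attribution arithmetic described above can be sketched as follows. The gallon and tax figures are invented for illustration; the actual calculation uses FHWA's adjusted state data and the 10 federal highway excise tax categories.

```python
# Hypothetical sketch of FHWA's attribution arithmetic as described above.
# Gallon and tax figures are invented, not FHWA data.
on_highway_gallons = {                       # finalized gallons, by state
    "A": {"gasoline": 400, "special": 100},  # "special" fuels include diesel
    "B": {"gasoline": 600, "special": 300},
}
taxes_collected = {"gasoline_and_gasohol": 25_000, "all_other": 10_000}  # $ millions

total_gasoline = sum(s["gasoline"] for s in on_highway_gallons.values())
total_special  = sum(s["special"] for s in on_highway_gallons.values())

attributed = {}
for state, gallons in on_highway_gallons.items():
    # Each state's share of national on-highway fuel use, by fuel type...
    gasoline_share = gallons["gasoline"] / total_gasoline
    special_share  = gallons["special"] / total_special
    # ...is applied to the corresponding national tax collections:
    # the gasoline share to gasoline/gasohol taxes, the special-fuels
    # share to all other highway excise taxes.
    attributed[state] = (gasoline_share * taxes_collected["gasoline_and_gasohol"]
                         + special_share * taxes_collected["all_other"])
```

Because each state's contribution is a share of national collections, the attributed amounts sum to the total taxes collected; the method distributes a known national total rather than measuring each state's payments directly.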
The state's gasoline share is applied to the gasoline and gasohol taxes, and the state's special fuels share, which includes diesel fuel, is applied to all other taxes, including truck taxes.

In addition to the contact named above, Steve Cohen (Assistant Director), Robert Ciszewski, Robert Dinkelmeyer, Brian Hartman, Bert Japikse, Josh Ormond, Amy Rosewarne, and Swati Thomas made key contributions to this report.
Federal funding for highways is provided to the states mostly through a series of grant programs known as the Federal-Aid Highway Program, administered by the Department of Transportation's (DOT) Federal Highway Administration (FHWA). In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) authorized $197.5 billion for the Federal-Aid Highway Program for fiscal years 2005 through 2009. The program operates on a "user pay" system, wherein users contribute to the Highway Trust Fund through fuel taxes and other fees. The distribution of funding among the states has been a contentious issue. States that receive less than their highway users contribute are known as "donor" states, and states that receive more than their highway users contribute are known as "donee" states. GAO was asked to examine, for the SAFETEA-LU period, (1) how contributions to the Highway Trust Fund compared with the funding states received, (2) what provisions were used to address rate-of-return issues across states, and (3) what additional factors affect the relationship between contributions to the Highway Trust Fund and the funding states receive. To conduct this review, GAO obtained and analyzed data from FHWA, reviewed FHWA and other reports, and interviewed FHWA and DOT officials. DOT reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. Since 2005, every state received at least as much funding for highway programs as its highway users contributed to the Highway Account of the trust fund. This was possible because more funding was authorized and apportioned than was collected from the states, and the fund needed to be augmented with general revenues. 
If the percentage of funds states contributed to the total is compared with the percentage of funds states received (i.e., relative share), then 28 states received a relatively lower share and 22 states received a relatively higher share than they contributed. Thus, depending on the method of calculation, the same state can appear to be either a donor or donee state. The Equity Bonus Program was used to address rate-of-return issues. It guaranteed a minimum return to states, providing them about $44 billion. Nearly all states received Equity Bonus funding and about half received a significant increase, at least 25 percent, over their core funding. The infusion of general revenues into the Highway Trust Fund affects the relationship between funding and contributions, as a significant amount of highway funding is no longer provided by highway users. Since fiscal year 2008, Congress has transferred nearly $30 billion of general revenues to address shortfalls in the highway program when more funding was authorized than collected. Using rate of return as a major factor in determining highway funding poses challenges to introducing a performance and accountability orientation into the highway program; rate-of-return calculations in effect override other considerations to yield a largely predetermined outcome--that of returning revenues to their state of origin. Because of these and other challenges, funding surface transportation programs remains on GAO's High-Risk list.
DHS has begun to take action to work with other agencies to identify facilities that are required to report their chemical holdings to DHS but may not have done so. The first step of the CFATS process is focused on identifying facilities that might be required to participate in the program. The CFATS rule was published in April 2007, and appendix A to the rule, published in November 2007, listed 322 chemicals of interest and the screening threshold quantities for each. As a result of the CFATS rule, about 40,000 chemical facilities reported their chemical holdings and their quantities to DHS's ISCD. In August 2013, we testified about the ammonium nitrate explosion at the chemical facility in West, Texas, in the context of our past CFATS work. Among other things, the hearing focused on whether the West, Texas, facility should have reported its holdings to ISCD given the amount of ammonium nitrate at the facility. During this hearing, the Director of the CFATS program remarked that throughout the existence of CFATS, DHS had undertaken and continued to support outreach and industry engagement to ensure that facilities comply with their reporting requirements. However, the Director stated that the CFATS regulated community is large and always changing, and DHS relies on facilities to meet their reporting obligations under CFATS. At the same hearing, a representative of the American Chemistry Council testified that the West, Texas, facility could be considered an "outlier" chemical facility, that is, a facility that stores or distributes chemical-related products but is not part of the established chemical industry. Preliminary findings of the U.S. Chemical Safety Board (CSB) investigation of the West, Texas, incident showed that although certain federal agencies that regulate chemical facilities may have interacted with the facility, the ammonium nitrate at the West, Texas, facility was not covered by these programs. 
For example, according to the findings, the Environmental Protection Agency's (EPA) Risk Management Program, which deals with the accidental release of hazardous substances, covers the accidental release of ammonia, but not ammonium nitrate. As a result, the facility's consequence analysis considered only the possibility of an ammonia leak and not an explosion of ammonium nitrate. On August 1, 2013, the same day as the hearing, the President issued Executive Order 13650, Improving Chemical Facility Safety and Security, which was intended to improve chemical facility safety and security in coordination with owners and operators. The executive order established a Chemical Facility Safety and Security Working Group, composed of representatives from DHS; EPA; and the Departments of Justice, Agriculture, Labor, and Transportation, and directed the working group to identify ways to improve coordination with state and local partners; enhance federal agency coordination and information sharing; modernize policies, regulations, and standards; and work with stakeholders to identify best practices. In February 2014, DHS officials told us that the working group has taken actions in the areas described in the executive order. For example, according to DHS officials, the working group has held listening sessions and webinars to increase stakeholder input, explored ways to share CFATS data with state and local partners to increase coordination, and launched a pilot program in New York and New Jersey aimed at increasing federal coordination and information sharing. DHS officials also said that the working group is exploring ways to better share information so that federal and state agencies can identify noncompliant chemical facilities and identify options to improve chemical facility risk management. This would include considering options to improve the safe and secure storage, handling, and sale of ammonium nitrate. 
DHS has also begun to take actions to enhance its ability to assess risk and prioritize facilities covered by the program. For the second step of the CFATS process, facilities that possess any of the 322 chemicals of interest at levels at or above the screening threshold quantity must first submit data to ISCD via an online tool called a Top-Screen. ISCD uses the data submitted in facilities' Top-Screens to assess whether the facilities are covered under the program. If DHS determines that they are covered by CFATS, facilities are to then submit data via another online tool, called a security vulnerability assessment, so that ISCD can further assess their risk and prioritize the covered facilities. ISCD uses a risk assessment approach to develop risk scores to assign chemical facilities to one of four final tiers. Facilities placed in one of these tiers (tier 1, 2, 3, or 4) are considered to be high risk, with tier 1 facilities considered to be the highest risk. The risk score is intended to be derived from estimates of consequence (the adverse effects of a successful attack), threat (the likelihood of an attack), and vulnerability (the likelihood of a successful attack, given an attempt). ISCD's risk assessment approach is composed of three models, each based on a particular security issue--(1) release, (2) theft or diversion, and (3) sabotage--depending on the type of risk associated with the 322 chemicals. Once ISCD estimates a risk score based on these models, it assigns the facility to a final tier. Our prior work showed that the CFATS program was using an incomplete risk assessment approach to assign chemical facilities to a final tier. Specifically, in April 2013, we reported that the approach ISCD used to assess risk and make decisions to place facilities in final tiers did not consider all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. 
For example, the risk assessment approach was based primarily on consequences arising from human casualties, but did not consider economic criticality consequences, as called for by the 2009 National Infrastructure Protection Plan (NIPP) and the CFATS regulation. In April 2013, we reported that ISCD officials told us that, at the inception of the CFATS program, they did not have the capability to collect or process all of the economic data needed to calculate the associated risks and were not positioned to gather all of the data needed. They said that they collected basic economic data as part of the initial screening process; however, they would need to modify the current tool to collect sufficient data. We also found that the risk assessment approach did not consider threat for approximately 90 percent of tiered facilities. Moreover, for the facilities that were tiered using threat considerations, ISCD was using 5-year-old data. We also found that ISCD's risk assessment approach was not consistent with the NIPP because it did not consider vulnerability when developing risk scores. When assessing facility risk, ISCD's risk assessment approach treated every facility as equally vulnerable to a terrorist attack regardless of location and on-site security. As a result, in April 2013 we recommended that ISCD enhance its risk assessment approach to incorporate all elements of risk and conduct a peer review after doing so. ISCD agreed with our recommendations, and in February 2014, ISCD officials told us that they were taking steps to address them, as well as the recommendations of a recently released Homeland Security Studies and Analysis Institute (HSSAI) report that examined the CFATS risk assessment model. As with the findings in our report, HSSAI found, among other things, that the CFATS risk assessment model inconsistently considers risks across different scenarios and that the model does not adequately treat facility vulnerability. 
Overall, HSSAI recommended that ISCD revise the current risk-tiering model and create a standing advisory committee--with membership drawn from government, expert communities, and stakeholder groups--to advise DHS on significant changes to the methodology. In February 2014, senior ISCD officials told us that they have developed an implementation plan that outlines how they plan to modify the risk assessment approach to better include all elements of risk while incorporating our findings and recommendations and those of HSSAI. Moreover, these officials stated that they have completed significant work with Sandia National Laboratory with the goal of including economic consequences in their risk-tiering approach. They said that the final results of this effort to include economic consequences will be available in the summer of 2014. With regard to threat and vulnerability, ISCD officials said that they have been working with multiple DHS components and agencies, including the Transportation Security Administration and the Coast Guard, to see how they consider threat and vulnerability in their risk assessment models. ISCD officials said that they anticipate that the changes to the risk-tiering approach should be completed within the next 12 to 18 months. We plan to verify this information as part of our recommendation follow-up process. DHS has begun to take action to lessen the time it takes to review site security plans, which could help DHS reduce the backlog of plans awaiting review. For the third step of the CFATS process, ISCD is to review facility security plans and their procedures for securing these facilities. Under the CFATS rule, once a facility is assigned a final tier, it is to submit a site security plan or participate in an alternative security program in lieu of a site security plan. 
The security plan is to describe security measures to be taken and how such measures are to address applicable risk-based performance standards. After ISCD receives the site security plan, the plan is reviewed by teams of ISCD employees (i.e., physical, cyber, chemical, and policy specialists), contractors, and ISCD inspectors. If ISCD finds that the requirements are satisfied, ISCD issues a letter of authorization to the facility. ISCD is then to inspect the facility to determine if the security measures implemented at the site comply with the facility's authorized plan. If ISCD determines that the site security plan is in compliance with the CFATS regulation, ISCD approves the site security plan and issues a letter of approval to the facility, and the facility is to implement the approved plan. In April 2013, we reported that it could take another 7 to 9 years before ISCD would be able to complete reviews of the approximately 3,120 plans in its queue at that time. As a result, we estimated that the CFATS regulatory regime, including compliance inspections (discussed in the next section), would likely not be implemented for 8 to 10 years. We also noted in April 2013 that ISCD had revised its process for reviewing facilities' site security plans. ISCD officials stated that they viewed ISCD's revised process as an improvement because, among other things, teams of experts reviewed parts of the plans simultaneously rather than sequentially, as had occurred in the past. In April 2013, ISCD officials said that they were exploring ways to expedite the process, such as streamlining inspection requirements. 
In February 2014, ISCD officials told us that they are taking a number of actions intended to lessen the time it takes to complete reviews of the remaining plans, including the following: providing updated internal guidance to inspectors and ISCD staff; updating the internal case management system; providing updated external guidance to facilities to help them better prepare their site security plans; conducting inspections using one or two inspectors at a time over the course of 1 day, rather than multiple inspectors over the course of several days; conducting pre-inspection calls to the facility to help resolve technical issues beforehand; creating and leveraging the use of corporate inspection documents (i.e., documents for companies that have over seven regulated facilities in the CFATS program); supporting the use of alternative security programs to help clear the backlog of security plans because, according to DHS officials, alternative security plans are easier for some facilities to prepare and use; and taking steps to streamline and revise some of the online data collection tools, such as the site security plan, to make the process faster. It is too soon to tell whether DHS's actions will significantly reduce the amount of time needed to resolve the backlog of site security plans because these actions have not yet been fully implemented. In April 2013, we also reported that DHS had not finalized the personnel surety aspect of the CFATS program. The CFATS rule includes a risk-based performance standard for personnel surety, which is intended to provide assurance that facility employees and other individuals with access to the facility are properly vetted and cleared for access to the facility. 
In implementing this provision, we reported that DHS intended to (1) require facilities to perform background checks on and ensure appropriate credentials for facility personnel and, as appropriate, visitors with unescorted access to restricted areas or critical assets, and (2) check for terrorist ties by comparing certain employee information with its terrorist screening database. However, as of February 2014, DHS had not finalized its information collection request that defines how the personnel surety aspect of the performance standards will be implemented. Thus, DHS is currently approving facility security plans conditionally, whereby plans are not to be finally approved until the personnel surety aspect of the program is finalized. According to ISCD officials, once the personnel surety performance standard is finalized, they plan to reexamine each conditionally approved plan. They would then make final approval as long as ISCD had assurance that the facility was in compliance with the personnel surety performance standard. As an interim step, in February 2014, DHS published a notice about its Information Collection Request (ICR) for personnel surety to gather information and comments prior to submitting the ICR to the Office of Management and Budget (OMB) for review and clearance. According to ISCD officials, it is unclear when the personnel surety aspect of the CFATS program will be finalized. In our prior work, we also reported on issues concerning a biometric access control system--which consists of technology that determines an individual's identity by detecting and matching unique physical or behavioral characteristics, such as fingerprint or voice patterns, as a means of verifying personal identity--and its usefulness with regard to the CFATS program. We recommended that DHS take steps to resolve these issues, including completing a security assessment that includes addressing internal control weaknesses, among other things. 
The explanatory statement accompanying the Consolidated Appropriations Act, 2014, directed DHS to complete the recommended security assessment. As of February 2014, DHS had not yet done the assessment, and although DHS had taken some steps to conduct an internal control review, it had not corrected all the control deficiencies identified in our report. DHS reports that it has begun to perform compliance inspections at regulated facilities. The fourth step in the CFATS process is compliance inspections, by which ISCD determines if facilities are employing the measures described in their site security plans. During the August 1, 2013, hearing on the West, Texas, explosion, the Director of the CFATS program stated that ISCD planned to begin conducting compliance inspections in September 2013 for facilities with approved site security plans. The Director further noted that the inspections would generally be conducted approximately 1 year after plan approval. According to ISCD, as of February 24, 2014, ISCD had conducted 12 compliance inspections. ISCD officials stated that they have considered using third-party nongovernmental inspectors to conduct inspections but thus far do not have any plans to do so. In closing, we anticipate providing oversight of the issues outlined above and look forward to helping this and other committees of Congress continue to oversee the CFATS program and DHS's progress in implementing this program. Currently, the explanatory statement accompanying the Consolidated and Further Continuing Appropriations Act, 2013, requires GAO to continue its ongoing effort to examine the extent to which DHS has made progress and encountered challenges in developing CFATS. Additionally, once the CFATS program begins performing and completing a sufficient number of compliance inspections, we are mandated to review those inspections along with various aspects of them. 
Moreover, Ranking Member Thompson of the Committee on Homeland Security has requested that we examine, among other things, DHS efforts to assess information on facilities that submit data but that DHS ultimately decides are not to be covered by the program. Chairman Meehan, Ranking Member Clarke, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this and our prior work included John F. Mortin, Assistant Director; Jose Cardenas, Analyst-in-Charge; Chuck Bausell; Michele Fejfar; Jeff Jensen; Tracey King; Marvin McGill; Jessica Orr; Hugh Paquette; and Ellen Wolfe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Facilities that produce, store, or use hazardous chemicals could be of interest to terrorists intent on using toxic chemicals to inflict mass casualties in the United States. As required by statute, DHS issued regulations establishing standards for the security of these facilities. DHS established the CFATS program to assess risk at facilities covered by the regulations and inspect them to ensure compliance. In February 2014, legislation was introduced related to several aspects of the program. This statement provides observations on DHS efforts related to the CFATS program. It is based on the results of previous GAO reports in July 2012 and April 2013, with selected updates conducted in February 2014. In conducting the earlier work, GAO reviewed DHS reports and plans on the program and interviewed DHS officials. In addition, GAO interviewed DHS officials to update information. In managing its Chemical Facility Anti-Terrorism Standards (CFATS) program, the Department of Homeland Security (DHS) has a number of efforts underway to identify facilities that are covered by the program, assess risk and prioritize facilities, review and approve facility security plans, and inspect facilities to ensure compliance with security regulations. Identifying facilities. DHS has begun to work with other agencies to identify facilities that should have reported their chemical holdings to CFATS but may not have done so. DHS initially identified about 40,000 facilities by publishing a CFATS rule requiring that facilities with certain types of chemicals report the types and quantities of these chemicals. However, a chemical explosion in West, Texas, last year demonstrated the risk posed by chemicals covered by CFATS. Subsequent to this incident, the President issued Executive Order 13650, which was intended to improve chemical facility safety and security in coordination with owners and operators.
Under the executive order, a federal working group is sharing information to identify additional facilities that are to be regulated under CFATS, among other things. Assessing risk and prioritizing facilities. DHS has begun to enhance its ability to assess risks and prioritize facilities. DHS assessed the risks of facilities that reported their chemical holdings in order to determine which ones would be required to participate in the program and subsequently develop site security plans. GAO's April 2013 report found weaknesses in multiple aspects of the risk assessment and prioritization approach and made recommendations to review and improve this process. In February 2014, DHS officials told us they had begun to take action to revise the process for assessing risk and prioritizing facilities. Reviewing security plans. DHS has also begun to take action to speed up its reviews of facility security plans. Per the CFATS regulation, DHS was to review security plans and visit the facilities to make sure their security measures met the risk-based performance standards. GAO's April 2013 report found a 7- to 9-year backlog for these reviews and visits, and DHS has begun to take action to expedite these activities. As a separate matter, one of the performance standards--personnel surety, under which facilities are to perform background checks and ensure appropriate credentials for personnel and visitors as appropriate--is being developed. As of February 2014, DHS has reviewed and conditionally approved facility plans pending final development of the personnel surety performance standard. Inspecting to verify compliance. In February 2014, DHS reported it had begun to perform inspections at facilities to ensure compliance with their site security plans. According to DHS, these inspections are to occur about 1 year after facility site security plan approval.
Given the backlog in plan approvals, this process has started recently and GAO has not yet reviewed this aspect of the program. In a July 2012 report, GAO recommended that DHS measure its performance implementing actions to improve its management of CFATS. In an April 2013 report, GAO recommended that DHS enhance its risk assessment approach to incorporate all elements of risk, conduct a peer review, and gather feedback on its outreach to facilities. DHS concurred with these recommendations and has taken actions or has actions underway to address them. GAO provided a draft of the updated information to DHS for review, and DHS confirmed its accuracy.
The FCS concept is part of a pervasive change to what the Army refers to as the Future Force. The Army is reorganizing its current forces into modular brigade combat teams, meaning troops can be deployed on different rotational cycles as a single team or as a cluster of teams. The Future Force is designed to transform the Army into a more rapidly deployable and responsive force and to enable the Army to move away from the large division-centric structure of the past. Each brigade combat team is expected to be highly survivable and the most lethal brigade-sized unit the Army has ever fielded. The Army expects FCS-equipped brigade combat teams to provide significant warfighting capabilities to DOD's overall joint military operations. The Army is implementing its transformation plans at a time when current U.S. ground forces are playing a critical role in the ongoing conflicts in Iraq and Afghanistan. The FCS family of weapons includes 18 manned and unmanned ground vehicles, air vehicles, sensors, and munitions that will be linked by an information network. These vehicles, weapons, and equipment will comprise the majority of the equipment needed for a brigade combat team. The Army plans to buy 15 brigades worth of FCS equipment by 2025. We have frequently reported on the importance of using a solid, executable business case before committing resources to a new product development. In its simplest form, this is evidence that (1) the warfighter's needs are valid and can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources--that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when needed. At the heart of a business case is a knowledge-based approach to product development that demonstrates high levels of knowledge before significant commitments are made. In essence, knowledge supplants risk over time. 
This building of knowledge can be described as three levels or knowledge points that should be attained over the course of a program: First, at program start, the customer's needs should match the developer's available resources--mature technologies, time, and funding. An indication of this match is the demonstrated maturity of the technologies needed to meet customer needs. Second, about midway through development, the product's design should be stable and demonstrate that it is capable of meeting performance requirements. The critical design review is that point in time because it generally signifies when the program is ready to start building production-representative prototypes. Third, by the time of the production decision, the product must be shown to be producible within cost, schedule, and quality targets; to have demonstrated its reliability; and to perform as needed through realistic system-level testing. The three knowledge points are related, in that a delay in attaining one delays the points that follow. Thus, if the technologies needed to meet requirements are not mature, design and production maturity will be delayed.
To develop the information on the Future Combat System program's progress toward meeting established goals, the contribution of critical technologies and complementary systems, and the estimates of cost and affordability, we interviewed officials of the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Army G-8; the Office of the Under Secretary of Defense (Comptroller); the Secretary of Defense's Cost Analysis Improvement Group; the Director of Operational Test and Evaluation; the Assistant Secretary of the Army (Acquisition, Logistics, and Technology); the Army's Training and Doctrine Command; Surface Deployment and Distribution Command; the Program Manager for the Future Combat System (Brigade Combat Team); the Future Combat System Lead Systems Integrator; and other contractors. We reviewed, among other documents, the Future Combat System's Operational Requirements Document, the Acquisition Strategy Report, the Baseline Cost Report, the Critical Technology Assessment and Technology Risk Mitigation Plans, and the Integrated Master Schedule. We attended and/or reviewed the results of the FCS System of Systems Functional Review, In-Process Reviews, Board of Directors Reviews, and multiple system demonstrations. In our assessment of the FCS, we used the knowledge-based acquisition practices drawn from our large body of past work as well as DOD's acquisition policy and the experiences of other programs. We conducted this work in response to the National Defense Authorization Act for Fiscal Year 2006, which requires GAO to annually report on the product development phase of the FCS acquisition. We performed our review from June 2005 to March 2006 in accordance with generally accepted auditing standards. An improved business case for the FCS program is essential to help ensure that the program is successful in the long run.
The FCS is unusual in that it is developing 18 systems and a network under a single program office and lead system integrator in the same amount of time that it would take to develop a single system. It also started development with less knowledge than called for by best practices and DOD policy. The Army has made significant progress defining FCS's system of systems requirements, particularly when taking into account the daunting number of them involved--nearly 11,500 at this level. Yet system-level requirements are not yet stabilized and will continue to change, postponing the needed match between requirements and resources. Now, the Army and its contractors are working to complete the definition of system-level requirements, and the challenge is in determining whether those requirements are technically feasible and affordable. Army officials say it is almost certain that some FCS system-level requirements will have to be modified, reduced, or eliminated; the only uncertainty is by how much. We have previously reported that unstable requirements can lead to cost, schedule, and performance shortfalls. Once the Army gains a better understanding of the technical feasibility and affordability of the system-level requirements, trade-offs between the developer and the warfighter will have to be made, and the ripple effect of such trade-offs on key program goals will have to be reassessed. Army officials have told us that it will be 2008 before the program reaches the point, in terms of stable requirements, that it should have reached before it started in May 2003. Development of concrete program requirements depends in large part on stable, fully mature technologies. Yet, according to the latest independent assessment, the Army has not fully matured any of the technologies critical to FCS's success. Some of FCS's critical technologies may not reach a high level of maturity until the final major phase of acquisition, the start of production.
The Army considers a lower level of demonstration as acceptable maturity, but even against this standard, only about one-third of the technologies are mature. We have reported that going forward into product development without demonstrating mature technologies increases the risk of cost growth and schedule delays throughout the life of the program. The Army is also facing challenges with several of the complementary programs considered essential for meeting FCS's requirements. Some are experiencing technology difficulties, and some have not been fully funded. These difficulties underscore the gap between requirements and available resources that must be closed if the FCS business case is to be executable. Technology readiness levels (TRL) are measures pioneered by the National Aeronautics and Space Administration and adopted by DOD to determine whether technologies were sufficiently mature to be incorporated into a weapon system. Our prior work has found TRLs to be a valuable decision-making tool because they can presage the likely consequences of incorporating a technology at a given level of maturity into a product development. The maturity levels range from paper studies (level 1), to prototypes tested in a realistic environment (level 7), to an actual system proven in mission operations (level 9). Successful DOD programs have shown that critical technologies should be mature to at least a TRL 7 before the start of product development. In the case of the FCS program, the latest independent technology assessment shows that none of the critical technologies are at TRL 7, and only 18 of the 49 technologies currently rated have demonstrated TRL 6, defined as prototype demonstration in a relevant environment. None of the critical technologies may reach TRL 7 until the production decision in fiscal year 2012, according to Army officials. Projected dates for FCS technologies to reach TRL 6 have slipped significantly since the start of the program. 
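The maturity shortfall described above can be tallied with simple arithmetic. This is a minimal sketch using only the counts cited in the text (49 rated critical technologies, 18 demonstrated at TRL 6, none at TRL 7); the TRL scale itself is the 1-9 NASA/DOD scale described in the paragraph above.

```python
# Tally of FCS critical-technology maturity, using the counts cited in
# the text. TRL 7 (prototype tested in a realistic environment) is the
# best-practice threshold for starting product development.
rated = 49      # critical technologies with a current TRL rating
at_trl6 = 18    # demonstrated in a relevant environment
at_trl7 = 0     # none have reached the best-practice threshold

share_trl6 = at_trl6 / rated
print(f"At TRL 6: {share_trl6:.0%}")   # roughly one-third, as the text notes
print(f"At TRL 7: {at_trl7 / rated:.0%}")
```

The gap between the best-practice threshold (TRL 7) and the Army's own lower standard (TRL 6) is why the text can say both that "none" of the technologies are mature and that "about one-third" are.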
In the 2003 technology assessment, 87 percent of FCS's critical technologies were projected to be mature to a TRL 6 by 2005. When the program was reassessed in April 2005, only 31 percent of the technologies were expected to mature to a TRL 6 by 2005, and not all of the technologies are expected to reach that level until 2009. The knowledge deficits for requirements and technologies have created enormous challenges for devising an acquisition strategy that can demonstrate the maturity of design and production processes. Several efforts within the FCS program are facing significant problems that may eventually involve reductions in promised capabilities and may lead to cost overruns and schedule delays. Even if requirements setting and technology maturation proceed without incident, FCS design and production maturity will still not be demonstrated until after the production decision is made. Production is the most expensive phase in which to resolve design or other problems. The Army's acquisition strategy for FCS does not reflect a knowledge-based approach. Figure 1 shows how the Army's strategy for acquiring FCS involves concurrent development, design reviews that occur late, and other issues that are out of alignment with the knowledge-based approach outlined in DOD policy. Ideally, the preliminary design review occurs at or near the start of product development. Doing so can help reveal key technical and engineering challenges and can help determine if a mismatch exists between what the customer wants and what the product developer can deliver. An early preliminary design review is intended to help stabilize cost, schedule, and performance expectations. The critical design review ideally occurs midway into the product development phase. The critical design review should confirm that the system design is stable enough to build production-representative prototypes for testing.
The FCS acquisition schedule indicates several key issues: The program did not have the basic knowledge needed for program start in 2003. While the preliminary design review normally occurs at or near the start of product development, the Army has scheduled it in fiscal year 2008, about 5 years after the start of product development. Instead of the sequential development of knowledge, major elements of the program are being conducted concurrently. The critical design review is scheduled in fiscal year 2010, just 2 years after the scheduled preliminary review and the planned start of detailed design. The timing of the design reviews is indicative of how late knowledge will be attained in the program, assuming all goes according to plan. The critical design review is also scheduled just 2 years before the initial FCS low-rate production decision in fiscal year 2012, leaving little time for product demonstration and correction of any issues that are identified at that time. The FCS program is thus susceptible to late-cycle churn, which refers to the additional--and unanticipated--time, money, and effort that must be invested to overcome problems discovered late through testing. The total cost for the FCS program, now estimated at $160.7 billion (then-year dollars), has climbed 76 percent from the Army's first estimate. Because uncertainties remain regarding FCS's requirements and the Army faces significant challenges in technology and design maturity, we believe the Army's latest cost estimate still lacks a firm knowledge base. Furthermore, this latest estimate does not include complementary programs that are essential for FCS to perform as intended, or all of the necessary funding for FCS spin-outs. The Army has taken some steps to help manage the growing cost of FCS, including establishing cost ceilings or targets for development and production; however, program officials told us that setting cost limits may result in accepting lower capabilities.
As FCS's higher costs are recognized, it remains unclear whether the Army will have the ability to fully fund the planned annual procurement costs for the FCS current program of record. FCS affordability depends on the accuracy of the cost estimate, the overall level of development and procurement funding available to the Army, and the level of competing demands. At the start of product development, FCS program officials estimated that the program would require about $20 billion in then-year dollars for research, development, testing, and evaluation and about $72 billion to procure the FCS systems to equip 15 brigade combat teams. At that time, program officials could only derive the cost estimate on the basis of what they knew then--requirements were still undefined and technologies were immature. The total FCS program is now expected to cost $160.7 billion in then-year dollars, a 76 percent increase. Table 1 summarizes the growth of the FCS cost estimate. According to the Army, the current cost estimate is more realistic, better informed, and based on a more reasonable schedule. It accounts for the restructure of the FCS program and its increased scope, the 4-year extension to the product development schedule, the reintroduction of four systems that had been previously deferred, and the addition of a spin-out concept whereby mature FCS capabilities would be provided, as they become available, to current Army forces. It also reflects a rate of production reduced from an average of 2 brigade combat teams per year to an average of 1.5 brigades per year. Instead of completing all 15 brigades by 2020, the Army would complete production in 2025. This cost estimate has also benefited from progress made in defining system of systems requirements. Figure 2 compares the funding profiles for the original program and for the latest restructured program. 
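The reported cost growth can be roughly checked from the figures in the text. This is a sketch using the rounded amounts cited above ($20 billion for development plus $72 billion for procurement at program start); the 76 percent figure in the text reflects GAO's unrounded estimates, so the rounded inputs here reproduce it only approximately.

```python
# Original FCS estimate at the start of product development
# (then-year dollars, in billions; rounded figures from the text).
rdte_2003 = 20.0          # research, development, test, and evaluation
procurement_2003 = 72.0   # 15 brigade combat teams
original_total = rdte_2003 + procurement_2003  # ~$92B

current_total = 160.7     # latest program estimate (then-year $B)

growth = (current_total - original_total) / original_total
print(f"Cost growth: {growth:.0%}")  # prints "Cost growth: 75%", close to the 76% reported
```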
The current funding profile is lower than the original through fiscal year 2013, but is substantially higher than the original after fiscal year 2013. It still calls for making substantial investments before key knowledge has been demonstrated. Stretching out FCS development by 4 years freed up about $9 billion in funding through fiscal year 2011 for allocation to other Army initiatives. Originally, FCS annual funding was not to exceed $10 billion in any one year. Now, the cost estimate is expected to exceed $10 billion in each of 9 years. While it is a more accurate reflection of program costs than the original estimate, the latest estimate is still based on a low level of knowledge about whether FCS will work as intended. The cost estimate has not been independently validated, as called for by DOD's acquisition policy. The Cost Analysis Improvement Group will not release its updated independent estimate until spring 2006, after the planned Defense Acquisition Board review of the FCS program. The latest cost estimate does not include all the costs that will be needed to field FCS capabilities. For instance, costs for the 52 essential complementary programs are separate, and some of those costs could be substantial. For example, the costs of the Joint Tactical Radio System Clusters 1 and 5 programs were expected to be about $32.6 billion (then-year dollars). Some complementary programs, such as the Mid-Range Munition and Javelin Block II, are currently not funded for their full development. These and other unfunded programs would have to compete for already tight funding. Procurement of the spin-outs from the FCS program to current Army forces is not yet entirely funded. Procuring the FCS items expected to be spun out to current forces is expected to cost about $19 billion, and the needed installation kits may add $4 billion. Adding these items brings the total required FCS investment to the $200 billion range.
Through fiscal year 2006, the Army will have budgeted over $8 billion for FCS development. Through fiscal year 2008, when the preliminary design review is held, the amount budgeted for FCS will total over $15 billion. By the time the critical design review is held in 2010, about $22 billion will have been budgeted. By the time of the production decision in 2012, about $27 billion will have been budgeted. The affordability of the FCS program depends on several key assumptions. First, the program must proceed without exceeding its currently projected costs. Second, the Army's annual procurement budget--not including funds specifically allocated for the modularity initiative--is expected to grow from between $11 billion to $12 billion in fiscal year 2006 to at least $20 billion by fiscal year 2011. The large annual procurement costs for FCS are expected to begin in fiscal year 2012, which is beyond the current Future Years Defense Plan period (fiscal years 2006-2011). FCS procurement will represent about 60-70 percent of Army procurement from fiscal years 2014 to 2022. This situation is typically called a funding bow wave. As it prepares the next Defense Plan, the Army will face the challenge of allocating sufficient funding to meet the increasing needs for FCS procurement in fiscal years 2012 and 2013. If all the needed funding cannot be identified, the Army will have to consider reducing the FCS procurement rate or delaying or reducing items to be spun out to current Army forces. However, reducing the FCS procurement rate would increase the FCS unit costs and extend the time needed to deploy FCS-equipped brigade combat teams. Given the risks facing the FCS program, the business arrangements made for carrying out the program will be critical to protecting the government's interests. To manage the program, the Army is using a lead system integrator (LSI), Boeing. As LSI, Boeing carries greater responsibilities than a traditional prime contractor. 
The Army is in the process of finalizing a new Federal Acquisition Regulation (FAR)-based contract in response to concerns that the previous Other Transaction Agreement was not the best match for a program of FCS's size and risks. This contract will establish the expectations, scope, deliverables, and incentives that will drive the development of the FCS. From the outset of the FCS program, the Army has employed a management approach that centers on the LSI. The Army did not believe it had the resources or flexibility to field a program as complex as FCS under the aggressive timeline established by the then-Army Chief of Staff. Although there is no complete consensus on the definition of LSI, generally, it is a prime contractor with increased responsibilities. These responsibilities may include greater involvement in requirements development, design and source selection of major system and subsystem subcontractors. The government has used the LSI approach on other programs that require system-of-systems integration. The FCS program started as a joint Defense Advanced Research Projects Agency and Army program in 2000. In 2002, the Army competitively selected Boeing as the LSI for the concept technology demonstration phase of FCS. The Army's intent is to maintain the LSI for the remainder of FCS development. Boeing and the Army established a relationship to work in what has become known as a "one-team" management style with several first tier subcontractors to develop, manage, and execute all aspects of the FCS program. For example, Boeing's role as LSI extends beyond that of a traditional prime contractor and includes some elements of a partner to the government in ensuring the design, development, and prototype implementation of the FCS network and family of systems. 
In this role, Boeing is responsible for (1) engineering a system of systems solution, (2) competitive selection of industry sources for development of the individual systems and subsystems, and (3) integrating and testing these systems to satisfy the requirements of the system of systems specifications. Boeing is also responsible for the actual development of two critical elements of the FCS information network--the System of Systems Common Operating Environment and the Warfighter-Machine Interface. The Army participates in program decisions such as make/buy and competitive selection decisions, and it may disapprove any action taken under these processes. The decision structure of the program is made up of several layers of Integrated Product Teams. These teams are co-chaired by Army and LSI representatives. Government personnel participate in each of the integrated product teams. This collaborative structure is intended to force decision making to the lowest level in the program. Decisions can be elevated to the program manager level, and ultimately the Army has final decision authority. The teams also include representation of the Army user community, whose extensive presence in the program is unprecedented. The advantages of using an LSI approach on a program like FCS include the ability of the contractor to know, understand, and integrate functions across the various FCS platforms. Thus, the LSI has the ability to facilitate movement of requirements and make trade-offs across platforms. This contrasts with past practices of focusing on each platform individually. However, the extent of contractor responsibility in so many aspects of the FCS program management process, including responsibility for making numerous cost and technical tradeoffs and for conducting at least some of the subcontractor source selections, is also a potential risk. 
As an example, many of the subcontractor source selections are for major weapon systems that, in other circumstances, would have been conducted by an Army evaluation team, an Army Contracting Officer, and a senior-level Army source selection authority. These decisions, including procurement decisions for major weapons systems, are now being made by the LSI with Army involvement. This level of responsibility, as with other LSI responsibilities in the program management process, requires careful government oversight to ensure that the Army's interests are adequately protected now and in the future. Thus far, the Army has been very involved in the management of the program and in overseeing the LSI. It is important that as the program proceeds, the Army continue to be vigilant about maintaining control of the program and that organizational conflicts of interest are avoided, such as can arise when the LSI is also a supplier. As discussed in the next section, the Army intends the new contract to provide additional protection against potential conflicts. The Army and Boeing entered into a contractual instrument called an Other Transaction Agreement (OTA). The purpose of the OTA was to encourage innovation and to use its wide latitude in tailoring business, organizational, and technical relationships to achieve the program goals. The original OTA was modified in May 2003 and fully finalized in December 2003 for the Systems Development and Demonstration phase of the FCS program. The latest major modification to the OTA, to implement the 2004 program restructuring, was finalized in March 2005. As you know, questions have been raised about the appropriateness of the Army's use of an OTA for a program as large and risky as FCS. The Airland Subcommittee held a hearing in March 2005 which addressed this among other issues.
In particular, concern has been raised about the protection of the government's interests under the OTA arrangement and the Army's choice to not include standard FAR clauses in the OTA. In April 2005, the OTA was modified by the Army to incorporate the procurement integrity, Truth in Negotiations, and Cost Accounting Standards clauses. In April 2005, the Secretary of the Army decided that the Army should convert the OTA to a FAR-based contract. A request for proposals was issued by the Army on August 15, 2005. An interim letter contract was issued on September 23, 2005. The Systems Development and Demonstration work through September 2005 will be accounted for under the OTA and all future work under the FAR-based contract. Boeing/SAIC and all of the FCS subcontractors were to submit a new certifiable proposal for the remainder of Systems Development and Demonstration, which will be the subject of negotiations with the Army. The Army expects the content of the program--its statement of work--to remain the same, and it does not expect the cost, schedule, and performance of the overall Systems Development and Demonstration effort to change materially. The target date for completion of the finalized FAR contract is March 28, 2006. In the coming months, we will be taking a close look at the new contract as part of our continuing work on FCS that is now mandated by the Defense Authorization Act for Fiscal Year 2006. The FAR-based contract is expected to include standard FAR clauses, including the Truth in Negotiations and Cost Accounting Standards clauses. The letter contract includes Organizational Conflict of Interest clauses whereby Boeing and SAIC cannot compete for additional FCS subcontracts. Also, other current subcontractors can compete for work only if they do not prepare the request for proposals or participate in the source selection process. The last major revision of the OTA in March 2005 had a total value of approximately $21 billion.
Through September 2005, the Army and the LSI estimate that about $3.3 billion will be chargeable to the OTA. The FAR-based contract will cover all activity after September 2005 and is expected to have a value of about $17.4 billion. Both the OTA and the FAR-based contract will be cost-plus-fixed-fee contracts with additional incentive fees. According to the Army, the fee arrangement is designed to address the unique relationship between the Army and the LSI and to acknowledge their "shared destiny" by providing strategic incentives for the LSI to prove out technologies, integrate systems, and move the program forward to production, at an affordable cost and on schedule. In the OTA, the annual fixed fee was set at 10 percent of estimated cost and the incentive fee available was 5 percent. The Army plans to change the fee structure for the FCS program in the new contract. The request for proposals for the new contract proposed a 7 percent fixed fee and an 8 percent incentive fee. The OTA established 10 distinct events at which LSI performance will be evaluated against predetermined performance, cost, and schedule criteria. (Those events are expected to be retained in the FAR contract.) One event has already occurred--the System of Systems Functional Requirements Review was held in August 2005. The next event is called the Capabilities Maturity Review, and it is expected to occur in June or July 2006. As the details are worked out, it is important that the new contract encourage meaningful demonstrations of knowledge and preserve the government's ability to act on knowledge should the program progress differently than planned. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or members of the Subcommittee may have. For future questions about this statement, please contact me at (202) 512-4841. Individuals making key contributions to this statement include Robert L. Ackley, Lily J. Chin, Noah B. Bleicher, Marcus C.
Ferguson, William R. Graveline, Guisseli Reyes, Michael J. Hesse, John P. Swain, Robert S. Swierczek, and Carrie R. Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Future Combat System (FCS) is a networked family of weapons and other systems at the forefront of the Army's efforts to become a lighter, more agile, and more capable combat force. When complementary programs are included, projected investment costs for FCS are estimated to be on the order of $200 billion. FCS's cost is of concern given that developing and producing new weapon systems is among the largest investments the government makes, and FCS adds significantly to that total. Over the last five years, the Department of Defense (DOD) doubled its planned investments in such systems from $700 billion in 2001 to $1.4 trillion in 2006. At the same time, research and development costs on new weapons continue to grow on the order of 30 to 40 percent. FCS will be competing for significant funds at a time when federal fiscal imbalances are exerting great pressures on discretionary spending. In the absence of more money being available, FCS and other programs must be executable within projected resources. Today, I would like to discuss (1) the business case needed for FCS to be successful and (2) related business arrangements that support that case. There are a number of compelling aspects of the FCS program, and it is hard to argue with the program's goals. However, the elements of a sound business case for such an acquisition program--firm requirements, mature technologies, a knowledge-based acquisition strategy, a realistic cost estimate, and sufficient funding--are not yet present. FCS began product development prematurely in 2003. Since then, the Army has made several changes to improve its approach for acquiring FCS. Yet, today, the program remains a long way from having the level of knowledge it should have had before starting product development. FCS has all the markers for risks that would be difficult to accept for any single system, much less a complex, multi-system effort.
These challenges are even more daunting in the case of FCS not only because there are so many of them but because FCS represents a new concept of operations that is predicated on technological breakthroughs. Thus, technical problems, which accompany immaturity, not only pose traditional risks to cost, schedule, and performance; they pose risks to the new fighting concepts envisioned by the Army. Many decisions can be anticipated that will involve trade-offs the government will make in the program. Facts of life, like technologies not working out, reductions in available funds, and changes in performance parameters, must be anticipated. It is important, therefore, that the business arrangements for carrying out the FCS program--primarily in the nature of the development contract and in the lead system integrator (LSI) approach--preserve the government's ability to adjust course as dictated by these facts of life. At this point, the $8 billion to be spent on the program through fiscal year 2006 is a small portion of the $200 billion total. DOD needs to guard against letting the buildup in investment limit its decision-making flexibility as essential knowledge regarding FCS becomes available. As the details of the Army's new FCS contract are worked out and its relationship with the LSI evolves, it will be important to ensure that the basis for making additional funding commitments is transparent. Accordingly, markers for gauging knowledge must be clear, incentives must be aligned with demonstrating such knowledge, and provisions must be made for the Army to change course if the program progresses differently than planned.
In addition to the 50-50 requirement in 10 U.S.C. § 2466, the following provisions directly affect the reporting of workload funding allocations to the public and private sectors: Section 2460(a) of Title 10 defines "depot-level maintenance and repair" as material maintenance or repair requiring the overhaul, upgrading, or rebuilding of parts, assemblies, or subassemblies and the testing and reclamation of equipment as necessary, regardless of the source of funds for the maintenance or repair, or the location at which the maintenance or repair is performed. This term also includes: (1) all aspects of software maintenance classified by DOD as of July 1, 1995 as depot-level maintenance and repair; and (2) interim contractor support or contractor logistics support (or any similar contractor support) to the extent that such support is for the performance of services described in the preceding sentence. Section 2460(b)(1) excludes from the definition of depot maintenance the nuclear refueling of an aircraft carrier, and the procurement of major modifications or upgrades of weapon systems that are designed to improve program performance, although a major upgrade program covered by this exception could continue to be performed by private- or public-sector entities. Section 2460(b)(2) also excludes from the definition of depot-level maintenance the procurement of parts for safety modifications, although the term does include the installation of parts for safety modifications. Depot maintenance funding involving certain public-private partnerships is exempt from the 50 percent limitation.
Section 2474(f) of Title 10 provides that amounts expended for the performance of depot-level maintenance and repair by nonfederal government personnel at Centers of Industrial and Technical Excellence under any contract entered into during fiscal years 2003 through 2009 shall not be counted when applying the 50 percent limitation in Section 2466(a) if the personnel are provided by entities outside DOD pursuant to a public-private partnership. In its annual 50-50 report to Congress, DOD identifies this funding as a separate category called "exempt." Section 2466(b) allows the Secretary of Defense to waive the 50 percent limitation if he determines the waiver is necessary for national security, and he submits the notification of waiver together with the reasons for the waiver to Congress. Waivers were previously submitted for the Air Force for fiscal years 2000 and 2001. OSD issues guidance to the military departments for reporting public-private workload funding allocations. The guidance's definition of "depot-level maintenance and repair" is consistent with the definition in 10 U.S.C. § 2460. The military services have also issued internal instructions to manage the data collection and reporting process, tailored to their individual organizations and operating environments. Although DOD reported that the military departments complied with the 50-50 requirement for fiscal year 2005, we could not validate compliance because of systemic weaknesses in DOD's financial management systems and persistent deficiencies in the processes used to collect and report 50-50 data. DOD's report provides an approximation of the depot maintenance funding allocation between the public and private sectors but contains some inaccuracies. Our current review showed that 50-50 funding data were not being consistently reported because some maintenance depots were reporting expenditures rather than obligations as directed by OSD guidance.
We also found that amounts associated with interservice depot maintenance work and certain contract agreements between depots and private contractors may not accurately reflect the distribution reported for private- and public-sector funds because visibility over the allocation of these funds is limited. In addition, we found several other errors that resulted in inaccuracies in reported 50-50 data for the Navy and Army. DOD took some actions this year to improve 50-50 reporting. However, our work over the last several years has identified a number of persistent deficiencies, such as inadequate management attention and review, which have affected the quality of reported 50-50 data. While DOD took actions to improve 50-50 reporting this year, DOD has not implemented recommendations we made last year to address these deficiencies. In DOD's April 2006 report to Congress on funding allocations for depot maintenance, all three military departments reported that their private- sector depot maintenance allocation was below the 50 percent limitation for fiscal year 2005. However, we found that the reported data contained inaccuracies. Table 1 shows the reported allocation between the public and private sectors and the exempted workload funding. On the basis of our evaluation of selected 50-50 data, DOD's April 2006 report provides an approximation of depot maintenance funding allocations between the public and private sectors for fiscal year 2005. However, we identified errors in reported workload funding data. The net effects of correcting the data inaccuracies we identified would increase the Army's private-sector funding allocation from 49.4 percent to 50 percent. Identified errors in the Army's data resulted in a total decrease in public-sector funding of $5.9 million and a total increase in private-sector funding of $68.1 million. Appendix II provides additional information on these adjustments. 
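To illustrate the arithmetic behind the Army adjustment, the sketch below recomputes the private-sector share before and after the corrections. The $5.9 million and $68.1 million adjustments are the figures cited above; the base public and private amounts are hypothetical round numbers chosen only so the starting share matches the reported 49.4 percent.

```python
# Illustration of how the corrections described above move the Army's
# private-sector share from 49.4 to 50 percent. The $5.9M and $68.1M
# adjustments come from the report; the base amounts are hypothetical.

def private_share(public, private):
    """Private-sector percentage of total depot maintenance funding."""
    return 100.0 * private / (public + private)

public, private = 3120.0, 3047.0  # $ millions (hypothetical base amounts)
print(f"As reported:  {private_share(public, private):.1f}%")

# Apply the net corrections GAO identified for fiscal year 2005.
adj_public = public - 5.9    # public-sector funding decrease
adj_private = private + 68.1  # private-sector funding increase
print(f"As corrected: {private_share(adj_public, adj_private):.1f}%")
```

The point of the sketch is that a correction of tens of millions of dollars, small against a multibillion-dollar total, is still enough to move a department to the statutory limit when it is already near 50 percent.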
We could not quantify the errors that we identified for the Air Force regarding direct sales agreements. We also continue to identify areas that remain excluded from the Navy's 50-50 reporting. While we found an error in the Marine Corps data, correcting this inaccuracy would not result in changes to the Department of the Navy's funding allocation percentages. We did not conduct a review of all reported 50-50 data; therefore, there may be additional errors, omissions, and inconsistencies that were not identified. Depot maintenance funding data for fiscal year 2005 were not being consistently reported because some maintenance depots were reporting expenditures, rather than obligations as directed by OSD guidance. The reporting of expenditures instead of obligations by some depots presents an inaccurate picture of depot maintenance allocations since the amounts may differ. For the most part, the allocation percentages for public funds represent obligation amounts obtained from the military departments' financial accounting systems. However, in reporting the amount of depot maintenance funds allocated to the private sector, some reporting organizations used expenditures rather than obligations as required by OSD guidance. For example, three depots we visited reported their subcontracted depot-level maintenance work as expenditures rather than obligations. Reasons given by depot officials for reporting expenditures rather than obligations include the following: (1) the workload against obligated funds may not have been fully performed during the fiscal year, and therefore they believed reporting expenditures was a better reflection of the actual workload; (2) they did not know that obligations were to be reported instead of expenditures; and (3) many work orders can be associated with a multiyear contract, so they believed that reporting expenditures would be a better representation of the costs associated with multiyear contracts for the fiscal year in question.
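A minimal sketch of the distortion this practice can create: when a depot reports expenditures, work obligated in one year but performed in the next can drop out of both years' reports. All dollar amounts here are hypothetical.

```python
# Hypothetical sketch of the gap created when a depot reports
# expenditures: a contract obligated in FY2005 but partly expended in
# FY2006 is captured in neither year's 50-50 submission.

obligated_fy2005 = 10.0  # $ millions obligated on a private contract in FY2005
expended_fy2005 = 8.5    # portion of the work actually performed in FY2005
carryover = obligated_fy2005 - expended_fy2005  # performed (expended) in FY2006

# Reporting obligations, as OSD guidance directs: the full amount is
# captured once, in the year the funds were obligated.
reported_by_obligation = {"FY2005": obligated_fy2005, "FY2006": 0.0}

# Reporting expenditures: FY2005 captures only the expended portion, and
# FY2006 omits the carryover because it traces to a FY2005 obligation.
reported_by_expenditure = {"FY2005": expended_fy2005, "FY2006": 0.0}

never_reported = (sum(reported_by_obligation.values())
                  - sum(reported_by_expenditure.values()))
print(f"Private-sector work never reported: ${never_reported:.1f}M")
```

Because the omitted amount is private-sector work, the private share is understated, and the public share correspondingly overstated, in the year of the obligation, with no later correction.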
Accurately reporting carryover work is a problem when the services' data contain both expenditures and obligations. Carryover is work that a depot may "carry over" from one fiscal year to another to ensure a smooth flow of work during the transition between fiscal years. This means that while the funds are obligated in one fiscal year, a certain portion may not be expended until the next fiscal year. When expenditures rather than obligations are reported, we found that the carryover work that is performed in the following year may not be included in either year's 50-50 report. For example, an Army depot official provided us with an estimate of almost $1.5 million that was expended in fiscal year 2006 on a fiscal year 2005 contract obligation. The official stated that this portion of the obligation was not reported in fiscal year 2005 because it was not yet expended, and it would not be reported in fiscal year 2006 because it was expended on a fiscal year 2005 obligation. As a result, the private portion of the service's depot maintenance funds was underreported in the year of the obligation, while the public portion was overreported. Until depot maintenance funding obligations are consistently reported, rather than a combination of expenditures and obligations, inaccurate reporting of the allocation of depot maintenance funding between the public and private sectors will continue. Because DOD has limited visibility over the allocation of private- and public-sector funds in some interservice agreements and direct sales agreements, inaccurate reporting of the depot maintenance workload allocation may result. Interservice workload agreements refer to work that is performed by one component for another. OSD guidance requires that the military departments establish measures to ensure correct accounting of interservice workloads; however, the allocation of these funds may not always be accurately reported.
We found instances where a military service awarded public depot maintenance work to another military service, which then contracted out a portion of that workload to the private sector. The military service awarding the work, as principal owner of the funds, inaccurately reported this as public workload because it had not inquired whether all the awarded work was performed at the public depot. For example, we identified approximately $172,000 of private-sector work that may have been inaccurately reported as public-sector work because the principal owner of the funds did not follow up to determine whether all of the work was performed by the public depot. While we were unable to fully evaluate the extent of inaccurate reporting associated with interservice agreements, until the military departments establish sufficient measures to accurately account for and report their distribution of depot maintenance workload, the 50-50 data reported by DOD may continue to be inaccurate. The limited visibility over direct sales agreements is another reason why the depot maintenance workload allocation may be inaccurately reported to Congress. A direct sales agreement involves private vendors contracting back to a DOD maintenance facility for labor to be performed by DOD employees. OSD guidance requires that sales of articles and services by DOD maintenance depots to entities outside of DOD, when work is accomplished by DOD employees, shall be reported as public-sector work. However, we found that the reporting of the distribution of private- and public-sector workload for direct sales agreements may not be accurate. With a direct sales agreement, there is no requirement for the private vendor to identify and break out the contract costs, such as materials and other factors of production, and allocate them to expenses performed by the private vendor or the public depot.
We found the use of direct sales agreements by the Air Force may have resulted in an overstatement of private-sector funds, with a corresponding understatement of public-sector funds. In addition, we found similar instances in the Army where work performed by the public sector under a direct sales agreement with a private vendor may have been misreported as being performed by the private sector. Although we were unable to fully evaluate the extent to which costs associated with these types of contract agreements were misreported, until private vendors break out direct sales agreement costs by the private and public sectors, DOD's reporting of 50-50 funding allocation may remain inaccurate. We identified several other errors that resulted in inaccurately reported 50-50 data for the Navy and Army. As we reported in previous years, we identified two areas that continue to be excluded from the Navy's 50-50 reporting. First, the Navy did not report any depot maintenance work on aircraft carriers performed while nuclear refueling. Navy officials cited the exclusion of nuclear refueling in 10 U.S.C. § 2460(b)(1) and guidance from the General Counsel's office in the Department of the Navy as reasons for not including $115 million in depot maintenance work performed on aircraft carriers while nuclear refueling. However, we continue to believe that depot repairs not directly associated with the task of nuclear refueling should be reported. Second, the Navy, as in prior years, continues to inconsistently report ship-inactivation activities related to the servicing and preservation of systems and equipment before ships are placed in storage or in an inactive status. The Navy did not report $14.4 million of private-sector allocations for inactivation work on nonnuclear ships, even though it reported inactivation activities on nuclear ships. The Navy contends that the work for nuclear ship inactivation is complex while the work for nonnuclear ships is not.
We continue to maintain that all such depot-level work should be reported, since the statute and implementing guidance do not make a distinction based on complexity. In addition, our review of the Marine Corps data found that it underreported the private-sector total and overreported the public-sector total by about $1.5 million. This amount was for depot-level maintenance that was performed in a public depot by contractor personnel, which was misreported as public sector rather than private sector. We also identified several data inaccuracies in the Army's 50-50 data. For example, one Army depot failed to include approximately $31 million of private contract work it had outsourced for depot maintenance in its 50-50 report. An Army official said that they had not known that this type of contract work should be included in 50-50 reporting, but they now plan to include it in future submissions. Our review also determined that several Army omissions, totaling approximately $53 million, were due to misinterpretation of the guidance regarding modifications and remanufacturing. The OSD guidance provides information about what to include and not to include in reporting depot maintenance with regard to upgrades, modifications, and remanufacturing. An Army official acknowledged that there has been confusion over what to report for 50-50 depot maintenance and stated the Army is in the draft stages of updating the Army's Depot Maintenance Workload Distribution Reporting Procedures. In addition, the Army's 50-50 data contained errors totaling approximately $4 million due to changes in program costs. Finally, our review of the Army's data found miscellaneous errors, including one instance of double counting and the transposition of numbers in some entries. During our review we noted actions taken by OSD and the military services that, while not fully implemented, provided some improvement in the 50-50 reporting process.
For example, OSD, in its 50-50 guidance, added a new requirement that the military services include variance analyses in their submissions of 50-50 data. The services performed variance analyses; however, these were at a very high level and provided little detail on how the fiscal year 2005 allocations differed from the prior year's data. OSD guidance also included a new requirement that the services maintain records and reports for 50-50 data for at least 2 years, although we did find two instances where reporting locations could not provide backup documentation for their 50-50 data. In addition, as in previous years, OSD instructed the services to use a third-party reviewer, such as a service audit agency, to validate their data prior to submission. However, due to time constraints, each service audit agency performed only a limited review of the service's data. For example, the Air Force directed its audit service to perform a limited review that focused on two issues. Additionally, each service headquarters continued to provide some form of training for its 50-50 reporting activities, although no service required attendance by all individuals involved in 50-50 data gathering and reporting. Guidance issued by OSD emphasized, but did not require, training for individuals involved in the 50-50 process. In one instance, an official who was responsible for querying the 50-50 information from the service's data systems was unaware that any training was ever offered for 50-50 reporting. Our work over the last several years has identified a number of persistent deficiencies, such as inadequate management attention and review, which have affected the quality of reported 50-50 data. DOD has not implemented recommendations we made last year to address these deficiencies. 
In prior years' reports, we have identified problems in 50-50 data accuracy attributable to deficiencies in management attention, controls, and oversight; documentation of procedures and retention of records; independent validation of data; training for staff involved in the 50-50 process; and guidance. DOD has taken steps over the years to improve 50-50 reporting in response to our recommendations, but we have found that some deficiencies have persisted, including inadequate management attention and review, limited review and validation of data by independent third parties, and inadequate staff training. In our November 2005 report, we concluded that the recurring nature of deficiencies in 50-50 reporting indicates a management control weakness that DOD should disclose in its annual performance and accountability report to Congress. By doing so, DOD would increase the level of management attention and help focus improvement efforts so that the data provided to Congress are accurate and complete. DOD partially concurred with this recommendation, stating that systemic changes to the 50-50 reporting process had already been made in response to previous recommendations. DOD did not disclose 50-50 reporting as a management control weakness in its most recent performance and accountability report. An OSD official responsible for developing the annual 50-50 report to Congress noted that completion of the department's Enterprise Transition Plan would result in more accurate 50-50 reporting. As we have previously reported, DOD's April 2006 report satisfies the annual mandate as required by 10 U.S.C. § 2466(d). In our November 2005 report, we stated that DOD could enhance the usefulness of its report for congressional oversight by providing additional information.
For example, we recommended that DOD add information such as variance analyses that identify significant changes from the prior year's report and the reasons for these variances, longer term trend analyses, an explanation of methodologies used to estimate workload allocation projections for the current and ensuing fiscal years, and plans to ensure continued compliance with the 50-50 requirement, including decisions on new weapon systems maintenance workload sourcing that could be made to support remaining within the 50 percent threshold. DOD partially concurred with this recommendation and stated that producing the types of information we suggested would require a massive undertaking and may be of limited value. We disagreed and, on the basis of DOD's response, added a matter for congressional consideration suggesting that Congress require the Secretary of Defense to enhance the department's annual 50-50 report as stated in our recommendations. In the April 2006 report, DOD did not make changes consistent with our recommendations, nor has Congress acted. DOD's reported projections for fiscal years 2006 through 2007 do not represent reasonable estimates of public- and private-sector depot maintenance funding allocations, in part because some errors in DOD's fiscal year 2005 data are carried into the projected years. As shown in table 2, the Army and the Navy projected that their private-sector depot maintenance allocations will remain below the 50 percent limitation for fiscal years 2006 and 2007. The Air Force projected that it will remain below the limitation for fiscal year 2006, but will exceed the limitation for fiscal year 2007. Errors similar to those we identified in fiscal year 2005 reported data could affect these projections, as the Air Force is moving closer to the threshold for private-sector funding in fiscal year 2006 (48.4 percent) and beyond the threshold in fiscal year 2007 (50.2 percent). 
If the adjustments we made to the Army's fiscal year 2005 data--increasing the private-sector percentage by about 0.6 percentage points--are carried forward into fiscal year 2007 projections, they could cause the Army to come within 2 percent of the 50 percent limitation on contracting for depot-level maintenance and repair. When spending projections reflect data within 2 percent of the 50 percent limitation in a fiscal year, OSD guidance directs the components to submit a plan that identifies actions to be taken to ensure continued compliance. This plan shall include identification of decisions on candidate maintenance workload sourcing that could be made to support remaining within compliance with the 50 percent limitation. In addition, we found an error of approximately $1.6 million in the Army's fiscal year 2006 projections, which further limits the accuracy of reported projections. Furthermore, DOD's projected fiscal year 2006 and fiscal year 2007 allocations are based on the President's budget numbers and often did not include supplemental funds, which can change the percentage allocations. However, in the case of some Air Force depot projections, supplemental funds are included in the projections if the amounts are already known. These limitations affect the reasonableness of the data reported as projections of future funding allocations. While the Army and Navy project compliance with the 50-50 requirement through fiscal year 2007, the Air Force's fiscal year 2006 projections are within 2 percent of the 50 percent limitation and its fiscal year 2007 projections exceed the 50 percent limitation by 0.2 percent. To avoid breaching the 50 percent threshold, the Air Force is implementing a plan to ensure compliance in fiscal years 2007 through 2010. Under this plan, the Air Force is identifying and evaluating candidate weapon system programs for shifting maintenance workload from the private sector to the public sector.
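The compliance logic described above can be sketched as a simple decision rule. The 50 percent statutory limitation and the 2-percentage-point planning trigger come from the statute and OSD guidance as described; the function name and labels are our own.

```python
# Hedged sketch of the OSD decision rule described above: a component
# whose private-sector share exceeds 50 percent breaches the limitation
# in 10 U.S.C. 2466, and one projected within 2 percentage points of
# 50 percent must submit a compliance plan. Function name is ours.

def assess_allocation(private_pct):
    """Classify a private-sector depot maintenance percentage."""
    if private_pct > 50.0:
        return "breach"          # exceeds the statutory limitation
    if private_pct >= 48.0:      # within 2 points of the limitation
        return "plan required"   # OSD guidance: submit a compliance plan
    return "compliant"

# Projections and data cited in the report text:
print(assess_allocation(48.4))   # Air Force FY2006 projection
print(assess_allocation(50.2))   # Air Force FY2007 projection
print(assess_allocation(49.4))   # Army FY2005 as reported
```

Under this rule, the Air Force's fiscal year 2007 projection is the only breach, while its fiscal year 2006 projection and the Army's reported fiscal year 2005 share both fall in the zone where a compliance plan is expected.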
The Air Force has committed resources and approved shifting some maintenance associated with the F-100 engine beginning in fiscal year 2006. The Air Force plan shows that a total workload of $68 million associated with the F-100 engine could be shifted to the public sector, enabling the Air Force to achieve compliance with the 50-50 requirement in fiscal year 2007. The Air Force is also evaluating workload associated with the KC-135 aircraft, the C-17 aircraft, the B-2 aircraft, the F-119 engine, and the F-117 engine that may be shifted to the public sector. The errors we identified in DOD's April 2006 50-50 report--while not extensive--are indicative of the long-standing problems DOD has encountered in providing accurate depot maintenance funding allocation data to Congress. We have previously observed that the usefulness of the annual 50-50 report to Congress is limited because of data reliability concerns. Our prior reports identified data inaccuracies and recommended corrective actions aimed at addressing deficiencies that limited the accuracy of 50-50 reporting. In addition, we have recommended actions that Congress could take to improve the reliability and usefulness of DOD's annual report. Our current review shows that while DOD has taken some additional actions to improve the quality of reported data for fiscal year 2005, it has not fully addressed the persistent deficiencies that have limited 50-50 data accuracy in the past. DOD's report presented an inaccurate measure of the balance of funding between the public and private sectors due to inconsistencies in reporting expenditures rather than obligations, and inaccurate distribution of reporting of allocations from interservice and direct sales agreements. Without consistent reporting of depot maintenance funding obligations, rather than a combination of expenditures and obligations, inaccurate reporting of the funding allocation between the public and private sectors will continue. 
Moreover, without accurate reporting of the allocation of depot maintenance workload performed by the private and public sectors under interservice and direct sales agreements, the 50-50 data reported by DOD will continue to be inaccurate. To improve the consistency and accuracy of depot maintenance funding allocation data in DOD's annual 50-50 report to Congress, we recommend that the Secretary of Defense take the following two actions: Direct the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps to follow OSD guidance and report funding obligations rather than expenditures. Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in conjunction with the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps, to establish measures to ensure proper accounting of the allocation of interservice workloads between the public and private sectors. In commenting on a draft of this report, DOD concurred with our recommendations. Regarding our recommendation that the military services follow guidance and report funding obligations rather than expenditures, DOD stated that it will be specific in its guidance on 50-50 reporting and require organizations to report obligations rather than expenditures. Also, DOD stated that Army guidance and training will address our findings. Consistent with our recommendation, we believe that the Air Force, Navy, and Marine Corps also should take appropriate steps to ensure that obligations are reported. Regarding our recommendation that measures be established to ensure proper accounting of the allocation of interservice workloads, DOD said that its guidance will require component audit agencies to specifically validate interservice data prior to submitting the 50-50 report to the department. Validation of interservice data would meet the intent of our recommendation. 
DOD also stated that it did not agree with our adjustments to the work accomplished during the nuclear refueling of aircraft carriers and for inactivation work on nonnuclear ships. DOD stated that all costs during nuclear aircraft carrier refueling are properly excluded and conventional ship inactivation workload is not considered depot-level maintenance. We have had a long-standing disagreement with DOD on including funding for these two areas in its 50-50 report. For the past several years we have maintained that DOD should include these funds, while DOD has disagreed. Our reasons for including these adjustments are discussed in this report. DOD's written comments are reprinted in appendix III. DOD also provided technical comments which we have incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine whether the military departments provided accurate data in reporting depot maintenance funding allocations and whether they met the 50-50 requirement for fiscal year 2005, we reviewed military services' procedures and internal management controls for collecting and reporting their depot maintenance allocations. We discussed with key officials the process used to identify and report depot maintenance workload allocation between the public and private sectors. 
We selected a nonprobability sample of reported 50-50 obligations totaling $2.7 billion of the $26.4 billion reported in the Department of Defense's (DOD) report to Congress on depot maintenance funding allocation. We based our sample on previously identified areas of concern, varying program amounts, and selected locations for our site visits. We also contacted service audit agencies and third-party officials at service headquarters to discuss their verification review of the fiscal year 2005 50-50 obligation data. We did not conduct a review of all reported 50-50 data; therefore, there may be additional errors, omissions, and inconsistencies that were not identified. Because we used a nonprobability sample, our results cannot be projected. We visited departmental headquarters, major commands, and selected maintenance activities. We interviewed service officials responsible for data collection, and we reviewed the reported data for accuracy and completeness. We compared reported amounts to funding documents, contracts, and accounting reports for selected programs for all the military services, but we placed greater emphasis on the Army data because the Army was close to the 50 percent threshold for fiscal year 2005. To determine the actions taken by the Office of the Secretary of Defense (OSD) and the military departments to improve the quality of the reported 50-50 data and to implement GAO's prior years' recommendations, we reviewed the results of studies conducted by the service audit agencies and reconciled areas of concern identified during prior years' audits. We also reviewed prior years' recommendations to find out whether known problem areas were being addressed and resolved. We discussed with officials actions they took to improve 50-50 data gathering and reporting processes.
To determine the reasonableness of fiscal year 2006 and 2007 projections, we discussed with service officials how they developed their projections and whether historical funding information and known increases in funding were included in their projections. Our analysis of the data for fiscal years 2006 and 2007 was limited because our current and past work on this issue has shown that DOD's 50-50 data cannot be relied upon as a precise measure of the allocation of depot maintenance funds between the public and private sectors. We discussed with Air Force officials reasons for the increase in their fiscal year 2007 projection and their plans to avoid breaching the 50 percent limitation. In accomplishing our objectives, we interviewed officials, examined documents, and obtained data at the Office of the Secretary of Defense and the Army, Navy, Marine Corps, and Air Force headquarters in the Washington, D.C., area; Anniston Army Depot in Anniston, Ala.; Red River Army Depot in Texarkana, Tex.; Army Materiel Command in Alexandria, Va.; Tank-automotive and Armaments Command (TACOM) Life Cycle Management Command in Warren, Mich.; Naval Air Systems Command in Patuxent River, Md.; U.S. Fleet Forces Command in Norfolk, Va.; Air Force Materiel Command at Wright-Patterson Air Force Base, Ohio; Marine Corps Logistics Command in Albany, Ga.; and the Army, Navy, and Air Force Audit Services. We conducted our work from March 2006 to September 2006 in accordance with generally accepted government auditing standards. Our review of the Army's data supporting the Department of Defense's (DOD) fiscal year 2005 50-50 report identified the following adjustments. Key contributors to this report include Thomas Gosling, Assistant Director; Connie W. Sawyer, Jr.; Janine Cantin; Clara Mejstrik; Stephanie Moriarty; and Renee Brown.

Depot Maintenance: Persistent Deficiencies Limit Accuracy and Usefulness of DOD's Funding Allocation Data Reported to Congress. GAO-06-88. Washington, D.C.: November 18, 2005.
Depot Maintenance: DOD Needs Plan to Ensure Compliance with Public- and Private-Sector Funding Allocation. GAO-04-871. Washington, D.C.: September 29, 2004.

Depot Maintenance: Army Needs Plan to Implement Depot Maintenance Report's Recommendations. GAO-04-220. Washington, D.C.: January 8, 2004.

Depot Maintenance: DOD's 50-50 Reporting Should Be Streamlined. GAO-03-1023. Washington, D.C.: September 15, 2003.

Department of Defense: Status of Financial Management Weaknesses and Progress Toward Reform. GAO-03-931T. Washington, D.C.: June 25, 2003.

Depot Maintenance: Change in Reporting Practices and Requirements Could Enhance Congressional Oversight. GAO-03-16. Washington, D.C.: October 18, 2002.

Depot Maintenance: Management Attention Needed to Further Improve Workload Allocation Data. GAO-02-95. Washington, D.C.: November 9, 2001.

Depot Maintenance: Action Needed to Avoid Exceeding Threshold on Contract Workloads. GAO/NSIAD-00-193. Washington, D.C.: August 24, 2000.

Depot Maintenance: Air Force Faces Challenges in Managing to 50-50 Threshold. GAO/T-NSIAD-00-112. Washington, D.C.: March 3, 2000.

Depot Maintenance: Future Year Estimates of Public and Private Workloads Are Likely to Change. GAO/NSIAD-00-69. Washington, D.C.: March 1, 2000.

Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain. GAO/NSIAD-99-154. Washington, D.C.: July 13, 1999.

Defense Depot Maintenance: Public and Private Sector Workload Distribution Reporting Can Be Further Improved. GAO/NSIAD-98-175. Washington, D.C.: July 23, 1998.

Defense Depot Maintenance: Information on Public and Private Sector Workload Allocations. GAO/NSIAD-98-41. Washington, D.C.: January 20, 1998.

Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers. GAO/NSIAD-96-166. Washington, D.C.: May 21, 1996.
Under 10 U.S.C. 2466, the military departments and defense agencies may use no more than 50 percent of annual depot maintenance funding for work performed by private-sector contractors. The Department of Defense (DOD) must submit a report to Congress annually on the allocation of depot maintenance funding between the public and private sectors for the preceding fiscal year and the projected distribution for the current and ensuing fiscal years for each of the armed forces and defense agencies. As required by Section 2466, GAO reviewed the report submitted in April 2006 and is, with this report, submitting its views to Congress on whether (1) the military departments and defense agencies complied with the 50-50 requirement for fiscal year 2005 and (2) the projections for fiscal years 2006 and 2007 represent reasonable estimates. GAO obtained data used to develop the April 2006 report, conducted site visits, and reviewed supporting documentation. Although DOD reported to Congress that it complied with the 50-50 requirement for fiscal year 2005, GAO could not validate compliance due to weaknesses in DOD's financial management systems and the processes used to collect and report 50-50 data. DOD's April 2006 report provides an approximation of the depot maintenance funding allocation between the public and private sectors for fiscal year 2005. GAO identified errors in the reported data which, if adjusted, would increase the Army's private-sector funding allocation percentage from 49.4 percent to 50 percent. GAO found that 50-50 funding allocation data were not being consistently reported because some maintenance depots were reporting expenditures rather than following Office of the Secretary of Defense (OSD) guidance and reporting obligations. Combining obligations and expenditures produces an inaccurate accounting of 50-50 funding allocations.
GAO also found that amounts associated with interservice depot maintenance work may not accurately reflect the actual allocation of private- and public-sector funds because visibility over the allocation of these funds is limited. OSD guidance requires that the military departments establish measures to ensure correct accounting of interservice workloads. In prior years' reports on DOD's compliance with the 50-50 requirement, GAO discussed deficiencies limiting data accuracy and recommended specific corrective actions. While DOD has taken some additional actions to improve the quality of reported data for fiscal year 2005, it has not fully addressed the persistent deficiencies that have limited 50-50 data accuracy. Reported projections do not represent reasonable estimates of public- and private-sector depot maintenance funding allocations for fiscal years 2006 and 2007 due to data inaccuracies. Errors GAO identified for fiscal year 2005 could affect these projections. If the adjustments GAO made to the Army's fiscal year 2005 data--increasing the private-sector percentage by about 0.6 percentage points--are carried forward, it could move the Army's projection to within 2 percent of the 50 percent limitation for fiscal year 2007. GAO also found that the projected numbers often did not include supplemental funds, which could change the allocation percentages. These errors and omissions affect the reasonableness and accuracy of the reported projections. To avoid breaching the 50 percent threshold in future years, the Air Force is implementing its plan to ensure compliance with the 50-50 requirement until fiscal year 2010. The plan involves moving some maintenance workload, including the F-100 engine, from the private sector to the public sector.
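The 50-50 computation at issue is straightforward percentage arithmetic over reported obligations. The sketch below is a hypothetical illustration (the dollar figures and the `private_share` helper are invented for this example, not DOD's actual data or accounting systems) of how reclassifying a small amount of misreported work can move a service from 49.4 percent to the 50 percent statutory limit:

```python
# Hypothetical illustration of the 50-50 computation under 10 U.S.C. 2466.
# All figures are invented for demonstration; they are not DOD's reported data.

def private_share(public_obligations: float, private_obligations: float) -> float:
    """Return the private-sector percentage of total depot maintenance obligations."""
    total = public_obligations + private_obligations
    return 100.0 * private_obligations / total

# A service reporting 49.4 percent private-sector work (amounts in billions):
public, private = 5.06, 4.94
print(f"Reported private share: {private_share(public, private):.1f}%")   # 49.4%

# An adjustment reclassifying 0.06 billion from public to private
# raises the share by 0.6 percentage points, to the 50 percent limit.
adjusted = private_share(public - 0.06, private + 0.06)
print(f"Adjusted private share: {adjusted:.1f}%")                         # 50.0%
print(f"Exceeds 50 percent ceiling: {adjusted > 50.0}")
```

Because obligations and expenditures for the same work can differ in both timing and amount, mixing the two in the numerator or denominator of this ratio produces exactly the kind of inaccuracy the report describes.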
Economic growth is one of the indicators by which the well-being of the nation is typically measured, although recent discussions have focused on a broader set of indicators, such as poverty. Poverty in the United States is officially measured by the Census Bureau, which calculates the number of persons or households living below an established level of income deemed minimally adequate to support them. The federal government has a long-standing history of assisting individuals and families living in poverty by providing services and income transfers through numerous and various types of programs. Economic growth is typically defined as the increase in the value of goods and services produced by an economy; traditionally this growth has been measured by the percentage rate of increase in a country's gross domestic product, or GDP. The growth in GDP is a key measure by which policymakers estimate how well the economy is doing. However, it provides little information about how well individuals and households are faring. Recently there has been a substantial amount of activity in the United States and elsewhere to develop a comprehensive set of key indicators for communities, states, and the nation that go beyond traditional economic measures. Many believe that such a system would better inform individuals, groups, and institutions about the nation as a whole. Poverty is one of these key indicators. Poverty is a frequently monitored characteristic of society, and it is defined and measured, both narrowly and more broadly, in a number of ways. The Census Bureau is responsible for establishing a poverty threshold amount each year; persons or families having income below this amount are, for statistical purposes, considered to be living in poverty.
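The official computation just described can be sketched in a few lines. The toy example below (the threshold amounts and helper functions are invented placeholders, not the Census Bureau's actual figures or methodology code) shows the basic logic: compare each family's income to a size-specific threshold, and report the share of people below it.

```python
# Toy illustration of a poverty-rate computation.
# Threshold values by family size are hypothetical, not Census figures.
THRESHOLDS = {1: 10_160, 2: 13_145, 3: 15_577, 4: 19_971}

def poverty_rate(families):
    """families: list of (pretax_income, family_size); returns percent of people poor."""
    poor = total = 0
    for income, size in families:
        total += size
        if income < THRESHOLDS[size]:
            poor += size  # every member of a poor family counts as poor
    return 100.0 * poor / total

# Thresholds are re-established each year; a simple price-level update
# (in the spirit of a consumer-price-index adjustment) would look like:
def adjust(threshold: float, inflation: float) -> float:
    return threshold * (1 + inflation)

sample = [(9_000, 1), (25_000, 3), (21_000, 4), (40_000, 2)]
print(f"Poverty rate: {poverty_rate(sample):.1f}%")  # one poor person out of ten
```

The sketch also makes the measurement debate concrete: counting only pretax cash income, as above, ignores noncash benefits and tax credits, so changing what counts as "income" can change who falls below the line.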
The threshold reflects estimates of the amount of money individuals and families of various sizes need to purchase goods and services deemed minimally adequate based on 1960s living standards, and is adjusted each year using the consumer price index. The poverty rate is the percentage of individuals in total or as part of various subgroups in the United States who are living on income below the threshold amounts. Over the years, experts have debated whether or not the way in which the poverty threshold is calculated should be changed. Currently the calculation only accounts for pretax income and does not include noncash benefits and tax transfers, which, especially in recent years, have comprised larger portions of the assistance package to those who are low-income. For example, food stamps and the Earned Income Tax Credit could provide a combined amount of assistance worth an estimated $5,000 for working adults with children who earn approximately $12,000 a year. If noncash benefits were included in a calculation of the poverty threshold, the number and percentage of individuals at or below the poverty line could change. In 1995, a National Academy of Sciences (NAS) panel recommended that changes be made to the threshold to count noncash benefits, tax credits, and taxes; deduct certain expenses from income such as child care and transportation; and adjust income levels according to an area's cost of living. In response, the Census Bureau published an experimental poverty measure in 1999 using the NAS recommendations in addition to its traditional measure but, to date, Census has not changed the official measure. In 2005, close to 13 percent of the total U.S. population--about 37 million people--were counted as living below the poverty line, a number that essentially remained unchanged from 2004. Poverty rates differ, however, by age, gender, race, ethnicity, and other factors.
For example:

Children: In 2005, 12.3 million children, or 17.1 percent of children under the age of 18, were counted as living in poverty. Children of color were at least three times more likely to be in poverty than white children: 3.7 million African-American children (34.2 percent) and 4 million Hispanic children (27.7 percent) lived below the poverty line, compared with 4 million white children (9.5 percent).

Racial and ethnic minorities: African-Americans and Hispanics have significantly higher rates of poverty than whites. In 2005, 24.9 percent of African-Americans (9.2 million) and 22 percent of Hispanics (9.4 million) lived in poverty, compared to 8.3 percent of whites (16.2 million).

Elderly: The elderly have lower rates of poverty than other groups. For example, 10.1 percent of adults aged 65 or older (3.6 million) lived in poverty.

Poverty rates also differ depending on geographical location and for urban and nonurban areas. Poverty rates for urban areas were double those in suburbs, 17 percent compared to 9.3 percent. Poverty rates in the South were the highest at 14 percent; the West had a rate of 12.6 percent, followed by the Midwest with 11.4 percent and the Northeast at 11.3 percent.

The U.S. government has a long history of efforts to improve the conditions of those living with severely limited resources and income. Presidents, Congress, and other policymakers have actively sought to help citizens who were poor, beginning as early as the 1850s through the more recent efforts established through welfare reform initiatives enacted in 1996. Over the years, the policy approaches used to help low-income individuals and families have varied. For example, in the 1960s federal programs focused on increasing the education and training of those living in poverty. In the 1970s, policy reflected a more income-oriented approach with the introduction of several comprehensive federal assistance plans.
More recently, welfare reform efforts have emphasized the role of individual responsibility and behaviors in areas such as family formation and work to assist people in becoming self-sufficient. Although alleviating poverty and the conditions associated with it has long been a federal priority, approaches to developing effective interventions have sometimes been controversial, as evidenced by the diversity of federal programs in existence and the ways in which they have evolved over time. Currently, the federal government, often in partnership with the states, has created an array of programs to assist low-income individuals and families. According to a recent study by the Congressional Research Service (CRS), the federal government spent over $400 billion on 84 programs in 2004 that provided cash and noncash benefits to individuals and families with limited income. These programs cover a broad array of services: Examples include income supports or transfers such as the Earned Income Tax Credit and TANF; work supports such as subsidized child care and job training; health supports and insurance through programs like the State Children's Health Insurance Program (SCHIP) and Medicaid; and other social services such as food, housing, and utility assistance. Table 1 provides a list of examples of selected programs. Economic research suggests that individuals living in poverty face an increased risk for adverse outcomes, such as poor health, criminal activity, and low participation in the workforce. The adverse outcomes that are associated with poverty tend to limit the development of skills and abilities individuals need to contribute productively to the economy through work, and this in turn, results in low incomes. The relationship between poverty and outcomes for individuals is complex, in part because most variables, like health status, can be both a cause and a result of poverty. The direction of the causality can have important policy implications. 
To the extent that poor health causes poverty, and not the other way around, alleviating poverty may not improve health. Health outcomes are worse for individuals with low incomes than for their more affluent counterparts. Lower-income individuals experience higher rates of chronic illness, disease, and disabilities, and also die younger than those who have higher incomes. As reported by the National Center on Health Statistics, individuals living in poverty are more likely than their affluent counterparts to experience fair or poor health, or suffer from conditions that limit their everyday activities (fig. 1). They also report higher rates of chronic conditions such as hypertension, high blood pressure, and elevated serum cholesterol, which can be predictors of more acute conditions in the future. Life expectancies for individuals in poor families as compared to nonpoor families also differ significantly. One study showed that individuals with low incomes had life expectancies 25 percent lower than those with higher incomes. Other research suggests that an individual's household wealth predicts the amount of functionality of that individual in retirement. Research suggests that part of the reason that those in poverty have poor health outcomes is that they have less access to health insurance and thus less access to health care, particularly preventive care, than others who are nonpoor. Very low-income individuals were three times as likely to lack health insurance as those with higher incomes, which may lead to reduced access to and utilization of health care (fig. 2). Data show that those who are poor with no health insurance access the health system less often than those who are either insured or wealthier when measured by one indicator of health care access: visits to the doctor.
For example, data from the National Center on Health Statistics show that children in families with income below the poverty line who were continuously without health insurance were three to four times more likely to have not visited a doctor in the last 12 months than children in similar economic circumstances who were insured (fig. 3). Research also suggests that a link between income and health exists independent of health insurance coverage. Figure 3 also shows that while children who are uninsured but in wealthier families visit the doctor fewer times than those who are insured, they still go more often than children who are uninsured but living in poverty. Some research examining government health insurance suggests that increased health insurance availability improves health outcomes. Economists have studied the expansion of Medicaid, which provides health insurance to those with low income. They found that Medicaid's expansion of coverage, which occurred between 1979 and 1992, increased the availability of insurance and improved children's health outcomes. For example, one study found that a 30 percentage point increase in eligibility for mothers aged 15-44 translated into a decrease in infant mortality of 8.5 percent. Another study looked at the impact of health insurance coverage through Medicare and its effects on the health of the elderly and also found a statistically significant though modest impact. There is some evidence that variations in health insurance coverage do not explain all the differences in health outcomes. A study done in Canada found improvements in children's health with increases in income, even though Canada offers universal health insurance coverage for hospital services, indicating that health insurance is only part of the story. 
Although there is a connection among poverty, having health insurance, and health outcomes, having health insurance is often associated with other attributes of an individual, thus making it difficult to isolate the direct effect of health insurance alone. Most individuals in the United States are either self-insured or insured through their employer. If those who are uninsured have lower levels of education, as do individuals with low income, differences in health between the insured and uninsured might be due to level or quality of education, and not necessarily insurance. Another reason that individuals living in poverty may have more negative health outcomes is because they live and work in areas that expose them to environmental hazards such as pollution or substandard housing. Some researchers have found that because poorer neighborhoods may be located closer to industrial areas or highways than more affluent neighborhoods, there tend to be higher levels of pollution in lower-income neighborhoods. The Institute of Medicine concluded that minority and low-income communities had disproportionately higher exposure to environmental hazards than the general population, and because of their impoverished conditions were less able to effectively change these conditions. The link between poverty and health outcomes may also be explained by lifestyle issues associated with poverty. A sedentary lifestyle, the use of alcohol and drugs, and lower consumption of fiber, fresh fruits, and vegetables are some of the behaviors that have been associated with lower socioeconomic status. Cigarette smoking is also more common among adults who live below the poverty line than among those above it, about 30 percent compared to 21 percent.
Similarly, problems with being overweight and obese are common among those with low family incomes, although most prevalent in women: Women with incomes below 130 percent of the poverty line were 50 percent more likely to be obese than those with incomes above this amount. Figure 4 shows that people living in poverty are less likely to engage in regular, leisure-time physical activity than others and are somewhat more likely to be obese, and children in poverty are somewhat more likely to be overweight than children living above the poverty line. In addition, there is also evidence to suggest a link among poverty, stress, and adverse health outcomes, such as compromised immune systems. While evidence shows how poverty could result in poor health, the opposite could also be true. For example, a health condition could result, over time, in restricting an individual's employment, resulting in lower income. Additionally, the relationship between poverty and health outcomes could also vary by demographic group. Failing health, for example, can be more directly associated with household income for middle-aged and older individuals than with children, since adults are typically the ones who work. Just as research has established a link between poverty and adverse health outcomes, evidence suggests a link between poverty and crime. Economic theory predicts that low wages or unemployment makes crime more attractive, even with the risks of arrest and incarceration, because of lower returns to an individual through legal activities. While more mixed, empirical research provides support for this. For example, one study shows that higher levels of unemployment are associated with higher levels of property crime, but is less conclusive in predicting violent crime. Another study has shown that both wages and unemployment affect crime, but that wages play a larger role.
Research has found that peer influence and neighborhood effects may also lead to increased criminal behavior by residents. Having many peers that engage in negative behavior may reduce social stigma surrounding that behavior. In addition, increased crime in an area may decrease the chances that any particular criminal activity will result in an arrest. Other research suggests that the neighborhood itself, independent of the characteristics of the individuals who live in it, affects criminal behavior. One study found that arrest rates were lower among young people from low-income families who were given a voucher to live in a low-poverty neighborhood, as opposed to their peers who stayed in high-poverty neighborhoods. The most notable decrease was in arrests for violent crimes; the results for property crimes, however, were mixed, with arrest rates increasing for males and decreasing for females. Regardless of whether poverty is a cause or an effect, the conditions associated with poverty limit the ability of low-income individuals to develop the skills, abilities, knowledge, and habits necessary to fully participate in the labor force, which, in turn, leads to lower incomes. According to 2000 Census data, people aged 20-64 with income above the poverty line in 1999 were almost twice as likely to be employed as compared to those with incomes below it. Some of the reasons for these outcomes include educational attainment and health status. Poverty is associated with lower educational quality and attainment, both of which can affect labor market outcomes. Research has consistently demonstrated that the quality and level of education attained by lower-income children is substantially below those for children from middle- or upper-income families. Moreover, high school dropout rates in 2004 were four times higher for students from low-income families than those in high-income families.
Those with less than a high school degree have unemployment rates almost three times greater than those with a college degree, 7.6 percent compared to 2.6 percent in 2005. And the percentage of low-income students who attend college immediately after high school is significantly lower than for their wealthier counterparts: 49 percent compared to 78 percent. A significant body of economic research directly links adverse health outcomes, which are also associated with low incomes, with the quality and quantity of labor that the individual is able to offer to the workforce. Many studies that have examined the relationship between individual adult health and wages, labor force participation, and job choice have documented positive empirical relationships between health and wages, earnings, and hours of work. Although there is no consensus about the exact magnitude of the effects, the empirical literature suggests that poor health reduces the capacity to work and has substantive effects on wages, labor force participation, and job choice, meaning that poor health is associated with low income. Research also demonstrates that poor childhood health has substantial effects on children's future outcomes as adults. Some research, for example, shows that low birth weight is correlated with a low health status later in life. Research also suggests that poor childhood health is associated with reduced educational attainment and reduced cognitive development. Reduced educational attainment may in turn have a causal effect not only on future wages as discussed above but also on adult health if the more educated are better able to process health information or make more informed choices about their health care or if education makes people more "future oriented" by helping them think about the consequences of their choices.
In addition, some research shows that poor childhood health is predictive of poor adult health and poor adult economic status in middle age, even after controlling for educational attainment. The economic literature suggests that poverty not only affects individuals but can also create larger challenges for economic growth. Traditionally, research has focused on the importance of economic growth for generating rising living standards and alleviating poverty, but more recently it has examined the reverse, the impact of poverty on economic growth. In the United States, poverty can impact economic growth by affecting the accumulation of human capital and rates of crime and social unrest. While the empirical research is limited, it points to the negative association between poverty and economic growth consistent with the theoretical literature's conclusion that higher rates of poverty can result in lower rates of growth. Research has shown that accumulation of human capital is one of the fundamental drivers of economic growth. Human capital consists of the skills, abilities, talents, and knowledge of individuals as used in employment. The accumulation of human capital is generally held to be a function of the education level, work experience, training, and healthiness of the workforce. Therefore, schooling at the secondary and higher levels is a key component for building an educated labor force that is better at learning, creating, and implementing new technologies. Health is also an important component of human capital, as it can enhance workers' productivity by increasing their physical capacities, such as strength and endurance, as well as mental capacities, such as cognitive functioning and reasoning ability. Improved health increases workforce productivity by reducing incapacity, disability, and the number of days lost to sick leave, and increasing the opportunities to accumulate work experience. 
Further, good health helps improve education by increasing levels of schooling and scholastic performance. The accumulation of human capital can be diminished when significant portions of the population have experienced long periods of poverty, or were living in poverty at a critical developmental juncture. For example, recent research has found that the distinct slowdown in some measures of human capital development is most heavily concentrated among youth from impoverished backgrounds. When individuals who have experienced poverty enter the workforce, their contributions may be restricted or minimal, while others may not enter the workforce in a significant way. Not only is the productive capability of some citizens lost, but their purchasing power and savings, which could be channeled into productive investments, are forgone as well. In addition to the effects of poverty on human capital, some economic literature suggests that poverty can affect economic growth to the extent that it is associated with crime, violence, and social unrest. According to some theories, when citizens engage in unproductive criminal activities they deter others from making productive investments or their actions force others to divert resources toward defensive activities and expenditures. The increased risk due to insecurity can unfavorably affect investment decisions--and hence economic growth--in areas afflicted by concentrated poverty. Although such theories link poverty to human capital deficiencies and criminal activity, the magnitude of their impact on economic growth for an economy such as the United States is unclear at this time. In addition, people living in impoverished conditions generate budgetary costs for the federal government, which spends billions of dollars on programs to assist low-income individuals and families. Alleviating these conditions would allow the federal government to redirect these resources toward other purposes.
While economic theory provides a guide to understanding how poverty might compromise economic growth, empirical researchers have not as extensively studied poverty as a determinant of growth in the United States. Empirical evidence on the United States and other rich nations is quite limited, but some recent studies support a negative association between poverty and economic growth. For example, some research finds that economic growth is slower in U.S. metropolitan areas characterized by higher rates of poverty than those with lower rates of poverty. Another study, using data from 21 wealthy countries, has found a similar negative relationship between poverty and economic growth. Maintaining and enhancing economic growth is a national priority that touches on all aspects of federal decision making. As the nation moves forward in thinking about how to address the major challenges it will face in the twenty-first century, the impact of specific policies on economic growth will factor into decisions on topics as far ranging as taxes, support for scientific and technical innovation, retirement and disability, health care, education and employment. To the extent that empirical research can shed light on the factors that affect economic growth, this information can guide policymakers in allocating resources, setting priorities, and planning strategically for our nation's future. Economists have long recognized the strong association between poverty and a range of adverse outcomes for individuals, and empirical research, while limited, has also begun to help us better understand the impact of poverty on a nation's economic growth. The interrelationships between poverty and various adverse social outcomes are complex, and our understanding of these relationships can lead to vastly different conclusions regarding appropriate interventions to address each specific outcome. 
Furthermore, any such interventions could take years, or even a generation, to yield significant and lasting results, as the greatest impacts are likely to be seen among children. Nevertheless, whatever the underlying causes of poverty may be, economic research suggests that improvements in the health, neighborhoods, education, and skills of those living in poverty could have impacts far beyond individuals and families, potentially improving the economic well-being of the nation as a whole. We provided the draft report to four outside reviewers with expertise in the areas of poverty and economic growth. The reviewers generally acknowledged that our report covers a substantial body of recent economic research on the topic and did not dispute the validity of the specific studies included in our review. However, they expressed some disagreement over our presentation of this research. Some reviewers felt that the evidence directly linking poverty to adverse outcomes is more robust than implied by our summary and directed us to additional research that bolsters the link between poverty and poor health and crime. We did not incorporate this additional research into our findings, but we reviewed it and found it consistent with the evidence already incorporated in our summary. Other reviewers felt that our report implied a stronger relationship between poverty and adverse outcomes than is supported by the research. They felt that the report did not provide adequate information on the causes of poverty and external factors that could be responsible for both poverty and adverse outcomes. In response to these comments, we made several revisions to the text to ensure that the information we presented was balanced. The reviewers also provided technical comments that we incorporated as appropriate. Copies of this report are being sent to the Departments of Commerce, Health and Human Services, Justice, and Labor; appropriate congressional committees; and other interested parties. 
Copies will be made available to others upon request. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about matters discussed in this report, please contact me at (202) 512-7215 or at [email protected]. Other contact and staff acknowledgments are listed in appendix II. Adler, Nancy E., and Katherine Newman. "Socioeconomic Disparities in Health: Pathways and Policies." Health Affairs, Vol. 21, No. 2, 2002. Aghion, Philippe, et al. "Inequality and Economic Growth: The Perspective of the New Growth Theories." Journal of Economic Literature, Vol. XXXVII: 1999. Barro, Robert. "Inequality and Growth in a Panel of Countries." Journal of Economic Growth, 5 (1): 2000. Barsky, Robert B., et al. "Preference Parameters and Behavioral Heterogeneity: An Experimental Approach in the Health and Retirement Study." Review of Economic Statistics, 1997. Burtless, G., and C. Jencks. "American Inequality and Its Consequences." In H. Aaron et al. (eds.), Agenda for the Nation. Washington, DC: Brookings Institution Press, 2003. Card, David, Carlos Dobkin, and Nicole Maestas. "The Impact of Nearly Universal Insurance Coverage on Health Care Utilization and Health: Evidence from Medicare." National Bureau of Economic Research, Working Paper No. 10365. Cambridge, Massachusetts: National Bureau of Economic Research: 2004. Case, Anne C., and Angus Deaton. "Broken Down by Work and Sex: How our Health Declines." National Bureau of Economic Research, Working Paper No. 9821. Cambridge, Massachusetts: National Bureau of Economic Research: 2003. Case, Anne, Angela Fertig, and Christina Paxson. "The Lasting Impact Of Childhood Health And Circumstance." Journal of Health Economics, 24 (2): 2005. Case, Anne, Darren Lubotsky, and Christina Paxson. "Economic Status and Health in Childhood: The Origins of the Gradient." American Economic Review, Vol. 92, No. 5, Dec. 2002. Chay, Kenneth, and Michael Greenstone. 
"Air Quality, Infant Mortality, and the Clean Air Act of 1970." National Bureau of Economic Research, Working Paper No. 10053. Cambridge, Massachusetts: National Bureau of Economic Research: 2003. Chui, W. Henry. "Income Inequality, Human Capital Accumulation and Economic Performance." Economic Journal, 108. 1998. Currie, Janet, and Jonathan Gruber. "Saving Babies: The Efficiency and Cost of Recent Changes in the Medicaid Eligibility of Pregnant Women." Journal of Political Economy, Vol. 104, No. 6, 1996. Currie, Janet, and Rosemary Hyson. "Is the Impact of Health Shocks Cushioned by Socioecoomic Status? The Case of Low Birthweight" American Economic Review Papers and Proceedings of the One Hundred Eleventh Annual Meeting of the American Economic Association, 89 (2): 1999. Currie, Janet, and Brigitte Madrian. "Health, Health Insurance and the Labor Market." In (eds), O. Ashenfelter and D. Card, Handbook of Labor Economics, Vol. 3. Elsevier Science. 1999. Currie, Janet and Mark Stabile. "Socioeconomic Status and Child Health: Why is the Relationship Stronger for Older Children?" American Economic Review, Vol. 93, No 5., Dec. 2003. Currie, Janet, and Matthew Neidell. "Air Pollution and Infant Health: What Can We Learn From California's Recent Experience?" Quarterly Journal of Economics, 120 (3), 2005. Cutler, David, Angus Deaton, and Adriana Lleras-Muney. "The Determinants of Mortality." Journal of Economic Perspectives, Vol. 20, No. 3, 2006. DeCicca, Phillip, Donald Kenkel, Alan Mathios. "Racial Difference in the Determinants of Smoking Onset." Journal of Risk and Uncertainty. 2000. Vol. 21, Iss. 2/3; p311. ------. "Putting Out The Fires: Will Higher Taxes Reduce the Onset of Youth Smoking?" Journal of Political Economy. 2002.Vol.110, Iss.1; p. 144. Deaton, Angus. "Policy Implications of the Gradient of Health and Wealth." Health Affairs, Vol. 21, No.2, 2002. Delong, J. et al. "Sustaining U.S. Economic Growth." In H. Aaron et al. (eds.), Agenda for the Nation. 
Washington, DC: Brookings Institution Press, 2003. Dev Bhatta, Saurav. "Are Inequality and Poverty Harmful for Economic Growth: Evidence from the Metropolitan Areas of the United States." Journal of Urban Affairs, 23 (3&4): 2001. Fallah, B., and M. Partridge. "The Elusive Inequality-Economic Growth Relationship: Are There Differences between Cities and the Countryside?" University of Saskatchewan Working Paper, February 2006. Federal Reserve Bank of New York. "Unequal Incomes, Unequal Outcomes? Economic Inequality and Measures of Well-Being: Proceedings of a Conference Sponsored by the Federal Reserve Bank of New York." Economic Policy Review, Vol. 5 (3), September 1999. http://www.ny.frb.org/research/epr/1999n3.html. Forbes, K. "A Reassessment of the Relationship between Inequality and Growth." The American Economic Review, 90 (4): 2000. Freeman, Richard B. "Why Do So Many Young American Men Commit Crimes and What Might We Do About It?" Journal of Economic Perspectives, Vol. 10, No. 1, 1996. Gould, Eric D., Bruce A. Weinberg, and David B. Mustard. "Crime Rates and Local Labor Market Opportunities in the United States: 1979-1997." The Review of Economics and Statistics, 84 (1): 2002. Grogger, Jeff. "Market Wages and Youth Crime." Journal of Labor Economics, Vol. 16, No. 4. Chicago: 1998. Heckman, J., and A. Krueger. Inequality in America: What Role for Human Capital Policies? Cambridge, Massachusetts: The MIT Press, 2003. Ho, P. "Income Inequality and Economic Growth." Kyklos, 53 (3): 2003. Holzer, Harry, et al. "The Economic Costs of Poverty in the United States." Unpublished working paper, 2006. Hsing, Yu. "Economic Growth And Income Inequality: The Case Of The US." International Journal of Social Economics, 32 (7): 2005. Katz, Lawrence F., Jeffrey R. Kling, and Jeffrey B. Liebman. "Moving to Opportunity in Boston: Early Results of a Randomized Mobility Experiment." The Quarterly Journal of Economics, May 2001. Kling, Jeffrey R., Jens Ludwig, and Lawrence F. Katz. 
"Neighborhood Effects on Crime for Female and Male Youth: Evidence from a Randomized Housing Voucher Experiment." Quarterly Journal of Economics, Feb. 2005. Lochner, Lance, and Enrico Moretti. "The Effect of Education on Crime: Evidence from Prison Inmates, Arrests and Self-Reports." American Economic Review, 2004. Ludwig, Jens, and Greg J. Duncan, Paul Hirschfield. "Urban Poverty and Juvenile Crime: Evidence from a Randomized Housing-Mobility Experiment." Quarterly Journal of Economics, May 2001. McGarry, Kathleen. "Health and Retirement: Do Changes in Health Affect Retirement Expectations?" Journal of Human Resources, Vol. XXXIX, 2004. Mo, P. "Income Inequality and Economic Growth." Kyklos, 53 (3): 2000. Newberger, R., and T. Riggs, "The Impact of Poverty on Location of Financial Establishments: Evidence from Across-Country Data." Profitwise News and Views, Federal Reserve Bank of Chicago, April 2006. Panizza, Ugo. "Income Inequality and Economic Growth: Evidence from American Data." Journal of Economic Growth, 7 (1): 2002. Partridge, Mark. "Is Inequality Harmful for Growth? Comment." The American Economic Review, 87 (5): 1997. Persson, T., and G. Tabellini. "Is Inequality Harmful for Growth?" The American Economic Review, 84 (3): 1994. Rank, Mark. One Nation Underprivileged: Why American Poverty Affects Us All. Oxford: Oxford University Press, 2004. Raphael, Steven, and Rudolf Winter-Ebner. "Identifying the Effect of Unemployment on Crime." Journal of Law and Economics, Vol. XLIV. 2001. Sallis, J.F., et al. "The Association of School Environments with Youth Physical Activity." American Journal of Public Health, Vol. 91, No. 4, 2001. Sandy, Carola. "Essays on the Macroeconomic Impact of Poverty." Columbia University Libraries, http://digitalcommons.libraries.columbia.edu/dissertations/AAI9970273, 2000. Sherman, Arloc. Wasting America's Future: The Children's Defense Fund Report On The Costs Of Child Poverty. Boston, Massachusetts: Beacon Press Books, 1994. 
Siegel, Michele J. "Measuring the Effect of Husband's Health on Wife's Labor Supply." Health Economics, 15 (6): 2006. Smith, James P. "Healthy Bodies and Thick Wallets: The Dual Relation between Health and Economic Status." Journal of Economic Perspectives, Vol. 13, No. 2, 1999. ------. "The Impact of SES on Health over the Life-Course." Rand Working Paper Series. Rand Labor and Population: 2005. Smith, James, and Raynard Kington. "Demographic and Economic Correlates of Health in Old Age." Demography, Vol. 34, No. 1, 1997. Teles, Vladimir. "The Role of Human Capital in Economic Growth." Applied Economics Letters, 12: 2005. U.S. Census Bureau. Income, Poverty, and Health Insurance Coverage in the United States: 2005. Washington, D.C.: 2006. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Health, United States, 2006. Washington, D.C.: 2006. ------. Health, United States, 1998. Washington, D.C.: 1998. U.S. Department of Housing and Urban Development. Moving to Opportunity Demonstration Data. Washington, D.C.: May 2004. ------. Moving to Opportunity for Fair Housing. Washington, D.C.: Dec. 2000. http://www.hud.gov/progdesc/mto.cfm. Voitchovsky, S. "Does the Profile of Income Inequality Matter for Economic Growth? Distinguishing Between the Effects of Inequality in Different Parts of the Income Distribution." Journal of Economic Growth, Vol. 10 (3): 2005. Kathy Larin, Assistant Director, and Janet Mascia, Analyst-in-Charge, managed this assignment. Lawrance Evans, Ben Bolitzer, Ken Bombara, Amanda Seese, and Rhiannon Patterson made significant contributions throughout the assignment. Charles Willson, Susannah Compton, and Patrick DiBattista helped develop the report's message. In addition, Doug Besharov, Dr. Maria Cancian, Dr. Sheldon Danziger, and Dr. Lawrence Mead reviewed and provided comments on the report.
In 2005, 37 million people, approximately 13 percent of the total population, lived below the poverty line, as defined by the Census Bureau. Poverty imposes costs on the nation in terms of both programmatic outlays and productivity losses that can affect the economy as a whole. To better understand the potential range of effects of poverty, GAO was asked to examine (1) what the economic research tells us about the relationship between poverty and adverse social conditions, such as poor health outcomes, crime, and labor force attachment, and (2) what links economic research has found between poverty and economic growth. To answer these questions, GAO reviewed the economic literature by academic experts, think tanks, and government agencies, and reviewed additional literature by searching various databases for peer-reviewed economic journals, specialty journals, and books. We also provided our draft report for review by experts on this topic. Economic research suggests that individuals living in poverty face an increased risk of adverse outcomes, such as poor health and criminal activity, both of which may lead to reduced participation in the labor market. While the mechanisms by which poverty affects health are complex, some research suggests that adverse health outcomes can be due, in part, to limited access to health care as well as greater exposure to environmental hazards and engaging in risky behaviors. For example, some research has shown that increased availability of health insurance such as Medicaid for low-income mothers led to a decrease in infant mortality. Additionally, exposure to higher levels of air pollution from living in urban areas close to highways can lead to acute health conditions. Data suggest that engaging in risky behaviors, such as tobacco and alcohol use, a sedentary lifestyle, and a low consumption of nutritional foods, can account for some health disparities between lower and upper income groups. 
The economic research we reviewed also points to links between poverty and crime. For example, one study indicated that higher levels of unemployment are associated with higher levels of property crime. The relationship between poverty and adverse outcomes for individuals is complex, in part because most variables, like health status, can be both a cause and a result of poverty. These adverse outcomes affect individuals in many ways, including limiting their development of the skills, abilities, knowledge, and habits necessary to fully participate in the labor force. Research shows that poverty can negatively affect economic growth by affecting the accumulation of human capital and rates of crime and social unrest. Economic theory has long held that human capital--that is, the education, work experience, training, and health of the workforce--is one of the fundamental drivers of economic growth. The conditions associated with poverty can work against this human capital development by limiting individuals' ability to remain healthy and develop skills, in turn decreasing the potential to contribute talents, ideas, and even labor to the economy. An educated labor force, for example, is better at learning, creating, and implementing new technologies. Economic theory suggests that when poverty affects a significant portion of the population, these effects can extend to the society at large and produce slower rates of growth. Although historically research has focused mainly on the extent to which economic growth alleviates poverty, some recent empirical studies have begun to demonstrate that higher rates of poverty are associated with lower rates of growth in the economy as a whole. For example, areas with higher poverty rates experience, on average, slower per capita income growth rates than low-poverty areas.
The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. Because of the concern about attacks from individuals and groups, protecting the computer systems that support critical operations and infrastructures has never been more important. These concerns are well founded for a number of reasons, such as escalating threats of computer security incidents, the ease of obtaining and using hacking tools, the steady advances in the sophistication and effectiveness of attack technology, and the emergence of new and more destructive attacks. According to experts from government and industry, during the first quarter of 2005, more than 600 new Internet security vulnerabilities were discovered, thereby placing organizations that use the Internet at risk. Computer-supported federal operations are likewise at risk. IBM recently reported that there were over 54 million attacks against government computers from January 2005 to June 2005. Without proper safeguards, there is risk that individuals and groups with malicious intent may intrude into inadequately protected systems and use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. How well federal agencies are addressing these risks is a topic of increasing interest in both Congress and the executive branch, as evidenced by recent congressional hearings intended to strengthen information security. DLA is an agency of the Department of Defense (DOD). As DOD's supply chain manager, DLA provides food, fuel, medical supplies, clothing, spare parts for weapon systems, and construction materials to sustain DOD military operations and combat readiness. 
To fulfill its mission, DLA relies extensively on interconnected computer systems to perform various functions, such as managing about 5.2 million supply items and processing about 54,000 requisition actions per day for goods and services. DLA employs about 22,575 civilian and military workers, located at about 500 field locations in 48 states and 28 countries. In accordance with DOD policy, DLA has developed an agencywide information security program to provide information security for its operations and assets. The DLA Director is responsible for ensuring the security of the information and information systems that support the agency's operations. In carrying out this responsibility, the Director has delegated to DLA's chief information officer the authority to ensure that the agency complies with FISMA and with other information security requirements. DLA's chief information officer has also designated a senior agency official to serve as Director of Information Assurance--the agency's senior information security officer--and to head the central security management group, commonly referred to as the information assurance program office. 
This group carries out specific responsibilities, including the following: documenting and maintaining an agencywide security framework to assess the agency's security posture, identify vulnerabilities, and allocate resources; establishing and managing security awareness and specialized professional security training for employees who have significant security responsibilities; ensuring that all systems are certified and accredited in accordance with both federal and DOD processes; providing personnel at headquarters and the DLA locations with guidance on, and assistance in preparing, system security authorization agreements--single source data packages for all information pertaining to the certification and accreditation of a system in order to, among other things, guide actions, document decisions, specify information security requirements, and maintain operational systems security; and ensuring that field site personnel accurately assess their locations' security postures. Information assurance managers at the various DLA locations directly report to the information technology chief at their location and are expected to assist the Director of Information Assurance by coordinating security activities, establishing and maintaining a repository for documenting and reporting system certification and accreditation activities, maintaining and updating system security authorization agreements, and notifying the designated approving authority of any changes that could affect system security. Information assurance officers at the various DLA locations assist the information assurance managers through the following activities: ensuring that appropriate information security controls are implemented for an information system, notifying the information assurance manager when system changes that might affect certification and accreditation are requested or planned, and conducting annual validation testing of systems. 
Figure 1 below shows a simplified overview of DLA's information assurance management and reporting structure. Congress enacted FISMA to strengthen the security of information and information systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to protect the information and information systems that support the operations and assets of the agency--including those that are provided or managed by another agency, a contractor, or some other source. The program must include the following: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, modification, disruption, or destruction of information or information systems; training of personnel who have significant responsibility for information security and security awareness training to educate personnel--including contractors and other users of the agency's information systems--about information security risks and their responsibilities to comply with the agency's security policies and procedures; periodic testing and evaluation of the effectiveness of the agency's information security policies, procedures, and practices; and a process for planning, implementing, evaluating, and documenting plans of action and milestones that are taken to address any deficiencies in the agency's information security policies, procedures, and practices. To support agencies in conducting their information security programs, the National Institute of Standards and Technology (NIST) publishes mandatory standards and guidelines for providing information security for all agency operations, assets, and information systems other than national security systems. 
The standards and guidelines include, at a minimum, (1) standards to be used by all agencies to categorize their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels, (2) guidelines recommending the types of information and information systems that are to be included in each category, and (3) minimum information security requirements for information and information systems in each category. In addition, DOD has developed and published various directives and instructions that comprise an information assurance policy framework that is intended to meet the information security requirements specified in FISMA and NIST standards and publications. This framework applies to all of DOD's systems--both national and non-national security systems-- including those operated by or on behalf of DLA. DLA's policies and procedures for implementing its agency information security program are contained in DLA's One Book policy and agency handbook. DLA has implemented important elements of an information security program--including establishing a central security management group, appointing a senior information security officer to manage the program, and providing security awareness training for its employees. However, DLA has not yet fully implemented other essential elements of an effective information security program to protect the confidentiality, integrity, and availability of its information and information systems that support its mission. Collectively, these weaknesses place DLA's information and information systems at risk. Key underlying reasons for the weaknesses pertain to DLA's management and oversight of its security program. 
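The categorization standards described above rest on a high-water-mark idea: a system's overall security category is driven by the highest impact level assigned across the confidentiality, integrity, and availability objectives. The sketch below illustrates that rule; the function name and impact labels are our own illustration, not NIST's or DOD's actual tooling.

```python
# Illustrative sketch, not NIST's or DOD's actual tooling: each system is
# rated LOW, MODERATE, or HIGH for the potential impact of a loss of
# confidentiality, integrity, and availability; the overall security
# category is the highest ("high-water mark") of the three.

LEVELS = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def security_category(confidentiality, integrity, availability):
    """Return the high-water-mark security category for a system."""
    impacts = (confidentiality, integrity, availability)
    for level in impacts:
        if level not in LEVELS:
            raise ValueError(f"unknown impact level: {level}")
    return max(impacts, key=LEVELS.get)

# A hypothetical supply system: downtime would severely disrupt the
# mission even though its data is only moderately sensitive.
print(security_category("MODERATE", "LOW", "HIGH"))  # HIGH
```

The category chosen this way then determines which minimum set of security requirements applies to the system.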
In carrying out their information security responsibilities, both the Chief Information Officer and the Director of Information Assurance have taken several steps to implement important elements of DLA's security program, including the following: ensuring that employees and contractors receive information security awareness training; developing information security procedures and guidance for use in implementing the requirements of the program; deploying information system security engineers to assist headquarters and field staff in implementing security policies and procedures consistently across the agency; developing an agencywide management tool--known as the Comprehensive Information Assurance Knowledgebase--to centrally manage and report on key performance measures, such as the status of security training, plans of action and milestones, and certification and accreditation activities; and developing and implementing various automated information technology initiatives to assist information assurance managers and information assurance officers in improving DLA's security posture. Weaknesses in information security practices and controls place DLA's information and information systems at risk. Our analysis of information security activities for selected systems at 10 DLA locations showed that the agency had not fully or consistently implemented important elements of its program. Specifically, risks that could result from the unauthorized access, use, disclosure, or destruction of information or information systems were not consistently assessed; employees who had significant information security responsibilities did not receive sufficient training, and security training plans were sometimes not adequately completed; testing and evaluation of the effectiveness of management and operational security controls were not adequately performed; and plans of action and milestones for mitigating known information security deficiencies were not sufficiently completed. 
Table 1 indicates with an "X" weaknesses in the implementation of key information security practices and controls for selected systems. FISMA requires that agencies' information security programs include periodic assessments of the risk and magnitude of the harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the agency. Identifying and assessing information security risks are essential steps in order to determine what controls are required and what level of resources should be expended on these controls. NIST has developed guidance to help organizations protect their information and information systems by using security controls that are selected through a risk-based process. DOD established a set of baseline security controls for each of three mission assurance categories that determine what security controls should be implemented. These controls are adjusted based on an assessment of risk including specific threat information, vulnerabilities, and countermeasures relative to the system. Vulnerabilities that are not mitigated are referred to as residual risk. The designated approving authority considers the residual risks in determining whether to accredit a system. Such risk assessments, as part of the requirement to reaccredit systems, are to be performed prior to a significant change in processing, but at least every 3 years. Although DLA categorized its systems in accordance with DOD guidance, we found that it did not consistently assess the residual risk for 9 of the 10 systems we selected for review. 
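Residual risk, as described above, is essentially the set of identified vulnerabilities left unmitigated after the implemented controls are taken into account. The following is a hypothetical sketch of that bookkeeping; the control names and vulnerability labels are illustrative, not DOD's actual baseline control sets.

```python
# Hypothetical sketch: control names and the vulnerabilities they
# mitigate are illustrative, not DOD's actual baseline control sets.
BASELINE_CONTROLS = {
    "access-control": {"unauthorized-login"},
    "audit-logging": {"undetected-misuse"},
    "patch-management": {"known-software-flaw"},
}

def residual_risk(identified_vulns, implemented_controls):
    """Vulnerabilities left unmitigated by the implemented controls."""
    mitigated = set()
    for control in implemented_controls:
        mitigated |= BASELINE_CONTROLS.get(control, set())
    return identified_vulns - mitigated

vulns = {"unauthorized-login", "undetected-misuse", "known-software-flaw"}
# Only two of the three baseline controls are in place, so one
# vulnerability remains as residual risk:
print(residual_risk(vulns, {"access-control", "audit-logging"}))
# -> {'known-software-flaw'}
```

Whatever remains in this set is what the designated approving authority weighs when deciding whether to accredit the system.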
For example: nine did not use the established baseline security controls to assess the residual risk; three did not clearly identify the threats, vulnerabilities, and countermeasures; two did not state how the threats and vulnerabilities would affect the mission that the system supports; one only referenced the security controls as the threat or vulnerability; and one had not been updated since 2001. Unless DLA performs risk assessments consistently and assesses them against the appropriate set of controls, it will not have assurance that it has implemented appropriate controls that cost-effectively reduce risk to an acceptable level. FISMA mandates that all federal employees and contractors who are involved in the use of agency information systems be provided training in information security awareness and that agency heads ensure that employees with significant information security responsibilities are provided sufficient training with respect to such responsibilities. An effective information security program should promote awareness and provide training so that employees who use computer resources in their day-to-day operations understand security risks and their roles in implementing related policies and controls to mitigate those risks. DOD guidance requires that individuals receive the necessary training to ensure that they are capable of conducting their security duties and that each component establish and implement information assurance training and professional certification programs. DOD also requires that security awareness and training plans be documented for each system as part of the certification and accreditation process. These security training plans specify that training for individuals associated with a system's operation be appropriate to an individual's level and area of responsibility. This training should provide information about the security policy governing the information being processed, as well as potential threats and the nature of the appropriate countermeasures. 
DLA provided annual security awareness training for employees and contractors for whom it was appropriate. However, employees with significant information security responsibilities did not receive sufficient training. For example, of the 17 information assurance managers and information assurance officers located where we reviewed selected systems: eleven reported having received some form of training, although eight of them had received training on only one of their security responsibilities--developing security documentation; six reported never having received any security training; and two reported having received no security training for 2 or more years. Further, security training and awareness plans for 3 of the 10 systems we reviewed were either not system-specific or lacked detailed information. For example, training plans for 2 systems did not specify, for each level and area of responsibility, the system operations appropriate for a given user. The third lacked detailed information about training objectives, goals, and requirements. A key reason for these weaknesses is that the individual responsible for monitoring the agency's security training program had other significant responsibilities and was not able to effectively ensure that employees received the required training. As a result, DLA does not have assurance that employees with significant security responsibilities are equipped with the knowledge and skills they need to understand information security risks and their roles and responsibilities in implementing related policies and controls to mitigate those risks. Another key element that FISMA requires of an information security program is periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency based on risk, but not less than annually. 
FISMA requires that such testing and evaluation activities shall include the management, operational, and technical controls of every system identified in an agency's information systems inventory. DOD policy requires periodic reviews of operational systems at predefined intervals. Such reviews include testing and evaluating the technical implementation of the security design of a system and ascertaining that security software, hardware, and firmware features affecting the confidentiality, integrity, availability, and accountability of information and information systems have been implemented and documented. The results of testing and evaluation of security controls are to be used in the decision- making process for authorizing systems to operate. Further, DLA's One Book policy requires information assurance managers and information assurance officers to use the security test and evaluations as the method for validating the adequacy of management, operational, and technical controls, at least annually. DLA did not annually test and evaluate the management and operational security controls of its systems. According to DLA officials, vulnerability scans and information assurance program reviews collectively satisfied the annual requirement for testing and evaluating management, operational, and technical controls. However, the combination of the vulnerability scans and the program reviews did not satisfy the annual requirement. Although DLA generally assessed technical controls by conducting annual vulnerability scans on its systems, it did not annually assess the management and operational controls for each of its systems. While the program reviews are intended to satisfy the requirement for testing and evaluating the management and operational controls, DLA does not conduct these reviews annually on every system. For example, less than half of DLA's locations and systems have undergone program reviews in the last 3 years, as shown in table 2. 
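The annual requirement discussed above amounts to a simple coverage check: for every system, the management, operational, and technical controls must each have been tested and evaluated within the past year. A sketch of such a check follows; the system history and dates are hypothetical.

```python
# Illustrative sketch (dates and history are hypothetical): flag the
# control families of a system that have not been tested within the
# past year, the minimum frequency FISMA allows.
from datetime import date, timedelta

CONTROL_FAMILIES = ("management", "operational", "technical")

def overdue_controls(last_tested, today):
    """Return the control families not tested within the last 365 days."""
    cutoff = today - timedelta(days=365)
    overdue = []
    for family in CONTROL_FAMILIES:
        tested_on = last_tested.get(family)  # None means never tested
        if tested_on is None or tested_on < cutoff:
            overdue.append(family)
    return overdue

# A system whose vulnerability scans cover technical controls but whose
# last program review (management/operational) was years ago:
history = {"technical": date(2005, 6, 1), "operational": date(2002, 6, 1)}
print(overdue_controls(history, today=date(2005, 9, 1)))
# -> ['management', 'operational']
```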
Until DLA tests and evaluates management and operational controls annually, critical systems may contain vulnerabilities that have not been identified or appropriately considered in decisions to authorize systems to operate. Moreover, DLA may not be able to ensure the confidentiality, integrity, and availability of the sensitive data that its systems process, store, and transmit. FISMA requires each agency to develop a process for planning, implementing, evaluating, and documenting remedial action plans to address any deficiencies in its information security policies, procedures, and practices. Developing effective corrective action plans is key to ensuring that remedial action is taken to address significant deficiencies. The Office of Management and Budget (OMB) requires agency chief information officers to document and report all agency information assurance weaknesses and remedial actions in plans of action and milestones. The plans should list each security weakness and the tasks, resources, milestones, and scheduled completion dates for remedying each weakness. The plans of action and milestones associated with the 10 systems we selected for review were incomplete. For example: none of the plans clearly documented and reported the nature of the security weakness; seven did not identify the start or completion dates for addressing the weakness; none specified the resources necessary to complete the action plan; nine did not list the risk associated with the security weakness; six were not based on the correct set of baseline security controls; and one plan contained steps to identify vulnerabilities rather than the steps required to remedy vulnerabilities. 
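The OMB-required elements of a plan of action and milestones can be represented as a simple record with a completeness check. This is an illustrative sketch only; the field and function names are hypothetical and are not drawn from DLA's reporting tool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record; fields follow the OMB-required elements:
# the weakness and the tasks, resources, milestones, and dates.
@dataclass
class PoamEntry:
    weakness: Optional[str] = None
    risk: Optional[str] = None
    resources: Optional[str] = None
    start_date: Optional[str] = None
    completion_date: Optional[str] = None
    tasks: List[str] = field(default_factory=list)

REQUIRED = ("weakness", "risk", "resources", "start_date", "completion_date")

def missing_elements(entry: PoamEntry) -> List[str]:
    """List the required elements a plan of action and milestones lacks."""
    gaps = [name for name in REQUIRED if getattr(entry, name) is None]
    if not entry.tasks:
        gaps.append("tasks")
    return gaps

# An incomplete entry like those found in the review: the weakness is
# named, but the risk, resources, and dates are absent.
entry = PoamEntry(weakness="Outdated security plan", tasks=["Update plan"])
print(missing_elements(entry))
# ['risk', 'resources', 'start_date', 'completion_date']
```

Applying such a check to every entry before submission would flag the kinds of gaps found in the 10 plans reviewed, such as missing dates, resources, and risk statements.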
A key reason for these weaknesses is that information assurance managers and information assurance officers reported that they did not understand the requirements for reporting system security vulnerabilities because DLA had not provided specific criteria or instructions on what--or how--to document and report plans of action and milestones for system deficiencies. Having reliable plans of action and milestones is not only vital to ensuring that DLA's information and information systems receive adequate protection, but it is also important for accurately managing and reporting progress on them. Without reliable plans, DLA does not have assurance that all information security weaknesses have been reported and that corrective actions will be taken to appropriately address the weaknesses. OMB requires that agencies establish a certification and accreditation process for formally authorizing systems to operate. Certification and accreditation is the requirement that agency management officials formally authorize their information systems to process information, thereby accepting the risk associated with their operation. This management authorization (accreditation) is to be supported by a formal technical evaluation (certification) of the management, operational, and technical controls established in an information system's security plan. The accreditation decision results in (1) a full authorization to operate, (2) an interim authorization to operate, or (3) no authorization to operate. DOD instructions and DLA's agency handbook provide guidance on the certification and accreditation process. According to DLA officials, the agency has implemented the practice of issuing authorization to operate decisions on a "time-limited" basis--regardless of whether certification tasks have been completed--because of concern that OMB might not support funding for systems that received an interim authorization to operate decision. 
However, OMB, DOD, and DLA policies and procedures do not allow for the practice of issuing "time-limited" authorizations; they require interim authorization to operate decisions when all certification tasks have not been completed. To illustrate, the designated approving authority for one of the 10 systems we reviewed changed the system's status from an interim authorization to operate to a "time-limited" authorization to operate even though several action items for such authorization had not been met, and this type of authorization is not allowed under current guidance. For example, information assurance personnel had not updated the security plan or completed a risk assessment. Unless DLA complies with the requirements for issuing accreditation decisions, it will not have assurance that its information systems are operating as intended and meeting security requirements. In addition, DLA did not effectively implement controls to verify the completion of certification tasks. As designed and implemented, DLA divides the responsibilities of the system certifier among the information assurance personnel at its locations and a central review team within the information assurance program office. To help ensure quality over the certification process, the central review team established a DLA quality review checklist to verify the certification tasks performed by the information assurance personnel. However, under the current process, the central review team did not interview information assurance personnel at the locations or conduct on-site visits to verify that certification tasks were performed. Instead, the central review team relied on documentation submitted to it by the information assurance personnel who performed the certification tasks. However, this documentation was not always adequate. 
For example, the checklist contained questions about whether physical access controls were adequate to protect all facilities housing user workstations, but for the central review team to verify such a task, either an on-site inspection or a diagram of the facility or other documentation demonstrating the physical access controls in place would have been needed. As a result, the certification process may not provide the authorizing official with objective or sufficient information that is necessary to make credible, risk-based decisions on whether to place an information system into operation. Key underlying reasons for the weaknesses in DLA's information security program were that the responsibilities of information assurance managers and information assurance officers were not consistently understood or communicated across the 10 DLA locations we reviewed, and that the information assurance program office did not maintain the accuracy and completeness of the data contained in the agency's primary reporting tool for managing and overseeing the agencywide information security program. The information assurance program office--as the agency's central security management group for managing and overseeing the security program--is responsible for providing overall security policy and guidance, along with oversight to ensure information assurance managers and information assurance officers adequately perform required information security activities such as those related to performing risk assessments, satisfying security training requirements, testing and evaluating the effectiveness of controls, documenting and reporting plans of action and milestones, and certifying and accrediting systems. Although the information assurance program office developed information security policies and procedures, it did not maintain them to ensure information assurance personnel had current and sufficient documentation to carry out their responsibilities. 
For example, of the 17 information assurance managers and information assurance officers at the 10 locations we reviewed: nine were unaware of the requirement for security training specific to an employee's information security responsibilities; and three were unaware of the requirement to perform annual self-assessments, while ten others had varying understandings of how this requirement was to be met. In addition, data on key information security activities contained in the primary reporting tool were inaccurate or incomplete. For example, for a year, the information assurance program office had not entered weaknesses that had been identified during information assurance program reviews into the primary reporting tool; information assurance personnel at DLA locations used personal discretion in determining whether to report a system deficiency to the information assurance program office for entry and compilation in the primary reporting tool, thereby potentially underreporting agency-level plans of action and milestones; and information assurance personnel at both headquarters and the DLA locations did not consistently enter key performance metrics related to plans of action and milestones and security training, thereby potentially underreporting important information used to gauge the health of the security program. A key reason for these weaknesses was that DLA had no documentation on the system design or its intended use and, therefore, had no instructional material to guide users. As a result, the data in the primary reporting tool were not reliable or effective for reporting metrics to DOD and OMB for FISMA evaluation reporting. Moreover, because key information had not been entered into the database, the agency did not readily have all the information about the deficiencies of its program and, therefore, did not have complete information about the security posture of its program. 
DLA senior officials recognize that the agency's primary reporting tool has not been effectively implemented and used to manage and oversee the security program. Therefore, the agency developed an ad hoc process of data calls to the DLA locations to aggregate the performance data. However, continuation of this ad hoc process will likely not provide the reliable data needed to consistently satisfy FISMA reporting requirements. Until agencywide policies and procedures are sufficiently documented and implemented and are consistently understood and used across the agency, DLA's ability to protect the information and information systems that support its mission will be limited. DLA has not fully implemented its agencywide information security program, thereby jeopardizing the confidentiality, integrity, and availability of the information and information systems that it relies on to accomplish its mission. Specifically, DLA has not consistently implemented important information security practices and controls, including consistently assessing risk; ensuring that training is provided for employees who have significant responsibilities for information security, and that security training plans are updated and maintained; annually testing and evaluating the effectiveness of management, operational, and technical controls; documenting and reporting complete plans of action and milestones; implementing a fully effective certification and accreditation process; and maintaining the accuracy and completeness of the data contained in the primary reporting tool. Although DLA's efforts in developing and implementing its information security program have merit, it has not taken all the necessary steps to ensure the security of the information and information systems that support its operations. 
Ensuring that the agency implements key information security practices and controls requires top management support and leadership and consistent and effective management oversight and monitoring. Until DLA takes steps to address these weaknesses and fully implements its information security program, it will have limited assurance that agency operations and assets are adequately protected. To assist DLA in fully implementing its information security program, we are making recommendations to the Secretary of Defense to direct the DLA Director to implement key information security practices and controls by: consistently assessing risks that could result from the unauthorized access, use, disclosure, or destruction of information and information systems; ensuring that training is provided for employees who have significant responsibilities for information security; ensuring that security training plans are updated and maintained; ensuring appropriate monitoring of the agency's security training program; ensuring that annual security test and evaluation activities include management, operational, and technical controls of every information system in DLA's inventory; documenting and reporting complete plans of action and milestones; establishing specific guidance or instructions to information assurance managers and information assurance officers on what--or how--to document and report plans of action and milestones for system deficiencies; discontinuing the practice of issuing "time-limited" authorization to operate accreditation decisions when certification tasks have not been completed; ensuring that the DLA central review team verifies that certification tasks have been completed; and maintaining the accuracy and completeness of the data contained in the agency's primary reporting tool for recording, tracking, and reporting performance metrics on information security practices and controls. In providing written comments on a draft of this report (reprinted in app. 
II), the Deputy Under Secretary of Defense (Business Transformation) concurred with most of our recommendations and described ongoing and planned efforts to address them. Specifically, he stated that DLA has taken several actions to fully implement an effective agencywide information security program, including publishing a DOD manual that will soon be released to provide detailed guidance on training for employees who have significant information security responsibility. He also stated that DLA is issuing an interim mandatory guide that will soon be released to assist users in documenting and preparing plans of action and milestones, and reinforcing policy requirements for making accreditation decisions. The Deputy Under Secretary of Defense disagreed with our draft recommendation to ensure the testing and evaluation of the effectiveness of security controls for all systems annually. He stated that this recommendation would require all information assurance controls for all systems be tested and evaluated every year, which essentially amounts to annual recertification. The department further stated that the level of test and evaluation is neither practical nor cost-effective and that the combination of DLA's assessments, tests, and reviews allow them to ensure compliance of their controls in accordance with DOD Instruction 8500.2. The intent of our draft recommendation was not to require that all information assurance controls for all systems be tested and evaluated annually. Rather, the intent of our draft recommendation, consistent with FISMA requirements, was to ensure that DLA's annual security test and evaluation activities include management, operational, and technical controls of every information system in its inventory. As stated in our report, while DLA generally assessed technical controls annually of every system in its inventory, it did not annually test and evaluate management and operational controls of those systems. 
We agree that testing and evaluating all controls for every system annually may not be cost-effective. However, unless DLA's annual testing and evaluation activities include management and operational controls, as well as the technical controls of its systems, it may not be able to ensure the confidentiality, integrity, and availability of its information and information systems. Accordingly, we have clarified our recommendation to state that the Secretary of Defense direct the DLA Director to ensure that annual security test and evaluation activities include management, operational, and technical controls of every information system in DLA's inventory. The Deputy Under Secretary of Defense also disagreed with our draft recommendation to document procedures for performing certification responsibilities that include specific responsibilities related to using the checklist. He stated that the Secretary of Defense provided sufficient direction to agency directors on the certification and accreditation process through DOD Instruction 5200.40, and that additional guidelines on the certification and accreditation process are provided in DOD 8510.1-M. He further stated that DOD 8510.1-M contains a "minimum activities checklist" that all DOD Components are expected to follow when conducting certifications and that DLA's information assurance One Book policy includes roles and responsibilities for performing security certification and accreditation. Our draft recommendation refers to the DLA quality review checklist used by the agency's central review team to verify completion of certification tasks, not to the DOD "minimum activities checklist" described in DOD 8510.1-M. 
Unless certification tasks performed by information assurance personnel at the various DLA locations have been verified, the authorizing official may not have objective or sufficient information that is necessary to make credible, risk-based decisions on whether to place an information system into operation. Accordingly, we have clarified our recommendation to state that the Secretary of Defense direct the DLA Director to ensure that the DLA central review team verifies that certification tasks have been completed. The Deputy Under Secretary of Defense also disagreed with our draft recommendation to update and maintain the agency's primary reporting tool for recording, tracking, and reporting performance metrics on information security practices and controls. He stated that the primary reporting tool was developed and maintained by DLA and that responsibility for updating and sustaining the tool was transferred to an internal application development team for continued maintenance and support. He also stated that DLA initiated implementation of enterprise standard DOD solutions that will replace the functionality currently provided by the agency reporting tool and that sustainment of the tool would not be cost effective or efficient. The intent of our draft recommendation was to update and maintain the accuracy and completeness of data entered into DLA's primary reporting tool, not the application programs. While DLA has several initiatives underway at various stages of development and implementation that are intended to introduce new functionality or replace some of the existing functionality in the agency reporting tool, none of these initiatives have been fully implemented throughout the agency. 
If DLA continues to use a tool for managing and overseeing its information assurance program, the fundamental practice of having accurate and complete data--whether in the current tool or in a future tool--is important to ensure the data are reliable for reporting performance metrics on key information security practices and controls to DOD and OMB for FISMA evaluation reporting. Accordingly, we have clarified our recommendation to state that the Secretary of Defense direct the DLA Director to maintain the accuracy and completeness of the data contained in the agency's primary reporting tool for recording, tracking, and reporting performance metrics on information security practices and controls. We are sending copies of this report to the Deputy Under Secretary of Defense (Business Transformation); Assistant Secretary of Defense, Networks and Information Integration; DLA Director; officials within DLA's Information Operations and Information Assurance office; and the Acting DOD Inspector General. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine whether the Defense Logistics Agency (DLA) had implemented an effective agencywide information security program, we reviewed the Department of Defense (DOD) and agencywide information security policies, directives, instructions, and handbooks. 
We also evaluated DLA's agencywide tool--the Comprehensive Information Assurance Knowledgebase--for aggregating the agency's performance data on information security activities that are required by the Federal Information Security Management Act of 2002 (FISMA), such as the number and percentage of risk assessments performed, employees with significant information security responsibilities that received training to perform their duties, and weaknesses for which the agency had plans of action and milestones. To gain insight into DLA's certification and accreditation process, we reviewed the agency's methods and practices for identifying vulnerabilities and risks and the process for certifying systems and making accreditation decisions. We assessed whether DLA's information security program was consistent with relevant DOD policies and procedures, as well as with the requirements of FISMA, applicable Office of Management and Budget (OMB) policies, and National Institute of Standards and Technology (NIST) guidance. We also assessed whether selected information security plans and documents related to risk assessments, testing and evaluation, and plans of action and milestones were current and complete. To accomplish this, we non-randomly selected 10 sensitive but unclassified systems. The 10 systems came from 10 different DLA locations and included 3 systems, 4 sites, and 3 types. We selected these systems to maximize variety in criticality and geographic locations. We also conducted telephone interviews with 17 information assurance managers and information assurance officers from the 10 locations in order to gain insight into their understanding of FISMA requirements, relevant OMB policies, NIST guidance, and agencywide and DOD policies and procedures. We performed our review at DLA Headquarters, located at Ft. 
Belvoir, Virginia; DLA Supply Center, located at Columbus, Ohio; and DLA's Business Processing Center, located at Denver, Colorado, from September 2004 to July 2005, in accordance with generally accepted government auditing standards. In addition to the individual named above, Jenniffer Wilson, Assistant Director, Barbara Collier, Joanne Fiorino, Sharon Kittrell, Frank Maguire, John Ortiz, and Chuck Roney made key contributions to this report.
The Defense Logistics Agency's (DLA) mission is, in part, to provide food, fuel, medical supplies, clothing, spare parts for weapon systems, and construction materials to sustain military operations and combat readiness. To protect the information and information systems that support its mission, it is critical that DLA implement an effective information security program. GAO was asked to review the efficiency and effectiveness of DLA's operations, including its information security program. In response, GAO determined whether the agency had implemented an effective information security program. Although DLA has made progress in implementing important elements of its information security program, including establishing a central security management group and appointing a senior information security officer to manage the program, it has not yet fully implemented other essential elements. For example, the agency did not consistently assess risks for its information systems; sufficiently train employees who have significant information security responsibilities or adequately complete training plans; annually test and evaluate the effectiveness of management and operational security controls; or sufficiently complete plans of action and milestones for mitigating known information security deficiencies. In addition, DLA has not implemented a fully effective certification and accreditation process for authorizing the operation of its information systems. Key reasons for these weaknesses are that responsibilities of information security employees were not consistently understood or communicated and DLA has not adequately maintained the accuracy and completeness of data contained in its primary reporting tool for overseeing the agency's performance in implementing key information security activities and controls. 
Until the agency addresses these weaknesses and fully implements an effective agencywide information security program, it may not be able to protect the confidentiality, integrity, and availability of its information and information systems, and it may not have complete and accurate performance data for key information security practices and controls.
Global exports of defense equipment have decreased significantly since the end of the Cold War in the late 1980s. Major arms producing countries, such as the United States and those in Western Europe, have reduced their procurement of defense equipment by about one-quarter from 1986 levels, in constant dollars. Overall, European nations have decreased their defense research and development spending over the last 3 years; that spending is one-third of the relatively stable U.S. research and development funding. Defense exports declined by over 70 percent between 1987 and 1994. In response to decreased demand in the U.S. defense market, U.S. defense companies have consolidated, merged with other companies, or sold off their less profitable divisions, and they are seeking sales in international markets to make up lost revenue. These companies often compete with European defense companies for sales in Europe and in other parts of the world. The U.S. government, led by DOD, has maintained bilateral trade agreements with 21 of its allies, including most European countries, to address barriers to defense trade and international cooperation. No multilateral agreement exists on defense trade issues. Bilateral agreements have been established to provide a framework for discussions about opening defense markets with those countries as a way of improving the interoperability and standardization of equipment among North Atlantic Treaty Organization (NATO) allies. The United States has enjoyed a favorable balance of defense trade, which is still an issue of contention with some of the major arms producing countries in Europe. This trade imbalance was cited in a 1990 U.S. government study as a justification for European governments requiring defense offsets. However, because European investment in defense research and development is significantly below U.S. 
levels, a Department of Commerce official stated that European industry is at a competitive disadvantage in meeting future military performance requirements. Reciprocal trade agreements recognize the need to develop and maintain an advanced technological capability for NATO and enhance equipment cooperation among the individual European member nations. A senior NATO official stated that Europe's ability to develop an independent security capability within NATO and meet its fair share of alliance obligations is contingent on its ability to consolidate its defense industrial base. This official indicated that if such a consolidation does not occur, then European governments may be less willing to meet their NATO obligations. European governments have made slow, gradual progress in developing and implementing unified armament initiatives. These initiatives are slow to evolve because the individual European nations often have conflicting goals and views on implementing procedures, as well as a reluctance to yield national sovereignty. In addition, the various European defense organizations do not include all of the same member countries, making it difficult to establish a pan-European armament policy. European officials see the formation of a more unified European defense market as crucial to the survival of their defense industries as well as their ability to maintain an independent foreign and security policy. Individual national markets are seen as too small to support an efficient industry, particularly in light of declining defense budgets. At the same time, mergers and consolidations of U.S. defense companies are generating concern about the long-term competitiveness of a smaller, fragmented European defense industry. In the past, European governments made several attempts to integrate the European defense market using a variety of organizations. 
The Western European Union (WEU), the European Union, and NATO are among the institutions composed of different member nations that have addressed European armament policy issues (see fig. 1). For example, in 1976, the defense ministers of the European NATO nations established the Independent European Program Group as a forum for armament cooperation. This group operated without a legal charter, and its decisions were not binding among the member nations. In 1992, the European defense ministers decided that the group's functions should be transferred to WEU, and the Western European Armaments Group was later created as the forum within WEU for armament cooperation. In 1991, WEU called for an examination of opportunities to enhance armament cooperation with the goal of creating a European armaments agency. WEU declared that it would develop as the defense component of the European Union and would formulate a common European defense policy. It also agreed to strengthen the European pillar within NATO. Under WEU, the Western European Armaments Group studied development of an armaments agency that would undertake procurement on behalf of member nations, but agreement could not be reached on the procurement procedures such an agency would follow. Appendix I is a chronology of key events associated with the development of an integrated European defense market. In 1996, two new armament agencies were formed. OCCAR was created as a joint management organization for France, Germany, Italy, and the United Kingdom, and the Western European Armaments Organization (WEAO) was created as a subsidiary body of WEU. As shown in table 1, the two agencies are separate entities with different functions. OCCAR was created as a result of French and German dissatisfaction with the lack of progress WEU was making in establishing a European armaments agency. 
Joined by Italy and the United Kingdom, the four nations agreed on November 12, 1996, to form OCCAR as a management organization for joint programs involving two or more member nations. OCCAR's goals are to create greater efficiency in program management and facilitate emergence of a more unified market. Although press accounts raised concerns that OCCAR member countries would give preference to European products, no such preference was included in OCCAR's procurement principles. Instead, it was agreed that an OCCAR member would give preference to procuring equipment that it helped to develop. In establishing OCCAR, the Defense ministers of the member countries agreed that OCCAR was to have a competitive procurement policy. Competition is to be open to all 13 member countries of the Western European Armaments Group. Other countries, including the United States, will be invited to compete when OCCAR program participants unanimously agree to open competitions to these countries based on reciprocity. OCCAR officials have indicated that procedures for implementing the competition policy, including criteria for evaluating reciprocity, have not yet been defined. According to some U.S. government and industry officials, issues to consider will include whether U.S. companies will be excluded from OCCAR procurement or whether OCCAR procurement policy will be consistent with the reciprocal trade agreements between member countries and the United States. OCCAR's impact on the European defense market will largely depend on the number of programs that it manages. OCCAR members are discussing integrating additional programs in the future but are expected to only administer joint programs involving participating nations, thereby excluding transatlantic, NATO, or European cooperative programs involving non-OCCAR nations. Some European nations, such as France and Germany, are committed to undertaking new programs on a cooperative basis. 
While intra-European cooperation is not new, French Ministry of Defense officials have indicated that this represents a change for France because they no longer intend to develop a wide range of weapon programs on their own. On November 19, 1996, a week after OCCAR was created, the WEU Ministerial Council established WEAO to improve coordination of collaborative defense research projects by creating a single contracting entity. As a WEU subsidiary body, WEAO has legal authority to administer contracts, unlike OCCAR, which operates without a legal charter and has no authority to sign contracts for the programs it is to administer. WEAO's initial task is to manage the Western European Armaments Group's research and technology activities, while OCCAR is to manage the development and procurement of weapon systems. The WEAO executive body has responsibility for soliciting and evaluating bids and awarding contracts for common research activities. This single contracting entity eliminated the need to administer contracts through the different national contracting authorities. According to WEAO documentation, the organization was intentionally designed to allow it to evolve into a European armaments agency. However, it may take several years before the effect of OCCAR and WEAO procurement policies can be fully assessed. Some European government officials also told us that OCCAR's ability to centrally administer contracts is curtailed until OCCAR obtains legal authority. U.S. government and industry officials are watching to see whether OCCAR and other initiatives are fostering political pressure and tendencies toward pan-European exclusivity. As membership of the various European organizations expands, pressure to buy European defense equipment may increase. 
For example, according to some industry officials, the new European members of NATO are already being encouraged by some Western European governments to buy European defense products to ease their entry into other European organizations. While European government initiatives appear to be making slow, gradual progress, the European defense industry is attempting to consolidate and restructure through national and cross-border mergers, acquisitions, joint ventures, and consortia. European government and industry observers have noted that European defense industry is reacting to pressures from rapid U.S. defense industry consolidation, tighter defense budgets, and stronger competition in the global defense market. Even with such pressures, other observers have noted that European defense companies are consolidating at a slower pace than U.S. defense companies. The combined defense expenditures of Western Europe are about 60 percent of the U.S. defense budget, but Western Europe has two to three times more suppliers, according to a 1997 Merrill Lynch study. For example, the United States will have two major suppliers in the military aircraft sector (once proposed mergers are approved), while six European nations each have at least one major supplier of military combat aircraft. In terms of defense revenues, U.S. defense companies tend to outpace European defense companies. Among the world's top 10 arms producing companies in 1994, 8 were U.S. companies and 2 were European companies. While economic pressures to consolidate exist, European defense companies face several obstacles, according to European government and industry officials. For example, national governments, which greatly influence the defense industry and often regard their defense companies as sovereign assets, may not want a cross-border consolidation to occur because it could reduce the national defense industrial base or make it too specialized. 
National governments further impede defense industrial integration by establishing different defense equipment requirements. Complex ownership structures also make cross-border mergers difficult because many of the larger European defense companies are state-owned or part of larger conglomerates. To varying degrees, defense industry restructuring has occurred within the borders of major European defense producing nations, including France, Germany, Italy, and the United Kingdom. In France, Thomson CSF and Aerospatiale formed a company, Sextant Avionique, that regrouped and merged their avionics and flight electronics activities. The French government initiated discussions in 1996 about the merger of the aviation companies Aerospatiale and Dassault, but negotiations are ongoing. In Germany, restructuring has primarily occurred in the aerospace sector. In 1995, Deutsche Aerospace became Daimler-Benz Aerospace, which includes about 80 percent of German industrial capabilities in aerospace. In Italy, by 1995 Finmeccanica had gained control of about three-quarters of the Italian defense industry, including Italy's major helicopter manufacturer Agusta and aircraft manufacturer Alenia. In the United Kingdom, a number of mergers and acquisitions have occurred. For example, GKN purchased the helicopter manufacturer Westland and GEC purchased the military vehicle and shipbuilder VSEL in 1994. European companies have long partnered on cooperative armament programs for the development and production of large complex weapon systems in Europe. Often, a central management company has been created to manage the relationship between partners. For example, major aerospace companies from the United Kingdom, Germany, Italy, and Spain have created a consortium to work on the Eurofighter 2000 program. Another cooperative venture is the development of the European military transport aircraft known as the Future Large Aircraft. 
Companies from a number of European nations are forming a joint venture company for the development and production of this aircraft. Project-based joint ventures are typically industry-led, but they are established with the consent of the governments involved. (See table 2 for examples of European defense company cooperative business activities for major weapon programs.) Although most cross-border industry cooperation is project-specific, European defense companies are also acquiring companies or establishing joint ventures or cross-shareholdings that are not tied to a particular program. Some cross-border European consolidation has occurred in missiles, defense electronics, and space systems. For example, in 1996, Matra (France) and British Aerospace (United Kingdom) merged their missile activities to form Matra BAe Dynamics. Both companies retained a 50-percent share in the joint venture, but they have a single management structure and a plan to gradually integrate their missile manufacturing facilities. Figure 2 highlights some examples of consolidation in specific defense sectors. Despite attempts to develop a unified European armament policy, individual European governments still retain their own defense procurement policies. Key European countries, including France, Germany, Italy, the Netherlands, and the United Kingdom, vary in their willingness to purchase major U.S. defense equipment. These countries have been involved in efforts to form a unified European defense market, which some observers believe may lead to excluding U.S. defense companies from participating in that market. However, U.S. defense companies continue to sell significant amounts of defense equipment to certain European countries in certain product lines. Europe has a large, diverse defense industrial base on which key European nations rely for purchases of major defense equipment. 
As in the United States, these European countries purchase the majority of their defense equipment from national sources. For example, the United Kingdom aims to competitively award about three-quarters of its defense contracts, with U.K. companies winning at least 90 percent of the contracts over the past several years. According to French Ministry of Defense officials, imports represented only 2 percent of France's total defense procurements over the past 5 years. Germany and Italy each produced at least 80 percent of their national requirements for military equipment over the past several years. Despite its relatively small size, the Dutch defense industry supplied the majority of defense items to the Netherlands. Notwithstanding European preference for domestically developed weapons, U.S. defense companies have sold a significant amount of weapons to Western European countries either directly or through the U.S. government's Foreign Military Sales program. These sales tended to be concentrated in certain countries and products. U.S. foreign military sales of defense equipment to Europe accounted for about $20 billion from 1992 to 1996. Europe was the second largest purchaser of U.S. defense items based on arms delivery data, following the Middle East. The leading European purchasers of U.S. defense equipment were Turkey, Finland, Greece, Switzerland, the Netherlands, and the United Kingdom. U.S. defense companies had greater success in selling aircraft and missiles to Western Europe than they did for tanks and ships. Of the almost $20 billion of U.S. foreign military sales, about $15 billion, or 75 percent, was for sales of military aircraft, aircraft spares, and aircraft modifications. About $3 billion, or 13 percent of total equipment sales, was for sales of missiles. Ships and military vehicles accounted for $552 million, or less than 3 percent of the total U.S. defense equipment sales. Figure 3 shows U.S. 
defense equipment sales to Western Europe by major weapon categories. According to U.S. defense company officials, sales of military aircraft to Europe are expected to be important in future competitions, particularly in the emerging defense markets in central Europe. Competition between major U.S. and European defense companies for aircraft sales in these markets is expected to be intense. U.S. defense companies varied in their success in winning the major European defense competitions that were open to foreign bidders. The Netherlands and the United Kingdom have bought major U.S. weapon systems over the last 5 years, even when European options were available. The United States is the largest supplier of defense imports to both the Netherlands and the United Kingdom. Both of these countries have stated open competition policies that seek the best defense equipment for the best value. In the major defense competitions in these countries in which U.S. companies won, U.S. industry and government officials stated that the factors that contributed to the success included the uniqueness and technical sophistication of the U.S. systems, industrial participation opportunities offered to local companies, and the absence of a domestically developed product in the competition. For example, in the sale of the U.S. Apache helicopter to the Netherlands and the United Kingdom, there was no competing domestically developed national option, the product was technically sophisticated, and significant industrial participation was offered to domestic defense companies. In the major defense competitions in which U.S. companies competed in the United Kingdom over the last 5 years, the U.K. government tended to choose a domestically developed product when one existed. In some cases, these products contained significant U.S. content. For example, in the competition for the U.K. Replacement Maritime Patrol Aircraft, the two U.S. 
competing products lost to a British Aerospace developed product, the upgraded NIMROD aircraft. This British Aerospace product, however, contained significant U.S. content with major components coming from such companies as Boeing. In the Conventionally Armed Standoff Missile competition, Matra British Aerospace Dynamics' Stormshadow (a U.K.-French developed option) won. In this case, the competing U.S. products were competitively priced, met the technical requirements, and would have provided significant opportunities for U.K. industrial participation. Table 3 provides details on some U.K. major procurements in which U.S. defense companies competed. France has purchased major U.S. defense weapon systems when no French or European option is available. In contrast to the Netherlands and the United Kingdom, the French defense procurement policy has been to first buy equipment from French sources, then to pursue European cooperative solutions, and lastly to import a non-European item. Recently, French armament policy has put primary emphasis on European cooperative programs, recognizing that it will not be economical to develop major systems alone in the future. The procurement policy reflects France's goal to retain a defense industrial base and maintain autonomy in national security matters. As illustrated in table 4, the French government made two significant purchases from the United States in 1995 when it was not economical for French companies to produce comparable equipment or when it would have taken too long to develop. Germany and Italy have made limited purchases of U.S. defense equipment in recent years because of significantly reduced defense procurement budgets and commitments to European cooperative projects. Both countries now have an open competition defense procurement policy and buy a mixture of U.S. and European products. The largest share of these countries' defense imports is supplied by the United States. 
In recent major defense equipment purchases from the United States, both Germany and Italy reduced quantities to reserve a portion of their procurement funding for European cooperative solutions. For example, Italy purchased the U.S. C-130J transport aircraft but continued to provide funding for a cooperative European transport aircraft program. As in the other European countries, Germany and Italy encourage U.S. companies to provide opportunities for local industrial participation when selling defense equipment. Table 5 highlights German defense procurement policy and a selected major procurement. As European nations work toward greater armament cooperation, competition for sales in Europe is likely to increase. To mitigate potential protectionism and negative effects on U.S.-European defense trade, both the U.S. defense industry and government have taken steps to improve transatlantic cooperation. U.S. defense companies are taking the lead in forming transatlantic ties to gain access to the European market. The U.S. government is also seeking opportunities to form transatlantic partnerships with its European allies on defense equipment development and production, but some observers point to practical and cultural impediments that affect the extent of such cooperation. U.S. defense companies are forming industrial partnerships with European companies to sell defense equipment to Europe because of the need to increase international sales, satisfy offset obligations, and maintain market access. Most of these partnerships are formed to bid on a particular weapon competition. Some, however, are emerging to sell products to worldwide markets. According to U.S. defense companies, partnering with European companies has become a necessary way of doing business in Europe. U.S. government and defense company officials have cited the importance of industrial partnerships with European companies in winning defense sales there. Many of these partnerships arose out of U.S. 
companies' need to fulfill offset obligations on European defense sales by providing European companies with subcontract work. When U.S. companies had to find ways to satisfy the customary 100-percent offset obligation on defense contracts in Europe, they began to form industrial partnerships with European companies. With the declining U.S. defense budget after the end of the Cold War, many U.S. companies began to look for ways to increase their international defense sales in Europe and elsewhere. According to some U.S. company officials, they realized that many European government buyers did not want to buy commercially available defense equipment but wanted their own companies to participate in producing weapon systems to maintain their defense industrial base. Forming industrial partnerships was the only way that U.S. companies believed they could win sales in many European countries that were trying to preserve their own defense industries. In addition, several U.S. company officials have indicated that European governments have been pressuring each other in the last several years to purchase defense equipment from European companies before considering U.S. options. These officials stated that even countries that do not have large defense industries to support were being encouraged by other European countries to purchase European defense equipment for the economic good of the European Union. U.S. company officials believe that by forming industrial partnerships with European companies, they increase their ability to win defense contracts in Europe. U.S. defense companies form a variety of industrial partnerships with European companies, including subcontracting arrangements, joint ventures, international consortia, and teaming agreements. Examples of each are discussed in table 6. According to some U.S. defense company officials, most U.S. 
industrial partnerships with European companies, whatever the form, are to produce or market a specific defense item. Some U.S. defense companies, however, are using the partnerships to create long-term alliances and interdependencies with European companies that extend beyond one sale. For example, Lockheed Martin has formed an industrial partnership with the Italian company Alenia to convert an Italian aircraft to satisfy an emerging market for small military transport aircraft. This arrangement arose out of a transaction involving the sale of C-130J transport aircraft to Italy. Some U.S. defense company officials see the establishment of long-term industrial partnerships as a way of improving transatlantic defense trade and countering efforts toward European protectionism. DOD has taken a number of steps over the last few years to improve defense trade and transatlantic cooperation. For example, it has revised its guidance on considering foreign suppliers in defense acquisitions and has removed some of the restrictions on buying defense equipment from overseas. In addition, senior DOD officials have shown renewed interest in international cooperative defense programs with U.S. allies in Europe and are actively seeking such opportunities. Despite some of these efforts, some observers have cautioned that a number of factors may hinder shifts in U.S.-European defense cooperative production programs on major weapons. The following U.S. policy changes have been made that may help to improve defense trade: A DOD directive issued in March 1996 sets out a hierarchy of acquiring defense equipment that places commercially available equipment from allies and cooperative development programs with allies, ahead of a new U.S. equipment development program. According to some U.S. government and defense industry officials, many military program managers traditionally would have favored a new domestic development program when deciding how to satisfy a military requirement. 
In April 1997, the Office of the Secretary of Defense announced that DOD would favorably consider requests for transfers of software documentation to allies. In the past, such requests were often denied, which was cited by U.S. government officials as a barrier to improved defense trade and cooperation with the United States. In April 1997, the Under Secretary of Defense (Acquisition and Technology) waived certain buy-national restrictions for countries with whom the United States had reciprocal trade agreements. This waiver allows DOD to procure from foreign suppliers certain defense equipment that was previously restricted to domestic sources. European government officials have cited U.S. buy-national restrictions as an obstacle to improving the reciprocal defense trade balance between the United States and Europe. DOD is also seeking ways to improve international cooperative programs with European countries through ongoing working groups and a special task force under the quadrennial review. Senior DOD officials have stated that the United States should take advantage of international armaments cooperation to leverage U.S. resources through cost-sharing and to improve standardization and interoperability of defense equipment with potential coalition partners. The U.S. government has participated in numerous international defense equipment cooperation activities with European countries, including research and development programs, data exchange agreements, and engineer and scientist exchanges, but these activities only occasionally resulted in cooperative production programs. More recently, senior DOD officials have paid increased attention to armaments cooperation with U.S. allies. In 1993, DOD established the Armaments Cooperation Steering Committee to improve cooperative programs. 
In its ongoing efforts, the Steering Committee established several International Cooperative Opportunities Groups in 1995 to address specific issues in armaments cooperation. In addition, the 1997 Quadrennial Defense Review, which identified military modernization needs, included an international cooperation task force to determine the defense technology areas in which the United States could collaborate with France, Germany, and the United Kingdom. In March 1997, the Secretary of Defense signed a memorandum stating that "it is DOD policy that we utilize international armaments cooperation to the maximum extent feasible." The U.S. government has a few ongoing cooperative development programs for major weapon systems, but most cooperative programs are at the technology level. Some observers indicated to us that there may be some impediments to pursuing U.S.-European defense cooperative programs on major weapon systems because (1) European procurement budgets are limited compared to the U.S. budget; (2) the potential that U.S. support for a program may change with each annual budget review may cause some European governments concern; (3) despite changes in DOD guidance, many military service program managers may be reluctant to engage in international cooperative programs due to the significant additional work that may be required and potential barriers that may arise, such as licensing and technology sharing restrictions; (4) many U.S. program managers may not consider purchasing from a foreign source due to the perceived technological superiority of U.S. weapons; and (5) European and U.S. governments have shown a desire to maintain an independent ability to provide for their national defense. Efforts have been made to develop a more unified European armament policy and defense industrial base. As regional unification efforts evolve, individual European nations still independently make procurement decisions, and these nations vary in their willingness to buy major U.S. 
weapon systems when European options exist. To maintain market access in Europe, U.S. defense companies have established transatlantic industrial partnerships. These industrial partnerships appear to be evolving more readily than transatlantic cooperative programs led by governments. Although the U.S. government has recently taken steps to improve defense trade and cooperation, some observers have indicated that practical and cultural impediments can affect transatlantic cooperation on major weapon programs. In commenting on a draft of this report, DOD concurred with the report, and the Department of Commerce stated that it found the report to be accurate and had no specific comments or recommended changes. The comments from DOD and the Department of Commerce are reprinted in appendixes II and III, respectively. DOD also separately provided some technical suggestions, which we have incorporated in the text where appropriate. To identify European government defense integration plans and activities, we examined European Union, WEU, OCCAR, and NATO documents and publications. We developed a chronology of key events associated with the development of an integrated European defense market. We interviewed European Union, Western European Armaments Group, OCCAR, and NATO officials about European initiatives affecting trade and cooperation and their progress in meeting their goals. We also discussed these issues with officials at the U.S. mission to NATO, the U.S. mission to the European Union, and U.S. embassies in France, Germany, and the United Kingdom. We interviewed or obtained written responses from officials from six major defense companies in France, Germany, and the United Kingdom about European industry consolidation. We identified relevant information and studies about European government and industry initiatives and discussed these issues with consulting firms and European think tanks. To assess how procurement policies of European nations affect U.S. 
defense companies' market access, we focused our analysis on five countries. We selected France, Germany, and the United Kingdom because they have the largest defense budgets in Europe and their defense industries comprise 85 percent of European defense production. Italy and the Netherlands were selected because they are significant producers and buyers of defense equipment. These five countries are also current members or seeking membership in OCCAR. We interviewed officials from 13 U.S. defense companies on the basis of their roles as prime contractors and subcontractors and range of defense products sold in Europe. Most of these companies represented prime contractors. Eight of these were among the top 10 U.S. defense companies, based on fiscal year 1995 DOD prime contract awards. We also discussed the major defense competitions that U.S. companies participated in over the last 5 years and the factors that contributed to the competitions' outcome with officials from these companies and with U.S. government officials. We discussed procurement policies with European and U.S. government officials. We met with Ministry of Defense officials in France, Germany, and the United Kingdom, as well as U.S. embassy officials in those countries. We did not conduct fieldwork in Italy or the Netherlands, but we did discuss these countries' procurement policies with officials from their embassies in Washington, D.C. We also reviewed documents describing the procurement policies and procedures of the selected countries and U.S. government assessments and cables about major defense contract awards that occurred in these countries and discussed factors affecting these procurement awards with U.S. government and industry officials. We did not review documentation on the bids or contract awards. 
We collected and analyzed data on defense budgets and defense trade, including foreign military and direct commercial sales, to identify buying patterns in Western Europe over the past 5 years. We used the foreign military sales data only to analyze sales by weapons category for the five countries and Western Europe. Direct commercial sales data, which are tracked by the State Department through export licenses, were not organized by weapon categories for the last 5 years. However, we reviewed congressional notification records for direct commercial sales over $14 million for the last 5 years to supplement our analysis of foreign military sales data. To determine actions the U.S. industry and government have taken in response to changes in the European defense environment, we interviewed defense company and U.S. government officials within DOD and the Departments of Commerce and State. With U.S. defense companies, we discussed their business strategies and the nature of the partnerships formed with European defense companies. We obtained and analyzed recently issued DOD directives and policy memorandums on defense trade and international cooperation and discussed the effectiveness of these policies with U.S. and foreign government officials and U.S. and European defense companies. We performed our review from January to September 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Secretaries of State and Commerce. We will also make copies available to others upon request. Please contact me at (202) 512-4181 if you have any questions concerning this report. Major contributors to this report were Karen Zuckerstein, Anne-Marie Lasowski, and John Neumann. Western European Union (WEU) was established as a result of the agreements signed in Paris in October 1954 modifying the 1948 Brussels Treaty. The Treaty of Rome was signed, creating the European Community. 
The Independent European Programme Group was established to promote European cooperation in research, development, and production of defense equipment; improve transatlantic armament cooperation; and maintain a healthy European defense industrial base. The Treaty on European Union was signed in Maastricht but was subject to ratification. The WEU member states also met in Maastricht and invited members of the European Union to accede to WEU or become observers, and other European members of the North Atlantic Treaty Organization (NATO) to become associate members of WEU. The Council of the WEU held its first formal meeting with NATO. The European Defense Ministers decided to transfer the Independent European Programme Group's functions to WEU. The Maastricht Treaty was ratified and the European Community became the European Union. French and German Ministers of Defense decided to simplify the management of joint armament research and development programs. The proposal for a Franco-German procurement agency emerged. A NATO summit was held, which supported the development of a European Security and Defense Identity and the strengthening of the European pillar of the Alliance. WEU Ministers issued the Noordwijk Declaration, endorsing a policy document containing preliminary conclusions on the formation of a common European defense policy. The European Union Intergovernmental Conference, or constitutional convention, convened. The Defense Ministers of France, Germany, Italy, and the United Kingdom signed the political foundation document for the joint armaments agency Organisme Conjoint de Cooperation en Matiere d'Armament (OCCAR). The Western European Armaments Organization was established, creating a subsidiary body within WEU to administer research and development contracts. The four National Armaments Directors of France, Germany, Italy, and the United Kingdom met during the first meeting of the Board of Supervisors of OCCAR. 
The board reached decisions about OCCAR's organizational structure and programs to manage. The European Union Intergovernmental Conference concluded. A new treaty was drafted, but little progress was made toward developing a common foreign and security policy. The treaty called for the European Union to cooperate more closely with WEU, which might be integrated into the European Union if all member nations agree. The Board of Supervisors of OCCAR held a second meeting.
GAO reviewed the changes that have taken place in the European defense market over the past 5 years, focusing on: (1) what actions European governments and industry have taken to unify the European defense market; (2) how key European countries' defense procurement practices have affected U.S. defense companies' ability to compete on major weapons competitions in Europe; and (3) how the U.S. government and industry have adapted their policies or practices to the changing European defense environment. GAO's review focused on the buying practices of five European countries--France, Germany, Italy, the Netherlands, and the United Kingdom. GAO noted that: (1) pressure to develop a unified European armament procurement policy and related industrial base is increasing, as most nations can no longer afford to develop and procure defense items solely from their own domestic companies; (2) European governments have taken several initiatives to integrate the defense market, including the formation of two new organizations to improve armament cooperation; (3) European government officials remain committed to cooperative programs, which have long been the impetus for cross-border defense cooperation at the industry level; (4) some European defense companies are initiating cross-border mergers that are not tied to government cooperative programs; (5) although some progress toward regionalization is occurring, European government and industry officials told GAO that national sovereignty issues and complex ownership structures may inhibit European defense consolidation from occurring to the extent that is needed to be competitive; (6) until European governments agree on a unified armament policy, individual European countries will retain their own procurement policies; (7) like the United States, European countries tend to purchase major defense equipment from their domestic companies when such options exist; (8) when national options do not exist, key European countries vary in
their willingness to buy major U.S. weapon systems; (9) trans-Atlantic industrial partnerships appear to be evolving more readily than trans-Atlantic cooperative programs that are led by governments; (10) U.S. defense companies have established these trans-Atlantic partnerships largely to maintain market access in Europe; (11) U.S. defense company officials say they cannot export major defense items to Europe without involving European defense companies in the production of those items; (12) some U.S. defense companies are seeking long-term partnerships with European companies to develop a defense product line that will meet requirements in Europe or other defense markets; (13) they believe such industrial interdependence can also help counter any efforts toward U.S. or European protectionism and may increase trans-Atlantic defense trade; and (14) the U.S. government has taken several steps over the last few years to improve defense trade and trans-Atlantic cooperation, but some observers point to practical and cultural impediments that affect U.S.-European cooperation on major weapon programs.
VA's Office of Small and Disadvantaged Business Utilization (OSDBU) has overall responsibility for the verification program. OSDBU's Center for Verification and Evaluation (CVE) maintains the mandated database of verified SDVOSBs and VOSBs and is responsible for verification operations, such as application processing. VA's verification process consists of reviewing and analyzing a standardized set of documents submitted with each verification application. VA uses contractors to support its verification program and federal employees oversee the contractors and review and approve verification decisions. As of September 1, 2015, CVE had 15 federal employees and 156 contract staff (employed by five different contractors) verifying applications or filling supporting roles. CVE is funded by VA's Supply Fund, a self-supporting revolving fund that recovers its operating expenses through fees and markups on different products or services. CVE's final obligations for fiscal year 2014 were $17.9 million and its approved budget for fiscal year 2015 was $16.1 million, representing a decrease of about 10 percent ($1.8 million) from 2014. We and VA's Office of Inspector General previously found that VA has faced numerous challenges in operating the verification program. Our most recent work on this program in 2013 found that VA had made significant changes to address previously identified program weaknesses, but that it still faced challenges establishing a stable and efficient program to verify firms on a timely and consistent basis. Specifically, we found that VA consistently placed a higher priority on addressing immediate operational challenges than on developing a comprehensive, long-term strategic focus for the verification program--an approach that contributed to programmatic inefficiencies. We also found that VA's case management data system had shortcomings that hindered the agency's ability to operate, oversee, and monitor the program. 
Therefore, we recommended that VA (1) refine and implement a strategic plan with outcome-oriented long-term goals and performance measures, and (2) integrate efforts to modify or replace the program's data system with a broader strategic planning effort to ensure the system addresses the program's short- and long-term needs. VA adopted a strategic plan in 2013 and efforts to update its case management system are ongoing. In 2014, VA launched the MyVA Reorganization Plan in an effort to improve the efficiency and effectiveness of VA's services to veterans. The plan's strategy emphasizes improved service delivery, a veteran-centric culture, and an environment in which veteran perceptions are the indicator of VA's success. MyVA extends to all aspects of the agency's operations, including the verification program. In response to this organizational change, OSDBU is required to align its own strategy with MyVA and take steps to make its operations more customer service-oriented and veteran-centric. Based on our preliminary observations, VA has improved its timeliness for application processing, followed its policies for verifying businesses, continued to refine quality controls for the program, and improved communications with veterans. For instance, CVE reported its processing times have improved by more than 50 percent since October 2012, going from an average processing time of approximately 85 days to 41 days in fiscal year 2015. Additionally, VA officials told us that they have generally been meeting their processing goal of 60 days (from receipt of a complete application) and missed this goal for only 5 applications in fiscal year 2014 and 11 in fiscal year 2015. Our review of randomly selected application files corroborates that CVE has generally met its processing goals, but the verification process can take longer from a veteran's perspective.
In calculating processing times, CVE excludes any time spent waiting for additional information it asked firms to supply, so the actual number of days it takes an applicant to become verified is typically longer than what CVE reports. Our preliminary estimates are that it takes an average of 56 days (without stopping the regulatory clock while the veteran is preparing and submitting additional documents) from when CVE determines a firm's application is complete to when the firm receives notification of the verification determination. During that time, CVE is reviewing the application and potentially requesting and waiting for the applicant to submit additional information. Additionally, firms can submit and withdraw their application multiple times should they need to correct issues or wish to apply at a later date. Each time a firm resubmits an application, CVE resets the application processing clock, meaning that CVE's average case processing time does not account for instances where a firm withdraws and resubmits an application. VA officials said that allowing applicants to withdraw and resubmit multiple applications is an advantage to the veteran because veterans can make several attempts to become verified, and without allowing veterans to withdraw their applications, more veterans would receive denials and have to wait 6 months before submitting another application. However, this means that some veterans might perceive the application process as lengthy if they have submitted and withdrawn several applications in their attempt to become verified. For example, we estimated that for 15 percent of applications, it took the firm more than 4 months from the initial application date to receive a determination from CVE. 
Based on our initial review of application files, VA appeared to follow its policies and procedures for verifying SDVOSBs and VOSBs, which include checking the veteran and disability status of the applicant, conducting research on the firm from publicly available information, and reviewing business documents to determine compliance with eligibility requirements, such as direct majority ownership by the veteran, experience of the veteran manager, and the SBA small business size standard. However, we also found that VA did not have a policy requiring documentation of the rationale for assigning a risk level to an application, and did not document the rationale in an estimated 40 percent of the cases. After we notified the agency of this finding, VA implemented a procedure (in October 2015) to require documentation of the rationale. CVE has continued to refine its quality management system since our January 2013 report. For example, CVE has developed detailed written work instructions for each part of the verification process and a quality manual that documents the requirements of its quality management system. CVE officials said they update the work instructions on a regular basis. Additionally, CVE implemented an internal audit and continuous improvement process. As of September 2015, CVE had taken action on and closed 364 of 379 (96 percent) internal audit recommendations made from June 2014 through August 2015. Based on our review of internal audits conducted by CVE from September 2014 through February 2015, the findings generally identified information that was incomplete, unclear, missing, or not applicable to the current verification process. CVE also conducted post-verification site visits to 606 firms in fiscal year 2015 to check the accuracy of verification decisions and help ensure that firms continued to comply with program regulations.
CVE officials said the site visits identified two instances in which evaluators mistakenly verified a firm (a less than 1 percent error rate), and CVE issued 25 cancellations to firms found noncompliant with program regulations at the time of the site visit (a 4 percent noncompliance rate). CVE also monitors compliance by investigating potentially noncompliant firms identified through tips from external sources. CVE officials said they received about 400 such tips in 2014. Officials said that they investigate every credible tip by conducting public research, reviewing eligibility requirements related to the tip, and making a recommendation for corrective action, if necessary. We reviewed case files associated with 10 firms for which CVE received allegations of noncompliance from June 2014 through May 2015. These cases included one with an active status protest (a mechanism for interested parties to a contract award to protest if they feel a firm misrepresented its SDVOSB or VOSB status in its bid submission) and nine firms for which CVE received an e-mail allegation that the firm was not in compliance with program regulations (a few of these firms also recently received a status protest decision). CVE investigated 6 of 10 cases we reviewed, although it did not always document that an allegation of noncompliance had been received or that it was conducting a review of the firm's eligibility based on the allegation. In comparison, anytime a protest was filed against a verified firm, the case file had a note indicating the firm was the subject of a status protest and verification activities should be put on hold until the protest was resolved. We will continue to monitor these issues and report our final results early next year. 
Our preliminary work revealed that since our 2013 report, VA has made several changes to improve veterans' experiences with the verification program and reduced the percentage of firms that receive denials from 66 percent in 2012 to 5 percent in 2015, according to agency data. A few examples include the following.

- VA implemented procedures to allow firms to withdraw applications in order to avoid denials. For example, veterans can correct minor deficiencies or withdraw an application to address more complex problems instead of receiving a denial decision and having to wait 6 months to reapply.
- VA established procedures to communicate with verified firms and applicants about their verification status. According to VA officials, the agency sends e-mail reminders 120, 90, and 30 days before the expiration of a firm's verification status; contacts firms by telephone 90 days before expiration of verification status; and notifies firms in writing 30 days before cancelling verified status. Officials said they also send notifications to applicants to indicate that an application is complete, that additional documents are needed, and that a determination has been made.
- VA partnered with Procurement Technical Assistance Centers--funded through cooperative agreements with the Department of Defense--to provide verification assistance to veterans at no cost. VA trained more than 300 procurement counselors at the centers on the verification process so they could better assist veterans applying for verification.
- VA increased interaction with veterans by conducting monthly pre-application, reverification, and town hall webinars to provide information and assistance to verified firms and others interested in the program.
- VA provided resources for veterans on its website, such as fact sheets, verification assistance briefs, and standard operating procedures for the verification program. VA also has a tool on its website that allows firms to obtain a list of documents required for their application depending on the type of company they own.
- VA developed surveys to obtain feedback from firms (1) that go through the verification process, (2) that receive a site visit, (3) that leave the program, and (4) that participate in any pre-verification information sessions. CVE officials stated that they hope these surveys will allow them to more systematically collect feedback on different aspects of the program.

All of the verification assistance counselors and representatives of veterans' service organizations with whom we spoke noted that VA has improved its verification process, although most had some recommendations for areas for continued improvement. Three of the four verification assistance counselors we spoke with stated that VA's new policies to allow veterans to withdraw or submit changes to their application represented a positive change. Representatives of one veterans' group we spoke to stated that VA was doing a better job communicating with applicants on missing documentation and other potential issues. They also said VA was interacting more with veteran service organizations and veterans at conferences for veteran-owned small businesses and town hall meetings. However, three of the four verification assistance counselors noted that resources on VA's website for the verification program can be difficult to locate, and representatives from one veteran service organization said VA does not provide adequate documentation of the program standards for applicants. VA officials said they have been working with the strategic outreach team in OSDBU to redesign the website to make documents easier to locate.
Additionally, we determined that the standard operating procedures--documents to help veterans understand the verification process--posted on the website were from 2013 and did not reflect current procedures, such as the ability to withdraw an application after CVE's evaluation. When we notified VA of this issue, the agency updated the program's website to reflect current procedures and implemented a policy to review and update the operating procedures every 6 months. All of the verification assistance counselors we interviewed also stated that VA's determination letters to applicants could be clearer and that they include regulatory compliance language that could be difficult for some applicants to understand. VA officials maintained that the inclusion of regulatory language in the determination letters was necessary, but acknowledged that this language can present readability challenges. We also observed several instances in our review where a letter initially stated that documents were due on one date, and then later stated the applicant should disregard the initial statement and that documents were due on a different, earlier date. VA officials said this was due to a glitch in the system that generated the letters and this issue was resolved in May 2015. Despite the significant improvements VA has made to its verification program, it continues to face challenges establishing a more cost-effective, veteran-friendly verification process and acquiring an information technology system that meets the agency's needs. The efforts that VA has either made or currently has underway include restructuring the verification process, revising verification program regulations, changing the program's organizational structure, and developing a new case management system--some of which have been ongoing since our January 2013 report.
While these efforts are intended to help address some of the challenges associated with the verification program, VA lacks a comprehensive operational plan with specific actions and milestone dates for managing these efforts and achieving its long-term objectives for the program. Changes in the verification process. VA intends to restructure part of the verification process in an effort to make it more veteran-focused and cost-effective. According to OSDBU's Executive Director, VA embarked on these changes in response to the agency's new MyVA strategy and requests from the Supply Fund to design a veteran-centered process that highlights customer service and maximizes cost efficiency. In August 2015, VA began a pilot for a new verification process that makes a case manager the point of contact for the veteran and the coordinator of staff evaluating the application. According to the Executive Director, the new process is expected to provide cost savings to the agency by reducing the amount of time staff spend reviewing applications and addressing veterans' questions. Officials said the specific tasks staff perform to review applications would not change; rather, the new process would eliminate some redundancies and focus on the veteran's experience. Key differences between the new and current processes as described by CVE officials are shown in table 1. According to CVE officials, as of September 2015, 43 applications had been reviewed using the new pilot process and VA had begun collecting feedback from applicants. VA also has developed metrics to inform adjustments to the pilot and plans to calculate processing times for each application, according to CVE officials. Officials stated that VA plans to finalize the new process in October 2015 and fully transition to the new process by April 2016. 
VA has not yet conducted an analysis to determine the cost of the new pilot process as compared with the current process, but OSDBU's Executive Director said that he estimates the new pilot process will save the program about $2 million per year. Revisions to regulations. VA is continuing to make revisions to its program regulations. In 2013 we reported that VA had begun the process of modifying the verification program regulations to extend the verification period from 1 year to 2 years and published an interim final rule to this effect in late June 2012. In addition, VA began a process in 2013 to revise program regulations in order to account for common business practices that might otherwise lead to a denial decision under the current regulation. For example, in addressing the challenges associated with one current regulatory provision, CVE officials told us that VA plans to allow minority owners to vote on extraordinary business decisions such as closing or selling the business. Officials stated that the revisions to the regulation are not expected to provide cost and resource efficiencies, but are intended to provide clarity for veterans and increase their satisfaction with the process. As of September 2015, the regulation was undergoing internal review with VA's Office of General Counsel, according to CVE officials. Approach to site visits. According to CVE officials, VA plans to determine how many site visits should be conducted annually to maintain the quality of the program while minimizing cost. CVE officials told us that they plan to visit a random sample of 300 of the 2,312 verified firms that received VA contracts in fiscal years 2014 and 2015 (March 2014 through April 2015) and then calculate the percentage of firms found to be noncompliant with program requirements.
A high noncompliance rate could indicate that VA should increase the annual number of visits, while a low rate could indicate that VA should decrease or maintain the annual number of site visits it conducts, according to CVE officials. VA officials said that the statistical analysis will allow them to validate the noncompliance rate obtained from site visits conducted in fiscal year 2014 and that VA plans to complete its study by January 2016. We plan to include additional information on this study in our upcoming report. Reverification policy. VA revised its reverification policy in an effort to improve efficiency and customer service. According to CVE's Acting Director, reverification used to require nearly the same effort of CVE staff, contractors, and veterans as the full verification process. Under a new process CVE implemented in October 2015, CVE contractors are to conduct an initial meeting with the veteran to identify necessary documentation based on changes to the company since its last verification. These changes are intended to improve veterans' understanding of the requirements for reverification, and reduce the amount of time spent re-verifying applications, according to CVE officials. However, it is not yet clear how the change to the reverification procedure will impact the number and type of documents veterans will be required to submit. In addition, VA analyzed data obtained from its fiscal year 2014 site visits and concluded that there is no correlation between a firm's noncompliance and the time passed since its last verification. According to information provided by CVE officials, the agency therefore may be able to reduce the number of site visits conducted each year by lengthening the 2-year reverification cycle. Staffing and organizational structure. VA plans to fill vacant leadership positions and make changes to CVE's organizational structure to reflect the new verification process and align staffing resources with agency needs. 
In 2010, we noted that leadership and staff vacancies had contributed to the slow pace of implementation of the verification program. CVE has since filled most of its vacant positions. However, staffing at the senior level has been in flux. Since 2011, CVE has had three different directors, the last two of which have been acting directors. The deputy director position also was vacant from March 2014 to September 2015. OSDBU's Executive Director (who has overseen the overall verification program since 2011) indicated that VA would begin advertising for a CVE director in October 2015. VA has developed a draft organizational structure and position descriptions for the new verification process. According to CVE officials, it also has begun an analysis--using initial data from the new verification process pilot--to determine optimal staffing levels for implementing the new process and meeting the demand for verification. CVE officials stated that VA plans to continue using contractor staff to conduct its verification activities because the use of such staff allows VA the flexibility to adjust staffing levels as needed. As discussed earlier, CVE currently has 15 full-time federal employees and 156 contract staff. OSDBU's Executive Director stated that VA has contracts in place for the verification program through April 2016 and plans to start the process for securing new contracts in January 2016. Plans for case management system. VA has faced delays in replacing the verification program's outdated case management system. In our January 2013 report, we also identified deficiencies in VA's data system--such as a lack of certain data fields and workflow management capabilities needed to provide key information on program management--and recommended that VA modify or replace the system. VA hired a contractor in September 2013 to develop a new system but the contract was cancelled in October 2014 due to poor contractor performance.
VA paid the contractor about $871,000 for work that had been performed prior to the contract's termination, and received several planning documents from the contractor that helped inform its current acquisition effort, according to CVE officials. VA has since decided to develop a pilot case management system through one of the agency's other existing contracts. According to VA officials, the pilot system is intended to provide VA with the opportunity to test and evaluate the capabilities of a new system without the time and expense of putting a whole new system in place. VA developed specifications and other planning documents for the pilot system, and plans to develop and evaluate the system from November 2015 through January 2016. If the pilot is successful, VA plans to issue a solicitation and award a contract for development of a full system by April 2016 and fully transition to the new system by September 2016. VA was in the initial stages of developing the pilot system as of October 2015, and has not determined how it will select cases for the pilot, evaluate the pilot, and fully transition to the new system once the pilot is complete. VA has taken some steps to address our previous recommendations, but our preliminary findings indicate that additional steps may be needed. In our January 2013 report, we found that VA faced challenges in its strategic planning efforts and recommended that VA refine and implement a strategic plan with outcome-oriented long-term goals and performance measures. VA developed a strategic plan for fiscal years 2014-2018 that described OSDBU's vision, mission, and various performance goals for its programs. It has since developed an operating plan for fiscal year 2016 that identifies a number of key actions needed to meet OSDBU's objectives, such as transitioning to a new verification process, completing revisions to the verification regulations, and developing a new case management system. 
However, the plan does not include an integrated schedule with specific actions and milestone dates for achieving the program changes, nor does it discuss how the various efforts described above might be coordinated. Useful practices and lessons learned from organizational transformation show that organizations should set implementation goals and a timeline to build momentum and show progress from day one. These practices also show that it is essential for organizations undergoing a transformation to establish and track implementation goals and timelines to pinpoint performance shortfalls and gaps and to suggest midcourse corrections. According to OSDBU's Executive Director, each OSDBU program team (such as CVE) is to develop action plans for their specific programs that include resource needs and expected timelines. However, it is not clear if OSDBU will develop an overall plan that captures and integrates the various efforts it has been undertaking that are managed by CVE and other program teams within OSDBU. We are continuing to assess the issues discussed in this statement and as we finalize our work for issuance early next year, we will consider making recommendations, as appropriate. Chairmen Coffman and Hanna, Ranking Members Kuster and Takai, and Members of the Subcommittees, this concludes my prepared statement. I would be happy to answer any questions at this time. If you or your staff have any questions about this statement, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Harry Medina (Assistant Director); Katie Boggs (Analyst-in-Charge), Mark Bird, Charlene Calhoon, Pamela Davidson, Kathleen Donovan, John McGrail, Barbara Roesmann, and Jeff Tessin. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA must give contracting preferences to service-disabled and other veteran-owned small businesses and verify the ownership and control of firms seeking such preferences. GAO found in 2013 (GAO-13-95) that VA faced challenges in verifying firms on a timely and consistent basis, developing and implementing long-term strategic plans, and enhancing information technology infrastructure. This testimony discusses preliminary observations on (1) VA's progress in establishing a timely and consistent verification program and improving communication with veterans, and (2) the steps VA has taken to identify and address verification program challenges and long-term goals. This statement is based on GAO's ongoing review of VA's verification program. GAO reviewed VA's verification procedures and strategic plan, reviewed a random sample of 96 verification applications, and interviewed VA officials, representatives from two veterans' organizations, and four verification assistance counselors. Based on GAO's preliminary observations, the Department of Veterans Affairs (VA) has made significant improvements to its verification process and communication with veterans since GAO's 2013 report. VA reported it reduced its average application processing times by more than 50 percent--from 85 days in 2012 to 41 in 2015. GAO reviewed a randomly selected sample of verification applications and found that VA followed its procedures for reviewing applications. VA continued to refine its quality management by developing written work instructions for every part of the verification process, and implemented an internal audit process. As of September 2015, VA had taken action on and closed 364 of 379 (96 percent) internal audit recommendations. The agency also conducted post-verification site visits to 606 firms in fiscal year 2015 to check the accuracy of verification decisions and help ensure continued compliance with program regulations.
Since 2013, VA has made several changes to improve veterans' experiences with the program. For example, VA revised procedures to allow veterans additional opportunities to withdraw their applications or submit additional information and has partnered with federally supported assistance centers to provide assistance to veterans applying for verification. Correspondingly, the percentage of firms that received denials has dropped from 66 percent in 2012 to 5 percent in 2015. Veterans' organizations and verification counselors with whom GAO spoke noted improvements in VA's communications and interactions with veterans, although most verification counselors GAO spoke with suggested the program's website and letters to veterans could be clearer. VA has multiple efforts underway to make its verification program more cost-effective and veteran-friendly, but GAO's preliminary results indicate that it lacks a comprehensive operational plan to guide its efforts. For instance, VA intends to restructure part of its verification process and in August 2015 began a pilot that gives veterans one point of contact (a case manager who would be aware of the specifics of the application throughout the verification process). VA plans to fully transition to this new process by April 2016. VA also plans to change the program's organizational structure and hire a director for the program, which since 2011 has had three different directors, the last two of whom have served in an acting capacity. Finally, VA plans to replace the program's outdated case management system, but has faced delays due to contractor performance issues. Efforts are under way to develop and evaluate a pilot system by January 2016 and fully transition to the new case management system by September 2016. VA has developed a high-level operating plan that identified objectives for the office overseeing the verification program--the Office of Small and Disadvantaged Business Utilization (OSDBU).
But the plan does not include an integrated schedule with specific actions and milestone dates for achieving the multiple program changes under way or discuss how these various efforts might be coordinated within OSDBU. GAO's work on organizational transformations states that organizations should set implementation goals and a timeline to show progress. Such a plan is vital to managing multiple efforts to completion and achieving long-term program objectives, particularly when senior-level staffing for the verification program has lacked continuity. GAO continues to assess these issues and will report its results early next year. GAO is not making recommendations at this time; as it finalizes its work for issuance early next year, it will consider making recommendations, as appropriate. GAO obtained comments from VA and incorporated them as appropriate.
AOC and its contractors have continued to make progress on the project since the Subcommittee's July 14 hearing. However, mostly because some key activities associated with the HVAC and fire protection systems were not included in earlier schedules and because delays occurred in installing stonework and excavating the utility tunnel, the sequence 2 contractor's August schedule shows the expected completion date for the base project as February 26, 2007. As discussed at the Subcommittee's July 14 hearing, AOC recognized some delays in its June 2005 schedule, which showed the base project's expected completion date as October 19, 2006. Although AOC has not evaluated the contractor's August schedule, it does not believe that so much additional time will be needed. Furthermore, as discussed in the next section, AOC maintains that work could be accelerated to meet the September 15, 2006, target date. According to our analysis of the CVC project's schedule, the base project is unlikely to be completed by the September 15, 2006, target date for several reasons. AOC believes that it could take actions to complete the project by then, but these actions could have negative as well as positive consequences. These and other schedule-related issues raise a number of management concerns. We have discussed actions with AOC officials that we believe are necessary to address problems with the schedule and our concerns. AOC generally agreed with our suggestions. For several reasons, we believe that the base project is more likely to be completed sometime in the spring or summer of 2007 than by September 15, 2006: As we have previously testified, AOC's sequence 2 contractor, Manhattan Construction Company, has continued to miss its planned dates for completing activities that we and AOC are tracking to assist the Subcommittee in measuring the project's progress. 
For example, as of September 8, the contractor had completed 7 of the 16 selected activities scheduled for completion before today's hearing (see app. II); however, none of the 7 activities was completed on time. Unforeseen site conditions, an equipment breakdown, delays in stone deliveries, and a shortage of stone masons for the interior stonework were among the reasons given for why the work was not completed on time. Our analysis of the sequence 2 contractor's production pace between November 2004 and July 2005 indicates that the base project's construction is unlikely to be finished by September 15, 2006, if the contractor continues at the same pace or even accelerates the work somewhat. In fact, at the current or even a slightly accelerated pace, the base project would be completed several months after September 15, 2006. To finish the base project's construction by that date, our analysis shows that the sequence 2 contractor would have to recover 1 day for every 8 remaining days between July 2005 and September 2006 and could incur no further delays. We continue to believe that the durations scheduled for a number of sequence 2 activities are unrealistic. According to CVC project team managers and staff, several activities, such as constructing the utility tunnel; testing the fire protection system; testing, balancing, and commissioning the HVAC system; installing interior stonework; and finishing work in some areas are not likely to be completed as indicated in the July 2005 schedule. Some of these are among the activities whose durations we identified as optimistic in early 2004 and that we and AOC's construction management contractor identified as contributing most to the project's schedule slippage in August 2005; these activities also served as the basis for our March 2004 recommendation to AOC that it reassess its activity durations to see that they are realistic and achievable at the budgeted cost. 
Because AOC had not yet implemented this recommendation and these activities were important to the project's completion, we suggested in our May 17 testimony before the Subcommittee that AOC give priority attention to this recommendation. AOC's construction management contractor initiated such a review after the May 17 hearing. Including more time in the schedule to complete these activities could add many more weeks to the project's schedule. AOC's more aggressive schedule management is identifying significant omissions of activities and time from the sequence 2 schedule. AOC's approach, though very positive, is coming relatively late in the project. For example, several detailed activities associated with testing, balancing, and commissioning the CVC project's HVAC and fire protection system were added to the schedule in July and August, extending the schedule by several months. AOC believes, and we agree, that some of this work may be done concurrently, rather than sequentially as shown in the August schedule, thereby saving some of the added time. However, until more work is done to further develop this part of the schedule, it is unclear how much time could be saved. Furthermore, the July schedule does not appear to include time to address significant problems with the HVAC or fire alarm systems should they occur during testing. In August 2005, CVC project personnel identified several risks and uncertainties facing the project that they believed could adversely affect its schedule. Examples include additional unforeseen conditions in constructing the utility and House Connector tunnels; additional delays in stonework due to slippages in stone deliveries, shortages of stone masons, or stop-work orders responding to complaints about noise from work in the East Front; and problems in getting the HVAC and fire protection systems to function properly, including a sophisticated air filtration system that has not been used before on such a large scale. 
Providing for these risks and uncertainties in the schedule could add another 60 to 90 days to the completion date, on top of the additional time needed to perform activities that were not included in the schedule or whose durations were overly optimistic. Over the last 2 months, AOC's construction management contractor has identified 8 critical activity paths that will extend the base project's completion date beyond September 15, 2006, if lost time cannot be recovered or further delays cannot be prevented. These 8 activity paths are in addition to 3 that were previously identified by AOC's construction management contractor. In addition, the amount of time that has to be recovered to meet the September 15 target has increased significantly. The activity paths include work on the utility tunnel and testing and balancing the HVAC system; procuring and installing the control wiring for the air handling units; testing the fire alarm system; millwork and casework in the orientation theaters and atrium; and stonework in the East Front, orientation theaters, and exhibit gallery. Having so many critical activity paths complicates project management and makes on-time completion more difficult. AOC believes it can recover much of the lost time and mitigate remaining risks and uncertainties through such actions as using temporary equipment, adding workers, working longer hours, resequencing work, or performing some work after the CVC facility opens. AOC said that it is also developing a risk mitigation plan that should contain additional steps it can take to address the risks and uncertainties facing the project. Various AOC actions could expedite the project and save costs, but they could also have less positive effects. For example, accelerating work on the utility tunnel could save costs by preventing or reducing delays in several other important activities whose progress depends on the tunnel's completion. 
Conversely, using temporary equipment or adding workers to overcome delays could increase the project's costs if the government is responsible for the delays. Furthermore, (1) actions to accelerate the project may not save time; (2) the time savings may be offset by other problems; or (3) working additional hours, days, or shifts may adversely affect the quality of the work or worker safety. In our opinion, decisions to accelerate work must be carefully made, and if the work is accelerated, it must be tightly managed. Possible proposals from contractors to accelerate the project by changing the scope of work or its quality could compromise the CVC facility's life safety system, the effective functioning of the facility's HVAC system, the functionality of the facility to meet its intended purposes, or the life-cycle costs of materials. In August, project personnel raised such possibilities as lessening the rigor of systems' planned testing, opening the facility before all planned testing is done, or opening the facility before completing all the work identified by Capitol Preservation Commission representatives as having to be completed for the facility to open. While such measures could save time, we believe that the risks associated with these types of actions need to be carefully considered before adoption and that management controls need to be in place to preclude or minimize any adverse consequences of such actions, if taken. AOC's schedule presents other management issues, including some that we have discussed in earlier testimonies. AOC tied the date for opening the CVC facility to the public to September 15, 2006, the date in the sequence 2 contract for completing the base project's construction. Joining these two milestones does not allow any time for addressing unexpected problems in completing the construction work or in preparing for operations. 
AOC has since proposed opening the facility to the public on December 15, 2006, but the schedule does not yet reflect this proposed revision. Specifically, on September 6, 2005, AOC told Capitol Preservation Commission representatives that it was still expecting the CVC base project to be substantially completed by September 15, 2006, but it proposed to postpone the facility's opening for 3 months to provide time to finish testing CVC systems, complete punch-list work, and prepare for operating the facility. In our view, allowing some time to address unexpected problems is prudent. AOC's and its contractors' reassessment of activity durations in the August schedule may not be sufficiently rigorous to identify all those that are unrealistic. In reassessing the project's schedule, the construction management contractor found some durations to be reasonable that we considered likely to be too optimistic. Recently, AOC's sequence 2 and construction management contractors reported that, according to their reassessment, the durations for interior stonework were reasonable. We previously found that these durations were optimistic, and CVC project staff we interviewed in August likewise believed they were unrealistic. We have previously expressed concerns about a lack of sufficient or timely analysis and documentation of delays and their causes and determination of responsibility for the delays, and we recommended that AOC perform these functions more rigorously. We have not reassessed this area recently. However, given the project's uncertain schedule, we believe that timely and rigorous analysis and documentation of delays and their causes and determination of responsibility for them are critical. We plan to reexamine this area again in the next few weeks. 
The uncertainty associated with the project's construction schedule increases the importance of having a summary schedule that integrates the completion of construction with preparations for opening the facility to the public, as the Subcommittee has requested and we have recommended. Without such a schedule, it is difficult to determine whether all necessary activities have been identified and linked to provide for a smooth opening or whether CVC operations staff will be hired at an appropriate time. In early September, AOC gave a draft operations schedule to its construction management contractor to integrate into the construction schedule. As we noted in our July 14 testimony, AOC could incur additional costs for temporary work if it opens the CVC facility to the public before the construction of the House and Senate expansion spaces is substantially complete. As of last week, AOC's contractors were still evaluating the construction schedule for the expansion spaces, and it was not clear what needs AOC would have for temporary work. The schedule, which we received in early September, shows December 2006 as the date for completing the construction of the expansion spaces. We have not yet assessed the likelihood of the contractor's meeting this date. Finally, we are concerned about the capacity of the Capitol Power Plant (CPP) to provide adequately for cooling, dehumidifying, and heating the CVC facility during construction and when it opens to the public. Delays in completing CPP's ongoing West Refrigeration Plant Expansion Project, the removal from service of two chillers because of refrigerant gas leaks, fire damage to a steam boiler, management issues, and the absence of a CPP director could potentially affect CPP's ability to provide sufficient chilled water and steam for the CVC facility and other congressional buildings. These issues are discussed in greater detail in appendix III. 
Since the Subcommittee's July 14 CVC hearing, we have discussed a number of actions with AOC officials that we believe are necessary to address problems with the project's schedule and our concerns. AOC generally agreed with our suggestions, and a discussion of them and AOC's responses follows. By October 31, 2005, work with all relevant stakeholders to reassess the entire project's construction schedule, including the schedule for the House and Senate expansion spaces, to ensure that all key activities are included, their durations are realistic, their sequence and interrelationships are appropriate, and sufficient resources are shown to accomplish the work as scheduled. Specific activities that should be reassessed include testing, balancing, and commissioning the HVAC and filtration systems; testing the fire protection system; constructing the utility tunnel; installing the East Front mechanical (HVAC) system; installing interior stonework and completing finishing work (especially plaster work); fabricating and delivering interior bronze doors; and fitting out the gift shops. AOC agreed and has already asked its construction management and sequence 2 contractors to reassess the August schedule. AOC has also asked the sequence 2 contractor to show how it will recover time lost through delays. Carefully consider the costs, benefits, and risks associated with proposals to change the project's scope, modify the quality of materials, or accelerate work, and ensure that appropriate management controls are in place to prevent or minimize any adverse effects of such actions. AOC agreed. It noted that the sequence 2 contractor had already begun to work additional hours to recover lost time on the utility tunnel. AOC also noted that its construction management contractor has an inspection process in place to identify problems with quality and has recently enhanced its efforts to oversee worker safety. 
Propose a CVC opening date to Congress that allows a reasonable amount of time between the completion of the base project's construction and the CVC facility's opening to address any likely problems that are not provided for in the construction schedule. The December 15, 2006, opening date that AOC proposed earlier this month would provide about 90 days between these milestones if AOC meets its September 15, 2006, target for substantial completion. However, we continue to believe that AOC will have difficulty meeting the September 15 target, and although the 90-day period is a significant step in the right direction, an even longer period is likely to be needed. Give priority attention to effectively implementing our previous recommendations that AOC (1) analyze and document delays and the reasons and responsibility for them on an ongoing basis and analyze the impact of scope changes and delays on the project's schedule at least monthly and (2) advise Congress of any additional costs it expects to incur to accelerate work or perform temporary work to advance the CVC facility's opening so Congress can weigh the advantages and disadvantages of such actions. AOC agreed. AOC is still updating its estimate of the cost to complete the CVC project, including the base project and the House and Senate expansion spaces. As a result, we have not yet had an opportunity to comprehensively update our November 2004 estimate that the project's estimated cost at completion will likely be between $515.3 million without provision for risks and uncertainties and $559 million with provision for risks and uncertainties. Since November 2004, we have added about $10.3 million to our $515.3 million estimate to account for additional CVC design and construction work. (App. IV provides information on the project's cost estimates since the original 1999 estimate.) 
However, our current $525.6 million estimate does not include costs that AOC may incur for delays beyond those delay costs included in our November 2004 estimate. Estimating the government's costs for delays that occurred after November 2004 is difficult because it is unclear who ultimately will bear responsibility for various delays. Furthermore, AOC's new estimates may cause us to make further revisions to our cost estimates. To date, about $528 million has been provided for CVC construction. (See app. V.) This amount does not include about $7.8 million that was made available for either CVC construction or operations. In late August, we and AOC found that duplicate funding had been provided for certain CVC construction work. Specifically, about $800,000 was provided in two separate funding sources for the same work. The House and Senate Committees on Appropriations were notified of this situation and AOC's plan to address it. The funding that has been provided and that is potentially available for CVC construction covers the current estimated cost of the facility at completion and provides some funds for risks and uncertainties. However, if AOC encounters significant additional costs for delays or other changes, more funding may be needed. Because of the potential for coordination problems with a project as large and complex as CVC, we had recommended in July that AOC promptly designate responsibility for integrating the planning and budgeting for CVC construction and operations. In late August, AOC designated a CVC staff member to oversee both CVC construction and operations funding. AOC had also arranged for its operations planning consultant to develop an operations preparation schedule and for its CVC project executive and CVC construction management contractor to prepare an integrated construction and operations schedule. 
AOC has received a draft operations schedule and has given it to its construction management contractor to integrate into the construction schedule. Pending the hiring of an executive director for CVC, which AOC would like to occur by the end of January 2006, the Architect of the Capitol said he expects his Chief Administrative Officer, who is currently overseeing CVC operations planning, to work closely with the CVC project executive to integrate CVC construction and operations preparations. Work and costs could also be duplicated in areas where the responsibilities of AOC's contractors overlap. For example, the contracts or planned modifications for both AOC's CVC construction design contractor and CVC operations contractor include work related to the gift shop's design and wayfinding signage. We discussed the potential for duplication with AOC, and it agreed to work with its operations planning contractor to clarify the contractor's scope of work, eliminate any duplication, and adjust the operations contract's funding accordingly. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact Bernard Ungar at (202) 512-4232 or Terrell Dorn at (202) 512-6923. Other key contributors to this testimony include Shirley Abel, Michael Armes, John Craig, George Depaoli, Jr., Maria Edelstein, Elizabeth Eisenstadt, Brett Fallavollita, Jeanette Franzel, Jackie Hamilton, Bradley James, Scott Riback, and Kris Trueblood. With the assistance of a contractor, Hulett & Associates, we assessed the risks associated with the Architect of the Capitol's (AOC) July 2005 schedule for the Capitol Visitor Center (CVC) project and used the results of our assessment to estimate a time frame for completing the base CVC project with and without identified risks and uncertainties. 
In August 2005, we and the contractor interviewed project managers and team members from AOC and its major CVC contractors, a representative from the Army Corps of Engineers, and AOC's Chief Fire Marshal to determine the risks they saw in completing the remaining work and the time they considered necessary to finish the CVC project and open it to the public. Using the project's July 2005 summary schedule (the most recent schedule available when we did our work), we asked the team members to estimate how many workdays would be needed to complete the remaining work. More specifically, for each summary-level activity that the members had a role or expertise in, we asked them to develop three estimates of the activity's duration--the least, most likely, and longest time needed to complete the activity. We planned to estimate the base project's most likely completion date without factoring in risks and uncertainties using the most likely activity durations estimated by the team members. In addition, using these three-point estimates and a simulation analysis to calculate different combinations of the team's estimates that factored in identified risks and uncertainties, we planned to estimate completion dates for the base project at various confidence levels. In August 2005, AOC's construction management and sequence 2 contractors were updating the July project schedule to integrate the construction schedule for the House and Senate expansion spaces, reflect recent progress and problems, and incorporate the results to date of their reassessment of the time needed for testing, balancing, and commissioning the heating, ventilation, and air-conditioning (HVAC) system and for fire alarm testing. This reassessment was being done partly to implement a recommendation we had made to AOC after assessing the project's schedule in early 2004 and finding that the scheduled durations for these and other activities were optimistic. 
AOC's construction management and sequence 2 contractors found that key detailed activities associated with the HVAC system had not been included in the schedule and that the durations for a number of activities were not realistic. Taking all of these factors into account, AOC's contractors revised the project's schedule in August. AOC believes that the revised schedule, which shows the base project's completion date slipping by several months, allows too much time for the identified problems. As a result of this problem and others we brought to AOC's attention, AOC has asked its contractors to reassess the schedule. AOC's construction management contractor believes that such a reassessment could take up to 2 months. In our opinion, there are too many uncertainties associated with the base project's schedule to develop reliable estimates of specific completion dates, with or without provisions for risks and uncertainties. Several issues could affect the capacity of the Capitol Power Plant (CPP) to provide sufficient chilled water and steam for the CVC facility and other congressional buildings. CPP produces chilled water for cooling and dehumidification and steam for heating Capitol Hill buildings. To accommodate the CVC facility and meet other needs, CPP has been increasing its production capacity through the West Refrigeration Plant Expansion Project. This project, which was scheduled for completion in time to provide chilled water for the CVC facility during construction and when it opened, has been delayed. In addition, problems with aging equipment, fire damage, management weaknesses, and a leadership vacancy could affect CPP's ability to provide chilled water and steam. More specifically: In July, two chillers in CPP's East Refrigeration Plant were taken out of service because of a significant refrigerant gas leak. 
The refrigerant, whose use is being phased out nationally, escaped into the surrounding environment. Because of the chillers' age and use of an outdated refrigerant, AOC has determined that it would not be cost-effective to repair the chillers. CPP's chilled water production capacity will be further reduced between December 1, 2005, and March 15, 2006, when the West Refrigeration Plant is to be shut down to enable newly installed equipment to be connected to the existing chilled water system. However, the remainder of CPP's East Refrigeration Plant is to remain operational during this time, and AOC expects that the East Refrigeration Plant will have sufficient capacity to meet the lower wintertime cooling demands. Additionally, CPP representatives indicated that they could bring the West Refrigeration Plant back online to provide additional cooling capacity in an emergency. CPP is developing a cost estimate for this option. In June, one of two CPP boilers that burn coal to generate steam was damaged by fire. According to a CPP incident report, CPP operator errors contributed to the incident and subsequent damage. Both boilers were taken off-line for scheduled maintenance between July 1 and September 15, and CPP expects both boilers to be back online by September 30, thereby enabling CPP to provide steam to CVC when it is needed. Several management issues at CPP could further affect the expansion plant's and CPP's operational readiness: CPP has not yet developed a plan for staffing and operating the entire plant after the West Refrigeration Plant becomes operational or contracted for its current staff to receive adequate training to operate the West Refrigeration Plant's new, much more modern equipment. CPP has not yet received a comprehensive commissioning plan from its contractor. A number of procurement issues associated with the plant expansion project have arisen. We are reviewing these issues. 
CPP has been without a director since May 2005, when the former director resigned. CPP is important to the functioning of Congress, and strong leadership is needed to oversee the completion of the expansion project and the integration, commissioning, and operation of the new equipment, as well as address the operational and management problems at the plant. Filling the director position with an experienced manager who is also an expert in the production of steam and chilled water is essential. AOC recently initiated the recruitment process. [Appendix IV table: the CVC project's cost estimates since the original 1999 estimate, including added scope (such as the House and Senate expansion spaces and the air filtration system funded by the Department of Defense), bid prices exceeding estimates, preconstruction costs exceeding budgeted costs, unforeseen field conditions, delay-related costs and design-to-budget overruns, and GAO-projected costs to complete as of August 2005, with and without provision for risks and uncertainties.] The five additional scope items are the House connector tunnel, the East Front elevator extension, the Library of Congress tunnel, temporary operations, and enhanced perimeter security. 
[Appendix V table: funding provided for CVC construction as of August 2005, by source, including the base project budget as of November 2004, U.S. Capitol Police security monitoring, design of the Library of Congress tunnel (funds from the Capitol Preservation Fund), and construction-related funding provided in the operations obligation plan.] This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses progress on the Capitol Visitor Center (CVC) project. Our remarks will focus on (1) the Architect of the Capitol's (AOC) progress in managing the project's schedule since the Subcommittee on the Legislative Branch, Senate Committee on Appropriations' July 14 hearing on the project; (2) our estimate of a general time frame for completing the base project's construction and the preliminary results of our assessment of the risks associated with AOC's July 2005 schedule for the base project; and (3) the project's costs and funding, including the potential impact of scheduling issues on cost. However, we will not, as originally planned, provide specific estimated completion dates because AOC's contractors revised the schedule in August to reflect recent delays and AOC has not yet evaluated the revised schedule. AOC believes that the time added to the schedule by its contractors is unreasonable. Until AOC completes its evaluation and we assess it, any estimates of specific completion dates are, in our view, tentative and preliminary. Similarly, we will wait until the schedule is stabilized to update our November 2004 estimate of the cost to complete the project. Currently, AOC and its consultant, McDonough Bolyard Peck (MBP), are still developing their cost-to-complete estimates. In summary, although AOC and its construction contractors have continued to make progress since the Subcommittee's July 14 CVC hearing, several delays have occurred and more are expected. These delays could postpone the base project's completion significantly beyond September 15, 2006, the date targeted in AOC's July 2005 schedule. Although not yet fully reviewed and accepted by AOC, the schedule that AOC's contractors revised in August 2005 shows February 26, 2007, as the base project's completion date. 
According to our preliminary analysis of the project's July 2005 schedule, the base project is more likely to be completed sometime in the spring or summer of 2007 than by September 15, 2006. Unless the project's scope is changed or extraordinary actions are taken, the base project is likely to be completed later than September 15, 2006, for the reasons cited by the contractors and for other reasons, such as the optimistic durations estimated for a number of activities and the risks and uncertainties facing the project. AOC believes that the contractors added too much time to the schedule in August for activities not included in the schedule and that it can expedite the project by working concurrently rather than sequentially and by taking other actions. Additionally, we are concerned about actions that have been, or could be, proposed to accelerate work to meet the September 15, 2006, target date. The project's schedule also raises a number of management concerns, including the potential for delays caused by not allowing enough time to address potential problems or to complete critical activities. Fiscal year 2006 appropriations have provided sufficient funds to cover AOC's request for CVC construction funding as well as additional funds for some risks and uncertainties that may arise, such as costs associated with additional sequence 2 delays or unexpected conditions. Although sequence 2 delays have been occurring, the extent to which the government is responsible for their related costs is not clear at this time. Additional funding may be necessary if the government is responsible for significant delay-related costs or if significant changes are made to the project's design or scope or to address unexpected conditions. In addition, we and AOC identified some CVC construction activities that received duplicate funding. AOC has discussed this issue with the House and Senate Appropriations Committees.
Under section 219 of the Immigration and Nationality Act, as amended, the Secretary of State, in consultation with the Secretary of the Treasury and the Attorney General, is authorized to designate an organization as an FTO. For State to designate an organization as an FTO, the Secretary of State must find that the organization meets three criteria:
1. It is a foreign organization.
2. The organization engages in terrorist activity or terrorism, or retains the capability and intent to engage in terrorist activity or terrorism.
3. The organization's terrorist activity or terrorism threatens the security of U.S. nationals or the national security of the United States.
Designation of a terrorist group as an FTO allows the United States to impose certain legal consequences on the FTO, as well as on individuals that associate with or knowingly provide support to the designated organization. It is unlawful for a person in the United States or subject to the jurisdiction of the United States to knowingly provide "material support or resources" to a designated FTO, and offenders can be fined or imprisoned for violating this law. In addition, representatives and members of a designated FTO, if they are not U.S. citizens, are inadmissible to and, in certain circumstances, removable from the United States. Additionally, any U.S. financial institution that becomes aware that it has possession of or control over funds in which a designated FTO or its agent has an interest must retain possession of or control over the funds and report the funds to Treasury's Office of Foreign Assets Control. In addition to making FTO designations, the Secretary of State can address terrorist organizations and terrorists through other authorities, including listing an individual or entity that engages in terrorist activity under Executive Order 13,224 (E.O. 13,224). E.O.
13,224 requires the blocking of property and interests in property of foreign persons the Secretary of State has determined, in consultation with the Attorney General and the Secretaries of the Departments of Homeland Security and the Treasury, to have committed or to pose a significant risk of committing acts of terrorism that threaten the security of U.S. nationals or the national security, foreign policy, or economy of the United States. E.O. 13,224 blocks the assets of organizations and individuals designated under the executive order. It also authorizes the blocking of assets of persons determined by the Secretary of the Treasury, in consultation with the Attorney General and the Secretaries of State and Homeland Security, to assist in; sponsor; or provide financial, material, or technological support for, or financial or other services to or in support of, designated persons, or to be otherwise associated with those persons. In practice, when State designates an organization as an FTO, it also concurrently designates the organization under E.O. 13,224. Once State designates an organization under E.O. 13,224, Treasury is able to make its own designations under E.O. 13,224 of other organizations and individuals associated with or providing support to the organization designated by State under E.O. 13,224. These designations allow the U.S. government to target organizations and individuals that provide material support and assistance to FTOs. State has developed a six-step process for designating foreign terrorist organizations. State's Bureau of Counterterrorism (CT) leads the designation process for State, and other State bureaus and agency partners are involved in the various steps. While the number of FTO designations has varied annually since the first 20 FTOs were designated in 1997, as of December 31, 2014, 59 organizations were designated as FTOs. 
FTO designation activities are led by CT, which monitors the activities of terrorist groups around the world to identify potential targets for designation. When reviewing potential targets, CT considers not only terrorist attacks that a group has carried out but also whether the group has engaged in planning and preparations for possible future acts of terrorism or retains the capability and intent to carry out such acts. CT also considers recommendations from other State bureaus, federal agencies, and foreign partners, among others, and selects potential target organizations for designation. For an overview of agencies and their roles in the designation process, see appendix II. After selecting a target organization for possible designation, State uses a six-step process it has developed to designate a group as an FTO (see fig. 1).

Step 1: Equity check--The first step in CT's process is to consult with other State bureaus, federal agencies, and the intelligence community, among others, to determine whether any law enforcement, diplomatic, or intelligence concerns should prevent the designation of the target organization. If any of these agencies or other bureaus has a concern regarding the designation of the target organization, it can elect to place a "hold" on the proposed designation, which prevents the designation from being made until the hold is lifted by the entity that requested it. The equity check is the first step where an objection to a designation can be raised; however, in practice, a hold can be placed at any step in the FTO designation process prior to the Secretary's decision to designate.

Step 2: Administrative record--As required by law, in support of the proposed designation, CT is to prepare an administrative record, which is a compilation of information, typically including both classified and open source information, demonstrating that the target organization identified meets the statutory criteria for FTO designation.
Step 3: Clearance process--The third step in CT's process is to send the draft administrative record and associated documents to State's Office of the Legal Adviser and then to Justice and Treasury for review and approval of a final version to submit to the Secretary of State. For clearance, Justice and Treasury are to review the draft administrative record prepared by State and may suggest that State make changes to the document. The interagency clearance process is complete once Justice and Treasury provide State with signed letters of concurrence indicating that the administrative record is legally sufficient. CT is then to send the administrative record to other bureaus in the State Department for final clearance.

Step 4: Secretary of State's decision--Materials supporting the proposed FTO designation are to be sent to the Secretary of State for review and decision on whether or not to designate. The Secretary of State is authorized, but not required, to designate an organization as an FTO if he or she finds that the legal elements for designation are met.

Step 5: Congressional notification--In accordance with the law, State is required to notify Congress 7 days before an organization is formally designated.

Step 6: Federal Register notice--State is required to publish the designation announcement in the Federal Register and, upon publication, the designation is effective for purposes of penalties that would apply to persons who provide material support or resources to designated FTOs.

As of December 31, 2014, there were 59 organizations designated as FTOs, including al Qaeda and its affiliates, Islamic State of Iraq and the Levant (ISIL), and Boko Haram. See appendix III for the complete list of FTOs designated, as of December 31, 2014. The number of FTO designations has varied annually since the first FTOs were designated in 1997. State designated 13 groups between 2012 and 2014.
Figure 2 shows the number of organizations designated by year of designation, as of December 31, 2014. According to State officials and our review of agency documents, State considered information and input provided by other State bureaus and federal agencies for all 13 designations made between 2012 and 2014. State considered this input during the first three steps in its designation process: conducting the equity check, compiling the administrative record, and obtaining approval in the clearance process. During our review of the 13 FTO designations between 2012 and 2014, officials from the Departments of Defense, Homeland Security, Justice, and the Treasury, and the Office of the Director of National Intelligence (ODNI) reported that State considered their input when making designations. Specifically, we found that State considered information during the first three steps in the FTO designation process, including the following:

Step 1: Equity check--According to State officials, regional bureaus at State and other agencies provided input to CT during the equity check step by identifying, when warranted, any law enforcement, diplomatic, or intelligence equities that would be jeopardized by the designation of the target organization. Officials from Defense, DHS, Justice, Treasury, and the intelligence community also confirmed that they provided input during the equity check. According to State officials, other bureaus and agencies participating in the equity check included the Central Intelligence Agency, the National Counterterrorism Center, the National Security Agency, and the National Security Council Counterterrorism staff.

Step 2: Administrative record--Agencies provided classified and unclassified materials to State to support the draft administrative record. For example, officials from ODNI told us they provide an assessment and intelligence review, at the request of State, for any terrorist organization that is nominated for FTO designation. U.S.
intelligence agencies may also provide information to State during the equity check and during the compilation of the administrative record to support the designation. Otherwise, State has direct access to the disseminated intelligence of other agencies and does not need to separately request such information, according to CT officials.

Step 3: Clearance--In accordance with the law, Justice and Treasury review the draft administrative record for legal sufficiency and provide their input to State before the administrative record is finalized. Officials from Treasury and Justice told us that State considered their input during the clearance process for the administrative record for the 13 FTO designations we examined. This consultation culminates in and is documented through letters of concurrence in support of each FTO designation signed by Treasury and Justice. In all 13 FTO designations that we reviewed, Treasury and Justice issued signed letters of concurrence.

The U.S. government penalizes designated FTOs through three key consequences. First, the designation of an FTO triggers a freeze on any assets the organization holds in a financial institution within the United States. Second, the U.S. government can criminally prosecute individuals that provide material support to an FTO, as well as impose civil penalties. Third, FTO designation imposes immigration restrictions upon members of the organization and individuals that knowingly provide material support or resources to the designated organization. Over the period of our review, we found that U.S. agencies imposed all three consequences. U.S. persons are prohibited from conducting unauthorized transactions or having other dealings with or providing services to designated FTOs. U.S.
financial institutions that are aware that they are in possession of or control funds in which an FTO or its agent has an interest must retain possession of or maintain control over the funds and report the existence of such funds to Treasury. As of December 31, 2013, which is the date for the most recently published Terrorist Assets Report, the U.S. government blocked funds related to 7 of the 59 currently designated foreign terrorist organizations, totaling more than $22 million (see table 1). As of December 2013, there were no blocked funds reported to Treasury related to the remaining 52 designated FTOs. According to Treasury, the reported amounts blocked by the U.S. government change over the years because of several factors, including forfeiture actions, reallocation of assets to another sanctions program, or the release of blocked funds consistent with sanctions policy. Funds shown in the table above are blocked by the U.S. government pursuant to terrorism sanctions administered by Treasury, including FTO sanctions regulations and global terrorism sanctions regulations. The FTO-related funds blocked by the United States are only funds held within the United States and do not include any assets and funds that terrorist groups may hold outside U.S. financial institutions. However, according to Treasury officials, while designation of FTOs exposes and isolates individuals and organizations, and denies access to U.S. financial institutions, in some cases, FTOs may also be sanctioned by the United Nations or other international partners, an action that may block access to the global financial system. Designation as an FTO triggers criminal liability for persons within the United States or subject to U.S. jurisdiction who knowingly provide, or attempt or conspire to provide, "material support or resources" to a designated FTO. Violations are punishable by a fine and up to 15 years in prison, or life if the death of a person results. 
Furthermore, it is also a crime to knowingly receive military-type training from or on behalf of an organization designated as an FTO at the time of the training. Between January 1, 2009, and December 31, 2013, which is the most recent date for which data are available, over 80 individuals were convicted of terrorism or terrorism-related crimes that included providing material support or resources to an FTO or receiving military-type training from or on behalf of an FTO. The penalties for these convictions varied and included some combination of imprisonment, fines, and asset forfeiture. For example, individuals convicted of terrorism or terrorism-related crimes that included providing material support to an FTO received prison sentences ranging from time served to life in prison plus 95 years. In addition, sentencing for convicted individuals included fines of up to $125,000, asset forfeiture of up to $15 million, and supervised release for up to life. In addition, Justice may also bring civil forfeiture actions against assets connected to terrorism offenses, including the provision of material support to FTOs. U.S. law authorizes, among other things, the forfeiture of property involved in money laundering, property derived from or used to commit certain foreign crimes, and the proceeds of certain unlawful activities. Once the government establishes that an individual or entity is engaged in terrorism, it may bring forfeiture actions by proceeding directly against the assets (1) of an individual, entity, or organization engaged in planning or perpetrating crimes of terrorism against the United States or U.S. citizens; (2) acquired or maintained by any person intending to support, plan, conduct, or conceal crimes of terrorism against the United States or U.S. citizens; (3) derived from, involved in, or used or intended to be used to commit terrorism against the United States or U.S.
citizens or their property; or (4) of any individual, entity, or organization engaged in planning or perpetrating any act of international terrorism. According to Justice officials, there have not been any civil forfeiture actions related to FTOs. However, Justice officials said their department routinely investigates and takes actions against financial institutions operating in the United States that willfully violate the International Emergency Economic Powers Act. They added that Justice has, for example, imposed fines and forfeitures and installed compliance monitors in cases where banks have violated terrorism-related sanctions programs. Furthermore, according to Justice officials, there are numerous other investigative and prosecutorial tools available to the United States to confront terrorism and terrorism-related conduct, disrupt terrorist plots, and dismantle foreign terrorist organizations. FTO representatives and members, as well as individuals who knowingly provide material support or resources to a designated organization, are inadmissible to, and in some cases removable from, the United States under the Immigration and Nationality Act if they are not U.S. citizens. However, exemptions or waivers can be granted for certain circumstances, according to State and DHS officials. For example, DHS may grant eligible individuals exemptions in cases where material support was provided under duress. Individuals found inadmissible or deportable without an appropriate waiver or exemption under these provisions are also barred from receiving most immigration benefits or relief from removal. State and DHS are responsible for enforcing different aspects of the immigration restrictions and ensuring that inadmissible individuals without an appropriate waiver or exemption do not enter the United States. State consular officers at U.S. embassies and consulates are responsible for determining whether an applicant is eligible for a visa to travel to the United States.
In instances where a consular officer determines that an applicant has engaged or engages in terrorism-related activity, the visa will be denied. According to State Bureau of Consular Affairs data, between fiscal years 2009 and 2013, which was the most recent period for which data are available, 1,069 individuals were denied nonimmigrant visas and 187 individuals were denied immigrant visas on the basis of involvement in terrorist activities and associations with terrorist organizations. DHS develops and deploys resources to detect; assess; and, if necessary, mitigate the risk posed by travelers during the international air travel process, including when an individual applies for U.S. travel documents; reserves, books, or purchases an airline ticket; checks in at an airport; travels en route on an airplane; and arrives at a U.S. port of entry. For example, upon arrival in the United States, all travelers are subjected to an inspection by U.S. Customs and Border Protection to determine if the individual is eligible for admission under U.S. immigration law. According to U.S. Customs and Border Protection data, between fiscal years 2009 and 2014, which was the most recent period for which data were available, more than 1,000 individuals who were identified as having potential connections to terrorism or terrorist groups, including being a member of or supporting an FTO, were denied admission to the United States for various reasons. In addition, U.S. Immigration and Customs Enforcement is responsible for deporting individuals determined to be engaged in terrorism or terrorism-related activities. Between fiscal years 2013 and 2014, which was the most recent period for which data are available, Immigration and Customs Enforcement officials indicated that 3 individuals determined to be associated with or to have provided material support to designated FTOs were removed from the United States. Further, U.S.
Citizenship and Immigration Services is responsible for the adjudication of immigration benefits. An individual who is a member of a terrorist organization or who has engaged or engages in terrorist-related activity, as defined by the Immigration and Nationality Act, is deemed inadmissible to the United States and is ineligible for most immigration benefits. The law grants both the Secretary of State and the Secretary of Homeland Security unreviewable discretion to waive the inadmissibility of certain individuals who would be otherwise inadmissible under this provision, after consulting with each other and the Attorney General. Additionally, according to DHS officials, an exemption may be applied to certain terrorist-related inadmissibility grounds if the activity was carried out under duress, or under certain circumstances, such as the provision of material support in the form of medical care. Such exemptions, if applied favorably, may allow an immigration benefit to be granted. DHS officials stated that these exemptions are extremely limited. Terrorist groups, such as al Qaeda and its affiliates, Boko Haram, and ISIL, continue to be a threat to the United States and its foreign partners. The designation of FTOs, which can result in civil and criminal penalties, is an integral component of the U.S. government's counterterrorism efforts. State's process for designating FTOs considers input and information from several key U.S. agency stakeholders, and allows U.S. agencies to impose consequences on the organizations and individuals that associate with or provide material support to FTOs. Such consequences help U.S. counterterrorism efforts isolate terrorist organizations internationally and limit support and contributions to those organizations. We provided draft copies of this report to the Departments of Defense, Homeland Security, Justice, State, and the Treasury, as well as the Office of the Director of National Intelligence, for review and comment. 
The Department of Homeland Security provided technical comments, which we incorporated as appropriate. The Departments of Defense, Justice, State, and the Treasury, as well as the Office of the Director of National Intelligence, had no comments. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or [email protected]. GAO staff who made key contributions to this report are listed in appendix IV. This report examines the Department of State's (State) process for designating foreign terrorist organizations (FTO) and the consequences resulting from designation. We report on (1) the process for designating FTOs, (2) the extent to which State considers input from other agencies during the FTO designation process, and (3) the consequences that U.S. agencies impose as a result of an FTO designation. To identify the steps in the FTO designation process, we reviewed the legal requirements for designation and the legal authorities granted to State and other U.S. agencies to designate FTOs. In addition, we reviewed State documents that identified and outlined State's process to designate an FTO, from the equity check through publishing the designation in the Federal Register. We interviewed State officials in the Bureau of Counterterrorism to confirm and clarify the steps in the FTO designation process and to identify which agencies are involved in the process and at what steps they are involved. We also interviewed officials from the Departments of Defense, Homeland Security, Justice (Justice), and the Treasury (Treasury), as well as officials from the intelligence community, to determine each agency's level of participation in the process.
To assess the extent to which State considered information from other agencies in the designation process, we interviewed officials from the Departments of Defense, Homeland Security, Justice, State, and the Treasury, as well as officials from the intelligence community, to determine when information is provided to State on organizations considered for FTO designation, as well as the nature of that information. We defined consideration as any action of State to request, obtain, and use information from other agencies, as well as letters of concurrence from those agencies. We reviewed both Justice's and Treasury's letters of concurrence for all 13 designations made between 2012 and 2014. We also interviewed State officials to determine how information provided by other agencies is considered during the FTO designation process. To identify the consequences U.S. agencies impose as a result of FTO designation, we reviewed the legal consequences agencies can impose under U.S. law, including the Immigration and Nationality Act, as amended. Specifically, we reviewed the funds and assets related to FTOs that are blocked by U.S. financial institutions, as reported by the Office of Foreign Assets Control (OFAC) of the Department of the Treasury. We reviewed the publicly available Terrorist Assets Reports published by Treasury for calendar years 2008 through 2013, which identify the blocked assets reported to Treasury related to FTOs, as well as organizations designated under additional Treasury authorities. U.S. persons are prohibited from conducting unauthorized transactions or having other dealings with or providing services to the designated individuals or entities. Any property or property interest of a designated person that comes within the United States or into the possession or control of a U.S. person is blocked and must be reported to OFAC. The Terrorist Assets Reports identify these reported blocked assets held within U.S.
financial institutions that are targeted with sanctions under any of the three OFAC-administered sanctions programs related to terrorist organizations designated as FTOs, specially designated global terrorists, and specially designated terrorists under various U.S. authorities. We verified the totals reported in each of the reports and identified the funds blocked for organizations designated as FTOs. We also interviewed Treasury officials to discuss the reports of blocked assets and the changes in the assets across years. We did not analyze blocked funds for organizations that were designated under other authorities or by the United Nations or international partners. To assess the reliability of Treasury data on blocked funds, we performed checks of the year-to-year data published in the Terrorist Assets Reports for inconsistencies and errors. When we found minor inconsistencies, we discussed them with relevant agency officials and clarified the reporting data before finalizing our analysis. We determined that these data were sufficiently reliable for the purposes of our report. We also reviewed the Department of Justice National Security Division Chart of Public/Unsealed Terrorism and Terrorism Related Convictions to identify the individuals convicted of and sentenced for providing material support or resources to an FTO or receiving military-type training from or on behalf of an FTO between January 1, 2009, and December 31, 2013, which was the period for which the most recent data were available. Designation as an FTO introduces the possibility of a range of civil penalties for the FTO or its members, as well as criminal liability for individuals engaged in certain prohibited activities, such as individuals who knowingly provide, or attempt or conspire to provide, "material support or resources" to a designated FTO. We reviewed Justice data of only public/unsealed convictions from January 1, 2009, to December 31, 2013. 
For the purposes of our report, we analyzed the Justice data on the convictions and sentencing associated with individuals who were convicted of knowingly providing, or attempting or conspiring to provide, "material support or resources" to a designated FTO. We also reviewed the data to identify the individuals who were convicted of knowingly receiving military-type training from or on behalf of an organization designated as an FTO at the time of the training. The data did not include defendants who were charged with terrorism or terrorism-related offenses but had not been convicted either at trial or by guilty plea, as of December 31, 2013. The data included defendants who were determined by prosecutors in Justice's National Security Division Counterterrorism Section to have a connection to international terrorism, even if they were not charged with a terrorism offense. To assess the reliability of the convictions data, we performed basic reasonableness checks on the data and interviewed relevant agency officials to discuss the convictions and sentencing data. We determined that these data were sufficiently reliable for the purposes of our report. To identify the immigration restrictions and penalties imposed on individuals associated with or who provided material support to a designated foreign terrorist organization, we analyzed available data from State Bureau of Consular Affairs reports on visa denials between fiscal years 2009 and 2013, the U.S. Customs and Border Protection enforcement system database on arrival inadmissibility determinations between fiscal years 2009 and 2014, and information from U.S. Immigration and Customs Enforcement on deportations between fiscal years 2013 and 2014. The Immigration and Nationality Act, as amended, establishes the types of visas available for travel to the United States and what conditions must be met before an applicant can be issued a particular type of visa and granted admission to the United States.
For the purposes of this report, we primarily included the applicants deemed inadmissible under section 212(a)(3) of the Immigration and Nationality Act, which includes ineligibility based on terrorism grounds. We did not include the national security inadmissibility codes that were not relevant to terrorism. In each instance, we analyzed the data provided by the agencies and performed basic checks to determine the reasonableness of the data. We also spoke with relevant agency officials to discuss the data and confirm the reasonableness of the totals presented for individuals denied visas, denied entry into the United States, or deported from the United States for association with a designated foreign terrorist organization. We determined that these data were sufficiently reliable for the purposes of our report. We conducted this performance audit from April 2015 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Designated Foreign Terrorist Organizations, as of December 31, 2014
1. Abu Nidal Organization (ANO)
2. Abu Sayyaf Group (ASG)
3. Aum Shinrikyo (AUM)
4. Basque Fatherland and Liberty (ETA)
5. Gama'a al-Islamiyya (Islamic Group) (IG)
6.
7. Harakat ul-Mujahidin (HUM)
8.
9. Kahane Chai (Kach)
10. Kurdistan Workers Party (PKK) (Kongra-Gel)
11. Liberation Tigers of Tamil Eelam (LTTE)
12. National Liberation Army (ELN)
13. Palestine Liberation Front (PLF)
14. Palestinian Islamic Jihad (PIJ)
15. PFLP-General Command (PFLP-GC)
16. Popular Front for the Liberation of Palestine (PFLP)
17. Revolutionary Armed Forces of Colombia (FARC)
18. Revolutionary Organization 17 November (17N)
19. Revolutionary People's Liberation Party/Front (DHKP/C)
20. Shining Path (SL)
21. al Qaeda (AQ)
22. Islamic Movement of Uzbekistan (IMU)
23. Real Irish Republican Army (RIRA)
24. Jaish-e-Mohammed (JEM)
25. Lashkar-e Tayyiba (LeT)
26. Al-Aqsa Martyrs Brigade (AAMB)
27. al Qaeda in the Islamic Maghreb (AQIM)
28. Asbat al-Ansar (AAA)
29. Communist Party of the Philippines/New People's Army (CPP/NPA)
30. Jemaah Islamiya (JI)
31. Lashkar i Jhangvi (LJ)
32. Ansar al-Islam (AAI)
33. Continuity Irish Republican Army (CIRA)
34. Islamic State of Iraq and the Levant (formerly al Qaeda in Iraq) 12/17/2004
35. Libyan Islamic Fighting Group (LIFG) 12/17/2004
36. Islamic Jihad Union (IJU)
37. Harakat ul-Jihad-i-Islami/Bangladesh (HUJI-B)
39. Revolutionary Struggle (RS)
40. Kata'ib Hizballah (KH)
41. al Qaeda in the Arabian Peninsula (AQAP)
42. Harakat ul-Jihad-i-Islami (HUJI)
43. Tehrik-e Taliban Pakistan (TTP)
45. Army of Islam (AOI)
46. Indian Mujahedeen (IM)
47. Jemaah Anshorut Tauhid (JAT)
48. Abdallah Azzam Brigades (AAB)
49. Haqqani Network (HQN)
50. Ansar al-Dine (AAD)
54. Ansar al-Shari'a in Benghazi
55. Ansar al-Shari'a in Darnah
56. Ansar al-Shari'a in Tunisia
57. Ansar Bayt al-Maqdis
59. Mujahidin Shura Council in the Environs of Jerusalem (MSC)

In addition to the contact listed above, Elizabeth Repko (Assistant Director), Claude Adrien, John F. Miller, and Laurani Singh made key contributions to this report. Ashley Alley, Martin de Alteriis, Tina Cheng, and Lynn Cothern provided technical assistance.
The Secretary of State, in consultation with the Secretary of the Treasury and the Attorney General, has the authority to designate a foreign organization as an FTO. Designation allows the United States to impose legal consequences on the FTO or on individuals who support the FTO. As of June 1, 2015, 59 organizations were designated as FTOs. GAO was asked to review the FTO designation process. This report provides information on the process by which the Secretary of State designates FTOs. Specifically, this report addresses (1) the process for designating FTOs, (2) the extent to which the Department of State considers input from other agencies during the FTO designation process, and (3) the consequences that U.S. agencies impose as a result of an FTO designation. To address these objectives, GAO reviewed and analyzed agency documents and data, and interviewed officials from the Departments of Defense, Homeland Security, Justice, State, and the Treasury, as well as the intelligence community. Separately, GAO also reviewed the duration of the designation process for FTOs designated between 2012 and 2014. That information was published in April 2015 in a report for official use only. GAO is not making recommendations in this report. The Department of State (State) has developed a six-step process for designating foreign terrorist organizations (FTO) that involves other State bureaus and agency partners in the various steps. State's Bureau of Counterterrorism (CT) leads the designation process for State. CT monitors terrorist activity to identify potential targets for designation and also considers recommendations for potential targets from other State bureaus, federal agencies, and foreign partners. After selecting a target, State follows a six-step process to designate a group as an FTO, including steps to consult with partners and draft supporting documents. 
During this process, federal agencies and State bureaus, citing law enforcement, diplomatic, or intelligence concerns, can place a "hold" on a potential designation, which, until resolved, prevents the designation of the organization. The number of FTO designations has varied annually since 1997, when 20 FTOs were designated. As of December 31, 2014, 59 organizations were designated as FTOs, with 13 FTO designations occurring between 2012 and 2014. State considered input provided by other State bureaus and federal agencies for all 13 of the FTO designations made between 2012 and 2014, according to officials from the Departments of Defense, Homeland Security, Justice, State, and the Treasury, and the Office of the Director of National Intelligence, and GAO's review of agency documents. For example, State used intelligence agencies' information on terrorist organizations and activities to support the designations. U.S. agencies reported enforcing FTO designations through three key legal consequences--blocking assets, prosecuting individuals, and imposing immigration restrictions--that target FTOs, their members, and individuals who provide support to those organizations. The restrictions and penalties that agencies reported imposing vary widely. For example, as of 2013, Treasury had blocked about $22 million in assets relating to 7 of the 59 designated FTOs.
DHS is the lead department involved in securing our nation's homeland. Its mission includes, among other things, leading the unified national effort to secure the United States, preventing and deterring terrorist attacks, and protecting against and responding to threats and hazards to the nation. As part of its mission and as required by the Homeland Security Act of 2002, the department is also responsible for coordinating efforts across all levels of government and throughout the nation, including with federal, state, tribal, local, and private sector homeland security resources. As we have previously reported, DHS relies extensively on information technology (IT), such as networks and associated system applications, to carry out its mission. Specifically, we recently reported that the department identified 11 major networks it uses to support its homeland security functions, including sharing information with state and local governments. Examples of such DHS networks include the Homeland Secure Data Network, the Immigration and Customs Enforcement Network, and the Customs and Border Protection Network. In addition, the department has deployed HSIN, a homeland security information-sharing application that operates on the public Internet. As shown in table 1, of the 11 networks, 1 is categorized as Top Secret, 1 is Secret, 8 are Sensitive but Unclassified, and 1 is unclassified. HSIN is considered Sensitive but Unclassified. As the table shows, some of these networks are used solely within DHS, while others are also used by other federal agencies, as well as state and local governments. In addition, the total cost to develop, operate, and maintain these networks and HSIN in fiscal years 2005 and 2006, as reported by DHS, was $611.8 million. Of this total, the networks accounted for the vast majority of the cost: $579.4 million. DHS considers HSIN to be its primary communication application for transmitting sensitive but unclassified information. 
According to DHS, this network is an encrypted, unclassified, Web-based communications application that serves as DHS's primary nationwide information-sharing and collaboration tool. It is intended to offer both real-time chat and instant messaging capability, as well as a document library that contains reports from multiple federal, state, and local sources. Available through the application are suspicious incident and pre-incident information and analysis of terrorist threats, tactics, and weapons. The application is managed within DHS's Office of Operations Coordination. HSIN includes over 35 communities of interest, such as emergency management, law enforcement, counterterrorism, individual states, and private sector communities. Each community of interest has Web pages that are tailored for the community and contain general and community-specific news articles, links, and contact information. The community Web pages also provide access to other resources, such as the following: * Document library. Users can search the entire document library within the communities they have access to. * Discussion threads. HSIN has a discussion thread (or bulletin board) feature that allows users to post information that other users should know about and post requests for information that other users might have. Community administrators can also post and track tasks assigned to users during an incident. * Chat tool. HSIN's chat tool, known as Jabber, is similar to other instant message and chat tools--with the addition of security. Users can customize lists of their coworkers and send messages individually or set up chat rooms for more users. Other features include chat logs (which allow users to review conversations), timestamps, and user profiles. State and local governments have similar IT initiatives to carry out their homeland security missions, including sharing information. 
A key state and local-based initiative is the Regional Information Sharing Systems (RISS) program. The RISS program helps state and local jurisdictions to, among other things, share information in support of their homeland security missions. This nationwide program, operated and managed by state and local officials, was established in 1974 to address crime that operates across jurisdictional lines. The program consists of six regional information analysis centers that serve as regional hubs across the country. These centers offer services to RISS members in their regions, including information sharing and research, analytical products, case investigation support, funding, equipment loans, and training. Funding for the RISS program is administered through a grant from the Department of Justice. As part of its information-sharing efforts, the RISS program operates two key initiatives (among others): the RISS Secure Intranet (RISSNET) and the Automated Trusted Information Exchange (RISS ATIX): * Created in 1996, RISSNET is intended as a secure network serving member law enforcement agencies throughout the United States and other countries. Through this network, RISS offers services such as secure e-mail, document libraries, intelligence databases, Web pages, bulletin boards, and a chat tool. * RISS ATIX offers services similar to those offered by RISSNET to agencies beyond the law enforcement community, including executives and officials from governmental and nongovernmental agencies and organizations that have public safety responsibilities. RISS ATIX is partitioned into 39 communities of interest, such as critical infrastructure, emergency management, public health, and government officials. Members of each community of interest contribute information to be made available within each community. 
According to RISS officials, the RISS ATIX application was developed in response to the events of September 11, 2001; it was initiated in 2002 as an application to provide tools for information sharing and collaboration among public safety stakeholders, such as first responders and schools. As of July 2006, RISS ATIX supported 1,922 users beyond the traditional users of RISSNET. RISS ATIX uses the technology of RISSNET to offer services through its Web pages. The pages are tailored for each community of interest and contain community-specific news articles, links, and contact information. The pages also provide access to the following features: * Document library. Participants can store and search relevant documents within their community of interest. * Bulletin board. The RISS ATIX bulletin board allows users to post timely threat information in discussion forums and to view and respond to posted information. Users can post documents, images, and information related to terrorism and homeland security, as well as receive DHS information, advisories, and warnings. According to RISS officials, the bulletin boards are monitored by a RISS moderator to relay any information that might be useful for other communities of interest. * Chat tool. ATIXLive is an online, real-time, collaborative communications information-sharing tool for the exchange of information by community members. Through this tool, users can post timely threat information and view and respond to messages posted. * Secure e-mail. RISS ATIX participants have access to e-mail that can be used to provide alerts and related information. According to RISS, this is done in a secure environment. The need to improve information sharing as part of a national effort to improve homeland security and preparedness has been widely recognized, not only to improve our ability to anticipate and respond to threats and emergencies, but to avoid unnecessary expenditure of scarce resources. 
In January 2005, and more recently in January 2007, we identified establishing appropriate and effective information-sharing mechanisms to improve homeland security as a high-risk area. The Office of Management and Budget (OMB) has also issued guidance that stresses the importance of information sharing and avoiding duplication of effort. Nonetheless, although this area has received increased attention, the federal government faces formidable challenges in sharing information among stakeholders in an appropriate and timely manner. As we concluded in October 2005, agencies can help address these challenges by adopting and implementing key practices, related to OMB's guidance, to improve collaboration, such as establishing joint strategies and addressing needs by leveraging resources and developing compatible policies, procedures, and other means to operate across agency boundaries. Based on our research and experience, these practices are also relevant for collaboration between federal agencies and other levels of government (e.g., state, local). Until these coordination and collaboration practices are implemented, agencies face the risk that effective information sharing will not occur. Congress and the Administration have made several efforts to address the challenges associated with information sharing. In particular, as we reported in March 2006, the President initiated an effort to establish an Information Sharing Environment that is to combine policies, procedures, and networks and other technologies that link people, systems, and information among all appropriate federal, state, local, and tribal entities and the private sector. In November 2006, in response to congressional direction, the Administration issued a plan for implementing this environment and described actions that the federal government intends--in coordination with state, local, tribal, private sector, and foreign partners--to carry out over the next 3 years. 
DHS did not fully adhere to the previously mentioned key practices in coordinating its efforts on HSIN with key state and local information-sharing initiatives. The department's limited use of these practices is attributable to a number of factors: in particular, after the events of September 11, 2001, the department expedited its schedule to deploy HSIN capabilities, and in doing so, it did not develop an inventory of key state and local information initiatives. Until the department fully implements key coordination and collaboration practices and guidance, it faces, among other things, the risk that effective information sharing is not occurring. DHS has efforts planned and under way to improve coordination and collaboration, including implementing the recommendations in our recent report. In developing HSIN, DHS did not fully adhere to the practices related to OMB's guidance. First, although DHS officials met with RISS program officials to discuss exchanging terrorism-related documents, joint strategies for meeting mutual needs by leveraging resources have not been fully developed. DHS did not engage the RISS program to determine how resources could be leveraged to meet mutual needs. According to RISS program officials, they met with DHS twice (on September 25, 2003, and January 7, 2004) to demonstrate that their RISS ATIX application could be used by DHS for sharing homeland security information. However, communication from DHS on this topic stopped after these meetings, without explanation. According to DHS officials, they did not remember the meetings, which they attributed to the departure from DHS of the staff who had attended. In addition, although DHS initially pursued a limited strategy of exchanging selected terrorism-related documents with the RISS program, the strategy was impeded by technical issues and by differences in what each organization considers to be terrorism information. 
For example, the exchange of documents between HSIN and the RISS program stopped on August 1, 2006, because of technical problems with HSIN's upgrade to a new infrastructure. As of May 3, 2007, the exchange of terrorism-related documents had not yet resumed, according to HSIN's program manager. This official also stated that the program is currently working to fix the issue with the goal of having it resolved by June 2007. Finally, DHS has yet to fully develop coordination policies, procedures, and other means to operate across agency boundaries with the RISS program and to leverage its available technological resources. Although an operating agreement was established to govern the exchange of terrorism-related documents, according to RISS officials, it did not cover the full range of information available through the RISS program. The extent of DHS's adherence to key practices (and the resulting limited coordination) is attributable to DHS's expedited schedule to deploy an information-sharing application that could be used across the federal government in the wake of the September 11 attacks; in its haste, DHS did not develop a complete inventory of key state and local information-sharing initiatives. According to DHS officials, they still do not have a complete inventory of key state and local information-sharing initiatives. DHS's Office of Inspector General also reported that DHS developed HSIN in a rapid and ad hoc manner, and among other things, did not adequately identify existing federal, state, and local resources, such as RISSNET, that it could have leveraged. Further, DHS did not fully understand the RISS program. Specifically, DHS officials did not acknowledge the RISS program as a state- and local-based program with which to partner, but instead considered it to be one of many vendors providing a tool for information sharing. 
In addition, DHS officials believed that the RISS program was solely focused on law enforcement information and did not capture the broader terrorism-related or other information of interest to the department. Because of this limited coordination and collaboration, DHS is at increased risk that effective information sharing is not occurring. The department also faces the risk that it is developing and deploying capabilities on HSIN that duplicate those being established by state and local agencies. There is evidence that this has occurred with respect to the RISS program. Specifically: * HSIN and RISS ATIX currently target similar user groups. DHS and the RISS program are independently striving to make their applications available to user communities involved in the prevention of, response to, mitigation of, and recovery from terrorism and disasters across the country. For example, HSIN and RISS ATIX are being used and marketed for use at state fusion centers and other state organizations, such as emergency management agencies across the country. * HSIN and RISS applications have similar approaches for sharing information with their users. For example, on each application, users from a particular community--such as emergency management--have access to a portal or community area tailored to the user's information needs. The community-based portals have similar features focused on user communities. Both applications provide each community with the following features: * Web pages. Tailored for communities of interest (e.g., law enforcement, emergency management, critical infrastructure sectors), these pages contain general and community-specific news articles, links, and contact information. * Bulletin boards. Participants can post and discuss information. * Chat tool. Each community has its own online, real-time, interactive collaboration application. * Document library. Participants can store and search relevant documents. 
According to DHS officials, including the HSIN program manager, the department has efforts planned and under way to improve coordination. For example, the department is in the process of developing an integration strategy that is to include enhancing HSIN so that other applications and networks can interact with it. This would promote integration by allowing other federal agencies and state and local governments to use their preferred applications and networks--such as RISSNET and RISS ATIX--while allowing DHS to continue to use HSIN. Other examples of improvements either begun or planned include the following: * The formation of an HSIN Mission Coordinating Committee, whose roles and responsibilities are to be defined in a management directive. It is expected to ensure that all HSIN users are coordinated in information-sharing relationships of mutual value. * The recent development of engagement, communications, and feedback strategies for better coordination and communication with HSIN, including, for example, enhancing user awareness of applicable HSIN contact points and changes to the system. * The reorganization of the HSIN program management office to help the department better meet user needs. According to the program manager, this reorganization has included the use of integrated process teams to better support DHS's operational mission priorities as well as the establishment of a strategic framework and implementation plan for meeting the office's key activities and vision. * The establishment of a HSIN Advisory Committee to advise the department on how the HSIN program can better meet user needs, examine DHS's processes for deploying HSIN to the states, assess state resources, and determine how HSIN can coordinate with these resources. In addition to these planned improvements, DHS has agreed to implement the recommendations in our recent report. 
Specifically, we recommended that the department ensure that HSIN is effectively coordinated with key state and local government information-sharing initiatives. We also recommended that this include (1) identifying and inventorying such initiatives to determine whether there are opportunities to improve information sharing and avoid duplication, (2) adopting and institutionalizing key practices related to OMB's guidance on enhancing and sustaining agency coordination and collaboration, and (3) ensuring that the department's coordination efforts are consistent with the Administration's recently issued Information Sharing Environment plan. In response to these recommendations, DHS described actions it was taking to implement them. (The full recommendations and DHS's written response to them are in the report.) In closing, DHS has not effectively coordinated its primary information-sharing system with two key state and local initiatives. Largely because of the department's hasty approach to delivering needed information-sharing capabilities, it did not follow key coordination and collaboration practices and guidance or invest the time to inventory and fully understand how it could leverage state and local approaches. Consequently, the department faces the risk that effective information sharing is not occurring and that its HSIN application may be duplicating existing state and local capabilities. This also raises the issue of whether similar coordination and duplication issues exist with the other federal homeland security networks and associated systems and applications under the department's purview. DHS recognizes these risks and has improvements planned and under way to address them, including stated plans to implement our recommendations. These are positive steps and should help address shortfalls in the department's coordination practices on HSIN. 
However, these actions have either just begun or are planned, with milestones for implementation yet to be defined. Until all the key coordination and collaboration practices are fully implemented and institutionalized, DHS will continue to be at risk that the effectiveness of its information sharing is not where it needs to be to adequately protect the homeland and that its efforts are unnecessarily duplicating state and local initiatives. Madame Chair, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittee may have. If you have any questions concerning this testimony, please contact David Powner, Director, Information Technology Management Issues, at (202) 512-9286 or [email protected]. Other individuals who made key contributions include Gary Mountjoy, Assistant Director; Barbara Collier; Joseph Cruz; Matthew Grote; and Lori Martinez. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) is responsible for coordinating the federal government's homeland security communications with all levels of government, the private sector, and the public. In support of its mission, the department has deployed a Web-based information-sharing application--the Homeland Security Information Network (HSIN)--and operates at least 11 homeland security networks. The department reported that in fiscal years 2005 and 2006, these investments cost $611.8 million to develop, operate, and maintain. In view of the significance of information sharing for protecting homeland security, GAO was asked to testify on the department's efforts to coordinate its development and use of HSIN with two key state and local initiatives under the Regional Information Sharing Systems--a nationwide information-sharing program operated and managed by state and local officials. This testimony is based on a recent GAO report that addresses, among other things, DHS's homeland security networks and HSIN. In performing the work for that report, GAO analyzed documentation on HSIN and state and local initiatives, compared it against the requirements of the Homeland Security Act and federal guidance and best practices, and interviewed DHS officials and state and local officials. In developing HSIN, its key homeland security information-sharing application, DHS did not work effectively with two key Regional Information Sharing Systems program initiatives. This program, which is operated and managed by state and local officials nationwide, provides services to law enforcement, emergency responders, and other public safety officials. However, DHS did not coordinate with the program to fully develop joint strategies and policies, procedures, and other means to operate across agency boundaries, which are key practices for effective coordination and collaboration and a means to enhance information sharing and avoid duplication of effort. 
For example, DHS did not engage the program in ongoing dialogue to determine how resources could be leveraged to meet mutual needs. A major factor contributing to this limited coordination was that the department rushed to deploy HSIN after the events of September 11, 2001. In its haste, it did not develop a comprehensive inventory of key state and local information-sharing initiatives, and it did not achieve a full understanding of the relevance of the Regional Information Sharing Systems program to homeland security information sharing. As a result, DHS faces the risk that effective information sharing is not occurring and that HSIN may be duplicating state and local capabilities. Specifically, both HSIN and one of the Regional Information Sharing Systems initiatives target similar user groups, such as emergency management agencies, and both have similar features, such as electronic bulletin boards, "chat" tools, and document libraries. The department has efforts planned and under way to improve coordination and collaboration, including developing an integration strategy to allow other applications and networks to connect with HSIN, so that organizations can continue to use their preferred information-sharing applications and networks. In addition, it has agreed to implement recommendations made by GAO to take specific steps to (1) improve coordination, including developing a comprehensive inventory of state and local initiatives, and (2) ensure that similar coordination and duplication issues do not arise with other federal homeland security networks, systems, and applications. Until DHS completes these efforts, including developing an inventory of key state and local initiatives and fully implementing and institutionalizing key practices for effective coordination and collaboration, the department will continue to be at risk that information is not being effectively shared and that the department is duplicating state and local capabilities.
Air traffic controllers monitor and direct traffic in a designated volume of airspace called a sector. Each sector requires a separate channel assignment for controllers to communicate with aircraft flying in that sector. As the amount of air traffic grows, the need for additional sectors and channel assignments also increases. FAA's present air-ground communications system operates in a worldwide, very high frequency (VHF) band reserved for safety communications within the 118 to 137 megahertz (MHz) range. Within this range of frequencies, FAA currently has 524 channels available for air traffic services. During the past four decades, FAA has primarily been able to meet the increased need for more channel capacity within this band by periodically reducing the space between channels (a process known as channel splitting). For example, in 1966, reducing the space between channels from 100 kHz to 50 kHz doubled the number of channels. The last channel split in 1977, from 50 kHz to 25 kHz, again doubled the number of channels available. Each time FAA reduced this space, owners of aircraft needed to purchase new radios to receive the benefits of the increased number of channels. FAA can use or assign its 524 channels several times around the country (as long as channel assignments are separated geographically to preclude frequency interference). Through channel reuse, FAA can make up to 14,000 channel assignments nationwide. While aviation literature often refers to channel and channel assignments as frequency and frequency assignments, throughout this report, we use the terms channel and channel assignments. Because the growth in air traffic during the past decade has created a need for more communications channels since the 1977 split, FAA has been increasingly concerned that the demand for channels would exceed their availability, which would cause frequency congestion. 
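The channel arithmetic described above can be illustrated with a short calculation. This is an illustrative sketch using only figures cited in the report (the 118-137 MHz band and the successive 100, 50, and 25 kHz spacings); the raw totals it prints are simple divisions and exceed the 524 channels FAA actually has available for air traffic services, since some channels in the band are reserved for other purposes.

```python
# Illustrative only: how channel splitting multiplies capacity in the
# 118-137 MHz VHF aeronautical band. Figures come from the report; the
# raw totals exceed FAA's 524 usable channels because some channels are
# reserved for purposes other than air traffic services.

BAND_START_MHZ = 118.0
BAND_END_MHZ = 137.0

def channel_count(spacing_khz: float) -> int:
    """Number of channel slots that fit in the band at a given spacing."""
    band_khz = (BAND_END_MHZ - BAND_START_MHZ) * 1000
    return int(band_khz // spacing_khz)

for spacing in (100, 50, 25):
    print(f"{spacing:>3} kHz spacing -> {channel_count(spacing)} channel slots")

# Each halving of the spacing doubles the count, matching the 1966 and
# 1977 channel splits described in the report.
assert channel_count(50) == 2 * channel_count(100)
assert channel_count(25) == 2 * channel_count(50)
```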
FAA first explored this issue at length at a 1990 International Civil Aviation Organization (ICAO) conference, at which the ICAO member countries addressed increasing congestion in the air traffic control communications band and the especially acute problem in the U.S. and Western Europe. Over the next 5 years, ICAO evaluated different solutions that were proposed by the conference's participants. While the Western European countries proposed further channel splitting to increase capacity, FAA proposed a totally new air-ground communications system. FAA's proposed system, known as VDL-3, would be based on a new integrated digital voice and data communications technology, which would assign segments of a channel to users in milliseconds of time, thereby allowing both voice and data to travel over the same channels using one of the available time slots. Under the current system, each channel is used exclusively and continuously for voice, so the air traffic controller can communicate at all times with the aircraft. This new technology could provide up to a fourfold increase in capacity without channel splitting, thus meeting the demand for new voice channels. VDL-3 digitizes a person's voice and sends it as encoded bits of information, which the receiver reassembles. Moreover, this technology could provide real-time data link on-board communications of air traffic control messages and events. Although ICAO adopted FAA's proposed digital air-ground communications system VDL-3 in 1995 as its model for worldwide implementation, it also approved standards allowing Western Europe, which was then experiencing severe frequency congestion, to further reduce the spacing between channels from 25 kHz to 8.33 kHz. While this action tripled the number of channels available for assignment, it also resulted in the need for aircraft flying in Western Europe to install new radios that are capable of communicating over the 8.33 kHz channels. 
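The time-slot idea behind VDL-3 can be sketched with a toy round-robin scheduler. The four-slot count follows from the fourfold capacity increase cited above, but the frame length and the user names below are illustrative assumptions for this sketch, not the VDL-3 specification.

```python
# Toy round-robin TDMA schedule, illustrating how VDL-3-style time
# slotting lets several users share one 25 kHz channel. SLOTS_PER_FRAME
# reflects the fourfold increase cited in the report; FRAME_MS and the
# user names are illustrative assumptions, not the VDL-3 standard.

from itertools import islice

SLOTS_PER_FRAME = 4
FRAME_MS = 120.0  # assumed frame length for illustration

def tdma_schedule(users):
    """Yield (time_ms, user) pairs: each user transmits in its own slot."""
    slot_ms = FRAME_MS / SLOTS_PER_FRAME
    t = 0.0
    while True:
        for slot, user in enumerate(users):
            yield (t + slot * slot_ms, user)
        t += FRAME_MS

users = ["sector-A", "sector-B", "sector-C", "sector-D"]
for time_ms, user in islice(tdma_schedule(users), 8):
    print(f"t={time_ms:6.1f} ms  {user}")
```

Each user gets a recurring slice of the channel rather than exclusive, continuous use, which is what frees the remaining capacity for other voice users or for data.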
ICAO intended that this reduction would be an interim measure until 2004, when FAA estimated that the technology it had proposed would be operational. However, FAA did not pursue developing VDL-3 in 1995, in part, because its existing communications system still had available capacity to meet near-term communications needs, and because the agency's need to modernize its air traffic control system became an urgent priority. In 1998, FAA resumed developing VDL-3; however, the agency is not expected to implement this technology until 2009. Figure 1 depicts how channel splitting has increased channel capacity since 1966 and how FAA's proposed use of VDL-3 will further increase channel capacity. FAA has identified 23 measures to improve its existing voice communications system. While FAA and the U.S. aviation industry generally believe that implementing all these measures would add several years to the useful life of the existing system, they believe it still would not meet aviation's voice communications needs beyond 2009. Because increases in air traffic create the need for more channel assignments, the events of September 11, which have resulted in slower-than-expected increases, might delay by a year or two when FAA starts to encounter problems systemwide in providing new channel assignments. Agency and industry representatives agree that it is not possible to precisely predict when the existing system with its planned improvements will no longer meet aviation's needs. As a result, FAA plans to annually assess whether this system will be capable of meeting the projected need for more channel assignments for at least 5 years into the future. FAA plans to release the first of these annual assessments in September 2002. While the focus of FAA's efforts has been to meet aviation's need for voice communications through 2009, FAA recognizes that its data communications needs are evolving. 
The agency expects to increase its use of data communications to help alleviate voice congestion and to help controllers and pilots accurately exchange more information. Because FAA's current system cannot do this, it has been leasing data link services from ARINC. However, even with the planned improvements, this service will not be able to meet FAA's projected need for more data communications. As FAA relies more on data communications, this leased system will not be able to meet the agency's need to prioritize those messages that must be delivered expeditiously. Recognizing that accurately projecting the growth in aviation's need for data link communications beyond 15 years would be difficult, FAA is designing a system to provide a sevenfold increase in capacity to meet future needs. During the 1990s, several of FAA's studies found that, historically, increases in air traffic were closely related to the growing need to assign more channels for voice communications (see fig. 2). In its most recent study about the growing need for more channel assignments for voice communications, FAA found that this need had grown annually, on average, about 4 percent (about 300 new channel assignments) since 1974 (see fig. 3). This growth paralleled the increase in domestic air travel during that time frame. Despite the recent downturn in air traffic resulting from a recession and the September 11 terrorist attacks, FAA expects it to resume its historical 4-percent annual growth within a year or two. Currently, FAA's voice communications system is limited to a maximum of 14,000 channel assignments. Because increases in air traffic require more new channel assignments, FAA expects that providing them in some metropolitan areas will become increasingly difficult. 
If the system is left unchanged, FAA has concluded that, as early as 2005, it could no longer fully support aviation's need for voice communications and that in such high traffic metropolitan areas as New York, Chicago, and Los Angeles the need for additional assignments could be evident sooner. Because FAA has delayed NEXCOM's implementation until 2009, the agency's 23 planned improvement measures are designed to add approximately 2,600 additional channel assignments for voice communications. (See table 1.) FAA has classified these initiatives, which involve a variety of technical, regulatory, and administrative changes, according to how soon it expects to implement them. However, FAA recognizes that there is no guarantee that all of these measures can be implemented because some of them largely depend on gaining agreement from other entities, such as other federal agencies and the aviation community, and some may involve international coordination. FAA also recognizes that the exact degree of improvement resulting from the totality of these measures cannot be precisely projected and actual test results could show less gain than anticipated. Many of these initiatives involve reallocating channels being used for purposes other than air traffic services and increasing FAA's flexibility to use already assigned channels. For example, FAA is reviewing its policy for assigning channels to such special events as air shows to determine if fewer channels could be assigned to them so that channels could be used for other purposes. While it is not possible to predict exactly when FAA's existing voice communications system will run out of available channel assignments, agency and aviation representatives concur that, without the 23 improvement measures, the system will be strained to provide enough channel assignments. 
According to a MITRE Corporation study completed in 2000, even if the need for more channel assignments for voice communications were to grow at 2 percent per year (instead of FAA's projected growth of 4 percent per year), by 2005 or sooner, it would be difficult for FAA to meet the need for air traffic communications in major metropolitan areas. MITRE also projected that the shortage of available channel assignments would become a nationwide problem by 2015 or sooner. In 2000, FAA first encountered a shortage problem when it had to reassign a channel from one location to another that FAA viewed as a higher priority in the Cleveland area. Figure 4 shows MITRE's analysis of how the projected demand for more voice communications capacity will intensify if FAA does nothing to improve this system. Currently, FAA is leasing ARINC's Aircraft Communications Addressing and Reporting System (ACARS) to provide data link communications that are not time critical, such as forwarding clearances to pilots prior to takeoff. Because this analog system is also reaching its capacity to handle data link communications, FAA plans to use ARINC's new digital data communications system, known as Very High Frequency Digital Link Mode 2 (VDL-2) until 2009. By then, FAA expects to use its VDL-3 system, which is being developed to integrate voice and data communications, to meet aviation's needs for about 1,800 channel assignments for data communications over the next 15 years and to prioritize messages that must be delivered expeditiously, which VDL-2 cannot provide. Because FAA believes that aviation's need for data communications cannot be realistically projected beyond 15 years, it is designing a system to provide a sevenfold increase in capacity for data communications, thereby providing what it believes is an excess capacity that should meet aviation's future needs. 
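The growth scenarios above can be turned into a back-of-the-envelope projection. The 2002 starting point of roughly 7,500 assignments is an assumption inferred from the report's figures (4 percent growth equating to about 300 new assignments per year); the 14,000 ceiling is from the report. Note that this treats the ceiling as an aggregate limit, whereas, as the report and MITRE's analysis stress, shortages surface much earlier in individual metropolitan areas:

```python
# Back-of-the-envelope projection of when channel-assignment demand
# outgrows the systemwide ceiling. The ~7,500 starting count is an
# inferred assumption, not a figure stated in the report.

def exhaustion_year(start_year: int, assignments: float,
                    growth: float, capacity: int) -> int:
    """First year in which compounded demand exceeds capacity."""
    year = start_year
    while assignments <= capacity:
        assignments *= 1 + growth
        year += 1
    return year

CEILING = 14_000            # current systemwide maximum assignments
for rate in (0.04, 0.02):   # FAA's historical rate vs. MITRE's low scenario
    print(rate, exhaustion_year(2002, 7_500, rate, CEILING))
```

The gap between these aggregate dates and the 2005 metropolitan-area dates cited above illustrates why localized congestion, not the nationwide total, drives FAA's urgency.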
In consultation with stakeholders from the aviation industry, FAA selected VDL-3 as the preferred solution to meet its future communications needs. During the 1990s, FAA collaborated with its stakeholders to analyze many different communications systems, as well as variations of them, as potential candidates to replace its existing communications system. As a result of these studies, FAA eliminated several designs because they did not meet some of the fundamental needs established for NEXCOM. For example, FAA found that Europe's Very High Frequency Digital Link Mode 4 (VDL-4) technology was too early in development to assess and that it would not provide voice communications, FAA's most pressing need. Moreover, a vendor of VDL-4 recently told us that this technology still needed additional development to meet FAA's communications needs and that the international community had not yet validated it as a standard for air traffic control communications, which could take at least an additional 3 years. In March 1998, FAA rated VDL-3 as the best of the six possible technologies to meet its future communications needs and the most likely to meet its schedule with the least risk. FAA found that VDL-3, the international model for aviation communications, could provide up to a fourfold increase in channel capacity (initial deployment scenarios are expected to yield a three- to fourfold increase); transmit voice and data communications without interference; increase the level of security; provide voice and data communications to all users with minimal equipment replacement; require no additional channel splitting, thereby reducing the need for engineering changes; and reduce the number of ground radios required by FAA because each radio could accommodate up to four channels within the existing 25 kHz channel spacing. 
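The time-slot idea behind a single radio serving four channels can be illustrated with a toy scheduler. The 120 ms frame length is an assumption drawn from typical TDMA descriptions of VDL-3; the report states only that each 25 kHz channel yields up to four slots:

```python
# Toy TDMA scheduler: one 25 kHz carrier carries a repeating frame of
# four time slots, each acting as an independent voice/data subchannel.
# The 120 ms frame duration is an illustrative assumption.

SLOTS = 4
FRAME_MS = 120.0

def active_slot(time_ms: float) -> int:
    """Which of the four subchannels owns the carrier at a given instant."""
    within_frame = time_ms % FRAME_MS
    return int(within_frame // (FRAME_MS / SLOTS))

# Sampling every 30 ms shows four users taking turns on one carrier:
schedule = [active_slot(t) for t in range(0, 240, 30)]
print(schedule)
```

Because the turns repeat every few tens of milliseconds, digitized voice sounds continuous to each user even though the carrier is shared, which is how one ground radio can replace up to four.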
Although FAA and its stakeholders thought that each of the five other technologies had some potential to satisfy a broad range of their future needs, each was rejected during the 1998 evaluation process. (See table 2.) Academic and other experts have concluded that FAA's rationale for rejecting alternative technologies in 1998 remains valid today. Specifically, the technical challenges facing these technologies have not been sufficiently resolved to allow FAA to deploy an initial operating system by 2005. For example, while satellite technology is used to provide voice and data communications across the oceans and in remote regions, it is expensive, it does not support the need for direct aircraft-to-aircraft communications, and it does not meet international standards for air traffic control communications. Representatives from the National Aeronautics and Space Administration (NASA) told us that emerging technologies that could meet FAA's need for voice and data communications could be developed and available by 2015. However, in further discussions, these representatives indicated that while such technologies might be mature enough to provide communications services, additional time may be required for them to meet all of the requirements associated with air traffic control safety systems. NASA officials commented that FAA initiated its plans for its new communications system at the outset of the emerging wireless technology explosion and was not able to assess and integrate any of these emerging technologies into the NEXCOM architecture. However, they noted that the telecommunications field is changing rapidly, and FAA and the aviation industry will need to continually assess their requirements and keep abreast of emerging technologies that could better meet their future communications needs. FAA's planned approach for NEXCOM is to implement VDL-3 in three segments, as shown in figure 5. 
Currently, FAA's senior management has only approved investments for the first segment. If FAA cannot demonstrate that VDL-3 can successfully integrate both voice and data in a cost-effective manner, FAA plans to implement a backup approach to meet the need for more channel capacity. FAA's backup follows the Western European approach as follows: For analog voice communications, reduce the 25 kHz space between channels to 8.33 kHz. For digital data communications, rely on a commercial vendor that is developing a technology to support aviation's need for data, known as VDL-2. However, this approach remains a backup because it doubles, not quadruples, voice channel capacity. Furthermore, it does not resolve the issues of radio interference and loss of communications that now confront FAA, nor does it meet all of the requirements for air traffic control data link communications. Before selecting VDL-3 as the technology for NEXCOM, FAA needs to demonstrate the technical and operational merits of VDL-3, certify VDL-3 as a "safety critical system," and prove its cost-effectiveness to the aviation industry. To help address these issues, the FAA Administrator formed the NEXCOM Aviation Rulemaking Committee (NARC) in 2000. The NARC, composed of representatives from the aviation industry and other groups, submitted its final report in September 2001, which included recommendations to expedite the resolution of technical and operational issues involving NEXCOM. To demonstrate VDL-3's technical and operational merits, FAA has scheduled a series of three tests of this technology, beginning in October 2002 and ending in October 2004. The first test is designed to demonstrate the quality of voice communications and the integration of voice and data communications. 
A key component of the second test is to demonstrate that new digital ground radios can work with new digital aircraft equipment and other equipment in FAA's air traffic control system. Finally, in the third test, FAA plans to validate that VDL-3 can be certified as safe for aircraft operations. Moreover, making VDL-3 fully operational will require FAA and users to undertake a phased installation of tens of thousands of new pieces of equipment. In addition to FAA and users installing radios with new transmitters and receivers, FAA would need to install new voice switches and workstations. FAA also needs to ensure that all the new equipment required for NEXCOM will be compatible with FAA's existing equipment, especially the numerous types of voice switches as well as the local and wide area networks. Therefore, FAA estimates that it will take 5 years following the successful conclusion of its demonstration tests for it to install the new ground equipment, while the airlines install new aircraft equipment. Figure 6 shows FAA's schedule to implement both voice and data digital communications. Because communications are critical to ensuring safe aircraft operations, FAA is developing a process to certify that VDL-3 and the new equipment it requires could be used in the National Airspace System. In April 2002, FAA's teams responsible for developing and certifying VDL-3 drafted a memorandum of understanding that describes their respective responsibilities. They agreed to maintain effective communications among them as well as with the manufacturers developing VDL-3 equipment. (See table 3 for the schedule for certifying the radios that will be used with VDL-3.) To FAA's credit, the agency is proactively seeking certification before making a final decision on VDL-3. The issue of cost-effectiveness was raised by the NARC because it wanted FAA to fully analyze the airlines' transition to digital radios before the agency requires their use. 
Convincing enough users to purchase VDL-3 radios might be difficult because some air carriers had recently bought 8.33 kHz radios for operation in Europe, and they would not be eager to purchase additional equipment. As part of its cost-benefit analysis, FAA is assuming a 30-year life cycle for NEXCOM; however, changing requirements coupled with the rapidly changing developments in telecommunications technology could reduce this life cycle. Without analyzing the costs and benefits under different confidence levels for other potential life cycles for NEXCOM, while considering the impact of changing requirements and the effects of emerging technologies, FAA might find it more difficult to enlist the continued support of the aviation community for NEXCOM. FAA plans to begin analyzing the cost-effectiveness of NEXCOM in mid-2002, publish a notice of proposed rulemaking by January 2004, complete its cost-benefit analysis by mid-2004, and publish its final rulemaking by June 2005. FAA officials agreed that it is important to continually evaluate the requirements of the future system and whether emerging technologies could reduce VDL-3's cost-effectiveness prior to making the final selection. Throughout its rulemaking process, program officials stressed that they plan to continue involving all key FAA organizations and the aviation industry. FAA's approach for selecting its NEXCOM technology appears prudent. The FAA officials managing NEXCOM have worked with the aviation industry and involved other key FAA organizations to help ensure that the technical and operational, safety, and cost-effectiveness issues are resolved in a timely manner. However, FAA is only in the early stages of resolving these three issues, and the program's continued success hinges on FAA's maintaining close collaboration with major stakeholders. 
FAA's follow-through on the development of a comprehensive cost-benefit analysis, which considers how changing requirements and emerging technologies could affect the cost-effectiveness of VDL-3, will be key to this success. Otherwise, the aviation community might not continue to support FAA in developing NEXCOM, as it does now. To make the most informed decision in selecting the technology for NEXCOM and continue to receive the support from the aviation community, we recommend that the Secretary of Transportation direct the FAA Administrator to assess whether the requirements for voice and data communications have changed and the potential impact of emerging technologies on VDL-3's useful life as part of its cost-effectiveness analysis of NEXCOM. We provided the Department of Transportation, the Department of Defense, and the National Aeronautics and Space Administration with a draft of this report for review and comment. The Department of Defense provided no comments. The Product Team Lead for Air/Ground Voice Communications and officials from Spectrum Policy and Management, FAA, indicated that they generally agreed with the facts and recommendation. These officials, along with those from the National Aeronautics and Space Administration, provided a number of clarifying comments, which we have incorporated where appropriate. To determine the extent to which FAA's existing communications system can effectively meet its future needs, we interviewed officials from FAA's NEXCOM program office, the agency's spectrum management office, union officials representing the air traffic controller and maintenance technician workforces, representatives of the MITRE Corporation, and members of the NARC, an advisory committee formed by FAA to help ensure that NEXCOM meets the aviation industry's needs. 
We reviewed documentation on the current status of FAA's existing air-ground communications system as well as documentation on potential measures FAA plans to take to increase the channel capacity of its existing system. To determine what FAA did to help ensure that its preferred technology for NEXCOM will meet aviation's future needs, we interviewed officials from FAA's NEXCOM program office; officials from the Department of Defense, the National Aeronautics and Space Administration, and Eurocontrol; an expert in satellite communications from the University of Maryland; and contractors who offer VDL-2 and VDL-4 communications services. We reviewed documentation indicating to what extent varying technologies could meet FAA's time frames for implementing NEXCOM. We also reviewed documentation indicating how well varying technologies could meet FAA's specifications for NEXCOM. We did not perform an independent verification of the capabilities of these technologies. Additionally, we reviewed studies performed by FAA in collaboration with the U.S. aviation industry to assess alternative technologies for NEXCOM that led the U.S. aviation community to endorse FAA's decision to select VDL-3 as its preferred technology for NEXCOM. To identify issues FAA needs to resolve before it can make a final selection for NEXCOM's technology, we interviewed officials from FAA's NEXCOM program office as well as members of the NARC. We also reviewed NEXCOM program office documentation that prioritizes the program's risks, assesses their potential impact on the program's cost and schedule, and describes the status of FAA's efforts to mitigate those risks. In addition, we reviewed the NARC's September 2001 report that made recommendations to FAA for modernizing its air-ground communications system. We conducted our review from September 2001 through May 2002, in accordance with generally accepted government auditing standards. 
We are sending copies of this report to interested Members of Congress; the Secretary of Transportation; the Secretary of Defense; the Administrator, National Aeronautics and Space Administration, and the Administrator, FAA. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3650. I can also be reached by E-mail at [email protected]. Key contributors are listed in appendix I. In addition to those individuals named above, Nabajyoti Barkakati, Geraldine C. Beard, Jeanine M. Brady, Peter G. Maristch, and Madhav S. Panwar made key contributions to this report.
The Federal Aviation Administration (FAA) provides air-ground voice and data communications for pilots and air traffic controllers to safely coordinate all flight operations, ground movement of aircraft at airports, and in-flight separation distances between aircraft. However, the anticipated growth in air traffic, coupled with FAA's efforts to reduce air traffic delays and introduce new air traffic services, will create a demand for additional channels of voice communications that FAA's current system cannot provide. FAA and the aviation industry agree that the existing communications system, even with enhancements, cannot meet aviation's expanding need for communications. To ensure that the technology it wants to use for Next Generation Air/Ground Communications (NEXCOM) will meet its future needs, FAA, in collaboration with the aviation industry, conducted a comparative analysis of numerous technologies, to assess each one's ability to meet technical requirements, minimize program risk, and meet the agency's schedule. However, before making a final decision on the technology for NEXCOM, FAA will need to efficiently address three major issues: whether the preferred technology is technically sound and will operate as intended, if the preferred technology and the equipment it requires can be certified as safe for use in the National Airspace System, and whether it is cost effective for users and the agency.
U.S. federal financial regulators have made progress in implementing provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) and related reforms to restrict future government support and reduce the likelihood and impacts of the failure of a systemically important financial institution (SIFI). These reforms can be grouped into four general categories: (1) restrictions on regulators' emergency authorities to provide assistance to financial institutions; (2) new tools and authorities for regulators to resolve a failing SIFI outside of bankruptcy if its failure would have serious adverse effects on the U.S. financial system; (3) enhanced regulatory standards for SIFIs related to capital, liquidity, and risk management; and (4) other reforms intended to reduce the potential disruptions to the financial system that could result from a SIFI's failure. We found that while views varied among market participants with whom we spoke, many believed that recent regulatory reforms have reduced but not eliminated the likelihood the federal government would prevent the failure of one of the largest bank holding companies. Citing recent reforms, two of the three largest credit rating agencies reduced or eliminated "uplift"--an increase in the credit rating--they had assigned to the credit ratings of eight of the largest bank holding companies due to their assumptions of government support for these firms. Credit rating agencies and large investors cited the new Orderly Liquidation Authority, which gives the Federal Deposit Insurance Corporation new authority to resolve large financial firms outside of the bankruptcy process, as a key factor influencing their views. While several large investors viewed the resolution process as credible, others cited potential challenges, such as the risk that multiple failures of large firms could destabilize markets. 
Remaining market expectations of government support can benefit large bank holding companies to the extent that these expectations affect decisions by investors, counterparties, and customers of these firms. First, market beliefs about government support could benefit a firm by lowering its funding costs to the extent that providers of funds--such as depositors, bond investors, and stockholders--rely on credit ratings that assume government support or incorporate their own expectations of government support into their decisions to provide funds. Second, higher credit ratings from assumed government support can benefit firms through private contracts that reference credit ratings, such as derivative contracts that tie collateral requirements to a firm's credit rating. Finally, expectations of government support can affect a firm's ability to attract customers to varying degrees. New and higher fees imposed by the Dodd-Frank Act, stricter regulatory standards, and other reforms could increase costs for the largest bank holding companies relative to smaller competitors. Officials from the Financial Stability Oversight Council (FSOC) and its member agencies have stated that financial reforms have not completely removed too-big-to-fail perceptions but have made significant progress toward doing so. According to Department of the Treasury (Treasury) officials, key areas that require continued progress include education of market participants on reforms and international coordination on regulatory reform efforts, such as creating a viable process for resolving a failing financial institution with significant cross-border activities. We analyzed the relationship between a bank holding company's size and its funding costs, taking into account a broad set of other factors that can influence funding costs. 
To inform this analysis and to understand the breadth of methodological approaches and results, we reviewed selected studies that estimated funding cost differences between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. Studies we reviewed generally found that the largest financial institutions had lower funding costs during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller institutions has since declined. However, these empirical analyses contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies. For example, some studies used credit ratings, which provide only an indirect measure of funding costs. In addition, studies that pooled a large number of countries in their analysis have results that may not be applicable to U.S. bank holding companies and studies that did not include data past 2011 have results that may not reflect recent changes in the regulatory environment. Our analysis, which addresses some limitations of these studies, suggests that large bank holding companies had lower funding costs than smaller ones during the financial crisis but provides mixed evidence of such advantages in recent years. However, most models suggest that such advantages may have declined or reversed. To conduct our analysis, we developed a series of econometric models--models that use statistical techniques to estimate the relationships between quantitative economic and financial variables--based on our assessment of relevant studies and expert views. These models estimate the relationship between bank holding companies' bond funding costs and their size, while also controlling for other drivers of bond funding costs, such as bank holding company credit risk. Key features of our approach include the following:

U.S. bank holding companies. 
To better understand the relationship between bank holding company funding costs and size in the context of the U.S. economic and regulatory environment, we only analyzed U.S. bank holding companies. In contrast, some of the literature we reviewed analyzed nonbank financial companies and foreign companies.

2006-2013 time period. To better understand the relationship between bank holding company funding costs and size in the context of the current economic and regulatory environment, we analyzed the period from 2006 through 2013, which includes the recent financial crisis as well as years before the crisis and following the enactment of the Dodd-Frank Act. In contrast, some of the literature we reviewed did not analyze data in the years after the financial crisis.

Bond funding costs. We used bond yield spreads--the difference between the yield or rate of return on a bond and the yield on a Treasury bond of comparable maturity--as our measure of bank holding company funding costs because they are a direct measure of what investors charge bank holding companies to borrow money and because they are sensitive to credit risk and hence expected government support. This indicator of funding costs has distinct advantages over certain other indicators used in studies we reviewed, including credit ratings, which do not directly measure funding costs, and total interest expense, which mixes the costs of funding from multiple sources.

Alternative measures of size. Size or systemic importance can be measured in multiple ways, as reflected in our review of the literature. Based on that review and the comments we received from external reviewers, we used four different measures of size or systemic importance: total assets, total assets and the square of total assets, whether or not a bank holding company was designated a global systemically important bank by the Financial Stability Board in November 2013, and whether or not a bank holding company had assets of $50 billion or more. 
Extensive controls for bond liquidity, credit risk, and other key factors. To account for the many factors that could influence funding costs, we controlled for credit risk, bond liquidity, and other key factors in our models. We included a number of variables that are associated with the risk of default, including measures of capital adequacy, asset quality, earnings, and volatility. We also included a number of variables that can be used to measure bond liquidity. Finally, we included variables that measure other key characteristics of bonds, such as time to maturity, and key characteristics of bank holding companies, such as operating expenses. Our models include a broader set of controls for credit risk and bond liquidity than some studies we reviewed and we directly assess the sensitivity of our results to using alternative controls on our estimates of funding costs.

Multiple model specifications. In order to assess the sensitivity of our results to using alternative measures of size, bond liquidity, and credit risk, we estimated multiple different model specifications. We developed models using four alternative measures of size, two alternative sets of measures of capital adequacy, six alternative measures of volatility, and three alternative measures of bond liquidity. In contrast, some of the studies we reviewed estimated a more limited number of model specifications.

Link between size and credit risk. To account for the possibility that investors' beliefs about government rescues affect their responsiveness to credit risk, our models allow the relationships between bank holding company funding costs and credit risk to depend on size.

Altogether, we estimated 42 different models for each year from 2006 through 2013 and then used those models to compare bond yield spreads--our measure of bond funding costs--for bank holding companies of different sizes but with the same level of credit risk. 
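The kind of model described above can be sketched, very loosely, as an ordinary least squares regression of bond yield spreads on size, credit risk, a liquidity control, and a size-by-credit-risk interaction. Everything below (variable names, synthetic data, coefficient values) is an illustration, not GAO's actual specification:

```python
# Illustrative OLS sketch of a yield-spread model with a size x credit-risk
# interaction, fit on synthetic data. All coefficients are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_assets = rng.uniform(9, 14, n)    # log of total assets, $ millions (assumed units)
credit_risk = rng.uniform(0, 2, n)    # stand-in credit-risk index
liquidity = rng.uniform(0, 1, n)      # bond-liquidity control

# Synthetic "true" relationship: bigger firms pay less, riskier firms pay
# more, and size dampens the price of risk (the interaction term).
spread = (300 - 10 * log_assets + 80 * credit_risk
          - 5 * log_assets * credit_risk - 20 * liquidity
          + rng.normal(0, 5, n))

X = np.column_stack([np.ones(n), log_assets, credit_risk,
                     log_assets * credit_risk, liquidity])
beta, *_ = np.linalg.lstsq(X, spread, rcond=None)

def predicted_spread(b, la, cr, liq=0.5):
    """Model-implied spread (basis points) at given size and credit risk."""
    return b[0] + b[1] * la + b[2] * cr + b[3] * la * cr + b[4] * liq

# The report's comparison: a $1 trillion vs. a $10 billion firm at the
# same (here, average) level of credit risk.
diff = (predicted_spread(beta, np.log(1e6), 1.0)
        - predicted_spread(beta, np.log(1e4), 1.0))
print(round(diff, 1))
```

Because the synthetic data build in a funding-cost advantage for large firms, the fitted difference comes out negative; GAO's actual models, of course, let the data decide the sign and include far richer controls.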
Figure 1 shows our models' comparisons of bond funding costs for bank holding companies with $1 trillion in assets and average credit risk and bond funding costs for similar bank holding companies with $10 billion in assets, for each model and for each year. Each circle and dash in figure 1 shows the comparison for a different model. Circles show model- estimated differences that were statistically significant at the 10 percent level, while dashes represent differences that were not statistically significant at that level. Circles and dashes below zero correspond to models suggesting that bank holding companies with $1 trillion in assets have lower bond funding costs than bank holding companies with $10 billion in assets, and vice versa. For example, for 2013, a total of 18 models predicted statistically significant differences above zero and a total of eight models predicted statistically significant differences below zero. Our analysis provides evidence that the largest bank holding companies had lower funding costs during the 2007-2009 financial crisis but that these differences may have declined or reversed in recent years. However, we found that the outcomes of our econometric models varied with the various controls we used to capture size, credit risk, and bond liquidity. This variation indicates that uncertainty related to how to model funding costs has an important impact on estimated funding cost differences between large and small bank holding companies. As figure 1 shows, most models found that larger bank holding companies had lower bond funding costs than smaller bank holding companies during the 2007-2009 financial crisis, but the magnitude of the difference varied widely across models, as indicated by the range of results for each year. 
For example, for 2008, our models suggest that bond funding costs for bank holding companies with $1 trillion in assets and average credit risk were from 17 to 630 basis points lower than bond funding costs for similar bank holding companies with $10 billion in assets. Our models' comparisons of bond funding costs for different-sized bank holding companies for 2010 through 2013 also vary widely. For bank holding companies with average credit risk, more than half of our models suggest that larger bank holding companies had higher bond funding costs than smaller bank holding companies from 2011 through 2013, but many models suggest that larger bank holding companies still had lower bond funding costs than smaller ones during this period. For example, for 2013, our models suggest that bond funding costs for average credit risk bank holding companies with $1 trillion in assets ranged from 196 basis points lower to 63 basis points higher than bond funding costs for similar bank holding companies with $10 billion in assets (see fig. 1). For 2013, 30 of our models suggest that the larger banks had higher funding costs, and 12 of our models suggest that the larger banks had lower funding costs. To assess how investors' beliefs that the government will support failing bank holding companies have changed over time, we compared bond funding costs for bank holding companies of various sizes while holding the level of credit risk constant over time at the average for 2008--a relatively high level of credit risk that prevailed during the financial crisis. In these hypothetical scenarios, most models suggest that bond funding costs for larger bank holding companies would have been lower than bond funding costs for smaller bank holding companies in most years from 2010 to 2013. 
For example, most models for 2013 predict that bond funding costs for larger bank holding companies would be higher than for smaller bank holding companies at the average level of credit risk in that year, but would be lower at financial crisis levels of credit risk (see fig. 2). These results suggest that changes over time in funding cost differences we estimated (depicted in fig. 1) have been driven at least in part by improvements in the financial condition of bank holding companies. At the same time, more models predict lower bond funding costs for larger bank holding companies in 2008 than in 2013 when we assume that financial crisis levels of credit risk prevailed in both years, which suggests that investors' expectations of government support have changed over time. However, it is important to note that the relationships between variables estimated by our models could be sensitive to the average level of credit risk among bank holding companies, making these estimates, which apply 2008 levels of credit risk to the current environment, even more uncertain. Moreover, Dodd-Frank Act reforms discussed earlier in this statement, such as enhanced regulatory standards for capital and liquidity, could enhance the stability of the U.S. financial system and make such a credit risk scenario less likely. This analysis builds on certain aspects of prior studies, but our estimates of the relationship between the size of a bank holding company and the yield spreads on its bonds are limited by several factors and should be interpreted with caution. Our estimates of differences in funding costs reflect a combination of several factors, including investors' beliefs about the likelihood that a bank holding company will fail, the likelihood that it will be rescued by the government if it fails, and the size of the losses that the government may impose on investors if it rescues the bank holding company. 
Like the methodologies used in the literature we reviewed, our methodology does not allow us to precisely identify the influence of each of these components. As a result, changes over time in our estimates of the relationship between bond funding costs and size may reflect changes in one or more of these components, but we cannot identify which with certainty. In addition, these estimates may reflect factors other than investors' beliefs about the likelihood of government support and may also reflect differences in the characteristics of bank holding companies that do and do not issue bonds. If a factor that we have not taken into account is associated with size, then our results may reflect the relationship between bond funding costs and this omitted factor instead of, or in addition to, the relationship between bond funding costs and bank holding company size. Finally, our estimates are not indicative of future trends. After reviewing the draft report, Treasury provided general comments and Treasury, FDIC, the Federal Reserve Board, and OCC provided technical comments. In its written comments, Treasury commented that our draft report represents a meaningful contribution to the literature and that our results reflect increased market recognition that the Dodd-Frank Act ended "too big to fail" as a matter of law. While our results do suggest bond funding cost differences between large and smaller bank holding companies may have declined or reversed since the 2007-2009 financial crisis, we also found that a higher credit risk environment could be associated with lower bond funding costs for large bank holding companies than for small ones. 
Furthermore, as we have noted, many market participants we spoke with believe that recent regulatory reforms have reduced but not eliminated the perception of "too big to fail," and both they and Treasury officials indicated that additional steps were required to address "too big to fail." As discussed, changes over time in our estimates of the relationship between bond funding costs and size may reflect changes in one or more components of investors' beliefs about government support--such as their views on the likelihood that a bank holding company will fail and the likelihood it will be rescued if it fails--but we cannot precisely identify the influence of each factor with certainty. In addition, Treasury and other agencies provided via email technical comments related to the draft report's analysis of funding cost differences between large and small bank holding companies. We incorporated these comments into the report, as appropriate. A complete discussion of the agencies' comments and our evaluation is provided in the report. Chairman Brown, Ranking Member Toomey, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the Subcommittee may have. For future contacts regarding this statement, please contact Lawrance L. Evans, Jr. at (202) 512-4802 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other GAO staff who made significant contributions to this statement and the report it is based on include: Karen Tremba, Assistant Director; John Fisher (Analyst-in-Charge); Bethany Benitez; Michael Hoffman; Risto Laboski; Courtney LaFountain; Rob Letzler; Marc Molino; Jason Wildhagen; and Jennifer Schwartz. Other assistance was provided by Abigail Brown; Rudy Chatlos; Stephanie Cheng; and Jose R. Pena. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's Month Year report, entitled Large Bank Holding Companies: Expectations of Government Support, GAO-14-621. While views varied among market participants with whom GAO spoke, many believed that recent regulatory reforms have reduced but not eliminated the likelihood the federal government would prevent the failure of one of the largest bank holding companies. Recent reforms provide regulators with new authority to resolve a large failing bank holding company in an orderly process and require the largest bank holding companies to meet stricter capital and other standards, increasing costs and reducing risks for these firms. In response to reforms, two of three major rating agencies reduced or removed the assumed government support they incorporated into some large bank holding companies' overall credit ratings. Credit rating agencies and large investors cited the new Orderly Liquidation Authority as a key factor influencing their views. While several large investors viewed the resolution process as credible, others cited potential challenges, such as the risk that multiple failures of large firms could destabilize markets. Remaining market expectations of government support can benefit large bank holding companies if they affect investors' and customers' decisions. GAO analyzed the relationship between a bank holding company's size and its funding costs, taking into account a broad set of other factors that can influence funding costs. To inform this analysis and to understand the breadth of methodological approaches and results, GAO reviewed selected studies that estimated funding cost differences between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. 
Studies GAO reviewed generally found that the largest financial institutions had lower funding costs during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller institutions has since declined. However, these empirical analyses contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies. For example, some studies used credit ratings, which provide only an indirect measure of funding costs. GAO's analysis, which addresses some limitations of these studies, suggests that large bank holding companies had lower funding costs than smaller ones during the financial crisis but provides mixed evidence of such advantages in recent years. However, most models suggest that such advantages may have declined or reversed. GAO developed a series of statistical models that estimate the relationship between bank holding companies' bond funding costs and their size or systemic importance, controlling for other drivers of bond funding costs, such as bank holding company credit risk. Key features of GAO's approach include the following: * U.S. Bank Holding Companies: The models focused on U.S. bank holding companies to better understand the relationship between funding costs and size in the context of the U.S. economic and regulatory environment. * Bond Funding Costs: The models used bond yield spreads--the difference between the yield or rate of return on a bond and the yield on a Treasury bond of comparable maturity--to measure funding costs because they are a risk-sensitive measure of what investors charge bank holding companies to borrow.
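The yield spread measure defined above can be illustrated with a minimal sketch: subtract the yield of the Treasury bond closest in maturity from the bond's own yield. The Treasury yields below are hypothetical values chosen for illustration.

```python
# Hypothetical Treasury yields by maturity in years; an actual analysis
# would match each bond to a Treasury of comparable maturity.
treasury_yields = {2: 0.024, 5: 0.031, 10: 0.038}

def yield_spread_bps(bond_yield, bond_maturity_years):
    """Bond yield minus the closest-maturity Treasury yield, in basis points."""
    closest = min(treasury_yields, key=lambda m: abs(m - bond_maturity_years))
    return (bond_yield - treasury_yields[closest]) * 10_000

# A 6-year bond yielding 5.1% is compared against the 5-year Treasury
# yielding 3.1%, giving a 200 basis point spread.
print(round(yield_spread_bps(0.051, 6)))  # -> 200
```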
Each year, OMB and federal agencies work together to determine how much the government plans to spend for IT and how these funds are to be allocated. Federal IT spending has risen to an estimated $65 billion in fiscal year 2008. OMB plays a key role in overseeing the implementation and management of federal IT investments. To improve this oversight, Congress enacted the Clinger-Cohen Act in 1996, expanding the responsibilities delegated to OMB and agencies under the Paperwork Reduction Act. Among other things, Clinger-Cohen requires agency heads, acting through agency chief information officers, to better link their IT planning and investment decisions to program missions and goals and to implement and enforce IT management policies, procedures, standards, and guidelines. The act also requires that agencies engage in capital planning and performance and results-based management. OMB's responsibilities under the act include establishing processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. OMB must also report to Congress on the net program performance benefits achieved as a result of major capital investments in information systems that are made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB developed policy for planning, budgeting, acquisition, and management of federal capital assets. This policy is set forth in OMB Circular A-11 (section 300) and in OMB's Capital Programming Guide (supplement to Part 7 of Circular A-11), which directs agencies to develop, implement, and use a capital programming process to build their capital asset portfolios. 
Among other things, OMB's Capital Programming Guide directs agencies to evaluate and select capital asset investments that will support core mission functions that must be performed by the federal government and demonstrate projected returns on investment that are clearly equal to or better than alternative uses of available public resources; institute performance measures and management processes that monitor actual performance and compare to planned results; and establish oversight mechanisms that require periodic review of operational capital assets to determine how mission requirements might have changed and whether the asset continues to fulfill mission requirements and deliver intended benefits to the agency and customers. To further support the implementation of IT capital planning practices as required by statute and directed in OMB's Capital Programming Guide, we have developed an IT investment management framework that agencies can use in developing a stable and effective capital planning process. Consistent with the statutory focus on selecting, controlling, and evaluating investments, this framework focuses on these processes in relation to IT investments specifically. It is a tool that can be used to determine both the status of an agency's current IT investment management capabilities and the additional steps that are needed to establish more effective processes. Mature and effective management of IT investments can vastly improve government performance and accountability. Without good management, such investments can result in wasteful spending and lost opportunities for improving delivery of services to the public. Only by effectively and efficiently managing their IT resources through a robust investment management process can agencies gain opportunities to make better allocation decisions among many investment alternatives and further leverage their investments. However, the federal government faces enduring IT challenges in this area. 
For example, in January 2004 we reported on mixed results of federal agencies' use of IT investment management practices. Specifically, we reported that although most of the agencies had IT investment boards responsible for defining and implementing the agencies' investment management processes, agencies did not always have important mechanisms in place for these boards to effectively control investments, including decision-making rules for project oversight, early warning mechanisms, and/or requirements that corrective actions for underperforming projects be agreed upon and tracked. Executive-level oversight of project-level management activities provides organizations with increased assurance that each investment will achieve the desired cost, benefit, and schedule results. Accordingly, we made several recommendations to agencies to improve their practices. In previous work using our investment management framework, we reported that the use of IT investment management practices by agencies was mixed. For example, a few agencies that have followed the framework in implementing capital planning processes have made significant improvements. In contrast, however, we and others have continued to identify weaknesses at agencies in many areas, including immature management processes to support both the selection and oversight of major IT investments and the measurement of actual versus expected performance in meeting established performance measures. For example, we recently reported that the Department of Homeland Security and the Department of Treasury did not have the processes in place to effectively select and oversee their major investments. To help ensure that investments of public resources are justified and that public resources are wisely invested, OMB began using its Management Watch List in the President's fiscal year 2004 budget request, as a means to oversee the justification for and planning of agencies' IT investments. 
This list was derived from a detailed review of the investments' Capital Asset Plan and Business Case, also known as the exhibit 300. The exhibit 300 is a reporting mechanism intended to enable an agency to demonstrate to its own management, as well as OMB, that a major project is well planned in that it has employed the disciplines of good project management; developed a strong business case for the investment; and met other Administration priorities in defining the cost, schedule, and performance goals proposed for the investment. We reported in 2005 that OMB analysts evaluate agency exhibit 300s by assigning scores to each exhibit 300 based on guidance presented in OMB Circular A-11. As described in this circular, the scoring of a business case consists of individual scores for 10 categories, as well as a total composite score across all the categories. The 10 categories include project (investment) management, performance-based management systems (including earned value management), life-cycle cost formulation, and support of the President's Management Agenda. Projects are placed on the Management Watch List if they receive low scores (3 or less on a scale from 1 to 5) in the areas of performance goals, performance-based management systems, or security and privacy, or if they receive a low composite score. According to OMB, agencies with weaknesses in these three areas are to submit remediation plans addressing the weaknesses. OMB officials also stated that decisions on follow-up and monitoring of progress are typically made by staff with responsibility for reviewing individual agency budget submissions, depending on the staff's insights into agency operations and objectives. 
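The placement rule described above can be sketched as a small classification function. This is an illustration only: the composite-score cutoff used here (3 or less) is an assumption, since the text does not state OMB's actual composite threshold, and the category names are abbreviated.

```python
# Sketch of the Management Watch List placement rule: a project goes on
# the list if it scores 3 or less (on a 1-5 scale) in performance goals,
# performance-based management systems, or security and privacy, or if
# its composite score is low. The composite cutoff (<= 3) is assumed.
CRITICAL_CATEGORIES = ("performance goals",
                       "performance-based management systems",
                       "security and privacy")

def on_watch_list(scores, composite):
    """scores maps a category name to its 1-5 score; composite is the total."""
    low_critical = any(scores.get(cat, 5) <= 3 for cat in CRITICAL_CATEGORIES)
    return low_critical or composite <= 3

project = {"performance goals": 4,
           "performance-based management systems": 2,
           "security and privacy": 5}
print(on_watch_list(project, composite=4))  # -> True
```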
According to OMB officials, those Management Watch List projects that receive specific follow-up attention receive feedback through the passback process, targeted evaluation of remediation plans designed to address weaknesses, the apportioning of funds so that the use of budgeted dollars was conditional on appropriate remediation plans being in place, and the quarterly e-Gov Scorecards. OMB removes projects from the Management Watch List as agencies remediate the weaknesses identified with these projects' business cases. As originally defined in OMB Circular A-11 and subsequently reiterated in an August 2005 memorandum, high risk projects are those that require special attention from oversight authorities and the highest levels of agency management. These projects are not necessarily "at risk" of failure, but may be on the list because of one or more of the following four reasons: The agency has not consistently demonstrated the ability to manage complex projects. The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency's total IT portfolio. The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. Delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization. Most agencies reported that to identify high risk projects, staff from the Office of the Chief Information Officer compare the criteria against their current portfolio to determine which projects met OMB's definition. They then submit the list to OMB for review. 
According to OMB and agency officials, after the submission of the initial list, examiners at OMB work with individual agencies to identify or remove projects as appropriate. According to most agencies, the final list is then approved by their Chief Information Officer. For the identified high risk projects, beginning September 15, 2005, and quarterly thereafter, Chief Information Officers are to assess, confirm, and document projects' performance. Specifically, agencies are required to determine, for each of their high risk projects, whether the project was meeting one or more of four performance evaluation criteria: establishing baselines with clear cost, schedule, and performance goals; maintaining the project's cost and schedule variances within 10 percent; assigning a qualified project manager; and avoiding duplication by leveraging inter-agency and governmentwide investments. If a high risk project fails to meet any of these four performance evaluation criteria, agencies are instructed to document this shortfall using a standard template provided by OMB and provide this template to oversight authorities (e.g., OMB, agency inspectors general, agency management, and GAO) on request. Upon submission, according to OMB staff, individual analysts review the quarterly performance reports of projects with shortfalls to determine how well the projects are progressing and whether the actions described in the planned improvement efforts are adequate, using other performance data already received on IT projects, such as the e-Gov Scorecards, earned value management data, and the exhibit 300. OMB and federal agencies have identified approximately 227 IT projects-- totaling at least $10.4 billion in expenditures for fiscal year 2008--as being poorly planned, poorly performing, or both. Figure 1 shows the distribution of these projects and their associated dollar values. Each year, OMB places hundreds of projects totaling billions of dollars on the Management Watch List. 
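The cost and schedule variance criterion described above can be checked mechanically: a high risk project has a reportable shortfall on this criterion if either variance exceeds 10 percent of the baseline. The sketch below is illustrative; the parameter names and units are hypothetical, not OMB's reporting template.

```python
# Sketch of the cost/schedule variance criterion for high risk projects:
# a shortfall exists if cost or schedule deviates from the baseline by
# more than 10 percent. Field names and units are hypothetical.
def variance_pct(baseline, actual):
    """Absolute variance as a percentage of the baseline."""
    return abs(actual - baseline) / baseline * 100.0

def variance_shortfall(baseline_cost, actual_cost,
                       baseline_months, actual_months, threshold=10.0):
    return (variance_pct(baseline_cost, actual_cost) > threshold
            or variance_pct(baseline_months, actual_months) > threshold)

# A project 15 percent over budget but on schedule exceeds the threshold.
print(variance_shortfall(100.0, 115.0, 24, 24))  # -> True
```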
Table 1 provides a historical perspective of the number of these projects and their associated budget since OMB started reporting on the Management Watch List in the President's budget request for 2004. The table shows that while the number of projects and their associated budget have generally decreased since then, they increased by 83 projects this year and represent a significant percentage of the total budget. As of July 2007, 136 projects, representing $8.6 billion, still remained on the Management Watch List (see appendix I for the complete list). We determined that 29 of these projects were on the Management Watch List as of September 2006. As of June 2007, when agencies last reported on their high risk projects to OMB, the 24 major agencies identified 438 IT projects as high risk, of which 124 had performance shortfalls collectively totaling about $6.0 billion in funding requested for fiscal year 2008. Table 2 shows that the number of projects, as well as the number of projects with shortfalls, increased this year. OMB attributes this rise to increased management oversight by agencies. The majority of projects were not reported to have had performance shortfalls. In addition, five agencies--the departments of Energy, Housing and Urban Development, Labor, and State, and the National Science Foundation--reported that none of their high risk projects experienced any performance shortfalls. Figure 2 illustrates the number of high risk projects by agency as of June 2007, with and without shortfalls. Agencies reported cost and schedule variances that exceeded 10 percent as the greatest shortfall. This is consistent with what they reported about a year ago, and the distribution of shortfall types is similar to last year. Figure 3 illustrates the reported number and type of performance shortfalls associated with high risk projects. Appendix II identifies the shortfalls associated with each of the poorly performing projects. 
Twenty-two high risk projects have experienced performance shortfalls for the past four quarters (see figure 4). Of these projects, the following six have had shortfalls since the High Risk List was established in September 2005: Department of Homeland Security's (DHS) Secure Border Initiative Net Technology Program, which is expected to provide on-scene agents near real-time information on attempted border crossings by illegal aliens, terrorists, or smugglers; Department of Agriculture's (USDA) Modernize and Innovate the Delivery of Agricultural Systems, which is intended to modernize the delivery of farm program benefits by deploying Internet-based self-service capabilities for customers and eliminating the department's reliance on aging technology and service centers as the sole means of delivering program benefits; Department of Veterans Affairs' (VA) VistA Imaging, which should provide complete online patient data to health care providers, increase clinician productivity, facilitate medical decision-making, and improve quality of care; DHS's Transportation Worker Identification Credentialing, which is to establish a system-wide common secure biometric credential, used by all transportation modes, for personnel requiring unescorted physical and/or logical access to secure areas of the transportation system; Department of Justice's (DOJ) Regional Data Exchange, which is expected to combine and share regional investigative information and provide powerful tools for analyzing the integrated data sets; and VA's Patient Financial Services System, which is expected to create a comprehensive business solution for revenue improvement utilizing improved business practices, commercial software, and enhanced VA clinical applications. Thirty-three projects are on both the Management Watch List and list of high risk projects with shortfalls, meaning that they are both poorly planned and poorly performing. 
They total about $4.1 billion in estimated expenditures for fiscal year 2008. These projects are listed in table 3 below. OMB has taken steps to improve the identification and oversight of the Management Watch List and high risk projects by addressing some of the recommendations we previously made, but additional efforts are needed to more effectively perform these activities and ultimately ensure that potentially billions of taxpayer dollars are not wasted. Specifically, we previously recommended that OMB take action to improve the accuracy and reliability of exhibit 300s and application of the high risk projects criteria, and perform governmentwide tracking and analysis of Management Watch List and high risk project information. While OMB took steps to address our concerns, more can be done. In January 2006, we noted that the underlying support for information provided in the exhibit 300s was often inadequate and that, as a result, the Management Watch List may be undermined by inaccurate and unreliable data. Specifically, we noted that documentation either did not exist or did not fully agree with specific areas of all exhibit 300s; agencies did not always demonstrate that they complied with federal or departmental requirements or policies with regard to management and reporting processes; for example, no exhibit 300 had cost analyses that fully complied with OMB requirements for cost-benefit and cost-effectiveness analyses; and data for actual costs were unreliable because they were not derived from cost-accounting systems with adequate controls; in the absence of such systems, agencies generally derived cost information from ad hoc processes. We recommended, among other things, that OMB direct agencies to improve the accuracy and reliability of exhibit 300 information. 
To address our recommendation, in June 2006, OMB directed agencies to post their exhibit 300s on their websites within two weeks of the release of the President's budget request for fiscal year 2008. While this is a step in the right direction, the accuracy and reliability of exhibit 300 information is still a significant weakness among the 24 major agencies, as evidenced by a March 2007 President's Council on Integrity and Efficiency and Executive Council on Integrity and Efficiency study commissioned by OMB to ascertain the validity of exhibit 300s. Specifically, according to individual agency reports contained within the study, Inspectors General found that the documents supporting agencies' exhibit 300s continue to have accuracy and reliability issues. For example, according to these reports, the Agency for International Development did not maintain the documentation supporting exhibit 300 cost figures. In addition, at the Internal Revenue Service, the exhibit 300s were unreliable because, among other things, project costs were being reported inaccurately and progress on projects in development was measured inaccurately. In June 2006, we noted that OMB did not always consistently apply the criteria for identifying high risk projects. For example, we identified projects that appeared to meet the criteria but that were not designated as high risk. Accordingly, we recommended that OMB apply its high risk criteria consistently. OMB has since designated as high risk the projects that we identified. Further, OMB officials stated that they have worked with agencies to ensure a more consistent application of the high risk criteria. These are positive steps, as they result in more projects receiving the management attention they deserve. However, questions remain as to whether all high risk projects with shortfalls are being reported by agencies. 
For example, we have reported in our high risk series that the Department of Defense's (DOD) efforts to modernize its business systems have been hampered because of weaknesses in practices for (1) developing and using an enterprise architecture, (2) instituting effective investment management processes, and (3) establishing and implementing effective systems acquisition processes. We concluded that the department remains far from where it needs to be to effectively and efficiently manage an undertaking of such size, complexity, and significance. Despite these problems, DOD, which accounts for $31 billion of the government's $65 billion in IT expenditures, only reported three projects as being high risk with shortfalls, representing a total of about $1 million. The dollar value of DOD's three projects represents less than one tenth of one percent of high risk projects with shortfalls. In light of the problems we and others have identified with many of DOD's projects, this appears to be an underestimation. Given the critical nature of high risk projects, it is particularly important to identify early on those that are performing poorly, before their shortfalls become overly costly to address. Finally, to improve the oversight of the Management Watch List projects, we recommended in our April 2005 report that the Director of OMB report to Congress on projects' deficiencies, agencies' progress in addressing risks of major IT investments, and management areas needing attention. In addition, to fully realize the potential benefits of using the Management Watch List, we recommended that OMB use the list as the basis for selecting projects for follow-up, track follow-up activities, and analyze the prioritized list to develop governmentwide and agency assessments of the progress and risks of IT investments, identifying opportunities for continued improvement. We also made similar recommendations to the Director of OMB regarding high risk projects. 
Specifically, we recommended that OMB develop a single aggregate list of high risk projects and their deficiencies and use that list to report to Congress progress made in correcting high risk problems, actions under way, and further actions that may be needed. To its credit, OMB started publicly releasing aggregate lists of the Management Watch List and high risk projects in September 2006, and has been releasing updated versions on a quarterly basis by posting them on its website. While this is a positive step, OMB does not publish the specific reasons that each project is placed on the Management Watch List, nor does it specifically identify why high risk projects are poorly performing, as we have done in appendix II. Providing this information would allow OMB and others to better analyze the reasons projects are poorly planned and poorly performing, take corrective actions, and track these projects on a governmentwide basis. Such information would also help to highlight progress made by agencies or projects, identify management issues that transcend individual agencies, and highlight the root causes of governmentwide issues and trends. Such analysis would be valuable to agencies in planning future IT projects, and could enable OMB to prioritize follow-up actions and ensure that high-priority deficiencies are addressed. In summary, the Management Watch List and high risk projects processes play important roles in improving the management of federal IT investments by helping to identify poorly planned and poorly performing projects that require management attention. As of June 2007, the 24 major agencies had 227 such projects totaling at least $10 billion. OMB has taken steps to improve the identification of these projects, including implementing recommendations related to improving the accuracy of exhibit 300s and the application of the high risk projects criteria. 
However, the number of projects may be understated because issues concerning the accuracy and reliability of the budgetary documents from which the Management Watch List is derived still remain, and high risk projects with shortfalls may not be consistently identified. While OMB can act to further improve the identification and oversight of poorly planned and poorly performing projects, we recognize that agencies must also take action to fulfill their responsibilities in these areas. We have addressed this in previous reports and made related recommendations. Until further improvements are made in the identification and oversight of poorly planned and poorly performing IT projects, potentially billions in taxpayer dollars are at risk of being wasted. If you should have any questions about this testimony, please contact me at (202) 512-9286 or by e-mail at [email protected]. Individuals who made key contributions to this testimony are Sabine Paul, Assistant Director; Neil Doherty; Amos Tevelow; Kevin Walsh; and Eric Winter. The following provides additional detail on the investments comprising OMB's Management Watch List as of July 2007. Under the Clinger-Cohen Act of 1996, agencies are required to submit business plans for IT investments to OMB. If an agency's investment plan contains one or more planning weaknesses, it is placed on OMB's Management Watch List and targeted for follow-up action to correct potential problems prior to execution. We estimated the fiscal year 2008 request based on the data in the Report on IT Spending for Fiscal Years 2006, 2007, and 2008 (generally referred to as exhibit 53), and data provided by agencies. The following provides additional detail on the high risk projects that have performance shortfalls as of June 2007. We estimated the fiscal year 2008 request based on the data in the Report on IT Spending for Fiscal Years 2006, 2007, and 2008 (generally referred to as exhibit 53), and data provided by agencies.
The Office of Management and Budget (OMB) plays a key role in overseeing federal information technology (IT) investments. The Clinger-Cohen Act, among other things, requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by agencies and to report to Congress on the net program performance benefits achieved as a result of these investments. OMB has developed several processes to help carry out its role. For example, OMB began using a Management Watch List several years ago as a means of identifying poorly planned projects based on its evaluation of agencies' funding justifications for major projects, known as exhibit 300s. In addition, in August 2005, OMB established a process for agencies to identify high risk projects and to report on those that are performing poorly. GAO testified last year on the Management Watch List and high risk projects, and on GAO's recommendations to improve these processes. GAO was asked to (1) provide an update on the Management Watch List and high risk projects and (2) identify OMB's efforts to improve the identification and oversight of these projects. In preparing this testimony, GAO summarized its previous reports on initiatives for improving the management of federal IT investments. GAO also analyzed current Management Watch List and high risk project information. OMB and federal agencies have identified approximately 227 IT projects--totaling at least $10.4 billion in expenditures for fiscal year 2008--as being poorly planned (on the Management Watch List), poorly performing (on the High Risk List with performance shortfalls), or both. OMB has taken steps to improve the identification and oversight of the Management Watch List and High Risk projects by addressing recommendations previously made by GAO; however, additional efforts are needed to more effectively perform these activities. 
Specifically, GAO previously recommended that OMB take action to improve the accuracy and reliability of exhibit 300s and the consistent application of the high risk projects criteria, and perform governmentwide tracking and analysis of Management Watch List and high risk project information. In response to these recommendations, OMB, for example, started publicly releasing aggregate lists of Management Watch List and high risk projects by agency in September 2006 and has been updating them since then on a quarterly basis. However, OMB does not publish the reasons for placing projects on the Management Watch List, nor does it specifically identify why high risk projects are poorly performing. Providing this information would allow OMB and others to better analyze the reasons projects are poorly planned and performing, take corrective actions, and track these projects on a governmentwide basis. Such information would also help to highlight progress made by agencies or projects, identify management issues that transcend individual agencies, and highlight the root causes of governmentwide issues and trends. Until OMB makes further improvements in the identification and oversight of poorly planned and poorly performing IT projects, potentially billions in taxpayer dollars are at risk of being wasted.
VA's mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. Over time, the use of IT has become increasingly crucial to the department's efforts to provide such benefits and services. For example, the department relies on its systems for medical information and records for veterans, as well as for processing benefit claims, including compensation and pension and education benefits. In reporting on VA's IT management over the past several years, we have highlighted challenges that the department has faced in achieving its "One VA" vision, including that information systems and services were highly decentralized and that its administrations controlled a majority of the IT budget. For example, we noted that, according to an October 2005 memorandum from the former CIO to the Secretary of Veterans Affairs, the CIO had direct control over only 3 percent of the department's IT budget and 6 percent of the department's IT personnel. In addition, in the department's fiscal year 2006 IT budget request, the Veterans Health Administration was identified to receive 88 percent of the requested funding, while the department was identified to receive only 4 percent. We have previously pointed out that, given the department's large IT funding and decentralized management structure, it was crucial for the CIO to ensure that well-established and integrated processes for leading, managing, and controlling investments were followed throughout the department. Further, a contractor's assessment of VA's IT organizational alignment, issued in February 2005, noted the lack of control for how and when money is spent. The assessment found that project managers within the administrations were able to shift money as they wanted to build and operate individual projects. 
In addition, according to the assessment, the focus of department-level management was only on reporting expenditures to the Office of Management and Budget and Congress, rather than on managing these expenditures within the department. The department officially began its initiative to provide the CIO with greater authority over the department's IT in October 2005. At that time, the Secretary of Veterans Affairs issued an executive decision memorandum that granted approval for the development of a new centralized management structure for the department. According to VA, its goals in moving to centralized management included having better overall fiscal discipline over the budget. In February 2007, the Secretary approved the department's new management structure. In this new structure, the Assistant Secretary for Information and Technology serves as VA's CIO and is supported by a principal deputy assistant secretary and five deputy assistant secretaries--senior leadership positions created to assist the CIO in overseeing functions such as cyber security, IT portfolio management, and systems development and operations. In April 2007, the Secretary approved a governance plan that is intended to enable the Office of Information and Technology, under the leadership of the CIO, to centralize its decision making. The plan describes the relationship between IT and departmental governance and the approach the department intends to take to enhance governance and realize more cost-effective use of IT resources and assets. The department also made permanent the transfer of its entire IT workforce under the CIO, consisting of approximately 6,000 personnel from the administrations. In June 2007, we reported on the department's plans for realigning the management of its IT program and establishing centralized control of its IT budget within the Office of Information and Technology. 
We pointed out that the department's realignment plans included elements of several factors that we identified as critical to a successful transition, but that additional actions could increase assurance that the realignment would be completed successfully. Specifically, we reported that the department had ensured commitment from its top leadership and that, among other critical actions, it was establishing a governance structure to manage resources. However, at that time, VA had not updated its strategic plan to reflect the new organization. In addition, we noted that the department had planned to take action by July 2008 to create the necessary management processes to realize a centralized IT management structure. In testimony before the House Veterans' Affairs Committee last September, however, we pointed out that the department had not kept pace with its schedule for implementing the new management processes. As part of its IT realignment, VA has taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department's IT budget and resources. Within the new centralized management structure, the CIO is responsible for ensuring that there are adequate controls over the department's IT budget and for overseeing capital planning and execution. These responsibilities are consistent with the Clinger-Cohen Act of 1996, which requires federal agencies to develop processes for the selection, control, and evaluation of major systems initiatives. 
In this regard, the department has (1) designated organizations with specific roles and responsibilities for controlling the budget to report directly to the CIO; (2) implemented an IT governance structure that assigns budget oversight responsibilities to specific governance boards; (3) finalized an IT strategic plan to guide, manage, and implement its operations and investments; (4) completed multi-year budget guidance to improve management of its IT; and (5) initiated the implementation of critical management processes. However, while VA has taken these important steps toward establishing control of the department's IT, it remains too early to assess their overall impact because most of the actions taken have only recently become operational or have not yet been fully implemented. Thus, their effectiveness in ensuring accountability for the resources and budget has not yet been clearly established. As one important step, two deputy assistant secretaries under the CIO have been assigned responsibility for managing and controlling different aspects of the IT budget. Specifically, the Deputy Assistant Secretary for Information Technology Enterprise Strategy, Policy, Plans, and Programs is responsible for development of the budget and the Deputy Assistant Secretary for Information Technology Resource Management is responsible for overseeing budget execution, which includes tracking actual expenditures against the budget. Initially, the deputy assistant secretaries have served as a conduit for information to be used by the governance boards. As a second step, the department has established and activated three governance boards to facilitate budget oversight and management of its investments. 
The Business Needs and Investment Board; the Planning, Architecture, Technology and Services Board; and the Information Technology Leadership Board have begun providing oversight to ensure that investments align with the department's strategic plan and that business and budget requirements for ongoing and new initiatives meet user demands. One of the main functions of the boards is to designate funding according to the needs and requirements of the administrations and staff offices. Each board meets monthly, and sometimes more frequently, as the need arises during the budget development phase. The first involvement of the boards in VA's budget process began with their participation in formulating the fiscal year 2009 budget. As part of the budget formulation process, in May 2007 the Business Needs and Investment Board conducted its first meeting in which it evaluated the list of business projects being proposed in the budget using the department's Exhibit 300s for fiscal year 2009, and made departmentwide allocation recommendations. Then in June, these recommendations were passed on to the Planning, Architecture, Technology, and Services Board, which proposed a new structure for the fiscal year 2009 budget request. The recommended structure was to provide visibility to important initiatives and enable better communication of performance results and outcomes. In late June, based on input from the aforementioned boards, the Information Technology Leadership Board made recommendations to department decision makers for funding the major categories of IT projects. In July 2007, following its work on the fiscal year 2009 budget formulation, the boards then began monitoring fiscal year 2008 budget execution. 
However, according to Office of Information and Technology officials, with the governance boards' first involvement in budget oversight having only recently begun (in May 2007), and with their activities to date being primarily focused on formulation of the fiscal year 2009 budget and execution of the fiscal year 2008 budget, none of the boards has yet been involved in all stages of the budget formulation and execution processes. Thus, they have not yet fully established their effectiveness in helping to ensure overall accountability for the department's IT appropriations. In addition, the Office of Information and Technology has not yet standardized the criteria that the boards are to use in reviewing, selecting, and assessing investments. These criteria are planned to be completed by the end of fiscal year 2008 and to be used as part of the fiscal year 2010 budget discussions. Office of Information and Technology officials stated that, in response to operational experience with the 2009 budget formulation and 2008 budget execution, the department plans to further enhance the governance structure. For example, the Office of Information and Technology found that the boards' responsibilities needed to be more clearly defined in the IT governance plan to avoid confusion in roles. That is, one board (the Business Needs and Investment Board) was involved in the budget formulation for fiscal year 2009, but budget formulation is also the responsibility of the Deputy Assistant Secretary for Information Technology Resource Management, who is not a member of this board. According to the Principal Deputy Assistant Secretary for Information and Technology, the department is planning to update its governance plan by September 2008 to include more specificity on the role of the governance boards in the department's budget formulation process. Such an update could further improve the structure's effectiveness. 
In addition, as part of improving the governance strategy, the department has set targets by which the Planning, Architecture, Technology, and Services Board is to review and make departmentwide recommendations for VA's portfolio of investments. These targets call for the board to review major IT projects included in the fiscal year budgets. For example, the board is expected to review 10 percent for fiscal year 2008, 50 percent for fiscal year 2009, and 100 percent for fiscal year 2011. As a third step in establishing oversight, in December 2007, VA finalized an IT strategic plan to guide, manage, and implement its operations and investments. This plan (for fiscal years 2006-2011) aligns Office of Information and Technology goals, priorities, and initiatives with the priorities of the Secretary of Veterans Affairs, as identified in the VA strategic plan for fiscal years 2006-2011. In addition, within the plan, the IT strategic goals are aligned with the CIO's IT priorities, as well as with specific initiatives and performance measures. This alignment frames the outcomes that IT executives and managers are expected to meet when delivering services and solutions to veterans and their dependents. Further, the plan includes a performance accountability matrix that highlights the alignment of the goals, priorities, initiatives, and performance measures, and an expanded version of the matrix designates specific entities within the Office of Information and Technology that are accountable for implementation of each initiative. The matrix also establishes goals and time lines through fiscal year 2011, which should enable VA to track progress, make midcourse corrections, and sustain progress toward the realignment. As we previously reported, it is essential to establish and track implementation goals and establish a timeline to pinpoint performance shortfalls and gaps and suggest midcourse corrections. 
As a fourth step, the department has completed multi-year budget guidance to improve management of its IT portfolio. In December 2007, the CIO disseminated this guidance for the fiscal years 2010 through 2012 budgets. The purpose of the guidance is to provide general direction for proposing comprehensive multi-year IT planning proposals for centralized review and action. The process called for project managers to submit standardized concept papers and other review documentation in December 2007 for review in the January to March 2008 time frame, to decide which projects would be included in the fiscal year 2010 portfolio of IT projects. The new process is to add rigor and uniformity to the department's investment approach and allow the investments to be consistently evaluated for alignment with the department's strategic planning and priorities and the enterprise architecture. According to VA officials, this planning approach is expected to allow for reviewing proposals across the department and for identifying opportunities to maximize investments in IT. Nevertheless, although the multi-year programming guidance holds promise for obtaining better information for portfolio management, the guidance has not been fully implemented because it is applicable to future budgets (for fiscal years 2010 through 2012). As a result, it is too early to determine VA's effectiveness in implementing this guidance, and ultimately, its impact on the department's IT portfolio management. Finally, the department has begun developing new management processes to establish the CIO's control over the IT budget. The department's December 2007 IT strategic plan identifies three processes as high priorities for establishing the foundation of the budget functions: project management, portfolio management, and service level agreements. 
However, while the department had originally stated that its new management processes would be implemented by July 2008, the IT strategic plan indicates that key elements of these processes are not expected to be completed until at least fiscal year 2011. Specifically, the plan states that the project and portfolio management processes are to be completed by fiscal year 2011, and does not assign a completion date for the service level agreement process. As our previous report noted, it is crucial for the CIO to ensure that well-established and integrated processes are in place for leading, managing, and controlling VA's IT resources. The absence of such processes increases the risk to the department's ability to achieve a solid and sustainable management structure that ensures effective IT accountability and oversight. Appendix I provides a timeline of the various actions that the department has undertaken and planned for the realignment. In summary, while the department has made progress with implementing its centralized IT management approach, effective completion of its realignment and implementation of its improved processes is essential to ensuring that VA has a solid and sustainable approach to managing its IT investments. Because most of the actions taken by VA have only recently become operational, it is too early to assess their overall impact. Until the department carries out its plans to add rigor and uniformity to its investment approach and establishes a comprehensive set of improved management processes, the department may not achieve a sustainable and effective approach to managing its IT investments. Mr. Chairman and members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have at this time. For more information about this testimony, please contact Valerie C. Melvin at (202) 512-6304 or by e-mail at [email protected]. 
Key contributors to this testimony were Barbara Oliver, Assistant Director; Nancy Glover; David Hong; Scott Pettis; and J. Michael Resser.
The use of information technology (IT) is crucial to the Department of Veterans Affairs' (VA) mission to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation. In this regard, the department's fiscal year 2009 budget proposal includes about $2.4 billion to support IT development, operations, and maintenance. VA has, however, experienced challenges in managing its IT projects and initiatives, including cost overruns, schedule slippages, and performance problems. In an effort to confront these challenges, the department is undertaking a realignment to centralize its IT management structure. This testimony summarizes the department's actions to realign its management structure to provide greater authority and accountability over its IT budget and resources and the impact of these actions to date. In developing this testimony, GAO reviewed previous work on the department's realignment and related budget issues, analyzed pertinent documentation, and interviewed VA officials to determine the current status and impact of the department's efforts to centralize the management of its IT budget and operations. As part of its IT realignment, VA has taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department's IT budget and resources. For example, the department's chief information officer (CIO) now has responsibility for ensuring that there are controls over the budget and for overseeing all capital planning and execution, and has designated leadership to assist in overseeing functions such as portfolio management and IT operations. In addition, the department has established and activated three governance boards to facilitate budget oversight and management of its investments. 
Further, VA has approved an IT strategic plan that aligns with priorities identified in the department's strategic plan and has provided multi-year budget guidance to achieve a more disciplined approach for future budget formulation and execution. While these steps are critical to establishing control of the department's IT, it remains too early to assess their overall impact because most of the actions taken have only recently become operational or have not been fully implemented. Thus, their effectiveness in ensuring accountability for the resources and budget has not yet been clearly established. For example, according to Office of Information and Technology officials, the governance boards' first involvement in budget oversight only recently began (in May 2007) with activities to date focused primarily on formulation of the fiscal year 2009 budget and on execution of the fiscal year 2008 budget. Thus, none of the boards has yet been involved in all aspects of the budget formulation and execution processes and, as a result, their ability to help ensure overall accountability for the department's IT appropriations has not yet been fully established. In addition, because the multi-year programming guidance is applicable to future budgets (for fiscal years 2010 through 2012), it is too early to determine VA's effectiveness in implementing this guidance. Further, VA is in the initial stages of developing management processes that are critical to centralizing its control over the budget. However, while the department had originally stated that the processes would be implemented by July 2008, it now indicates that implementation across the department will not be completed until at least 2011. Until VA fully institutes its oversight measures and management processes, it risks not realizing their contributions to, and impact on, improved IT oversight and accountability within the department.
The Competition in Contracting Act of 1984 (CICA), 41 U.S.C. section 253, and the implementing FAR section 6.302 require full and open competition for government contracts except in a limited number of statutorily prescribed situations. One situation in which agencies may use other than full and open competition occurs when the agency's need is of such unusual and compelling urgency that the government would be seriously injured unless the agency is permitted to limit the number of sources from which it solicits proposals. Even when an unusual and compelling urgency exists, the agency is required to request offers from as many potential sources as is practicable under the circumstances. 41 U.S.C. section 253(e); FAR section 6.302-2(c)(2). This means that an agency may limit a procurement to one firm only when the agency reasonably believes that only that firm can perform the work in the available time. Based on our investigation, we believe there was insufficient urgency to limit competition and that the sole-source contract to Sato & Associates was not proper. The Treasury OIG violated the applicable statute and regulation by failing to request offers from as many potential sources as was practicable. Ms. Lau knew of three other former IGs who had performed similar management reviews. Indeed, Mr. Sato hired two of the former IGs to assist him with the Treasury OIG review. Further, the cost of that review, over $90,700, appears artificially high. After Mr. Sato submitted a similarly priced proposal to the Department of the Interior, and after a full and open competition, Interior awarded him a similar contract at a final cost of about $28,900. Prior to being confirmed as Treasury IG on October 7, 1994, Ms. Lau decided that a management review of the OIG would help her meet a number of challenges in her new job. In November 1994, Ms. Lau contacted Mr. Sato to request that he conduct the management review. According to Ms. Lau, she first met Mr. 
Sato when she was a regional official and Mr. Sato a national official of the Association of Government Accountants; a professional relationship developed over the years through functions related to that association. Mr. Sato had written to the White House Personnel Office in May 1993 recommending Ms. Lau for an appointment to an IG position. In November 1994, Ms. Lau talked with senior OIG managers about a management review and advised them that she knew to whom she wanted to award a contract. In early December 1994, she contacted Treasury's PSD to request assistance in awarding a management review contract. The contracting officer provided her with an explanation of the requirements to justify a sole-source contract. Thereafter, Ms. Lau told PSD that she wanted Sato & Associates to do the work. The Treasury contracting officer subsequently prepared a Justification for Other Than Full and Open Competition, also known as the justification and approval (J&A) document. On December 12, 1994, PSD approved the J&A, authorizing a sole-source award to Sato & Associates. When we asked the contracting officer why she did not attempt to identify other individuals or companies that could perform the contract, she stated that Ms. Lau had told her that Mr. Sato "had unique capabilities which would preclude the award of a management studies contract to anyone else." On January 9, 1995, Treasury's PSD awarded a contract at the request of the Treasury OIG to Sato & Associates to perform a management study of the Treasury OIG. The contract specified that the contractor was to produce a report within 13 weeks, which was to focus on the most efficient methods of improving the organization and functioning of the operations of the OIG. Specific areas to be reviewed included office management procedures and practice, staffing, correspondence, automation, and personnel management. The contract was awarded without full and open competition on the basis of unusual and compelling urgency. 
The J&A for the Sato contract provided that "[t]he Government would be injured if the Inspector General is unable to quickly assess any needs for management reform and make any required changes that would ensure that she receives the appropriate staff support for the implementation of her policies." According to the contracting officer, when she questioned Ms. Lau about the justification for the Sato contract and whether an urgent need existed, Ms. Lau stated that she did not want to divulge too much of "the internal goings-on" in the Inspector General's Office to the contracting officer. Ms. Lau merely assured the contracting officer that the need was urgent. Explaining her view of the urgency to us, Ms. Lau stated: "I was aware that the office had some major challenges to meet, that we needed to marshal the resources to do the financial audits required by the Government Management Reform Act. That we had some major work to do in terms of identifying the resources to do so. In addition, as the newly appointed head of the Office of Inspector General, I had a 120 day period before I would be able to make any major changes or reassignments of senior executives, and that I wanted to do that as early as possible. I knew I was going into an office with some issues that were getting scrutiny from Congress as well as others. I believed that I needed to have a trusted and experienced group of professionals come in to assist me to do that. I definitely felt that there was a compelling and urgent need, if you want to use that terminology, because I wanted to ensure that I had, for example, some of the major changes that were necessary to meet the CFO audit by the time the next cycle came around, which in Government fiscal years, the cycle ends September 30th, and so the financial audits that would be required under that would have to be planned and conducted within that time frame." 
Other than full and open competition is permitted when the agency has an unusual and compelling urgency such that full competition would seriously injure the government's interest. We recognize that the challenges Ms. Lau believed she faced and her express desire to make management changes and develop strategies to deal with various audit requirements as soon as possible after taking office provide some support for the OIG's urgency determination. On the other hand, we are not aware of facts establishing that Ms. Lau's ability to perform her duties would have been seriously impaired had the procurement of a consultant to perform the management study been delayed by a few months in order to obtain full and open competition. On balance, we believe that there was insufficient urgency to limit competition. It is clear, however, that irrespective of whether it would have been proper to limit competition, issuance of a sole-source contract to Sato & Associates was not proper. As discussed above, unusual and compelling urgency does not relieve an agency from the obligation to seek competition. An agency is required to request offers from as many potential sources as is practicable under the circumstances. It may limit the procurement to only one firm if it reasonably believes that only that firm can perform the work in the available time. 41 U.S.C. section 253(c)(1). The J&A stated that Sato & Associates had a predominant capability to meet the Department of Treasury's needs. However, Ms. Lau stated to us that she knew at the time that former Inspectors General Charles Dempsey, Brian Hyland, and Richard Kusserow had been awarded contracts for management reviews. We interviewed two of the three former Inspectors General--Messrs. Dempsey and Hyland--that Ms. Lau knew had done management reviews. Both stated that they could have met the IG's urgent time frame to perform the contract. In fact, they were hired by Mr. 
Sato to work on the Treasury OIG contract, performing as consultants. We are aware of no reason why it was impractical for the agency to have requested offers from at least the three other known sources for the work Ms. Lau needed. Nor are we aware of any reason why Sato & Associates was the only firm that could have performed that work in the available time. In fact, Mr. Sato reported to us that he had never performed a management review, while, as Ms. Lau knew, Messrs. Dempsey, Hyland, and Kusserow had done so. Consequently, we conclude that the agency acted in violation of 41 U.S.C. section 253(e) and FAR section 6.302-2(c)(2) by failing to request offers from other potential sources. The contract to Sato & Associates was awarded at a firm fixed price of $88,566, which included estimated travel and per diem costs of $15,296. The contract also contained an unpriced time-and-materials option to assist in implementing recommendations made in the contract's final report. A second modification to the contract exercised that option and raised the projected cost an estimated $24,760, for a total estimated contract cost of $113,326. The actual amount billed to the government by Mr. Sato for the fixed-price contract and the time-and-materials option totaled $90,776. Federal procurement policy seeks to ensure that the government pays fair and reasonable prices for the supplies and services procured by relying on the competitive marketplace wherever practical. We believe that the lack of competition for the award of the Treasury OIG management study may have been the reason for an artificially high price on the Sato & Associates contract. On February 25, 1995, Mr. Sato submitted an unsolicited proposal for $91,012 to the Department of the Interior's OIG for a contract similar to his Treasury contract. Rather than award a contract to Mr. Sato based on this proposal, the Department of the Interior conducted a full and open competition. 
In June 1995, Interior awarded a management study contract to Sato & Associates for approximately $62,000 less than the offer in Mr. Sato's unsolicited proposal. The contract's final cost was $28,920. Our review of both management study contracts shows that they are similar and that any dissimilarity does not explain a nearly threefold higher cost of the Treasury contract over the Interior contract. The Treasury and Interior contracts contained three identical objectives that the contractor was to focus on in conducting the review and making recommendations. They were to

"a. improve the day-to-day management of the Office of Inspector General

"b. optimize management techniques and approaches

"c. enhance the efficiency . . . productivity of the . . . [OIG]."

The proposals and final reports submitted by the contractor were substantially the same for both jobs. Mr. Sato's final report for Treasury included 30 recommendations; his Interior report had 26 recommendations. Eighteen of the recommendations in both reports were substantially the same. Messrs. Dempsey and Hyland worked with Mr. Sato on both the Interior OIG and Treasury OIG contracts. Mr. Hyland stated to us that the scope of work on the Interior contract was basically the same as that on the Treasury contract. According to Mr. Dempsey, although he conducted more interviews at Treasury than at Interior, the Treasury contract was worth no more than $40,000, adding that he and Mr. Hyland could have done "this job in 60 days at $40,000." Ms. Lau told us that prior to her October 1994 confirmation she had learned that OIG suffered severe morale and diversity problems. In the spring of 1995, she requested OPM to conduct a workplace effectiveness study of the OIG. The purpose of the resulting OPM report was to provide the OIG with the necessary information on employee attitudes to assist it in its efforts to remove obstacles to workplace effectiveness. When Ms.
Lau made that request, she had anticipated contracting with OPM to develop an implementation plan based on the problems identified in the initial study. However, in April 1995, OPM explained that it was unable to do any follow-on work because of reorganization and downsizing. Instead, in July 1995, OPM provided Treasury OIG a list of 12 consultants who were capable of doing the follow-on work. On July 12, 1995, Ms. Lau's staff gave her a list of 14 possible consultants to perform the follow-on work--OPM's list of 12 and 2 others with whom the staff were familiar. Ms. Lau reviewed the list, added two names, and instructed her special assistant to invite bids from at least the six individuals she had identified on the list. On August 17, 1995, OPM conducted a preliminary briefing with senior OIG staff concerning the nature of the OIG problems. Thereafter, Ms. Lau told the Treasury Procurement Services Division (PSD) that an urgent need existed to hire a contractor to perform the follow-on work. She wanted the contract awarded before the annual OIG managers' meeting scheduled for September 14, 1995, to prove to her managers that she intended to fix the problems identified in the OPM study. (The final report was furnished to the OIG on September 30, 1995; it reported that OIG suffered from a lack of communication with its employees, severe diversity problems, and a lack of trust employees had toward management.) OIG staff followed up with the six consultants identified by Ms. Lau. The staff were unable to contact one consultant, and another consultant could not provide a preliminary proposal by August 30, 1995. With respect to the remaining four consultants, OIG staff met with each one to orally describe the agency's needs and request written proposals. Following receipt of the proposals and oral presentations by the offerors, two OIG officials selected Kathie M. Libby, doing business as KLS, a consultant from OPM's list, as the successful contractor.
Although one OIG official told us that the evaluation criteria used for evaluating the proposals were based on the OPM recommendations, the other OIG official involved in the selection stated that the selection was based only on a "gut instinct" that KLS would provide a "good fit" with OIG and could do the work. Ms. Lau concurred with the selection. On September 12, 1995, a time-and-materials contract was awarded to KLS. The original term of the contract was from date of award (Sept. 12, 1995) to September 30, 1996. The contract, among other things, called for the contractor to attend the September 14, 1995, OIG conference; review and analyze the OPM survey results; and provide assistance to managers and staff on reaching the goals identified by OPM in its study. It was expected that in the beginning stages of contract performance, KLS would meet with OIG employees weekly, if not daily. Given the complexity of the issues and the desire for lasting improvements, OIG anticipated that KLS's services would be required for as long as 1 year, with the services expected to be provided on an "on-call" basis during the final stages of the contract. The agency justified limiting the competition on the basis of unusual and compelling urgency. The J&A provided as follows: "It is imperative that the services begin no later than September 11, 1995, in order to have the consultants provide a briefing to managers attending the September 14, 1995, OIG managers conference." This determination reflected Ms. Lau's concern that while similar management studies had been conducted in the past, historically there had been no follow-through on the studies' recommendations. It also reflected her desire to show the OIG managers continuity between the OPM survey results and the follow-up work.
To that end, the J&A noted that it was imperative that the employees view the change process to be implemented by the consultants as an on-going process rather than a series of "finger in the dike" actions. Based on the results of our investigation, we conclude that the decision to limit the competition was not reasonable. As explained previously, contracting through other than full and open competition is permitted when the agency has an unusual and compelling urgency such that full competition would seriously injure the government's interest. The agency's urgency determination was based upon Ms. Lau's desire to have a management consultant provide a briefing at a management conference to be held a few days after contract award. The KLS consultants did attend the management conference, but they were present for the limited purpose of introducing themselves to the OIG staff and informing them that KLS would work with them to implement the OPM study recommendations. Little else was possible since, although OIG staff had received preliminary results from the OPM study in August 1995, Ms. Libby informed us that it was not until mid-October 1995, well after the OIG management conference, that the KLS consultants received the study results and began work on the contract. We recognize the importance of Ms. Lau's desire for her managers to know that she intended to implement the OPM study recommendations. However, we do not believe Ms. Lau's ability to convey that message at the management conference and to correct the problems identified in the OPM study would have been seriously impaired had the announcement of the actual consultant been delayed by a few months in order to conduct a full and open competition. Following discussion at the conference of the OPM study, Ms. Lau could have announced that the agency was going to employ a contractor with expertise in the field to perform follow-on work on the OPM study and that the acquisition process would begin as soon as practicable.
The announcement of her plans, an expeditious initiation of the acquisition process, and notification of her staff about the contract award should have been sufficient to assure her employees that Ms. Lau was serious about addressing the diversity and morale problems. When first awarded, the KLS contract had an estimated level of effort of $85,850. The original term of the contract was 1 year. By November 1, 1996, four modifications had increased the contract price to $345,050 (see table 1). Modification 5 extended the contract through September 30, 1997, at no additional cost. Federal procurement law requires that an agency conduct a separate procurement when it wishes to acquire services that are beyond the scope of an existing contract. A matter exceeds the scope of the contract when it is materially different from the original contract for which the competition was held. The question of whether a material difference exists is resolved by considering such factors as the extent of any changes in the type of work, the performance period, and costs between the contract as awarded and as modified, as well as whether potential bidders reasonably would have anticipated the modification. In our view, the largest modification (Modification 4) materially deviated from the original contract's scope of work and should have been the subject of a separate procurement action. Modification 4 increased the contract price by $148,600 and extended the contract period of performance by 6 months. About half of the work under this modification was the same type of work that had been performed under the original contract; however, the other half was beyond the contract's scope of work and would not reasonably have been anticipated by potential bidders. It involved revising the OIG's performance appraisal system. 
Although the OPM study referenced employee concerns with the OIG performance appraisal system, nothing in the contract called for the contractor to work with OIG to modify that system. Ms. Libby herself stated that Modification 4 significantly changed the original scope and contract requirements and that she was surprised competition was not held for this work. In our view, this modification was beyond the contract's scope of work and would not have been appropriate even if the OIG could have justified its urgency determination for the original procurement. In addition to legal improprieties in the manner in which the agency awarded and assigned tasks under the contract, we found a pattern of careless management in the procurement process and in oversight of performance under the contract. We believe such careless management could have contributed to an increased cost for the work performed under the contract. Good procurement planning is essential to identifying an agency's needs in a timely manner and contributes to ensuring that the agency receives a reasonable price for the work. Little or no procurement planning took place prior to making the award here. Although proposals were solicited to do follow-on work relating to recommendations from an OPM study on diversity and workplace morale, the OIG had not received the OPM study and had only been briefed on the preliminary findings at the time of the solicitation. The OIG therefore did not have sufficient information to adequately identify its needs and clearly articulate a set of goals for the change process to be implemented. Furthermore, OIG did not prepare a written solicitation, including a statement of work. One important purpose of a written statement of work is to communicate the government's requirements to prospective contractors by describing its needs and establishing time frames for deliverables. 
The OIG instead relied upon oral communications and failed to effectively communicate with the consultants from whom it solicited proposals. Had the OIG waited until it received the OPM report, carefully analyzed OPM's recommendations, determined what it needed, and adequately communicated these needs in a written solicitation, we believe the OIG would have received a better proposal initially, and one that may have been at a lower overall price. In this regard, Ms. Libby explained to us that the OIG had not specifically identified to her its needs and that she had misunderstood the work to be performed as explained in her initial telephone conversation with the OIG. Her proposal was based on her belief that the OIG already had management task forces or employee groups studying what changes were needed to address the issues raised in the OPM study and that KLS was to serve only in an advisory capacity to those working groups. However, soon after conducting her initial briefings, she learned that this was not the case and that the work that needed to be done was different from what she believed when she presented her proposal. As a result, shortly after she began work, Ms. Libby informed OIG that more work was necessary under the contract than she had originally envisioned. This led to the first three modifications under the contract. Modification 1 was issued soon after the contract was awarded. It called for KLS to design and conduct briefings with OIG staff both in headquarters and in the field, adding $30,800 to the costs of the original contract. Modification 2 also increased the level of effort and added $78,400 to the contract. According to a memorandum from the contracting officer, this modification was necessary because KLS's technical proposal had suggested the establishment of one steering group whereas additional groups were needed. The modification also significantly increased the training hours to be expended by KLS.
Modification 3 resulted from the need to increase the amount of "other direct costs" to allow for travel and material costs for KLS to contribute to the 1996 OIG managers' conference. Although each of these three modifications was within the scope of work contemplated by the initial contract, this increased work was apparently necessary because OIG had not adequately determined its requirements at the beginning of the procurement process and conveyed them to KLS. Had the agency adequately planned for the procurement and identified its needs, this work could have been included in the original contract and the modifications would not have been required. Similarly, had the OIG properly analyzed the OPM recommendations, it could have determined whether revision of the performance appraisal system should have been included in the scope of the original contract or the work procured separately--thus eliminating Modification 4. Furthermore, had the OIG determined the nature of the work involved in revising the performance appraisal system, specific deliverables and time frames for that revision could have been established. None of this was done in Modification 4, which merely stated that the modification was "to complete change process transition to include establishing a permanent self-sustaining advisory team, work with in-house committees on complex systems changes, and to establish procedures which will withstand changes in senior management personnel." An OIG official told us that revision to the performance appraisal process had been on-going for 2 years and that the revisions to the system had still not been completed as of June 1997. We also identified management deficiencies in oversight of the work performed under the contract. In several instances, KLS performed and billed for work that was not included in the contract statement of work.
As stated previously, pursuant to Modification 4, KLS was authorized to make revisions to the OIG performance appraisal system. However, prior to this modification, one of KLS's employees performed this type of service, working with employee groups to address generic critical job elements and standards, rating levels, and an incentive award system to complement the performance appraisal system. Furthermore, the OIG official responsible for authorizing payment for work performed under the contract told us that she did not verify that any work had been performed under the contract prior to authorizing payment. She also told us that she did not determine whether documentation for hotel and transportation costs claimed by KLS had been received even though she authorized payment for these travel expenses. Allegations concerning IG Lau's trips to California suggested that she had used these trips, at taxpayers' expense, to visit her mother, a resident of the San Francisco Bay area. A review of Ms. Lau's travel vouchers revealed that she had made 22 trips between September 1994 and February 1997 (30 months)--5 to California, of which 3 included stops in San Francisco. During the three trips that included San Francisco, Ms. Lau took a total of 9 days off. During these 9 days, she charged no per diem or expense to Treasury. Her travel to California, including the San Francisco area, was scheduled for work-related reasons. See table 2. We conducted our investigation from May 13 to October 8, 1997, in Washington, D.C., and Seattle, Washington. We interviewed Treasury officials, including current and former OIG officials, and contractors and staff involved in the two procurements discussed in this report. We reviewed pertinent government regulations, OIG contract files, OIG contracting policies and procedures, and Interior OIG documents concerning Sato & Associates' review of its operation. We also reviewed Ms. Lau's financial disclosure statements, travel vouchers, and telephone logs.
Finally, we reviewed prior GAO contracting decisions relevant to the subject of our investigation. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to interested congressional committees; the Secretary of the Treasury; and the Inspector General, Department of the Treasury. We will also make copies available to others on request. If you have any questions concerning our investigation, please contact me or Assistant Director Barney Gomez on (202) 512-6722. Major contributors are listed in appendix I: Aldo A. Benejam, Senior Attorney; Barry L. Shillito, Senior Attorney.
Pursuant to a congressional request, GAO reviewed the award of a sole-source contract to Sato & Associates for a management study of the Department of the Treasury's Office of Inspector General (OIG) and of a consulting services contract to Kathie M. Libby, doing business as KLS, using other than full and open competition. GAO also reviewed the nature and purpose of trips to California made by Treasury Inspector General (IG) Valerie Lau since her appointment. GAO noted that: (1) shortly after her confirmation as Inspector General, Ms. Lau notified the Treasury Procurement Services Division (PSD) that she wanted Sato to perform a management review; (2) PSD awarded a sole-source management study contract to Sato on the basis of unusual and compelling urgency; (3) although Ms. Lau stated that the need to limit competition was urgent because of the need to make reassignments in the senior executive ranks and to marshal the resources needed to conduct audits, there was insufficient urgency to limit competition; (4) the price of Sato's contract for the Treasury OIG effort appears to be artificially high, in light of the fact that the firm performed a similar review of the Department of the Interior OIG for approximately $62,000 less; (5) in September 1995 PSD awarded a time-and-materials, consulting services contract to Libby to review and analyze an Office of Personnel Management (OPM) report on morale and diversity problems in the OIG office and assist OIG managers and staff concerning goals identified in the OPM study; (6) the contract was awarded on the basis of unusual and compelling urgency following limited competition; (7) the justification for limiting competition was not reasonable, since Ms. 
Lau could still have conveyed to her managers that the problems identified in the OPM study would be addressed and corrected those problems, had the consultant selection been delayed a few months to obtain full and open competition; (8) the largest modification made to the KLS contract was outside the scope of the contract and should have been obtained through a separate, competitive procurement; (9) GAO identified a pattern of careless management in the procurement process and in oversight of performance under the KLS contract; (10) OIG failed to fully understand and articulate its needs, resulting in a fourfold increase in the contract's total price and a 1-year extension to the period of performance; (11) OIG paid for work that was not authorized, and payments were made without verification that work had been done and without determining that travel and transportation costs documents had been received; and (12) all five of Ms. Lau's trips to California made between September 1994 and February 1997 were scheduled for work-related reasons.
Section 482 of Title 10 of the United States Code requires DOD to report quarterly to Congress on military readiness. The report is due to Congress not later than 45 days after the end of each calendar-year quarter (i.e., by May 15, August 14, November 14, and February 14 of each year). Congress first mandated the report in 1996 to enhance its oversight of military readiness, requiring that DOD describe each readiness problem and deficiency, the key indicators and other relevant information related to these problems and deficiencies, and planned remedial actions. DOD submitted its first quarterly report in May 1996. Since that time, Congress has added additional reporting requirements. Specifically, in 1997, the initial reporting requirement was expanded to require DOD to include additional reporting elements in the quarterly reports. Examples of these additional reporting elements include historical and projected personnel trends, training operations tempo, and equipment availability. In 2008, an additional reporting element was added to require the inclusion of an assessment of the readiness of the National Guard. For a listing of the 26 reporting elements currently required by section 482, see table 1. Since DOD provided its first quarterly readiness report in May 1996, DOD and the services have invested significant resources in upgrading the systems they use to collect and report readiness information. As a result, the Office of the Secretary of Defense, the Joint Staff, the combatant commands, and the services have added numerous new readiness reporting capabilities, such as the capacity to assess the ability of U.S. forces to meet mission requirements in specific operational plans. In addition, the services have also refined their respective service-specific metrics to enhance their ability to measure the readiness of their forces.
The Quarterly Readiness Report to Congress is a classified report that includes a summary of the contents of the report and multiple classified annexes that provide the required information. The report is typically hundreds of pages long. For example, the July through September 2012 Quarterly Readiness Report to Congress totaled 443 pages and the January through March 2013 report is 497 pages long. The Office of the Under Secretary of Defense for Personnel and Readiness assembles and produces the quarterly report to Congress. To do so, it compiles information from multiple DOD organizations, including the Joint Staff and military services, and its own information such as a summary of overall readiness status and prepares a draft report. It then sends the draft report to DOD components to review it for accuracy, and coordinates any comments. Once finalized, Office of the Under Secretary of Defense for Personnel and Readiness officials provide the report to the congressional defense committees (see figure 1). We have previously examined the extent to which DOD's quarterly readiness reports met section 482 reporting elements, and found that DOD's reports lacked detail, or in some cases, information required by law. For example: In 1998, we reported that DOD's quarterly readiness reports did not discuss the precise nature of identified readiness deficiencies, and information on planned remedial actions could be more complete and detailed to include specifics on timelines and funding requirements. In 2003, we reported that DOD's quarterly reports still contained broad statements of readiness issues and remedial actions, which were not supported by detailed examples. We also identified gaps in the extent to which DOD addressed the required reporting elements. For example, DOD was not reporting on borrowed manpower, personnel morale, and training funding. In both reports, we recommended actions DOD could take to improve its readiness reporting. 
Since our 2003 review, DOD has made adjustments and expanded its readiness reporting to Congress in some areas. In its quarterly readiness reports that covered the period from April 2012 through March 2013, DOD addressed most of the 26 reporting elements required by section 482 but partially addressed some elements and did not address some other elements. We found that, for the areas that were addressed or partially addressed, the services submitted different amounts and types of information because the Office of the Secretary of Defense has not provided guidance on the information to be included in the quarterly reports. Further, we found that information may exist in the department on some of the reporting elements DOD did not address, but that DOD has not analyzed alternative information that it could provide to meet the required reporting elements. DOD's four quarterly readiness reports that cover the period from April 1, 2012 through March 31, 2013 mostly addressed the 26 required reporting elements. In analyzing the three reports that covered the period from April 1 through December 31, 2012, we found that DOD addressed 17 elements, partially addressed 3 elements, and did not address 6 elements. In the January 1 through March 31, 2013 report, DOD's reporting remained the same except that it addressed an additional element that had not previously been addressed. As a result, our analysis for this report showed it addressed 18 elements, partially addressed 3 elements and did not address 5 elements. Figure 2 summarizes our assessment of the extent to which DOD's quarterly reports addressed the section 482 reporting elements. We assessed elements as being addressed when the information provided in the report was relevant to the reporting elements set out in section 482. For example, for training unit readiness and proficiency, each of the services provided their current and historical training readiness ratings. 
Similarly, for recruit quality, each of the services provided high school graduation rates of recruits. For some of the elements, DOD reported information that was incomplete or inconsistent across the services. Specifically, as shown below, for the three required reporting elements that DOD partially addressed, the information was incomplete, with only some services providing information on personnel stability, training operations tempo, and deployed equipment: Personnel stability: The Air Force, Marine Corps, and Navy provided information on retention rates, but the Army did not provide any information on this element. Training operations tempo: The Marine Corps and Navy provided information on the pace of training operations, but the Army and Air Force did not provide any information on this element. Deployed equipment: The Navy provided information on the number of ships deployed, but the other three services did not provide any information on this element. Further, in instances when the services reported information on a required element, they sometimes did so inconsistently, with varying amounts and types of information. For example: The Air Force and Marine Corps both reported information on the age of certain equipment items, but they did not report the same amount and type of information. The Air Force reported the average age of equipment by broad types of aircraft (e.g., fighters, bombers), while the Marine Corps reported average age of specific aircraft (e.g., F/A- 18, MV-22), as well as the age of its oldest equipment on hand, expected service life, and any impact of recapitalization initiatives on extending the expected service life of the equipment. The services all reported information on training commitments and deployments, but did not report the same amounts and types of information. First, the services used different timeframes when providing information on training commitments and deployments. 
The Army provided planned training events for fiscal years 2012 through 2018, the Air Force and Marine Corps provided planned training events for fiscal years 2012 through 2014, and the Navy did not provide any information on planned training events in the future. Second, the Air Force and the Navy provided information on the number of training events executed over the past two years, while the Army and Marine Corps did not. We found that the services have submitted different amounts and types of information to meet reporting elements because the Office of the Secretary of Defense has not provided guidance on the information to be included in the quarterly reports. Service officials told us they have received informal feedback from the Office of the Secretary of Defense regarding the data and charts they submit for inclusion in the quarterly readiness reports. For example, they have received informal suggestions for changes to how the readiness information is presented. However, service officials explained that they have not received clear guidance or instructions on the type and amount of information to present. As a result, the services have used their own judgment on the scope and content of readiness information they provide to meet the required reporting elements. Because the services report different types and amounts of information and DOD has not clarified what information should be reported to best address the required elements, the users of the report may not be getting a complete or consistent picture of the key indicators that relate to certain elements. For its three quarterly readiness reports that covered the period from April 1 through December 31, 2012, DOD did not provide any information on 6 of the 26 required elements, although in its January through March 2013 report DOD did provide information on 1 previously unaddressed element, specifically planned remedial actions. 
The required elements that remain unaddressed are personnel serving outside their specialty or grade, personnel morale, training funding, borrowed manpower, and the condition of nonpacing items. We found instances where information may exist within the department for some of the elements that DOD did not report on. For example: Extent to which personnel are serving in positions outside their specialty or grade: The Navy internally reports fit and fill rates, which compare personnel available by pay grade and Navy skill code against the positions that need to be filled. Such information could potentially provide insight into the extent to which the Navy fills positions using personnel outside of their specialty or grade. Personnel morale: We found multiple data sources that provide information related to this required reporting element. For example, DOD's Defense Manpower Data Center conducts a series of Web-based surveys called Status of Forces surveys, which include measures of job satisfaction, retention decision factors, and perceived readiness. Also, DOD's Morale, Welfare, and Recreation Customer Satisfaction Surveys regularly provide information on retention decision indicators. Finally, the Office of Personnel Management conducts a regular survey on federal employees' perceptions of their agencies called the Federal Employee Viewpoint Survey; the results of this survey are summarized in an Office of Personnel Management report and provide insights into overall job satisfaction and morale at the department level. Training funding: DOD's fiscal year 2014 budget request contained various types of information on training funding. For example, the request includes funding for recruit training, specialized skills training, and training support in the Marine Corps and similar information for the other services.
Borrowed manpower: We found that the Army now requires commanders to report on the readiness impacts of borrowed military manpower in internal monthly readiness reports. Specifically, on a quarterly basis, beginning no later than June 15, 2013, senior leaders will brief the Secretary of the Army on borrowed manpower with a focus on training and readiness impacts. For the condition of nonpacing items element, officials from the Office of the Under Secretary of Defense for Personnel and Readiness noted that there is not a joint definition of nonpacing items across the services. The Army defines pacing items as major weapon systems, aircraft, and other equipment items that are central to the organization's ability to perform its core functions/designed capabilities, but service officials reported that they do not collect any information related to nonpacing items. As noted previously, section 482 requires that DOD address all 26 reporting elements in its quarterly readiness reports to Congress. When asked why DOD did not provide information on certain required reporting elements, officials from the Office of the Under Secretary of Defense for Personnel and Readiness cited an analysis included in the implementation plan for its readiness report to Congress in 1998. This analysis concluded that DOD could not provide the required data at that time because, among other reasons, they lacked the metrics to capture the required data. In the 1998 implementation plan, DOD noted that addressing the section 482 reporting elements was an iterative process, recognizing that the type and quality of readiness information was likely to evolve over time as improvements to DOD's readiness reporting and assessment systems came to fruition. DOD stated that it intended to continue to review and update or modify the readiness information as necessary to improve the report's utility in displaying readiness. 
However, since it issued its initial implementation plan, DOD has not analyzed alternative information, such as Navy fit and fill rates or satisfaction survey results, which it could provide to meet the required reporting elements. DOD officials told us they intend to review the required reporting elements to determine the extent to which they can address some of the elements that they have consistently not reported on and, if they still cannot address the elements, to possibly request congressional modifications on the required content of the reports. However, they said that they had not yet begun or set a specific timetable for this review. Without analyzing alternative information it could provide to meet the required reporting elements, DOD risks continuing to provide incomplete information to Congress, which could hamper its oversight of DOD readiness. DOD has taken steps to improve the information in its Quarterly Readiness Reports to Congress over time. However, we found several areas where additional contextual information, such as benchmarks or goals, and clear linkages between reported information and readiness ratings, would provide decision makers a more complete picture of DOD's readiness. Over time, based on its own initiative and specific congressional requests for information, DOD has added information to its reports. For example, in 2001, it added data on cannibalizations--specifically the rates at which the services are removing serviceable parts from one piece of equipment and installing them in another. This information was added in response to a requirement in the 2001 National Defense Authorization Act that the readiness reporting system measure "cannibalization of parts, supplies, and equipment." In 2006, DOD added capability-based assessment data from the Defense Readiness Reporting System and detailed information on operational plan assessments. Operational plan assessments gauge combatant commands' ability to successfully execute key plans and provide insight into the impact of sourcing and logistics shortfalls and readiness deficiencies on military risk. In 2009, it added brigade and regimental combat team deployment information. In compiling its January through March 2013 Quarterly Readiness Report to Congress, DOD made several structural changes to expand its reporting on overall readiness. Specifically, the Office of the Secretary of Defense added narrative information and other sections, and made more explicit linkages between resource needs and readiness deficiencies in order to convey a clearer picture of the department's readiness status and concerns. In that report, DOD added: narrative information detailing the impact of readiness deficiencies on overall readiness; discussions of how the military services' fiscal year 2014 budgets support their long-term readiness goals; examples of remedial actions to improve service readiness; and a section highlighting significant changes from the previous quarter. Office of the Secretary of Defense officials told us that they plan to sustain these changes in future quarterly readiness reports to Congress. We found several areas where adding contextual information to the quarterly readiness reports, such as benchmarks or goals, and clearer linkages between reported information and readiness ratings, would provide Congress with a more comprehensive and understandable report. Federal internal control standards state that decision makers need complete and relevant information to manage risks. This includes providing pertinent information that is identified and distributed in an understandable form.
In some instances, the services report significant amounts of quantitative data, but do not always include information on benchmarks or goals that would enable the reader to distinguish between acceptable and unacceptable levels in the data reported. For example, when responding to the required reporting element on equipment that is not mission capable: The Marine Corps and Air Force report mission capable rates for all of their equipment, but do not provide information on related goals, such as the percentage of each item's inventory that should be kept at various mission capability levels. The Navy reports on the number of ships that are operating with a mechanical or systems failure. While the Navy explains that this may or may not impact the mission capability of the vessel, it does not provide what it considers an acceptable benchmark for the number of ships that operate with these failures or the number of failures on each ship. In the absence of benchmarks or goals, the reader cannot assess the significance of any reported information because it is not clear whether the data indicate a problem or the extent of the problem. In other instances, the services have not fully explained the voluminous data presented on the required reporting elements or set the context for how it may or may not be connected to the information DOD provides in the report on unit equipment, training, and personnel readiness ratings and overall readiness. For example: The services provide detailed mission capable rate charts and supporting data for dozens of aircraft, ground equipment, and other weapons systems. For example, for the January through March 2013 readiness report, the services collectively provided 130 pages of charts, data, and other information on their mission capable equipment rates; this accounted for over 25 percent of the entire quarterly report. 
However, the services do not explain the extent to which these mission capable rates are, or are not, linked to equipment readiness ratings or overall readiness that is also presented in the quarterly reports. In the area of training, the Navy provides data showing the number of training exercises completed over the past two years, but does not provide any explanation regarding how this information affects training readiness ratings that are also presented in the quarterly reports. In the area of logistics, although the Army and the Air Force provide depot maintenance backlogs, they do not explain the effect the backlogs have on unit readiness that is also discussed in the report. Specifically, those services do not explain whether units' readiness is affected or could be affected in the future because maintenance was not accomplished when needed. Without providing additional contextual information, such as benchmarks and clearer linkages, it is unclear how, if at all, the various data on the required elements affected unit and overall readiness. To oversee DOD's efforts to maintain a trained and ready force, and make decisions about related resource needs, congressional decision makers need relevant, accurate, and timely readiness information on the status of the military forces. DOD continues to address many of the required reporting elements in its quarterly readiness reports to Congress and has periodically revised the content of the information it presents, which is an important step to making the reports more useful. However, as reflected in its more recent reports for 2012 and 2013, DOD has not always reported or fully reported on some elements, and sometimes presents detailed readiness data without sufficient context on how this information relates to or affects the information it provides on overall readiness or readiness in specific resource areas, such as equipment, personnel, and training. 
Without further analyzing whether information is available within the department to address the elements that it is not currently addressing, DOD cannot be sure that it has the information it needs to enhance the quality of its reporting or present options to the Congress for adjusting reporting requirements. Furthermore, unless DOD provides guidance to the services on the amount and types of information to be included in the quarterly reports, including requirements to provide contextual information such as criteria or benchmarks for distinguishing between acceptable and unacceptable levels in the data reported, DOD is likely to continue to be limited in its ability to provide Congress with complete, consistent, and useful information. To improve the information available to Congress in its quarterly readiness reports, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to take the following three actions: Analyze alternative sources of information within DOD that it could provide to meet required reporting elements that DOD has not addressed in past reports; Issue guidance to the services on the type and amount of information to be included in their submissions for the quarterly readiness report; and Incorporate contextual information in the quarterly readiness reports such as clear linkages between reported information on the required elements and readiness ratings, and benchmarks for assessing provided data to enable the reader to distinguish between acceptable and unacceptable levels in the data reported. In written comments on a draft of this report, DOD concurred with two recommendations and partially concurred with one recommendation. DOD's comments are reprinted in their entirety in appendix II. DOD provided technical comments during the course of the engagement, and these were incorporated as appropriate. 
In its overall comments, DOD noted that its goal is to provide the most accurate and factual representation of readiness to Congress through its quarterly reports and that its ability to accomplish this relies upon our recommendations, which should facilitate improvements. DOD stated that our recommendations will be incorporated into the ongoing process of producing the quarterly readiness reports and will hopefully improve the ability to interpret the product while assisting the services in relaying their readiness concerns. DOD also provided detailed comments on each of our recommendations. DOD partially concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to analyze alternative sources of information within DOD that it could provide to meet required reporting elements that DOD has not addressed in past reports. DOD stated that the iterative process that is used to improve quarterly readiness reports to Congress will continue to seek alternative sources of information that could provide a more holistic picture of readiness across the force and that improvements in reporting capabilities and adjustments to reported readiness information should be available to provide all of the information required by section 482 of Title 10. DOD noted that it provides information on one required element, training funding, within its annual budget requests. DOD stated it will investigate ways to incorporate surrogate methods of reporting in future reports. DOD concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to issue guidance to the services on the type and amount of information to be included in their submissions for the quarterly readiness report.
DOD stated that it will continue to issue guidance to the individual services regarding types and amounts of information that may improve the readiness analysis and advance the comparative nature of separate services. DOD stated that the individual services may use distinct measures to determine specific levels of their readiness and the ability to compare these measures may not be possible or occur quarterly. Where feasible, DOD stated it will continue to attempt to align information and improve the clarity of readiness throughout the department. DOD concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to incorporate contextual information in the quarterly readiness reports such as clear linkages between reported information on the required elements and readiness ratings, and benchmarks for assessing provided data to enable the reader to distinguish between acceptable and unacceptable levels in the data reported. DOD stated that a concerted effort is made to continuously improve the quality of analysis as well as assist with the explanation of linkages between raw data and readiness. DOD stated that this effort is tempered with the need to reduce the volume of information and provide sound examination of the effects of this data on the force. DOD noted a succinct version of readiness is provided in the executive summary included in recent reports. DOD also noted that a longer narrative supplement will continue to be provided in an attempt to enhance the clarity of the linkages and judgment of acceptability regarding the reported readiness across the force. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, the Commandant of the Marine Corps, and appropriate congressional committees.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) addressed required reporting elements in its quarterly readiness reports to Congress, we reviewed legislation governing DOD readiness reporting, including provisions in Title 10, and interviewed DOD officials. We analyzed the four most recent Quarterly Readiness Reports to Congress that covered the period from April 1, 2012 through March 31, 2013 and compared the reported readiness information in these reports to the Title 10 requirements to identify any trends, gaps, or reporting inconsistencies. Specifically, we developed an evaluation tool based on Title 10 section 482 reporting requirements to assess the extent to which the April through June 2012, July through September 2012, October through December 2012, and January through March 2013 Quarterly Readiness Reports to Congress addressed these elements. Using scorecard methodologies, two GAO analysts independently evaluated the quarterly readiness reports against the elements specified in section 482. The analysts rated compliance for each element as "addressed" or "not addressed." After the two analysts completed their independent analyses, they compared the two sets of observations and discussed and reconciled any differences.
We also interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Joint Staff Readiness Division, and each of the military services and obtained additional information and the officials' views of our assessments, as well as explanations of why certain items were not addressed or not fully addressed. To determine what additional information, if any, could make the reports more useful, we reviewed the types of readiness information DOD uses internally to manage readiness contained in documents such as the Joint Force Readiness Review and various service-specific readiness products, and compared their formatting and contents to the four reports identified above. We reviewed the content of these reports in the context of federal internal control standards, which state that decision makers need complete and relevant information to manage risks. This includes pertinent information that is identified and distributed in an understandable form. We interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Joint Staff Readiness Division, and each of the military services and discussed the procedures for compiling and submitting readiness information for inclusion in the quarterly readiness reports, changes in the reports over time, and the Office of the Secretary of Defense's process for compiling the full report. We also identified adjustments DOD has made to its reports, including changes the Office of the Under Secretary of Defense for Personnel and Readiness made in preparing the January through March 2013 report, and the underlying reasons for these adjustments, as well as obtained the views of officials as to opportunities to improve the current reporting. We conducted this performance audit from August 2012 through July 2013 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michael Ferren, Assistant Director; Richard Burkard; Randy Neice; Amie Steele; Shana Wallace; Chris Watson; and Erik Wilkins-McKee made key contributions to this report.
Congress and DOD need relevant, accurate, and timely readiness information to make informed decisions about the use of military forces and related resource needs. To that end, Congress requires DOD to submit a quarterly readiness report addressing various elements related to overall readiness, personnel, training, and equipment. A committee report accompanying the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO report on the type of readiness information available to Congress and DOD decision makers and the reported readiness of U.S. forces. In May 2013, GAO provided a classified report on readiness trends of DOD forces. For this report, GAO evaluated (1) the extent to which DOD addressed required reporting elements in its quarterly readiness reports to Congress, and (2) what additional information, if any, could make the reports more useful. GAO analyzed various readiness reports and supporting documentation, and interviewed cognizant officials. In its quarterly readiness reports that covered the period from April 2012 through March 2013, the Department of Defense (DOD) addressed most but not all required reporting elements. Section 482 of Title 10 of the U.S. Code requires DOD to report on 26 elements, including readiness deficiencies, remedial actions, and data specific to the military services in the areas of personnel, training, and equipment. In analyzing DOD's reports, GAO found that DOD addressed 18 of the 26 elements, partially addressed 3 elements, and did not report on 5 elements. For the elements partially addressed--personnel stability, training operations tempo, and deployed equipment--reporting was incomplete because some services reported information and others did not report. When all the services reported on an element, they at times did so inconsistently, with varying amounts and types of information.
For example, the services all reported information on training commitments and deployments, but used different timeframes when providing information on planned training events in the future. The services reported differently because DOD has not provided guidance on the information to be reported. For the elements that DOD did not address, including borrowed manpower and training funding, GAO found that information may exist in the department but is not being reported to Congress. For example, the Army now requires commanders to report monthly on the readiness impacts of borrowed military manpower and DOD's budget requests include data on training funding. However, DOD has not taken steps to analyze whether this information could be used to meet the related reporting element. Without issuing guidance on the type and amount of information to be included by each service and analyzing alternative information it could provide to meet the required elements, DOD risks continuing to provide inconsistent and incomplete information to Congress. DOD has taken steps to improve its quarterly readiness reports to Congress, but additional contextual information would provide decision makers a more complete picture of DOD's readiness. Over time, based on its own initiative and congressional requests, DOD has added information to its reports, such as on operational plan assessments. In its most recent report, DOD added narrative information detailing the impact of readiness deficiencies on overall readiness and a discussion of how the services' budgets support their long-term readiness goals. Federal internal control standards state that decision makers need complete and relevant information to manage risks, and GAO found several areas where DOD could provide Congress with more comprehensive and understandable information if it added some additional context to its reports. 
For example, in some instances, the services report significant amounts of quantitative data, but do not include information on benchmarks or goals that would enable the reader to determine whether the data indicate a problem or the extent of the problem. For example, the Marine Corps and Air Force report mission capable rates for their specific equipment items, but do not provide information on related goals, such as the percentage of the inventory that should be kept at various capability levels. In other instances, the services have not fully explained any connections between the voluminous data they report on the required elements and the information DOD provides in the report on unit and overall readiness ratings. Without providing additional contextual information, DOD's quarterly reports may not provide clear information necessary for congressional oversight and funding decisions. GAO recommends that DOD analyze alternative sources of information that could be used to meet the required reporting elements, issue guidance on the type and amount of information to be included by each service, and incorporate contextual information to improve the clarity and usefulness of reported information. DOD generally agreed with the recommendations.
In August 1990, Iraq invaded Kuwait, and the United Nations imposed sanctions against Iraq. Security Council Resolution 661 of 1990 prohibited all nations from buying and selling Iraqi commodities, except for food and medicine. Security Council Resolution 661 also prohibited all nations from exporting weapons or military equipment to Iraq and established a sanctions committee to monitor compliance and progress in implementing the sanctions. The members of the sanctions committee were members of the Security Council. Subsequent Security Council resolutions specifically prohibited nations from exporting to Iraq items that could be used to build chemical, biological, or nuclear weapons. In 1991, the Security Council offered to let Iraq sell oil under a U.N. program to meet its people's basic needs. The Iraqi government rejected the offer, and over the next 5 years, the United Nations reported food shortages and a general deterioration in social services. In December 1996, the United Nations and Iraq agreed on the Oil for Food program, which permitted Iraq to sell up to $1 billion worth of oil every 90 days to pay for food, medicine, and humanitarian goods. Subsequent U.N. resolutions increased the amount of oil that could be sold and expanded the humanitarian goods that could be imported. In 1999, the Security Council removed all restrictions on the amount of oil Iraq could sell to purchase civilian goods. The United Nations and the Security Council monitored and screened contracts that the Iraqi government signed with commodity suppliers and oil purchasers, and Iraq's oil revenue was placed in a U.N.-controlled escrow account. In May 2003, U.N. resolution 1483 requested the U.N. Secretary General to transfer the Oil for Food program to the CPA by November 2003. Despite concerns that sanctions may have worsened the humanitarian situation, the Oil for Food program appears to have helped the Iraqi people.
According to the United Nations, the average daily food intake increased from around 1,275 calories per person per day in 1996 to about 2,229 calories at the end of 2001. In February 2002, the United Nations reported that the Oil for Food program had considerable success in several sectors such as agriculture, food, health, and nutrition by arresting the decline in living conditions and improving the nutritional status of the average Iraqi citizen. The Public Distribution System run by Iraq's Ministry of Trade is the food portion of the Oil for Food program. The system distributes a monthly "food basket" that normally consists of a dozen items to all Iraqis. About 60 percent of Iraqis rely on this basket as their main source of food. We estimate that, from 1997 through 2002, the former Iraqi regime acquired $10.1 billion in illegal revenues related to the Oil for Food program--$5.7 billion through oil smuggling and $4.4 billion through surcharges against oil sales and illicit commissions from commodity suppliers. This estimate is higher than the $6.6 billion in illegal revenues we reported in May 2002. We updated our estimate to include (1) oil revenue and contract amounts for 2002, (2) updated letters of credit from prior years, and (3) newer estimates of illicit commissions from commodity suppliers. Oil was smuggled out through several routes, according to U.S. government officials and oil industry experts. Oil entered Syria by pipeline, crossed the borders of Jordan and Turkey by truck, and was smuggled through the Persian Gulf by ship. In addition to revenues from oil smuggling, the Iraqi government levied surcharges against oil purchasers and commissions against commodity suppliers participating in the Oil for Food program. According to some Security Council members, the surcharge was up to 50 cents per barrel of oil and the commission was 5 to 15 percent of the commodity contract. 
In our 2002 report, we estimated that the Iraqi regime received a 5-percent illicit commission on commodity contracts. However, a September 2003 Department of Defense review found that at least 48 percent of 759 Oil for Food contracts that it reviewed were overpriced by an average of 21 percent. Defense officials found 5 contracts that included "after-sales service charges" of between 10 and 20 percent. In addition, interviews by U.S. investigators with high-ranking Iraq regime officials, including the former oil and finance ministers, confirmed that the former regime received a 10-percent commission from commodity suppliers. Both OIP and the sanctions committee were responsible for overseeing the Oil for Food Program. However, the Iraqi government negotiated contracts directly with purchasers of Iraqi oil and suppliers of commodities. While OIP was to examine each contract for price and value, it is unclear how it performed this function. The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. The sanctions committee responded to illegal surcharges on oil, but it is unclear what actions it took to respond to commissions on commodity contracts. U.N. Security Council resolutions and procedures recognized the sovereignty of Iraq and gave the Iraqi government authority to negotiate contracts and decide on contractors. Security Council resolution 986 of 1995 authorized states to import petroleum products from Iraq, subject to the Iraqi government's endorsement of transactions. Resolution 986 also stated that each export of goods would be at the request of the government of Iraq. Security Council procedures for implementing resolution 986 further stated that the Iraqi government or the United Nations Inter-Agency Humanitarian Program would contract directly with suppliers and conclude the appropriate contractual arrangements. 
Iraqi control over contract negotiations may have been one important factor in allowing Iraq to levy illegal surcharges and commissions. Appendix I contains a chronology of major events related to sanctions against Iraq and the administration of the Oil for Food program. OIP administered the Oil for Food program from December 1996 to November 2003. As provided in Security Council resolution 986 of 1995 and a memorandum of understanding between the United Nations and the Iraqi government, OIP was responsible for monitoring the sale of Iraq's oil, monitoring Iraq's purchase of commodities and the delivery of goods, and accounting for the program's finances. The United Nations received 3 percent of Iraq's oil export proceeds for its administrative and operational costs, which included the cost of U.N. weapons inspections. The sanctions committee's procedures for implementing resolution 986 stated that U.N. independent inspection agents were responsible for monitoring the quality and quantity of oil being shipped and were authorized to stop shipments if they found irregularities. To do this, OIP employed 14 contract workers to monitor Iraqi oil sales at 3 exit points in Iraq. However, the Iraqi government bypassed the official exit points by smuggling oil through an illegal Syrian pipeline and by trucks through Jordan and Turkey. According to OIP, member states were responsible for ensuring that their nationals and corporations complied with the sanctions. OIP was also responsible for monitoring Iraq's purchase of commodities and the delivery of goods. Security Council Resolution 986, paragraph 8a(ii) required Iraq to submit a plan, approved by the Secretary General, to ensure equitable distribution of Iraq's commodity purchases. The initial distribution plans focused on food and medicines while subsequent plans were expansive and covered 24 economic sectors, including electricity, oil, and telecommunications. 
The sanctions committee's procedures for implementing Security Council resolution 986 stated that experts in the Secretariat were to examine each proposed Iraqi commodity contract, in particular the details of price and value, and to determine whether the contract items were on the distribution plan. It is unclear whether the Secretariat performed this function. OIP officials told the Defense Contract Audit Agency they performed very limited, if any, pricing review. They stated that no U.N. resolution tasked them with assessing the price reasonableness of the contracts and that no contracts were rejected solely on the basis of price. The sanctions committee's procedures for implementing resolution 986 stated that independent inspection agents would confirm the arrival of supplies in Iraq. OIP deployed about 78 U.N. contract monitors to verify shipments and authenticate the supplies for payment. OIP employees were able to visually inspect 7 to 10 percent of the approved deliveries. Security Council resolution 986 also requested the Secretary General to establish an escrow account for the Oil for Food Program and to appoint independent and certified public accountants to audit the account. In this regard, the Secretary General established an escrow account at BNP Paribas into which Iraqi oil revenues were deposited and letters of credit were issued to suppliers having approved contracts. The U.N. Board of Audit, a body of external public auditors, audited the account. According to OIP, there were also numerous internal audits of the program. We are trying to obtain these audits. The sanctions committee was responsible for three key elements of the Oil for Food Program: (1) monitoring implementation of the sanctions, (2) screening contracts to prevent the purchase of items that could have military uses, and (3) approving Iraq's oil and commodity contracts. U.N. 
Security Council resolution 661 of 1990 directed all states to prevent Iraq from exporting petroleum products into their territories. Paragraph 6 of resolution 661 established a sanctions committee to report to the Security Council on states' compliance with the sanctions and recommend actions regarding effective implementation. As early as June 1996, the Maritime Interception Force, a naval force of coalition partners including the United States and Great Britain, informed the sanctions committee that oil was being smuggled out of Iraq through Iranian territorial waters. In December 1996, Iran acknowledged the smuggling and reported that it had taken action. In October 1997, the sanctions committee was again informed about smuggling through Iranian waters. According to multiple sources, oil smuggling also occurred through Jordan, Turkey, Syria, and the Gulf. Smuggling was a major source of illicit revenue for the former Iraqi regime through 2002. It is unclear what recommended actions the sanctions committee made to the Security Council to address the continued smuggling. A primary function of the members of the sanctions committee was to review and approve contracts for items that could be used for military purposes. For example, the United States conducted the most thorough review; about 60 U.S. government technical experts assessed each item in a contract to determine its potential military application. According to U.N. Secretariat data in 2002, the United States was responsible for about 90 percent of the holds placed on goods to be exported to Iraq. As of April 2002, about $5.1 billion worth of goods were being held for shipment to Iraq. Under Security Council resolution 986 of 1995, paragraphs 1 and 8, the sanctions committee was responsible for approving Iraq's oil contracts, particularly to ensure that the contract price was fair, and for approving most of Iraq's commodity contracts. 
In March 2001, the United States informed the Security Council about allegations that Iraqi government officials were receiving illegal surcharges on oil contracts and illicit commissions on commodity contracts. According to OIP officials, the Security Council took action on the allegations of surcharges in 2001 by implementing retroactive pricing for oil contracts. However, it is unclear what actions the sanctions committee took to respond to illicit commissions on commodity contracts. At that time, there was increasing concern about the humanitarian situation in Iraq and pressure on the United States to expedite its review process. In November 2003, the United Nations transferred to the CPA responsibility for 3,059 Oil for Food contracts totaling about $6.2 billion and decided not to transfer a remaining 2,199 contracts for a variety of reasons. U.N. agencies had renegotiated most of the contracts turned over to the CPA with the suppliers to remove illicit charges and amend delivery and location terms. However, the information the United Nations supplied to the CPA on the renegotiated contracts contained database errors and did not include all contracts, amendments, and letters of credit associated with the 3,059 transferred contracts. These data problems, coupled with inadequate staffing at the CPA, hampered the ability of the CPA's Oil for Food coordination center to ensure that suppliers complied with commodity deliveries. In addition, poor planning and coordination are affecting the execution of food contracts. On November 22, 2003, OIP transferred 3,059 contracts worth about $6.2 billion in pending commodity shipments to the CPA, according to OIP. Prior to the transfer, U.N. agencies had renegotiated the contracts with the suppliers to remove "after-sales service fees"--based on information provided by the CPA and Iraqi ministries--and to change delivery dates and locations. These fees were either calculated separately or were part of the unit price of the goods. 
At the time of the transfer, all but 251 contracts had been renegotiated with the suppliers. The Defense Contract Management Agency is renegotiating the remaining contracts for the CPA to remove additional fees averaging 10 percent. The criteria for renegotiating contracts and the amount of the reductions were based on information from the CPA in Baghdad and the ministries that originally negotiated the contracts. An additional 2,199 contracts worth almost $2 billion were not transferred as a result of a review by U.N. agencies, the CPA, and the Iraqi ministries that negotiated the contracts. For example:

- The review did not recommend continuing 762 contracts, worth almost $1.2 billion, because it determined that the commodities associated with the contracts were no longer needed.

- Another 728 contracts, worth about $750 million, had been classified as priority contracts, but were not transferred to the CPA for several reasons. About half--351 contracts--were not transferred because suppliers were concerned about the adequacy of security within Iraq or could not reach agreement on price reductions or specification changes. Another 180 contracts were considered fully delivered. Another 136 suppliers had either declared bankruptcy, did not exist, or did not respond to U.N. requests. It is unclear why the remaining 61 contracts were removed from the priority list; the OIP document lists them as "other."

- Suppliers did not want to ship the outstanding small balances for an additional 709 contracts totaling about $28 million.

The largest portion of the $6.2 billion in Oil for Food contracts pending shipment in November 2003--about 23 percent--was designated for food procurement. An additional 9 percent was for food handling and transport. The oil infrastructure, power, and agriculture sectors also benefited from the remaining contracts. Nearly one half of the renegotiated contracts were with suppliers in Russia, Jordan, Turkey, the United Arab Emirates, and France. 
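The contract counts above are internally consistent: the three categories sum to the 2,199 untransferred contracts, and the four stated reasons account for all 728 priority contracts. A minimal arithmetic cross-check, using only the counts reported in this testimony (the category labels are paraphrases, not official designations):

```python
# Counts of Oil for Food contracts NOT transferred to the CPA,
# as reported in the testimony.
not_transferred = {
    "commodities no longer needed": 762,
    "priority contracts not transferred": 728,
    "small outstanding balances": 709,
}

# Reasons the 728 priority contracts were not transferred.
priority_reasons = {
    "security concerns or failed renegotiation": 351,
    "considered fully delivered": 180,
    "supplier bankrupt, nonexistent, or unresponsive": 136,
    "other (per the OIP document)": 61,
}

# Both breakdowns reconcile exactly with the reported totals.
print(sum(not_transferred.values()))   # 2199
print(sum(priority_reasons.values()))  # 728
```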
According to CPA officials and documents, the incomplete and unreliable contract information the CPA received from the United Nations has hindered CPA's ability to execute and accurately report on the remaining contracts. U.N. resolution 1483 requested the Secretary General, through OIP, to transfer to the CPA all relevant documentation on Oil for Food contracts. When we met with OIP officials on November 24, 2003, they stated that they had transferred all contract information to the CPA. CPA officials and documents report that the CPA has not received complete information, including copies of all contracts. The CPA received several compact disks in November and January that were to contain detailed contract and delivery data, but the information was incomplete. The CPA received few source documents such as the original contracts, amendments, and letters of credit needed to identify the status of commodities, prepare shipment schedules, and contact suppliers. In addition, the CPA received little information on letters of credit that had expired or were cancelled. Funds for the Oil for Food program are obligated by letters of credit to the bank holding the U.N. escrow account. When these commitments are cancelled, the remaining funds are available for transfer to the Development Fund for Iraq. Without this information, the CPA cannot determine the disposition of Oil for Food funds and whether the proper amounts were deposited into the Development Fund for Iraq. In addition, the CPA received an OIP contract database but found it unreliable. For example, CPA staff found mathematical and currency errors in the calculation of contract cost. The inadequate data and documentation have made it difficult for CPA to prepare accurate reports on the status of inbound goods and closeouts of completed contracts. 
According to a Department of Defense contracting official, some contractors have not received payment for goods delivered in Iraq because the CPA had no record of their contracts. In November 2003, the CPA established a coordination center in Baghdad to oversee the receipt and delivery of Oil for Food commodities. The CPA authorized 48 coalition positions, to be assisted by Iraqis from various ministries. However, according to several U.S. and U.N. officials, the CPA had insufficient staff to manage the program and high staff turnover. As of mid-December 2003, the center had 19 coalition staff, including 18 staff whose tours ended in January 2004. U.S. and WFP officials stated that the staff assigned at the time of the transfer lacked experience in managing and monitoring the import and distribution of goods. A former CPA official stated that the Oil for Food program had been thrust upon an already overburdened and understaffed CPA. As a result, 251 contracts had not been renegotiated prior to the time of the transfer, and the CPA asked the Defense Contract Management Agency to continue the renegotiation process. A November 2003 WFP report placed part of the blame for food shortfalls during the fall of 2003 on OIP delays in releasing guidelines for the contract prioritization and renegotiation process. A September 2003 U.N. report also noted that the transfer process in the northern governorates was slowing due to an insufficient number of CPA counterparts to work with U.N. staff on transition issues. The center's capacity improved in March 2004, when its coalition staff totaled 37. By April 2004, the coordination center had 16 coalition staff. Up to 40 Iraqi ministry staff are currently working on Oil for Food contracts. As of April 1, the coordination center's seven ministry advisors have begun working with staff at their respective ministries as the first step in moving control of the program to the Iraqi government. According to U.S. 
officials and documents, CPA's failed plans to privatize the food distribution system and delayed negotiations with WFP to administer the system resulted in diminished stocks of food commodities and localized shortages. Before the transfer of the Oil for Food program, the CPA administrator proposed to eliminate Iraq's food distribution system and to provide former recipients with cash payments. He asserted that the system was expensive and depressed the agricultural sector, and the Ministry of Trade began drawing down existing inventories of food. In December 2003, as the security environment worsened, the CPA administrator reversed his decision to reform the food ration system and left the decision to the provisional Iraqi government. In January 2004, CPA negotiated a memorandum of understanding (MOU) with WFP and the Ministry of Trade that committed WFP to procuring a 3-month emergency food stock by March 31, 2004, and providing technical support to the CPA and Ministry of Trade. Delays in signing the MOU were due to disagreements about the procurement of emergency food stocks, contract delivery terms, and the terms of WFP's involvement. No additional food was procured during the negotiations, and food stocks diminished and localized shortages occurred in February and March 2004. The CPA and WFP addressed these problems with emergency procurements from nearby countries. An April WFP report projected a continued supply of food items through May 2004 except for a 12-percent shortage in milk. Only 55 percent of required domestic wheat has been procured for July 2004, and no domestic wheat has been procured for August. Under the terms of the MOU, WFP's commitment to procuring food stock ended March 31, 2004. The Ministry of Trade assumed responsibility for food procurement on April 1, 2004. According to a U.S. official, coordination between WFP and the Ministry of Trade has been deteriorating. 
The Ministry has not provided WFP with complete and timely information on monthly food allocation plans, weekly stock reports, or information on cargo arrivals, as the MOU required. WFP staff reported that the Ministry's data are subject to sudden, large, and unexplained stock adjustments, thereby making it difficult to plan deliveries. The security environment in Iraq also affected planning for the transfer and movement of Oil for Food goods in the fall of 2003. The transfer occurred during a period of deteriorating security conditions and growing violence in Iraq. A September 2003 U.N. report found that the evacuation of U.N. personnel from Baghdad affected the timetable and procedures for the transfer of the Oil for Food program to the CPA and contributed to delays in the contract prioritization and renegotiation processes. Most WFP staff remained in Amman and other regional offices and continued to manage the Oil for Food program from those locations. The August bombing of the U.N. Baghdad headquarters also resulted in the temporary suspension of the border inspection process and shipments of humanitarian supplies and equipment. A March 2004 CPA report also noted that the stability of the food supply would be affected if security conditions worsened. The history of inadequate oversight and corruption in the Oil for Food program raises questions about the Iraqi government's ability to manage the import and distribution of Oil for Food commodities and the billions in international assistance expected to flow into the country. In addition, the food distribution system created a dependency on food subsidies that disrupted private food markets. The government will have to decide whether to continue, reform, or eliminate the current system. The CPA and Iraqi ministries must address corruption in the Oil for Food program to help ensure that the remaining contracts are managed with transparent and accountable controls. 
Building these internal control and accountability measures into the operations of Iraqi ministries will also help safeguard the $18.4 billion in fiscal year 2004 U.S. reconstruction funds and at least $13.8 billion pledged by other countries. To address these concerns and oversee government operations, the CPA administrator announced the appointment of inspectors general for 21 of Iraq's 25 national ministries on March 30, 2004. At the same time, the CPA announced the establishment of two independent agencies to work with the inspectors general--the Commission on Public Integrity and a Board of Supreme Audit. Finally, the United States will spend about $1.63 billion on governance-related activities in Iraq, which will include building a transparent financial management system in Iraq's ministries. CPA's coordination center continues to provide on-the-job training for ministry staff who will assume responsibility for Oil for Food contracts after July 2004. Coalition personnel have provided Iraqi staff with guidance on working with suppliers in a fair and open manner and determining when changes to letters of credit are appropriate. In addition, according to center staff, coalition and Iraqi staff signed a code of conduct, which outlined proper job behavior. Among other provisions, the code of conduct prohibited kickbacks and secret commissions from suppliers. The center also developed a code of conduct for suppliers. In addition, the center has begun identifying the steps needed for the transition of full authority to the Iraqi ministries. These steps include transferring contract-related documents, contacting suppliers, and providing authority to amend contracts. In addition, the January 2004 MOU commits WFP to training ministry staff in the procurement and transport functions currently conducted by WFP. Training is taking place at WFP headquarters in Rome, Italy. 
After the CPA transfers responsibility for the food distribution system to the Iraqi provisional government in July 2004, the government will have to decide whether to continue, reform, or eliminate the current system. Documents from the Ministries of Trade and Finance indicate that the annual cost of maintaining the system is as high as $5 billion, or about 25 percent of total government expenditures. In 2005 and 2006, expenditures for food will be almost as much as all expenditures for capital projects. According to a September 2003 joint U.N. and World Bank needs assessment of Iraq, the food subsidy, given out as a monthly ration to the entire population, staved off mass starvation during the time of the sanctions, but at the same time it disrupted the market for food grains produced locally. The agricultural sector had little incentive to produce crops in the absence of a promising market. However, the Iraqi government may find it politically difficult to scale back the food distribution system with 60 percent of the population relying on monthly rations as their primary source of nutrition. WFP is completing a vulnerability assessment that Iraq could use to make future decisions on food security programs and better target food items to those most in need. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Joseph Christoff at (202) 512-8979. Other key contributors to this statement were Pamela Briggs, Lyric Clark, Lynn Cothern, Jeanette Espinola, Zina Merritt, Tetsuo Miyabara, Jose M. Pena, III, Stephanie Robinson, Jonathan Rose, Richard Seldin, Audrey Solis, and Phillip Thomas.

Appendix I: Chronology of major events related to sanctions against Iraq and the administration of the Oil for Food program

- Iraqi forces invaded Kuwait.
- Resolution 660 condemned the invasion and demanded immediate withdrawal from Kuwait.
- Imposed economic sanctions against the Republic of Iraq. The resolution called for member states to prevent all commodity imports from Iraq and exports to Iraq, with the exception of supplies intended strictly for medical purposes and, in humanitarian circumstances, foodstuffs.
- President Bush ordered the deployment of thousands of U.S. forces to Saudi Arabia.
- Public Law 101-513 prohibited the import of products from Iraq into the United States and the export of U.S. products to Iraq.
- The Iraq War Powers Resolution authorized the president to use "all necessary means" to compel Iraq to withdraw military forces from Kuwait.
- Operation Desert Storm was launched: a coalition operation targeted to force Iraq to withdraw from Kuwait.
- Iraq announced acceptance of all relevant U.N. Security Council resolutions.
- U.N. Security Council Resolution 687 (Cease-Fire Resolution) mandated that Iraq must respect the sovereignty of Kuwait and declare and destroy all ballistic missiles with a range of more than 150 kilometers as well as all weapons of mass destruction and production facilities.
- The U.N. Special Commission (UNSCOM) was charged with monitoring Iraqi disarmament as mandated by U.N. resolutions and with assisting the International Atomic Energy Agency in nuclear monitoring efforts.
- Proposed the creation of an Oil for Food program and authorized an escrow account to be established by the Secretary General. Iraq rejected the terms of this resolution.
- Second attempt to create an Oil for Food program. Iraq rejected the terms of this resolution.
- Authorized transferring money produced by any Iraqi oil transaction on or after August 6, 1990, which had been deposited into the escrow account, to the states or accounts concerned as long as the oil exports took place or until sanctions were lifted.
- Allowed Iraq to sell $1 billion worth of oil every 90 days. Proceeds were to be used to procure foodstuffs, medicine, and material and supplies for essential civilian needs. Resolution 986 was supplemented by several U.N. resolutions over the next 7 years that extended the Oil for Food program for different periods of time and increased the amount of exported oil and imported humanitarian goods.
- Established the export and import monitoring system for Iraq.
- Signed a memorandum of understanding allowing Iraq's export of oil to pay for food, medicine, and essential civilian supplies.
- Based on information provided by the Multinational Interception Force (MIF), communicated concerns about alleged smuggling of Iraqi petroleum products through Iranian territorial waters in violation of resolution 661 to the Security Council sanctions committee.
- Committee members asked the United States for more factual information about smuggling allegations, including the final destination and the nationality of the vessels involved.
- Provided a briefing on the Iraqi oil smuggling allegations to the sanctions committee.
- Acknowledged that some vessels carrying illegal goods and oil to and from Iraq had been using the Iranian flag and territorial waters without authorization and that Iranian authorities had confiscated forged documents and manifests. The representative agreed to provide the results of the investigations to the sanctions committee once they were available.
- Phase I of the Oil for Food program began.
- Extended the term of resolution 986 another 180 days (phase II).
- Authorized a special provision to allow Iraq to sell petroleum in a more favorable time frame.
- Brought the issue of Iraqi smuggling of petroleum products through Iranian territorial waters to the attention of the U.N. Security Council sanctions committee.
- The coordinator of the Multinational Interception Force (MIF) reported to the U.N. Security Council sanctions committee that since February 1997 there had been a dramatic increase in the number of ships smuggling petroleum from Iraq inside Iranian territorial waters.
- Extended the Oil for Food program another 180 days (phase III).
- Raised Iraq's oil export ceiling to about $5.3 billion per 6-month phase (phase IV).
- Permitted Iraq to export additional oil in the 90 days from March 5, 1998, to compensate for delayed resumption of oil production and reduced oil prices.
- Authorized Iraq to buy $300 million worth of oil spare parts to reach the export ceiling of about $5.3 billion.
- Public Law 105-235, a joint resolution, found Iraq in unacceptable and material breach of its international obligations.
- Oct. 31, 1998, U.S. legislation: the Iraq Liberation Act, Public Law 105-338, §4, authorized the president to provide assistance to Iraqi democratic opposition organizations.
- Iraq announced it would terminate all forms of interaction with UNSCOM and that it would halt all UNSCOM activity inside Iraq.
- Renewed the Oil for Food program for 6 months beyond November 26 at the higher levels established by resolution 1153. The resolution included additional oil spare parts (phase V).
- Following Iraq's recurrent blocking of U.N. weapons inspectors, President Clinton ordered 4 days of air strikes against military and security targets in Iraq that contributed to Iraq's ability to produce, store, and maintain weapons of mass destruction and potential delivery systems.
- President Clinton provided the status of efforts to obtain Iraq's compliance with U.N. Security Council resolutions. He discussed the MIF report of oil smuggling out of Iraq and smuggling of other prohibited items into Iraq.
- Renewed the Oil for Food program another 6 months (phase VI).
- Permitted Iraq to export an additional $3.04 billion worth of oil to make up for revenue deficits in phases IV and V.
- Extended phase VI of the Oil for Food program for 2 weeks until December 4, 1999.
- Extended phase VI of the Oil for Food program for 1 week until December 11, 1999.
- Renewed the Oil for Food program another 6 months (phase VII). Abolished Iraq's export ceiling for the purchase of civilian goods.
- Eased restrictions on the flow of civilian goods to Iraq and streamlined the approval process for some oil industry spare parts. Also established the United Nations Monitoring, Verification and Inspection Commission (UNMOVIC).
- Increased the oil spare parts allocation from $300 million to $600 million under phases VI and VII.
- Renewed the Oil for Food program another 180 days until December 5, 2000 (phase VIII).
- Extended the Oil for Food program another 180 days (phase IX).
- Ambassador Cunningham acknowledged Iraq's illegal re-export of humanitarian supplies, oil smuggling, establishment of front companies, and payment of kickbacks to manipulate and gain from Oil for Food contracts. He also acknowledged that the United States had put holds on hundreds of Oil for Food contracts that posed dual-use concerns.
- Ambassador Cunningham addressed questions regarding allegations of surcharges on oil and smuggling. He acknowledged that oil industry representatives and other Security Council members provided the United States anecdotal information about Iraqi surcharges on oil sales. He also acknowledged companies claiming they were asked to pay commissions on contracts.
- Extended the terms of resolution 1330 (phase IX) another 30 days.
- Renewed the Oil for Food program an additional 150 days until November 30, 2001 (phase X). The resolution stipulated that a new Goods Review List would be adopted and that relevant procedures would be subject to refinement.
- Renewed the Oil for Food program another 180 days (phase XI).
- UNMOVIC reviewed export contracts to ensure that they contained no items on a designated list of dual-use items known as the Goods Review List. The resolution also extended the program another 180 days (phase XII).
- The MIF reported that there had been a significant reduction in illegal oil exports from Iraq by sea over the past year but noted that oil smuggling was continuing.
- Extended phase XII of the Oil for Food program another 9 days.
- Renewed the Oil for Food program another 180 days until June 3, 2003 (phase XIII). Approved changes to the list of goods subject to review by the sanctions committee.
- The committee chairman reported on a number of alleged sanctions violations noted in letters from several countries and the media from February to November 2002. Alleged incidents involved Syria, India, Liberia, Jordan, Belarus, Switzerland, Lebanon, Ukraine, and the United Arab Emirates.
- Operation Iraqi Freedom was launched: a coalition operation led by the United States initiated hostilities in Iraq.
- Adjusted the Oil for Food program and gave the Secretary General authority for 45 days to facilitate the delivery and receipt of goods contracted by the Government of Iraq for the humanitarian needs of its people.
- Public Law 108-11, §1503, authorized the President to suspend the application of any provision of the Iraq Sanctions Act of 1990.
- Extended the provisions of resolution 1472 until June 3, 2003.
- End of major combat operations and beginning of post-war rebuilding efforts.
- Lifted civilian sanctions on Iraq and provided for the end of the Oil for Food program within 6 months, transferring responsibility for the administration of any remaining program activities to the Coalition Provisional Authority (CPA).
- Transferred administration of the Oil for Food program to the CPA.
- Responded to allegations of fraud by U.N. officials who were involved in the administration of the Oil for Food program. Proposed that a special investigation be conducted by an independent panel.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Oil for Food program was established by the United Nations and Iraq in 1996 to address concerns about the humanitarian situation after international sanctions were imposed in 1990. The program allowed the Iraqi government to use the proceeds of its oil sales to pay for food, medicine, and infrastructure maintenance. The program appears to have helped the Iraqi people. From 1996 through 2001, the average daily food intake increased from 1,300 to 2,300 calories. From 1997-2002, Iraq sold more than $67 billion of oil through the program and issued $38 billion in letters of credit to purchase commodities. GAO (1) reports on its estimates of the revenue diverted from the program, (2) provides preliminary observations on the program's administration, (3) describes some challenges in its transfer to the CPA, and (4) discusses the challenges Iraq faces as it assumes program responsibility. GAO estimates that from 1997-2002, the former Iraqi regime attained $10.1 billion in illegal revenues from the Oil for Food program, including $5.7 billion in oil smuggled out of Iraq and $4.4 billion through surcharges on oil sales and illicit commissions from suppliers exporting goods to Iraq. This estimate includes oil revenue and contract amounts for 2002, updated letters of credit from prior years, and newer estimates of illicit commissions from commodity suppliers. Both the U.N. Secretary General, through the Office of the Iraq Program (OIP), and the Security Council, through its sanctions committee for Iraq, were responsible for overseeing the Oil for Food Program. However, the Iraqi government negotiated contracts directly with purchasers of Iraqi oil and suppliers of commodities, which may have been one important factor that allowed Iraq to levy illegal surcharges and commissions. While OIP was responsible for examining Iraqi contracts for price and value, it is unclear how it performed this function. 
The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. While the sanctions committee responded to illegal surcharges on oil, it is unclear what actions it took to respond to illicit commissions on commodity contracts. OIP transferred 3,059 Oil for Food contracts--with pending shipments valued at $6.2 billion--to the CPA on November 22, 2003. However, the CPA stated that it has not received all the original contracts, amendments, and letters of credit it needs to manage the program. These problems, along with inadequate CPA staffing during the transfer, hampered the efforts of CPA's Oil for Food coordination center in Baghdad to ensure continued delivery of commodities. Poor planning, coordination, and the security environment in Iraq continue to affect the execution of these contracts. Inadequate oversight and corruption in the Oil for Food program raise concerns about the Iraqi government's ability to import and distribute Oil for Food commodities and manage at least $32 billion in expected donor reconstruction funds. The CPA has taken steps, such as appointing inspectors general, to build internal control and accountability measures at Iraq's ministries. The CPA and the World Food Program (WFP) are also training ministry staff to help them assume responsibility for Oil for Food contracts in July 2004. The new government will have to balance the reform of its costly food subsidy program with the need to maintain food stability and protect the poorest populations.
USPS has taken steps to respond to most of our prior recommendations to strengthen planning and accountability for its network realignment efforts. It has clarified how it makes realignment decisions and generally addressed how it integrates its realignment initiatives, but it has not established measurable performance targets for these initiatives. USPS believes that its budgeting process accounts for the cost reductions achieved through these initiatives. In our 2007 report we stated that without measurable performance targets for achieving its realignment goals, USPS remains unable to demonstrate to Congress and other stakeholders the costs and benefits associated with its network realignment initiatives. We also reported that although USPS had made progress on several of its realignment initiatives, it remained unclear how the various initiatives were individually and collectively contributing to the achievement of realignment goals because the initiatives lacked measurable targets. Appendix I provides a brief description and identifies the status of USPS's key realignment initiatives. Appendix II provides updated status information for all AMP consolidations through July 2008. PAEA calls for USPS to, among other matters, establish performance goals and identify anticipated costs, cost savings, and other benefits associated with the infrastructure realignment alternatives in its Network Plan. The Network Plan describes an overall goal to create an efficient and flexible network that results in lower costs for both the Postal Service and its customers, improves the consistency of mail service, and reduces the Postal Service's overall environmental footprint. In addition, the plan states that USPS's goals are continuous improvement and savings of $1 billion per year through realignment and other efforts. 
According to the plan, USPS will achieve these savings, in part, through three core realignment initiatives: Airport Mail Center (AMC) closures, AMP consolidations, and Bulk Mail Center (BMC) transformations. The specificity of the expected savings and other benefits related to the core initiatives varies in the plan's discussion of measurable goals, targets, and results achieved. Overall program targets: USPS estimated total savings of $117 million for AMC closures--including savings of $57 million in 2008 and $21 million in 2009--but provided no such figure for the AMP consolidations. Postal officials told us USPS is developing an overall program target for transforming the BMCs. Evaluation of results: USPS has measured the results of its AMP consolidations through a post-implementation review. In 2007, we identified data consistency problems with this review. USPS has addressed these problems in an updated handbook issued in 2008 by revising its data calculation worksheets. No analogous process exists for measuring the results of USPS's AMC closures, which included outsourcing some operations conducted at these facilities, relocating some operations to other postal facilities, and closing some facilities. We are issuing a report today on USPS's outsourcing activities, which discusses USPS's realignment decisions related to its AMCs. As part of this review, we concluded that USPS does not track and could not quantify the results of its outsourcing activities. We recommended that USPS establish a process to measure the results and effectiveness of those outsourcing activities that are subject to collective bargaining, including the AMCs. USPS agreed to establish a process for future outsourcing initiatives subject to collective bargaining, in which it would compare the financial assumptions that supported its outsourcing decision with actual contract award data 1 year after project implementation. 
When we met with USPS officials in June 2008, we asked why they did not have measurable performance goals and targets for the individual realignment initiatives. The Deputy Postmaster General explained that the realignment targets are captured in USPS's goal of saving $1 billion per year. Specifically, he explained that USPS will present its overall goals and targets in more detail as part of its internal budget, which will be presented to the Board of Governors in July 2008. USPS will have additional opportunities to provide information about its estimated costs and cost savings related to its realignment efforts in its annual report to Congress, which is required by the end of December. Developing and implementing more transparent performance targets and results can help inform Congress about the effectiveness of USPS's realignment efforts. In 2007, we found there was little transparency into how USPS's efforts were integrated with each other. We recommended that USPS explain how it will integrate the various initiatives that it will use in realigning the postal facilities network. In its Network Plan, USPS identifies three major realignment efforts: (1) Airport Mail Center closures, (2) consolidations of Area Mail Processing operations and (3) transformations of Bulk Mail Centers. USPS briefly addresses the integration of its network initiatives, stating that their overall impact and execution are tightly integrated, and provides a few examples, but little contextual information about what its future network will look like and how its realignment goals are being met. In a recent meeting, senior USPS officials provided more information that helps to put the integration of USPS's three network realignment initiatives in context. They said this integration is expected to reduce USPS's network and shrink its mail processing operations. 
After integrating these three efforts, they said, USPS will continue to be the "first and last mile"--the "first mile" being the point of entry for mail into the system, and the "last mile" being the delivery of mail to customers nationwide, as required to meet USPS's universal service mission. They expect to lower costs and achieve savings by reducing excess processing capacity and fuel consumption, as well as by working with the mailing industry to implement new technologies such as delivery point sequencing, flats sequencing, and Intelligent Mail®. Going forward, USPS has opportunities, in its annual report to Congress and in other reports and strategic plans, to further articulate how it plans to integrate these three initiatives and to what extent they are helping USPS meet its goals. USPS has partially responded to our prior recommendations related to improving delivery performance information by establishing delivery performance standards and committing to develop performance targets against these standards and provide them to the PRC in August. However, full implementation of performance measures and reporting is not yet completed. Delivery service performance is a critical area that may be affected by the implementation of the realignment initiatives. Delivery standards are essential for setting realistic expectations for mail delivery so that USPS and mailers can plan their mailing activities accordingly. Delivery performance information is critical for stakeholders to understand how USPS is achieving its mission of providing universal postal service, including requirements for the prompt, expeditious, and reliable delivery of mail throughout the nation. Delivery performance data are also necessary for USPS and its customers to identify and address delivery problems and to enable Congress, the PRC, and others to hold management accountable for results and to conduct independent oversight. 
Our July 2006 report found that USPS's delivery performance standards, measurement, and reporting needed improvement. We recommended that USPS update its outdated delivery standards, which did not reflect postal operations and thus were unsuitable for setting realistic expectations and measuring performance. We also recommended that the Service implement representative measures of delivery performance for all major types of mail because only one-fifth of mail volume was being measured and there were no representative measures for Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services. Furthermore, we recommended that USPS improve the transparency of its delivery standards, measurement, and reporting. In December 2006, Congress enacted postal reform legislation that required USPS to modernize its delivery standards and measure and report to the PRC on the speed and reliability of delivery for each market-dominant product. Collectively, market-dominant products represent 99 percent of mail volume. In December 2007, USPS issued its new delivery standards and has committed to measuring and reporting on delivery performance for market-dominant products starting in fiscal year 2009. Moreover, USPS provided a specific proposal for measuring and reporting its delivery performance to the PRC, which has requested public comment on USPS's proposal. Full implementation of delivery performance measures and reporting for all major types of mail will require both mailers and USPS to take actions to barcode mail and track its progress--a system referred to as Intelligent Mail®. USPS has taken steps to respond to our recommendations that it improve its communication of realignment plans and proposals with stakeholders. For key realignment efforts such as AMP consolidations, we found it is critical for USPS to communicate with and engage the public. 
Stakeholder input can help USPS understand and address customer concerns, reach informed decisions, and achieve buy-in. In our 2007 report, we concluded that USPS was not effectively engaging stakeholders and the public in its AMP consolidation process and effectively communicating decisions. For example, USPS was not clearly communicating to stakeholders what it was planning to study, why studies were necessary, and what study outcomes might be. In addition, USPS did not provide stakeholders with adequate notice of the public input meeting or materials to review in preparation for the meeting. Furthermore, according to stakeholders, USPS offered no explanation as to how it evaluates and weighs public input in its decision-making process. To help resolve these and other issues concerning how USPS communicates its realignment plans with stakeholders, we recommended that USPS take the following actions: Improve public notice. Clarify notification letters by explaining whether USPS is considering closing the facility under study or consolidating operations with another facility, explaining the next decision point, and providing a date for the required public meeting. Improve public engagement. Hold the public meeting during the data- gathering phase of the study and make an agenda and background information, such as briefing slides, available to the public in advance. Increase transparency. Update AMP guidelines to explain how public input is considered in the decision-making process. USPS has incorporated into its 2008 AMP Communication Plan several modifications aimed at improving public notification and engagement. Most notably, USPS has moved the public input meeting to an earlier point in the AMP process and plans to post a meeting agenda, summary brief, and presentation slides on its Web site 1 week before the public meeting. 
USPS has increased transparency, largely by clarifying its processes for addressing public comments and plans to make additional information available to the public on its Web site. In 2007, we found that stakeholders potentially affected by AMP consolidations could not discern from USPS's initial notification letters what USPS was planning to study and what the outcomes of the study might be. This lack of clarification led to speculation on the part of stakeholders, which in turn increased public resistance to USPS's realignment efforts. The initial notification letters were also confusing to stakeholders because they contained jargon and lacked adequate context to understand the purpose of the study. Furthermore, in 2007 we reported that stakeholders were not given enough notice about the public meeting, and we recommended that USPS improve public notice by providing stakeholders with a date for the public meeting earlier in the AMP process. In its 2008 AMP Communication Plan, USPS has eliminated most of the jargon from its notification letters and has generally provided more context as to why it is necessary for USPS to conduct the feasibility studies. For example, letters now name both facilities that would be affected by a proposed consolidation, whereas previously, only one facility was named. USPS also added a requirement that the public be notified at least 15 days in advance of a public meeting. In 2007, we found that public meetings required for AMP consolidations were occurring too late in the decision-making process for the public to become engaged in this process in any meaningful way. At that time, the meetings were held after the area office and headquarters had completed their reviews of the AMP consolidation studies and just before headquarters had made its final consolidation decisions. 
Stakeholders we spoke with were not satisfied with the public input process and told us that USPS solicited their input only when it considered the AMP consolidation a "done deal." We also found that USPS did not publish agendas in advance of public meetings or provide the public with much information about the proposed studies. The only information available was a series of bullet points posted on USPS's Web site several days before the meetings. This lack of timely and complete information further inhibited the public's ability to meaningfully participate in the process. To make the meetings more focused and productive, and to give the public an opportunity to adequately prepare for them, we recommended that USPS make an agenda and background information available to the public in advance of the public meetings. Although USPS still holds the public meetings after the data-gathering phase of the study has been completed, the meeting now occurs earlier in the AMP review process. Currently, before the meeting, the study has been approved only at the district level--the area office and headquarters have not yet completed their reviews or validated the data by the time of the meeting. When we asked USPS why it did not move the meeting to the data-gathering phase of the study, USPS officials responded that it would be difficult to hold the meeting during the data-gathering phase because at that point, they do not know what operations could potentially be consolidated. However, to ensure that the public meeting is held within a reasonable amount of time after the study's completion, USPS included a requirement in its 2008 AMP Communication Plan that the public meeting take place within 45 days after the District Manager forwards the study to the area office and headquarters. 
In addition, the initial notification letter now includes contact information for the local Consumer Affairs Manager, to whom the public can submit written comments up to 15 days after the public meeting; previously, this contact information appeared in the second notification letter. To help stakeholders better prepare for the public meeting, USPS plans to post a meeting agenda, presentation slides, and a summary brief of the AMP proposal on its Web site 1 week before the meeting. In addition, USPS plans to inform stakeholders in the public meeting notification letter that these materials will be posted on its Web site 1 week before the meeting. In our 2007 report, we found that stakeholders and the public were unclear as to how public input factored into USPS's consolidation decisions. They wanted to know precisely how USPS took their input--letters, phone calls, public meeting results--into consideration when it made its decisions. We recommended that USPS increase the transparency of its decision-making process by explaining how it considers public input in the decision-making process. In a recent interview, senior USPS officials identified two additions to the 2008 AMP Communication Plan that address stakeholders' concerns about how USPS considers public input. First, USPS considers written comments from stakeholders before the public input meetings and addresses these comments as part of the public input meetings. Second, USPS has modified its public input review process so that officials at the district, area, and headquarters levels consider, and are responsive to, public concerns. Senior USPS officials told us that they weigh public input primarily by considering the impact of any consolidations on customer services and service standards. Additionally, USPS officials told us that as AMP consolidations go forward, USPS will post standard information about each consolidation on its Web site and update this information regularly. 
Specifically, USPS plans to post initial notifications, a summary brief of the proposed AMP consolidation, specifics about the scheduled public meeting, a summary of written and verbal public input, and the final decision and implementation plans if an AMP consolidation is approved. Congress has also addressed USPS's communication process. PAEA required USPS to describe its communication procedures related to AMP consolidations in its Network Plan. In response, the Network Plan discusses how USPS will publicly notify communities potentially affected by realignment changes and how it will obtain and consider public input. In addition, PAEA directed USPS to identify any statutory or regulatory obstacles that have prevented it from taking action to realign or consolidate facilities. Accordingly, USPS's Network Plan identified delays related to implementing AMP consolidations. For example, USPS was directed not to implement certain consolidations until after GAO has reported to Congress on whether USPS has implemented GAO recommendations from its report issued in July 2007 to strengthen planning and accountability in USPS's realignment efforts. These directions were included in the joint explanatory statement accompanying the Consolidated Appropriations Act for fiscal year 2008. We have previously discussed the difficulties that stakeholder resistance poses for USPS when it tries to close facilities and how delays may affect USPS's ability to achieve its cost-reduction and efficiency goals. Part of the problem stemmed from USPS's limited communication with the public. We believe that USPS has made significant progress toward improving its AMP communication processes since 2005. Now, it will be crucial for USPS, in going forward, to establish and maintain an ongoing and open dialogue with its various stakeholders, including congressional oversight committees and Members of Congress who have questions or are concerned about proposed realignment changes. Mr. 
Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or Members of the Subcommittee may have. For further information about this statement, please contact Phillip Herr, Director, Physical Infrastructure Issues, at (202) 512-2834 or at [email protected]. Individuals making key contributions to this statement included Teresa Anderson, Kenneth John, Summer Lingard, Margaret McDavid, and Jaclyn Nidoh. Realignment of Airport Mail Centers (AMC) AMCs are postal facilities that have traditionally been operated for the purpose of expediting the transfer of mail to and from commercial passenger airlines. USPS's Network Plan stated that USPS had terminated operations at 46 AMCs during fiscal years 2006 and 2007, and another 8 AMCs in fiscal year 2008. AMP consolidations of mail processing operations are intended to reduce costs and increase efficiency by eliminating excess capacity at USPS's more than 400 processing plants. From 2005 through July 2008, USPS implemented 11 AMP consolidations, decided not to implement 35 studies (5 placed on indefinite hold), was continuing to consider 7 consolidations, and had closed 1 facility after consolidation. Because mailers have increased their sorting and transport of mail shipments to postal facilities near mail destinations, mailers have been bypassing BMCs and the centers are underused. Also, increased highway contract expenses and an aging postal distribution infrastructure have prompted USPS to evaluate its BMC network to determine how it can best support future postal operations. In July 2008, USPS issued a Request for Proposal to obtain input on a proposal to outsource some of its BMC workload so that USPS can use its 21 BMCs for alternative postal work. The Regional Distribution Centers were expected to perform bulk processing operations and act as Surface Transfer Centers and mailer entry points. 
The Network Plan stated that this initiative has been discontinued because USPS determined that it would not generate the benefits originally anticipated. GAO. U.S. Postal Service: Data Needed to Assess the Effectiveness of Outsourcing. GAO-08-787. Washington, D.C.: July 24, 2008. GAO. U.S. Postal Service: Progress Made in Implementing Mail Processing Realignment Efforts, but Better Integration and Performance Measurement Still Needed. GAO-07-1083T. Washington, D.C.: July 26, 2007. GAO. U.S. Postal Service: Mail Processing Realignment Efforts Under Way Need Better Integration and Explanation. GAO-07-717. Washington, D.C.: June 21, 2007. GAO. U.S. Postal Service: Delivery Performance Standards, Measurement, and Reporting Need Improvement. GAO-06-733. Washington, D.C.: July 27, 2006. GAO. U.S. Postal Service: The Service's Strategy for Realigning Its Mail Processing Infrastructure Lacks Clarity, Criteria, and Accountability. GAO-05-261. Washington, D.C.: April 8, 2005. GAO. U.S. Postal Service: USPS Needs to Clearly Communicate How Postal Services May Be Affected by Its Retail Optimization Plans. GAO-04-803. Washington, D.C.: July 13, 2004. GAO. U.S. Postal Service: Bold Action Needed to Continue Progress on Postal Transformation. GAO-04-108T. Washington, D.C.: November 5, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has issued reports on the U.S. Postal Service's (USPS) strategy for realigning its mail processing network and improving delivery performance information. These reports recommended that the Postmaster General (1) strengthen planning and the overall integration of its realignment efforts, and enhance accountability by establishing measurable targets and evaluating results, (2) improve delivery service standards and performance measures, and (3) improve communication with stakeholders by revising its Area Mail Processing (AMP) Communication Plan to improve public notice, engagement, and transparency. The 2006 postal reform act required USPS to develop a network plan by June 2008 that described its vision and strategy for realigning its network; the anticipated costs, cost savings, and other benefits of its realignment initiatives; performance measures for its delivery service standards; and its communication procedures for consolidating AMP operations. This testimony discusses USPS's actions toward addressing GAO recommendations to (1) strengthen network realignment planning and accountability, (2) improve delivery performance information, and (3) improve communication with stakeholders. This testimony is based on prior GAO work, a review of USPS's 2008 Network Plan and revised AMP Communication Plan, and updated information from USPS officials. USPS did not have comments on this testimony. USPS has taken steps to respond to most of GAO's prior recommendations to strengthen planning and accountability for its network realignment efforts. In its June 2008 Network Plan, USPS clarified how it makes realignment decisions, and generally addressed how it integrates its realignment initiatives. However, USPS has not established measurable performance targets for its realignment initiatives. USPS believes that its budgeting process accounts for the cost reductions achieved through these initiatives. 
The Deputy Postmaster General explained that such performance targets are captured in USPS's overall annual goal of achieving $1 billion in savings. While these measures are not as explicit or transparent as GAO had recommended, USPS is required to report annually by the end of December to Congress on, among other matters, its realignment costs and savings. Also, USPS's annual compliance reports to the Postal Regulatory Commission (PRC) will provide opportunities for further transparency of performance targets and results. USPS's Network Plan notes that to respond to declining mail volumes, USPS must increase efficiency and decrease costs across all its operations. Given USPS's challenging financial situation, effective implementation of network realignment is needed, and USPS's annual reports could help inform Congress about the effectiveness of its realignment efforts. USPS has partially responded to GAO's recommendations to improve its delivery performance standards, measurement, and reporting, but full implementation of performance measures and reporting is not yet completed. USPS established delivery performance standards in December 2007. USPS's Network Plan stated that USPS would develop targets and measures to assess performance against these standards by fiscal year 2009. In addition, USPS has recently submitted a proposal for measuring and reporting on delivery service performance to the PRC. The PRC has requested public comment on USPS's proposal, which depends upon USPS and mailers implementing new technology. Delivery service performance is a critical area that may be affected by the implementation of the realignment initiatives. 
USPS has also taken steps to address GAO's recommendations to improve communication with its stakeholders as it consolidates its AMP operations by modifying its Communication Plan to improve public notification and engagement, increasing transparency by clarifying its processes for addressing public comments, and making additional information available on its Web site. Going forward, it will be crucial that USPS establishes and maintains an ongoing and open dialogue with stakeholders, including congressional oversight committees and Members of Congress who have questions or are concerned about proposed realignment changes.
IRS relies on data from the Social Security Administration (SSA) to determine the accuracy of SSNs and names recorded on tax documents submitted by individual taxpayers. IRS uses this information to establish the identity of each taxpayer and to ensure that each transaction is posted to the correct account on the Individual Master File (IMF). When processing paper tax returns with missing or incorrect SSNs, IRS service centers first try to make corrections by researching IRS files or other documents (for example, Form W-2 wage and tax statements) that accompany a tax return. Returns that can be corrected, along with those that match SSA records, are posted to the "valid" segment of the IMF. Returns that cannot be corrected are posted to the "invalid" segment of the IMF, using either the incorrect SSN on the tax return or a temporary number assigned by IRS. As of January 1, 1995, 4.3 million accounts were posted on the invalid segment of the IMF, and 153.3 million accounts were posted on the valid segment. IRS created the invalid segment of the IMF to store the accounts of taxpayers who had changed their names, because of marriage or divorce for example, and had not yet informed SSA of the name change. However, IRS has posted returns to the invalid segment of the IMF to cover other situations, such as when a taxpayer (1) uses the SSN of another individual, (2) uses an SSN that is not issued by SSA, or (3) is assigned a temporary number. IRS tries to resolve invalid accounts and move them to the valid segment of the IMF by corresponding with taxpayers to verify their identities, periodically matching invalid accounts against updated SSA records, and reviewing tax documents subsequently filed by taxpayers. Our objectives were to (1) measure the growth of accounts on the invalid segment of the IMF, (2) assess IRS' procedures to verify the identities of tax return filers whose returns were posted to the IMF invalid segment, and (3) identify any effects the procedures may have on IRS' Tax Systems Modernization (TSM) goals and its income-matching program. 
To measure the growth of accounts on the IMF invalid segment, we reviewed IRS management and internal audit reports about the growth and composition of accounts on the IMF. We also interviewed officials at IRS' National Office on the makeup of the IMF invalid segment and the reasons for the growth in these accounts. To assess IRS' procedures for verifying taxpayer identities, we reviewed (1) IRS procedures (1995 and pre-1995) for processing returns with missing or incorrect SSNs, (2) the notice IRS uses to verify taxpayer identities, and (3) other pertinent documents. We also interviewed officials at IRS' National Office and at IRS' Austin, TX; Cincinnati, OH; Fresno, CA; Ogden, UT; and Philadelphia, PA service centers on the process for posting returns to the IMF invalid segment and changes implemented in 1995 to verify taxpayer identities. We chose Cincinnati because of its proximity to the audit team conducting the work. We chose the other 4 centers because, out of IRS' 10 service centers, they processed and posted more than 60 percent of the accounts on the IMF invalid segment in 1994. To identify the potential effects of IRS' posting procedures, we did the following: We selected a random sample of 400 tax year 1993 returns from accounts that were posted to the IMF invalid segment before IRS implemented its new procedures. Our sample results are not projectable to the universe of accounts on the IMF invalid segment. Our objective was to determine whether the filers accurately reported their wages and withheld taxes. The sample consisted of returns with refunds of more than $1,000 that were posted to the IMF invalid segment by the Austin, Fresno, Ogden, and Philadelphia service centers between January 1, 1994, and June 30, 1994. The 400 returns included 50 from each center that had been posted with IRS temporary numbers and 50 from each center that had been posted with incorrect SSNs. 
The Cincinnati service center's Criminal Investigation Branch contacted employers of the 400 filers to verify employment and wage information. The branch obtained responses on 357 returns. For the 43 returns with no response, we verified the wage information using information return transcripts. We analyzed 100 of the 400 returns to determine why they posted to the IMF invalid segment and to profile some of the filers' characteristics. The 100 returns included 25 returns (12 that had been posted with temporary numbers and 13 that had been posted with incorrect numbers) randomly selected from each of the 4 service centers. Among the 100 returns were 58 that were posted to accounts containing a computer code that automatically released refunds. We also interviewed cognizant officials from IRS' National Office and the previously mentioned service centers regarding any effects that returns with missing or incorrect SSNs may have on IRS' income-matching programs and its TSM plans. We reviewed IRS reports on TSM plans and analyzed documents relating to IRS' processing costs. We did our audit work from December 1993 through May 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from you or your designee. On June 21, 1995, the Assistant Commissioner for Taxpayer Services, the Staff Chief for the National Director of Submission Processing, and other IRS staff, including representatives from the Office of Chief Counsel, provided us with oral comments. Their comments are summarized and evaluated on pages 13 and 14 and incorporated in this report where appropriate. From 1986 through 1994, according to IRS data, the average annual growth rate of accounts on the invalid segment of the IMF was more than twice the growth rate of accounts on the valid segment--5 percent versus 2 percent, respectively. Figure 1 shows year-to-year growth rates since 1986. 
During this period, the number of accounts on the invalid segment of the IMF grew from 2.8 million on January 1, 1986, to 4.3 million on January 1, 1995, while the number of valid accounts grew from 130.2 million to 153.3 million. From 1990 through 1994, the size of the IMF invalid segment grew by about 821,000 accounts. Most of that growth (52 percent) resulted from IRS' increased use of temporary numbers to process and post returns. Accounts with incorrect numbers made up the other 48 percent. The IRS National Office official responsible for monitoring accounts on the master file explained that the increase in accounts with temporary numbers stemmed from IRS' decision in 1990 to not send verification notices to taxpayers whose returns were processed with temporary numbers. Many of these filers, he said, cannot obtain SSNs because they are not legal residents of the United States but are entitled to refunds of withheld taxes or earned income credits. He said that most of these taxpayers were using temporary numbers verified in previous years and that requiring reverification each year would have unduly increased taxpayer burden. He speculated that when IRS' decision not to require verification became more widely known, more taxpayers who could not obtain SSNs began filing tax returns. Another factor affecting the number of accounts on the invalid segment of the master file was IRS' willingness to release refunds and allow the accounts to remain on the invalid segment, even though taxpayers' responses to the verification notice did not resolve the invalid condition. Before 1995, IRS accepted a taxpayer's response that a return was "correct as filed," and taxpayers were not required to provide documentation (marriage certificate, birth certificate, etc.) to verify their identities. In 1994, IRS paid out $1.4 billion in refunds on returns posted to the IMF invalid segment. 
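The average annual growth rates cited above can be checked against the endpoint account counts reported for January 1, 1986, and January 1, 1995 (2.8 million to 4.3 million invalid accounts; 130.2 million to 153.3 million valid accounts). The figures below are from the report; the compound-annual-growth-rate formula is the only assumption in this back-of-the-envelope check.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two account counts."""
    return (end / start) ** (1 / years) - 1

# IMF invalid segment: 2.8 million (Jan 1, 1986) to 4.3 million (Jan 1, 1995), 9 years.
invalid_rate = cagr(2.8e6, 4.3e6, 9)
# IMF valid segment: 130.2 million to 153.3 million over the same period.
valid_rate = cagr(130.2e6, 153.3e6, 9)

print(f"invalid segment: {invalid_rate:.1%}")  # roughly 5 percent, as reported
print(f"valid segment:   {valid_rate:.1%}")    # roughly 2 percent, as reported
```

The computed rates (about 4.9 percent and 1.8 percent) are consistent with the report's rounded figures of 5 percent and 2 percent, with the invalid segment growing more than twice as fast.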
As part of its efforts to combat refund fraud, IRS revised its procedures in January 1995 to require that taxpayers provide documentation to verify their identities. In announcing that IRS would delay refund claims for individuals lacking proper identification numbers, you stated that, consistent with the way financial institutions manage withdrawals of funds, IRS should not permit refunds from the federal treasury without a valid taxpayer identification number. Under the revised procedures, when a taxpayer's return with a refund request is posted to the IMF invalid segment for the first time, IRS is to freeze the refund and correspond with the taxpayer in an attempt to verify the taxpayer's identity. Filers with missing or incorrect SSNs who request a refund are to be required to provide a reasonable explanation for the discrepancy and proof of their identity (such as a marriage certificate, birth certificate, earnings statement, or passport) before the refund will be released. The requirement applies to filers whose returns are posted with temporary numbers as well as filers whose returns are posted with incorrect numbers. Once a taxpayer responds satisfactorily to IRS' verification notice, IRS is to release the refund. Previously, IRS automatically issued refunds to filers with temporary numbers and did not require proof of identity from filers with incorrect numbers before releasing their refunds. IRS uses the CP54B notice to verify taxpayers' identities before issuing a refund. The current version of the CP54B notice does not reflect IRS' revised procedures. It does not clearly convey that persons who file with missing or incorrect numbers, including filers who were issued temporary numbers, are required to provide documentation verifying their identities. (Appendix I contains a copy of the CP54B notice annotated to show misleading or potentially confusing sections.) 
A revised version of the CP54B notice has been developed that reflects IRS' revised procedures but, as of July 1995, had not been finalized. Until the revised notice is available, IRS National Office officials told us that they plan to use the current version of the notice, followed by additional correspondence if the taxpayer does not respond in accordance with the revised procedures. This practice will increase IRS' processing costs, create additional taxpayer burden, and delay the issuance of some refunds. IRS expects to send out about 616,000 CP54B notices in 1995. IRS officials said that review and approval of the revised notice was taking longer than expected. As of June 21, 1995, the revision had been approved by the National Office Notice Clarity Unit and was being reviewed by the National Automation Advisory Group. That group is to assign a priority for making the computer programming changes necessary to finalize the notice. If the notice is not assigned the highest priority, we are concerned, on the basis of past work, that it will not be revised in time for use during the 1996 tax-filing season, beginning in January 1996. In December 1994, we reported on the lengthy notice-review process and noted that many recommended notice revisions were delayed or never made because of IRS' limited computer-programming resources. As one way of avoiding computer-programming delays, we recommended that IRS test the feasibility of transferring notices to its Correspondex System--a more modern computer system that produces other types of IRS correspondence. IRS National Office officials told us that they do not plan to apply the revised procedures to filers with prior accounts on the IMF invalid segment who file again using the same name and number combination. Thus, these filers would not need to verify their identities before receiving future refunds, although the mismatch with SSA records may continue to exist. 
According to IRS data, at least 3.2 million of the 4.3 million accounts on the IMF invalid segment, as of January 1, 1995, will not be subject to the new procedures. Instead, IRS placed a permanent computer code on the accounts so that the system will automatically release future refunds. IRS' rationale for exempting these accounts from the revised verification procedures is that most of these filers had already responded to a previous CP54B and requiring them to respond again would increase taxpayer burden. But responses to the previous CP54B were done under IRS' old verification procedures, which, as we noted previously, did not require proof of identity. Thus, IRS has no assurance that the earlier responses were satisfactory. Our analysis of the reasons 58 tax year 1993 returns were posted to the IMF invalid segment with automatic refund release codes raised questions about IRS' plans. We noted, for example, that 27 of the returns were filed by persons who either used SSNs not issued by SSA or used another individual's SSN, including 11 filers who used SSNs belonging to children and 5 filers who used SSNs belonging to deceased taxpayers. Under these circumstances, IRS was less certain of filers' identities than if taxpayers had filed using names and numbers that matched SSA files. Table 1 shows the circumstances under which those 58 returns were posted to the invalid segment of the IMF. Another reason for IRS to reconsider its decision to exclude some filers from the revised procedures is the fraud risk associated with accounts on the IMF invalid segment. Our analysis of 400 refunds of $1,000 or more that were issued to taxpayers whose returns were posted on the IMF invalid segment surfaced only one instance in which a taxpayer appeared to have misstated his wages and withheld taxes. In that instance, a return was filed with a wage and tax statement that had been issued to another person. 
However, there are other ways to get fraudulent refunds besides claiming improper wages and/or withholdings. IRS has developed a profile of high-risk filers that it uses to help identify potentially fraudulent returns. According to that profile, many filers whose returns are posted to the invalid segment of the IMF pose a higher risk of fraud than filers whose returns are posted to the valid segment. For example, IRS has determined that filers claiming the Earned Income Credit (EIC) are more likely to claim fraudulent refunds than those who do not. In April 1995, IRS' Internal Audit Office reported that returns on the IMF invalid segment are four times more likely than returns on the valid segment (54 percent versus 12 percent, respectively) to include an EIC claim. Internal Audit also noted that 41 percent of the cases identified through September 1994 by IRS' EIC Unallowable Program were filed with invalid SSNs. In contrast, according to Internal Audit, returns with invalid SSNs represented only 1 percent of the total individual Form 1040 population. Of the unallowable cases closed by IRS, 84 percent with invalid SSNs had EIC amounts reversed, compared with 69 percent with valid SSNs. Of the 100 returns posted to the IMF invalid segment in our sample, 90 claimed the EIC. Also, the filing status claimed on 40 of the returns in our sample matched another characteristic in IRS' profile of high-risk filers. IRS' new verification procedures, if applied to filers with pre-1995 accounts on the IMF invalid segment, could help to limit these risks because they would enable IRS to more easily identify filers who attempt to claim duplicate refunds. Under TSM, IRS plans to access account information on taxpayers, using either the primary or secondary SSN. IRS also plans to consolidate existing, separate taxpayer databases into a single database. 
With a single database and the ability to access account information on every taxpayer, IRS would be in a much better position to maintain accurate, up-to-date accounts and respond to taxpayer inquiries. Before IRS can effectively implement its plans, it will have to identify and merge multiple taxpayer accounts on its current files. For example, the current master file structure with its valid and invalid segments allows two or more taxpayers to have accounts under the same SSN, or one taxpayer to have several accounts under different numbers. To begin the clean-up process, IRS mailed out 189,000 letters in December 1994 to taxpayers whose returns were posted to the IMF invalid segment because they used an SSN that had not been issued by SSA. The letter instructed taxpayers to contact SSA to obtain a correct SSN. This effort is only a first step, however, and IRS will need to do much more to clean up the rest of its IMF records. IRS' clean-up task is further complicated because IRS plans to include secondary filers (generally the spouse on a joint return) in its database. According to IRS data, as of February 1995, IRS had at least 47 million IMF accounts with secondary filers. Presently, IRS does not require that secondary IMF filers verify their identities. One particular complication, according to an IRS official, will involve merging the accounts of taxpayers who are secondary filers on the IMF valid segment and primary filers on the invalid segment. Currently, IRS does not try to merge these accounts. Each year, IRS matches the income claimed by taxpayers with the income reported by third parties on information returns. IRS relies on a taxpayer's name and SSN, as reported on a tax return and associated information returns, to perform the matches. Discrepancies in reported income are used by IRS to detect underreported income or nonfiling of tax returns. 
In most cases, returns posted to the IMF invalid segment with temporary numbers are not available for use in IRS' matching program. This occurs because temporary numbers are unique to IRS and cannot be matched against taxpayer identifiers on information documents. Omitting these taxpayers from IRS' matching program hampers efforts to detect underreported income and nonfiling. In addition, posting returns with incorrect SSNs may complicate IRS' matching program if information returns report income for a different name and/or SSN. Unless IRS is able to make corrections through the additional research it does to check for erroneous mismatches, false leads could be generated that siphon IRS resources away from more productive cases. IRS has developed a proposal that could alleviate some of the problems associated with matching returns posted with temporary numbers. IRS officials told us that many of the returns assigned temporary numbers involved nonresident or illegal aliens who are not eligible to obtain SSNs. Under the proposal, IRS would assign permanent Individual Taxpayer Identification Numbers (ITIN) to these taxpayers, following a process similar to that used by SSA to verify identities and assign SSNs. Taxpayers with ITINs would then be required to use their ITINs when filing tax returns, and their returns could be posted to the valid segment of the IMF. Persons with ITINs would also be encouraged to use their ITINs when engaging in financial transactions that are subject to information reporting. Those who did so would be included in IRS' matching program. IRS is currently obtaining public comments on a regulation, signed by the Department of the Treasury on March 9, 1995, to implement the ITIN proposal. Since 1986, the number of accounts on the IMF invalid segment has grown faster than the number of accounts on the valid segment. 
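The income-matching mechanics described above can be illustrated with a toy example: returns are joined to third-party information returns on the taxpayer identifier, so a return filed under an IRS-only temporary number has nothing to match against and drops out of the program. The field names, identifier formats, and data below are illustrative assumptions, not IRS record layouts.

```python
# Toy illustration of the income-matching idea: match each filed return to
# employer-reported wages keyed by SSN. A temporary number exists only in IRS
# files, so it cannot match any information return. Data are invented.
tax_returns = [
    {"id": "123-45-6789", "name": "DOE", "wages_claimed": 30000},  # valid SSN
    {"id": "9XX-00-0001", "name": "ROE", "wages_claimed": 12000},  # temporary number
]
info_returns = {  # hypothetical W-2 wage amounts keyed by SSN
    "123-45-6789": 30000,
}

unmatched, discrepancies = [], []
for ret in tax_returns:
    reported = info_returns.get(ret["id"])
    if reported is None:
        unmatched.append(ret["id"])       # falls out of the matching program
    elif reported != ret["wages_claimed"]:
        discrepancies.append(ret["id"])   # lead for underreporting follow-up

print(unmatched)  # ['9XX-00-0001'] -- the temporary-number filer
```

The ITIN proposal would move such filers onto identifiers that also appear on information returns, allowing them to be included in the match.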
IRS risks errors when issuing refunds to filers on the IMF invalid segment because it cannot verify a filer's identity against SSA records. Moreover, some accounts on the IMF invalid segment cannot be included in IRS' income-matching program. IRS took steps in 1995 that, when fully implemented, could help reduce the number of accounts on the IMF invalid segment. For example, IRS is doing more to verify the identities of taxpayers who file returns with missing or incorrect SSNs, and it plans to issue permanent identification numbers to taxpayers that could be used in IRS' matching program. We identified several areas where IRS could make additional improvements. IRS has not finished revising the notice used to verify taxpayer identities, and our past work indicates that the revision process has been lengthy. The current version of the notice does not adequately explain IRS' revised documentation requirements and is causing additional taxpayer contacts. To reduce taxpayer burden and IRS costs, it is important that the revised notice be available for the 1996 filing season. IRS is not applying its revised documentation requirements to taxpayers whose returns were posted to the IMF invalid segment prior to 1995 and who have a permanent refund release code on their accounts. Our review of accounts posted on the IMF invalid segment that would be exempted under IRS' plan and IRS' profile of high-risk filers raises questions about whether IRS should exclude such filers from its revised documentation requirements. Verification of these filers' accounts should also help complete the cleanup of taxpayer accounts that will be necessary as part of IRS' modernization. 
To improve the processing of returns with missing or incorrect SSNs and help clean up accounts currently posted on the IMF invalid segment, we recommend that you finalize the CP54B notice in time for use during the 1996 tax-filing season, and apply the revised documentation requirements to taxpayers who filed tax returns that were posted to the IMF invalid segment before 1995 and whose accounts now have a permanent refund release code. We requested comments on a draft of this report from you or your designee. The draft included three proposed recommendations. IRS officials, including the Assistant Commissioner for Taxpayer Services and the Staff Chief for the Director of Submission Processing, provided oral comments in a meeting on June 21, 1995. On the basis of their comments, which are summarized in this section, we modified one of our proposed recommendations and withdrew another. IRS agreed with the other recommendation. Because of the delays inherent in IRS' current notice-revision process, our draft report included a recommendation that IRS assess the feasibility of producing the CP54B verification notice on the Correspondex System, as discussed in our December 1994 report. The Assistant Commissioner for Taxpayer Services agreed that a revised notice was needed, but she said that the best way to accomplish this is to proceed with the revision process currently under way. She assured us that the revised notice would be available for use during the 1996 filing season. Given the Assistant Commissioner's assurances, we have revised our recommendation to delete any reference to the use of the Correspondex System. IRS agreed with our recommendation that it apply the revised documentation requirements to the IMF invalid segment accounts with permanent refund release codes. The Staff Chief said that a task force, working in cooperation with internal auditors, is determining the best way to verify accounts placed on the IMF invalid segment before 1995. 
IRS plans to focus on verifying active accounts, which it estimates make up 38 percent of the accounts on the IMF invalid segment. (An account containing a recent tax return, for example, would be considered active.) IRS also plans to remove IMF invalid segment accounts that have been inactive for a certain period, similar to the treatment of accounts on the valid segment. The task force is also working to reverse the permanent refund release code on the IMF invalid segment accounts that were established before 1995. IRS' actions, if properly implemented, would respond to our recommendation. We also included a proposed recommendation in our draft report that IRS send back to taxpayers returns that are filed with missing SSNs or SSNs that were not issued by SSA. IRS data indicated that it was less costly to send these returns back to taxpayers than it was to post the returns to the master file, send taxpayers a CP54B notice, and process their responses. IRS disagreed with our proposal on the basis that an individual income tax return with a missing SSN or an SSN that was not issued by SSA is considered a valid return under the Internal Revenue Code. Because the return is valid, IRS officials asserted that a court would hold that the statute of limitations on assessment and collection would begin when the return was first filed, even though it was returned to the taxpayer because of the invalid condition. Thus, IRS might limit its ability to recover the return from the taxpayer and take any necessary enforcement actions if the process of resolving the invalid condition became lengthy. We considered IRS' argument persuasive and have withdrawn our proposed recommendation. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 
720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this letter. We are sending copies of this report to various congressional committees, the Secretary of the Treasury, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. The major contributors to this report are listed in appendix II. If you or your staff have any questions about this report, you can reach me at (202) 512-9110.

The following are GAO's comments on IRS' Notice CP54B (1994 Version).

1. The wording "REFUND DELAYED" is the only indication at the beginning of the notice that the taxpayer will not be receiving his/her refund and that the refund will be delayed until the taxpayer resolves the discrepancy to IRS' satisfaction.

2. The notice does not accommodate filers who were issued temporary numbers. It gives instructions on what to do when there are differences in the last name or SSN, but it does not explain what filers with temporary numbers must do to have their refunds released.

3. A taxpayer might presume from the wording in this section that providing the information IRS requests will release the refund, when in fact the refund would be released only if the new information matches SSA's records.

4. This section of the notice does not require that a taxpayer send anything back to IRS and, again, does not make it clear that the taxpayer's refund will not be released until the discrepancy is cleared up. All it says is "If you wish, you may provide IRS with . . . ." 
Service center staff told us that taxpayers are expected to provide this kind of information, and if it is not provided, IRS will correspond again with taxpayers to obtain it.

5. This section has problems similar to those described in comment 4. It does not require that taxpayers send anything to IRS and thus is not clear about how or on what basis IRS will decide to release the refund.

Rachel DeMarcus, Assistant General Counsel
Shirley A. Jones, Attorney Advisor
GAO reviewed the Internal Revenue Service's (IRS) procedures for processing and posting tax returns with missing or incorrect social security numbers (SSN), focusing on: (1) the growth in IRS individual master file (IMF) accounts with missing or incorrect SSN; (2) IRS procedures for verifying the identities of tax return filers; and (3) the potential effect of these procedures on IRS plans to modernize the tax system and on the income-matching program. GAO found that: (1) the average annual growth rate for invalid IMF accounts was significant from 1986 through 1994; (2) IRS has revised its procedures to require taxpayers with missing or incorrect SSN or temporary numbers to provide documentation that verifies their identity; (3) these revised procedures could help reduce the number of invalid IMF accounts when fully implemented; (4) the IRS Tax Systems Modernization program is in jeopardy because the master file structure allows two or more taxpayers to have accounts under the same number, or one taxpayer to have several accounts under different numbers; (5) the IRS income-matching program is hampered by posting returns to IMF invalid accounts; and (6) IRS plans to assign permanent taxpayer identification numbers to filers that are ineligible to obtain SSN and encourage the use of these numbers on information returns.
We found that the VA reprocessing requirements we selected for review are inadequate to help ensure veterans' safety. Lack of specificity about types of RME that require device-specific training. The VA reprocessing requirements we reviewed do not specify the types of RME for which VAMCs must develop device-specific training. This inadequacy has caused confusion among VAMCs and contributed to inconsistent implementation of training for reprocessing. While VA headquarters officials told us that the training requirement is intended to apply to RME classified as critical--such as surgical instruments--and semi-critical--such as certain endoscopes, officials from five of the six VAMCs we visited told us that they were unclear about the RME for which they were required to develop device-specific training. Officials at one VAMC we visited told us that they did not develop all of the required reprocessing training for critical RME--such as surgical instruments--because they did not understand that they were required to do so. Officials at another VAMC we visited also told us that they had begun to develop device-specific training for reprocessing non-critical RME, such as wheelchairs, even though they had not yet fully completed device-specific training for more critical RME. Because these two VAMCs had not developed the appropriate device-specific training for reprocessing critical and semi-critical RME, staff at these VAMCs may not have been reprocessing all RME properly, which potentially put the safety of veterans receiving care at these facilities at risk. Conflicting guidance on the development of RME reprocessing training. While VA requires VAMCs to develop device-specific training on reprocessing RME, VA headquarters officials provided VAMCs with conflicting guidance on how they should develop this training. 
For example, officials at three VAMCs we visited told us that certain VA headquarters or VISN officials stated that this device-specific training should very closely match manufacturer guidelines--in one case verbatim--while other VA headquarters or VISN officials stated that this training should be written in a way that could be easily understood by the personnel responsible for reprocessing RME. This distinction is important, since VAMC officials told us that some of the staff responsible for reprocessing RME may have difficulty following the more technical manufacturers' guidelines. In part because of VA's conflicting guidance, VAMC officials told us that they had difficulty developing the required device-specific training and had to rewrite the training materials multiple times for RME at their facilities. Officials at five of the six VAMCs also told us that developing the device-specific training for reprocessing RME was both time consuming and resource intensive. VA's lack of specificity and conflicting guidance regarding its requirement to develop device-specific training for reprocessing RME may have contributed to delays in developing this training at several of the VAMCs we visited. Officials from three of the six VAMCs told us that they had not completed the development of device-specific training for RME since VA established the training requirement in July 2009. As of October 2010, 15 months after VA issued the policy containing this requirement, officials at one of the VAMCs we visited told us that device-specific training on reprocessing had not been developed for about 80 percent of the critical and semi-critical RME in use at their facility. 
VA headquarters officials told us that they are aware of the lack of specificity and conflicting guidance provided to VAMCs regarding the development of training for reprocessing RME and were also aware of inefficiencies resulting from each VAMC developing its own training for reprocessing types of RME that are used in multiple VAMCs. In response, VA headquarters officials told us that they have made available to all VAMCs a database of standardized device-specific training developed by RME manufacturers for approximately 1,000 types of RME and plan to require VAMCs to implement this training by June 2011. The officials also told us that VA headquarters is planning to develop device-specific training available to all VAMCs for certain critical and semi-critical RME for which RME manufacturers have not developed this training, such as dental instruments. However, as of February 2011, VA headquarters had not completed the development of device-specific training for these RME and has not established plans or corresponding timelines for doing so. We found that VA recently made changes to its oversight of VAMCs' compliance with selected reprocessing requirements; however, this oversight continues to have weaknesses. Beginning in fiscal year 2011, VA headquarters directed VISNs to make three changes intended to improve its oversight of these reprocessing requirements at VAMCs. VA headquarters recently required VISNs to increase the frequency of site visits to VAMCs--from one to three unannounced site visits per year--as a way to more quickly identify and address areas of noncompliance with selected VA reprocessing requirements. VA headquarters also recently required VISNs to begin using a standardized assessment tool to guide their oversight activities. According to VA headquarters officials, requiring VISNs to use this assessment tool will enable the VISNs to collect consistent information on VAMCs' compliance with VA's reprocessing requirements. 
Before VA established this requirement, the six VISNs that oversee the VAMCs we visited often used different assessment tools to guide their oversight activities. As a result, they reviewed and collected different types of information on VAMCs' compliance with these requirements. VISNs are now required to report to VA headquarters information from their site visits. Specifically, following each unannounced site visit to a VAMC, VISNs are required to provide VA headquarters with information on the facility's noncompliance with VA's reprocessing requirements and VAMCs' corrective action plans to address areas of noncompliance. Prior to fiscal year 2011, VISNs were generally not required to report this information to VA headquarters. Despite the recent changes, VA's oversight of its reprocessing requirements, including those we selected for review, has weaknesses in the context of the federal internal control for monitoring. Consistent with the internal control for monitoring, we would expect VA to analyze this information to assess the risk of noncompliance and ensure that noncompliance is addressed. However, VA headquarters does not analyze information to identify the extent of noncompliance across all VAMCs, including noncompliance that occurs frequently or poses high risks to veterans' safety. As a result, VA headquarters has not identified the extent of noncompliance across VAMCs with, for example, VA's operational reprocessing requirement that staff use personal protective equipment when performing reprocessing activities, which is key to ensuring that clean RME are not contaminated by coming into contact with soiled hands or clothing. Three of the six VAMCs we visited had instances of noncompliance with this requirement. 
Similarly, because VA headquarters does not analyze information from VAMCs' corrective action plans to address noncompliance with VA reprocessing requirements, it is unable to confirm, for example, whether VAMCs have addressed noncompliance with its operational reprocessing requirement to separate clean and dirty RME. Two of the six VAMCs we visited had not resolved noncompliance with this requirement and, as a result, are unable to ensure that clean RME does not become contaminated by coming into contact with dirty RME. VA headquarters officials told us that VA plans to address the weaknesses we identified in its oversight of VAMCs' compliance with reprocessing requirements. Specifically, VA headquarters officials told us that they intend to develop a systematic approach to analyze oversight information to identify areas of noncompliance across all VAMCs, including those that occur frequently, pose high risks to veterans' safety, or have not been addressed in a timely manner. While VA has established a timeline for completing these changes, certain VA headquarters officials told us that they are unsure whether this timeline is realistic due to possible delays resulting from VA's ongoing organizational realignment, which had not been completed as of April 6, 2011. In conclusion, weaknesses exist in VA's policies for reprocessing RME that create potential safety risks to veterans. VA's lack of specificity and conflicting guidance for developing device-specific training for reprocessing RME has led to confusion among VAMCs about which types of RME require device-specific training and how VAMCs should develop that training. This confusion has contributed to some VAMCs not developing training for their staff for some critical and semi-critical RME. 
Moreover, weaknesses in oversight of VAMCs' compliance with the selected reprocessing requirements do not allow VA to identify and address areas of noncompliance across VAMCs, including those that occur frequently, pose high risks to veterans' safety, or have not been addressed by VAMCs. Correcting inadequate policies and providing effective oversight of reprocessing requirements consistent with the federal standards for internal control is essential for VA to prevent potentially harmful incidents from occurring. To help ensure veterans' safety through VA's reprocessing requirements, we are making two recommendations in our report. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following actions: Develop and implement an approach for providing standardized training for reprocessing all critical and semi-critical RME to VAMCs. Additionally, hold VAMCs accountable for implementing device-specific training for all of these RME. Use the information on noncompliance identified by the VISNs and information on VAMCs' corrective action plans to identify areas of noncompliance across all 153 VAMCs, including those that occur frequently, pose high risks to veterans' safety, or have not been addressed, and take action to improve compliance in those areas. In responding to a draft of the report from which this testimony is based, VA concurred with these recommendations. Chairman Miller, Ranking Member Filner, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have. For further information about this testimony, please contact Randall B. Williamson at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Individuals who made key contributions to this testimony include Mary Ann Curran, Assistant Director; Kye Briesath; Krister Friday; Melanie Krause; Lisa Motley; and Michael Zose. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses patient safety incidents at Department of Veterans Affairs (VA) medical centers and potential strategies to address the underlying causes of those incidents. VA operates one of the largest integrated health care delivery systems in the United States, providing care to over 5.5 million veterans annually. Organized into 21 Veterans Integrated Service Networks (VISN), VA's health care system includes 153 VA medical centers (VAMC) nationwide that offer a variety of outpatient, residential, and inpatient services. In providing health care services to veterans, clinicians at VAMCs use reusable medical equipment (RME), which is designed to be reused for multiple patients and includes such equipment as endoscopes and some surgical and dental instruments. Because RME is used when providing care to multiple veterans, this equipment must be reprocessed--that is, cleaned and disinfected or sterilized--between uses. VA has established requirements for VAMCs to follow when reprocessing RME, which are designed, in part, to help ensure the safety of the veterans who receive care at VAMCs. This testimony, based on our May 2011 report, which is being released today, examines issues related to veterans' safety, including (1) selected reprocessing requirements established in VA policies, based on their relevance to patient safety incidents and (2) VA's oversight of VAMCs' compliance with these selected requirements. In summary, we found that the VA reprocessing requirements we selected for review are inadequate to help ensure the safety of veterans who receive care at VAMCs. Although VA requires VAMCs to develop device-specific training for staff on how to correctly reprocess RME, it has not specified the types of RME for which this training is required. Furthermore, VA has provided conflicting guidance to VAMCs on how to develop device-specific training on reprocessing RME. 
This lack of clarity may have contributed to delays in developing the required training. Without appropriate training on reprocessing, VAMC staff may not be reprocessing RME correctly, which poses potential risks to veterans' safety. VA headquarters officials told us that VA has plans to develop training for certain RME, but VA lacks a timeline for developing this training. We also found that despite changes to improve VA's oversight of VAMCs' compliance with selected reprocessing requirements, weaknesses still exist. These weaknesses render VA unable to systematically identify and address noncompliance with the requirements, which poses potential risks to the safety of veterans. Although VA headquarters receives information from the VISNs on any noncompliance they identify, as well as VAMCs' corrective action plans to address this noncompliance, VA headquarters does not analyze this information to inform its oversight. According to VA headquarters officials, VA intends to develop a plan for analyzing this information to systematically identify areas of noncompliance that occur frequently, pose high risks to veterans' safety, or have not been addressed across all VAMCs. To address the inadequacies we identified in selected VA reprocessing requirements, GAO recommends that VA develop and implement an approach for providing standardized training for reprocessing all critical and semi-critical RME to VAMCs and hold VAMCs accountable for implementing this training. To address the weaknesses in VA's oversight of VAMCs' compliance with selected requirements, GAO recommends that VA use information on noncompliance identified by the VISNs and information on VAMCs' corrective action plans to identify areas of noncompliance across all 153 VAMCs and take action to improve compliance in those areas.
Although mutual funds already disclose considerable information about the fees they charge, our report recommends that SEC consider requiring that mutual funds make additional disclosures to investors about fees in the account statements that investors receive. Mutual funds currently provide information about the fees they charge investors as an operating expense ratio that shows as a percentage of fund assets all the fees and other expenses that the fund adviser deducts from the assets of the fund. Mutual funds also are required to present a hypothetical example that shows in dollar terms what investors could expect to pay in fees if they invested $10,000 in a fund and held it for various periods. It is important to understand the fees charged by a mutual fund because fees can significantly affect investment returns of the fund over the long term. For example, over a 20-year period a $10,000 investment in a fund earning 8 percent annually, with a 1-percent expense ratio, would be worth $38,122; but with a 2-percent expense ratio it would be worth $31,117--over $7,000 less. Unlike many other financial products, mutual funds do not provide investors with information about the specific dollar amounts of the fees that have been deducted from the value of their shares. Table 1 shows that many other financial products do present their costs in specific dollar amounts. Although mutual funds do not disclose their costs to each individual investor in specific dollars, the disclosures that they make do exceed those of many products. For example, purchasers of fixed annuities are not told of the expenses associated with investing in such products. Some industry participants and others, including SEC, also cite the example of bank savings accounts, which pay stated interest rates to their holders but do not explain how much profit the bank earns or what expenses it incurs to offer such products. 
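The compounding example above can be reproduced with a short calculation. The convention assumed here--crediting the year's return and then deducting the expense ratio from the year-end balance--is our assumption, not stated in the report; it closely reproduces the cited figures.

```python
def future_value(principal, annual_return, expense_ratio, years):
    # Credit the year's return, then deduct the expense ratio from the
    # year-end balance (an assumed convention that matches the cited figures).
    balance = principal
    for _ in range(years):
        balance *= (1 + annual_return) * (1 - expense_ratio)
    return balance

low_cost = future_value(10_000, 0.08, 0.01, 20)   # 1-percent expense ratio
high_cost = future_value(10_000, 0.08, 0.02, 20)  # 2-percent expense ratio
print(f"${low_cost:,.0f} vs ${high_cost:,.0f}")   # close to the cited $38,122 and $31,117
```

The gap of roughly $7,000 comes entirely from the extra percentage point of fees compounding over the 20-year holding period.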
While this is true, we do not believe this is an analogous comparison to mutual fund fees because the operating expenses of the bank are not paid using the funds of the savings account holder and are therefore not explicit costs to the investor like the fees on a mutual fund. A number of alternatives have been proposed for improving the disclosure of mutual fund fees that could provide additional information to fund investors. In December 2002, SEC released proposed rule amendments, which include a requirement that mutual funds make additional disclosures about their expenses. This information would be presented to investors in the annual and semiannual reports prepared by mutual funds. Specifically, mutual funds would be required to disclose the cost in dollars associated with an investment of $10,000 that earned the fund's actual return and incurred the fund's actual expenses paid during the period. In addition, SEC also proposed that mutual funds be required to disclose the cost in dollars, based on the fund's actual expenses, of a $10,000 investment that earned a standardized return of 5 percent. If these disclosures become mandatory, investors will have additional information that could be directly compared across funds. SEC staff also indicated that placing the disclosures in funds' annual and semiannual reports will make it easier for prospective investors to compare funds' expenses before making a purchase decision. However, SEC's proposal would not require mutual funds to disclose to each investor the specific amount of fees in dollars that are paid on the shares they own. As a result, investors will not receive information on the costs of mutual fund investing in the same way they see the costs of many other financial products and services that they may use. 
In addition, SEC did not propose that mutual funds provide information relating to fees in the quarterly or even more frequent account statements that provide investors with the number and value of their mutual fund shares. In a 1997 survey of how investors obtain information about their funds, the Investment Company Institute (ICI) indicated that, to shareholders, the account statement is probably the most important communication that they receive from a mutual fund company and that nearly all shareholders use such statements to monitor their mutual funds. SEC and industry participants have indicated that the total cost of providing specific dollar fee disclosures might be significant; however, we found that the cost might not represent a large outlay on a per investor basis. As we reported in our March 2003 statement for the record to the Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises, House Committee on Financial Services, ICI commissioned a large accounting firm to survey mutual fund companies about the costs of producing such disclosures. Receiving responses from broker-dealers, mutual fund service providers, and fund companies representing approximately 77 percent of total industry assets as of June 30, 2000, this study estimated that the aggregated estimated costs for the survey respondents to implement specific dollar disclosures in shareholder account statements would exceed $200 million, and the annual costs of compliance would be about $66 million. Although the ICI study included information from some broker-dealers and fund service providers, it did not include the reportedly significant costs that all broker-dealers and other third-party financial institutions that maintain accounts on behalf of individual mutual fund shareholders could incur. 
However, using available information on mutual fund assets and accounts from ICI and spreading such costs across all investor accounts indicates that the additional expenses to any one investor are minimal. Specifically, at the end of 2001, ICI reported that mutual fund assets totaled $6.975 trillion. If mutual fund companies charged, for example, the entire $266 million cost of implementing the disclosures to investors in the first year, then dividing this additional cost by the total assets outstanding at the end of 2001 would increase the average fee by 0.0038 percent or about one-third of a basis point. In addition, ICI reported that the $6.975 trillion in total assets was held in over 248 million mutual fund accounts, equating to an average account of just over $28,000. Therefore, implementing these disclosures would add $1.07 to the average $184 that these accounts would pay in total operating expense fees each year--an increase of six-tenths of a percent. In addition, other less costly alternatives are also available that could increase investor awareness of the fees they are paying on their mutual funds by providing them with information on the fees they pay in the quarterly statements that provide information on an investor's share balance and account value. For example, one alternative that would not likely be overly expensive would be to require these quarterly statements to present the information--the dollar amount of a fund's fees based on a set investment amount--that SEC has proposed be added to mutual fund semiannual reports. Doing so would place this additional fee disclosure in the document generally considered to be of the most interest to investors. An even less costly alternative could be to require quarterly statements to also include a notice that reminds investors that they pay fees and to check their prospectus and with their financial adviser for more information. 
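The cost-spreading arithmetic above can be checked directly using the ICI figures cited (total assets and accounts at the end of 2001, and the combined $266 million first-year cost):

```python
total_assets = 6.975e12     # total mutual fund assets at the end of 2001
total_accounts = 248e6      # mutual fund accounts holding those assets
first_year_cost = 266e6     # $200 million implementation plus $66 million annual compliance

fee_increase = first_year_cost / total_assets        # about 0.0038 percent of assets
cost_per_account = first_year_cost / total_accounts  # about $1.07 per account
average_balance = total_assets / total_accounts      # just over $28,000

print(f"{fee_increase:.4%}, ${cost_per_account:.2f} on an average ${average_balance:,.0f} account")
```

Even charging the entire first-year cost to investors at once adds only about a third of a basis point to the average fee, which is the basis for the report's conclusion that the per-investor outlay would be minimal.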
In September 2003, SEC amended fund advertising rules, which require funds to state in advertisements that investors should consider a fund's fees before investing and direct investors to consult their funds' prospectus. However, also including this information in the quarterly statement could increase investor awareness of the impact that fees have on their mutual fund's returns. H.R. 2420 would require that funds disclose in the quarterly statement or other appropriate shareholder report an estimated amount of the fees an investor would have to pay on each investment of $1,000. S. 1958, like H.R. 2420, would require disclosure of fees paid on each $1,000 invested. S. 1971, among other disclosures, would require that funds disclose the actual cost borne by each shareholder for the operating expenses of the fund. SEC's current proposal, while offering some advantages, does not make mutual fund fee disclosure comparable to that of other products, nor does it place the information in the document that is most relevant to investors--the quarterly account statement. Our report recommends that SEC consider requiring that additional disclosures relating to fees be made to investors in the account statement. In addition to providing specific dollar disclosures, we also noted that investors could be provided with a variety of other disclosures about the fees they pay on mutual funds that would have a range of implementation costs, including some that would be less costly than providing specific dollar disclosures. However, seeing the specific dollar amount paid on shares owned could be the incentive that some investors need to take action to compare their fund's expenses to those of other funds and make more informed investment decisions on this basis. Such disclosures may also increasingly motivate fund companies to respond competitively by lowering fees. 
Because the disclosures that SEC is currently proposing be included in mutual fund annual and semiannual reports could also prove beneficial, it could choose to require disclosures in these documents and the account statements, which would provide both prospective and existing investors in mutual funds access to valuable information about the costs of investing in funds. Academics and other industry observers have also called for increased disclosure of mutual fund brokerage commissions and other trading costs that are not currently included in fund expense ratios. In an academic study we reviewed that looked at brokerage commission costs, the authors urged that investors pay increased attention to such costs. For example, the study noted that investors seeking to choose their funds on the basis of expenses should also consider reviewing trading costs as relevant information because the impact of these unobservable trading costs is comparable to the more observable expense ratio. The authors of another study noted that research shows that all expenses can reduce returns so attention should be paid to fund trading costs, including brokerage commissions, and that these costs should not be relegated to being disclosed only in mutual funds' Statement of Additional Information. Mutual fund officials raised various concerns about expanding the disclosure of brokerage commissions and trading costs in general. Some officials said that requiring funds to present additional information about brokerage commissions by including such costs in the fund's operating expense ratios would not present information to investors that could be easily compared across funds. 
For example, funds that invest in securities on the New York Stock Exchange (NYSE), for which commissions are usually paid, would pay more in total commissions than would funds that invest primarily in securities listed on NASDAQ because the broker-dealers offering such securities are usually compensated by spreads rather than explicit commissions. Similarly, most bond fund transactions are subject to markups rather than explicit commissions. If funds were required to disclose the costs of trades that involve spreads, officials noted that such amounts would be subject to estimation errors. Officials at one fund company told us that it would be difficult for fund companies to produce a percentage figure for other trading costs outside of commissions because no agreed-upon methodology for quantifying market impact costs, spreads, and markup costs exists within the industry. Other industry participants told us that due to the complexity of calculating such figures, trading cost disclosure is likely to confuse investors. For example, funds that attempt to mimic the performance of certain stock indexes, such as the Standard & Poor's 500 stock index, and thus limit their investments to just these securities have lower brokerage commissions because they trade less. In contrast, other funds may employ a strategy that requires them to trade frequently and thus would have higher brokerage commissions. However, choosing among these funds on the basis of their relative trading costs may not be the best approach for an investor because of the differences in these two types of strategies. To improve the disclosure of trading costs to investors, the House-passed H.R. 2420 would require mutual fund companies to make more prominent their portfolio turnover disclosure, which, by measuring the extent to which the assets in a fund are bought and sold, provides an indirect measure of transaction costs for a fund. 
The bill directs funds to include this disclosure in a document that is more widely read than the prospectus or Statement of Additional Information, and would require fund companies to provide a description of the effect of high portfolio turnover rates on fund expenses and performance. H.R. 2420 also requires SEC to issue a concept release examining the issue of portfolio transaction costs. S. 1822 would require mutual funds to disclose brokerage commissions as part of fund expenses. S. 1958 would require SEC to issue a concept release on disclosure of portfolio transaction costs. S. 1971 would require funds to disclose the estimated expenses paid for costs associated with management of the fund that reduce the fund's overall value, including brokerage commissions, revenue sharing and directed brokerage arrangements, transaction costs, and other fees. In December 2003, SEC issued a concept release to solicit views on how SEC could improve the information that mutual funds disclose about their portfolio transaction costs. The way that investors pay for the advice of financial professionals about their mutual funds has evolved over time. Approximately 80 percent of mutual fund purchases are made through broker-dealers or other financial professionals, such as financial planners and pension plan administrators. Previously, the compensation that these financial professionals received for assisting investors with mutual fund purchases was paid either by charging investors a sales charge, or load, or by paying for such expenses out of the investment adviser's own profits. However, in 1980, SEC adopted rule 12b-1 under the Investment Company Act to help funds counter a period of net redemptions by allowing them to use fund assets to pay the expenses associated with the distribution of fund shares. Rule 12b-1 plans were envisioned as temporary measures to be used during periods of declining assets. 
Any activity that is primarily intended to result in the sale of mutual fund shares must be included as a 12b-1 expense and can include advertising; compensation of underwriters, dealers, and sales personnel; printing and mailing prospectuses to persons other than current shareholders; and printing and mailing sales literature. These fees are called 12b-1 fees after the rule that allows fund assets to be used to pay for fund marketing and distribution expenses. NASD, whose rules govern the distribution of fund shares by broker-dealers, limits the annual rate at which 12b-1 fees may be paid to broker-dealers to no more than 0.75 percent of a fund's average net assets per year. Funds are allowed to include an additional service fee of up to 0.25 percent of average net assets each year to compensate sales professionals for providing ongoing services to investors or for maintaining their accounts. Therefore, 12b-1 fees included in a fund's total expense ratio are limited to a maximum of 1 percent per year. Rule 12b-1 provides investors an alternative way of paying for investment advice and purchases of fund shares. Apart from 12b-1 fees, brokers can be paid with sales charges called "loads"; "front-end" loads are applied when shares in a fund are purchased and "back-end" loads when shares are redeemed. With a 12b-1 plan, the fund can finance the broker's compensation with installments deducted from fund assets over a period of several years. Thus, 12b-1 plans allow investors to consider the time-related objectives of their investment and possibly earn returns on the full amount of the money they have to invest, rather than have a portion of their investment immediately deducted to pay their broker. Rule 12b-1 has also made it possible for fund companies to market fund shares through a variety of share classes designed to help meet the different objectives of investors. 
For example, Class A shares might charge front-end loads to compensate brokers and may offer discounts called breakpoints for larger purchases of fund shares. Class B shares, alternatively, might not have front-end loads, but would impose asset- based 12b-1 fees to finance broker compensation over several years. Class B shares also might have deferred back-end loads if shares are redeemed within a certain number of years and might convert to Class A shares if held a certain number of years, such as 7 or 8 years. Class C shares might have a higher 12b-1 fee, but generally would not impose any front-end or back-end loads. While Class A shares might be more attractive to larger, more sophisticated investors who wanted to take advantage of the breakpoints, smaller investors, depending on how long they plan to hold the shares, might prefer Class B or C shares because no sales charges would be deducted from their initial investments. Although providing alternative means for investors to pay for the advice of financial professionals, some concerns exist over the impact of 12b-1 fees on investors' costs. For example, our June 2003 report discussed academic studies that found that funds with 12b-1 plans had higher management fees and expenses. Questions involving funds with 12b-1 fees have also been raised over whether some investors are paying too much for their funds depending on which share class they purchase. For example, SEC recently brought a case against a major broker-dealer that it accused of inappropriately selling mutual fund B shares, which have higher 12b-1 fees, to investors who would have been better off purchasing A shares that had much lower 12b-1 fees. Also, in March 2003, NASD, NYSE, and SEC staff reported on the results of jointly administered examinations of 43 registered broker-dealers that sell mutual funds with a front-end load. 
The examinations found that most of the brokerage firms examined, in some instances, did not provide customers with breakpoint discounts for which they appeared to have been eligible. One mutual fund distribution practice--called revenue sharing--that has become increasingly common raises potential conflicts of interest between broker-dealers and their mutual fund investor customers. Broker-dealers, whose extensive distribution networks and large staffs of financial professionals work directly with and make investment recommendations to investors, have increasingly required mutual funds to make additional payments to compensate their firms beyond the sales loads and 12b-1 fees. These payments, called revenue sharing payments, come from the adviser's profits and may supplement distribution-related payments from fund assets. According to an article in one trade journal, revenue sharing payments made by major fund companies to broker-dealers may total as much as $2 billion per year. According to the officials of a mutual fund research organization, about 80 percent of fund companies that partner with major broker-dealers make cash revenue sharing payments. For example, some broker-dealers have narrowed their offerings of funds or created preferred lists that include the funds of just six or seven fund companies that then become the funds that receive the most marketing by these broker-dealers. In order to be selected as one of the preferred fund families on these lists, the mutual fund adviser often is required to compensate the broker-dealer firms with revenue sharing payments. One of the concerns raised about revenue sharing payments is the effect on overall fund expenses. A 2001 research organization report on fund distribution practices noted that the extent to which revenue sharing might affect other fees that funds charge, such as 12b-1 fees or management fees, was uncertain. 
For example, the report noted that it was not clear whether the increase in revenue sharing payments increased any fund's fees, but also noted that by reducing fund adviser profits, revenue sharing would likely prevent advisers from lowering their fees. In addition, fund directors normally would not question revenue sharing arrangements paid from the adviser's profits. In the course of reviewing advisory contracts, fund directors consider the adviser's profits without taking into account marketing and distribution expenses, which also could prevent advisers from shifting these costs to the fund. Revenue sharing payments may also create conflicts of interest between broker-dealers and their customers. By receiving compensation to emphasize the marketing of particular funds, broker-dealers and their sales representatives may have incentives to offer funds for reasons other than the needs of the investor. For example, revenue sharing arrangements might unduly focus the attention of broker-dealers on particular mutual funds, reducing the number of funds considered as part of an investment decision--potentially leading to inferior investment choices and potentially reducing fee competition among funds. Finally, concerns have been raised that revenue sharing arrangements might conflict with securities self-regulatory organization rules requiring that brokers recommend purchasing a security only after ensuring that the investment is suitable given the investor's financial situation and risk profile. Although revenue sharing payments can create conflicts of interest between broker-dealers and their clients, the extent to which broker-dealers disclose to their clients that their firms receive such payments from fund advisers is not clear. Rule 10b-10 under the Securities Exchange Act of 1934 requires, among other things, that broker-dealers provide customers with information about third-party compensation that broker-dealers receive in connection with securities transactions. 
While broker-dealers generally satisfy the 10b-10 requirements by providing customers with written "confirmations," the rule does not specifically require broker-dealers to provide the required information about third-party compensation related to mutual fund purchases in any particular document. SEC staff told us that they interpret rule 10b-10 to permit broker-dealers to disclose third-party compensation related to mutual fund purchases through delivery of a fund prospectus that discusses the compensation. However, investors would not receive a confirmation and might not view a prospectus until after purchasing mutual fund shares. As a result of these concerns, our report recommends that SEC evaluate ways to provide more information to investors about the revenue sharing payments that funds make to broker-dealers. Having additional disclosures made at the time that fund shares are recommended about the compensation that a broker-dealer receives from fund companies could provide investors with more complete information to consider when making their investment decision. To address revenue sharing issues, we were pleased to see that a recent NASD rule proposal would require broker-dealers to disclose in writing, when the customer first opens an account or purchases mutual fund shares, compensation that they receive from fund companies for providing their funds "shelf space" or preference over other funds. On January 14, 2004, SEC proposed new rules and rule amendments designed to enhance the information that broker-dealers provide to their customers. H.R. 2420 would require fund directors to review revenue sharing arrangements consistent with their fiduciary duty to the fund. H.R. 2420 also would require funds to disclose revenue sharing arrangements and require brokers to disclose whether they have received any financial incentives to sell a particular fund or class of shares. S. 
1822 would require brokers to disclose in writing any compensation received in connection with a customer's purchase of mutual fund shares. S. 1971 would require fund companies and investment advisers to fully disclose certain sales practices, including revenue sharing and directed brokerage arrangements, shareholder eligibility for breakpoint discounts, and the value of research and other services paid for as part of brokerage commissions. Soft dollar arrangements allow fund investment advisers to obtain research and brokerage services that could potentially benefit fund investors but could also increase investors' costs. When investment advisers buy or sell securities for a fund, they may have to pay the broker-dealers that execute these trades a commission using fund assets. In return for these brokerage commissions, many broker-dealers provide advisers with a bundle of services, including trade execution, access to analysts and traders, and research products. Some industry participants argue that the use of soft dollars benefits investors in various ways. The research that the fund adviser obtains can directly benefit a fund's investors if the adviser uses it to select securities for purchase or sale by the fund. The prevalence of soft dollar arrangements also allows specialized, independent research to flourish, thereby providing money managers a wider choice of investment ideas. As a result, this research could contribute to better fund performance. The proliferation of research available as a result of soft dollars might also have other benefits. For example, an investment adviser official told us that the research on smaller companies helps create a more efficient market for such companies' securities, resulting in greater market liquidity and lower spreads, which would benefit all investors including those in mutual funds. 
Although the research and brokerage services that fund advisers obtain through the use of soft dollars could benefit a mutual fund investor, this practice also could increase investors' costs and create potential conflicts of interest that could harm fund investors. For example, soft dollars could cause investors to pay higher brokerage commissions than they otherwise would, because advisers might choose broker-dealers on the basis of soft dollar products and services, not trade execution quality. One academic study shows that trades executed by broker-dealers that specialize in providing soft dollar products and services tend to be more expensive than those executed through other broker-dealers, including full-service broker-dealers. Soft dollar arrangements could also encourage advisers to trade more in order to pay for more soft dollar products and services. Overtrading would cause investors to pay more in brokerage commissions than they otherwise would. These arrangements might also tempt advisers to "over-consume" research because they are not paying for it directly. In turn, advisers might have less incentive to negotiate lower commissions, resulting in investors paying more for trades. Under the Investment Advisers Act of 1940, advisers must disclose details of their soft dollar arrangements in Part II of Form ADV, which investment advisers use to register with SEC and must send to their advisory clients. However, this form is not provided to the shareholders of a mutual fund, although information about the soft dollar practices that the adviser uses for particular funds is required to be included in the Statement of Additional Information that funds prepare, which is available to investors upon request. Specifically, Form ADV requires advisers to describe the factors considered in selecting brokers and determining the reasonableness of their commissions.
If the value of the products, research, and services given to the adviser affects the choice of brokers or the brokerage commission paid, the adviser must also describe the products, research and services and whether clients might pay commissions higher than those obtainable from other brokers in return for those products. In a series of regulatory examinations performed in 1998, SEC staff found examples of problems relating to investment advisers' use of soft dollars, although far fewer problems were attributable to mutual fund advisers. In response, SEC staff issued a report that included proposals to address the potential conflicts created by these arrangements, including recommending that investment advisers keep better records and disclose more information about their use of soft dollars. Although the recommendations could increase the transparency of these arrangements and help fund directors and investors better evaluate advisers' use of soft dollars, SEC has yet to take action on some of its proposed recommendations. As a result, our June 2003 report recommends that SEC evaluate ways to provide additional information to fund directors and investors on their fund advisers' use of soft dollars. SEC relies on disclosure of information as a primary means of addressing potential conflicts between investors and financial professionals. However, because SEC has not acted to more fully address soft dollar-related concerns, investors and mutual fund directors have less complete and transparent information with which to evaluate the benefits and potential disadvantages of their fund adviser's use of soft dollars. To address the inherent conflicts of interest with respect to soft dollar arrangements, H.R. 
2420 would require SEC to issue rules mandating disclosure of information about soft dollar arrangements; require fund advisers to submit to the fund's board of directors an annual report on these arrangements, and require the fund to provide shareholders with a summary of that report in its annual report to shareholders; impose a fiduciary duty on the fund's board of directors to review soft dollar arrangements; direct SEC to issue rules to require enhanced recordkeeping of soft dollar arrangements; and require SEC to conduct a study of soft dollar arrangements, including the trends in the average amounts of soft dollar commissions, the types of services provided through these arrangements, the benefits and disadvantages of the use of soft dollar arrangements, the impact of soft dollar arrangements on investors' ability to compare the expenses of mutual funds, the conflicts of interest created by these arrangements and the effectiveness of the board of directors in managing such conflicts, and the transparency of soft dollar arrangements. S. 1822 would discourage use of soft dollars by requiring that funds calculate their value and disclose it along with other fund expenses. S. 1971 also would require disclosure of soft dollar arrangements and the value of the services provided. Also, it would require that SEC conduct a study of the use of soft dollar arrangements by investment advisers. Since we issued our report in June 2003, various allegations of misconduct and abusive practices involving mutual funds have come to light. In early September 2003, the Attorney General of the State of New York filed charges against a hedge fund manager for arranging with several mutual fund companies to improperly trade in fund shares and profiting at the expense of other fund shareholders. Since then, federal and state authorities' widening investigation of illegal late trading and improper timing of fund trades has involved a growing number of prominent mutual fund companies and brokerage firms.
The problems involving late trading arise when some investors are able to purchase or sell mutual fund shares after the 4:00 pm Eastern Time close of U.S. securities markets, the time at which funds price their shares. Under current mutual fund regulations, orders for mutual fund shares received after 4:00 pm must be priced at the next day's price. An investor permitted to engage in late trading could be buying or selling shares at the 4:00 pm price knowing of developments in the financial markets that occurred after 4:00 pm, thus unfairly taking advantage of opportunities not available to other fund shareholders. Clearly, to ensure compliance with the law, funds should have effective internal controls in place to prevent abusive late trading. Regulators are considering a rule change requiring that an order to purchase or redeem fund shares be received by the fund, its designated transfer agent, or a registered securities clearing agency, by the time that the fund establishes for calculating its net asset value in order to receive that day's price. The problems involving market timing occur when certain fund investors are able to take advantage of temporary disparities between the share value of a fund and the values of the underlying assets in the fund's portfolio. For example, such disparities can arise when U.S. mutual funds use old prices for their foreign assets even though events have occurred overseas that will likely cause significant movements in the prices of those assets when their home markets open. Market timing, although not illegal, can be unfair to funds' long-term investors because it provides the opportunity for selected fund investors to profit from fund assets at the expense of long-term investors. To address these issues, regulators are considering the merits of various proposals that have been put forth to discourage market timing, such as mandatory redemption fees or fair value pricing of fund shares.
To protect fund investors from such unfair trading practices, H.R. 2420 would, with limited exceptions, require that all trades be placed with funds by 4:00 pm. The bill also includes provisions to eliminate conflicts of interest in portfolio management, ban short-term trading by insiders, allow higher redemption fees to discourage short-term trading, and encourage wider use of fair value pricing to eliminate the stale prices that make market timing profitable. S. 1958 would require that fund companies receive orders prior to the time they price their shares. S. 1958 would also increase penalties for late trading and require funds to explicitly disclose their market timing policies and procedures. S. 1971 also would restrict the placing of trades after hours, require funds to have internal controls and compliance programs in place to prevent abusive trading, and require wider use of fair value pricing. In conclusion, GAO believes that various changes to current disclosures and other practices would benefit fund investors. Additional disclosures of mutual fund fees could help increase investors' awareness of the fees they pay and encourage greater competition among funds on the basis of these fees. Likewise, better disclosure of the costs funds incur to distribute their shares and of the costs and benefits of funds' use of soft dollar research activities could provide investors with more complete information to consider when making their investment decisions. In light of recent scandals involving late trading and market timing, various reforms to mutual fund rules will also likely be necessary to better protect the interests of all mutual fund investors. This concludes my prepared statement and I would be happy to respond to questions. For further information regarding this testimony, please contact Cody J. Goebel at (202) 512-8678. Individuals making key contributions to this testimony include Toayoa Aldridge and David Tarosky. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Concerns have been raised over whether the disclosures of mutual fund fees and other fund practices are sufficiently fair and transparent to investors. Our June 2003 report, Mutual Funds: Greater Transparency Needed in Disclosures to Investors, GAO-03-763, reviewed (1) how mutual funds disclose their fees and related trading costs and options for improving these disclosures, (2) changes in how mutual funds pay for the sale of fund shares and how the changes in these practices are affecting investors, and (3) the benefits of and the concerns over mutual funds' use of soft dollars. This testimony summarizes the results of our report and discusses certain events that have occurred since it was issued. Although mutual funds disclose considerable information about their costs to investors, the amount of fees and expenses that each investor specifically pays on their mutual fund shares is currently disclosed as a percentage of fund assets, whereas most other financial services disclose the actual costs to the purchaser in dollar terms. SEC staff has proposed requiring funds to disclose additional information that could be used to compare fees across funds. However, SEC is not proposing that funds disclose the specific dollar amount of fees paid by each investor, nor is it proposing to require that any fee disclosures be made in the account statements that investors receive. Although some of these additional disclosures could be costly and data on their benefits to investors were not generally available, less costly alternatives exist that could increase the transparency and investor awareness of mutual fund fees, making consideration of additional fee disclosures worthwhile. Changes in how mutual funds pay intermediaries to sell fund shares have benefited investors but have also raised concerns. Since 1980, mutual funds, under SEC Rule 12b-1, have been allowed to use fund assets to pay for certain marketing expenses.
Over time, the use of these fees has evolved to provide investors greater flexibility in choosing how to pay for the services of the individual financial professionals who advise them on fund purchases. Another increasingly common marketing practice, called revenue sharing, involves fund investment advisers making additional payments to the broker-dealers that distribute their funds' shares. However, these payments may cause the broker-dealers receiving them to limit the fund choices they offer to investors and may conflict with their obligation to recommend the most suitable funds. Regulators acknowledged that the current disclosure regulations might not always result in complete information about these payments being disclosed to investors. Under soft dollar arrangements, mutual fund investment advisers use part of the brokerage commissions they pay to broker-dealers for executing trades to obtain research and other services. Although industry participants said that soft dollars allow fund advisers access to a wider range of research than may otherwise be available and provide other benefits, these arrangements also can create incentives for investment advisers to trade excessively to obtain more soft dollar services, thereby increasing fund shareholders' costs. SEC staff has recommended various changes that would increase transparency by expanding advisers' disclosure of their use of soft dollars. By acting on the staff's recommendations, SEC would provide fund investors and directors with needed information about how their funds' advisers are using soft dollars.
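The gap between percentage-based and dollar-based fee disclosure noted above can be illustrated with a little compound-growth arithmetic. The sketch below is a hypothetical illustration, not a figure from the report; the starting balance, the 7 percent gross annual return, and the two expense ratios are all assumptions chosen only for the example:

```python
# Hypothetical illustration: why a fee quoted as a small percentage of assets
# can translate into a large dollar cost over time.
def future_value(initial, gross_return, expense_ratio, years):
    """Ending balance when an annual expense ratio is deducted from returns."""
    net_return = gross_return - expense_ratio
    return initial * (1 + net_return) ** years

initial = 10_000        # assumed starting investment, in dollars
gross_return = 0.07     # assumed 7% annual return before fees
years = 20

low_cost = future_value(initial, gross_return, 0.005, years)   # 0.5% expense ratio
high_cost = future_value(initial, gross_return, 0.015, years)  # 1.5% expense ratio

print(f"0.5% fund after {years} years: ${low_cost:,.0f}")
print(f"1.5% fund after {years} years: ${high_cost:,.0f}")
print(f"Cost of the extra percentage point: ${low_cost - high_cost:,.0f}")
```

Under these assumed numbers, a 1-percentage-point difference in the expense ratio, which looks small when quoted against assets, compounds into roughly $6,000 of forgone value over two decades on a $10,000 investment.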
The 46 states reported receiving a total of nearly $52.6 billion in payments in varying annual amounts from fiscal year 2000 through fiscal year 2005. Of the nearly $52.6 billion, about $36.5 billion were payments from the tobacco companies and about $16 billion were securitized proceeds that 15 states arranged to receive, as shown in table 1. The tobacco companies' annual payments are adjusted based on several factors contained in the Master Settlement Agreement that include fluctuations in the volume of cigarette sales, inflation, and other variables, such as the participating companies' share of the tobacco market. Declining tobacco consumption alone would result in lower Master Settlement Agreement payments than originally expected. Tobacco consumption has declined since the Master Settlement Agreement was signed in 1998--by about 6.5 percent in 1999 alone--mostly due to one- time increases in cigarette prices by the tobacco companies after the agreement took effect. Analysts project that, in the future, tobacco consumption will decline by an average of nearly 2 percent per year. As a result, tobacco consumption is estimated to decline by 33 percent between 1999 and 2020. However, the Master Settlement Agreement also includes an inflation adjustment factor that some analysts have estimated increases payments more than any decreases caused by reduced consumption. The inflation adjustment equals the actual percentage increase in the Consumer Price Index for the preceding year or 3 percent, whichever is greater. The effect of these compounding increases is potentially significant, especially given that the payments are made in perpetuity. Assuming a 3-percent inflation adjustment and no decline in base payments, settlement amounts received by states would double every 24 years. Also, several tobacco companies' interpretation of the provision that addresses participants' market share led them to lower their payments in 2006. 
Under this provision, an independent auditor determined that participating tobacco companies lost a portion of their market share to non-participating companies. An economic research firm determined that the Master Settlement Agreement was a significant factor in these market share losses. Based on these findings, several participating companies reduced their fiscal year 2006 payments by a total of about $800 million. Many states have filed suit to recover these funds. Each state's share of the tobacco companies' total annual payments is a fixed percentage that was negotiated during the settlement. These percentages are based on two variables related to each state's smoking-related health care costs, which reflect each state's population and smoking prevalence. In general, the most populous states receive a larger share of the tobacco companies' total annual payments than the less populous states. For example, California and New York each receive about 13 percent, while Alaska and Wyoming each receive less than 1 percent. However, these percentages are not strictly proportional to population. In addition to the annual payments states receive, the Master Settlement Agreement requires that a Strategic Contribution Fund payment begin in 2008 and continue through 2017. The base amount of each year's Strategic Contribution Fund payment is $861 million, which will be adjusted for volume and inflation and shared among the states. Strategic Contribution Fund payments are intended to reflect the level of the contribution each state made toward final resolution of their lawsuit against the tobacco companies. They will be allocated to the states based on a separate formula developed by a panel of former state attorneys general.
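The earlier estimate that a 3-percent minimum inflation adjustment would double payments roughly every 24 years follows from ordinary compound-growth arithmetic. The sketch below simply checks that arithmetic; it assumes, as the testimony does, the 3 percent minimum adjustment and no decline in base payments:

```python
import math

rate = 0.03  # the agreement's minimum annual inflation adjustment

# Exact doubling time: solve (1 + rate)^t = 2 for t.
doubling_years = math.log(2) / math.log(1 + rate)

# Growth factor after 24 years of compounding at the minimum rate.
growth_24 = (1 + rate) ** 24

print(f"Exact doubling time at 3%: {doubling_years:.1f} years")
print(f"Growth factor after 24 years: {growth_24:.2f}x")
```

The exact doubling time at 3 percent is about 23.4 years (the familiar "rule of 72" shortcut gives 72/3 = 24), so the testimony's round figure of 24 years is consistent.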
The Master Settlement Agreement imposed no restrictions on how states could spend their settlement payments and, as such, the states have allocated their payments to a wide variety of activities, with health-related activities the largest among them. As part of their decision making on how to spend their payments, some states established planning commissions and working groups to develop recommendations and strategic plans for allocating their states' payments. In six states, voter-approved initiatives restricted use of the funds and, in 30 states, the legislatures enacted laws restricting their use. Overall, we identified 13 general categories to which states have allocated their Master Settlement Agreement payments, as shown in table 2. Appendix I provides more details on the categories to which states allocated their payments. States allocated the largest portion of their payments--about $16.8 billion, or 30 percent of the total payments--to health-related activities. To a closely related category--tobacco control--states allocated $1.9 billion, or 3.5 percent of their total payments. States allocated the second largest portion of their payments--about $12.8 billion or 22.9 percent--to cover budget shortfalls. Some states told us that they viewed the settlement payments as an opportunity to fund needs that they were not able to fund previously due to the high cost of health care. Figure 1 illustrates the relative magnitude of the categories receiving allocations. The seven largest categories of allocations, in descending order, are health, budget shortfalls, general purposes, infrastructure, education, debt service on securitized funds, and tobacco control. States' allocations to these categories have varied considerably from year to year--with some categories showing wide fluctuations. For example, for budget shortfalls, the states allocated from 2 to 44 percent of the total payments. 
On the other hand, for health care, the states allocated from 20 to 38 percent of the total payments. Figure 2 shows these annual changes for these seven categories. Information about how states have allocated their Master Settlement Agreement payments follows. Health. From fiscal years 2000 through 2005, states allocated about $16.8 billion of their Master Settlement Agreement payments to a variety of health care programs, including Medicaid; health insurance; cancer prevention, screening, and treatment; heart and lung disease; and drug addiction. Over this period, the amounts states allocated to health care ranged from about $1.9 billion in fiscal year 2005 to nearly $4.8 billion in fiscal years 2000-2001 combined. In fiscal year 2005, the most recent year for which we collected actual data, 36 of the 46 states allocated some of their Master Settlement Agreement payments to health care. Of the 36 states, 5 states allocated two-thirds or more of their payments to health care; 19 states allocated one-third to two-thirds; and 12 states allocated less than one-third. Ten states did not allocate any of their payments to health care activities. In fiscal year 2005, Pennsylvania, Illinois, Michigan, and Maryland allocated larger amounts to health care than the other states. Pennsylvania allocated over $326 million of its payments to health care programs for adult health insurance, uncompensated care, medical assistance for workers with disabilities, and community medical assistance. Illinois allocated nearly $204 million of its payments to health care, citing Medicaid drugs as a key program that would receive funds. Michigan allocated over $185 million of its payments to areas such as elder pharmaceutical assistance and Medicaid support programs. Maryland allocated nearly $100 million of its payments to areas such as Medicaid; cancer prevention, screening, and treatment; heart and lung disease; and drug addiction. Budget Shortfalls. 
From fiscal years 2000 through 2005, states allocated about $12.8 billion of their Master Settlement Agreement payments to budget shortfalls. Over this period, the amounts the states allocated to budget shortfalls ranged from a high of about $5.1 billion, or 44 percent of the total payments in fiscal year 2004, to $261 million, or 4 percent in fiscal year 2005. In fiscal year 2005, only 4 of the 46 states allocated some of their Master Settlement Agreement payments to budget shortfalls. Of these states, only Missouri allocated more than one-third of its total payments-- about $72 million--to budget shortfalls. General Purposes. From fiscal years 2000 through 2005, states allocated about $4 billion of their Master Settlement Agreement payments to general purposes, including law enforcement, community development activities, technology development, emergency reserve funds, and legal expenses for enforcement of the Master Settlement Agreement. Over this period, the amounts states allocated to general purposes ranged from $623 million, or about 5 percent of the total payments they allocated in fiscal years 2000- 2001 combined, to about $1.1 billion, or 8 percent in fiscal year 2003. In fiscal year 2005, 27 of the 46 states allocated some of their Master Settlement Agreement payments to general purposes. Of these 27 states, 4 states allocated two-thirds or more of their total payments to general purposes; 2 states allocated one-third to two-thirds; and 21 states allocated less than one-third. Nineteen states did not allocate any of their payments to general purposes. Massachusetts, Tennessee, Connecticut, and Colorado allocated the largest amounts to general purposes in fiscal year 2005. Massachusetts allocated nearly $255 million of its payments to general purposes for its General Fund, Tennessee allocated nearly $157 million of its payments to its General Fund, and Connecticut allocated about $113 million of its payments to its General Fund. 
Colorado allocated about $64.5 million of its payments to general purposes, but did not specify which programs would receive funds. Infrastructure. From fiscal years 2000 through 2005, states allocated about $3.4 billion of their Master Settlement Agreement payments to infrastructure-related activities, including capital maintenance on state-owned facilities, regional facility construction, and water projects. Over this period, the amounts states allocated to infrastructure ranged from $31 million, or about 1 percent of the total payments, in fiscal year 2005 to about $1.2 billion, or 10 percent, in fiscal year 2002. In fiscal year 2005, 5 of the 46 states allocated some of their Master Settlement Agreement payments to infrastructure. Of these 5 states, North Dakota was the only state that allocated more than one-third of its total payments to infrastructure. North Dakota, Hawaii, and Kentucky allocated the largest amounts to infrastructure in fiscal year 2005. North Dakota allocated about $10.5 million of its payments to infrastructure for work on water projects. Hawaii allocated approximately $10 million of its payments to infrastructure, citing debt service on University of Hawaii revenue bonds issued for the new Health and Wellness Center as a primary program that would receive funds. Kentucky allocated $6.1 million of its payments to service debt on such things as water resource development and a Rural Development Bond Fund. Education. From fiscal years 2000 through 2005, states allocated about $3 billion of their Master Settlement Agreement payments to education programs, including early childhood development, special education, scholarships, after-school services, and reading programs. Over this period, the amounts states allocated to education ranged from $280 million, or 2 percent of the total payments, in fiscal year 2004 to over $1.1 billion, or 9 percent, in fiscal year 2002.
In fiscal year 2005, 16 of the 46 states allocated some of the Master Settlement Agreement payments to education. Of the 16 states, only New Hampshire allocated more than two-thirds of its total payments to education; 4 states allocated between one-third and two-thirds to education; and 11 states allocated less than one-third. Thirty states did not allocate any of their payments to education-related activities. Michigan, New Hampshire, Nevada, and Colorado allocated the largest amounts to education in fiscal year 2005. Michigan allocated over $99 million of its payments to education for Merit Award scholarships and tuition incentive grants for higher education students; the Michigan Educational Assessment Program testing for K-12 students, nursing scholarships, the Michigan Education Savings Plan, and general higher education support. New Hampshire allocated $40 million of its payments to areas such as an Education Trust Fund, which distributes grants to school districts in the state. Nevada allocated about $33 million of its payments to education programs, citing a scholarship program for Nevada students attending Nevada's higher education institutions as a key recipient. Colorado allocated over $16 million of its payments to education, including its Read to Achieve program. Debt Service on Securitized Funds. From fiscal years 2000 through 2005, states allocated about $3 billion of their Master Settlement Agreement payments to servicing debt on securitized funds. This category consists of amounts allocated to servicing the debt issued when a state securitizes all or a portion of its Master Settlement Agreement payments. Over this period, the amounts states allocated for this purpose have ranged from $271 million, or about 2 percent of the total payments in fiscal year 2002, to about $1.4 billion, or about 24 percent, in fiscal year 2005. 
In fiscal year 2005, four states--California, Rhode Island, South Carolina, and Wisconsin--allocated 100 percent of their Master Settlement Agreement payments to servicing debt on securitized funds, while New Jersey allocated just under 100 percent. In addition, Alaska, Louisiana, and South Dakota allocated more than half of their payments for this purpose. In fiscal year 2005, California and New York allocated the largest amounts to servicing debt on securitized funds. Tobacco Control. From fiscal years 2000 through 2005, states allocated about $1.9 billion of their Master Settlement Agreement payments to tobacco control programs, including prevention, cessation, and counter-marketing. Over this period, the amounts states allocated to tobacco control ranged from $790 million, or about 6 percent of the total payments, in fiscal years 2000-2001 combined to $223 million, or about 2 percent, in fiscal year 2004. In fiscal year 2005, 34 of the 46 states allocated some of their Master Settlement Agreement payments to tobacco control programs. Of the 34 states, Wyoming allocated more than one-third of its payments to tobacco control, while 33 states allocated less than one-third. Twelve states did not allocate any of their payments to tobacco control-related programs. Pennsylvania and Ohio allocated more to tobacco control than the other states--about $44 million and $37 million, respectively--in fiscal year 2005. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Acting Director, Natural Resources and Environment, at (202) 512-3841 or [email protected]. Key contributors to this statement were Charles M.
Adams, Bart Fischer, Jennifer Harman, Natalie Herzog, Alison O'Neill, and Beverly Peterson. To standardize the information reported by the 46 states, we developed the following categories and definitions for the program areas to which states allocated their payments. Budget shortfalls: This category is comprised of amounts allocated to balance state budgets and close gaps or reduce deficits resulting from lower than anticipated revenues or increased mandatory or essential expenditures. Debt service on securitized funds: This category consists of amounts allocated to service the debt on bonds issued when the state securitized all or a portion of its Master Settlement Agreement payments. Economic development for tobacco regions: This category is comprised of amounts allocated for economic development projects in tobacco states such as infrastructure projects, education and job training programs, and research on alternative uses of tobacco and alternative crops. This category includes projects specifically designed to benefit tobacco growers as well as economic development that may serve a larger population within a tobacco state. Education: This category is comprised of amounts allocated for education programs such as day care, preschool, Head Start, early childhood education, elementary and secondary education, after-school programs, and higher education. This category does not include money for capital projects such as construction of school buildings. General purposes: This category is comprised of amounts allocated for attorneys' fees and other items, such as law enforcement or community development, which could not be placed into a more precise category. This category also includes amounts allocated to a state's general fund that were not earmarked for any particular purpose. Amounts used to balance state budgets and close gaps or reduce deficits should be categorized as budget shortfalls rather than general purposes. 
Health: This category is comprised of amounts allocated for direct health care services; health insurance, including Medicaid and the State Children's Health Insurance Program (SCHIP); hospitals; medical technology; public health services; and health research. This category does not include money for capital projects such as construction of health facilities. Infrastructure: This category is comprised of amounts allocated for capital projects such as construction and renovation of health care, education, and social services facilities; water and transportation projects; and municipal and state government buildings. This category includes retirement of debt owed on capital projects. Payments to tobacco growers: This category is comprised of amounts allocated for direct payments to tobacco growers, including subsidies and crop conversion programs. Reserves/rainy day funds: This category is comprised of amounts allocated to state budget reserves such as rainy day and budget stabilization funds not earmarked for specific programs. Amounts allocated to reserves that are earmarked for specific areas are categorized under those areas--e.g., reserve amounts earmarked for economic development purposes should be categorized in the economic development category. Social services: This category is comprised of amounts allocated for social services such as programs for the aging, assisted living, Meals on Wheels, drug courts, child welfare, and foster care. This category also includes amounts allocated to special funds established for children's programs. Tax reductions: This category is comprised of amounts allocated for tax reductions such as property tax rebates and earned income tax credits. Tobacco control: This category is comprised of amounts allocated for tobacco control programs such as prevention, including youth education, enforcement, and cessation services. 
Unallocated: This category comprises amounts not allocated for any specific purpose, such as amounts allocated to dedicated funds that have no specified purpose; amounts states chose not to allocate in the year Master Settlement Agreement payments were received and that will be available for allocation in a subsequent fiscal year; interest earned from dedicated funds not yet allocated; and amounts that have not been allocated because the state had not made a decision on the use of the Master Settlement Agreement payments.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the 1990s, states sued major tobacco companies to obtain reimbursement for health impairments caused by the public's use of tobacco. In 1998, four of the nation's largest tobacco companies signed a Master Settlement Agreement, agreeing to make annual payments to 46 states in perpetuity as reimbursement for past tobacco-related health care costs. Some states have arranged to receive advance proceeds based on the amounts that tobacco companies owe by issuing bonds backed by future payments. This testimony discusses (1) the amounts of tobacco settlement payments that the states received from fiscal years 2000 through 2005, the most recent year for which GAO has actual data, and (2) the states' allocations of these payments. We also include states' projected fiscal year 2006 allocations. The Farm Security and Rural Investment Act of 2002 required GAO to report annually, through fiscal year 2006, on how states used the payments made by tobacco companies. GAO based this testimony on five annual surveys of these 46 states' Master Settlement Agreement payments and how they allocated these payments. From fiscal year 2000 through 2005, the 46 states party to the Master Settlement Agreement received $52.6 billion in tobacco settlement payments. Of the $52.6 billion total, about $36.5 billion were payments from the tobacco companies and about $16 billion were advance payments which several states had arranged to receive by issuing bonds backed by their future payments from the tobacco companies. The Master Settlement Agreement imposed no restrictions on how states could spend their payments, and as such, the states have chosen to allocate them to a wide variety of activities. Some states told us that they viewed the settlement payments as an opportunity to fund needs that they were not able to fund previously due to the high costs of health care. 
States allocated the largest portion of their payments to health care--$16.8 billion or 30 percent--which includes Medicaid, health insurance, hospitals, medical technology, and research. States allocated the second largest portion to cover budget shortfalls--about $12.8 billion or about 22.9 percent. This category includes allocations to balance state budgets or reduce deficits that resulted from lower than anticipated revenues, increased mandatory spending, or essential expenditures. Included among the next largest categories are allocations for infrastructure projects, education, debt service on securitized proceeds, and tobacco control.
SEC's ability to directly oversee hedge fund advisers is limited to those that are required to register or voluntarily register with SEC as investment advisers. Registered hedge fund advisers are subject to the same disclosure requirements as all other registered investment advisers. These advisers must provide current information to both SEC and investors about their business practices and disciplinary history. Advisers also must maintain required books and records, and are subject to periodic examinations by SEC staff. Meanwhile, hedge funds, like other investors in publicly traded securities, are subject to various regulatory reporting requirements. For example, upon acquiring a 5 percent beneficial ownership position of a particular publicly traded security, a hedge fund may be required to file a report disclosing its holdings with SEC. In December 2004, SEC adopted an amendment to Rule 203(b)(3)-1, which had the effect of requiring certain hedge fund advisers that previously enjoyed the private adviser exemption from registration to register with SEC as investment advisers. In June 2006, a federal court vacated the 2004 amendment to Rule 203(b)(3)-1. According to SEC, when the rule was in effect (from February 1, 2006, through August 21, 2006), SEC was better able to identify hedge fund advisers. In August 2006, SEC estimated that 2,534 advisers that sponsored at least one hedge fund were registered with the agency. Since August 2006, SEC's ability to identify an adviser that manages a hedge fund has been further limited due to changes in filing requirements and to advisers that chose to retain registered status. As of April 2007, 488, or about 19 percent of the 2,534 advisers, had withdrawn their registrations. At the same time, 76 new registrants were added and some others changed their filing status, leaving an estimated 1,991 hedge fund advisers registered. 
While the list of registered hedge fund advisers is not all-inclusive, many of the largest hedge fund advisers--including 49 of the largest 78 U.S. hedge fund advisers--are registered. These 49 hedge fund advisers account for approximately $492 billion of assets under management, or about 33 percent of the estimated $1.5 trillion in hedge fund assets under management in the United States. In an April 2009 speech, Chairman Schapiro stated that there are approximately 150 active hedge fund investigations at SEC, some of which include possible Ponzi schemes, misappropriations, and performance smoothing. In a separate speech in April, Chairman Schapiro renewed SEC's call for greater oversight of hedge funds, including the registration of hedge fund advisers and potentially the hedge funds themselves. SEC uses a risk-based examination approach to select investment advisers for inspections. Under this approach, higher-risk investment advisers are examined every 3 years. One of the variables in determining risk level is the amount of assets under management. SEC officials told us that most hedge funds, even the larger ones, do not meet the dollar threshold to be automatically considered higher-risk. In fiscal year 2006, SEC examined 321 hedge fund advisers and identified issues (such as information disclosure, reporting and filing, personal trading, and asset valuation) that are not exclusive to hedge funds. Also, from 2004 to 2008, SEC oversaw the large internationally active securities firms on a consolidated basis. These securities firms have significant interaction with hedge funds through affiliates previously not overseen by SEC. One aspect of this program was to examine how the securities firms manage various risk exposures, including those from hedge fund-related activities such as providing prime brokerage services and acting as creditors and counterparties. SEC found areas where capital computation methodology and risk management practices can be improved.
Similarly, CFTC regulates those hedge fund advisers registered as CPOs or CTAs. CFTC has authorized the National Futures Association (NFA), a self-regulatory organization for the U.S. futures industry, to conduct day-to-day monitoring of registered CPOs and CTAs. In fiscal year 2006, NFA examinations of CPOs included six of the largest U.S. hedge fund advisers. In addition, SEC, CFTC, and bank regulators can use their existing authorities--to establish capital standards and reporting requirements, conduct risk-based examinations, and take enforcement actions--to oversee activities, including those involving hedge funds, of broker-dealers, of futures commission merchants, and of banks, respectively. While none of the regulators we interviewed specifically monitored hedge fund activities on an ongoing basis, generally regulators had increased reviews--by such means as targeted examinations--of systems and policies to mitigate counterparty credit risk at the large regulated entities. For instance, from 2004 to 2007, the Federal Reserve Bank of New York (FRBNY) had conducted various reviews--including horizontal reviews--of credit risk management practices that involved hedge fund-related activities at several large banks. On the basis of the results, FRBNY noted that the banks generally had strengthened practices for managing risk exposures to hedge funds, but the banks could further enhance firmwide risk management systems and practices, including expanded stress testing. The federal government does not specifically limit or monitor private sector pension investment in hedge funds and, while some states do so for public plans, their approaches vary. Although the Employee Retirement Income Security Act (ERISA) governs the investment practices of private sector pension plans, neither federal law nor regulation specifically limits pension investment in hedge funds or private equity.
Instead, ERISA requires that plan fiduciaries apply a "prudent man" standard, including diversifying assets and minimizing the risk of large losses. The prudent man standard does not explicitly prohibit investment in any specific category of investment. The standard focuses on the process for making investment decisions, requiring documentation of the investment decisions, due diligence, and ongoing monitoring of any managers hired to invest plan assets. Plan fiduciaries are expected to meet general standards of prudent investing and no specific restrictions on investments in hedge funds or private equity have been established. The Department of Labor is tasked with helping to ensure plan sponsors meet their fiduciary duties; however, it does not currently provide any guidance specific to pension plan investments in hedge funds or private equity. Conversely, some states specifically regulate and monitor public sector pension investment in hedge funds, but these approaches vary from state to state. While states generally have adopted a "prudent man" standard similar to that in ERISA, some states also explicitly restrict or prohibit pension plan investment in hedge funds or private equity. For instance, in Massachusetts, the agency overseeing public plans will not permit plans with less than $250 million in total assets to invest directly in hedge funds. Some states have detailed lists of authorized investments that exclude hedge funds and/or private equity. Other states may limit investment in certain investment vehicles or trading strategies employed by hedge fund or private equity fund managers. While some guidance exists for hedge fund investors, specific guidance aimed at pension plans could serve as an additional tool for plan fiduciaries when assessing whether and to what degree hedge funds would be a prudent investment. 
According to several 2006 and 2007 surveys of private and public sector plans, investments in hedge funds are typically a small portion of total plan assets--about 4 to 5 percent on average--but a considerable and growing number of plans invest in them. Updates to the surveys indicated that institutional investors plan to continue to invest in hedge funds. One 2008 survey reported that nearly half of over 200 plans surveyed had hedge funds and hedge-fund-type strategies. This was a large increase compared with the previous survey, in which 80 percent of the funds had no hedge fund exposure. Pension plans' investments in hedge funds were, in part, a response to stock market declines and disenchantment with traditional investment management in recent years. Officials with most of the plans we contacted indicated that they invested in hedge funds, at least in part, to reduce the volatility of returns. Several pension plan officials told us that they sought to obtain returns greater than the returns of the overall stock market through at least some of their hedge fund investments. Officials of pension plans that we contacted also stated that hedge funds are used to help diversify their overall portfolio and provide a vehicle that will, to some degree, be uncorrelated with the other investments in their portfolio. This reduced correlation was viewed as having a number of benefits, including reduction in overall portfolio volatility and risk. While any plan investment may fail to deliver expected returns over time, hedge fund investments pose investment challenges beyond those posed by traditional investments in stocks and bonds.
These include the reliance on the skill of hedge fund managers, who often have broad latitude to engage in complex investment techniques that can involve various financial instruments in various financial markets; use of leverage, which amplifies both potential gains and losses; and higher fees, which require a plan to earn a higher gross return to achieve a higher net return. In addition to investment challenges, hedge funds pose additional challenges, including: (1) limited information on a hedge fund's underlying assets and valuation (limited transparency); (2) contract provisions which limit an investor's ability to redeem an investment in a hedge fund for a defined period of time (limited liquidity); and (3) the possibility that a hedge fund's active or risky trading activity will result in losses due to operational failure such as trading errors or outright fraud (operational risk). Pension plans that invest in hedge funds take various steps to mitigate the risks and challenges posed by hedge fund investing, including developing a specific investment purpose and strategy, negotiating important investment terms, conducting due diligence, and investing through funds of funds. Such steps require greater effort, expertise and expense than required for more traditional investments. As a result, according to plan officials, state and federal regulators, and others, some pension plans, especially smaller plans, may not be equipped to address the various demands of hedge fund investing. Investors, creditors, and counterparties have the power to impose market discipline--rewarding well-managed hedge funds and reducing their exposure to risky, poorly managed hedge funds--during due diligence exercises and through ongoing monitoring. Creditors and counterparties also can impose market discipline through ongoing management of credit terms (such as collateral requirements). 
According to market participants doing business with larger hedge funds, hedge fund advisers have improved disclosure and become more transparent about their operations, including risk management practices, partly as a result of recent increases in investments by institutional investors with fiduciary responsibilities, such as pension plans, and guidance provided by regulators and industry groups. Despite the requirement that fund investors be sophisticated, some market participants suggested that not all prospective investors have the capacity or retain the expertise to analyze the information they receive from hedge funds, and some may choose to invest in a hedge fund largely as a result of its prior returns and may fail to fully evaluate its risks. Since the near collapse of LTCM in 1998, investors, creditors, and counterparties have increased their efforts to impose market discipline on hedge funds. Regulators and market participants also said creditors and counterparties have been conducting more extensive due diligence and monitoring risk exposures to their hedge fund clients since LTCM. The creditors and counterparties we interviewed said that they have exercised market discipline by tightening their credit standards for hedge funds and demanding greater disclosure. However, regulators and market participants also identified issues that limit the effectiveness of market discipline or illustrate failures to properly exercise it. For example, most large hedge funds use multiple prime brokers as service providers. Thus, no one broker may have all the data necessary to assess the total leverage used by a hedge fund client. In addition, the actions of creditors and counterparties may not fully prevent hedge funds from taking excessive risk if these creditors' and counterparties' risk controls are inadequate. For example, the risk controls may not keep pace with the increasing complexity of financial instruments and investment strategies that hedge funds employ. 
Similarly, regulators have been concerned that in competing for hedge fund clients, creditors sometimes relaxed credit standards. These factors can contribute to conditions that create the potential for systemic risk if breakdowns in market discipline and the risk controls of creditors and counterparties are sufficiently severe that losses by hedge funds in turn cause significant losses at key intermediaries or instability in financial markets. Although financial regulators and market participants recognize that the enhanced efforts by investors, creditors, and counterparties since LTCM impose greater market discipline on hedge funds, some remain concerned that hedge funds' activities are a potential source of systemic risk. Counterparty credit risk arises when hedge funds enter into transactions, including derivatives contracts, with regulated financial institutions. Some regulators regard counterparty credit risk as the primary channel for potentially creating systemic risk. At the time of our work in 2007, financial regulators said that the market discipline imposed by investors, creditors, and counterparties is the most effective mechanism for limiting the systemic risk from the activities of hedge funds (and other private pools of capital). The most important providers of market discipline are the large, global commercial and investment banks that are hedge funds' principal creditors and counterparties. As part of the credit extension process, creditors and counterparties typically require hedge funds to post collateral that can be sold in the event of default. OCC officials told us that losses at their supervised banks due to the extension of credit to hedge funds were rare. Similarly, several prime brokers told us that losses from hedge fund clients were extremely rare due to the asset-based lending they provided such funds. 
While regulators and others recognize that counterparty credit risk management has improved since LTCM, the ability of financial institutions to maintain the adequacy of these management processes in light of the dramatic growth in hedge fund activities remained a particular focus of concern. In addition to counterparty credit risk, other factors such as trading behavior can create conditions that contribute to systemic risk. Given certain market conditions, the simultaneous liquidation of similar positions by hedge funds that hold large positions on the same side of a trade could lead to losses or a liquidity crisis that might aggravate financial distress. Recognizing that market discipline cannot eliminate the potential systemic risk posed by hedge funds and others, regulators have been taking steps to better understand the potential for systemic risk and respond more effectively to financial disruptions that can spread across markets. For instance, they have examined particular hedge fund activities across regulated entities, mainly through international multilateral efforts. The PWG has issued guidelines that provide a framework for addressing risks associated with hedge funds and implemented protocols to respond to market turmoil. Finally, in September 2007, the PWG formed two private sector committees comprising hedge fund advisers and investors to address investor protection and systemic risk concerns, including counterparty credit risk management issues. On January 15, 2009, these two committees, the Asset Managers' Committee and the Investors' Committee, released their final best practices reports to hedge fund managers and investors. 
The final best practices for the asset managers establish a framework on five aspects of the hedge fund business--disclosure, valuation of assets, risk management, business operations, and compliance and conflicts of interest--to help hedge fund managers take a comprehensive approach to adopting best practices and serve as the foundation upon which those best practices are established. The final best practices for investors include a Fiduciary's Guide, which provides recommendations to individuals charged with evaluating the appropriateness of hedge funds as a component of an investment portfolio, and an Investor's Guide, which provides recommendations to those charged with executing and administering a hedge fund program if one is added to the investment portfolio. In closing, I would like to include a final thought. It is likely that hedge funds will continue to be a source of capital and liquidity in financial markets, by providing financing to new companies, industries, and markets, as well as a source of investments for institutional investors. Given our recent experience with the financial crisis, it is important that regulators have the information to monitor the activities of market participants that play a prominent role in the financial system, such as hedge funds, to protect investors and manage systemic risk. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Orice M. Williams at (202) 512-8678 or at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement.
In 2008, GAO issued two reports on hedge funds--pooled investment vehicles that are privately managed and often engage in active trading of various types of securities and commodity futures and options contracts--highlighting the need for continued regulatory attention and for guidance to better inform pension plans on the risks and challenges of hedge fund investments. Hedge funds generally qualified for exemption from certain securities laws and regulations, including the requirement to register as an investment company. Hedge funds have been deeply affected by the recent financial turmoil, but an industry survey of institutional investors suggests that these investors are still committed to investing in hedge funds in the long term. For the first time, hedge funds are allowed to borrow from the Federal Reserve under the Term Asset-Backed Securities Loan Facility. As such, the regulatory oversight issues and investment challenges raised by the 2008 reports still remain relevant. This testimony discusses: (1) federal regulators' oversight of hedge fund-related activities; (2) potential benefits, risks, and challenges pension plans face in investing in hedge funds; (3) the measures investors, creditors, and counterparties have taken to impose market discipline on hedge funds; and (4) the potential for systemic risk from hedge fund-related activities. To do this work, we relied upon our issued reports and updated data where possible. Under the existing regulatory structure, the Securities and Exchange Commission and Commodity Futures Trading Commission can provide direct oversight of registered hedge fund advisers, and along with federal bank regulators, they monitor hedge fund-related activities conducted at their regulated entities. Although some examinations found that banks generally have strengthened practices for managing risk exposures to hedge funds, regulators recommended that they enhance firmwide risk management systems and practices, including expanded stress testing.
The federal government does not specifically limit or monitor private sector plan investment in hedge funds. Under federal law, fiduciaries must comply with a standard of prudence, but no explicit restrictions on hedge funds exist. Pension plans invest in hedge funds to obtain a number of potential benefits, such as returns greater than the stock market and stable returns on investment. However, hedge funds also pose challenges and risks beyond those posed by traditional investments. For example, some investors may have little information on funds' underlying assets and their values, which limits the opportunity for oversight. Plan representatives said they take steps to mitigate these and other challenges, but doing so requires resources beyond the means of some plans. According to market participants, hedge fund advisers have improved disclosures and transparency about their operations as a result of industry guidance issued and pressure from investors and creditors and counterparties. Regulators and market participants said that creditors and counterparties have generally conducted more due diligence and tightened their credit standards for hedge funds. However, several factors may limit the effectiveness of market discipline or illustrate failures to properly exercise it. Further, if the risk controls of creditors and counterparties are inadequate, their actions may not prevent hedge funds from taking excessive risk and can contribute to conditions that create systemic risk if breakdowns in market discipline and risk controls are sufficiently severe that losses by hedge funds in turn cause significant losses at key intermediaries or in financial markets. Financial regulators and industry observers remain concerned about the adequacy of counterparty credit risk management at major financial institutions because it is a key factor in controlling the potential for hedge funds to become a source of systemic risk. 
Although hedge funds generally add liquidity to many markets, including distressed asset markets, in some circumstances hedge funds' activities can strain liquidity and contribute to financial distress. In response to their concerns regarding the adequacy of counterparty credit risk management, a group of regulators has collaborated to examine particular hedge fund-related activities across the entities they regulate, and the President's Working Group on Financial Markets (PWG) has issued guidelines for addressing the risks associated with hedge funds. The PWG also established two private sector committees that recently released guidelines to address systemic risk and investor protection.
There can be little doubt that we can--and must--get better outcomes from our weapon system investments. As seen in table 1, the value of these investments in recent years has been on the order of $1.5 trillion or more, making them a significant part of the federal discretionary budget. Large programs have an outsized impact on the aggregate portfolio. For example, Joint Strike Fighter costs have now consumed nearly a quarter of the entire portfolio. Yet, as indicated in table 1, 39 percent of programs have had unit cost growth of 25 percent or more. Recently, we have seen some modest improvements. For example, cost growth has declined between 2011 and 2012. We have also observed that a number of programs have improved their buying power by finding efficiencies in development or production and through requirements changes. On the other hand, cost and schedule growth remain significant when measured against programs' first full estimates. The performance of some very large programs is no longer reflected in the latest data because they are no longer acquisition programs. For example, the Future Combat Systems program was canceled in 2009 after an investment of about $18 billion, and the F-22 Raptor program has completed aircraft procurement. In addition, the Ballistic Missile Defense System is not included in any of the analysis because those investments have proceeded without a baseline of original estimates, so the many difficulties experienced in the roughly $130 billion program are not quantifiable. The enormity of the investment in acquisitions of weapon systems and its role in making U.S. fighting forces capable warrant continued attention and reform. The potential for savings and for better serving the warfighter argue against complacency. When one thinks of the weapon systems acquisition process, the image that comes to mind is that of the methodical procedure depicted on paper and in flow charts.
DOD's acquisition policy takes the perspective that the goal of acquisition is to obtain quality products that satisfy user needs at a fair and reasonable price. The sequence of events that comprise the process defined in policy reflects principles from disciplines such as systems engineering, as well as lessons learned and past reforms. The body of work we have done on benchmarking best practices has also been reflected in acquisition policy. Recent, significant changes to the policy include those introduced by the Weapon Systems Acquisition Reform Act of 2009 and the department's own "Better Buying Power" initiatives which, when fully implemented, should further strengthen practices that can lead to successful acquisitions. The policy provides a framework for developers of new weapons to gather knowledge that confirms that their technologies are mature, their designs are stable, and their production processes are in control. These steps are intended to ensure that a program will deliver the capabilities required, utilizing the resources--cost, schedule, technology, and personnel--available. Successful product developers ensure a high level of knowledge is achieved at key junctures in development. We characterize these junctures as knowledge points. While there can be differences of opinion over some of the specifics of the process, I do not believe there is much debate about the soundness of the basic steps. It is a clear picture of "what to do." Table 2 summarizes these steps and best practices, organized around three key knowledge points in a weapon system acquisition. Our work over the last year shows that, to the extent reforms like the Weapon Systems Acquisition Reform Act and DOD's Better Buying Power initiatives are being implemented, they are having a positive effect on individual programs.
For example, several programs we have reviewed are: making early trade-offs among cost, schedule, and technical performance; developing more realistic cost and schedule estimates; increasing the amount of testing during development; and placing greater emphasis on reliability. These improvements do not yet signify a trend or suggest that a corner has been turned. The reforms themselves still face implementation challenges such as staffing and clarity of guidance and will doubtless need refining as experience is gained. We have made a number of recommendations on how DOD can improve implementation of the Weapon Systems Acquisition Reform Act. To a large extent, the improvements we have seen tend to result from external pressure exerted by higher level offices within DOD on individual programs. In other words, the reforms have not yet been institutionalized within the services. We still see employment of other practices--that are not prescribed in policy--such as concurrent testing and production, optimistic assumptions, and delayed testing. These are the same kinds of practices that perpetuate the unsatisfactory results that have persisted in acquisitions through the decades, such as significant cost growth and schedule delays. They share a common dynamic: moving forward with programs before the knowledge needed to make decisions is sufficient. We have reported that most programs still proceed through the critical design review without having a stable design, even though we have made a number of recommendations on the importance of this review and how to prepare for it. Also, programs proceed with operational testing before they are ready. Other programs are significantly at odds with the acquisition process. Among these I would number the Ballistic Missile Defense System, Future Combat Systems (since canceled), the Littoral Combat Ship, and airships.
We recently reported on the Unmanned Carrier-Launched Airborne Surveillance and Strike program which proposes to complete the main acquisition steps of design, development, testing, manufacturing, and initial fielding before it formally enters the acquisition process. The fact that programs adopt practices that run counter to what policy and reform call for is evidence of the other pressures and incentives that significantly influence program practices and outcomes. I will turn to these next. An oft-cited quote of David Packard, former Deputy Secretary of Defense, is: "We all know what needs to be done. The question is why aren't we doing it?" To that point, reforms have been aimed mainly at the "what" versus the "why." They have championed sound management practices, such as realistic estimating, thorough testing, and accurate reporting. Today, these practices are well known. We need to consider that they mainly address the mechanisms of weapon acquisitions. Seen this way, the practices prescribed in policy are only partial remedies. The acquisition of weapons is much more complex than policy describes and involves very basic and strongly reinforced incentives to field weapons. Accordingly, rival practices, not normally viewed as good management techniques, comprise an effective stratagem for fielding a weapon because they reduce the risk that the program will be interrupted or called into question. I will now discuss several factors that illustrate the pressures that create incentives to deviate from sound acquisition management practices. The process of acquiring new weapons is (1) shaped by its different participants and (2) far more complex than the seemingly straightforward purchase of equipment to defeat an enemy threat. Collectively, as participants' needs are translated into actions on weapon programs, the purpose of such programs transcends efficiently filling voids in military capability. 
Weapons have become integral to policy decisions, definitions of roles and functions, justifications of budget levels and shares, service reputations, influence of oversight organizations, defense spending in localities, the industrial base, and individual careers. Thus, the reasons "why" a weapon acquisition program is started are manifold and acquisitions do not merely provide technical solutions. While individual participants see their needs as rational and aligned with the national interest, collectively, these needs create incentives for pushing programs and encouraging undue optimism, parochialism, and other compromises of good judgment. Under these circumstances, persistent performance problems, cost growth, schedule slippage, and difficulties with production and field support cannot all be attributed to errors, lack of expertise, or unforeseeable events. Rather, a level of these problems is embedded as the undesirable, but apparently acceptable, consequence of the process. These problems persist not because they are overlooked or under-regulated, but because they enable more programs to survive and thus more needs to be met. The problems are not the fault of any single participant; they are the collective responsibility of all participants. Thus, the various pressures that accompany the reasons why a program is started can also affect and compromise the practices employed in its acquisition. I would like to highlight three characteristics about program funding that create incentives in decision making that can run counter to sound acquisition practices. First, there is an important difference between what investments in new products represent for a private firm and for DOD. In a private firm, a decision to invest in a new product, like a new car design, represents an expense. Company funds must be expended that will not provide a revenue return until the product is developed, produced, and sold. 
In DOD, new products, in the form of budget line items, can represent revenue. An agency may be able to justify a larger budget if it can win approval for more programs. Thus, weapon system programs can be viewed both as expenditures and revenue generators. Second, budgets to support major program commitments must be approved well ahead of when the information needed to support the decision to commit is available. Take, for example, a decision to start a new program scheduled for August 2016. Funding for that decision would have to be included in the fiscal year 2016 budget. This budget would be submitted to Congress in February 2015--18 months before the program decision review is actually held. DOD would have committed to the funding before the budget request went to Congress. It is likely that the requirements, technologies, and cost estimates for the new program--essential to successful execution--may not be very solid at the time of funding approval. Once the hard-fought budget debates put money on the table for a program, it is very hard to take it away later, when the actual program decision point is reached. Third, to the extent a program wins funding, the principles and practices it embodies are thus endorsed. So, if a program is funded despite having an unrealistic schedule or requirements, that decision reinforces those characteristics, not sound acquisition processes. Pressures to make exceptions for programs that do not measure up are rationalized in a number of ways: an urgent threat needs to be met; a production capability needs to be preserved; despite shortfalls, the new system is more capable than the one it is replacing; or the new system's problems will be fixed in the future. It is the funding approvals that ultimately define acquisition policy. DOD has a unique relationship with the defense industry that differs from the commercial marketplace. 
The combination of a single buyer (DOD), a few very large prime contractors in each segment of the industry, and a limited number of weapon programs constitutes a structure for doing business that is altogether different from a classic free market. For instance, there is less competition, more regulation, and once a contract is awarded, the contractor has considerable power. Moreover, in the defense marketplace, the firm and the customer have jointly developed the product and, as we have reported previously, the closer the product comes to production the more the customer becomes invested and the less likely they are to walk away from that investment. While a defense firm and a military customer may share some of the same goals, important goals are different. Defense firms are accountable to their shareholders and can also build constituencies outside the direct business relationship between them and their customers. This relationship does not fit easily into a contract. J. Ronald Fox, author of Defense Acquisition Reform 1960-2009: An Elusive Goal, sums up the situation as follows. "Many defense acquisition problems are rooted in the mistaken belief that the defense industry and the government-industry relationship in defense acquisition fit naturally into the free enterprise model. Most Americans believe that the defense industry, as a part of private industry, is equipped to handle any kind of development or production program. They also by and large distrust government 'interference' in private enterprise. Government and industry defense managers often go to great lengths to preserve the myth that large defense programs are developed and produced through the free enterprise system." But neither the defense industry nor defense programs are governed by the free market; "major defense acquisition programs rarely offer incentives resembling those of the commercial marketplace." Dr. 
Fox also points out that in private industry, the program manager concept works well because the managers have genuine decision-making authority, years of training and experience, and understand the roles and tactics within government and industry. In contrast, Dr. Fox concludes that DOD program managers lack the training, experience, and stature of their private sector counterparts, and are influenced by others in their service, DOD, and Congress. Program managers indicated to us that the acquisition process does not enable them to succeed because it does not empower them to make decisions on whether the program is ready to proceed forward or even to make relatively small trade-offs between resources and requirements as unexpected problems are encountered. Program managers said that they are also not able to shift personnel resources to respond to changes affecting the program. Program managers may also be underprepared for their position or forced into the near-term perspective of their tenures. In this environment, the effectiveness of management can rise and fall on the strength of individuals; accountability for long-term results is, at best, elusive. In my more than 30 years in the area, I do not know of a time or era when weapon system programs did not exhibit the same symptoms that they do today. Similarly, I do not subscribe to the view that the acquisition process is too rigid and cumbersome. Clearly, this could be the case if every acquisition followed the same process and strategy without exception. But they do not. We repeatedly report on programs approved to modify policy and follow their own process. DOD refers to this as tailoring, and we see plenty of it. At this point, we should build on existing reforms--not necessarily by revisiting the process itself but by augmenting it by tackling incentives. 
To do this, we need to look differently at the familiar outcomes of weapon systems acquisition--such as cost growth, schedule delays, large support burdens, and reduced buying power. Some of these undesirable outcomes are clearly due to honest mistakes and unforeseen obstacles. However, they also occur not because they are inadvertent but because they are encouraged by the incentive structure. I do not think it is sufficient to define the problem as an objective process that is broken. Rather, it is more accurate to view the problem as a sophisticated process whose consistent results are indicative of its being in equilibrium. The rules and policies are clear about what to do, but other incentives force compromises. The persistence of undesirable outcomes such as cost growth and schedule delays suggests that these are consequences that participants in the process have been willing to accept. Drawing on our extensive body of work in weapon systems acquisition, I have four areas of focus regarding where to go from here. These are not intended to be all-encompassing, but rather, practical places to start the hard work of realigning incentives with desired results. Reinforce desirable principles at the start of new programs: The principles and practices programs embrace are determined not by policy, but by decisions. These decisions involve more than the program at hand: they send signals on what is acceptable. If programs that do not abide by sound acquisition principles win funding, then seeds of poor outcomes are planted. The highest point of leverage is at the start of a new program. Decision makers must ensure that new programs exhibit desirable principles before they are approved and funded. Programs that present well-informed acquisition strategies with reasonable and incremental requirements and reasonable assumptions about available funding should be given credit for a good business case. 
As an example, the Presidential Helicopter, the Armored Multi Purpose Vehicle, the Enhanced Polar System, and the Ground Combat Vehicle are all acquisitions estimated to cost at least a billion dollars, in some cases several billions of dollars, and slated to start in 2014. These could be viewed as a "freshman" class of acquisitions. There is such a class every year, and it would be beneficial for DOD and Congress to assess them as a group to ensure that they embody the right principles and practices. Identify significant program risks upfront and resource them: Weapon acquisition programs by their nature involve risks, some much more than others. The desired state is not zero risk or elimination of all cost growth. But we can do better than we do now. The primary consequences of risk are often the need for additional time and money. Yet, when significant risks are taken, they are often taken under the guise that they are manageable and that risk mitigation plans are in place. In my experience, such plans do not set aside time and money to account for the risks taken. Yet in today's climate, it is understandable--any sign of weakness in a program can doom its funding. This needs to change. If programs are to take significant risks, whether they are technical in nature or related to an accelerated schedule, these risks should be declared and the resource consequences acknowledged. Less risky options and potential off-ramps should be presented as alternatives. Decisions can then be made with full information, including decisions to accept the risks identified. If the risks are acknowledged and accepted by DOD and Congress, the program should be supported. More closely align budget decisions and program decisions: Because budget decisions are often made years ahead of program decisions, they depend on the promises and projections of program sponsors. 
Contentious budget battles create incentives for sponsors to be optimistic and make it hard to change course as projections fade in the face of information. This is not about bad actors; rather, optimism is a rational response to the way money flows to programs. Aside from these consequences, planning ahead to make sure money is available in the future is a sound practice. I am not sure there is an obvious remedy for this. But I believe ways to have budget decisions follow program decisions should be explored, without sacrificing the discipline of establishing long-term affordability. Attract, train, and retain acquisition staff and management: Dr. Fox's book does an excellent job of laying out the flaws in the current ways DOD selects, trains, and provides a career path for program managers. I refer you to these, as they are sound criticisms. We must also think about supporting people below the program manager who are also instrumental to program outcomes, including engineers, contracting officers, cost analysts, testers, and logisticians. There have been initiatives to support these people, but they have not been consistent over time. The tenure for acquisition executives is a more challenging prospect in that they arguably are at the top of their profession and already expert. What can be done to keep good people in these jobs longer? I am not sure of the answer, but I believe part of the problem is that the contentious environment of acquisition grinds good people down at all levels. In top commercial firms, a new product development is launched with a strong team, corporate funding support, and a time frame of 5 to 6 years or less. In DOD, new weapon system developments can take twice as long, have turnover in key positions, and every year must contend for funding. This does not necessarily make for an attractive career. Mr. Chairman, this concludes my statement and I would be happy to answer any questions. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD's acquisition of major weapon systems has been on GAO's high risk list since 1990. Over the past 50 years, Congress and DOD have continually explored ways to improve acquisition outcomes, including reforms that have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. Too often, GAO reports on the same kinds of problems today that it did over 20 years ago. The topic of today's hearing is: "25 Years of Acquisition Reform: Where Do We Go From Here?" To that end, this testimony discusses (1) the performance of DOD's major defense acquisition program portfolio; (2) the management policies and processes currently in place to guide those acquisitions; (3) the incentives to deviate from otherwise sound acquisition practices; and (4) suggestions to temper these incentives. This statement draws from GAO's extensive body of work on DOD's acquisition of weapon systems. The Department of Defense (DOD) must get better outcomes from its weapon system investments, which in recent years have totaled around $1.5 trillion or more. Recently, there have been some improvements, owing in part to reforms. For example, cost growth declined between 2011 and 2012 and a number of programs also improved their buying power by finding efficiencies in development or production and requirements changes. Still, cost and schedule growth remain significant; 39 percent of fiscal 2012 programs have had unit cost growth of 25 percent or more. DOD's acquisition policy provides a methodological framework for developers to gather knowledge that confirms that their technologies are mature, their designs stable, and their production processes are in control. The Weapon Systems Acquisition Reform Act of 2009 and DOD's recent "Better Buying Power" initiatives introduced significant changes that, when fully implemented, should further strengthen practices that can lead to successful acquisitions. 
GAO has also made numerous recommendations to improve the acquisition process, based on its extensive work in the area. While recent reforms have benefited individual programs, it is premature to say there is a trend or a corner has been turned. The reforms still face implementation challenges and have not yet been institutionalized within the services. Reforms that focus on the methodological procedures of the acquisition process are only partial remedies because they do not address incentives to deviate from sound practices. Weapons acquisition is a complicated enterprise, complete with unintended incentives that encourage moving programs forward by delaying testing and employing other problematic practices. These incentives stem from several factors. For example, the different participants in the acquisition process impose conflicting demands on weapon programs so that their purpose transcends just filling voids in military capability. Also, the budget process forces funding decisions to be made well in advance of program decisions, which encourages undue optimism about program risks and costs. Finally, DOD program managers' short tenures and limitations in experience and training can foster a short-term focus and put them at a disadvantage with their industry counterparts. Drawing on its extensive body of work in weapon systems acquisition, GAO sees several areas of focus regarding where to go from here: at the start of new programs, using funding decisions to reinforce desirable principles such as well-informed acquisition strategies; identifying significant risks up front and resourcing them; exploring ways to align budget decisions and program decisions more closely; and attracting, training, and retaining acquisition staff and managers so that they are both empowered and accountable for program outcomes. These areas are not intended to be all-encompassing, but rather, practical places to start the hard work of realigning incentives with desired results.
Health care in the United States is a highly decentralized system, with stakeholders that include not only the entire population as consumers of health care, but also all levels of government, health care providers such as medical centers and community hospitals, patient advocates, health professionals, major employers, nonprofit health organizations, insurance companies, commercial technology providers, and others. In this environment, clinical and other health-related information is stored in a complex collection of paper files, information systems, and organizations, but much of it continues to be stored and shared on paper. Successfully implementing health IT to replace paper and manual processes has been shown to yield benefits in both cost savings and improved quality of care. For example, we reported to this committee in 2003 that a 1,951-bed teaching hospital stated that it had realized about $8.6 million in annual savings by replacing outpatient paper medical charts with electronic medical records. This hospital also reported saving more than $2.8 million annually by replacing its manual process for managing medical records with an electronic process to provide access to laboratory results and reports. Other technologies, such as bar coding of certain human drug and biological product labels, have also been shown to save money and reduce medical errors. Health care organizations reported that IT contributed other benefits, such as shorter hospital stays, faster communication of test results, improved management of chronic diseases, and improved accuracy in capturing charges associated with diagnostic and procedure codes. There is also potential benefit from improving and expanding existing health IT systems. We have reported that some hospitals are expanding their IT systems to support improvements in quality of care. 
In April 2007, we released a study on the processes used by eight hospitals to collect and submit data on their quality of care to HHS's Centers for Medicare & Medicaid Services (CMS). Among the hospitals we visited, officials noted that having electronic records was an advantage for collecting the quality data because electronic records were more accessible and legible than paper records, and the electronic quality data could also be used for other purposes (such as reminders to physicians). Officials at each of the hospitals reported using the quality data to make specific changes in their internal procedures designed to improve care. However, hospital officials also reported several limitations in their existing IT systems that constrained the ability to support the collection of their quality data. For example, hospitals reported having a mix of paper and electronic systems, having data recorded only as unstructured narrative or other text, and having multiple systems within a single hospital that could not access each other's data. Although it was expected to take several years, all the hospitals in our study were working to expand the scope and functionality of their IT systems. This example illustrates, among other things, that making health care information electronically available depends on interoperability--that is, the ability of two or more systems or components to exchange information and to use the information that has been exchanged. This capability is important because it allows patients' electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the necessary information required for optimal care. 
(Paper-based health records--if available--also provide necessary information, but unlike electronic health records, do not provide automated decision support capabilities, such as alerts about a particular patient's health, or other advantages of automation.) Interoperability may be achieved at different levels (see fig. 1). For example, at the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies). At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At still a lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. It is important to note that not all data require the same level of interoperability. For example, computable pharmacy and drug allergy data would allow automated alerts to help medical personnel avoid administering inappropriate drugs. On the other hand, for such narrative data as clinical notes, unstructured, viewable data may be sufficient. Achieving even a minimal level of electronic interoperability would potentially make relevant information available to clinicians. Any level of interoperability depends on the use of agreed-upon standards to ensure that information can be shared and used. In the health IT field, standards may govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. * For example, vocabulary standards provide common definitions and codes for medical terms and determine how information will be documented for diagnoses and procedures. 
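The distinction among these levels can be illustrated with a minimal, hypothetical sketch. The record formats, field names, and drug names below are invented for illustration; they are not drawn from any real health IT standard or product.

```python
# Unstructured, viewable data: free text that a human must read and interpret.
UNSTRUCTURED_NOTE = "Pt reports rash after penicillin in 2004; no other allergies."

# Structured, viewable data: labeled fields make items of interest easy to find.
STRUCTURED_RECORD = {"allergies": ["penicillin"], "notes": "rash in 2004"}

def check_order(record, drug):
    """Computable data: software can act on the allergy list directly,
    e.g., alerting a clinician before an inappropriate drug is administered."""
    if drug.lower() in (a.lower() for a in record.get("allergies", [])):
        return "ALERT: patient is allergic to " + drug
    return None

def search_notes(text, term):
    """With unstructured data, a system can only keyword-search the text;
    finding and interpreting the relevant passage is left to the reader."""
    return term.lower() in text.lower()
```

The structured record supports an automated alert (`check_order(STRUCTURED_RECORD, "Penicillin")`), while the unstructured note supports only a search whose hits a clinician must still read, which is why computable data matter for uses such as drug-allergy checking.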
These standards are intended to lead to consistent descriptions of a patient's medical condition by all practitioners. The use of common terminology helps in the clinical care delivery process, enables consistent data analysis from organization to organization, and facilitates transmission of information. Without such standards, the terms used to describe the same diagnoses and procedures may vary (the condition known as hepatitis, for example, may be described as a liver inflammation). The use of different terms to indicate the same condition or treatment complicates retrieval and reduces the reliability and consistency of data. * Another example is messaging standards, which establish the order and sequence of data during transmission and provide for the uniform and predictable electronic exchange of data. These standards dictate the segments in a specific medical transmission. For example, they might require the first segment to include the patient's name, hospital number, and birth date. A series of subsequent segments might transmit the results of a complete blood count, dictating one result (e.g., iron content) per segment. Messaging standards can be adopted to enable intelligible communication between organizations via the Internet or some other communications pathway. Without them, the interoperability of health IT systems may be limited, reducing the data that can be shared. Developing interoperability standards requires the participation of the relevant stakeholders who will be sharing information. In the case of health IT, stakeholders include both the public and private sectors. The public health system is made up of the federal, state, tribal, and local agencies that may deliver health care services to the population and monitor its health. 
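A segment-oriented message of the kind described above can be sketched as follows. The format is loosely in the spirit of pipe-delimited HL7 Version 2 segments, and the segment names PID and OBX are borrowed from that tradition, but the field layout here is invented for illustration and is not a real specification.

```python
def build_message(name, hospital_no, birth_date, results):
    """Compose a message whose first segment identifies the patient and
    whose later segments each carry exactly one lab result."""
    segments = ["PID|%s|%s|%s" % (name, hospital_no, birth_date)]
    for test_name, value, units in results:
        segments.append("OBX|%s|%s|%s" % (test_name, value, units))
    return "\r".join(segments)  # segment separator, per the agreed convention

def parse_message(message):
    """Recover the patient identity and results by relying on the same
    agreed-upon segment order and field positions used by the sender."""
    segments = [s.split("|") for s in message.split("\r")]
    assert segments[0][0] == "PID", "first segment must identify the patient"
    patient = {"name": segments[0][1], "id": segments[0][2], "dob": segments[0][3]}
    results = [{"test": s[1], "value": s[2], "units": s[3]}
               for s in segments[1:] if s[0] == "OBX"]
    return patient, results
```

Because sender and receiver agree in advance on the segment order and field positions, two independently built systems can exchange these messages intelligibly; absent that agreement, the receiver has no reliable way to tell a hospital number from a birth date.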
Private health system participants include hospitals, physicians, pharmacies, nursing homes, and other organizations that deliver health care services to individual patients, as well as multiple vendors that provide health IT solutions. Widespread adoption of health IT has the potential to improve the efficiency and quality of health care. However, transitioning to this capability is a challenging endeavor that requires attention to many important considerations. Among these are mechanisms to establish clearly defined health IT standards that are agreed upon by all important stakeholders, comprehensive planning grounded in results-oriented milestones and measures, and an approach to privacy protection that encourages acceptance and adoption of electronic health records. Attempting to expand the use of health IT without fully addressing these issues would put at risk the ultimate goal of achieving more effective health care. The need for health care standards has been broadly recognized for a number of years. In previous work, we identified lessons learned by U.S. agencies and by other countries from their experiences. Among other lessons, they reported the need to define and adopt common standards and terminology to achieve data quality and consistency, system interoperability, and information protection. In May 2003, we reported that federal agencies recognized the need for health care standards and were making efforts to strengthen and increase their use. However, while they had made progress in defining standards, they had not met challenges in identifying and implementing standards necessary to support interoperability across the health care sector. We stated that until these challenges were addressed, agencies risked promulgating piecemeal and disparate systems unable to exchange data with each other when needed. 
We recommended that the Secretary of HHS define activities for ensuring that the various standards-setting organizations coordinate their efforts and reach further consensus on the definition and use of standards; establish milestones for defining and implementing standards; and create a mechanism to monitor the implementation of standards through the health care industry. HHS implemented this recommendation through the activities of the Office of the National Coordinator for Health Information Technology (established within HHS in April 2004). Through the Office of the National Coordinator, HHS designated three primary organizations, made up of stakeholders from both the public and private health care sectors, to play major roles in identifying and implementing standards and expanding the implementation of health IT: * The American Health Information Community (now known as the National eHealth Collaborative) was created by the Secretary of HHS to make recommendations on how to accelerate the development and adoption of health IT, including advancing interoperability, identifying health IT standards, advancing nationwide health information exchange, and protecting personal health information. Created in September 2005 as a federal advisory commission, the organization recently became a nonprofit membership organization. It includes representatives from both the public and private sectors, including high-level officials of VA and other federal and state agencies, as well as health systems, payers, health professionals, medical centers, community hospitals, patient advocates, major employers, nonprofit health organizations, commercial technology providers, and others. 
Among other things, the organization has identified health care areas of high priority and developed "use cases" for these areas (use cases are descriptions of events or scenarios, such as Public Health Case Reporting, that provide the context in which standards would be applicable, detailing what needs to be done to achieve a specific mission or goal). * The Healthcare Information Technology Standards Panel (HITSP), sponsored by the American National Standards Institute and funded by the Office of the National Coordinator, was established in October 2005 as a public-private partnership to identify competing standards for the use cases developed by the American Health Information Community and to "harmonize" the standards. As of March 2008, nearly 400 organizations representing consumers, healthcare providers, public health agencies, government agencies, standards developing organizations, and other stakeholders were participating in the panel and its committees. The panel also develops the interoperability specifications that are needed for implementing the standards. In collaboration with the National Institute of Standards and Technology, HITSP selected initial standards to address, among other things, requirements for message and document formats and for technical networking. Federal agencies that administer or sponsor federal health programs are now required to implement these standards, in accordance with an August 2006 Executive Order. * The Certification Commission for Healthcare Information Technology is an independent, nonprofit organization that certifies health IT products, such as electronic health records systems. HHS entered into a contract with the commission in October 2005 to develop and evaluate the certification criteria and inspection process for electronic health records. HHS describes certification as the process by which vendors' health IT systems are established to meet interoperability standards. 
The certification criteria defined by the commission incorporate the interoperability standards and specifications defined by HITSP. The results of this effort are intended to help encourage health care providers throughout the nation to implement electronic health records by giving them assurance that the systems will provide needed capabilities (including ensuring security and confidentiality) and that the electronic records will work with other systems without reprogramming. The interconnected work of these organizations to identify and promote the implementation of standards is important to the overall effort to advance the use of interoperable health IT. For example, according to HHS, the HITSP standards are incorporated into the National Coordinator's ongoing initiative to enable health care entities--such as providers, hospitals, and clinical labs--to exchange electronic health information on a nationwide basis. Under this initiative, HHS awarded contracts to nine regional and state health information exchanges as part of its efforts to provide prototypes of nationwide networks of health information exchanges. Such exchanges are intended to eventually form a "network of networks" that is to produce the envisioned Nationwide Health Information Network (NHIN). According to HHS, the department planned to demonstrate the experiences and lessons learned from this work in December 2008, including defining specifications based upon the work of HITSP and standards development organizations to facilitate interoperable data exchange among the participants, testing interoperability against these specifications, and developing trust agreements among participants to protect the information exchanged. 
HHS plans to place the nationwide health information exchange specifications defined by the participating organizations, as well as related testing materials, in the public domain, so that they can be used by other health information exchange organizations to guide their efforts to adopt interoperable health IT. The products of the federal standards initiatives are also being used by DOD and VA in their ongoing efforts to achieve the seamless exchange of health information on military personnel and veterans. The two departments have committed to the goal of adopting applicable current and emerging HITSP standards. According to department officials, DOD is also taking steps to ensure compliance with standards through certification. To ensure that the electronic health records produced by the department's modernized health information system, AHLTA, are compliant with standards, it is arranging for certification through the Certification Commission for Healthcare Information Technology. Both departments are also participating in the National Coordinator's standards initiatives. The involvement of the departments in these activities is an important mechanism for aligning their electronic health records with emerging federal standards. Federal efforts to implement health IT standards are ongoing and some progress has been made. However, until agencies are able to demonstrate interoperable health information exchange between stakeholders on a broader level, the overall effectiveness of their efforts will remain unclear. In this regard, continued work on standards initiatives will remain essential for extending the use of health IT and fully achieving its potential benefits, particularly as both information technology and medicine advance. 
Using interoperable health IT to help improve the efficiency and quality of health care is a complex goal that involves a range of stakeholders and numerous activities taking place over an expanse of time; in view of this complexity, it is important to develop comprehensive plans that are grounded in results-oriented milestones and performance measures. Without comprehensive plans, it is difficult to coordinate the many activities under way and integrate their outcomes. Milestones and performance measures allow the results of the activities to be monitored and assessed, so that corrective action can be taken if needed. Since it was established in 2004, the Office of the National Coordinator has pursued a number of health IT initiatives (some of which we described above), aimed at the expansion of electronic health records, identification of interoperability standards, advancement of nationwide health information exchange, and protection of personal health information. It also developed a framework for strategic action for achieving an interoperable national infrastructure for health IT, which was released in 2004. We have noted accomplishments resulting from these various initiatives, but we also observed that the strategic framework did not include the detailed plans, milestones, and performance measures needed to ensure that the department integrated the outcomes of its various health IT initiatives and met its overall goals. Given the many activities to be coordinated and the many stakeholders involved, we recommended in May 2005 that HHS define a national strategy for health IT that would include the necessary detailed plans, milestones, and performance measures, which are essential to help ensure progress toward the President's goal for most Americans to have access to interoperable electronic health records by 2014. The department agreed with our recommendation, and in June 2008 it released a four-year strategic plan. 
If the plan's milestones and measures for achieving an interoperable nationwide infrastructure for health IT are appropriate and properly implemented, the plan could help ensure that HHS's various health IT initiatives are integrated and provide a useful roadmap to support the goal of widespread adoption of interoperable electronic health records. Across our health IT work at HHS and elsewhere, we have seen other instances in which planning activities have not been sufficiently comprehensive. An example is the experience of DOD and VA, which have faced considerable challenges in project planning and management in the course of their work on the seamless exchange of electronic health information. As far back as 2001 and 2002, we noted management weaknesses, such as inadequate accountability and poor planning and oversight, and recommended that the departments apply principles of sound project management. The departments' efforts to meet the recent requirements of the National Defense Authorization Act for Fiscal Year 2008 provide additional examples of such challenges, raising concerns regarding their ability to meet the September 2009 deadline for developing and implementing interoperable electronic health record systems or capabilities. In July 2008, we identified steps that the departments had taken to establish an interagency program office and implementation plan, as required. According to the departments, they intended the program office to play a crucial role in accelerating efforts to achieve electronic health records and capabilities that allow for full interoperability, and they had appointed an Acting Director from DOD and an Acting Deputy Director from VA. According to the Acting Director, the departments also have detailed staff and provided temporary space and equipment to a transition team. 
However, the newly established program office was not expected to be fully operational until the end of 2008--allowing the departments at most 9 months to meet the deadline for full interoperability. Further, we reported other planning and management weaknesses. For example, the departments developed a DOD/VA Information Interoperability Plan in September 2008, which is intended to address interoperability issues and define tasks required to guide the development and implementation of an interoperable electronic health record capability. Although the plan included some milestones and schedules, it lacked milestones for completing many of the activities defined in the plan. Accordingly, we recommended that the departments give priority to fully establishing the interagency program office and finalizing the implementation plan. Without an effective plan and a program office to ensure its implementation, the risk is increased that the two departments will not be able to meet the September 2009 deadline. As the use of electronic health information exchange increases, so does the need to protect personal health information from inappropriate disclosure. The capacity of health information exchange organizations to store and manage a large amount of electronic health information increases the risk that a breach in security could expose the personal health information of numerous individuals. Addressing and mitigating this risk is essential to encourage public acceptance of the increased use of health IT and electronic medical records. Recognizing the importance of privacy protection, HHS included security and privacy measures in its 2004 framework for strategic action, and in September 2005, it awarded a contract to the Health Information Security and Privacy Collaboration as part of its efforts to provide a nationwide synthesis of information to inform privacy and security policymaking at federal, state, and local levels. 
The collaboration selected 33 states and Puerto Rico as locations in which to perform assessments of organization-level privacy- and security-related policies and practices that affect interoperable electronic health information exchange and their bases, including laws and regulations. As a result of this work, HHS developed and made available to the public a toolkit to guide health information exchange organizations in conducting assessments of business practices, policies, and state laws that govern the privacy and security of health information exchange. However, we reported in January 2007 that HHS initiated these and other important privacy-related efforts without first defining an overall approach for protecting privacy. In our report, we identified key privacy principles and challenges to protecting electronic personal health information. * Examples of principles that health IT programs and applications need to address include the uses and disclosures principle, which provides limits to the circumstances in which an individual's protected health information may be used or disclosed, and the access principle, which establishes individuals' rights to review and obtain a copy of their protected health information in certain circumstances. * Key challenges include understanding and resolving legal and policy issues (for example, those related to variations in states' privacy laws), ensuring that only the minimum amount of information necessary is disclosed to only those entities authorized to receive the information, ensuring individuals' rights to request access and amendments to their own health information, and implementing adequate security measures for protecting health information. 
We recommended that HHS define and implement an overall privacy approach that identifies milestones for integrating the outcomes of its privacy-related initiatives, ensures that key privacy principles are fully addressed, and addresses challenges associated with the nationwide exchange of health information. In September 2008, we reported that HHS had begun to establish an overall approach for protecting the privacy of personal electronic health information--for example, it had identified milestones and an entity responsible for integrating the outcomes of its many privacy- related initiatives. Further, the federal health IT strategic plan released in June 2008 includes privacy and security objectives along with strategies and target dates for achieving them. However, in our view, more actions are needed. Specifically, within its approach, the department had not defined a process to ensure that the key privacy principles and challenges we had identified were fully and adequately addressed. This process should include, for example, steps for ensuring that all stakeholders' contributions to defining privacy-related activities are appropriately considered and that individual inputs to the privacy framework are effectively assessed and prioritized to achieve comprehensive coverage of all key privacy principles and challenges. Without such a process, stakeholders may lack the overall policies and guidance needed to assist them in their efforts to ensure that privacy protection measures are consistently built into health IT programs and applications. Moreover, the department may miss an opportunity to establish the high degree of public confidence and trust needed to help ensure the success of a nationwide health information network. To address these concerns, we recommended in our September report that HHS include in its overall privacy approach a process for ensuring that key privacy principles and challenges are completely and adequately addressed. 
Lacking an overall approach for protecting the privacy of personal electronic health information, there is reduced assurance that privacy protection measures will be consistently built into health IT programs and applications. Without such assurance, public acceptance of health IT may be at risk. In closing, Mr. Chairman, many important steps have been taken, but more is needed before we can make a successful transition to a nationwide health IT capability and take full advantage of potential improvements in care and efficiency that this could enable. It is important to have structures and mechanisms to build, maintain, and expand a robust foundation of health IT standards that are agreed upon by all important stakeholders. Further, given the complexity of the activities required to implement health IT and the large number of stakeholders, completing and implementing comprehensive planning activities are also key to ensuring program success. Finally, an overall privacy approach that ensures public confidence and trust is essential to successfully promoting the use and acceptance of health IT. Without further action taken to address these areas of concern, opportunities to achieve greater efficiencies and improvements in the quality of the nation's health care may not be realized. This concludes my statement. I would be pleased to answer any questions that you or other Members of the Committee may have. If you should have any questions about this statement, please contact me at (202) 512-6304 or by e-mail at [email protected]. Other individuals who made key contributions to this statement are Barbara S. Collier, Heather A. Collins, Amanda C. Gill, Linda T. Kohn, Rebecca E. LaPaze, and Teresa F. Tucker. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As GAO and others have reported, the use of information technology (IT) has enormous potential to help improve the quality of health care and is important for improving the performance of the U.S. health care system. Given its role in providing health care, the federal government has been urged to take a leadership role to improve the quality and effectiveness of health care, and it has been working to promote the nationwide use of health IT for a number of years. However, achieving widespread adoption and implementation of health IT has proven challenging, and the best way to accomplish this transition remains subject to much debate. At the committee's request, this testimony discusses important issues identified by GAO's work that have broad relevance to the successful implementation of health IT to improve the quality of health care. To develop this testimony, GAO relied largely on its previous work on federal health IT activities. Health IT has the potential to help improve the efficiency and quality of health care, but achieving the transition to a nationwide health IT capability is an inherently complex endeavor. A successful transition will require, among other things, addressing the following issues: (1) Establishing a foundation of clearly defined health IT standards that are agreed upon by all important stakeholders. Developing, coordinating, and agreeing on standards are crucial for allowing health IT systems to work together and to provide the right people access to the information they need: for example, technology standards must be agreed on (such as file types and interchange systems), and a host of content issues must also be addressed (one example is the need for consistent medical terminology). 
Although important steps have been taken, additional effort is needed to define, adopt, and implement such standards to promote data quality and consistency, system interoperability (that is, the ability of automated systems to share and use information), and information protection. (2) Defining comprehensive plans that are grounded in results-oriented milestones and measures. Using interoperable health IT to improve the quality and efficiency of health care is a complex goal that involves a range of stakeholders, various technologies, and numerous activities taking place over an expanse of time, and it is important that these activities be guided by comprehensive plans that include milestones and performance measures. Without such plans, it will be difficult to ensure that the many activities are coordinated, their results monitored, and their outcomes most effectively integrated. (3) Implementing an approach to protection of personal privacy that encourages public acceptance of health IT. A robust approach to privacy protection is essential to establish the high degree of public confidence and trust needed to encourage widespread adoption of health IT and particularly electronic medical records. Health IT programs and applications need to address key privacy principles (for example, the access principle, which establishes the right of individuals to review certain personal health information). At the same time, they need to overcome key challenges (for example, those related to variations in states' privacy laws). Unless these principles and challenges are fully and adequately addressed, there is reduced assurance that privacy protection measures will be consistently built into health IT programs and applications, and public acceptance of health IT may be put at risk.
CERCLA requires EPA to compile a list of contaminated and potentially contaminated federal facilities. This list, known as the Federal Agency Hazardous Waste Compliance Docket (docket), is based on information that agencies are required to report to EPA. EPA compiled the first docket in 1988 and, under CERCLA, EPA is to publish a list of any new sites added to the docket in the Federal Register every 6 months. Under section 120(c) of CERCLA, EPA is to update the docket after receiving and reviewing notices from federal agencies concerning the generation, transportation, treatment, storage, or disposal of hazardous wastes or release of hazardous substances. After a site is listed on the docket, CERCLA requires EPA to take steps to ensure that a preliminary assessment is conducted. EPA has established 18 months as a reasonable time frame for agencies to complete the preliminary assessment. After the agency conducts the preliminary assessment, EPA reviews it to determine whether the information is sufficient to assess the likelihood of a hazardous substance release, a contamination pathway, and potential receptors. EPA may determine that the site does not pose a significant threat and requires no further action. If it determines that further investigation is needed, EPA may request that the agency conduct a site inspection to gather more detailed information. If, on the basis of the site inspection, EPA determines that hazardous substances, pollutants, or contaminants have been released at the site, EPA will use the information from the preliminary assessment and site inspection to calculate and document a site's preliminary Hazard Ranking System (HRS) score, which indicates a site's relative threat to human health and the environment based on potential pathways of contamination. Sites with an HRS score of 28.50 or greater become eligible for listing on the National Priorities List (NPL), a list that includes some of the nation's most seriously contaminated sites. 
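The NPL eligibility rule described above can be sketched as a simple threshold check. This is an illustrative sketch only: the 28.50 cutoff comes from the text, but the function name and structure are assumptions, not EPA's actual Hazard Ranking System implementation, which involves detailed evaluations of contamination pathways.

```python
# Minimal sketch of the NPL eligibility threshold described above.
# The 28.50 cutoff is stated in the text; everything else here is
# illustrative, not EPA's actual HRS scoring process.
NPL_THRESHOLD = 28.50

def npl_eligible(hrs_score: float) -> bool:
    """A site with an HRS score of 28.50 or greater is eligible for NPL listing."""
    return hrs_score >= NPL_THRESHOLD

print(npl_eligible(28.50))  # True: exactly at the threshold qualifies
print(npl_eligible(27.99))  # False: below the threshold
```

In practice the HRS score itself is derived from scored pathways (groundwater, surface water, soil, air); the check above captures only the listing-eligibility rule stated in this report.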
Based on the risk a site poses, EPA may place the site on the NPL. According to an EPA official, 158 federal sites are on the NPL, as of September 2015. Once a site is on the NPL, EPA is to oversee the cleanup. As part of its oversight responsibility, EPA works with the responsible federal agency to evaluate the nature and extent of contamination at a site. The agency must then enter into an interagency agreement with EPA that includes: (1) a review of remedial alternatives and the selection of the remedy; (2) schedules for completion of each remedy; and (3) arrangements for the long-term operation and maintenance of the site. According to EPA, the agreements also provide a process for EPA and the federal agency to resolve any disagreements related to implementing the cleanup remedy, with EPA being the final arbiter of such disputes. Once the agency and EPA agree on a cleanup remedy, the agency implements the remedy at the site. Afterwards, the agency must conduct long-term monitoring to ensure the remedy remains protective of human health and the environment. For federal sites not included on the NPL, CERCLA provides that state cleanup and enforcement laws apply, and most states have their own cleanup programs to address hazardous waste sites. USDA, Interior, DOD, and DOE have identified thousands of contaminated and potentially contaminated sites on land they manage, but there is not a complete inventory of sites, in particular, for abandoned mines. We found in our January 2015 report that there were at least 1,491 contaminated sites on land managed by USDA. These sites include 1,422 Forest Service sites, which are primarily abandoned mines; 2 Animal and Plant Health Inspection Service (APHIS) sites; 3 Agricultural Research Service (ARS) sites; 61 former grain storage sites once managed by the Commodity Credit Corporation (CCC); and 3 foreclosure properties belonging to the Farm Service Agency (FSA). 
In addition to sites with confirmed contamination, we found that USDA agencies have also identified some potentially contaminated sites. ARS had identified 3 sites that are potentially contaminated. Forest Service regions maintain inventories of potentially contaminated sites that include landfills, shooting ranges, and cattle dip vats, but there was no centralized database of these sites and no plans or procedures for developing one. These various inventories did not provide a complete picture of the extent of USDA's potentially contaminated sites. For example, there were an unknown number of potentially contaminated former grain storage sites in the 29 states where the CCC previously used carbon tetrachloride. This number was unknown because the CCC relies on the states to notify it of potential contamination, and 25 of the 29 states had not yet reported whether there was suspected contamination at their former CCC grain storage sites. The Forest Service also deals with various other types of hazardous waste sites, such as methamphetamine laboratories, roadside spills, and waste dumps. Forest Service officials said that, since these types of sites may involve illegal activities and are, therefore, not routinely reported, it is not possible to develop a comprehensive inventory of these types of sites. In addition, in January 2015, we reported that the Forest Service had not developed a complete, consistent, or usable inventory of abandoned mines and had no plans and procedures for developing such an inventory because, according to Forest Service officials, they did not have the resources to complete a comprehensive inventory of all potentially contaminated abandoned mines on the agency's lands. The Forest Service estimated that there were from 27,000 to 39,000 abandoned mines on their lands--approximately 20 percent of which may pose some level of risk to human health or the environment, based on the professional knowledge and experience of agency staff. 
Such risks may include chemicals and explosives, acid mine drainage, and heavy metal contamination in mine waste rock. However, we concluded that because the Forest Service did not have a complete inventory of abandoned mine sites, the actual number of abandoned mines on National Forest System (NFS) lands was unknown. According to a USDA official, USDA first attempted to create a national inventory of mines on NFS lands in 2003. Then, in 2008, the Forest Service established the Abandoned Mine Lands (AML) database to aggregate all available data on abandoned mines on NFS lands. The AML database drew data on pending abandoned mine sites from the 2003 database and Forest Service regional inventories, as well as from the U.S. Geological Survey and various other federal, state, and local databases. USDA officials said that, once the AML database was established, the purpose of the earlier database shifted away from maintaining an AML inventory to tracking sites that entered into the CERCLA process. However, as we reported in January 2015, the AML database has a number of shortcomings. For example, the data migration from multiple inventories led to data redundancy issues, such as some mine sites being listed multiple times under the same or different names. In addition, USDA officials told us that there was a lot of variation in the accuracy and completeness of the data on these mine sites, but a quality assurance review had not yet been performed. One Forest Service official said that, because of these problems, the data in the AML database were unusable for purposes of compiling a complete and accurate inventory of abandoned mines. In 2012, the Forest Service tried to obtain the agency resources necessary to clean up the database. Even though the Forest Service rated this project as "critical," the project did not receive any resources because other projects were deemed more important, according to a Forest Service official. 
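The data-redundancy problem described above, where sites migrated from multiple inventories appear multiple times under the same or different names, is the kind of issue a deduplication pass could surface. The sketch below is purely hypothetical: the field names, key choice, and matching rule are assumptions for illustration, not the Forest Service's actual AML database schema or cleanup procedure.

```python
# Hypothetical sketch of deduplicating mine-site records merged from
# multiple inventories, as described above. Field names and the matching
# rule (normalized name plus coordinates rounded to two decimal places)
# are illustrative assumptions, not the actual AML database design.
def dedup_sites(records):
    """Collapse records sharing a normalized name and rounded coordinates."""
    seen = {}
    for rec in records:
        key = (rec["name"].strip().lower(),
               round(rec["lat"], 2), round(rec["lon"], 2))
        # Keep the first record seen for each key; later duplicates are dropped.
        seen.setdefault(key, rec)
    return list(seen.values())

records = [
    {"name": "Lucky Strike Mine", "lat": 44.1234, "lon": -114.5678},
    {"name": "lucky strike mine ", "lat": 44.1236, "lon": -114.5679},  # duplicate
    {"name": "Copper Gulch", "lat": 45.0000, "lon": -113.0000},
]
print(len(dedup_sites(records)))  # 2
```

A real quality assurance review would need fuzzier matching (variant names, missing coordinates) and human adjudication, which is part of why the cleanup project the Forest Service rated "critical" required dedicated resources.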
Similarly, in our January 2015 report, we found several problems with the Forest Service's regional abandoned mine inventories. First, some regional inventories were incomplete. For example, officials in Forest Service Region 10, which is composed solely of the State of Alaska, said they believed there may be some abandoned mines scattered throughout Tongass and Chugach National Forests that had not yet been inventoried. They said that Forest Service Region 10 did not have enough staff to assess all abandoned mines across such a large area. Second, several Forest Service regional inventories contained inaccurate data. Third, the Forest Service's regional offices maintained their inventories differently. Some regional offices maintained their own inventories of potentially contaminated sites, whereas other regional offices utilized state or local agencies' inventories. Finally, the type of data on abandoned mines varied from region to region, making it difficult to consolidate into a coherent national database. Some regional offices tracked mines at the site level, some by their features--such as mine shafts, pits, ore piles, or machinery--and some used both approaches. For example, officials in Forest Service Region 3 told us that they had identified over 3,000 abandoned mine sites, and officials in Forest Service Region 4 told us that they had identified approximately 2,000 mine features but had not yet consolidated these features into mine sites. We reported in January 2015 that, without a comprehensive inventory of such sites or plans and procedures for developing one, USDA and the Forest Service will not have reasonable assurance that they are prioritizing and addressing the sites that pose the greatest risk to human health or the environment. 
Consequently, in January 2015, we recommended that the Secretary of Agriculture direct the heads of the department's land management agencies to develop plans and procedures for completing their inventories of potentially contaminated sites. USDA disagreed with our recommendation and stated that it had a centralized inventory and that this inventory was in a transition phase as a result of reduced funding levels. USDA also stated that it had taken a number of actions to manage its inventory in a more cost-effective manner, reduce operating costs, and eliminate data collection redundancies across the USDA agencies. Subsequently, in a June 2015 letter to GAO, USDA described three corrective actions that the department planned to take in response to our recommendation. We believe that these actions are needed. We found in our January 2015 report that Interior had identified 4,722 sites with confirmed or likely contamination. These include 4,098 Bureau of Land Management (BLM) sites that the agency reported had confirmed contamination or required further investigation to determine whether remediation was warranted. The majority of these sites were abandoned mines. Interior's National Park Service (NPS) identified 417 sites with likely or confirmed contamination; the Bureau of Indian Affairs, 160 sites; the Fish and Wildlife Service, 32 sites; and the Bureau of Reclamation, 15 sites. These Interior agencies identified additional locations of concern that would require verification or initial assessment to determine if there were environmental hazards at the sites. Officials we interviewed from Interior agencies, except BLM, told us that they believed they had identified all sites with likely environmental contamination. We also found that the total number of sites BLM may potentially have to address is unknown, due primarily to incomplete and inaccurate data on abandoned mines on land managed by the agency. 
BLM accounts for the largest number of contaminated sites and sites that need further investigation in Interior's inventory. Table 1 shows the number of contaminated or potentially contaminated sites in BLM's inventory as of April 2014, and the extent to which remediation measures had been undertaken or were completed. We reported in January 2015 that BLM had also identified 30,553 abandoned mine sites that posed physical safety hazards but needed verification or a preliminary assessment to determine whether environmental hazards were present. However, the number of potentially contaminated mines may be larger than these identified sites because BLM had not identified all of the abandoned mines on the land it manages. We reported that BLM estimated that there may be approximately 100,000 abandoned mines that had not yet been inventoried in California, Nevada, and Utah, and that it would take 2 to 3 years to complete the estimates for the other nine BLM states. BLM estimated that it will take decades to complete the inventory. To inventory a site, BLM field staff must visit the site to collect data, research the land ownership and extent of mining activity that occurred, and record the information in BLM databases. In January 2015, we reported that BLM has an ongoing effort to estimate the number of abandoned mines and mine features that have not yet been inventoried on BLM lands and the approximate cost to complete the inventory. BLM established inventory teams in several states to go out and identify sites. In addition, BLM began an initiative in California to determine the number of sites that need to be inventoried after the state provided the agency with digitized maps of potential mine sites and verified a sample of the sites. For California, BLM estimated that 22,728 sites and 79,757 features needed to be inventoried. BLM estimated that approximately 69,000 and 4,000 sites remained to be inventoried in Nevada and Utah, respectively, on BLM land. 
BLM officials told us that they expect to provide a report to Congress on the inventory work remaining in these three states in 2015. The nine remaining states with BLM land do not have the digital geographic data available that BLM used for California, Nevada, and Utah, according to BLM officials, making it difficult for BLM to develop similar estimates for these states. BLM officials told us that the U.S. Geological Survey was working on an effort to develop datasets similar to those used to estimate the number of abandoned mines on BLM land in California, Nevada, and Utah. We found that Interior's Bureau of Indian Affairs, Bureau of Reclamation, Fish and Wildlife Service, and NPS also have sites with environmental contamination. Officials from each of these agencies told us that they believed their inventories of sites with environmental contamination were complete. Both Fish and Wildlife Service and NPS had identified locations of concern, where contamination is suspected based on known past activities or on observed and reported physical indicators requiring further assessment. For NPS, nearly half of these sites are old dump sites. NPS also has abandoned mines on the lands it manages. In 2013, NPS completed a system-wide inventory and assessment project to identify abandoned mines on lands it manages. NPS's inventory identified 37,050 mine features at 3,421 sites on NPS land. In January 2015, we reported that, of the total inventory, NPS officials said they believed that 3,841 features at 1,270 sites still required some level of effort to address human health and safety and/or environmental concerns. As a result of NPS' system-wide inventory, officials with the agency's Abandoned Mineral Lands Program told us that they believed that their inventory of all potentially contaminated sites was largely complete. 
As we reported in our July 2010 report, before federal environmental legislation was enacted in the 1970s and 1980s regulating the generation, storage, treatment, and disposal of hazardous waste, DOD activities and industrial facilities contaminated millions of acres of soil and water on and near DOD properties in the United States and its territories. DOD activities released hazardous substances into the environment primarily through industrial operations to repair and maintain military equipment, as well as the manufacturing and testing of weapons at ammunition plants and proving grounds. In June 2014, DOD reported to Congress that it had 38,804 sites in its inventory of sites with contamination from hazardous substances or pollutants or contaminants at active installations, formerly used defense sites, and Base Realignment and Closure (BRAC) locations in the United States, as well as munition response sites that were known or suspected to contain unexploded ordnance, discarded military munitions, or munitions constituents. Of these 38,804 sites, DOD's report shows that 8,865 have not reached the department's response complete milestone--which occurs when a remedy is in place and required remedial action operations, if any, are complete. In May 2013, we reported that in addition to having a large number of contaminated and potentially contaminated sites in its inventory, of all federal agencies, DOD had the greatest number of sites listed on the NPL. We reported that, as of April 2013, DOD was responsible for 129 of the 156 federal facilities on the NPL at the time (83 percent). Also, we reported in March 2009 that the majority of DOD sites were not on the NPL and that most DOD site cleanups were overseen by state agencies rather than EPA, as allowed by CERCLA. Our work has found that the lack of interagency agreements between EPA and DOD has historically contributed to delays in cleaning up military installations. 
For example, we reported in July 2010 that, as of February 2009, 11 DOD installations did not have an interagency agreement, even with CERCLA's requirement that federal agencies enter into interagency agreements with EPA within a certain time frame to clean up sites on the NPL, and even though the department had reached agreement with EPA on the basic terms. Without an interagency agreement, EPA does not have the mechanisms to ensure that cleanup by an installation proceeds expeditiously, is properly done, and has public input, as required by CERCLA. We found one DOD installation that, after 13 years on the NPL and receipt of EPA administrative cleanup orders for sitewide cleanup, had not signed an interagency agreement. We recommended that the Administrator of EPA take action to ensure that outstanding CERCLA section 120 interagency agreements are negotiated expeditiously. In May 2013, we reported that DOD had made progress on this issue by decreasing the number of installations without an interagency agreement from 11 to 2, but both of those sites still posed significant risks. According to an EPA official, as of September 2015, one of these two installations now has an interagency agreement. However, according to this official, there is no interagency agreement at the other installation-- Redstone Arsenal in Alabama. We recommended that EPA pursue changes to a key executive order that would increase its authority to hasten cleanup at sites without an interagency agreement. EPA agreed but has not taken action to have the executive order amended. We also suggested in July 2010 that Congress consider amending CERCLA section 120 to authorize EPA to impose administrative penalties at federal facilities placed on the NPL that lack interagency agreements within the CERCLA-imposed deadline of 6 months after completion of the remedial investigation and feasibility study. 
We believe that this leverage could help EPA better satisfy its statutory responsibilities with agencies that are unwilling to enter into agreements where required under CERCLA section 120. As we reported in March 2015, 70 years of nuclear weapons production and energy research by DOE and its predecessor agencies generated large amounts of radioactive waste, spent nuclear fuel, excess plutonium and uranium, contaminated soil and groundwater, and thousands of contaminated facilities, including land, buildings, and other structures and their systems and equipment. DOE's Office of Environmental Management (EM) is responsible for one of the world's largest environmental cleanup programs, the treatment and disposal of radioactive and hazardous waste created as a by-product of producing nuclear weapons and energy research. The largest component of the cleanup mission is the treatment and disposal of millions of gallons of highly radioactive waste stored in aging and leak-prone underground tanks. In addition, radioactive and hazardous contamination has migrated through the soil into the groundwater, posing a significant threat to human health and the environment. According to DOE's fiscal year 2016 congressional budget request, EM has completed cleanup activities at 91 sites in 30 states and in the Commonwealth of Puerto Rico, and EM has remaining cleanup responsibilities at 16 sites in 11 states. EM cleanup work activities are carried out by contractors, such as Washington River Protection Solutions, for the operation of nuclear waste tanks at the Hanford Site in Washington State. In March 2015, we reported that the National Nuclear Security Administration (NNSA), a separately organized agency within DOE, also manages many contaminated facilities. Some of these facilities are no longer in use, while others are still operational. Once NNSA considers these facilities to be nonoperational, they may be eligible for transfer to EM. 
We found that NNSA had identified 83 contaminated facilities at six sites for potential transfer to EM for disposition over a 25-year period, 56 of which were currently nonoperational. Until the sites are transferred to EM, however, NNSA is responsible for maintaining its facilities and incurring associated maintenance costs to protect human health and the environment from the risk of contamination. NNSA's responsibilities may last for several years, or even decades, depending on when EM is able to accept the facilities. We found that as NNSA maintains contaminated nonoperational facilities, the facilities' condition continues to worsen, resulting in increased costs to maintain them. As we reported in March 2015, EM has not accepted any facilities from NNSA for cleanup in over a decade. EM does not accept facilities for transfer until funding is available to carry out the decontamination and decommissioning work. In addition, EM officials told us that they also do not include facilities maintained by NNSA in their planning until they have available funding to begin cleanup work. We concluded that without integrating NNSA's inventory of nonoperational facilities into its process for prioritizing facilities for disposition, EM may be putting lower-risk facilities under its responsibility ahead of deteriorating facilities managed by NNSA that are of greater risk to human health and the environment. We therefore recommended that EM integrate its lists of facilities prioritized for disposition with all NNSA facilities that meet EM's transfer requirements and include this integrated list as part of the Congressional Budget Justification for DOE. We also recommended that EM analyze and consider life cycle costs for NNSA facilities that meet its transfer requirements and incorporate the information into its prioritization process. 
Analyzing life cycle costs of nonoperational facilities shows that accelerating cleanup of some facilities, while others are maintained in their current states, could offer significant cost savings. DOE stated that it concurred with the issues identified in our report and described actions it plans to implement to address them. For example, DOE stated that it has formed a working group that may address our findings. The four departments reported allocating and spending millions of dollars annually on environmental cleanup. They also estimated future costs in the hundreds of millions of dollars or billions to clean up sites and address their environmental liabilities. We reported in January 2015 that the majority of USDA's environmental cleanup funds are spent cleaning up ARS's Beltsville NPL facility and abandoned mines and landfills on NFS lands, as well as mitigating potential groundwater contamination from activities at former CCC grain storage sites. In fiscal year 2013, USDA allocated over $22 million to environmental cleanup efforts. Specifically, USDA allocated (1) $3.7 million for department-wide cleanup projects, the majority of which were for cleanup at USDA's Beltsville site and to cover legal expenses; (2) approximately $14 million for the Forest Service to conduct environmental assessments and cleanup activities; and (3) $4.3 million in funds to mitigate contamination at former grain storage sites. The Forest Service also allocated approximately $20 million in one-time Recovery Act funds in fiscal year 2009 for cleanup activities at 14 sites located on, or directly impacting, land managed by the Forest Service. In addition, USDA seeks cost recovery of cleanup costs and natural resource damages under CERCLA from potentially responsible parties, such as owners and operators of a site, to help offset cleanup costs at sites where they caused or contributed to contamination. Cost recovery amounts vary from year to year. 
We found that for fiscal years 2003 to 2013, USDA typically recovered $30 million or less annually. However, according to department documents, USDA successfully recovered over $170 million from a single mining company as part of a bankruptcy case in 2009. These funds were used to conduct cleanup activities at 13 mine sites located on NFS lands. In fiscal year 2011, USDA recovered $65 million from another mining company for restoration of injured natural resources in the Coeur d'Alene River Basin NPL site in Idaho. In its fiscal year 2013 financial statements, USDA reported a total of $176 million in environmental liabilities. These liabilities represent what USDA determined to be the probable and reasonably estimable future costs to address 100 USDA sites, as required by federal accounting standards. The $176 million amount included: $165 million to address asbestos contamination, $8 million for up to 76 CCC former grain storage sites in the Midwest that are contaminated with carbon tetrachloride, and $3 million for 24 Forest Service sites, including guard stations, work centers, and warehouses, among others. In addition, USDA reported $120 million in contingent liabilities in its fiscal year 2013 financial statements. Of this amount, $40 million was for environmental cleanup at four phosphate mine sites in southeast Idaho. We reported in January 2015 that Interior allocated about $13 million for environmental cleanup efforts in fiscal year 2013. Specifically, Interior allocated $10 million for cleanup projects department-wide; NPS allocated an additional $2.7 million, and the Fish and Wildlife Service allocated over $800,000 for environmental assessment and cleanup projects. In addition, BLM allocated more than $34 million to its hazardous management and abandoned mine programs. BLM provided over $18 million to its state offices; however, the amount specifically used for environmental cleanup projects was not readily available. 
BLM also spent over $27 million in one-time Recovery Act funds on physical safety and/or environmental remediation projects at 76 locations. According to BLM, there were 31 projects for environmental activities. For fiscal years 2003 through 2013, Interior allocated over $148 million in Central Hazardous Materials Fund (CHF) resources to its agencies to support response actions undertaken at contaminated sites under CERCLA. This amount includes over $49 million in CHF cost recoveries. Interior's agencies undertook 101 projects with CHF funding during fiscal years 2003 through 2013. These projects supported a range of activities, from project oversight to advanced studies (e.g., remedial investigations, feasibility studies, engineering evaluations, and cost analyses) to removal and remedial actions. The majority of sites receiving CHF funding were abandoned mines, landfills, and former industrial facilities. In fiscal year 2013, Interior allocated $10 million to the CHF. During our work for the January 2015 report, BLM officials told us that the current funding levels were not sufficient to complete the inventory and address the physical and environmental hazards at abandoned mines. In its 2014 and 2015 budget justifications, Interior described proposals to charge the hardrock mining industry fees and use the funds to address abandoned mines. Similarly, an NPS official told us that the agency has inadequate funding to address its over 400 potentially contaminated and contaminated sites. According to an NPS official, the agency had been able to address its highest risk sites. If there is a very significant risk, NPS can usually obtain funds to address the portion of the site that has the highest risk, if not the site as a whole. According to NPS officials, NPS has not selected response actions for almost 300 sites because current funding levels are not sufficient to address them. 
As we found in our January 2015 report, Interior reported $192 million in environmental liabilities in its fiscal year 2013 financial statements. These liabilities represent what the agency has determined to be the probable and reasonably estimable future cost for completing cleanup activities at 434 sites, as required by federal accounting standards. These activities include studies or removal and remedial actions at sites where Interior has already conducted an environmental assessment and where Interior caused or contributed to the contamination or has recognized its legal obligation for addressing the site. Interior also disclosed in the notes to its financial statements the estimated cost range for completing cleanup activities at these sites. The cost range disclosed was approximately $192 million to $1.3 billion. Interior also disclosed the estimated costs for government-acknowledged sites--sites that are of financial consequence to the federal government with damage caused by nonfederal entities--where it was reasonably probable that cleanup costs would be incurred. In fiscal year 2013, Interior disclosed in the notes to its fiscal year 2013 financial statements a cost range for these activities to be approximately $62 million to $139 million. The majority of this cost range was related to addressing 85 abandoned mine sites. As we have previously reported, cleanup costs for abandoned mines vary by type and size of the operation. For example, the cost of plugging holes is usually small, but reclamation costs for large mining operations can reach tens of millions of dollars. Historically, we have found that DOD has spent billions on environmental cleanup and restoration at its installations. For example, in July 2010, we reported that DOD spent almost $30 billion from 1986 to 2008 across all environmental cleanup and restoration activities at its installations, including NPL and non-NPL sites. 
In March 2010, we reported that since the Defense Environmental Restoration Program (DERP) was established, approximately $18.4 billion had been obligated for environmental cleanup at individual sites on active military bases, $7.7 billion for cleanup at sites located on installations designated for closure under BRAC, and about $3.7 billion to clean up formerly used defense sites. In June 2014, DOD reported to Congress that, in fiscal year 2013, DOD obligated approximately $1.8 billion for its environmental restoration activities. In its Agency Financial Report for fiscal year 2014, DOD reported $58.6 billion in total environmental liabilities. These liabilities include, but are not limited to, cleanup requirements for DERP for active installations, BRAC installations, and formerly used defense sites. According to DOE's fiscal year 2016 Congressional budget request, DOE received an annual appropriation of almost $5.9 billion in fiscal year 2015 to support the cleanup of radioactive and hazardous wastes resulting from decades of nuclear weapons research and production. DOE has estimated that the cost of this cleanup may approach $300 billion over the next several decades. As we reported in May 2015, DOE spent more than $19 billion since 1989 on the treatment and disposition of 56 million gallons of radioactive and hazardous waste at its Hanford site in Washington State. In July 2010, we reported that four large DOE cleanup sites received the bulk of the $6 billion in Recovery Act funding for environmental cleanup. We previously reported that those sites have had problems with rising costs, schedule delays, and contract and project management. In 2014, DOE estimated that its total liability for environmental cleanup, the largest component of which is managed by EM, is almost $300 billion and includes responsibilities that could continue beyond the year 2089. 
We are beginning work at the request of the Senate Armed Services Committee to examine DOE's long-term cleanup strategy; what is known about the potential cost and time frames to address DOE's environmental liabilities; what factors DOE considers when prioritizing cleanup activities across its sites; and how DOE's long-term cleanup strategy addresses the various risks that long-term cleanup activities encounter. As part of its oversight role in maintaining the list of contaminated and potentially contaminated federal sites and ensuring that preliminary assessments of such sites are complete, EPA has compiled a docket of over 2,300 federal sites that may pose a risk to human health and the environment. EPA is responsible for ensuring that the federal agencies assess these sites for contamination. Our January 2015 report examined the extent to which USDA and Interior had assessed their sites listed on the docket. As of August 2015, the agency's docket listing consisted of 2,323 sites that may pose a risk to human health and the environment, which EPA compiled largely from information provided by federal agencies. We found in January 2015 that EPA has published many updates of the docket, but the agency has not consistently met the 6-month reporting requirement. Prior to 2014, the effort to compile and monitor the docket listings was a manual process. However, in 2014, EPA implemented revised docket procedures with a computer-based process that is to compile potential docket listings from agency notices by searching electronic records. EPA officials said that they expect the new system to allow them to update the docket in a more timely way in the future. EPA has published two docket updates with this new system, in December 2014 and August 2015. As we reported in January 2015, EPA officials told us that it is difficult for EPA to know about a site to list if agencies have not reported it. 
However, if EPA learns about a site that has had a release or threat of a release of hazardous substances through other means, EPA will list the site on the docket. It is important to note that the docket is a historical record of potentially contaminated sites that typically have been reported to EPA by agencies. Because it is a historical record, sites that subsequently were found to not be contaminated, and sites that the agencies may have addressed, are still included on the docket. In our January 2015 report on USDA and Interior potentially contaminated sites, we discussed the docket with officials from these two departments. We found that Interior and USDA officials disagreed with EPA officials over whether some of these sites should have been listed on the docket. Interior officials believed that CERCLA does not give EPA the discretion to list Interior sites unless Interior reports them to EPA and that EPA should limit its listing of sites on the docket to those reported by an agency under one of the provisions specifically noted in CERCLA. Interior and USDA officials also believed that abandoned mines should not be listed on EPA's docket because the agencies did not cause the contamination and, therefore, the sites should not be considered federal sites. However, EPA officials believed that, regardless of whether USDA and Interior are legally liable for addressing these sites, they have an independent responsibility under Executive Order 12580 and CERCLA as land management agencies owning the sites to address them. As I stated earlier, EPA established 18 months as a reasonable time frame for agencies to complete a preliminary assessment. However, in March 2009, we reported that EPA officials from two regions told us that some agencies, such as DOD, may take 2 to 3 years to complete a preliminary assessment because EPA does not have independent authority under CERCLA to enforce a timeline for completion of a preliminary assessment. 
In March 2009, we suggested that Congress consider amending CERCLA section 120 to authorize EPA to require agencies to complete preliminary assessments within specified time frames. For USDA and Interior, we found in our January 2015 report that as of February 2014, both Interior and USDA had conducted a preliminary assessment of the majority of their sites on EPA's docket. However, EPA, Interior, and USDA have differing information on the status of preliminary assessments for the remaining docket sites. Our analysis of data in EPA's Comprehensive Environmental Response, Compensation, and Liability Information System for our January 2015 report found that USDA still needed to conduct a preliminary assessment at 50 docket sites, and Interior needed to conduct a preliminary assessment at 79 docket sites. When we reviewed the status of these sites with USDA and Interior officials, the officials told us that they believed their agencies had met the preliminary assessment requirement for many of these sites. To help resolve disagreements between EPA and USDA and Interior regarding which remaining docket sites require preliminary assessments, we recommended, in January 2015, that EPA take three actions. First, EPA should review available information on USDA and Interior sites where EPA's Superfund Enterprise Management System indicates that a preliminary assessment has not occurred to determine the accuracy of this information, and update the information, as needed. After completing this review, EPA should inform USDA and Interior whether the requirement to conduct a preliminary assessment at the identified sites has been met or if additional work is needed to meet this requirement. Finally, EPA should work with the relevant USDA and Interior offices to obtain any additional information needed to assist EPA in determining the accuracy of the agency's data on the status of preliminary assessments for these sites. 
EPA agreed with these recommendations and, according to EPA officials, the agency has started taking steps to address them. Chairman Shimkus, Ranking Member Tonko, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other individuals who made key contributions include: Barbara Patterson (Assistant Director), Antoinette Capaccio, Rich Johnson, Kiki Theodoropoulos, and Leigh White. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government owns over 700 million acres of land. Some of this land--which is primarily managed by USDA, Interior, DOD, and DOE--is contaminated with hazardous waste from prior uses, such as landfills and mining. To respond to problems caused by improper disposal of hazardous substances in the past, in 1980, Congress passed CERCLA, also known as Superfund. Among other things, CERCLA requires owners and operators of hazardous waste sites to notify the federal EPA--which manages the Superfund program--of the existence of their facilities, as well as known, suspected, or likely releases of hazardous substances. This testimony focuses on (1) numbers of contaminated and potentially contaminated federal sites for four departments; (2) spending and estimates of future costs for cleanup at these federal sites; and (3) EPA's role in maintaining the list of contaminated and potentially contaminated federal sites and ensuring that preliminary assessments of such sites are complete. This testimony is based on prior GAO reports issued from March 2009 through March 2015. The Departments of Agriculture (USDA), the Interior, Defense (DOD), and Energy (DOE) have identified thousands of contaminated and potentially contaminated sites on land they manage but do not have a complete inventory of sites, in particular, for abandoned mines. GAO reported in January 2015 that USDA had identified 1,491 contaminated sites and many potentially contaminated sites. However, USDA did not have a reliable, centralized site inventory or plans and procedures for completing one, in particular, for abandoned mines. For example, officials at USDA's Forest Service estimated that there were from 27,000 to 39,000 abandoned mines on its lands--approximately 20 percent of which may pose some level of risk to human health or the environment. GAO also reported that Interior had an inventory of 4,722 sites with confirmed or likely contamination. 
However, Interior's Bureau of Land Management had identified over 30,000 abandoned mines that were not yet assessed for contamination, and this inventory was not complete. DOD reported to Congress in June 2014 that it had 38,804 sites in its inventory of sites with contamination. DOE reported that it has 16 sites in 11 states with contamination. These four departments reported allocating and spending millions of dollars annually on environmental cleanup and estimated future costs in the hundreds of millions of dollars or more in environmental liabilities. Specifically: GAO reported in January 2015 that, in fiscal year 2013, USDA allocated over $22 million to environmental cleanup efforts and reported in its financial statements $176 million in environmental liabilities to address 100 sites. GAO reported in January 2015 that Interior in fiscal year 2013 allocated about $13 million for environmental cleanup efforts and reported $192 million in environmental liabilities in its financial statements to address 434 sites. In July 2010, GAO reported that DOD spent almost $30 billion from 1986 to 2008 across all environmental cleanup and restoration activities at its installations. In its fiscal year 2014 Agency Financial Report , DOD reported $58.6 billion in total environmental liabilities. DOE reported receiving an annual appropriation of almost $5.9 billion in fiscal year 2015 to support cleanup activities. In 2014, DOE estimated its total liability for environmental cleanup at almost $300 billion. As part of maintaining the list of contaminated and potentially contaminated federal sites, the Environmental Protection Agency (EPA) compiled 2,323 federal sites that may pose a risk to human health and the environment, as of August 2015, according to EPA officials. EPA is responsible for ensuring that federal agencies assess these sites for contamination and has established 18 months as a reasonable time frame for agencies to complete a preliminary assessment. 
However, in March 2009, GAO reported that according to EPA officials, some agencies, such as DOD, may take 2 to 3 years to complete an assessment and that EPA does not have independent authority under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) to enforce a timeline for completing the preliminary assessment. In March 2009, GAO suggested that Congress consider amending CERCLA section 120 to authorize EPA to require agencies to complete preliminary assessments within specified time frames. GAO is making no new recommendations. Previously, GAO made numerous recommendations to ensure that contaminated sites were identified and assessed, and some of these recommendations have not been fully implemented. GAO will continue to monitor implementation.
There are multiple statutes that concern health care fraud, including the following: The False Claims Act is often used by the federal government in health care fraud cases and prohibits certain actions, including the knowing presentation of a false claim for payment by the federal government. Civil monetary penalty provisions of the Social Security Act apply to certain activities, such as knowingly presenting a claim for medical services that is known to be false and fraudulent. In addition, the Social Security Act also provides for criminal penalties for knowing and willful false statements in applications for payment. The Anti-Kickback statute makes it a criminal offense for anyone to knowingly and willfully solicit, receive, offer, or pay any remuneration in return for or to induce referrals of items or services reimbursable under a federal health care program, subject to statutory exceptions and regulatory safe harbors. The Stark law and its implementing regulations generally prohibit physicians from making "self-referrals"--certain referrals for "designated health services" paid for by Medicare to entities with which the physician (or an immediate family member) has a financial relationship--and prohibit the entities that perform the "designated health services" from presenting claims to Medicare or billing for these services. These prohibitions also extend to payments for Medicaid-covered services to the same extent and under the same terms and conditions as if Medicare had covered them. The Federal Food, Drug, and Cosmetic Act, as amended, makes it unlawful to, among other things, introduce an adulterated or misbranded pharmaceutical product or device into interstate commerce. Health care fraud takes many forms, and a single case can involve more than one fraud scheme. 
Schemes may include fraudulent billing for services that were not provided; for services that were not medically necessary; and for services intentionally billed at a higher level than appropriate for the services that were provided, called upcoding. Other fraud schemes include providing compensation--kickbacks--to beneficiaries, providers, or others for participating in the fraud scheme, and schemes involving prescription drugs (including prescription drugs that contain controlled substances), such as the submission of false claims for prescription drugs that have been improperly marketed for non-FDA-approved uses and the illicit diversion of prescription drugs for profit or abuse. Fraud cases may involve more than one scheme; for example, an infusion clinic may pay kickbacks to a beneficiary for receiving care at the clinic, and the care that was provided and billed for may not have been medically necessary. Providers may be complicit in the schemes or unaware of them. For example, providers who are complicit may willingly use their provider identification information to bill fraudulently, misrepresent services provided to receive higher payment, or receive kickbacks to provide their identification information for others to bill fraudulently. In other cases, providers may be unaware that their identification information has been stolen and used in various fraud schemes. Similarly, beneficiaries can be either complicit in or unaware of the fraud. Beneficiaries who are complicit may willingly provide their identification information to a provider for the purposes of committing fraud or receive kickbacks in exchange for providing their information to or receiving services from a provider. In contrast, they also may be unaware of fraud schemes in which the provider bills for services not medically necessary or uses beneficiaries' identification information without their knowledge.
Additionally, in some cases both beneficiaries and providers may not be involved in the fraud scheme, in the sense that the scheme involved circumstances other than a provider giving care to a beneficiary. For example, a case in which a pharmaceutical manufacturing company marketed prescription drugs for non-FDA-approved uses does not involve a provider giving care directly to a beneficiary. Individuals and entities commit fraud in both federal health care programs and private insurance programs, and may commit fraud in more than one program simultaneously. Several agencies are involved in investigating and prosecuting health care fraud cases, including CMS; HHS OIG; DOJ's U.S. Attorneys' Offices, Civil and Criminal Divisions; and the FBI. HHS OIG and the FBI primarily conduct investigations of health care fraud, and DOJ's divisions typically prosecute or litigate the cases. DOJ prosecutes fraud cases that affect both federal health programs and private health insurance. Amid concerns about identity theft, proposals have been put forward to replace Medicare's paper identification cards, which contain beneficiaries' Social Security numbers, with electronically readable cards, such as smart cards. Some proposals have suggested that such cards should be issued to providers as well. Electronically readable cards include those that store information on magnetic stripes and bar codes and cards called smart cards that use microprocessor chips to store and process data. In March 2015, we identified three key uses for electronically readable cards: (1) authenticating beneficiary and provider presence at the point of care, (2) electronically exchanging beneficiary medical information, and (3) electronically conveying beneficiary identity and insurance information to providers.
We also found that smart cards could provide more rigorous authentication of beneficiaries and providers at the point of care than cards with magnetic stripes and bar codes, though all three types of cards can electronically convey identity and insurance information. Proponents of smart cards have suggested that, among other benefits, using smart cards may reduce health care fraud in the Medicare program. For example, some proponents claim that the use of smart cards to identify the beneficiary and provider at the point of care could potentially curtail certain types of fraud such as schemes in which providers misuse another provider's information to bill fraudulently. However, our March 2015 report also found that there are several limitations associated with the use of smart cards. Specifically, it is possible that individuals may still be able to commit fraud by adapting and altering the schemes they use to account for the use of smart card technology. In addition, the use of smart card technology could introduce new types of fraud and ways for individuals to illegally access beneficiary information. For example, malicious software could be written onto a smart card and used to compromise provider IT systems. Further, various factors may limit the implementation of smart card technology in the Medicare program. As we found in our March 2015 report, while the use of smart cards to verify the beneficiary identity at the point of care could reduce certain types of fraud, it would have limited effect on Medicare fraud since CMS policy is to pay claims for Medicare beneficiaries even if they do not have a Medicare identification card at the time of care. CMS officials noted that it would not be feasible to require the use of smart cards because of concerns that this would limit beneficiaries' access to care given that there may be legitimate reasons why a card might not be present at the point of care. 
For example, beneficiaries who experience a health care emergency may not have their Medicare cards with them at the time care is rendered. Additionally, we concluded that the use of smart cards to verify the beneficiary and provider presence at the point of care would require addressing costs and implementation challenges associated with card management and information technology system enhancements. These enhancements would be needed to update both CMS's and providers' claims processing and card management systems in order to achieve a high level of provider and beneficiary authentication as well as meet security requirements. The majority of the 739 cases resolved in 2010 that we reviewed had more than one fraud scheme. Fraudulent billing schemes, such as billing for services that were not provided and billing for services that were not medically necessary, were the most common fraud schemes. Over 20 percent of the cases included kickbacks to providers, beneficiaries, or other individuals. Providers were complicit in the fraud schemes in over half of the cases. In contrast with providers, only about 14 percent of the 739 cases we reviewed had beneficiaries who were complicit in the schemes. Using cases from 2010, we identified 1,679 fraud schemes in the 739 cases that we reviewed. The majority of the 739 cases (about 68 percent) included more than one scheme: 61 percent of the cases had 2 to 4 schemes, and about 7 percent had 5 or more schemes. Thirty-two percent had only one scheme. The most common schemes used in the cases we reviewed were related to fraudulent billing, such as billing for services that were not provided (42.6 percent of cases), billing for services that were not medically necessary (24.5 percent), and upcoding, which is billing for a higher level of service than the service actually provided (17.5 percent).
Additionally, schemes used to support other fraud were also common, such as falsifying a substantial portion of records to support the fraud scheme (25.2 percent) and paying kickbacks to participants in the scheme (20.6 percent). Schemes related to prescription drugs (including prescription drugs that contained controlled substances), such as fraudulently obtaining or distributing prescription drugs or marketing prescription drugs for non-FDA-approved uses in order to commit fraud, were found in about 21 percent of the cases we reviewed. (See table 1 for the number and percentage of cases in which these schemes were used and app. II, table 6, for additional details on schemes we identified in cases.) Many different combinations of schemes were present in the 68 percent of cases with more than one scheme. The most common schemes were also the ones most often used together: billing for services not provided along with billing for services that were not medically necessary, and billing for services or supplies that were not prescribed by a physician along with falsifying a substantial portion of records in order to support the fraud scheme. (See app. II, table 7, for additional analysis of the number of schemes per case.) For example, according to the indictment in a fraud case we reviewed, a DME supplier used two schemes to commit fraud: (1) billing Medicare for medical equipment, such as orthotic braces, that was not provided to Medicare beneficiaries and (2) billing for supplies that had not been prescribed by physicians for these beneficiaries. Many different federal programs and private insurers were affected by fraud schemes in the cases we reviewed. In one-quarter of the cases, more than one program was affected. Medicare was affected in about 63 percent of the 739 cases reviewed, Medicaid and/or CHIP in about 32 percent, TRICARE in about 5 percent, and the Federal Employees Health Benefits Program (FEHBP) in 3 percent of the cases.
In over 11 percent of the cases, private health insurers were affected. Other programs affected included Department of Veterans Affairs programs, Social Security programs, workers' compensation programs, and other benefit plans. For one-third of the fraud cases we reviewed--262 cases--the documents we reviewed included information about the amount of fraudulent payments made by the programs and insurers. For these 262 cases, the total paid was $801.5 million. The amounts of the fraudulent payments in these cases typically ranged from $10,000 to $1.5 million. In about 20 percent of the 739 cases we reviewed, kickbacks were paid to providers, beneficiaries, or other individuals. The most common schemes used in cases where providers were paid kickbacks were marketing prescription drugs for non-FDA-approved uses, billing for services that were not medically necessary, upcoding, and self-referring. Many different types of providers received or provided kickbacks in these cases; the most common provider types were DME suppliers, hospitals, and pharmaceutical manufacturers. The most common schemes used in cases where beneficiaries were paid kickbacks were billing for services that were not medically necessary and billing for services that were not provided. In addition, kickbacks were paid to both beneficiaries and providers for their involvement in a fraud case or to other individuals, such as "recruiters," who connect providers and beneficiaries in exchange for a fee. For 23 of the cases we reviewed, the documents included information about the amount of kickbacks paid to beneficiaries, providers, and other individuals, which totaled $69.7 million. In about 62 percent of the 739 cases we reviewed, providers were complicit in the cases, either by submitting fraudulent claims or by supporting the fraud schemes. (See table 2 and app. II, table 8, for additional information on the role of the provider, by fraud scheme.)
For example, a physician would be complicit when billing for higher level services than those actually provided in order to receive a higher payment rate (upcoding). A physician may also be complicit in a case by receiving kickbacks for referring beneficiaries to a particular clinic, even though the physician did not bill for the services provided by that clinic. Physicians, as well as hospitals, other clinics, home health agencies, and pharmacies, were the most common types of providers that were complicit.

Example of health care fraud case in which providers were complicit: According to an indictment in one of the cases we reviewed, a physician conspired with the owner of a medical testing company that performed diagnostic ultrasound tests to bill Medicare and private insurance companies for tests that were either never provided or were not medically necessary. The physician signed orders for these ultrasound tests for beneficiaries that he had not actually treated and received kickbacks from the medical testing company for the orders.

Providers were not complicit in about 10 percent of the cases we reviewed. In those cases, providers' information had been stolen or used without their knowledge to carry out the fraud schemes. The most common schemes in these cases were falsifying records and billing for services or supplies that were not prescribed by the physicians. Additionally, in two cases, a fictitious provider was created to support the fraud schemes.

Example of health care fraud case in which provider was not complicit: According to a complaint, a DME supplier billed Medicare for supplies prescribed by a physician. However, those supplies were not prescribed by the physician the DME supplier had listed on the claims. During an interview with investigators, the physician indicated his practice was not to prescribe DME supplies to his patients and instead to refer them to a specialist.
When reviewing a list of 200 Medicare beneficiaries for whom the DME supplier had listed him as the prescribing physician on the claims, the physician identified that only 12 of those listed had ever been his patients. In this case, the DME supplier was using the physician's information without his knowledge to bill for DME supplies that were not provided. No provider was involved in another 10 percent of the cases that we reviewed. For example, no provider gave care directly or billed for services provided to a beneficiary in cases where a prescription drug manufacturer marketed prescription drugs for non-FDA-approved uses. In the remaining 18.5 percent of cases, we were unable to determine how the provider was involved as the court documents did not include this information. In contrast with providers, only about 14 percent of the 739 cases we reviewed had beneficiaries that were complicit in the schemes. For example, there were cases in which the beneficiary willingly provided identification information so a provider could fraudulently bill, or the beneficiary received kickbacks for receiving treatment at a specific clinic. Among the cases in which the beneficiary was complicit, the most common schemes were billing for services that were not medically necessary, billing for services that were not provided, and falsifying records to support the fraud schemes. Example of health care fraud case in which beneficiary was complicit According to an information document filed by prosecutors in one case we reviewed, an employee of a medical clinic asked a beneficiary to visit the clinic complaining of ailments that the beneficiary did not have in order to receive prescriptions for drugs containing controlled substances. The beneficiary visited the clinic complaining of a toothache and obtained a prescription for a controlled substance. The employee then purchased that medication from the beneficiary. 
In about 62 percent of the 739 cases we reviewed, beneficiaries were not complicit in the schemes. Among beneficiaries that were not complicit, most received services from the provider, but there was no evidence that the beneficiary was aware of the fraud (54.8 percent). For example, beneficiaries who were not complicit in the schemes received services from the provider but were unaware that the provider billed for upcoded services or that they received services that were not medically necessary. In 39 cases (5.3 percent), court documents we reviewed indicated that the beneficiaries' information was stolen or sold without their knowledge. In an additional 12 cases (1.6 percent) we reviewed, the beneficiaries' information was obtained through false pretenses, such as through a telemarketer. (See table 3 and app. II, table 9, for additional information on the role of the beneficiary, by fraud scheme.) Additionally, the beneficiary was not involved in about 13 percent of the 739 cases we reviewed. The beneficiary may not have been involved in the fraud schemes because the schemes did not involve billing for care provided to a beneficiary. For instance, in one case, a pharmaceutical drug manufacturer marketed drugs for non-FDA-approved uses and paid kickbacks to providers for prescribing those drugs to beneficiaries. This scheme did not involve billing for care provided to the beneficiary. For the remaining 11 percent of cases we reviewed, we were unable to determine whether the beneficiary was complicit or not, and in 1 case, a fictitious beneficiary's information was created to support the fraud scheme. Among the 739 cases, we found 165 cases (22 percent) in which the entire case (2 percent) or part of the case (20 percent) could have been affected by the use of smart cards. The remaining 574 cases (78 percent) had schemes that would not have been affected by smart cards. (See fig. 1.) 
Example of health care fraud case in which the provider was complicit but the beneficiary was not According to a complaint document in one case we reviewed, the provider submitted duplicate claims for the same service provided to a beneficiary. The beneficiary received the service from the provider the first time but was unaware that a second claim had been submitted as if the service had been provided a second time when it had not. Example of health care fraud case in which neither the beneficiary nor the provider was complicit According to a complaint document in one fraud case we reviewed, a DME supplier used the identification information for several beneficiaries to submit a bill for DME supplies. The DME supplier also used a physician's identification information to allege that the supplies had been prescribed when that physician had not prescribed the DME supplies. In this case, neither the beneficiaries nor the provider were aware of the fraud schemes. Among the 739 cases we reviewed, we found 165 cases in which the entire or part of the case could have been affected by the use of smart cards. These cases included at least one of six schemes smart cards could have affected as the schemes involved the lack of verification of the beneficiary or the provider at the point of care. 
These six schemes were (1) billing for services that were never actually provided and no legitimate services were provided; (2) misusing a provider's identification information to bill fraudulently (such as using a retired provider's identification information); (3) misusing a beneficiary's identification information to bill fraudulently (such as using a deceased beneficiary's identification information or stealing a beneficiary's information); (4) billing more than once for the same service (known as duplicate billing) by altering a small portion of the claim, such as the date, and resubmitting it for payment; (5) providing services to ineligible individuals; and (6) falsifying a substantial part of the records to indicate that beneficiaries or providers were present at the point of care. In 18 cases (2.4 percent of all cases resolved in 2010 that we reviewed), the entire case could have been affected because all of the schemes on those cases involved the lack of verification of the beneficiary or provider at the point of care. For these 18 cases, either the beneficiary or the provider was complicit in the scheme, while the other was not, or neither the beneficiary nor the provider was complicit in the scheme. The use of smart cards could have had an effect because the card would have been able to verify at least one identity. Example of health care fraud case that may have been partially affected by the use of smart cards According to a complaint in one case we reviewed, a physical therapy provider was billing for services that were not medically necessary and was submitting duplicate bills for the same service. This case could have been partially affected by the use of smart cards, as the smart card would have verified that the beneficiary was present for only one service in which a duplicate bill was submitted but would not have affected the ability of the provider to bill for services that were not medically necessary. 
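The full-versus-partial determination described above reduces to a set comparison between the schemes in a case and the six smart-card-relevant schemes. A minimal sketch of that logic follows; the scheme labels are hypothetical stand-ins, not the actual coding categories from our case review:

```python
# Sketch of the classification logic described above. Scheme labels are
# hypothetical stand-ins for the report's coding categories.

# The six schemes that turn on verifying beneficiary or provider
# presence at the point of care.
SMART_CARD_RELEVANT = {
    "services_never_provided_no_legitimate_care",
    "provider_id_misused",
    "beneficiary_id_misused",
    "duplicate_billing",
    "services_to_ineligible_individuals",
    "records_falsified_to_show_presence",
}

def smart_card_effect(case_schemes):
    """Classify how smart cards could have affected a case's schemes."""
    schemes = set(case_schemes)
    relevant = schemes & SMART_CARD_RELEVANT
    if not relevant:
        return "not affected"
    if relevant == schemes:
        return "entire case affected"
    return "partially affected"
```

For instance, a case combining duplicate billing with billing for services that were not medically necessary, like the physical therapy example above, would classify as partially affected: the card check touches the duplicate bill but not the misrepresented necessity.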
Smart cards could have partially affected an additional 147 cases (19.9 percent) in which at least one of the six schemes was present. However, because other fraud schemes were used, the entire case would not have been affected. (See table 4.) Smart card technology would not have affected the majority of fraud schemes we identified, which represented 574 of the 739 cases that we reviewed (78 percent). In these instances, the schemes would not have been affected by the smart cards because although the beneficiary and provider were present at the point of care, the provider misrepresented the services rendered after the smart cards would have registered their identities. These schemes included the following: billing for services that were not provided along with services that were provided; billing for services that were not medically necessary; unbundling of services; billing for services that were not prescribed or not referred by a physician; and billing for services as if they were provided by a physician to receive a higher payment rate when they were actually provided by another provider for which the payment rate would have been lower. In these schemes, smart cards would not be able to detect that the provider misrepresented the actual services provided even if the cards verified the beneficiary's and provider's presence. Similarly, schemes that involved a provider misrepresenting eligibility to provide services would not have been affected by smart cards, including schemes in which bills were submitted for services provided by an excluded provider or by an unlicensed, uncertified, or ineligible provider. Many of these schemes involved health care entities that billed for services provided by employees or contractors that were not licensed or were excluded from providing care. In addition, smart card technology would not have affected schemes in which the beneficiary was not present or the verification of the beneficiary and provider was not relevant to the scheme.
These fraud schemes involved improper marketing of prescription drugs, including drugs for non-FDA-approved uses; misbranding prescription drugs; inflating prescription drug prices; and physician self-referrals. In addition, smart cards would not have affected schemes related to improperly obtaining or distributing prescription drugs (including drugs that contained controlled substances), regardless of whether the beneficiary's or provider's identity was verified, such as cases in which individuals visited multiple providers complaining about pain to obtain prescriptions. Further, smart cards would not have had an effect on cases in which the beneficiary and provider were complicit in the scheme, regardless of the schemes used on the case. For instance, smart cards would not have an effect on the billing for services never provided if both the beneficiary and provider were willing participants in the scheme. Similarly, smart cards would not have an effect on cases in which kickbacks were paid to a beneficiary or to a provider that allowed his or her smart card to be used for fraud. HHS and DOJ provided technical comments on a draft of this report, which we have incorporated as appropriate. In its comments, HHS reiterated that it would be difficult for CMS to implement smart cards in the Medicare program because implementation would require significant changes. For example, CMS stated that it would need to require that Medicare beneficiaries present smart cards at the point of care, which is contrary to current CMS policy and which CMS believes could create access to care issues. Additionally, CMS officials noted that implementing smart cards in Medicare would be a significant business process change, requiring substantial resources and time to implement. This report, as well as our past work on smart cards in Medicare, recognizes the concerns raised by CMS. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. This appendix provides details on the methodology we used to describe the types of health care fraud and their prevalence among cases resolved in 2010 that we reviewed. To describe types of health care fraud, we reviewed our prior reports, as well as reports from the Department of Health and Human Services (HHS) Office of Inspector General (OIG) and the Department of Justice (DOJ) to develop a list of schemes and definitions for these schemes, and then reviewed cases resolved in 2010 that we obtained through the course of work for our 2012 report. Specifically, we reviewed several government reports, such as reports produced by HHS and DOJ on the Health Care Fraud and Abuse Control Program, and DOJ and HHS OIG press releases to identify fraud schemes that were commonly included in the reports and to develop definitions for these schemes. See table 5 for the health care fraud schemes developed for our case review. Using the list of fraud schemes identified, we reviewed court documents for the health care fraud cases resolved in 2010 to determine the prevalence of health care fraud schemes. The data we obtained for the 2012 report were for fraud cases, including investigations and prosecutions, from HHS OIG and DOJ's U.S. 
Attorneys' Offices and Civil Division and included a variety of information such as information on the subjects of the fraud case and outcomes of the case (such as prison or probation). We obtained data from both HHS OIG and DOJ, as HHS OIG conducts investigations but DOJ does not prosecute all of the cases that are investigated. Also, because HHS OIG often works jointly with DOJ on fraud cases, for our 2012 report, we reduced duplication of fraud cases from the data we received from HHS OIG and DOJ by comparing subjects of the fraud cases that were in more than one data set we received. Although the cases we obtained for the 2012 report included investigations as well as prosecutions, judgments, and settlements, for this engagement, we included only cases that had been adjudicated favorably for the United States, meaning criminal cases in which the subjects were found guilty, pled guilty, or pled no contest to at least one of the charges, and civil cases that resulted in a judgment for the United States or a settlement. There were 834 cases that resulted in a favorable outcome for the United States, though we only reviewed 739 of these cases. We excluded 95 cases because they were duplicative of another case in our data set (18 cases), they were not health care fraud cases (21 cases), the data were insufficient to determine the fraud schemes used on the cases (15 cases), the cases were administrative actions rather than criminal or civil cases (9 cases), or we could not locate information on the cases, such as a court document or a press release, to determine the fraud schemes involved in the cases (32 cases). To determine the health care fraud schemes used in the 739 cases included in our report, we reviewed court documents associated with the charging stage of the case (such as indictment, information, or complaint) unless the charging document for a case was not available. 
We used court documents that we had previously obtained through our work on the 2012 report. For that report, we obtained court documents from the Public Access to Court Electronic Records (PACER) database for the DOJ cases. However, we did not have a charging document for all of the DOJ cases and did not have a charging document for any of the HHS OIG cases. As a result, we searched in PACER for charging documents for any cases for which we were missing a charging document. If the charging document was not available, we reviewed case details as described in a DOJ or Federal Bureau of Investigation (FBI) press release. For several HHS OIG cases, we were unable to locate a charging document or a press release and obtained other court documents, such as settlement agreements and plea agreements, from HHS OIG. When reviewing the court documents, we collected information on the health care fraud schemes that were used in the cases along with information about the beneficiary's role, the provider's role, whether a durable medical equipment supplier was involved, the programs that were affected by the fraud, and any monetary amounts associated with the fraud schemes (such as the amounts paid). For each case we reviewed, two reviewers independently categorized all information obtained for the case, including the relevant health care fraud schemes used on the case, and resolved any differences in the categorization. To assess the reliability of the data, we reviewed relevant documentation and examined the data for reasonableness and internal consistency. We found these data were sufficiently reliable for the purposes of our report. Tables 6 through 9 provide detailed information on health care fraud schemes for cases we reviewed, including whether the scheme was the only scheme in the case or used in combination with other schemes, the number of schemes used in cases, the role of the provider, and the role of the beneficiary. 
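The prevalence figures in tables 6 through 9 are, at bottom, counts over the scheme labels the two reviewers agreed on for each case. A minimal sketch of that tally, with hypothetical data in place of the actual case records:

```python
from collections import Counter

# Hypothetical categorized cases: each entry is the set of scheme
# labels the two reviewers agreed on for one case.
cases = [
    {"billing_not_provided", "kickbacks"},
    {"billing_not_provided"},
    {"upcoding", "falsified_records", "kickbacks"},
]

# Per-scheme prevalence: schemes co-occur within a case, so the
# percentages sum to more than 100.
counts = Counter(scheme for case in cases for scheme in case)
prevalence = {s: 100 * n / len(cases) for s, n in counts.items()}

# Share of cases with more than one scheme (about 68 percent among the
# 739 cases we reviewed).
multi_scheme = sum(1 for case in cases if len(case) > 1) / len(cases)
```

With this toy data, "billing_not_provided" appears in two of three cases and two-thirds of cases have multiple schemes; the report's actual figures come from the 739 categorized cases.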
In addition to the contact named above, Martin T. Gahart, Assistant Director; Christine Davis; Laura Elsberg; Christie Enders; Matt Gever; Jackie Hamilton; Dan Lee; Elizabeth T. Morrison; and Carmen Rivera-Lowitt made key contributions to this report.
While there have been convictions for multimillion dollar schemes that defrauded federal health care programs, there are no reliable estimates of the magnitude of fraud within these programs or across the health care industry. In some fraud cases, individuals have billed federal health care programs or private health insurance by using a beneficiary's or provider's identification information without the beneficiary's or provider's knowledge. One idea to reduce the ability of individuals to commit this type of fraud is to use electronically readable card technology, such as smart cards. Proponents say that these cards could reduce fraud by verifying that the beneficiary and the provider were present at the point of care. GAO was asked to identify and categorize schemes found in health care fraud cases. This report describes (1) health care fraud schemes and their prevalence among cases resolved in 2010 and (2) the extent to which health care fraud schemes could have been affected by the use of smart card technology. GAO reviewed reports on health care fraud and smart card technology and reviewed court documents for 739 fraud cases resolved in 2010 obtained for a related 2012 GAO report on health care fraud. GAO is not making any recommendations. The Department of Health and Human Services and the Department of Justice provided technical comments on a draft of this report, which GAO incorporated as appropriate. GAO's review of 739 health care fraud cases that were resolved in 2010 showed the following: About 68 percent of the cases included more than one scheme with 61 percent including two to four schemes and 7 percent including five or more schemes. The most common health care fraud schemes were related to fraudulent billing, such as billing for services that were not provided (about 43 percent of cases) and billing for services that were not medically necessary (about 25 percent). 
Other common schemes included falsifying records to support the fraud scheme (about 25 percent), paying kickbacks to participants in the scheme (about 21 percent), and fraudulently obtaining controlled substances or misbranding prescription drugs (about 21 percent). Providers were complicit in 62 percent of the cases, and beneficiaries were complicit in 14 percent of the cases. GAO's analysis found that the use of smart cards could have affected, in whole or in part, about 22 percent (165) of the cases GAO reviewed, because those cases included schemes that involved the lack of verification of the beneficiary or provider at the point of care. However, in the majority of cases (78 percent), smart card use likely would not have affected the cases because either beneficiaries or providers were complicit in the schemes, or for other reasons. For example, the use of cards would not have affected cases in which the provider misrepresented the service (as in billing for services not medically necessary), or when the beneficiary and provider were not directly involved in the scheme (as in illegal marketing of prescription drugs).
Although Congress has not established mechanisms for regularly adjusting for inflation the fixed dollar amounts of civil tax penalties administered by IRS, it has done so for penalties administered by other agencies. When the Federal Civil Penalties Inflation Adjustment Act of 1990 (Inflation Adjustment Act) was enacted, Congress noted that inflation had weakened the deterrent effect of many civil penalties. The stated purpose of the 1990 act was "to establish a mechanism that shall (1) allow for regular adjustment for inflation of civil monetary penalties; (2) maintain the deterrent effect of civil monetary penalties and promote compliance with the law; and (3) improve the collection by the Federal Government of civil monetary penalties." Congress amended the Inflation Adjustment Act in 1996 and required some agencies to examine their covered penalties at least once every 4 years thereafter and, where possible, make penalty adjustments. The Inflation Adjustment Act exempted penalties under the IRC of 1986, the Tariff Act of 1930, the Occupational Safety and Health Act of 1970, and the Social Security Act. As stated earlier, some civil tax penalties are based on a percentage of liability and therefore are implicitly adjusted for inflation. For example, the penalty for failure to pay tax obligations is 0.5 percent of the tax owed per month, not exceeding 25 percent of the total tax obligations. However, other civil penalties have fixed dollar amounts, such as minimums or maximums, which are not linked to a percentage of liability. For example, a minimum penalty of $100 exists for a taxpayer who fails to file a tax return. Adjusting civil tax penalties for inflation on a regular basis to maintain their real values over time may increase IRS assessments and collections. 
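The distinction drawn above between percentage-based penalties (which scale with the tax owed and so are implicitly inflation-adjusted) and fixed dollar amounts can be illustrated with a short sketch; the function name is ours, not IRS terminology:

```python
def failure_to_pay_penalty(tax_owed: float, months_late: int) -> float:
    """Failure-to-pay penalty: 0.5 percent of the unpaid tax per month,
    capped at 25 percent of the total tax obligation."""
    rate = min(0.005 * months_late, 0.25)
    return tax_owed * rate

# A $10,000 balance outstanding for 6 months: 0.5% x 6 = 3%, about $300.
# After 50 months the 25 percent cap binds, so the penalty stops growing.
penalty_6_months = failure_to_pay_penalty(10_000, 6)
penalty_capped = failure_to_pay_penalty(10_000, 60)
```

Because the penalty is a percentage of the liability, its real value moves with the tax base; a fixed floor such as the $100 failure-to-file minimum does not, which is the source of the erosion discussed in this report.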
Based on our analysis, if the fixed dollar amounts of civil tax penalties had been adjusted for inflation, the increase in IRS assessments potentially would have ranged from an estimated $100 million to $320 million and the increase in collections would have ranged from an estimated $38 million to $61 million per year from 2000 to 2005, as shown in table 1. The majority of the estimated increase in collections from adjusting these penalties for inflation was generated from the following four types of penalties: (1) failure to file tax returns, (2) failure to file correct information returns, (3) various penalties on returns by exempt organizations and by certain trusts, and (4) failure to file partnership returns. The estimated increases in collections associated with these penalties for 2004 are shown in table 2. We highlight 2004 data because, according to IRS officials, approximately 85 percent of penalties are collected in the 3 years following the assessment. The same four penalty types account for the majority of the estimated increase in collections for the prior years. Our analysis showed that these four penalties would account for approximately 99 percent of the estimated $61 million in additional IRS collections for assessments made in calendar year 2004. Because penalty amounts have not been adjusted for decades in some cases, the real value of the fixed dollar amounts of these penalties has decreased. For example, the penalty for failing to file a partnership return was set at $50 per month in 1979, which is equivalent to about $18 today, or a nearly two-thirds decline in value, as shown in table 3. If the deterrent effect of penalties depends on the real value of the penalty, the deterrent effect of these penalties has eroded because of inflation. 
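The erosion described above is a simple price-index ratio. A minimal sketch follows, using approximate annual-average CPI-U figures (72.6 for 1979 and 207.3 for 2007) that are illustrative assumptions rather than values taken from the report:

```python
# Approximate annual-average CPI-U values; illustrative assumptions.
CPI_1979 = 72.6
CPI_2007 = 207.3

def real_value(amount: float, cpi_when_set: float, cpi_now: float) -> float:
    """Purchasing power today of a dollar amount fixed in an earlier year."""
    return amount * (cpi_when_set / cpi_now)

def inflation_adjusted(amount: float, cpi_when_set: float, cpi_now: float) -> float:
    """What the fixed amount would be if it were indexed to the price level."""
    return amount * (cpi_now / cpi_when_set)

# The $50-per-month partnership penalty set in 1979 is worth roughly
# $18 in 2007 dollars, a decline of nearly two-thirds; indexing it
# would put the penalty at roughly $143.
eroded = real_value(50, CPI_1979, CPI_2007)
indexed = inflation_adjusted(50, CPI_1979, CPI_2007)
```

The two functions are inverses of one another: deflating a fixed amount shows the lost deterrent value, while indexing shows what regular adjustment would restore.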
In addition, not adjusting these penalties for inflation may lead to inconsistent treatment of otherwise equal taxpayers over time because taxpayers penalized when the amounts were set effectively pay a higher real penalty than taxpayers penalized for the same noncompliance years later. Finally, if the real value of penalties declines, but IRS's costs to administer them do not, imposing penalties becomes less cost-effective for IRS and could lead to a decline in their use. In the past, Congress has established fixed penalty amounts, increased fixed penalty amounts, or both to deter taxpayer noncompliance with the tax laws. For example, the $100 minimum for failure to file a tax return was created in 1982 because many persons who owed small amounts of tax ignored their filing obligations. In addition, Congress increased penalties for failure to file information returns in 1982 because it believed that inadequate information reporting of nonwage income was a substantial factor in the underreporting of such income by taxpayers. As recently as 2006, IRS's National Research Program confirmed Congress's belief that compliance is highest where there is third-party reporting. Congress has also recently adjusted some civil penalties that have fixed dollar amounts. For example, the minimum penalty for a bad check was raised from $15 to $25 in May 2007, and the penalty for filing a frivolous return was raised from $500 to $5,000 in December 2006. We spoke with officials from offices across IRS whose workloads would be affected if regular adjustments of penalties occurred. IRS officials from all but one unit said that regularly updating the fixed dollar amounts of civil tax penalties would not be a significant burden.
Officials from one relatively small office--the Office of Penalties--said that the burden of such adjustments might be considerable, depending on the number of penalties being adjusted, and would require a reprioritization of their work, since their office would have lead responsibility for monitoring the administrative steps necessary to implement the adjustments and coordinating tasks among a wide range of functions within IRS. In addition, the limited number of tax practitioners we interviewed told us that the administrative burden associated with adjusting these penalties for inflation on a regular basis would be low. Officials from all but one unit we spoke to within IRS said that regularly adjusting civil tax penalties for inflation would not be burdensome. Some officials added that adequate lead time and minimally complex changes would reduce the administrative impact. For example, officials from the Office of Forms and Publications and the Office of Chief Counsel said that adjustments to civil penalty amounts would not affect their work significantly. While each office would have to address the penalty changes in documents for which they are responsible, in some cases these documents are already updated regularly. Similarly, officials responsible for programming IRS's computer systems explained that these changes would not require extraordinary effort unless they had little lead time in which to implement the changes. However, officials from the Office of Penalties within the Small Business/Self-Employed division (SB/SE)--the unit that would be responsible for coordinating IRS's implementation of any adjustments to penalties among a wide range of functions within IRS--felt that the administrative burden associated with these changes might be considerable, depending on the number of penalties being adjusted.
The Office of Penalties, which currently consists of 1 manager and 10 analysts, provides policies, guidelines, training, and oversight for penalty issues IRS-wide, not just within SB/SE. When legislation affecting penalties is enacted, the Office of Penalties creates an implementation team that helps determine what IRS needs to do to implement the new legislation. In the case of adjusting penalties for inflation, the Office of Penalties would work with numerous other IRS units to coordinate the necessary changes to forms, training materials, computer systems, and guidance, among other things. Regularly changing four penalties would take less effort than regularly changing all penalties. In addition, making these changes would require the office to reprioritize its work or receive more resources. While the Office of Penalties has not done a formal analysis of the resources needed, an official stated that the additional work would not require a significant increase in staffing, such as a doubling of the size of the office. As a result, the amount of additional resources necessary for the penalty adjustments does not appear to be of sufficient scale to have a large impact on IRS overall. Further, officials we interviewed from other IRS units who would perform the work described by the Office of Penalties said that the administrative burden would not be significant for them. Some IRS officials who oversee the implementation of other periodic updates to IRS databases and documents said that the legislative changes requiring regular updates are most burdensome initially but become less of an issue in each subsequent year. Some officials also said that with enough advance notice, they would be able to integrate the necessary changes into routine updates. For example, program changes could be integrated into the annual updates that some Modernization and Information Technology Service programs receive.
Other areas in IRS, such as the Office of Forms and Publications, already conduct annual and in some cases quarterly updates of their forms, and according to officials, a change to the tax penalty amount could easily be included in these regularly scheduled updates. IRS has a variety of experience that may be relevant to adjusting civil tax penalties with fixed dollar amounts for inflation. IRS has extensive procedures for implementing statutory changes to the tax code. Further, IRS has experience implementing inflation adjustment calculations. For example, tax brackets, standard deduction amounts, and the itemized deduction limit are among the inflation adjustments conducted annually by IRS. In addition, the administrative changes associated with regular updates to the interest rate have some similarities to the types of changes that an inflation adjustment may require. For example, the Office of Chief Counsel issues quarterly guidance on interest rates and the Communications & Liaison Office provides regular updates on interest changes to the tax professional community, including practitioner associations. Changes to civil tax penalty fixed dollar amounts could be handled in a similar manner. The limited number of tax practitioners that we spoke with also expected the impact on their work from adjustments to the fixed dollar amounts of civil tax penalties for inflation to be relatively low. For example, one tax practitioner said that he expected to spend more time explaining different penalty amounts to clients, particularly in situations where taxpayers who receive the same penalty in different tax years may not understand why different penalty amounts were applied.
In addition, three other practitioners we spoke with said that the changes may lead to an increased reliance on the software programs that tax preparers often use to assist them with determining penalty amounts, since making the calculations involving inflation adjustments could become more onerous for tax practitioners to do without software. The real value and potential deterrent effect of civil tax penalties with fixed dollar amounts have decreased because of inflation. Periodic adjustments to the fixed dollar amounts of civil tax penalties to account for inflation, rounded appropriately, may increase the value of collections by IRS, would keep penalty amounts at the level Congress initially believed was appropriate to deter noncompliance, and would serve to maintain consistent treatment of taxpayers over time. Regularly adjusting the fixed dollar amounts of civil tax penalties for inflation likely would not put a significant burden on IRS or tax practitioners. Congress should consider requiring IRS to periodically adjust for inflation, and round appropriately, the fixed dollar amounts of civil penalties to account for the decrease in real value over time and so that penalties for the same infraction are consistent over time. On July 30, 2007, we sent a draft of this report to IRS for its comment. We received technical comments that have been incorporated where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to appropriate congressional committees and the Acting Commissioner of Internal Revenue. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9110 or at [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To determine the potential effect that adjusting civil tax penalties for inflation would have on the dollar value of penalty assessments and collections, we used the Consumer Price Index-Urban (CPI-U) to adjust actual penalty assessment and collection information contained in the Enforcement Revenue Information System (ERIS), which was created to track Internal Revenue Service (IRS) enforcement revenues. We provided inflation-adjusted estimates for penalties that had been assessed for at least $1 million in any one year from 2000 to 2005 and had either a fixed minimum or set amount. This excluded less than two one-hundredths of 1 percent of all assessments each year. In addition, we assumed that assessment rates and collection rates would stay the same regardless of penalty amount. This assumption may bias our estimates upwards because higher penalties may encourage taxpayers to comply with tax laws and, therefore, IRS would not assess as many penalties. However, improved compliance could also increase revenues. For collections, we assumed that a particular collection would increase to the inflation-adjusted penalty amount only if the unadjusted penalty assessment had been paid in full. For example, if a taxpayer paid $50 of a $100 penalty assessment, we assumed that the $50 collected was all that would have been collected even with a higher assessment, and therefore did not adjust the collection amount. We made this assumption in order to avoid overstating the effect that adjusting penalties for inflation would have on collections because the data did not tell us why a penalty was partially collected.
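The conservative collection assumption described above can be sketched in a few lines; the function and the records it takes are illustrative, not actual ERIS fields:

```python
def additional_collection(assessed: float, collected: float,
                          inflation_factor: float) -> float:
    """Extra amount collected under an inflation-adjusted penalty.

    Conservative rule: scale a collection up only when the unadjusted
    assessment was paid in full; partial collections are assumed
    unchanged, since the data do not say why payment stopped short.
    """
    if collected >= assessed:
        return assessed * inflation_factor - assessed
    return 0.0

# A $100 assessment paid in full, with 20 percent cumulative inflation,
# contributes $20 of additional collections; a $100 assessment with
# only $50 collected contributes nothing additional.
```

Summing this quantity over all assessments in a year yields the kind of lower-bound estimate of additional collections that this approach is designed to produce.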
To the extent that taxpayers who paid the unadjusted penalty amount in full would not pay the adjusted penalty amount in full, our estimates would overstate additional collections. One reason for a partial collection is that it is all the taxpayer can afford. We did not include penalties that are percentage based but have a fixed maximum in our inflation adjustments. Two penalty categories in the ERIS data set that we received have fixed maximums and had total assessments of over $1 million for at least 1 year from 2000 to 2005. In both cases, we could not determine how much a penalty assessment for the current maximum would have risen if the maximum had been higher. However, we estimated an upper bound for the potential increase in collections due to adjusting the maximums for inflation by assuming that penalties assessed at the current maximum would have increased by the full rate of inflation. As a result, we concluded that at most, collections would have risen by approximately $196,000 over the years 2000 to 2005 if these maximums had been adjusted for inflation. We also did not include penalties that are based solely on a percentage of tax liability in our analysis because they are implicitly adjusted for inflation. The data contained in the ERIS database were reliable for our purposes, but some limitations exist. To assess the reliability of the data, we reviewed relevant documentation, interviewed relevant IRS officials, and performed electronic data testing. One limitation of the ERIS data is that it does not include penalties that are self-assessed and paid at the time of filing. IRS officials estimated that this is about 6 to 7 percent of all penalty assessments, but that a large majority of these are percentage based with no fixed dollar amount. For example, many people self-assess and pay the penalty for withdrawing money from their Individual Retirement Accounts early. 
Further, IRS officials acknowledged that some penalties were incorrectly categorized in the database, making it impossible for us to determine which penalties were being assessed. We determined that 0.4 percent to 1.4 percent of assessments per year from 2000 to 2005 were incorrectly categorized. For example, in 2000, over $144 million in assessments and over $28 million in collections were incorrectly categorized. In 2005, over $343 million in assessments and over $86 million in collections were incorrectly categorized. These two limitations may bias our estimates downwards. The federal government produces several broad measures of price changes, including the CPI-U and the Gross Domestic Product (GDP) price deflator. The CPI-U measures the average change over time in the prices paid by consumers for a fixed market basket of consumer goods and services. The GDP price deflator measures changes over time in the prices of broader expenditure categories than the CPI-U. We used the CPI-U for the purposes of this analysis because it is currently used in the tax code to make inflation adjustments to several provisions, such as the tax rate schedule, the amount of the standard deduction, and the value of exemptions. To determine the likely effect that regularly adjusting penalties for inflation would have on the administrative burden of IRS officials, we interviewed officials in offices across IRS who would be affected if regular adjustments of penalties occurred.
These offices are the Office of Penalties within the Small Business/Self-Employed division (SB/SE); Learning and Education within SB/SE; Wage and Investment division (W&I); Tax Exempt/Government Entity division; Large and Mid-Size Business division; Research, Analysis and Statistics division; Legislative Analysis Tracking and Implementation Services; Office of Chief Counsel; Business Forms and Publications within W&I; Enforcement Revenue Data; Communications and Liaison; and Modernization and Information Technology Services, including officials who work on the Business Master File, the Financial Management Information System, the Automated Trust Fund Recovery system, Report Generation Software, Automated Offers in Compromise, Penalty and Interest Notice Explanation, Integrated Data Retrieval System, and the Payer Master File Processing System. To determine the likely effect that regularly adjusting penalties for inflation would have on the administrative burden of tax practitioners, we interviewed tax practitioners affiliated with the American Institute of Certified Public Accountants, the National Association of Enrolled Agents, the National Society of Tax Professionals, and the American Bar Association. In total, we spoke with 28 practitioners. Results from the nongeneralizable sample of practitioners we selected cannot be used to make inferences about the effect of regular adjustments of penalties on the work of all tax practitioners. Additionally, those we spoke with presented their personal views, not those of the professional associations through which they were contacted. We conducted our work from September 2006 through July 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Jonda Van Pelt, Assistant Director; Benjamin Crawford; Evan Gilman; Edward Nannenhorn; Jasminee Persaud; Cheryl Peterson; and Ethan Wozniak made key contributions to this report.
Civil tax penalties are an important tool to encourage taxpayer compliance with the tax laws. A number of civil tax penalties have fixed dollar amounts--a specific dollar amount, a minimum or maximum amount--that are not indexed for inflation. Because of Congress's concerns that civil penalties are not effectively achieving their purposes, we agreed to (1) determine the potential effect of adjusting civil tax penalties for inflation on the Internal Revenue Service's (IRS) assessment and collection amounts and (2) describe the likely administrative impact of regularly adjusting civil tax penalties on IRS and tax practitioners. GAO examined IRS data on civil tax penalties and conducted interviews with IRS employees and tax practitioners. Adjusting civil tax penalties for inflation on a regular basis to maintain their real values over time may increase IRS collections by tens of millions of dollars per year. Further, the decline in real value of the fixed dollar amounts of civil tax penalties may weaken the deterrent effect of these penalties and may result in the inconsistent treatment of taxpayers over time. If civil tax penalty fixed dollar amounts were adjusted for inflation, the estimated increase in IRS collections would have ranged from $38 million to $61 million per year from 2000 to 2005. Almost all of the estimated increase in collections was generated by four penalties. These increases result because some of the penalties were set decades ago and have decreased significantly in real value--by over one-half for some penalties. According to those we interviewed, the likely administrative burden associated with adjusting the fixed dollar amounts of civil tax penalties for inflation on a regular basis would not be significant for IRS and would be low for tax practitioners. 
However, officials from the Office of Penalties, a relatively small office that would be responsible for coordinating the required changes among multiple IRS divisions, said that the burden of such adjustments might be considerable, depending on the number of penalties being adjusted, and would require a reprioritization of work. IRS officials said that the work required would be easier to implement with each subsequent update.
The federal government receives amounts from numerous sources in addition to tax revenues, including user fees, fines, penalties, and intragovernmental fees. Whether these collections are dedicated to a particular purpose and available for agency use without further appropriation depends on the type of collection and its specific authority. User fees: User fees are fees assessed to users for goods or services provided by the federal government. They are an approach to financing federal programs or activities that, in general, are related to some voluntary transaction or request for government services above and beyond what is normally available to the public. User fees are a broad category of collections, whose boundaries are not clearly defined. They encompass charges for goods and services provided to the public, such as fees to enter a national park, as well as regulatory user fees, such as fees charged by the Food and Drug Administration for prescription drug applications. Unless Congress has provided specific statutory authority for an agency to use (i.e., obligate and spend) fee collections, fees are deposited to the Treasury as miscellaneous receipts and are generally not available to the agency. Fines, penalties, and settlement proceeds: Criminal fines and penalty payments are imposed by courts as punishment for criminal violations. Civil monetary penalties are not a result of criminal proceedings but are employed by courts and federal agencies to enforce federal laws and regulations. Settlement proceeds result from an agreement ending a dispute or lawsuit. As with user fees, unless Congress has provided specific statutory authority for an agency to use fines, penalties, and settlements, those collections are deposited as miscellaneous receipts and are generally not available to the agency. Intragovernmental fees: Intragovernmental fees are charged by one federal agency to another for goods and services such as renting space in a building or cybersecurity services.
Unlike user fees, fines, and penalties, unless Congress has specified otherwise, agencies generally have authority to use intragovernmental fees without further appropriation. In 2013, we identified six key fee design decisions related to how fees are set, used, and reviewed that, in the aggregate, enable Congress to design fees that strike its desired balance between agency flexibility and congressional control. Four of the six key design decisions relate to how the fee collections are used, and in 2015 we reported that they are applicable to fines and penalties (see figure 1). Congress determines the availability of collections by defining the extent to which an agency may use (i.e., obligate and spend) them, including the availability of the funds, the period of time the collections are available for obligation, the purposes for which they may be used, and the amount of collections that are available to the agency. Availability. Congressional decisions about the use of a fee, fine, or penalty will determine how the funds will be considered within the context of all federal budgetary resources. Collections are classified into three major categories: offsetting collections, offsetting receipts, or governmental receipts. Funds classified as offsetting collections can provide agencies with more flexibility because they are generally available for agency obligation without further legislative action. In contrast, offsetting receipts and governmental receipts offer greater congressional control because, generally, additional congressional action is needed before the collections are available for agency obligation. Time. When Congress provides that an agency's collections are available until they are expended, agencies have greater flexibility and can carry over unobligated amounts to future fiscal years. This enables agencies to align collections and costs over a longer time period and to better prepare for, and adjust to, fluctuations in collections and costs.
Funds set aside or reserved can sustain operations in the event of a sharp downturn in collections or increase in costs. Carrying over unobligated balances from year to year, if an agency has multi- or no-year collections, is one way agencies can establish a reserve. Purpose. Congress sets limits on the activities or purposes for which an agency may use collections. Congress has granted some agencies broad authority to use some of their collections for any program purpose, but has limited the use of other collections to specific sets of activities. Narrower restrictions may benefit stakeholders and increase congressional control. On the other hand, statutes that too narrowly limit how collections can be used reduce both Congress's flexibility to make resource decisions and an agency's flexibility to reallocate resources. This can make it more difficult to pursue public policy goals or respond to changing program needs, such as when the activities intended to achieve the purposes of the related program change. Amount. Congress determines the specific level of budget authority provided for a program's activities by limiting the amount of collections that can be collected or used by the agency; however, these limits can also pose challenges for the agency. For example, when a fee-funded agency is not authorized to retain or use all of its fee collections and no other funding sources are provided, the agency may not have the funds available to produce the goods or services that it has promised or that it is required to provide by law. Our design guides can help Congress consider the implications and tradeoffs of various design alternatives. 
One key design element is whether the funds will be (1) deposited to the Treasury as miscellaneous receipts for general support of federal government activities, (2) dedicated to the related program with availability subject to further appropriation, (3) dedicated to the related program and available without further congressional action, or (4) available based on a combination of these authorities. Some authorities to collect fees, fines and penalties specify that the funds will be deposited to the Treasury as miscellaneous receipts. These funds are not dedicated to the agency or program under which they were collected; they are used for the general support of federal government activities. For example, Penalties from financial institutions: Civil monetary penalty payments collected from financial institutions by certain financial regulators, including the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation, are deposited to the Treasury as miscellaneous receipts. In March 2016, we reported that, from January 2009 through December 2015, financial regulators and components within the Department of the Treasury deposited $2.7 billion to the Treasury as miscellaneous receipts from enforcement actions assessed against financial institutions for violations related to anti-money laundering, anti-corruption, and U.S. sanctions programs requirements. Federal Communications Commission (FCC) Application Fees: The FCC regulates interstate and international communications by radio, television, wire, satellite, and cable, and telecommunications services for all people of the United States. FCC collects application fees from companies for activities such as license applications, renewals, or requests for modification. As we reported in September 2013, these fees are deposited to the Treasury as miscellaneous receipts. Some fees, fines, and penalties cannot be used by an agency without being further appropriated to the agency. 
For example:

Customs and Border Protection's (CBP) Merchandise Processing Fee: Importers of cargo pay a fee to offset the costs of "customs revenue functions" as defined in statute, and the automation of customs systems. CBP deposits merchandise processing fees as offsetting receipts to the Customs User Fee Account, with availability subject to appropriation. In July 2016, we reported that in fiscal year 2014 merchandise processing fee collections totaled approximately $2.3 billion.

Requiring an appropriation to make the funds available to an agency increases opportunities for congressional oversight on a regular basis. When the amount of collections exceeds the amount of the appropriation, however, unobligated collection balances that are not available to the agency may accumulate. For example:

Securities and Exchange Commission (SEC) Fees: When the SEC collects more in Section 31 fees than its annual appropriation, the excess collections are not available for obligation without additional congressional action. In September 2015, we reported that at the end of fiscal year 2014, the SEC had a $6.6 billion unavailable balance in its Salaries and Expenses account because fee collections exceeded appropriations.

Environmental Protection Agency (EPA) Motor Vehicle and Engine Compliance Program (MVECP) Fees: MVECP fee collections are deposited into EPA's Environmental Services Special Fund. As we reported in September 2015, according to officials, Congress had not appropriated money to EPA from this fund for MVECP purposes. EPA instead received annual appropriations that may be used for MVECP purposes. As a result, the unavailable balance of this fund steadily increased and totaled about $370 million at the end of fiscal year 2014.

U.S. Army Corps of Engineers Harbor Maintenance Fee: The authorizing legislation generally designates harbor maintenance activities as the purpose for the fee collections but, as we reported in February 2008, fee collections have substantially exceeded spending on harbor maintenance. In July 2016, we reported that the Harbor Maintenance Trust Fund had a balance of over $8 billion at the end of fiscal year 2014.

U.S. Patent and Trademark Office (USPTO) Fees: In September 2013, we reported that in some years Congress chose not to make available to USPTO the full amount of its collections, which, according to USPTO officials, contributed to USPTO's inability to hire sufficient examiners to keep up with its workload and to invest in technology systems needed to modernize the agency. According to USPTO officials, patent fee collections can only be used for patent processes, and trademark fee collections can only be used for trademark processes, as well as to cover each process's proportionate share of the administrative costs of the agency. USPTO officials stated that patent and trademark customers are typically two distinct groups and that this division helps to assure stakeholders that their fees are supporting the activities that affect them directly.

Some programs include mechanisms to link the amount of collections with the amount of collections appropriated to the program over time. For example:

Food and Drug Administration (FDA) Prescription Drug User Fees: If FDA prescription drug user fee collections are higher than the amount of the collections appropriated for the fiscal year, FDA must adjust fee rates in a subsequent year to reduce its anticipated fee collections by the excess amount. In March 2012, we reported that in fiscal year 2010, Prescription Drug User Fee Act user fees collected by FDA--including application, establishment, and product fees--totaled more than $529 million, including over $172 million in application fees.
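The FDA adjustment mechanism described above amounts to simple carryover arithmetic: any excess of one year's collections over the amount appropriated is subtracted from a later year's anticipated collections. A minimal sketch of that logic (the function name and the dollar figures in the example are illustrative, not drawn from FDA's actual rate-setting formula):

```python
def adjusted_collection_target(next_year_target, collected, appropriated):
    """Reduce a later year's anticipated fee collections by any excess of
    this year's collections over the amount appropriated for the year.
    All figures are dollars; the names here are illustrative, not FDA's."""
    excess = max(0, collected - appropriated)
    return next_year_target - excess

# Hypothetical: $529 million collected against a $500 million appropriation
# leaves a $29 million excess, so a $550 million target drops to $521 million.
```

In effect, the statute uses a later year's fee rates as the valve that keeps cumulative collections aligned with cumulative appropriations.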
Legislation authorizing a fee, fine, or penalty may give the agency authority to use collections without additional congressional action. We refer to the legal authorities that provide agencies with permanent authority to both collect and obligate funds from sources such as fees, fines, and penalties as "permanent funding authorities." Agencies with these permanent funding authorities have varying degrees of autonomy, depending in part on the extent to which the statute limits when, how much, and for what purpose funds may be obligated. Some examples include the following:

National Park Service (NPS) Fees: NPS fees include recreation fees--primarily entrance and amenity fees--and commercial service fees paid by private companies that provide services, such as operating lodges and retail stores in park units. In December 2015, we reported that in fiscal year 2014 the NPS collected about $186 million in recreation fees and about $95 million in commercial service fees.

U.S. Department of Agriculture Animal and Plant Health Inspection Service (APHIS) Agricultural Quarantine Inspection (AQI) Fees: The AQI program provides for inspections of imported agricultural goods, products, passenger baggage, and vehicles to prevent the introduction of harmful agricultural pests and diseases. APHIS is authorized to set and collect user fees sufficient to cover the cost of providing and administering AQI services in connection with the arrival of commercial vessels, trucks, railcars, and aircraft, and international passengers. AQI fee collections are available without fiscal year limitation and may be used for any AQI-related purpose without further appropriation. In March 2013, we reported that in fiscal year 2012, AQI fee collections totaled about $548 million.

Environmental Protection Agency (EPA) Superfund Settlements: Under the Superfund program, EPA has the authority to clean up hazardous waste sites and then seek reimbursement from potentially responsible parties.
EPA is authorized to retain and use funds received from certain types of settlements with these parties in interest-earning, site-specific special accounts within the Hazardous Substance Superfund Trust Fund. EPA generally uses these funds for future cleanup actions at the sites associated with a specific settlement or to reimburse appropriated funds that EPA had previously used for response activities at these sites. In January 2012, we reported that as of October 2010 EPA held nearly $1.8 billion in unobligated funds in 947 open special accounts for 769 Superfund sites.

Tennessee Valley Authority (TVA) Collections: The TVA, the nation's largest public power provider, has authority to use payments it receives from selling power to the public without further appropriation. In October 2011, we reported that TVA had annual revenues of about $11 billion.

Presidio Trust Collections: The Presidio Trust, a congressionally chartered organization, manages the Presidio, an urban park in San Francisco, and sustains its operations in part through rental income from residential and commercial buildings on its grounds.

Agencies can also be authorized to retain intragovernmental fees charged to other agencies in exchange for a good or service. Some agencies are fully supported by intragovernmental fees; for others, intragovernmental fees are one of their sources of funds. For example:

Federal Protective Service (FPS) Fees: The FPS is a fully fee-funded organization authorized to charge customer agencies fees for security services at federal facilities and to use those offsetting collections for all agency operations. In July 2016, we reported that, at the end of fiscal year 2014, FPS had an unobligated balance of approximately $193 million and that FPS had not established targets to determine the extent to which that balance was appropriate to fund its operations.
Federal Aviation Administration (FAA) Franchise Fund Customer Fees: FAA's Administrative Services Franchise Fund provides goods and services--including training and specialized aircraft maintenance--to customer agencies on a fee-for-service basis.

National Park Service (NPS) Fees: NPS collections include intragovernmental fees, as well as user fees and appropriations. For example, in October 2016, we reported that NPS received funding from the Department of the Army to contract with the National Symphony Orchestra for holiday concerts on the U.S. Capitol Grounds.

Even when an agency has permanent authority to use collections, the funds remain subject to congressional oversight at any point in time, and Congress can place limitations on obligations for any given year. For example:

U.S. Citizenship and Immigration Services (USCIS) Fees: USCIS is authorized to charge fees for adjudication and naturalization services, including a premium-processing fee for employment-based petitioners and applicants. The House Report to the fiscal year 2008 Department of Homeland Security Appropriations Bill, H.R. 2638, directed USCIS to allocate all premium-processing fee collections to information technology and business-systems transformation. In January 2009, we reported that, consistent with this directive, USCIS's 2007 fee review stated that the agency intended to use all premium-processing collections to fund infrastructure improvements to transform USCIS's paper-based data systems into a modern, digital processing resource. In July 2016, we reported that USCIS estimated that the unobligated carryover balance for the premium-processing fee could grow to $1.1 billion by fiscal year 2020, as fee collections are expected to exceed Transformation initiative funding requirements in fiscal years 2015 through 2020.
Department of Justice (DOJ) Crime Victims Fund (CVF) Fines and Penalties: Criminal fines and penalties collected from offenders, among other sources, are deposited in the CVF and can be used without further appropriation to fund victims' assistance programs and directly compensate crime victims. In February 2015, we reported that in fiscal years 2009 through 2013, annual appropriations acts limited the CVF amounts that DOJ's Office of Justice Programs may obligate for these purposes.

In some cases, Congress has provided agencies with permanent authority to use a portion of collections and designated other portions of the collections for another use or to be deposited to the Treasury as miscellaneous receipts. For example:

Bureau of Land Management (BLM) Grazing Fees: Since the early 1900s, the federal government has required ranchers to pay a fee for grazing their livestock on millions of acres of federal land located primarily in western states. The relevant authorities designate a portion of the grazing fees collected by the BLM for range improvement, a portion to states, and a portion to be deposited to the Treasury as miscellaneous receipts. For example, in September 2005, we reported that in fiscal year 2004 the BLM collected about $11.8 million in grazing fees, half of which was deposited to a special fund receipt account in the Treasury for range rehabilitation, protection, and improvements. Of the other half of the collections, about $2.2 million was distributed to states and counties and about $3.7 million was deposited to the Treasury as miscellaneous receipts.

Department of Housing and Urban Development (HUD) Mutual Mortgage Insurance Fund Settlement: HUD's Mutual Mortgage Insurance Fund receives payments resulting from violations related to single-family programs. The primary purpose of the Mutual Mortgage Insurance Fund is to pay lenders in cases where borrowers default on their loan and the lender makes a claim for mortgage insurance benefits.
In November 2016, we reported on a case involving False Claims Act violations and loans backed by HUD's Federal Housing Administration (FHA) in which a portion of the settlement was paid to the company that filed a complaint under the False Claims Act on behalf of the government. The other FHA-related settlement proceeds were divided among, and deposited to, the Mutual Mortgage Insurance Fund, the Treasury as miscellaneous receipts, and DOJ's Three Percent Fund.

DOJ Drug Enforcement Administration (DEA) Diversion Control Fees: The first $15 million of fees collected each year from DEA registrants such as manufacturers, distributors, dispensers, importers, and exporters of controlled substances (such as narcotics and stimulants) and certain listed chemicals (such as ephedrine) is deposited to the Treasury as miscellaneous receipts. As we reported in February 2015, fees collected beyond $15 million are available to the agency and obligated to recover the full costs of DEA's diversion control program.

DOJ Three Percent Fund Penalties: Most civil penalties resulting from DOJ litigation are eligible to be assessed a fee of up to 3 percent, which is disbursed to DOJ's Three Percent Fund--used primarily to offset DOJ expenses related to civil debt collection. The remainder of the civil penalty amount collected may be deposited to the Treasury as miscellaneous receipts or to another account. For example, in February 2015, we reported on a civil settlement involving fraud against the U.S. Postal Service. Of the $13 million that was awarded to the U.S. Postal Service, DOJ deposited $390,000 into the Three Percent Fund.

Chairmen Meadows and Jordan, Ranking Members Connolly and Cartwright, and Members of the Subcommittees, this concludes our prepared statement. We would be pleased to respond to any questions you may have at this time.
If you or your staff members have any questions about this testimony, please contact Heather Krause, Acting Director, Strategic Issues, at (202) 512-6806 or [email protected], or Edda Emmanuelli Perez, Managing Associate General Counsel, Office of General Counsel, at (202) 512-2853 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are Susan J. Irving, Director; Julia Matta, Assistant General Counsel for Appropriations Law; Susan E. Murphy, Assistant Director; Laurel Plume, Analyst-in-Charge; and Amanda Postiglione, Senior Attorney. Allison Abrams, Dawn Bidne, Elizabeth Erdmann, Chris Falcone, Valerie Kasindi, and Jeremy Manion also contributed.

Principles of Federal Appropriations Law, Chapter 2, The Legal Framework, Fourth Edition, 2016 Revision. GAO-16-464SP. Washington, D.C.: March 10, 2016.
Federal User Fees: Key Considerations for Designing and Implementing Regulatory Fees. GAO-15-718. Washington, D.C.: September 16, 2015.
Federal User Fees: Fee Design Options and Implications for Managing Revenue Instability. GAO-13-820. Washington, D.C.: September 30, 2013.
Congressionally Chartered Organizations: Key Principles for Leveraging Nonfederal Resources. GAO-13-549. Washington, D.C.: June 7, 2013.
Federal User Fees: A Design Guide. GAO-08-386SP. Washington, D.C.: May 29, 2008.
Principles of Federal Appropriations Law, Third Edition, Volume II. GAO-06-382SP. Washington, D.C.: February 1, 2006.
A Glossary of Terms Used in the Federal Budget Process. GAO-05-734SP. Washington, D.C.: September 1, 2005.
Federal Trust and Other Earmarked Funds: Answers to Frequently Asked Questions. GAO-01-199SP. Washington, D.C.: January 1, 2001.
Budget Issues: Inventory of Accounts With Spending Authority and Permanent Appropriations, 1997. OGC-98-23. Washington, D.C.: January 19, 1998.
Budget Issues: Inventory of Accounts With Spending Authority and Permanent Appropriations, 1996. AIMD-96-79. Washington, D.C.: May 31, 1996.
Financial Institutions: Penalty and Settlement Payments for Mortgage-Related Violations in Selected Cases. GAO-17-11R. Washington, D.C.: November 10, 2016.
U.S. Capitol Grounds Concerts: Improvements Needed in Management Approval Controls over Certain Payments. GAO-17-44. Washington, D.C.: October 25, 2016.
DHS Management: Enhanced Oversight Could Better Ensure Programs Receiving Fees and Other Collections Use Funds Efficiently. GAO-16-443. Washington, D.C.: July 21, 2016.
Revolving Funds: Additional Pricing and Performance Information for FAA and Treasury Funds Could Enhance Agency Decisions on Shared Services. GAO-16-477. Washington, D.C.: May 10, 2016.
Financial Institutions: Fines, Penalties, and Forfeitures for Violations of Financial Crimes and Sanctions Requirements. GAO-16-297. Washington, D.C.: March 22, 2016.
National Park Service: Revenues from Fees and Donations Increased, but Some Enhancements Are Needed to Continue This Trend. GAO-16-166. Washington, D.C.: December 15, 2015.
Department of Justice: Alternative Sources of Funding Are a Key Source of Budgetary Resources and Could Be Better Managed. GAO-15-48. Washington, D.C.: February 19, 2015.
Agricultural Quarantine Inspection Fees: Major Changes Needed to Align Fee Revenues with Program Costs. GAO-13-268. Washington, D.C.: March 1, 2013.
Patent and Trademark Office: New User Fee Design Presents Opportunities to Build on Transparency and Communication Success. GAO-12-514R. Washington, D.C.: April 25, 2012.
Prescription Drugs: FDA Has Met Most Performance Goals for Reviewing Applications. GAO-12-500. Washington, D.C.: March 30, 2012.
Superfund: Status of EPA's Efforts to Improve Its Management and Oversight of Special Accounts. GAO-12-109. Washington, D.C.: January 18, 2012.
Tennessee Valley Authority: Full Consideration of Energy Efficiency and Better Capital Expenditures Planning Are Needed. GAO-12-107. Washington, D.C.: October 31, 2011.
Budget Issues: Better Fee Design Would Improve Federal Protective Service's and Federal Agencies' Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011.
Federal User Fees: Additional Analyses and Timely Reviews Could Improve Immigration and Naturalization User Fee Design and USCIS Operations. GAO-09-180. Washington, D.C.: January 23, 2009.
Federal User Fees: Substantive Reviews Needed to Align Port-Related Fees with the Programs They Support. GAO-08-321. Washington, D.C.: February 22, 2008.
Livestock Grazing: Federal Expenditures and Receipts Vary, Depending on the Agency and the Purpose of the Fee Charged. GAO-05-869. Washington, D.C.: September 30, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress exercises its constitutional power of the purse by appropriating funds and prescribing conditions governing their use. Through annual appropriations and other laws that constitute permanent appropriations, Congress provides agencies with authority to incur obligations for specified purposes. The federal government receives funds from a variety of sources, including tax revenues, fees, fines, penalties, and settlements. Collections from fees, fines, penalties, and settlements involve billions of dollars and fund a wide variety of programs. The design and structure--and corresponding agency flexibility and congressional control--of these statutory authorities can vary widely. In many cases, Congress has provided agencies with permanent authority to collect and obligate funds from fees, fines, and penalties without further congressional action. This authority is a form of appropriations and is subject to the fiscal laws governing appropriated funds. In addition, annual appropriation acts may limit the availability of those funds for obligation. Given the nation's fiscal condition, it is critical that every funding source and spending decision be carefully considered and applied to its best use. This testimony provides an overview of key design decisions related to the use of federal collections outlined in prior GAO reports, with examples of specific fees, fines, and penalties from GAO reports issued between September 2005 and November 2016. GAO's prior work has identified four key design decisions related to how fee, fine, and penalty collections are used that help Congress balance agency flexibility and congressional control. One of these key design decisions is the congressional action that triggers the use of collections. The table below outlines the range of structures that establish an agency's use of collections and examples of fees, fines, and penalties for each structure. 
Source: GAO analysis of applicable laws | GAO-17-268T

As GAO has previously reported, these designs involve different tradeoffs and implications. For example, requiring collections to be annually appropriated before an agency can use the collections increases opportunities for congressional oversight on a regular basis. Conversely, if Congress grants an agency authority to use collections without further congressional action, the agency may be able to respond more quickly to customers or changing conditions. Even when an agency has the permanent authority to use collections, the funds remain subject to congressional oversight at any point in time and Congress can place limitations on obligations for any given year.
Emerging infectious diseases pose a growing health threat to people everywhere. Some emerging infections result from deforestation, increased development, and other environmental changes that bring people into contact with animals or insects that harbor diseases only rarely encountered before. However, others are familiar diseases that have developed resistance to the antibiotics that brought them under control just a generation ago. Infectious diseases account for considerable health care costs and lost productivity. In this country, about one-fourth of all doctor visits involve infectious diseases. The number of pathogens resistant to one or more previously effective antibiotics is increasing rapidly, reducing treatment options and adding to health care costs. Surveillance is public health officials' most important tool for detecting and monitoring both existing and emerging infections. Without adequate surveillance, local, state, and federal officials cannot know the true scope of existing health problems and may not recognize new diseases until many people have been affected. Health officials also use surveillance data to allocate their staff and dollar resources and to monitor and evaluate the effectiveness of prevention and control programs. The states have principal responsibility for protecting the public's health and, therefore, take the lead role in surveillance efforts. Each state decides for itself which diseases physicians, hospitals, and others should report to its health department and which information it will then pass on to CDC. Most state surveillance programs include infections from the list of "nationally notifiable" diseases, which the Council of State and Territorial Epidemiologists (CSTE), in consultation with CDC, reviews annually. Nationally notifiable diseases are ones that are important enough for the nation as a whole to routinely report to CDC. 
However, states are under no obligation to include nationally notifiable diseases in their own surveillance programs, and state reporting to CDC is voluntary. The methods for detecting emerging infections are the same as those used to monitor infectious diseases generally. These methods can be characterized as passive or active. Passive surveillance relies on laboratory and hospital staff, physicians, and other relevant sources to take the initiative to provide data to the health department, where officials analyze and interpret the information as it comes in. Under active surveillance, public health officials contact people directly to gather data. For example, health department staff could call clinical laboratories each week to ask if any samples of S. pneumoniae tested positive for resistance to penicillin. Active surveillance produces more complete information than passive surveillance, but it takes more time and costs more. Infectious diseases surveillance in the United States depends largely on passive methods of collecting disease reports and laboratory test results. Consequently, the surveillance network relies on the participation of health care providers, private laboratories, and state and local health departments across the nation. Even when states require reporting of specific diseases, experts acknowledge that the completeness of reporting varies by disease and type of provider. Surveillance usually begins when a person with a reportable disease seeks care and the physician--in an effort to determine the cause of the illness--runs a laboratory test, which could be performed in the physician's office, a hospital, an independent clinical laboratory, or a public health laboratory. 
Reports of infectious diseases generated by such tests are often sent first to local health departments, where staff check the reports for completeness, contact health care professionals to obtain missing information or clarify unclear responses, and forward the reports to state health agencies. At the state level, state epidemiologists analyze data collected through the disease reporting network, decide when and how to supplement passive reporting with active surveillance methods, conduct outbreak and other disease investigations, and design and evaluate disease prevention and control efforts. They also transmit state data to CDC, providing routine reporting on selected diseases. Many state epidemiologists and laboratory directors provide the medical community with information obtained through surveillance, such as rates of disease incidence or prevailing patterns of antimicrobial resistance. Federal participation in the infectious diseases surveillance network focuses on CDC activities--particularly those of the National Center for Infectious Diseases (NCID), which operates CDC's infectious diseases laboratories. CDC analyzes the data furnished by states to (1) monitor national health trends, (2) formulate and implement prevention strategies, and (3) evaluate state and federal disease prevention efforts. CDC routinely provides public health officials, medical personnel, and others information on disease trends and analyses of outbreaks. CDC also offers an array of scientific and financial support for state infectious diseases surveillance, prevention, and control programs. Public health and private laboratories are a vital part of the surveillance network because only laboratory test results can definitively identify pathogens. In addition, test results are often an essential complement to a physician's clinical impressions. 
According to public health officials, the nation's 158,000 laboratories are consistent sources of passively reported information for infectious diseases surveillance. Every state has at least one state public health laboratory that conducts testing for routine surveillance or as part of special clinical or epidemiologic studies. State public health laboratories also provide specialized testing for low-incidence, high-risk diseases, such as tuberculosis and botulism. Testing they provide during an outbreak contributes greatly to tracing the spread of the outbreak, identifying the source, and developing appropriate control measures. Epidemiologists rely on state public health laboratories to document trends and identify events that may indicate an emerging problem. Many state laboratories also provide licensing and quality assurance oversight of commercial laboratories. State public health laboratories are increasingly using advanced technology to identify pathogens at the molecular level. These tests provide information that can enable epidemiologists to tell whether individual cases of illness are caused by the same strain of pathogen--information that is not available from clinical records or other epidemiologic methods. Public health officials have used advanced molecular technology to trace the movement of diseases in ways that would not have been possible 5 years ago. For example, DNA fingerprints developed by laboratories in a CDC-sponsored network showed that drug-resistant strains of tuberculosis first found in New York City have spread to other parts of the country. The fingerprints also showed that tuberculosis can be transmitted during brief contact among people--an important discovery that improved treatment and control programs. CDC laboratories provide highly specialized tests not always available in state public health or commercial laboratories and assist states with testing during outbreaks. 
Specifically, CDC laboratories help diagnose life-threatening, unusual, or exotic infectious diseases; confirm public or private laboratory test results that are difficult to interpret; and conduct research to improve diagnostic methods. While state surveillance and laboratory testing programs are extensive, not all include every significant emerging infection, leaving gaps in the nation's surveillance network. Our surveys found that almost all states conducted surveillance of tuberculosis, pertussis, hepatitis C, and virulent strains of E. coli; slightly fewer collected information on cryptosporidiosis. About two-thirds collected information on penicillin-resistant S. pneumoniae. Similarly, state public health laboratories commonly performed tests to support state surveillance of tuberculosis, pertussis, cryptosporidiosis, and virulent strains of E. coli. However, over half of the laboratories did not test for hepatitis C, and about two-thirds did not test for penicillin-resistant S. pneumoniae. Over three-quarters of the responding epidemiologists told us that their surveillance programs either leave out or do not focus sufficient attention on important infectious diseases. Antibiotic-resistant diseases, including penicillin-resistant S. pneumoniae, and hepatitis C were among the diseases they cited most often as deserving greater attention. Moreover, our surveys found that about half of the state laboratories used a molecular technology called pulsed-field gel electrophoresis (PFGE) to support state surveillance of the diseases we asked about. State and CDC officials believe that most, and possibly all, states should have PFGE because it can be used to study many diseases and greatly improves the ability to detect outbreaks. As part of our surveys and field interviews, we asked state officials to identify the problems they considered most important in conducting surveillance of emerging infectious diseases.
The problems they cited fell principally into two categories: staffing and information sharing. State epidemiologists and laboratory directors told us that staffing constraints prevent them from undertaking surveillance and testing for diseases they consider important. Furthermore, laboratory officials noted that advances in scientific knowledge and the proliferation of molecular testing methods have created a need for training to update the skills of current staff. They reported that such training was often either unavailable or inaccessible because of funding or administrative constraints. We found considerable variability among states in laboratory and epidemiology staffing. During fiscal year 1997, states devoted a median of 8 staff years per 1 million population to laboratory testing of infectious diseases, with individual states reporting from 1.3 to 89 staff per 1 million population. The variation in epidemiology staffing was even greater, ranging from 2.1 to 321 staff per 1 million population in individual states, with a median of 14 staff years per 1 million population. Epidemiologists and laboratory officials alike said that public health departments often lack either basic equipment, such as computers and fax machines, or integrated data systems that would allow them to rapidly share surveillance-related information with public and private partners. For health crises that need an immediate response--as when a serious and highly contagious disease appears in a school or among restaurant staff--rapid sharing of surveillance information is critical. Officials most often attributed the lack of computer equipment and integrated data systems to insufficient funding. Without such equipment, some tasks that could be automated must be done by hand. In some cases, the lack of equipment has required data in electronic form to be converted back to paper form.
For example, representatives from two large, multistate private clinical laboratories told us that data stored electronically in their information systems had to be converted to paper so they could be reported to local health departments. Our survey responses indicate that state laboratory directors use electronic communications systems much less often than do state epidemiologists. Although most laboratory directors use electronic systems to communicate within their laboratories, they often do not use them to communicate with others. For example, almost 40 percent reported rarely using computerized systems to receive surveillance-related data, and 21 percent used them very little to transmit such data. Even with adequate computer equipment, the difficulty of creating integrated information systems can be formidable. Not only does technology change rapidly, but computerized public health data are stored in thousands of isolated locations, including the record and information systems of public health agencies and health care institutions, individual case files, and data files of surveys and surveillance systems. These independent systems have differing hardware and software structures and considerable variation in how the data are coded, particularly for laboratory test results. CDC alone operates over 100 data systems to monitor over 200 health events, such as diagnoses of specific infectious diseases. Many of these systems collect data from state surveillance programs. CDC's patchwork of data systems arose, in part, to meet federal and state needs for more detailed information for particular diseases than was usually reported. Public health officials told us that the multitude of databases and data systems, software, and reporting mechanisms burdens staff at state and local health agencies and leads to duplication of effort when staff must enter the same data into multiple systems that do not communicate with one another.
Further, the lack of integrated data management systems can hinder laboratory and epidemiologic efforts to control outbreaks. For example, in 1993, the lack of integrated systems impeded efforts to control the hantavirus outbreak in the Southwest. Data were locked into separate databases that could not be analyzed or merged with others, causing public health investigators to analyze paper printouts by hand. Although many state officials are concerned about their staffing and technology resources, public health officials have not developed a consensus definition of the minimum capabilities that state and local health departments need to conduct infectious diseases surveillance. For example, according to CDC and state health officials, there are no standards for the types of tests state public health laboratories should be able to perform; nor are there widely accepted standards for the epidemiological capabilities state public health departments need. Public health officials have identified a number of elements that might be included in a consensus definition, such as the number and qualifications of laboratory and epidemiology staff; the pathogens that each state laboratory should be able to identify and, where relevant, test for antibiotic resistance; and laboratory and information-sharing technology each state should have. CSTE, the Association of Public Health Laboratories, and CDC have begun collaborating to define the staff and equipment components of a national surveillance system for infectious diseases and other conditions. They plan to develop agreements about the laboratory and epidemiology resources needed to conduct surveillance, diseases that should be under surveillance, and the information systems needed to share surveillance data. According to state and federal officials, this consensus would give state and local health agencies the basis for setting priorities for their surveillance efforts and determining the resources needed to implement them. 
CDC provides state and local health departments with a wide range of technical, financial, and staff resources. Many state laboratory directors and epidemiologists said such assistance has been essential to their ability to conduct infectious diseases surveillance and to take advantage of new laboratory technology; however, a small number of laboratory directors and epidemiologists believe CDC's assistance has not significantly increased their ability to conduct surveillance of emerging infections. Yet many state officials indicated that improvements are needed, particularly in the area of information-sharing systems. Many state laboratory directors and epidemiologists told us that CDC's testing, consultation, and training services are critical to their surveillance efforts. More than half of those responding to our surveys indicated that these three services greatly or significantly improved their state's ability to conduct surveillance. State officials indicated that CDC's testing for rare pathogens and the ability to consult with experienced CDC staff are important, particularly for investigating cases of unusual diseases, and that CDC's training was even more significant for improving their ability to conduct surveillance of emerging infections. Over 70 percent of epidemiologists responding to our survey said that when they need assistance, knowledgeable staff at CDC are easy to locate, but many noted that help with matters involving more than one CDC unit is difficult to obtain. Many state officials said that this problem arose when staff in different units did not communicate well with one another. One official described CDC's units as separate towers that do not interact. State officials and survey respondents also said they would like CDC to provide more timely test results in non-urgent situations and additional training in new laboratory techniques. 
Most survey respondents said that NCID's disease-specific grants and epidemiology and laboratory capacity grants had made great or significant improvements in their ability to conduct surveillance of emerging infectious diseases. For example, after state laboratories began receiving funds from CDC's tuberculosis grant program--which go to programs in all states and selected localities--they markedly improved their ability to rapidly identify the disease and indicate which, if any, antibiotics could be used effectively in treatment. State laboratory officials attributed this improvement to the funding and training they received from CDC. In contrast, only eight states receive CDC funding for active surveillance and testing for penicillin-resistant S. pneumoniae. Whereas almost all states and most state laboratories reported that they monitor antibiotic resistance in tuberculosis, far fewer reported monitoring penicillin-resistant S. pneumoniae. Moreover, while all but one state require health care providers to submit tuberculosis reports, fewer than half require reporting of penicillin-resistant S. pneumoniae. Over the past two decades, CDC has developed and made available to states several general and disease-specific information management and reporting programs. State and federal officials we spoke with said CDC's systems have limited flexibility for adapting to state program needs--one reason states have developed their own information management systems. Officials told us that two systems used by most laboratory directors and epidemiologists often cannot share data with each other or with other CDC- or state-developed systems. CDC officials responsible for these programs said that the most recent versions can share data more readily with other systems, but the lack of training in how to use the programs and high staff turnover at state agencies may limit the number of state staff able to use the full range of program capabilities. 
Many state officials complained about a substantial drain on scarce staff time to enter and reconcile data into multiple systems, such as their own system plus one or more CDC-developed systems. The inability to share data between systems also hinders identifying multiple records on one case and undermines efforts to improve reporting by providers. In response to state and local requests for greater integration of systems, CDC established a board to formulate and enact policy for integrating public health information and surveillance systems. The board brings together federal and state public health officials to focus on issues such as data standards and security, assessing hardware and software used by states, and identifying gaps in CDC databases. CDC and the states have made progress in developing more efficient information-sharing systems through one of CDC's grant programs: the Information Network for Public Health Officials (INPHO). INPHO is designed to foster communication between public and private partners, make information more accessible, and allow for rapid and secure exchange of data. By 1997, 14 states had begun INPHO projects. Some had combined these funds with other CDC grant moneys to build statewide networks linking state and local health departments and, in some cases, private laboratories. Integrated systems can dramatically improve communication. For example, in Washington, electronic information sharing systems reduced passive reporting time from 35 days to 1 day and gave local authorities access to health data for analysis. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other members of the Subcommittee may have. 
Pursuant to a congressional request, GAO discussed public health surveillance of emerging infectious diseases, focusing on the role of state laboratories. GAO noted that: (1) surveillance of and testing for important emerging infectious diseases are not comprehensive in all states; (2) GAO found that most states conduct surveillance of five of the six emerging infections GAO asked about, and state public health laboratories conduct tests to support state surveillance of four of the six; (3) however, over half of state laboratories do not conduct tests for surveillance of penicillin-resistant S. pneumoniae and hepatitis C; (4) also, most state epidemiologists believe their surveillance programs do not sufficiently study antibiotic-resistant and other diseases they consider important; (5) many state laboratory directors and epidemiologists reported that inadequate staffing and information-sharing problems hinder their ability to generate and use laboratory data in their surveillance; (6) however, public health officials have not agreed on a consensus definition of the minimum capabilities that state and local health departments need to conduct infectious diseases surveillance; (7) this lack of consensus makes it difficult for policymakers to assess the adequacy of existing resources or to evaluate where investments are needed most; (8) most state officials said the Centers for Disease Control and Prevention's (CDC) testing and consulting services, training, and grant funding support are critical to their efforts to detect and respond to emerging infections; (9) however, both laboratory directors and epidemiologists were frustrated by the lack of integrated systems within CDC and the lack of integrated systems linking them with other public and private surveillance partners; and (10) CDC's continued commitment to integrating its own data systems and to helping states and localities build integrated electronic data and communication systems could give state and local 
public health agencies vital assistance in carrying out their infectious diseases surveillance and reporting responsibilities.
Strategic plans developed by regional organizations can be effective tools to focus resources and efforts to address problems. Effective plans often contain such features as goals and objectives that are measurable and quantifiable. These goals and objectives allow problems and planned steps to be defined specifically and progress to be measured. By specifying goals and objectives, plans can also give planners and decision makers a structure for allocating funding to those goals and objectives. A well-defined, comprehensive strategic plan for the NCR is essential for assuring that the region is prepared for the risks it faces. The Homeland Security Act established the Office of National Capital Region Coordination within the Department of Homeland Security. The ONCRC is responsible for overseeing and coordinating federal programs for and relationships with state, local, and regional authorities in the NCR and for assessing, and advocating for, the resources needed by state, local, and regional authorities in the NCR to implement efforts to secure the homeland. One of the ONCRC mandates is to coordinate with federal, state, local, and regional agencies and the private sector in the NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities. In our earlier work, we reported that ONCRC and the NCR faced three interrelated challenges in managing federal funds in a way that maximizes the increase in first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. 
These challenges included the lack of a set of accepted benchmarks (best practices) and performance goals that could be used to identify desired goals and determine whether first responders have the ability to respond to threats and emergencies with well-planned, well-coordinated, and effective efforts that involve police, fire, emergency medical, public health, and other personnel from multiple jurisdictions; a coordinated regionwide plan for establishing first responder performance goals, needs, and priorities, and assessing the benefits of expenditures in enhancing first responder capabilities; and a readily available, reliable source of data on the funds available to first responders in the NCR and their use. Without the standards, a regionwide plan, and data on spending, we observed it would be extremely difficult to determine whether NCR first responders were prepared to effectively respond to threats and emergencies. Regional coordination means the use of governmental resources in a complementary way toward goals and objectives that are mutually agreed upon by various stakeholders in a region. Regional coordination can also help to overcome the fragmented nature of federal programs and grants available to state and local entities. Successful coordination occurs not only vertically among federal, state, and local governments, but also horizontally within regions. The effective alignment of resources for the security of communities could require planning across jurisdictional boundaries. Neighboring jurisdictions may be affected by an emergency situation in many ways, including major traffic or environmental disruptions, activation and implementation of mutual aid agreements, acceptance of evacuated residents, and treating casualties in local hospitals. 
Although work has continued on a NCR strategic plan for the past 2 years, a completed plan is not yet available to guide decision making, such as assessing the NCR's strategic priorities and funding needs, or to aid NCR jurisdictions in ascertaining how the NCR strategic plan complements their individual or combined efforts. In May 2004, we recommended that the Secretary of DHS work with the NCR jurisdictions to develop a coordinated strategic plan to establish goals and priorities to enhance first responder capacities that can be used to guide the use of federal emergency preparedness funds, and the department agreed to implement this recommendation. A related recommendation--that DHS monitor the plan's implementation to ensure that funds are used in a way that promotes effective expenditures that are not unnecessarily duplicative--could not be implemented until the final strategic plan was in place. In July 2005, we testified that, according to a DHS ONCRC official, a final draft for review had been completed and circulated to key stakeholders. The plan was to feature measurable goals, objectives, and performance measures. ONCRC officials state that past references to a NCR strategic plan reflect availability of the core elements of the NCR strategic plan--the mission, vision, guiding principles, long-term goals, and objectives--but not a complete plan. They told us that these core elements, along with other information, will need to be compiled into a strategic planning document. ONCRC officials said that NCR leadership had elected to make the core elements available but to concentrate on preparing other planning and justification documents required for the fiscal year 2006 DHS grant process. NCR planning timelines indicate this decision was made in September 2005. Because a strategic plan was not available, ONCRC officials provided us with several documents that, they said, taken as a whole constitute the basic elements of NCR's strategic plan. 
These documents include a November 18, 2005, NCR Plenary Session PowerPoint presentation containing information on NCR strategic goals, objectives, and initiatives; a February 1, 2006, National Capital Region Target Capabilities and NCR Projects Work Book; the March 2, 2006, District of Columbia and National Capital Region Fiscal Year 2006 Homeland Security Grant Application Program and Capability Enhancement Plan; the March 2, 2006, National Capital Region Initiatives; and the Fiscal Year 2006 NCR Homeland Security Grant Program Funding Request Investment Justification, submitted to DHS in March 2006. According to ONCRC officials, a complete strategic plan is awaiting integration of additional information that in some cases is not yet complete. These include an Emergency Management Accreditation Program (EMAP) assessment of all local jurisdictions in the NCR and regional-level activities, which, according to the ONCRC, is completed but will not be available until sometime in April; the peer review of the status of state and urban area emergency operations plans after Hurricane Katrina, whose completion is anticipated in April 2006; and the fiscal year 2006 homeland security program grant enhancement plan for funding, which was completed in early March 2006. ONCRC officials estimate that after April 2006, it will take approximately 90 more days to integrate these documents and the core framework of the strategic plan, plus approximately 60 days for final review and coordination by the NCR leadership. Thus, an initial strategic plan will not be available until at least September or October 2006. NCR strategic planning should reflect both national and regional priorities and needs. ONCRC officials have said that the November 18, 2005, NCR plenary session PowerPoint presentation represents the vision, mission, and core goals and objectives of the NCR's strategic plan. 
If the NCR's homeland security grant program funding documents prepared for DHS are used extensively in NCR strategic planning, a NCR strategic plan might primarily reflect DHS priorities and grant funding--national priorities-- and not regionally developed strategic goals and priorities. NCR's current goals and objectives are shown in table 1. The other four documents that ONCRC represents as constituting the NCR strategic plan were developed in response to federal requirements under the National Preparedness Goal and to support the NCR's federal funding application. Required by Homeland Security Presidential Directive 8, the National Preparedness Goal is a national domestic all-hazards preparedness goal intended to establish measurable readiness priorities and targets. The fiscal year 2006 Homeland Security Grant Program (HSGP) integrates the State Homeland Security Program, the Urban Areas Security Initiative, the Law Enforcement Terrorism Prevention Program, the Metropolitan Medical Response System, and the Citizen Corps Program. For the first time, starting with the fiscal year 2006 HSGP, DHS is using the National Preparedness Goal to shape national priorities and focus expenditures for the HSGP. According to DHS, the combined fiscal year 2006 HSGP Program Guidance and Application Kit streamlines efforts for states and urban areas in obtaining resources that are critical to building and sustaining capabilities to achieve the National Preparedness Goal and implement state and urban area homeland security strategies. All states and urban areas were required to align existing preparedness strategies within the National Preparedness Goal's eight national priorities. States and urban areas were required to assess their preparedness needs by reviewing their existing programs and capabilities and use those findings to develop a plan and formal investment justification outlining major statewide, substate, or interstate initiatives for which they will seek funding. 
According to DHS, these initiatives are to focus efforts on how to build and sustain programs and capabilities within and across state boundaries while aligning with the National Preparedness Goal and national priorities. It is, of course, important and necessary that the ONCRC, and other regional and local jurisdictions, incorporate the DHS's National Preparedness Goal and related target capabilities into their strategic planning. The target capabilities are intended to serve as a benchmark against which states, regions, and localities can measure their own capabilities. However, these national requirements are but one part of developing regional preparedness, response, and recovery assessments and funding priorities specific to the NCR. The NCR's strategic plan should provide the framework for guiding the integration of DHS requirements into the NCR's overall efforts. While the NCR strategic plan is not complete, our preliminary review of the NCR initiatives developed to implement NCR's strategic goals and objectives presented in ONCRC documents indicates they are not completely addressed in the DHS HSGP documents. Using the November 18, 2005, PowerPoint presentation as our primary framework, we identified whether the NCR's 39 individual regional initiatives were specifically supported in whole or in part by programs or investments in the fiscal year 2006 HSGP documents (enhancement plan and investment justification) prepared for DHS. Our preliminary analysis indicates that regional initiatives defined under NCR strategic goals and objectives have some coverage--individual programs or projects--in the NCR documents prepared for DHS HSGP funding, but not complete coverage. We found that of the NCR's 16 priority initiatives, 10 were partially addressed in the enhancement plan and 12 were partially addressed in the investment justification. 
Of the other 23 NCR initiatives, 8 were partially addressed in the enhancement plan and 12 were partially addressed in the investment justification. Implementation of regional initiatives not covered by HSGP funding likely would require NCR jurisdictions acting individually or in combination with others. Our preliminary work did not include an assessment of individual jurisdictional efforts to implement the NCR initiatives to determine if uncovered initiatives, particularly those considered priority initiatives, might be addressed by one or more of the NCR jurisdictions. Further work would be required to determine to what extent, if any, the NCR initiatives are addressed in other federal funding applications or individual NCR jurisdictional homeland security initiatives. As I stated earlier, ONCRC officials told us a complete NCR strategic plan is awaiting information from the EMAP assessment, DHS's peer review of the status of emergency operations plans in the aftermath of Hurricane Katrina, and the fiscal year 2006 homeland security grant program enhancement plan for funding. This information may further emphasize federal priorities in the regional planning process. However, information from these sources should complement the region's own assessment of preparedness gaps and the development of strategic goals, objectives, and initiatives. Officials from the District of Columbia, Virginia, and Maryland emphasized this point when they testified before this committee in July 2005. At that time, they said that the regional strategic plan would be a comprehensive document that defined priorities and objectives for the entire region without regard to any specific jurisdiction, discipline, or funding mechanisms. In our view, a NCR plan should complement the plans of the various jurisdictions within NCR. 
In the aftermath of the September 11, 2001, terrorist attacks and the creation of the ONCRC, we would have expected the vast majority of this assessment work to have been completed. The NCR is considered a prime target for terrorist events, and other major events requiring a regional response can be anticipated, such as large, dangerous chemical spills. A complete NCR strategic plan based on the November 18 PowerPoint presentation could be strengthened in several ways. In earlier work we have identified characteristics that we consider to be desirable for a national strategy that may be useful for a regional approach to homeland security strategic planning. The desirable characteristics, adjusted for a regional strategy, are purpose, scope, and methodology that address why the strategy was produced, the scope of its coverage, and the process by which it was developed; problem definition and risk assessment that address the particular regional problems and threats the strategy is directed towards; goals, subordinate objectives, activities, and performance measures that address what the strategy is trying to achieve, steps to achieve those results, as well as the priorities, milestones, and performance measures to gauge results; resources, investments, and risk management that address what the strategy will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted by balancing risk reductions and costs; organizational roles, responsibilities, and coordination that address who will be implementing the strategy, what their roles will be compared to those of others, and mechanisms for them to coordinate their efforts; and integration and implementation that address how a regional strategy relates to other strategies' goals, objectives, and activities, and to state and local governments within their region and their plans to implement the strategy. 
According to the ONCRC, the November 18 PowerPoint presentation contains the core elements of the NCR's strategic plan--the mission, vision, guiding principles, long-term goals, and objectives. Our preliminary review of the presentation indicates it reflects many of the characteristics we have defined as desirable for a strategy. The presentation includes some material on the purpose, scope, and methodology underlying the presentation; what it covers; and how it was developed. For example, the presentation contains a detailed timeline of key activities in the execution of the strategic plan and how initiatives were prioritized. Particular regional problems and performance gaps are described, including a section on regionwide weaknesses and gaps such as the lack of a regionwide risk assessment framework and inadequate response and recovery for special needs populations. These gaps are cross-referenced to priority initiatives. Specific goals, objectives, and initiatives are in the presentation, cross-referenced to the regional gaps. Some initiative descriptions indicate whether a cost is high, medium, or low, with more detailed cost information summarized elsewhere. Our preliminary review indicates that as the ONCRC fleshes out the November 18 PowerPoint presentation into an initial, complete strategic plan, improvements might be made in (1) initiatives that will accomplish objectives under the strategic goals, (2) performance measures and targets that indicate how the initiatives will accomplish identified strategic goals, (3) milestones or time frames for initiative accomplishment, (4) information on the resources and investment for each initiative, and (5) organizational roles, responsibilities, and coordination, and integration and implementation plans. A discussion of how these elements could be strengthened follows. A NCR strategic plan could more fully develop initiatives to accomplish objectives under the strategic goals. 
For example, the presentation contains several objectives that have only one initiative. A single initiative may not ensure that objectives are accomplished, and it may merely be restating the objective itself. For example, there is only one initiative (regional strategic planning and decision making process enhancements) for Goal 1's first objective (enhancing and adapting the framework for strategic planning and decision making to achieve an optimal balance of capabilities across the NCR). The initiative in large part restates the objective. This initiative might be replaced by more specific initiatives or the objective restated and additional initiatives proposed. Other objectives in the November 18 PowerPoint presentation provide a more complete picture of initiatives intended to meet the objective. For any future plan, these initiatives should be reviewed to determine if the current initiatives will fully meet the results expected of the objectives. A NCR strategic plan could more fully measure initiative expectations by improving performance measures and targets. First, in some cases, the performance measures will not readily lend themselves to actual quantitative or qualitative measurement through a tabulation, a calculation, a recording of activity or effort, or an assessment of results that is compared to an intended purpose. Additional measures might be necessary. For example, Goal 1, Objective 1, Initiative 1 (regional strategic planning and decision-making process) includes measures such as (1) the decision-making system is well understood by all stakeholders based on changed behaviors and (2) time and resources required of stakeholders in the region to participate in the decision-making process is more efficient. These could be either refined for more direct measurement or additional measures posed, such as specifying behaviors for assessment or what parts of the process might be assessed for efficiency. 
Other measures in the document might serve as examples of more direct measurement, such as those that assess accomplishments using percentages in Goal 2, Objective 4, Initiative 1 (increasing civic involvement in all phases of disaster preparedness). Second, a strategic plan could be improved by (1) expanding the use of outcome measures and targets in the plan to reflect the results of its activities and (2) limiting the use of other types of measures. ONCRC officials said that the performance measures in the November 18 PowerPoint presentation had a greater emphasis on tracking outcomes, rather than inputs. They stated that as programs and projects are funded and implemented, a more thorough effort to develop associated measures for each will be undertaken. With regard to revising measures to reflect funded programs and projects, we would suggest NCR officials focus on measuring outcomes of programs and projects to meet strategic goals and objectives. Our preliminary analysis indicates that several measures are outcome-oriented, such as those for Goal 2, Objective 4, Initiative 1 (increase civic involvement in all phases of disaster preparedness) that has outcome measures such as the percentage of the population that has taken steps to develop personal preparedness and the percentage of the population familiar with workplace, school, and community emergency plans. However, the majority of the presentation's performance measures and targets are process- or output-oriented and may not match the desired result of the initiative. For example, the Goal 1, Objective 4, Initiative 2 (facilitating practitioner priorities into the program development process) desired outcomes are (1) an easily understood process for participation and feedback of the practitioner stakeholder communities to influence programmatic initiatives and priorities defined in Goal Groups 2, 3, and 4 and (2) an awareness and increased participation in the range of resource opportunities. 
Measures for this initiative include communication across Emergency Support Functions (ESFs) and an accountability chart and governance guidance document showing the feedback loop between ESFs, the Senior Policy Group/Chief Administrative Officer (SPG/CAO), and Regional Working Groups. Such measures identify completed activities or tasks, not how well understood the process is. A fourth measure for this initiative--understanding/agreeing on roles, responsibility, and accountability--might come closer to measuring the desired outcome. Third, many initiatives do not have performance targets. For example, targets are missing for all or some measures for initiatives under Goal 1, Objectives 1, 3, 4, and 5. Other targets are unclear. For example, one measure for both Goal 1, Objective 3, Initiative 1 (tasks and capabilities for the NCR) and Goal 1, Objective 3, Initiative 2 (gap analysis, recommendations, and appropriate actions) is the progress toward closing the gap between baseline and target capabilities. The target is "what we think we need to accomplish in HSPD 7/8." Any targets such as this would require clarification if progress toward results is to be assessed. A future NCR strategic plan could also be strengthened by including more complete time frames for initiative accomplishment, including specific milestones. In some cases, the time frame description is missing or is inconsistent with time frames provided within performance measure descriptions that generally cover activities or tasks. For example, Goal 3, Objective 1, Initiative 1 (region prevention and mitigation framework) has a time frame of fall 2006, but measures include targets in 2007. In several instances, measures of tasks or activities include milestones, but an overall time frame is not indicated. 
For example, Goal 3, Objective 3, Initiative 1 (critical infrastructure and high-risk targets risk assessments) and Goal 4, Objective 1, Initiative 1 (corrective action program for gaps) do not have time frames identified, but measures have dates extending into 2007 and 2009, respectively. Time frames should also match the initiative. In some cases, it is unclear whether the initiative description should be expanded to encompass activities that appear outside the scope of the initiative as written but that drive the time frame for the overall initiative. For example, Goal 3, Objective 1, Initiative 3 (health surveillance, detection, and mitigation functions plan) has an overall time frame of December 2010, but the 2010 date reflects implementation of a patient tracking system. In the list of measures, the plan itself is targeted for December 2008. Either the initiative description could be changed to include the system, or the patient tracking system measure could be removed or revised. A future NCR strategic plan could provide fuller information on the resources and investments associated with each initiative. For example, each initiative in the November 18 PowerPoint presentation has a section for cost and cost factors. However, the document does not explain what the cost categories of high, medium, or low mean in terms of dollar ranges. ONCRC officials told us that these descriptions should be considered more notional in nature, with low usually meaning well under $1 million and high meaning in the tens of millions. In many cases, the categorization of cost for an initiative is missing from the November 18 PowerPoint presentation initiative sections. More specific cost information by initiative, such as the funded and unfunded grant information that is provided in a summary format, would facilitate decision making in comparing trade-offs as options are considered. 
A plan also could be improved by including the sources of funding for the anticipated costs, whether federal, state, or local, or a combination of multiple sources. Last, any future NCR strategic plan could expand on organizational roles, responsibilities, coordination, and integration and implementation plans. Organizational roles, responsibilities, and coordination for each initiative would clarify accountability and leadership for completion of the initiative. The plan might also include information on how the plan will be integrated with the strategic plans of NCR jurisdictions and that of the ONCRC and plans to implement the regional strategy. There is no more important element in results-oriented management than the effort of strategic planning. This effort is the starting point and foundation for defining what an organization seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in reaching results-oriented goals and achieving objectives. Establishing clear goals, objectives, and milestones; setting performance goals; assessing performance against goals to set priorities; and monitoring the effectiveness of actions taken to achieve the designated performance goals are all part of the planning process. If done well, strategic planning is not a static or occasional event, but rather a dynamic and inclusive process. Continuous strategic planning provides the foundation for the most important things an organization does each day, and fosters informed communication between the organization and those affected by or interested in the organization's activities. We appreciate the fact that strategic plans, once issued, are living documents that require continual assessment. 
There is an understandable temptation to delay issuing a strategic plan at some point in the ongoing strategic planning process until the plan is considered perfect and all information has been collected, analyzed, and incorporated into the plan. However, failure to complete an initial strategic plan makes it difficult for decision makers to identify and assess NCR's first strategic goals, objectives, priorities, measures, and funding needs, and how resources can be leveraged across the region as events warrant. We continue to recommend that the Secretary of DHS work with the NCR jurisdictions to quickly complete a coordinated strategic plan to establish regional goals and priorities. - - - - - That concludes my statement, Mr. Chairman. I would be pleased to respond to any questions you or other members of the Committee may have. For questions regarding this testimony, please contact William O. Jenkins, Jr. at (202) 512-8757, email [email protected]. Sharon L. Caudle also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress asked GAO to provide comments on the National Capital Region's (NCR) strategic plan. GAO reported on NCR strategic planning, among other issues, in May 2004 and September 2004, testified before the House Committee on Government Reform in June 2004, and testified before the Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia in July 2005. In this testimony, we addressed completion of the NCR strategic plan, national and regional priorities, and strengthening any plan that is developed. Among its other statutory responsibilities, the Office of National Capital Region Coordination (ONCRC) is charged with coordinating with NCR agencies and other entities to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities. In May 2004 and again in July 2005, we recommended that the ONCRC complete a regional strategic plan to establish goals and priorities for enhancing first responder capacities that could be used to guide the effective use of federal funds. Although work has continued on an NCR strategic plan for the past 2 years, a completed plan is not yet available. According to NCR officials, completion of the plan requires integrating information and analyses from other documents completed or nearly completed, and a plan may not be available before September or October of 2006. The NCR's strategic planning should reflect both national and regional priorities and needs. The majority of the individual documents ONCRC provided to us as representing components for its strategic plan were developed in response to Department of Homeland Security fiscal year 2006 grant guidance to support the NCR's fiscal year 2006 grant application. It is appropriate and necessary that the NCR address national priorities, but the NCR's strategic plan should not be primarily driven by these requirements. 
It should integrate national and regional priorities and needs. A well-defined, comprehensive strategic plan for the NCR is essential for assuring that the region is prepared for the risks it faces. A November 18, 2005, NCR PowerPoint presentation describes the NCR's vision, mission, goals, objectives, and priority initiatives. That presentation includes some elements of a good strategic plan, including some performance measures, target dates, and cost estimates. A completed NCR strategic plan should build on the current elements that the NCR has developed and strengthen others based on the desirable characteristics of a national strategy that may also be useful for a regional approach to homeland security strategic planning. As it completes its strategic plan, the NCR could focus on strengthening (1) initiatives that will accomplish objectives under the NCR strategic goals, (2) performance measures and targets that indicate how the initiatives will accomplish identified strategic goals, (3) milestones or time frames for initiative accomplishment, (4) information on the resources and investments for each initiative, and (5) organizational roles, responsibilities, and coordination, as well as integration and implementation plans.
When Social Security was enacted in 1935, the nation was in the midst of the Great Depression. About half of the elderly depended on others for their livelihood, and roughly one-sixth received public charity. Many had lost their savings. Social Security was created to help ensure that the elderly would have adequate retirement incomes and would not have to depend on welfare. It would provide benefits that workers had earned because of their contributions and those of their employers. When Social Security started paying benefits, it responded to an immediate need to bolster the income of the elderly. The Social Security benefits that early beneficiaries received significantly exceeded their contributions, but even the very first beneficiaries had made some contributions. Initially, funding Social Security benefits required relatively low payroll taxes because very few of the elderly had earned benefits under the new system. Increases in payroll taxes were always anticipated to keep up with the benefit payments as the system matured and more retirees received benefits. Virtually from the beginning, Social Security was financed on this type of pay-as-you-go basis, with any single year's revenues collected primarily to fund that year's benefits. The Congress had rejected the idea of advance funding for the program, or collecting enough revenues to cover future benefit rights as workers accrued them. Many expressed concern that if the federal government amassed huge reserve funds, it would find a way to spend them. Over the years, both the size and scope of the program have changed, and periodic adjustments have been necessary. In 1939, coverage was extended to dependents and survivors. In the 1950s, state and local governments were given the option of covering their employees. The Disability Insurance program was added in 1956. 
Beginning in 1975, benefits were automatically tied to the Consumer Price Index to ensure that the purchasing power of benefits was not eroded by inflation. These benefit expansions led to higher payroll tax rates in addition to the increases stemming from the maturing of the system. Moreover, the long-term solvency of the program has been reassessed annually. Changes in demographic and economic projections have required benefit and revenue adjustments to maintain solvency, such as the amendments enacted in 1977 and 1983. Profound demographic trends are now contributing to Social Security's long-term financing shortfall. As a share of the total U.S. population, the elderly population grew from 7 percent in 1940 to 13 percent in 1996; this share is expected to increase further to 20 percent by 2050. As it ages, the baby-boom generation will increase the size of the elderly population. However, other demographic trends are at least as important. Life expectancy has increased continually since the 1930s, and further improvements are expected. Moreover, the fertility rate has declined from 3.6 children per woman in 1960 to around 2 children per woman today and is expected to level off at about 1.9 by 2020. Combined, increasing life expectancy and falling fertility rates mean that fewer workers will be contributing to Social Security for each aged, disabled, dependent, or surviving beneficiary. While 3.3 workers support each Social Security beneficiary today, only 2 workers are expected to be supporting each beneficiary by 2030. As a result of these demographic trends, Social Security revenues are expected to be about 14 percent less than expenditures over the next 75-year period, and demographic trends suggest that this imbalance will grow over time. By 2030, the Social Security trust funds are projected to be depleted. 
From then on, Social Security revenues are expected to be sufficient to pay only about 70 to 75 percent of currently promised benefits, given currently scheduled tax rates and the Social Security Administration's (SSA) intermediate assumptions about demographic and economic trends. In 2031, the last members of the baby-boom generation will reach age 67, when they will be eligible for full retirement benefits under current law. Restoring Social Security's long-term solvency will require some combination of increased revenues and reduced expenditures. A variety of options are available within the current structure of the program, such as raising the retirement age, reducing inflation adjustments, increasing payroll tax rates, and investing trust fund reserves in higher-yielding securities. However, some proposals would go beyond restoring long-term solvency and would fundamentally alter the program structure by setting up individual retirement savings accounts and requiring workers to contribute to them. Retirement income from these accounts would usually replace a portion of Social Security benefits. Some proposals would attempt to produce a net gain in retirement income. The combination of mandated savings deposits and revised Social Security taxes would be greater than current Social Security taxes, in most cases. Helping ensure adequate retirement income has been a fundamental goal of Social Security. While Social Security was never intended to guarantee an adequate income, it provides an income base upon which to build. Virtually all reform proposals also pay some attention to "income adequacy," but some place a different emphasis on it relative to the goal of "individual equity," which seeks to ensure that benefits bear some relationship to contributions. Some proponents of reform believe that increasing the role of individual retirement savings could improve individual equity without diminishing income adequacy. 
The current Social Security program seeks to ensure adequate retirement income in various ways. First, it makes participation mandatory, which guards against the possibility that some people would not otherwise save enough to have even a minimal retirement income. Reform proposals also generally make participation mandatory. Second, the current Social Security benefit formula redistributes income from high earners to low earners to help keep low earners out of poverty. It accomplishes this by replacing a larger share of lifetime earnings for low earners and a smaller share for high earners. In addition, Social Security helps ensure adequate income by providing benefits for dependent and surviving spouses and children who may not have the work history required to earn adequate benefits. Also, it automatically ensures that the purchasing power of benefits keeps pace with inflation, unlike most employer pension plans or individually purchased annuities. While the Social Security benefit formula seeks to ensure adequacy by redistributing income, it also promotes some degree of individual equity by ensuring that benefits are at least somewhat higher for workers with higher lifetime earnings. In helping ensure adequate retirement income, Social Security has contributed to reducing poverty among the elderly. (See fig. 1.) Since 1959, poverty rates for the elderly have dropped by two-thirds, from 35 percent to less than 11 percent in 1996. While they were higher than rates for children and for working-age adults (aged 18 to 64), they are now lower than for either group. For more than half the elderly, income other than Social Security was less than the poverty threshold in 1994. While Social Security provides a strong foundation for retirement income, it is only a foundation. In 1994, it provided an average of roughly $9,200 to all elderly households. Median Social Security benefits have historically been very close to the poverty threshold. 
Elderly households with below-average income rely heavily on Social Security, which provided 80 percent of income for 40 percent of elderly households in 1994. (See fig. 2.) One in seven elderly Americans has no income other than Social Security. Pockets of poverty remain. Women, minorities, and persons aged 75 and older are much more likely to be poor than other elderly persons. For example, compared with 11 percent for all elderly persons (aged 65 and older) in 1996, poverty rates were 23 percent for all elderly women living alone, roughly 25 percent for elderly blacks and Hispanics, and 31 percent for black women older than 75. Unmarried women make up more than 70 percent of poor elderly households, although they account for only 45 percent of all elderly households. Proposals that would increase the extent to which workers save for their own retirement would reduce income redistribution because any contributions to individual accounts that would otherwise go to Social Security would not be available for redistribution. Still, proponents of individual accounts assert that virtually all retirees would be at least as well off as they are now and that such reforms would improve individual equity. Citing historical investment returns, they argue that the rates of return that workers could earn on their individual retirement savings would be much higher than the returns they implicitly earn under the current system and that their retirement incomes could be higher as a result. Nevertheless, earning such higher returns would require investing in riskier assets such as stocks. Income adequacy under such reforms would depend on how workers invest their savings and whether they actually earn higher returns. It would also depend on what degree of Social Security coverage and its income redistribution would remain after reform. 
In addition to examining the effects of reform proposals on all retirees generally, attention should be paid to how they affect specific subpopulations, especially those that are most vulnerable to poverty, including women, widows, minorities, and the very old. Reform proposals vary considerably in their effects on such subpopulations. For example, since men and women typically have different earnings histories, life expectancies, and investment behaviors, reforms could exacerbate differences in benefits that already exist. An individual savings approach that permits little redistribution would on average generate smaller savings balances at retirement for women, who tend to have lower earnings from both employment and investments, and these smaller balances would need to last longer because women have longer life expectancies. The balance between income adequacy and individual equity also influences how much risk and responsibility are borne by individuals and the government. Workers face a variety of risks regarding their retirement income security. These include individually based risks, such as how long they will be able to work, how long they will live, whether they will be survived by a spouse or other dependents, how much they will earn and save over their lifetimes, and how much they will earn on retirement savings. Workers also face some collective risks, such as the performance of the economy and the extent of inflation. Different types of retirement income embody different ways of assigning responsibility for these risks. Social Security was based on a social insurance model in which the society as a whole through the government largely takes responsibility for all these risks to help ensure adequate income. This tends to minimize risks to the individuals and in the process lowers the rate of return they implicitly earn on their retirement contributions. 
Social Security provides income to workers who become disabled and to workers who reach retirement, for as long as they live, and to their spouses and dependents. The government takes responsibility for collecting and managing the revenues needed to pay benefits. By redistributing income, Social Security helps protect workers against low retirement income that stems from low lifetime earnings. Social Security pays a pension benefit that is determined by a formula that takes lifetime earnings into account. This type of pension is called a defined benefit pension. Many employer pensions are also defined benefit pensions. These pensions help smooth out variations in benefit amounts that can arise from year to year because of economic fluctuations. Defined benefit pension providers assume investment risks and some of the economic risks and take responsibility for investing and managing pension funds and ensuring that contributions are adequate to fund promised benefits. In contrast, defined contribution pensions, such as 401(k) accounts, base retirement income solely on the amount of contributions made and interest earned. Such pensions resemble individual savings. Retirement savings by individuals place virtually all the risk and responsibility on individuals but give them greater freedom and control over their income. Under reform proposals that increase the role of individual savings, the government's role would primarily be to make sure that workers contribute to their retirement accounts and to regulate the management of those accounts. Workers would be responsible for choosing how to invest their savings and would assume the investment and economic risks. Some proposals would allow workers to invest only in a limited number of "indexed" investment funds, which, like some mutual funds, are managed so they mirror the performance of market indexes like the Standard and Poor's 500. 
Some proposals would require workers to buy an annuity at retirement, while others would place few restrictions on how workers use their funds in retirement. Social Security places relatively greater emphasis on adequacy and less on individual equity by providing a way for all members of society to share all the risks. An individual retirement savings approach places relatively less emphasis on adequacy and more on individual equity by making retirement income depend more directly on each person's contributions and management of the funds. Reform proposals that would increase the role of individual savings would change the overall mix of different types of retirement income and with it the relative emphasis on adequacy and individual equity embodied by that mix. In addition to changing the relative roles of Social Security and individual savings, such Social Security reform could indirectly affect other sources of retirement income and related public policies. For example, raising Social Security's retirement age or cutting its benefit amounts could affect employer pensions. Some employers pay supplements to their pensions until retirees start to receive Social Security income, or they set their pension benefits relative to Social Security's. Employers might terminate their pension plans rather than pay increased costs. Reforms would also interact with other income support programs such as Social Security's Disability Insurance or the Supplemental Security Income public assistance program. For example, raising the retirement age could lead more older workers to apply for Social Security's disability benefits because those benefits would be greater than retirement benefits, if they qualify. No matter what shape Social Security reform takes, restoring long-term solvency will require some combination of benefit reductions and revenue increases. 
Within the current program structure, examples of possible benefit reductions include modifying the benefit formula, raising the retirement age, and reducing cost-of-living adjustments. Revenue increases might take the form of increases in the payroll tax rate, expanding coverage to include the relatively few workers who are still not covered under Social Security, or allowing the trust funds to be invested in potentially higher-yielding securities such as stocks. Reforms that increase the role of individual retirement savings would also involve Social Security benefit reductions or revenue increases, which might take slightly different forms. For example, such reforms might include Social Security benefit reductions to offset any contributions that are diverted from the current program or permitting workers to invest their retirement savings in stocks. The choice among various benefit reductions and revenue increases will affect the balance between income adequacy and individual equity. Benefit reductions could pose the risk of diminishing adequacy, especially for specific subpopulations. Both benefit reductions and tax increases that have been proposed could diminish individual equity by reducing the implicit rates of return the workers earn on their contributions to the system. In contrast, increasing revenues by investing retirement funds in the stock market could improve rates of return. The choice among various benefit reductions and revenue increases--for example, raising the retirement age--will ultimately determine not just how much income retirees will have but also how long they will be expected to continue working and how long their retirements will be. Reforms will determine how much consumption workers will give up during their working years to provide for more consumption during retirement. Reform proposals have also raised the issue of increasing the degree to which the nation sets aside funds to pay for future Social Security benefits. 
Advance funding could reduce payroll tax rates in the long term and improve intergenerational equity but would involve significant transition costs. As noted earlier, Social Security is largely financed on a pay-as-you-go basis. In a pure pay-as-you-go arrangement, virtually all revenues come from payroll taxes since trust funds are kept to a relatively small contingency reserve that earns relatively little interest compared with the interest that a fully funded system would earn. In contrast, defined benefit employer pensions are generally fully advance funded. As workers accrue future pension benefit rights, employers make pension fund contributions that are projected to cover them. The pension funds accumulate substantial assets that contribute a large share of national saving. The investment earnings on these funds contribute considerable revenues and reduce the size of pension fund contributions that would otherwise be required to pay pension benefits. Defined contribution pensions and individual retirement savings are fully funded by definition, and investment earnings on these retirement accounts also help provide retirement income. Similarly, Social Security reform proposals that increase the role of individual retirement savings would generally increase advance funding. Advance funding is possible in the public sector simply by collecting more revenue than is necessary to pay current benefits. However, advance funding in the public sector raises issues that prompted the Congress to reject advance funding in designing Social Security. A fully funded Social Security program would have trust funds worth trillions of dollars. If the trust funds were invested in private securities, some people would be concerned about the influence that government could have on the private sector. 
If these funds were invested only in federal government securities, as is required under current law, taxpayers would eventually pay both interest and principal to the trust funds and ultimately cover the full cost of Social Security benefits. Moreover, the effect of advance funding in the public sector fundamentally depends on whether the government as a whole is increasing national saving, as discussed further below. If Social Security reforms increase the balances in privately held retirement funds, interest on those funds could eventually help finance retirement income and reduce the system's reliance on Social Security payroll contributions, which in turn would improve individual equity. At the same time, the relatively larger generation of current workers could finance some of their future benefits now rather than leaving a relatively smaller future generation of workers with the entire financing responsibility. In effect, advance funding shifts responsibility for retirement income from the children of one generation of retirees to that retiree generation itself. However, larger payroll contributions would be required in the short term to build up those fund balances. Social Security would still need revenues to pay benefits that retirees and current workers have already been promised. The contributions needed to fund both current and future retirement liabilities would clearly be higher than those currently collected. Thus, increasing advance funding in any form involves substantial transition costs as workers are expected to cover some portion of both the existing unfunded liability and the liability for their own future benefits. Reform proposals handle this transition in a variety of ways, and the transition costs can be spread out across one or several generations. The nature of specific reform proposals will determine the pace at which advance funding is increased. 
For example, one proposal would increase payroll taxes by 1.52 percent for 72 years to fund the transition and would involve borrowing $2 trillion from the public during the first 40 years of the transition to help cover the unfunded liability. Ideally, Social Security reforms would help address the fundamental economic implications of the demographic trends that underlie Social Security's financing problems. Although people are living longer and healthier lives, they have also been retiring earlier and have been having smaller families. Unless these patterns change, relatively fewer workers will be producing goods and services for a society with relatively more retirees. Economic growth, and more specifically growth in labor productivity, could help ease the strains of providing for a larger elderly population. Increased investment in physical and human capital should generally increase productivity and economic growth, but investment depends on national saving, which has been at historically low levels. Recognizing these economic fundamentals, proponents of increasing the role of individual retirement savings generally observe that a pay-as-you-go financing structure does little to help national saving, and they argue that the advance funding through individual accounts would increase saving. However, reforms would not produce notable increases in national saving to the extent that workers reduce their other saving in the belief that their new accounts can take its place. Social Security reforms might also increase national saving within the current program structure. Advance funding would increase saving, and it could be applied to government-controlled trust funds as well as to individual accounts. Any additional Social Security savings in the federal budget could add to national saving but only if not offset by deficits in the rest of the federal budget. 
More broadly, overall federal budget surpluses or deficits affect national saving since they represent saving or dissaving by the government. To the extent that reforms attempt to increase national saving, they will vary by how much emphasis they place on doing so through individual or government saving. That emphasis will reflect not only judgments about which is likely to be more effective but also values regarding the responsibilities of individuals and governments and attitudes toward the national debt. While these points will be much debated, few dispute the need to be aware of the effect of increasing national saving, although it may be hard to achieve. In some form and to varying degrees, every generation of children has supported its parents' generation in old age. In economic terms, those who do work ultimately produce the goods and services consumed by those who do not. The Social Security system and, more broadly, the nation's retirement income policies, whatever shape they take, ultimately determine how and to what extent the nation supports the well-being of the elderly. Restoring Social Security's long-term solvency presents complex and important choices. These choices include how reforms will balance income adequacy and individual equity; how risks are shared as a community or assumed by individuals; how reforms assign roles and responsibilities among government, employers, and individuals; whether retirements will start earlier or later and how large retirement incomes will be; and how much the nation saves and invests in its capacity to produce goods and services. Whatever reforms are adopted will reflect these fundamental choices implicitly, if not explicitly. This concludes my testimony. I would be happy to answer any questions. Social Security Reform: Implications for Women's Retirement Income (GAO/HEHS-98-42, Dec. 31, 1997). Social Security Reform: Demographic Trends Underlie Long-Term Financing Shortage (GAO/T-HEHS-98-43, Nov. 20, 1997). 
Pursuant to a congressional request, GAO discussed the goals of the social security program and the difficult choices that restoring its long-term solvency will require, focusing on: (1) balancing income adequacy and individual equity; (2) determining who bears risks and responsibilities; (3) choosing among various benefit reductions and revenue increases; (4) using pay-as-you-go or advance funding; and (5) deciding how much to save and invest in the nation's productive capacity. GAO noted that: (1) helping ensure adequate retirement income has been a fundamental goal of social security; (2) virtually all reform proposals also pay some attention to income adequacy, but some place a different emphasis on it relative to the goal of individual equity, which seeks to ensure that benefits bear some relationship to contributions; (3) some proponents of reform believe that increasing the role of individual retirement savings could improve individual equity without diminishing income adequacy; (4) the balance between income adequacy and individual equity also influences how much risk and responsibility are borne by individuals and the government; (5) workers face a variety of risks regarding their retirement income security; (6) no matter what shape social security reform takes, restoring long-term solvency will require some combination of benefit reductions and revenue increases; (7) revenue increases might take the form of increases in the payroll tax rate, expanding coverage to include the relatively few workers who are still not covered under social security, or allowing the trust funds to be invested in potentially higher-yielding securities such as stocks; (8) reforms that increase the role of individual retirement savings would also involve social security benefit reductions or revenue increases, which might take slightly different forms; (9) reform proposals have also raised the issue of increasing the degree to which the nation sets aside funds to pay for future 
social security benefits; (10) advance funding could reduce payroll tax rates in the long term and improve intergenerational equity but would involve significant transition costs; (11) in a pure pay-as-you-go arrangement, virtually all revenues come from payroll taxes since trust funds are kept to a relatively small contingency reserve that earns relatively little interest compared with the interest that a fully funded system would earn; (12) in contrast, defined benefit employer pensions are generally fully advance funded; (13) ideally, social security reforms would help address the fundamental economic implications of the demographic trends that underlie social security's financing problems; (14) economic growth, and more specifically growth in labor productivity, could help ease the strains of providing for a larger elderly population; and (15) increased investment in physical and human capital should generally increase productivity and economic growth, but investment depends on national saving, which has been at historically low levels.
AMC, located at Scott Air Force Base, Illinois, is responsible for providing strategic airlift, including air refueling, special air missions, and aeromedical evacuation. As part of that mission, AMC is responsible for tasking 67 C-5 aircraft: 35 stationed at Travis Air Force Base in California and 32 stationed at Dover Air Force Base in Delaware. Unlike other Air Force aircraft, the C-5 is rarely deployed for more than 30 days, since it is primarily used to move cargo from the United States to locations worldwide. As a result, C-5 aircrews are deployed away from home for several weeks and then return to their home station. Other Air Force aircraft, such as the KC-10, can carry cargo but are primarily used to refuel other aircraft and can be deployed to locations around the world for extended periods of time. Since September 11, 2001, C-5 aircrews have been deployed for periods of less than 30 days, generally ranging from 7 to 24 days. Known for its ability to carry oversized and heavy loads, the C-5 can transport a wide variety of cargo, including helicopters and Abrams M1A1 tanks, to destinations worldwide. Recently, C-5s have been used for a variety of missions, including support of presidential travel, contracted movement of materials by other government organizations, training missions, and support of Operations Enduring Freedom and Iraqi Freedom. In addition, the C-5 can also transport about 70 passengers. The aircrew for a C-5 consists of two pilots, a flight engineer, and two loadmasters. At Travis Air Force Base there are 439 active duty and 383 reserve aircrew members. At Dover Air Force Base there are 650 active duty and 344 reserve aircrew members. Within the Office of the Secretary of Defense (OSD), the Under Secretary of Defense (Personnel and Readiness) is responsible for DOD personnel policy, including oversight of military compensation. 
The Under Secretary of Defense (Personnel and Readiness) leads the Unified Legislation and Budgeting process, established in 1994 to develop and review personnel compensation proposals. As part of this process, the Under Secretary of Defense (Personnel and Readiness) chairs biannual meetings, attended by the principal voting members from the Office of the Under Secretary of Defense (Personnel and Readiness), including the Principal Deputy Under Secretary of Defense (Personnel and Readiness), the Assistant Secretary of Defense (Reserve Affairs), the Assistant Secretary of Defense (Health Affairs), the Office of the Under Secretary of Defense (Comptroller), the Joint Staff, and the services' Assistant Secretaries for Manpower and Reserve Affairs. In 1963, Congress established the $30-per-month family separation allowance to help offset the additional expenses incurred by the dependents of servicemembers who are away from their permanent duty station for more than 30 consecutive days. According to statements made by members of Congress during consideration of the legislation establishing the family separation allowance, additional expenses could stem from costs associated with home repairs, automobile maintenance, and childcare that could not be performed by the deployed servicemember. Over the years, the eligibility requirements for the family separation allowance have changed. For example, while the family separation allowance was initially authorized for enlisted members in pay grades E-5 and above as well as for enlisted members in pay grade E-4 with 4 years of service, today the family separation allowance is authorized for servicemembers in all pay grades at a flat rate of $250 per month. Servicemembers must apply for the family separation allowance, certifying their eligibility to receive the allowance. The rationale for establishing the 30-day threshold is unknown. 
However, DOD officials noted that servicemembers deployed for more than 30 days do not have the same opportunities to minimize household expenses as those who are deployed for less than 30 days. For example, servicemembers who are able to return to their permanent duty locations may perform home repairs and do not have to pay someone to do these tasks for them. The 1963 family separation allowance legislation was divided into two subsections, one associated with overseas duty and one associated with any travel away from the servicemember's home station. The first subsection was intended to compensate servicemembers who were permanently stationed overseas and were not authorized to bring dependents. The second subsection was intended to compensate servicemembers for added expenses associated with their absence from their dependents and permanent duty station for extended periods of time, regardless of whether the members were deployed domestically or overseas. Originally, this aspect of family separation compensation was also to be based on the allowance for living quarters. At that time, members would receive one-third of the allowance for living quarters or a flat rate of $30 per month, whichever amount was larger. In July 1963, the Senate heard testimony from DOD officials who generally agreed with the proposed legislation but raised concerns about using the allowance for living quarters as a baseline. Their concerns were related to the complexity of determining the payments and the inequities associated with tying payment to rank. Ultimately, DOD proposed and Congress accepted a flat rate of $30 per month for eligible personnel. DOD has not identified frequent short-term deployments of less than 30 days as a family separation allowance issue. 
No proposals seeking modifications to the family separation allowance because of frequent short-term deployments have been provided to DOD for consideration as part of DOD's Unified Legislation and Budgeting process, which reviews personnel compensation proposals. Since 1994, a few proposals have been made seeking changes to allowance amounts and eligibility requirements. None of the proposals sought to change the 30-day eligibility threshold. Further, our discussions with OSD, service, and reserve officials did not reveal any concerns related to frequent short-term deployments and the family separation allowance. To analyze concerns that might be raised by those experiencing frequent short-term deployments, we conducted group discussions with Air Force strategic C-5 airlift aircrews at Travis Air Force Base, which we identified as an example of servicemembers who generally deploy for periods less than 30 days. We did not identify any specific concerns regarding compensation received as a result of short-term deployments. We found that the C-5 aircrews were generally more concerned about the high pace of operations and associated unpredictability of their schedules, due to the negative impact on their quality of life, than about qualifying for the family separation allowance. DOD has proposed few changes to the amount of the family separation allowance and no proposals have been submitted to alter the 30-day eligibility threshold. Our review of proposals submitted through DOD's Unified Legislation and Budgeting process revealed that DOD has considered one proposal to change the amount of the monthly family separation allowance since 1994. In 1997, an increase in the family separation allowance from $75 to $120 was proposed. This provision was not approved by DOD. Since 1994, three modifications to the eligibility criteria have also been proposed. 
In 1994, a proposal was made to allow payment of the family separation allowance for members embarked on board a ship or on temporary duty for 30 consecutive days, whose family members were authorized to accompany the member but voluntarily chose not to do so. The proposal was endorsed by DOD and accepted by Congress. In 2001, DOD considered but ultimately rejected a similar proposal that would have applied to all members who elect to serve an unaccompanied tour of duty. The third proposal sought to modify the use of family separation allowance for joint military couples (i.e. one military member married to another military member). According to a DOD official, while this proposal was not endorsed by DOD, Congress ultimately passed legislation that clarified the use of family separation allowance for joint military couples. The family separation allowance is now payable to joint military couples, provided the members were residing together immediately before being separated by reason of their military orders. Although both may qualify for the allowance, only one monthly allowance may be paid to a joint military couple during a given month. If both members were to receive orders requiring departure on the same day, then payment would be made to the senior member. Overall, C-5 aircrew members and aircrew leadership with whom we met noted that the unpredictability of missions was having more of an adverse impact on crewmembers' quality of life than the compensation they receive as a result of their deployments. For example, several aircrew members at Travis Air Force Base indicated that over the past two years, they have been called up on very short advance notice, as little as 12 hours, and sent on missions lasting several weeks, making it difficult to conduct personal business or make plans with their families. 
According to the aircrew members and both officer and enlisted leadership with whom we met, the unpredictability of their missions is expected to continue for the foreseeable future due to the global war on terrorism. Officials informed us that the average number of days by month that aircrew members have been deployed has increased since September 11, 2001, with periods of higher activity, or surges. For example, as shown in figure 1, the average number of days in September 2001 that AMC C-5 co-pilots were deployed was 9. Since then, the average number of days by month that C-5 co-pilots were deployed has fluctuated between 12 and 19. Prior to September 2001, available data shows a low monthly average of 5 days in January 2001. While the average number of days deployed has fluctuated, aircrew members expressed concern about the intermittent suspension of pre- and post-mission crew rest periods that have coincided with increased operations. Generally, these periods have been intended to ensure that aircrew members have enough rest prior to flying another mission. However, aircrew members noted that crew rest periods also allow them to perform other assigned duties and spend time with their families. During our discussion-group meetings, aircrew members indicated that the rest period after a mission had been reduced from as much as 4 days to as little as 12 hours due to operational needs. In addition to basic compensation, DOD has several special pays and allowances available to further compensate servicemembers deployed for less than 30 days. Servicemembers who are deployed domestically or overseas for less than 30 days may be eligible to receive regular per diem. The per diem amount varies depending upon location. Servicemembers also may be eligible to receive other pays and allowances, such as hazardous duty pay, mission-oriented hardship duty pay, and combat-zone tax exclusions. 
However, DOD has not implemented one special allowance designed, in part, to compensate those frequently deployed for short periods. Congress supported DOD's legislative proposal to authorize a monthly high-deployment allowance with passage of the National Defense Authorization Act for Fiscal Year 2004. This provision allows the services to compensate their members for lengthy deployments as well as frequent shorter deployments. However, DOD has not set a timetable for establishing criteria to implement this allowance. In addition to basic military pay, servicemembers who are deployed for less than 30 days may also be eligible to receive regular per diem, other special pays and allowances, and tax exclusions (see table 1). When servicemembers are performing temporary duty away from their permanent duty station, they are entitled to per diem, which provides reimbursement for meals, incidental expenses, and lodging. To be eligible for per diem, servicemembers must perform temporary duty for more than 12 hours at a location to receive the per diem rate for that location. The per diem rates are established by the General Services Administration, the State Department, and DOD's Per Diem, Travel, and Transportation Allowance Committee. The rates range from $86 to $284 per day within the continental United States and from $20 to $533 per day when outside the continental United States, depending on whether government meals and lodging are provided. Aircrews can earn various per diem rates during the course of their travel. For example, a typical two-week mission for Travis C-5 aircrew members would take them to Dover Air Force Base, then to Moron, Spain, and then to Baghdad, Iraq. At each of these locations, the aircrews can spend a night, allowing them to accrue the applicable per diem rate for that location. 
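The per diem accrual described above amounts to simple arithmetic: each overnight stop earns the daily rate established for that location. A minimal sketch follows; the location names, night counts, and dollar rates are hypothetical placeholders for illustration only, not actual published per diem rates:

```python
# Illustrative sketch of per diem accrual across the stops of a notional
# mission. Locations and daily rates are hypothetical placeholders.
DAILY_RATES = {
    "Location A": 160.00,  # dollars per day (hypothetical)
    "Location B": 230.00,
    "Location C": 150.00,
}

def accrue_per_diem(itinerary):
    """Sum per diem for a list of (location, nights) stops."""
    return sum(DAILY_RATES[location] * nights for location, nights in itinerary)

# A notional mission with one night at each stop:
mission = [("Location A", 1), ("Location B", 1), ("Location C", 1)]
total = accrue_per_diem(mission)  # 160 + 230 + 150 = 540.00
```

In practice the applicable rate also depends on whether government meals and lodging are provided at each stop, a detail this sketch omits.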
According to the Air Force, per diem rates for a typical C-5 mission are as follows: $157 for Dover Air Force Base; $235 for Moron, Spain; and $154 for Baghdad, Iraq. In some cases, aircrews may receive a standard $3.50 per day for incidental expenses when they are at locations where the government can provide meals and lodging. This is the standard per diem rate used to compensate servicemembers traveling outside of the continental United States when the government can provide lodging and meals. Hostile fire pay and imminent danger pay provide additional compensation for duty performed in designated areas where servicemembers are subject to hostile fire or imminent danger. Both pays are derived from the same statute and cannot be collected simultaneously. Servicemembers are entitled to hostile fire pay, an event-based pay, if they are (1) subjected to hostile fire or explosion of hostile mines; (2) on duty in an area close to a hostile fire incident and in danger of being exposed to the same dangers actually experienced by other servicemembers subjected to hostile fire or explosion of hostile mines; or (3) killed, injured, or wounded by hostile fire, explosion of a hostile mine, or any other hostile action. Imminent danger pay is a threat-based pay intended to compensate servicemembers in specifically designated locations that pose a threat of physical harm or imminent danger due to civil insurrection, civil war, terrorism, or wartime conditions. To be eligible for this pay in a given month, servicemembers must have served some time, even one day or less, in one of the designated zones during that month. The authorized amount for hostile fire and imminent danger pay is $150 per month, although the fiscal year 2003 Emergency Wartime Supplemental Appropriations Act temporarily increased the amount to $225 per month. If Congress takes no further action, the rate will revert to $150 per month in January 2005. 
Mission-oriented hardship duty pay compensates military personnel for duties designated by the Secretary of Defense as hardship duty due to the arduousness of the mission. Mission-oriented hardship duty pay is payable at a monthly rate up to $300, without prorating or reduction, when the member performs the specified mission during any part of the month. DOD has established that this pay be paid at a flat monthly rate of $150 per month. Active and Reserve component members who qualify, at any time during the month, receive the full monthly mission-oriented hardship duty pay, regardless of the period of time on active duty or the number of days they receive basic pay during the month. This pay is currently only available to servicemembers assigned to, on temporary duty with, or otherwise under the Defense Prisoner of War/Missing Personnel Office, the Joint Task Force-Full Accounting, or Central Identification Lab-Hawaii. Hardship duty includes missions such as locating and recovering the remains of U.S. servicemembers from remote, isolated areas including, but not limited to, areas in Laos, Cambodia, Vietnam, and North Korea. The combat-zone tax exclusion provides exclusion from federal income tax, as well as income tax in many states, to servicemembers serving in a presidentially designated combat zone or in a statutorily established hazardous duty area for any period of time. For example, although the C-5 aircrews at Travis and Dover Air Force Bases do not serve in a designated combat zone for an extended period of time, many of the missions that they fly may be within areas designated for combat-zone tax exclusion eligibility. Enlisted personnel and warrant officers may exclude all military compensation earned in the month in which they perform active military service in a combat-zone or qualified hazardous duty area for active military service from federal income tax. 
For commissioned officers, compensation is free of federal income tax up to the maximum amount of enlisted basic pay plus any imminent danger pay received. DOD has not established criteria defining what constitutes frequent deployments, nor has it determined eligibility requirements in order to implement the high deployment allowance. DOD sought significant modifications to high deployment compensation through a legislative proposal to the National Defense Authorization Act for Fiscal Year 2004. Congress had established a high deployment per diem as part of the National Defense Authorization Act for Fiscal Year 2000. Pursuant to statutorily granted authority, on October 8, 2001, DOD waived application of the high deployment compensation in light of the ongoing military response to the terrorist attacks on September 11, 2001. After implementing the waiver authority, DOD sought legislative changes to the high deployment compensation in an effort to better manage deployments. DOD's proposal sought, among other things, to: (1) change high- deployment compensation from a per diem rate to a monthly allowance, (2) reduce the dollar amount paid so that it was more in line with other special pays (e.g. hostile fire pay), and (3) allow DOD to recognize lengthy deployments as well as frequent deployments. The National Defense Authorization Act for Fiscal Year 2004 reflects many of DOD's proposed changes. The act changed the $100 per diem payment into an allowance not to exceed $1,000 per month. To help compensate those servicemembers who are frequently deployed, the act established a cumulative 2-year eligibility threshold not to exceed 401 days. Also, the act provided the Secretary of Defense with the authority to prescribe a cumulative threshold lower than 401 days. 
Depending upon where the Secretary of Defense establishes the cumulative threshold, servicemembers, such as the C-5 aircrews, serving multiple short-term deployments may be compensated through the high deployment allowance. Once a servicemember's deployments exceed the established cumulative-day threshold, the member is to be paid a monthly allowance not to exceed $1,000 per month beginning the following month. From that point forward, the servicemember will continue to qualify for the allowance as long as the total number of days deployed during the previous 2-year period exceeds the cumulative threshold established by the Secretary of Defense. The high deployment allowance is in addition to per diem and any other special pays and allowances for which the servicemember might qualify. Moreover, the servicemember does not have to apply for the allowance, as the act mandated that DOD track and monitor days deployed and make payment accordingly. Finally, DOD may exclude specified duty assignments from eligibility for the high deployment allowance (e.g., sports teams or senior officers). According to DOD officials, this provision also provides additional flexibility in targeting the allowance to selected occupational specialties, by allowing DOD to exclude all occupations except those that it wishes to target for additional compensation because of retention concerns. The Senate report accompanying the bill that amended the high deployment provision encouraged DOD to promptly implement these changes. However, DOD officials told us that a timetable for establishing the criteria necessary to implement the high deployment allowance has not been set. Although we could not ascertain exactly why DOD had not taken action to implement the high deployment allowance, OSD officials informed us that the services had difficulty reaching agreement on what constitutes a deployment for purposes of the high deployment payment. 
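The eligibility mechanics described above can be sketched as a simple check: a member qualifies once cumulative days deployed over the preceding 2-year period exceed the threshold set by the Secretary of Defense, which may not exceed the 401-day statutory ceiling. The function name and the sample threshold values below are our own illustrative assumptions, not DOD-established criteria:

```python
# Illustrative sketch of the high deployment allowance eligibility rule.
# The statutory ceiling on the cumulative threshold is 401 days over a
# 2-year period; the Secretary of Defense may set a lower threshold.
STATUTORY_MAX_THRESHOLD = 401

def qualifies_for_allowance(days_deployed_past_2_years,
                            threshold=STATUTORY_MAX_THRESHOLD):
    """Return True once days deployed in the previous 2-year period
    exceed the established cumulative-day threshold."""
    if threshold > STATUTORY_MAX_THRESHOLD:
        raise ValueError("threshold may not exceed the 401-day statutory ceiling")
    return days_deployed_past_2_years > threshold

# With the statutory ceiling as the threshold:
qualifies_for_allowance(420)                 # True: allowance payable the following month
qualifies_for_allowance(300)                 # False
# With a hypothetical lower threshold of 220 days:
qualifies_for_allowance(300, threshold=220)  # True
```

Because the act requires DOD to track days deployed and pay the allowance automatically, a check like this would run against DOD's deployment records rather than member applications.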
The family separation allowance is directed at enlisted servicemembers and officers whose dependents incur extra expenses when the servicemember is deployed for more than 30 consecutive days. We found no reason to question the eligibility requirements that have been established for DOD's family separation allowance. We believe that no basis exists to change the 30-day threshold, as no problem has been identified with the family separation allowance. Further, servicemembers who deploy for less than 30 days may be eligible to receive additional forms of compensation resulting from their deployment, such as per diem, other special pays and allowances, and tax exclusions. Since the terrorist attacks of September 11, 2001, some servicemembers have experienced more short-term deployments. Given the long-term nature of the global war on terrorism, this increase in the frequency of short-term deployments is expected to continue for the foreseeable future. DOD will need to ensure adequate compensation for servicemembers using all available special pays and allowances in addition to basic pay. While the aircrews with whom we met did not express specific concerns about compensation, they, like other servicemembers, are concerned about quality-of-life issues. The high deployment allowance could help to address such issues for servicemembers, while helping to mitigate DOD's possible long-term retention concerns. Also, unlike the family separation allowance, the high deployment allowance could be used to compensate servicemembers regardless of whether or not they have dependents. Although the Senate report accompanying the bill that amended the high deployment provision encouraged DOD to promptly implement these changes, the Secretary of Defense has not taken action to implement the high deployment allowance. 
We recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Personnel and Readiness), in concert with the Service Secretaries and the Commandant of the Marine Corps, to take the following three actions: set a timetable for establishing criteria to implement the high deployment allowance; define, as part of the criteria, what constitutes frequent short-term deployments within the context of the cumulative day requirement as stated in the high deployment allowance legislation; and determine, as part of the criteria, eligibility requirements targeting the high deployment allowance to selected occupational specialties. In written comments on a draft of this report, DOD partially concurred with our recommendations that it set a timetable for establishing criteria to implement the high deployment allowance and specify what the criteria should include. While DOD agreed that servicemembers should be recognized with additional pay for excessive deployments, it stated that it has not implemented the high deployment allowance because it views the allowance as a peacetime authority. Further, DOD stated that since we are in a wartime posture, it is more difficult to control the pace of deployments than during peacetime. DOD's response noted that it has elected to exercise the waiver given to it by Congress to suspend the entitlement for reasons of national security. DOD also noted that it has encouraged the use of other flexible pay authorities to compensate servicemembers who are away from home for inordinate periods. Finally, DOD stated that it would reassess the use of the high deployment allowance at some point in the future. We do not believe that the nation's current wartime situation prevents DOD from taking our recommended actions, the first of which is to set a timetable for establishing criteria to implement the high deployment allowance. 
We did recognize in our report that, pursuant to statutorily granted authority, on October 8, 2001, DOD waived application of the high deployment allowance in light of the ongoing military response to the terrorist attacks of September 11, 2001. However, since then, DOD has sought, through a legislative proposal for the National Defense Authorization Act for Fiscal Year 2004, additional flexibility to better manage high deployment compensation. These additional flexibilities include providing DOD with the opportunity to tailor the allowance to meet current or expected needs. Since the purpose of special pays and allowances is primarily to help retain more servicemembers, the high deployment allowance could be used as another compensation tool to help retain servicemembers during a time of war. As our report clearly states, given the expectations for a long-term commitment to the war on terrorism, developing the criteria for implementing the high deployment allowance would provide DOD with an additional option for compensating those military personnel who are frequently deployed for short periods of time. Regarding DOD's use of other flexible pay authorities to compensate servicemembers who are away from home for inordinate periods, the examples DOD cites for lengthy or protracted deployments in Iraq, Afghanistan, and Korea are not applicable to those servicemembers deployed for less than 30 days, the focus of this review. Finally, the vagueness of when and how the high deployment allowance will be implemented runs contrary to the congressional direction, which encouraged DOD to promptly implement the new high deployment allowance authority. Based on DOD's response, it is not clear when DOD intends to develop criteria to implement the high deployment allowance. We recommended that DOD set a timetable for establishing criteria to implement the high deployment allowance, not that DOD implement the allowance immediately. 
We believe that this recommendation is warranted, since establishing the criteria will make it possible for DOD to implement the high deployment allowance quickly, whenever it is deemed appropriate and necessary. DOD's comments are reprinted in their entirety in appendix I. DOD also provided technical comments, which we have incorporated as appropriate. To assess the rationale for family separation allowance eligibility requirements, including the rationale for the 30-day threshold, we reviewed the legislative history concerning the family separation allowance and analyzed DOD policies implementing this pay. We also interviewed officials in the offices of the Under Secretary of Defense (Personnel and Readiness); the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Air National Guard; and the Air Force Reserve. To determine the extent to which DOD had identified frequent short-term deployments as a family separation allowance issue, we reviewed proposals submitted through DOD's Unified Legislation and Budgeting process. We met with compensation representatives from the Office of the Under Secretary of Defense (Personnel and Readiness) and each of the services. We interviewed officials with the Defense Manpower Data Center and the Defense Finance and Accounting Service. We sought to use DOD's database for tracking and monitoring deployments to determine the extent of servicemembers experiencing frequent deployments lasting less than 30 days. We were not able to use the database for the purposes of our report to discern the number of deployments by location lasting less than 30 days, since more than 40 percent of the data for location was not included in the database. In addition, the database did not contain information related to some types of non-deployment activities (e.g. training), which we deemed important to our review. 
We focused our study on the Air Force since the fiscal year 2003 Secretary of Defense Annual Report to the President and Congress showed that the Air Force was the only service whose members were deployed less than 30 days on average in fiscal year 2002. Further, through discussions with Air Force officials, we identified strategic aircrews managed by the AMC as examples of those who would most likely be experiencing short-term deployments. We visited AMC, where we met with officials from personnel, operations, finance, and the tactical airlift command center. To understand the views of one group of short-term deployers, we visited Travis Air Force Base in California, where we met with officer and enlisted leadership for the C-5 and KC-10 aircraft. We held discussion groups with 12 officers and 12 enlisted aircrew members from both aircraft, for a total of 48 aircrew members. We visited Dover Air Force Base in Delaware, where we met with C-5 officer and enlisted leadership. We also met with officials representing personnel, operations, and finance offices at both Travis and Dover Air Force Bases. We assessed the reliability of AMC C-5 copilot deployment data, as well as data contained in the fiscal year 2003 Secretary of Defense Annual Report to the President and Congress. GAO's assessment consisted of (1) reviewing existing information about the data and the systems that produced them, (2) examining the electronic data for completeness, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To assess what special pays and allowances are available, in addition to basic compensation, to further compensate servicemembers deployed for less than 30 days, we identified special pays and allowances that do not have a time eligibility factor through DOD's Military Compensation Background Papers, legislative research, and discussions with OSD officials.
We reviewed the legislative history regarding recent legislative changes to special pays and allowances and how DOD has implemented these changes. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Personnel and Readiness); the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to appropriate congressional committees and to other interested parties on request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-5559 or Brenda S. Farrell at (202) 512-3604. Major contributors to this report were Aaron M. Adams, Kurt A. Burgeson, Ann M. Dubois, Kenya R. Jones, and Ronald La Due Lake. 1. The purpose of our congressionally directed review was to assess the special pays and allowances available to DOD that could be used to compensate servicemembers who are frequently deployed for less than 30 days. Consequently, our scope did not include an assessment of compensation for servicemembers serving lengthy, or protracted, deployments of 30 days or more. We found that DOD has available and is using several special pays and allowances, in addition to basic compensation, to further compensate servicemembers deployed for less than 30 days. However, we also found that DOD has one special allowance, the high deployment allowance, that is not available to provide further compensation to servicemembers who frequently deploy for less than 30 days and that DOD has not set a timetable to establish criteria to implement the allowance. During our review, we could not ascertain exactly why DOD had not taken action to develop criteria for implementing the high deployment allowance. 
During several discussions, OSD officials stated that the services had difficulty reaching agreement on what constitutes a deployment for the purposes of the high deployment payment. DOD's response to our draft report noted that it has elected to exercise the waiver given to it by Congress to suspend the high deployment allowance for reasons of national security. We recognized this waiver in our report. We also noted that after DOD waived application of the high deployment payment on October 8, 2001, DOD sought legislative modifications of the high deployment payment that would give it more flexibilities to better manage this type of compensation. Congress granted DOD these flexibilities and encouraged DOD to promptly implement these changes. As noted in our report, given the expectations for a long-term commitment to the war on terrorism, developing the criteria for implementing the new high deployment allowance would provide DOD with an additional option for compensating those military personnel who are frequently deployed for short periods of time. Also, the high deployment allowance, unlike the family separation allowance, could be used to compensate servicemembers regardless of whether or not they have dependents and thus would reach more servicemembers.
The fiscal year 2004 National Defense Authorization Act directed GAO to assess the special pays and allowances for servicemembers who are frequently deployed for less than 30 days, and to specifically review the family separation allowance. GAO's objectives were to assess (1) the rationale for the family separation allowance eligibility requirements, including the required duration of more than 30 consecutive days away from a member's duty station; (2) the extent to which DOD has identified short-term deployments as a family separation allowance issue; and (3) what special pays and allowances, in addition to basic military compensation, are available to compensate members deployed for less than 30 days. In 1963, Congress established the family separation allowance to help offset the additional expenses that may be incurred by the dependents of servicemembers who are away from their permanent duty station for more than 30 consecutive days. Additional expenses may include the costs associated with home repairs, automobile maintenance, and childcare that could have been performed by the deployed servicemember. Over the years, the eligibility requirements for the family separation allowance have changed. Today, the family separation allowance is authorized for officers and enlisted in all pay grades at a flat rate. The rationale for establishing the 30-day threshold is unknown. DOD has not identified frequent short-term deployments as a family separation allowance issue. No proposals seeking modifications to the family separation allowance because of frequent short-term deployments have been provided to DOD for consideration as part of DOD's Unified Legislation and Budgeting process, which reviews personnel pay proposals. Further, DOD officials were not aware of any specific concerns that have been raised by frequently deployed servicemembers about their eligibility to receive the family separation allowance. 
Based on group discussions with Air Force strategic airlift aircrews, who were identified as examples of those most likely to be experiencing short-term deployments, we did not identify any specific concerns regarding the lack of family separation allowance compensation associated with short-term deployments. Rather, many aircrew members indicated that the high pace of operations and the associated unpredictability of their schedules were a greater concern because of the negative impact on their quality of life. In addition to basic military compensation, DOD has several special pays and allowances to further compensate servicemembers deployed for short periods. Servicemembers who are deployed for less than 30 days may be eligible to receive regular per diem. The per diem amount varies depending upon location. For example, these rates range from $86 to $284 per day within the United States and from $20 to $533 per day when outside the United States. However, DOD has not implemented the high deployment allowance designed, in part, to compensate those frequently deployed for shorter periods. Congress supported DOD's legislative proposal to authorize a monthly high deployment allowance. This allowance permits the services to compensate members for lengthy as well as frequent shorter deployments. The most recent amendment to this provision provides DOD with the authority to adjust a cumulative day threshold to help compensate servicemembers experiencing frequent short deployments. DOD has flexibility to exclude all occupations except those that it wishes to target for additional pay. However, DOD has not established criteria to implement this allowance, nor has DOD set a timetable for establishing such criteria.
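Because DOD has not established implementation criteria, any illustration of how a cumulative day threshold might operate is necessarily hypothetical. The sketch below assumes a placeholder 180-day threshold and simply counts deployed days beyond it; none of these figures come from DOD.

```python
# Hypothetical sketch of a cumulative day threshold for the high
# deployment allowance. The 180-day threshold is a placeholder; DOD
# had not established actual criteria at the time of this report.

def days_over_threshold(deployment_lengths, threshold=180):
    """Count deployed days beyond the cumulative threshold."""
    return max(0, sum(deployment_lengths) - threshold)

# Twelve short deployments of 20 days each: 240 cumulative days,
# 60 of which exceed the placeholder 180-day threshold.
print(days_over_threshold([20] * 12))  # 60
```

Under such a scheme, frequent short deployments would accumulate toward the threshold just as a single lengthy deployment would, which is the flexibility Congress provided.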
Throughout the disability compensation claims process, VBA staff have various roles and responsibilities. Claims assistants are primarily responsible for establishing the electronic claims folders to determine whether the dispositions of the claims and control actions have been appropriately identified. Veteran service representatives are responsible for providing veterans with explanations regarding the disability compensation benefits programs and entitlement criteria. They also are to conduct interviews, gather relevant evidence, adjudicate claims, authorize payments, and input the data necessary to generate the awards and notification letters to veterans describing the decisions and the reasons for them. Rating veterans service representatives are to make claims rating decisions and analyze claims by applying VBA's schedule for rating disabilities (rating schedule) against claims submissions; they also are to prepare rating decisions and the supporting justifications. Further, they are to inform the veteran service representative, who then notifies the claimant of the decision and the reasons for the decision. Supervisory veteran service representatives are to ensure that the quality and timeliness of service provided by VBA meets performance indicator goals. They are also responsible for the cost-effective use of resources to accomplish assigned outcomes. Decision review officers are to examine claims decisions and perform an array of duties to resolve issues raised by veterans and their representatives. They may conduct a new review or complete a review of a claim without deference to the original decision; they also can revise that decision without new evidence or clear and obvious evidence of errors in the original evaluation. The disability compensation claims process starts when a veteran (or other designated individual) submits a claim to VA in paper or electronic form. If submitted electronically, a claim folder is created automatically. 
When a paper claim is submitted, a claims assistant creates the electronic folder. Specifically, when a regional office receives a new paper claim, the receipt date is recorded electronically and the paper files (e.g., medical records and other supporting documents) are shipped to one of four document conversion locations so that the supporting documents can be scanned and converted into a digital image. In the processing of both electronic and paper claims, a veteran service representative reviews the information supporting the claim and helps identify any additional evidence that is needed to evaluate the claim, such as the veteran's military service records, medical examinations, and treatment records from medical facilities and private medical service providers. Also, if necessary to provide support to substantiate the claim, the department performs a medical examination on the veteran. Once all of the supporting evidence has been gathered, a rating veterans service representative evaluates the claim and determines whether the veteran is eligible for benefits. If so, the rating veterans service representative assigns a disability rating (expressed as a percentage). A veteran who submits a claim with multiple disabilities receives a single composite rating. If the veteran is due to receive compensation, an award is prepared and the veteran is notified of the decision. A veteran can reopen a claim for additional disability benefits if, for example, he or she experiences a new or worsening service-connected disability. If the veteran disagrees with the regional office's decision on the additional claim, a written notice of disagreement may be submitted to the regional office to appeal the decision, and the veteran may request to have the appeal processed at the regional office by a decision review officer or through the Board of Veterans' Appeals. Figure 1 presents a simplified view of VA's disability compensation claims process. 
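The flow just described, from submission through notification and possible reopening or appeal, can be modeled as a small set of stages and allowed transitions. The sketch below is illustrative only; the stage names are informal labels of ours, not values used by VBA's systems.

```python
# Illustrative model of the disability claims flow described above.
# Stage names are informal labels, not actual VBA system values.

CLAIM_FLOW = {
    "submitted": {"folder_established"},
    "folder_established": {"evidence_gathering"},
    "evidence_gathering": {"rating"},
    "rating": {"award_prepared", "denied"},
    "award_prepared": {"notified"},
    "denied": {"notified"},
    "notified": {"reopened", "appealed", "closed"},
}

def advance(stage, next_stage):
    """Move a claim to the next stage, rejecting invalid transitions."""
    if next_stage not in CLAIM_FLOW.get(stage, set()):
        raise ValueError(f"cannot move from {stage} to {next_stage}")
    return next_stage

stage = "submitted"
for step in ["folder_established", "evidence_gathering",
             "rating", "award_prepared", "notified"]:
    stage = advance(stage, step)
print(stage)  # notified
```

A reopened claim or a notice of disagreement would re-enter the flow from the "notified" stage, mirroring the appeal path described above.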
VBA began the transformation of its paper-intensive claims process to a paperless environment in March 2009, and the effort became formally established as the Veterans Benefits Management System program in May 2010. VBA's initial plans for VBMS emphasized the development of a paperless claims platform to fully support the processing of disability compensation and pension benefits, as well as appeals. The program's primary focus was to convert existing paper-based claims folders into electronic claims folders (eFolders) to allow VBA staff to access claims information and evidence in an electronic format. Beyond the establishment of eFolders, VBMS is intended to streamline the entire disability claims process, from establishment through award, by automating rating decision recommendations, award and notification processes, and communications between VBA and the veteran throughout the claims life cycle. The system is also intended to assist in eliminating the claims backlog and serve as the enabling technology for quicker, more accurate, and integrated claims processing in the future. Moreover, it is to replace many of the key outdated legacy systems--which are still in use today--for managing the claims process, including:

- Share--used to establish claims; it records and updates basic information about veterans and dependents.
- Modern Award Processing-Development--used to manage the claims development process, including the collection of data to support the claims and tracking of them.
- Rating Board Automation 2000--provides information about laws and regulations pertaining to disabilities, which are used by rating specialists in evaluating and rating disability claims.
- Award--used to prepare and calculate the benefit award based on the rating specialist's determination of the claimant's percentage of disability. It is also used to authorize the claim for payment.
VBMS is to consist of three modules:

- VBMS-Core is intended to provide the foundation for document processing and storage during the claims development process, including establishing claims; viewing and storing electronic documents in the eFolder; and tracking evidence requested from beneficiaries. The eFolder serves as a digital repository for all documents related to a claim, such as the veteran's military service records, medical examinations, and treatment records from VA and Department of Defense medical facilities, and from private medical service providers. Unlike with paper files, this evidence can be reviewed simultaneously by multiple VBA claims processors at any location.
- VBMS-Rating is to provide raters with Web-accessible tools, including rules-based rating calculators and the capability for automated decision recommendations. For example, the hearing loss calculator is to automate decisions using objective audiology data and rules-based functionality to provide the rater with a suggested rating decision. In addition, the module is expected to include stand-alone evaluation builders--essentially interactive disability rating schedules--for all parts of the human body. With this tool, the rater uses a series of check boxes to identify the veteran's symptoms and the evaluation builder identifies the proper diagnostic code and the level of compensation based on those symptoms.
- VBMS-Awards is to provide an automated award and notification process to improve award accuracy and reduce rework associated with manual development of awards. This module is intended to automate and standardize communications between VBA and the veteran at the final stages of the claims process.

VBA is using an agile software development methodology to develop, test, and deliver the system's functionality to its users.
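As one way to picture the evaluation-builder concept described above, the sketch below maps checked symptom boxes to a suggested rating percentage. The symptom counts and percentages are invented for illustration and are not taken from VA's actual rating schedule.

```python
# Minimal sketch of a rules-based rating suggestion, in the spirit of
# the VBMS-Rating evaluation builders. Thresholds and percentages are
# hypothetical, not drawn from VA's rating schedule.

ILLUSTRATIVE_SCHEDULE = [
    # (minimum number of checked symptom boxes, suggested rating percent)
    (4, 50),
    (2, 30),
    (1, 10),
]

def suggested_rating(checked_symptoms):
    """Return a suggested rating percentage from checked symptom boxes."""
    count = len(set(checked_symptoms))
    for minimum, percent in ILLUSTRATIVE_SCHEDULE:
        if count >= minimum:
            return percent
    return 0

print(suggested_rating(["tinnitus", "vertigo", "hearing_loss"]))  # 30
```

The real evaluation builders also map symptoms to a diagnostic code; this sketch shows only the rules-based suggestion step that supports, rather than replaces, the rater's decision.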
An agile approach allows subject matter experts to validate requirements, processes, and system functionality in increments, and to deliver the functionality to users in shorter cycles. Accordingly, the strategic road map that the VBMS Program Management Office is using to guide the system development effort indicated that releases of system functionality were to occur every 6 months. In a March 2013 Senate Veterans Affairs Committee hearing, VA's Under Secretary for Benefits stated that VBMS development was expected to be completed in 2015. Our September 2015 report noted that, since completing rollout of the initial version of VBMS at all regional offices in June 2013, VBA has continued developing and implementing additional system functionality and enhancements that support the electronic processing of disability compensation claims. As a result, 95 percent of records related to veterans' disability claims are electronic and reside in the system. However, while the Under Secretary for Benefits stated in March 2013 that the development of the system was expected to be completed in 2015, implementation of functionality to fully support electronic claims processing was delayed until beyond 2015. Specifically, even with the progress VBA has made toward developing and implementing the system, the timeline for initial deployment of a national workload management capability was delayed beyond the originally planned date of September 2014 to October 2015, with additional deployment to occur throughout fiscal year 2016. Efforts undertaken thus far have addressed the strategic road map's objective to deliver a national workload management capability and have entailed developing the technology and business processes needed to support the National Work Queue, which is intended to handle new disability claims in a centralized queue and assign claims to the next regional office with available capacity. 
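The assignment idea behind the National Work Queue, routing each new claim to the next regional office with available capacity, can be sketched roughly as follows. The office names and capacity figures here are hypothetical examples, not VBA data.

```python
# Rough sketch of centralized, capacity-aware claim assignment in the
# spirit of the National Work Queue. Offices and capacities are
# hypothetical examples.
from collections import deque

def assign_claims(claims, capacities):
    """Assign each claim to the next office with spare capacity, round robin."""
    rotation = deque(capacities)  # office names in rotation order
    assignments = {}
    for claim in claims:
        for _ in range(len(rotation)):
            office = rotation[0]
            rotation.rotate(-1)
            if capacities[office] > 0:
                capacities[office] -= 1
                assignments[claim] = office
                break
    return assignments

print(assign_claims(["c1", "c2", "c3"], {"Office A": 1, "Office B": 2}))
```

The point of the design is that claims are balanced across the whole network rather than worked only by the regional office that received them.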
The Program Management Office began work for the National Work Queue in June 2014, and had intended to deploy the first phase of functionality to users in September 2014. However, in late May 2015, the Director of the office informed us that VBA had delayed the initial rollout of the National Work Queue until October 2015 so that the department could fully focus on meeting its goal to eliminate the claims backlog by the end of September 2015. Following the initial rollout, the Program Management Office intends to implement the National Work Queue at all regional offices through fiscal year 2016. Beyond this effort, VBMS program documentation identified additional work to be performed after fiscal year 2015 to fully automate disability claims processing. Specifically, the Program Management Office identified the need to automate steps associated with a veteran's request for an increase in disability benefits, such as when an existing medical condition worsens. In addition, the Director stated that the Program Management Office intends to develop a capability to automatically associate veterans' correspondence when a new piece of evidence to support a claim is received electronically or scanned into VBMS. The office also plans to integrate VBMS with VA's Integrated Disability Evaluation System, which contains the results of veterans' disability medical examinations, as well as with external systems that contain military service treatment records for veterans, including those at the National Personnel Records Center. Further, while VBMS was planned to support the processing of disability compensation and pension benefits, VBA has not yet developed and implemented end-to-end pension processing capabilities in the system. Without such capabilities, the agency must continue to rely on three legacy systems to process pension claims. 
Specifically, program officials stated that both the Modern Award Processing-Development and Award legacy systems contain functionality related to processing pensions and will need to remain operational until VBMS can process pension claims. In addition, the Share legacy system contains functionality that is still needed throughout the claims process. Program documentation indicates that the first phase of pension-related functionality is expected to be introduced in December 2015. However, VBA has not yet developed plans and schedules for retiring the legacy systems and for fully developing and implementing their functionality in VBMS. VBA's progress toward developing and implementing appeals processing capabilities in VBMS also has been limited. Specifically, although the information in a veteran's eFolder is available to appeals staff for review, the appeals process for disability claims is not managed using the new system. According to VA's fiscal year 2016 budget submission, the department is pursuing a separate effort to manage end-to-end appeals modernization, and has requested $19.1 million in fiscal year 2016 funds to develop a system that will provide functionality not available in VBMS or other VA systems. The Director of the Program Management Office stated that VBA is currently analyzing commercial IT solutions that can meet the business requirements for appeals, such as providing document navigation capabilities. According to the Director, VBMS, nevertheless, is expected to be part of the appeals modernization solution because components of the system, such as the eFolder and certain workload management functionality, are planned to continue supporting appeals management. In the Director's view, the fact that VBMS requires additional development beyond 2015 does not reflect a delay in completing the system's development. Instead, the additional time is a consequence of decisions to enlarge the program's scope over time. 
The Director stated that the system's original purpose had been to serve primarily as an electronic document repository, and that the program has met this goal. In addition, the Director said that, as the program's mission has expanded to support the department's efforts to eliminate the disability claims backlog, the office has had to re-prioritize, add, and defer system requirements to accommodate broader departmental decisions and, in some cases, regulatory changes. For example, the office was tasked with developing functionality in VBMS to meet regulatory requirements for processing disability claims using mandatory forms. Officials in the office said they were made aware of this requirement well after system planning for the March 2015 release had been completed, which had introduced significant complexity to their development work. Finally, VBA included in its strategic road map a number of objectives related to VBMS that are planned to be addressed in fiscal year 2016. Officials in the Program Management Office stated that they intend to develop tactical plans that identify expected capabilities to be provided in future releases. Nevertheless, due to the department's incremental approach to developing and implementing VBMS, VBA has not yet produced a plan that identifies when VBMS will be completed and can be expected to fully support disability and pension claims processing and appeals. Thus, it will be difficult for the department to hold its managers accountable for meeting the time frame and for demonstrating progress. Accordingly, we recommended that the department develop an updated plan for VBMS that includes a schedule for when VBA intends to complete development and implementation of the system, including capabilities that fully support disability claims, pension claims, and appeals processing. VA agreed with our recommendation. 
Consistent with our guidance on estimating program costs, an important aspect of planning for IT projects, such as VBMS, involves developing a reliable cost estimate to help managers evaluate a program's affordability and performance against its plans, and provide estimates of the funding required to efficiently execute a program. In 2011, VBA submitted to the Office of Management and Budget a life-cycle cost estimate for VBMS of $934.8 million. This estimate was intended to capture costs for the system's development, deployment, sustainment, and general operating expenses through the end of fiscal year 2018. However, as of July 2015, the program's actual costs had exceeded the 2011 life-cycle cost estimate. Specifically, VBMS received approximately $1 billion in funding through the end of fiscal year 2015 and the department has requested an additional $290 million for the program in fiscal year 2016. A significant concern is that the Program Management Office has not reliably updated the VBMS life-cycle cost estimate to reflect the program's expanded scope and timelines for completion of the system. This is largely attributable to the fact that the office has developed cost estimates for 2-year project cycles that are used for VBMS milestone reviews under the Office of Information and Technology's Project Management Accountability System. When asked how the Program Management Office arrived at the cost estimates reported in the milestone reviews, program officials stated that they developed rough order of magnitude estimates for each business need based on expert knowledge of the system, past development and engineering experience, and lessons learned. However, while this approach may have provided adequate information for VBA to prioritize VBMS system requirements to be addressed in the next release, it has not produced estimates that could serve as a basis for identifying the system's funding needs. 
Because it is typically derived from limited data and in a short time, a rough order of magnitude analysis is not equivalent to a budget-quality cost estimate and may limit an agency's ability to identify the funding necessary to efficiently execute a program. In addition, the Program Management Office's annual operating plan, which is generally limited to high-level information about the program's organization, priorities, staffing, milestones, and performance measures for fiscal year 2015, also shows estimated costs totaling $512 million for VBMS development from fiscal years 2017 through 2020. However, according to the Director of the Program Management Office, this estimate was also developed using rough order of magnitude analysis. Further, the estimate does not provide reliable information on life-cycle costs because it does not include estimated IT sustainment and general operating expenses. Thus, even though the Program Management Office developed rough order of magnitude cost estimates for VBMS, these estimates have not been sufficiently reliable to effectively identify the program's funding needs. Instead, during the last 3 fiscal years, the Director has had to request an additional $118 million in IT development funds to meet program demands and to ensure support for ongoing development contracts. Specifically, in May 2013, VA requested $13.3 million to support additional work on VBMS. Then, during fiscal year 2014, VA reprogrammed $73 million of unobligated IT sustainment funds to develop functionality to transfer service treatment records from the Department of Defense to VA, and to support development of VBMS-Core functionality. 
In December 2014, the Program Management Office identified the need for additional fiscal year 2015 funds for ongoing system development contracts for VBMS-Core and VBMS-Awards, and, in late April 2015, department leadership submitted a letter to Congress requesting permission to reprogram $31.7 million to support work on these contracts, the National Work Queue, and other VBMS efforts. According to the Program Management Office Director, the need to request additional funding does not represent additional risk to the program, but is the result of VBMS's success. The Director further noted that, as the Program Management Office has identified opportunities to increase functionality to improve the electronic claims process, their funding needs have also increased. Nevertheless, evolution of the VBMS program illustrates the importance of continuous planning, including cost estimating, so that trade-offs between cost, schedule, and scope can be effectively managed. Further, without a reliable estimate of the total costs associated with completing work on VBMS, stakeholders will have a limited view of VBMS's future resource needs and the program is at risk of not being able to secure appropriate funding to fully develop and implement the system. Therefore, we recommended that VA develop an updated plan for VBMS that includes the estimated cost to complete development and implementation of the system. VA agreed with our recommendation. Our and other federal IT guidance recognize the importance of defining program goals and related performance targets, and using such targets to assess progress in achieving the goals. System performance and response times have a large impact on whether staff successfully complete work tasks. If systems are not responding at agreed-upon levels for availability and performance, it can be difficult to ensure that staff will complete tasks in a timely manner. 
This is especially important in the VBA claims processing environment, where staff are evaluated on their ability to process claims in a timely manner. VBA reported that, since its initial rollout in January 2013, VBMS has exceeded its 95 percent goal for availability. Specifically, the system was available at a rate of 98.9 percent in fiscal year 2013 and 99.3 percent in fiscal year 2014. Through May of fiscal year 2015, it was available for 99.98 percent of the time. Nevertheless, while VBA has reported exceeding its availability goals for VBMS, the system has also experienced periods of unavailability, many times at a critical level affecting all users. Specifically, since January 2013, VBA reported 57 VBMS outages that totaled about 117 hours of system unavailability. The system experienced about 18 hours of outages in January 2014, which were almost entirely at the critical level and affected all users. It reported experiencing only 2 system outages since July 2014--a 30-minute critical outage in December 2014 and a 23-minute critical outage in May 2015. In addition to system availability, VBA monitors system response times for each of the VBMS modules using an application that measures the amount of time taken for each transaction. From September 2013 through April 2015, VBA reported a decrease in average response times for VBMS-Core and VBMS-Rating. It attributed the decrease in response times to continuous engineering improvements to system performance. Program officials also explained that the difference in response times between modules was due to the type of information that is being pulled into each module from various other VBA systems. For example, both VBMS-Core and VBMS-Rating require information from the VBA corporate database, but VBMS-Core is populated with data from multiple VBA systems in addition to the corporate database. 
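The availability percentages reported above can be derived directly from logged outage durations. As a minimal sketch--using illustrative figures, not VBA's actual outage log:

```python
def availability(period_hours, outage_minutes):
    """Percentage of a period a system was up, given a list of outage durations."""
    downtime_hours = sum(outage_minutes) / 60.0
    return 100.0 * (1.0 - downtime_hours / period_hours)

# Illustrative only: a 30-minute and a 23-minute outage in a 720-hour month.
print(round(availability(720, [30, 23]), 2))  # -> 99.88
```

Even two short critical outages in a month leave measured availability well above a 95 percent goal, which is consistent with the pattern VBA reported.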
Program officials told us that specific goals for mean transaction response times have not been established because they feel that adequate tools are in place to monitor system performance and provide alerts if there are response time issues. For example, VBMS performance is monitored in real time by dedicated staff at a contractor's facility, users have access to a live chat feature where they can provide feedback on any issues they are experiencing with the system, and the VBMS help desk offers another avenue for users to provide feedback on the system's performance. The officials also noted that, because transaction response times have decreased, which can be indicative of an improvement to system performance, they are focusing their resources on adding additional functionality instead of trying to get the system to achieve a specific average transaction response time. While VBA's monitoring of VBMS's performance is commendable and the system's performance and response times have improved over time, the system is still in development and there is no guarantee that performance will remain at current levels as the system evolves. Performance targets and goals for VBMS response times would provide users with an expectation of the system response times they should anticipate, and management with an indication of how well the system is performing relative to performance goals. To address this situation, we recommended that the department establish goals for system response time and use the goals as a basis for periodically reporting actual system performance. VA agreed with this recommendation. A key element of successful system testing is appropriately identifying and handling defects that are discovered during testing. Outstanding defects can delay the release of functionality to end users, denying them the benefit of features. 
Key aspects of a sound defect management process include the planning, identification and classification, tracking, and resolution of defects. Leading industry and government organizations consider defect management and resolution to be among the primary goals of testing. The VBMS program has defect management policies in place and is actively performing defect management activities. Specifically, in October 2012, the department developed the VBMS Program Management and Technical Support Defect Management Plan, which describes the program's defect management process. The plan was updated in March 2015 and describes, among other things, the process for identifying, classifying, tracking, and resolving VBMS defects. For example, it provides criteria for assigning four different levels of severity for defects-- critical, high, medium, and low. According to the plan, critical severity defects are characterized by complete system or subsystem failure, complete loss of functionality, and compromised security or confidentiality. Critical defects also have extensive user impact and do not have workarounds. High severity defects can have major user impact, leading to significant loss of system functionality. Medium severity defects can have moderate user impact and lead to moderate loss of functionality. For high and medium severity defects, workarounds could exist. Low severity defects lead to minor loss of functionality with no workaround necessary. According to the Program Management Office, high, medium, and low severity defects do not need to be resolved prior to a system release. The Program Management Office uses an automated tool to monitor and track defects in the VBMS defect repository. It is used to produce a daily defect management report that is shared with VBMS leadership, and to provide the current status of all open defects identified in testing of a forthcoming VBMS release or identified during production of a previous release. 
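The severity scheme and release rule the plan describes can be sketched compactly. The structure below is a minimal illustration with names and fields of our own choosing, not VA's actual defect repository or tooling:

```python
from dataclasses import dataclass

# Per the plan as described: only critical defects must be resolved before a
# release; high, medium, and low severity defects may remain open (the first
# two possibly with workarounds).
BLOCKS_RELEASE = {"critical"}

@dataclass
class Defect:
    defect_id: str
    severity: str        # "critical" | "high" | "medium" | "low"
    has_workaround: bool

def release_ready(open_defects):
    """True if no open defect is severe enough to block the release."""
    return not any(d.severity in BLOCKS_RELEASE for d in open_defects)

open_defects = [Defect("D-1", "high", True), Defect("D-2", "medium", True)]
print(release_ready(open_defects))  # -> True: high/medium defects do not block
```

Under this rule a release can ship with many open non-critical defects, which is how release 8.1 could go out with 254 open defects and still satisfy the plan.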
According to the defect management plan, defects can be resolved in a number of different ways, and, once a defect has been fixed, tested, and has passed testing, it is considered done or resolved. Defects that cannot be attributed to an existing requirement are reclassified as system enhancements and considered resolved, as they do not affect a current system release requirement. A defect is also considered resolved if it is determined to work as designed, if it duplicates another defect, or if it is no longer evident in the system. From March 2014 through March 2015, the total number of VBMS defects declined as release dates approached for four releases (7.0, 7.1, 8.0, and 8.1). Additionally, to the department's credit, no critical defects remained at the time of each of these releases. However, even with the department's efforts to resolve defects prior to a VBMS release, defects that affected system functionality remained open at the time of the releases. Specifically, of the 254 open defects at the time of VBMS release 8.1, 76 were high severity, 99 were medium severity, and 79 were low severity. Examples of medium and high severity defects that remained open at the time of VBMS release 8.1 included the following:

E-mail addresses for dependents only occasionally allowed special characters (medium).

The intent to file for compensation/pension had an active status for a deceased veteran (medium).

Creating a claim in legacy or VBMS would remove the Homeless, POW, and/or Gulf War Registry Flash (high).

Disability name appeared incorrectly in Issue and Decision text for amyotrophic lateral sclerosis (ALS) (high).

VBMS-Core did not recognize updated rating decisions from VBMS-Rating (high).

According to the Program Management Office, these defects were communicated to users and an appropriate workaround for each was established. 
Nevertheless, even with the workarounds, high and medium severity open defects, which by definition impact system functionality, degraded users' experiences with the system. Continuing to deploy system releases with defects that impact system functionality increases the risk that these defects will diminish users' ability to process disability claims in an efficient manner. Accordingly, we recommended that VA reduce the incidence of high and medium severity level defects that are present at the time of future VBMS releases. The department agreed with this recommendation. Our September 2015 report noted that, in addition to having defined program goals and related performance targets, leading practices identify continuous customer feedback as a crucial element of IT project success. Particularly for projects like VBMS, where development activities are iterative, customer and end user perspectives and insights can be solicited through various methods--user acceptance testing, interviews, complaint programs, and satisfaction surveys--to validate or raise questions about the project's implementation. Further, leading practices emphasize that periodic customer satisfaction data should be proactively used to improve performance and demonstrate the level of satisfaction the project is delivering. The Office of Management and Budget has developed standards and guidelines in survey research that are generally consistent with best practices and call for statistically valid data collection efforts to be used in fulfilling agencies' customer service data collection. These leading practices also stress the importance of centrally integrating all customer feedback data in order to have more complete diagnostic information to guide improvement efforts. VA has used a variety of methods for obtaining customer and end user feedback on the performance of VBMS. 
For example, the department solicits end user involvement and feedback in the iterative system development process based on user acceptance criteria. According to the Senior Project Manager for VBMS Development within the Office of Information and Technology, at the end of each development cycle and before a new version of VBMS is deployed, end users are involved in user acceptance testing and a final customer acceptance meeting. The department also provides training to a subset of end users--known as "superusers"--on the updated functionality introduced in a new version of VBMS. These superusers are expected to train the remaining users in the field on the new version's features. The department tracks the overall satisfaction level with training received after each VBMS major release. However, this tracking is limited to superusers' satisfaction with the training, rather than their satisfaction with the system itself. Further, the department solicits customer feedback about the system through interviews. For example, the Director of the Program Management Office stated that the Under Secretary for Benefits hosts a weekly phone call with bargaining unit employees as a "pulse check" on VBA transformation activities, including VBMS. According to this official, the VBA Office of Field Operations also offers an instant messaging chat service to all regional office employees to solicit feedback about the latest deployment of VBMS functionality. Another method by which the department obtains customer input is through a formal feedback process. For example, according to the Director, VA provides national service desk support to assist users in troubleshooting system issues and identifying system defects. In addition, VBMS applications include a built-in feature that enables users to provide feedback to the Program Management Office on problems with the system. According to the Director, the feedback received by the office also helps to identify user training issues. 
Nevertheless, while VA has taken these various steps to obtain feedback on the performance and implementation of VBMS, it has not established goals to define user satisfaction that can be used as a basis for gauging the success of its efforts to promote satisfaction with the system. Further, while the efforts that have been taken to solicit users' feedback provide VBA with useful insights about particular problems, data are not centrally compiled or sufficient for supporting overall conclusions about whether customers are satisfied. In addition, VBA has not employed a customer satisfaction survey of claims processing employees who use the system on a daily basis to process disability claims. Such a survey could provide a more comprehensive picture of overall customer satisfaction and help identify areas where the system's development and implementation efforts might need additional attention. According to the Director of the Program Management Office, VBA has not used a survey to solicit feedback because of concern that such a mechanism may negatively impact the efficiency of claims processors in completing disability compensation claims on behalf of veterans. Further, the Director believed that the office had the benefit of receiving ongoing end user input on VBMS by virtue of the intensive testing cycles, as well as several of the other mechanisms by which end users have provided ongoing feedback. Nevertheless, without establishing user satisfaction goals and collecting the comprehensive data that a statistically valid survey can provide, the Program Management Office limits its ability to obtain a comprehensive understanding of VBMS users' satisfaction with the system. Thus, VBA could miss opportunities to improve the efficiency of its claims process by increasing satisfaction with VBMS. 
Therefore, we recommended that VA develop and administer a statistically valid survey of VBMS users to determine the effectiveness of steps taken to make improvements in users' satisfaction. The department agreed with this recommendation. In response to a statistical survey that we administered, most of the VBMS users reported that they were satisfied with the system that had been implemented at the time of the survey. These users (claims assistants, veteran service representatives, supervisory veteran service representatives, rating veterans service representatives, decision review officers, and others) were satisfied with the three modules of VBMS. Specifically, an estimated 59 percent of the claims processors were satisfied with VBMS-Core; an estimated 63 percent were satisfied with the Rating module, and an estimated 67 percent were satisfied with the Awards module. Nevertheless, while a majority of users were satisfied with the three modules, decision review officers expressed considerably less satisfaction than other users with VBMS-Core and VBMS-Rating. Specifically, for VBMS-Core, an estimated 27 percent of decision review officers were satisfied compared to an estimated 59 percent of all roles of claims processors (including decision review officers) who were satisfied. In addition, for VBMS-Rating, an estimated 38 percent of decision review officers were satisfied, compared to an estimated 63 percent of all roles of claims processors. Decision review officers were considerably less satisfied with VBMS in comparison to all roles of claims processors in additional areas. For example, an estimated 26 percent of decision review officers viewed VBMS-Core as an improvement over the previous legacy system or systems for establishing claims and storing and reviewing electronic documents related to a claim in an eFolder. In contrast, an estimated 58 percent of all users (including decision review officers) viewed the Core module as an improvement. 
In addition, an estimated 26 percent of decision review officers viewed VBMS-Rating as an improvement over the previous systems with respect to providing Web-accessible tools, including rules-based rating calculators, to assist in making claims rating decisions. In contrast, an estimated 55 percent of all roles of claims processors viewed the Rating module as an improvement. For VBMS-Awards, an estimated 61 percent of all roles viewed this module as an improvement over the previous systems to automate the award and notification process. Similarly, in considering the three modules, a majority of users (including decision review officers) would have chosen VBMS over the legacy system or systems. However, decision review officers indicated that they were less likely to have chosen VBMS-Core and VBMS-Rating over legacy systems. Specifically, an estimated 27 percent of decision review officers would have chosen VBMS-Core compared to an estimated 60 percent of all roles of claims processors. In addition, an estimated 27 percent of decision review officers would have chosen VBMS-Rating compared to 61 percent of all roles that would have chosen the system over the legacy system or systems. For VBMS-Awards, an estimated 67 percent of all roles would have chosen this module over the previous systems. Decision review officers perform an array of duties to resolve claims issues raised by veterans and their representatives. They may also conduct a new review or complete a review of a claim without deference to the original decision, and, in doing so, must click through all documents included in the e-Folder. Survey comments from decision review officers stated, for example, that reviews in the VBMS paperless environment take longer because of the length of time spent loading, scrolling, and viewing each document (particularly if the documents are large, such as a service medical record file). 
Additionally, multiple decision review officers commented that it is easier and faster to review documents in a paper file. Although such comments provide illustrative examples of individual decision review officers' views and are not representative, according to the Director of the Program Management Office, decision review officers' relative dissatisfaction is not surprising because the system does not yet include functionality that supports their work, which primarily relates to appeals processing. To improve this situation, we recommended that VA establish goals that define customer satisfaction with the system and report on actual performance toward achieving the goals based on the results of our survey of VBMS users and any future surveys VA conducts. The department concurred with this recommendation. In conclusion, while VA has made progress in developing and implementing VBMS, additional capabilities to fully process disability claims were delayed beyond when the system's completion was originally planned. Further, in the absence of a plan that identifies when and at what cost the system can be expected to fully support disability compensation and pension claims processing and appeals, holding VA management accountable for meeting a schedule, while ensuring sufficient program funding, will be difficult. Also, without goals for system response times, users do not have an expectation of the response times they can anticipate, and management lacks an indication of how well the system is performing. Furthermore, continuing to deploy system releases with defects that impact functionality increases the risk that these defects will diminish users' ability to process disability claims in an efficient manner. 
Lastly, although the results of our survey provide VBA with useful data about users' satisfaction with VBMS (e.g., the majority of users are satisfied), without having goals to define user satisfaction, VBA does not have a basis for gauging the success of its efforts to improve the system. As we stressed in our report, attention to these issues can improve VA's efforts to effectively complete the development and implementation of VBMS. Fully addressing our recommendations, as VA agreed to do, should help the department give appropriate attention to these issues. Chairman Miller, Ranking Member Brown, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information about this testimony, contact Valerie C. Melvin at (202) 512-6304 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs are listed on the last page of the testimony. Other key contributors to this testimony include Mark Bird (assistant director), Kavita Daitnarayan, Kelly Dodson, Nancy Glover, Brandon S. Pettis, and Eric Trout. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VBA pays disability benefits for conditions incurred or aggravated while in military service, and pension benefits for low-income veterans who are either elderly or have disabilities unrelated to military service. In fiscal year 2014, the department paid about $58 billion in disability compensation and about $5 billion in pension claims. The disability claims process has been the subject of attention by Congress and others, due in part to long waits for processing claims and a large backlog of claims. To process disability and pension claims more efficiently, VA began development and implementation of an electronic, paperless system--VBMS--in 2009. This statement summarizes GAO's September 2015 report (GAO-15-582) on (1) VA's progress toward completing the development and implementation of VBMS and (2) the extent to which users report satisfaction with the system. As GAO reported in September 2015, the Veterans Benefits Administration (VBA) within the Department of Veterans Affairs (VA) has made progress in developing and implementing the Veterans Benefits Management System (VBMS), with deployment of the initial version of the system to all of its regional offices as of June 2013. Since then, VBA has continued developing and implementing additional system functionality and enhancements that support the electronic processing of disability compensation claims. As a result, 95 percent of records related to veterans' disability claims are electronic and reside in the system. However, VBMS is not yet able to fully support disability and pension claims, as well as appeals processing. Nevertheless, while the Under Secretary for Benefits stated in March 2013 that the development of VBMS was expected to be completed in 2015, implementation of functionality to fully support electronic claims processing has been delayed beyond 2015. In addition, VBA has not yet produced a plan that identifies when the system will be completed. 
Accordingly, holding VA management accountable for meeting a time frame and for demonstrating progress will be difficult. As VA continues its efforts to complete development and implementation of VBMS, three areas could benefit from increased management attention.

Cost estimating: The program office does not have a reliable estimate of the cost for completing the system. Without such an estimate, VA management and the department's stakeholders have a limited view of the system's future resource needs, and the program risks not having sufficient funding to complete development and implementation of the system.

System availability: Although VBA has improved its performance regarding system availability to users, it has not established system response time goals. Without such goals, users do not have an expectation of the system response times they can anticipate and management does not have an indication of how well the system is performing relative to performance goals.

System defects: While the program has actively managed system defects, a recent system release included unresolved defects that impacted system performance and users' experiences. Continuing to deploy releases with large numbers of defects that reduce system functionality could adversely affect users' ability to process disability claims in an efficient manner.

VA has not conducted a customer satisfaction survey that would allow the department to compile data on how users view the system's performance, and ultimately, to develop goals for improving the system. GAO's survey of VBMS users found that a majority of them were satisfied with the system, but decision review officers were considerably less satisfied. Although the results of GAO's survey provide VBA with data about users' satisfaction with VBMS, the absence of user satisfaction goals limits the utility of survey results. 
Specifically, without having established goals to define user satisfaction, VBA does not have a basis for gauging the success of its efforts to promote satisfaction with the system, or for identifying areas where its efforts to complete development and implementation of the system might need attention. In its September 2015 report, GAO recommended that VA develop a plan with a time frame and a reliable cost estimate for completing VBMS, establish goals for system response time, minimize the incidence of high and medium severity system defects for future VBMS releases, assess user satisfaction, and establish satisfaction goals to promote improvement. VA concurred with GAO's recommendations.
Medicare is typically the primary source of health insurance coverage for seniors. Individuals who are eligible for Medicare automatically receive Hospital Insurance, known as part A, which helps pay for inpatient hospital care, skilled nursing facility services following a hospital stay, hospice care, and certain home health care services. Beneficiaries generally pay no premium for this coverage but are liable for required deductibles, coinsurance, and copayments. Medicare-eligible beneficiaries may elect to purchase Supplemental Medical Insurance, known as part B, which helps pay for selected physician, outpatient hospital, laboratory, and other services. Beneficiaries must pay a premium for part B coverage, currently $50 per month. Beneficiaries are also responsible for part B deductibles, coinsurance, and copayments. See table 1 for a summary of Medicare's beneficiary cost-sharing requirements for 2001. To help pay for some of Medicare's cost-sharing requirements as well as some benefits not covered by Medicare parts A or B, most Medicare beneficiaries have some type of supplemental coverage. Privately purchased Medigap is an important source of this supplemental coverage. Other supplemental coverage options may include coverage through an employer, enrolling in a Medicare+Choice plan that typically offers lower cost-sharing requirements and additional benefits such as prescription drug coverage in exchange for a restricted choice of providers, or assistance from Medicaid, the federal-state health financing program for low-income individuals, including aged or disabled individuals. The Omnibus Budget Reconciliation Act (OBRA) of 1990 required that Medigap plans be standardized in as many as 10 different benefit packages offering varying levels of supplemental coverage. All policies sold since July 1992 (except in three exempted states) have conformed to 1 of these 10 standardized benefit packages, known as plans A through J. (See table 2.) 
In addition, beneficiaries may purchase Medicare Select, a type of Medigap policy that generally costs less in exchange for a limited choice of providers. A high-deductible option is also available for plans F and J. Policies sold prior to July 1992 are not required to comply with these 10 standard packages. Insurers in Massachusetts, Minnesota, and Wisconsin are exempt from offering these standardized plans because these states standardized their Medigap policies prior to the establishment of the federal standardized plans. Medigap coverage is widely available to most beneficiaries. Federal law provides Medicare beneficiaries with guaranteed access to Medigap policies offered in their state of residence during an initial 6-month open-enrollment period, which begins on the first day of the month in which an individual is 65 or older and is enrolled in Medicare Part B. During this initial open-enrollment period, an insurer cannot deny Medigap coverage for any plan types it sells to eligible individuals, place conditions on the policies, or charge a higher price because of past or present health problems. Additional federal Medigap protections include "guaranteed-issue" rights, which provide beneficiaries over age 65 access to plans A, B, C, or F in certain circumstances, such as when their employer terminates retiree health benefits or their Medicare+Choice plan leaves the program or stops serving their area. Depending on laws in the state where applicants reside and their health status, insurers may choose to offer more than these four plans. Federal law also allows individuals who join a Medicare+Choice plan when they first become eligible for Medicare and who leave the plan within 1 year of joining to purchase any of the 10 standardized Medigap plans sold in their respective states. 
In 1999, about 10.7 million Medicare beneficiaries--more than one-fourth of all beneficiaries--had a Medigap policy to help cover Medicare's cost-sharing requirements as well as some benefits not covered by Medicare parts A or B. Of those having Medigap coverage in 1999, about 61 percent purchased 1 of the 10 standardized plans (A through J), while about 35 percent had supplemental plans that predate standardization. The remaining 4 percent had Medigap plans meeting state standards in the three states--Massachusetts, Minnesota, and Wisconsin--in which insurers are exempt from offering the federally standardized plans. Among the 10 standardized plans, over 60 percent of purchasers were enrolled in two mid-level plans (C or F), which cover part A and part B cost-sharing requirements but do not cover prescription drugs. There are several reasons why these plans may be particularly popular among beneficiaries. For example, both plans cover the part B deductible, which insurers report is a popular benefit for purchasers. They also represent two of the four plans that insurers are required to guarantee issue during special enrollment periods. With the exception of plan B, in which 13 percent were enrolled, less than 7 percent of beneficiaries selected any one of the remaining seven plans. (See fig. 1.) Enrollment in the three plans with prescription drug coverage--H, I, and J--is relatively low (a total of 8 percent of standardized plan enrollment) for several reasons. Insurance representatives noted that the drug coverage included in these plans is limited while the premium costs are higher than plans without this coverage. For example, under the Medigap plan with the most comprehensive drug coverage (plan J), a beneficiary would have to incur $6,250 in prescription drug costs to receive the full $3,000 benefit because of the benefit's deductible and coinsurance requirements. 
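The $6,250 figure follows from the structure of the standardized plan J drug benefit, commonly described as a $250 annual deductible, 50 percent coinsurance, and a $3,000 annual maximum. A sketch of that arithmetic--the parameter defaults are assumptions based on that description, not authoritative plan terms:

```python
def plan_j_drug_benefit(drug_costs, deductible=250, coinsurance=0.5, cap=3000):
    """Plan payment toward annual prescription drug costs.

    Defaults reflect the standardized plan J drug benefit as commonly
    described ($250 deductible, 50% coinsurance, $3,000 cap); treat them
    as illustrative assumptions.
    """
    plan_share = max(0.0, drug_costs - deductible) * coinsurance
    return min(plan_share, cap)

# A beneficiary must incur $6,250 in drug costs to collect the full benefit:
print(plan_j_drug_benefit(6250))  # (6,250 - 250) * 0.5 = 3,000
```

Below $6,250 in annual drug spending, the plan pays half of costs above the deductible; above it, the cap binds and the beneficiary bears all further costs.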
Moreover, insurers often medically underwrite these plans--that is, screen for health status--for beneficiaries enrolling outside of their open-enrollment period. Thus, individuals in poor health who want to purchase a plan with drug coverage may be denied or charged a higher premium. Further, insurers may be reluctant to market Medigap plans with prescription drug coverage because they would be required to offer them to any applicant regardless of health status during beneficiaries' initial 6-month open-enrollment period, according to NAIC officials. Finally, an insurance representative attributed low enrollment in these plans to beneficiaries who do not anticipate a need for a prescription drug benefit at the time they enroll. Relatively few beneficiaries have purchased the Medicare Select and high-deductible plan options, which were created to increase the options available to beneficiaries. About 9 percent of beneficiaries enrolled in standardized Medigap plans had a Medicare Select plan in 1999. With Medicare Select, beneficiaries buy 1 of the 10 standardized plans but are limited to choosing among hospitals and physicians in the plan's network except in emergencies. In exchange for a limited choice of providers, premiums are typically lower, averaging $979 in 1999, or more than $200 less than the average Medigap premium for a standardized plan. Similarly, insurers report that few individuals choose the typically lower-priced high-deductible option available for plans F and J. These options require beneficiaries to pay a $1,580 deductible before either plan covers any services. An NAIC official noted that these options may have relatively low enrollment because beneficiaries may prefer first-dollar coverage and no restrictions on providers. 
In addition, an insurance representative noted that administrative difficulties and higher costs associated with operating these plans have discouraged some insurers from actively marketing these products, which likely contributes to the low enrollment. Beneficiaries do not have access to Medicare Select plans in all states--NAIC reports that 15 states do not have insurers within the state selling Medicare Select plans. While consumers typically have access to all 10 standardized Medigap plans during their 6-month open-enrollment periods, the extent to which they can choose from among multiple insurers offering these plans varies depending on where they live. Insurance companies marketing Medigap policies must offer plan A and frequently offer plans B, C, and F, but are less likely to offer the other six plans, particularly those plans with prescription drug coverage. (See table 3.) Our review of state consumer guides and other information from states and insurers shows that during the 6-month open-enrollment period applicants typically have access to multiple Medigap insurers, with the most options available for plans A, B, C, and F. For example, in 19 states, every Medigap insurer offered plan F to these beneficiaries. In contrast, fewer insurers offer Medigap plans with prescription drug benefits. Although in most states several insurers offer these plans, state consumer guides indicate that only one insurer offers plan J in New York and plans H, I, and J in Rhode Island. In addition, no insurers market plan H in Delaware or plans F, G, or I in Vermont. Appendix II includes a summary of the number of insurers estimated to offer Medigap plans in each state. While beneficiaries in most states have access to multiple insurers for most Medigap plans, a few insurers represent most Medigap enrollment. In all but one state, United HealthCare Insurance Company or a Blue Cross/Blue Shield plan represents the largest Medigap insurer.
Nationally, about 64 percent of Medigap policies in 1999 were sold by either United HealthCare or a Blue Cross/Blue Shield plan. United HealthCare offers all 10 Medigap policies to AARP members during their initial 6-month open-enrollment period in nearly all states and charges applicants in a geographic area the same premium regardless of their health status (a rating practice known as community rating). Outside of beneficiaries' 6-month open-enrollment period, United HealthCare also offers applicants without end-stage renal disease (i.e., permanent kidney failure) plans A through G without medically underwriting--that is, screening beneficiaries for health status--in states where it sells these policies. In an effort to minimize adverse selection and to remain competitive, United HealthCare medically underwrites applicants for the three plans with prescription drug coverage who are outside their initial open-enrollment period. Medicare beneficiaries who are not in their open-enrollment period or do not otherwise qualify for one of the special enrollment periods, such as when an employer eliminates supplemental coverage or a Medicare+Choice plan stops serving an area, are not guaranteed access under federal law to any Medigap plans. Depending on their health, these individuals may find coverage alternatives to be reduced or more expensive. Outside of the initial or special open-enrollment periods, access to any Medigap plan could depend on the individual's health, the insurer's willingness to offer coverage, and states' laws. Further, beneficiaries whose employer terminates their health coverage or whose Medicare+Choice plan withdraws from the program are only guaranteed access to Medigap plans A, B, C, and F, which do not offer prescription drug coverage.
Medicare beneficiaries can change Medigap policies, but may be subject to insurers' screening them for health conditions prior to allowing a change to a Medigap policy with more generous benefits outside open-enrollment/guaranteed-issue periods. If a person has a Medigap policy for at least 6 months and decides to switch plans, the new policy generally must cover all preexisting conditions. However, if the new policy has benefits not included in the first policy, the company may make a beneficiary wait 6 months before covering that benefit or, depending on the health condition of the applicant, may charge a higher premium or deny the requested change. According to an insurer representative, virtually all Medigap insurers will screen the health condition of applicants who want to switch to plans H, I, or J to avoid the potential for receiving a disproportionate share of applicants in poor health. Beneficiaries purchasing Medigap plans may still incur significant out-of-pocket costs for health care expenses in addition to their premiums. In 1999, the average annual Medigap premium was more than $1,300, although a number of factors, such as where a beneficiary lives and insurer rating practices, may contribute to significant variation in the premiums charged by insurers. Despite their supplemental coverage, Medicare beneficiaries with Medigap coverage paid more out-of-pocket for health care services (excluding long-term care facility care) than any other group of beneficiaries even though their self-reported health status was generally similar to other beneficiaries. Medigap plans can be relatively expensive, with an average annual premium of more than $1,300 in 1999. Premiums varied widely based on the level of coverage purchased. Among the 10 standardized plans, plan A, which provides the fewest benefits, was the least expensive with average premiums paid of nearly $900 per year. The most popular plans--C and F--had average premiums paid of about $1,200.
(See table 4.) The plans with prescription drug coverage--H, I, and J--had average premiums paid more than $450 higher than those plans without such coverage ($1,602 compared to $1,144). In addition, Medigap policies are becoming more expensive. One recent study reports that premiums for the three Medigap plans offering prescription drug coverage have increased the most rapidly--by 17 to 34 percent from 1999 to 2000. Medigap plans without prescription drug coverage rose by 4 to 10 percent from 1999 to 2000. Additional factors, such as where Medicare beneficiaries live or specific demographic or behavioral characteristics, also may influence variation in Medigap premiums. For example, premiums often vary widely across states, which may in large part reflect geographic differences in use and costs of health care services as well as state policies that affect how insurers can set premium rates. Additionally, premiums for the same policy can vary significantly within the same state. The method used by insurers to determine premium rates can dramatically impact the price a beneficiary pays for coverage over the life of the policy. Finally, depending on the state or insurer, other factors such as smoking status and gender may also affect the premiums consumers are charged. Premiums vary widely among states. For example, based on data reported by the insurers to NAIC, average premiums per covered life for standardized Medigap plans in California were $1,600 in 1999--more than one-third higher than the national average of $1,185 and more than twice as high as Utah's average of $706. (See app. III for average premiums per covered life for standardized plans by state.) This variation is also evident for specific plan types. For example, average annual premiums per covered life for plan J were $646 in New Hampshire and $2,802 in New York, and for plan C were $751 in New Jersey and $1,656 in California. 
In six states (i.e., Alabama, California, Florida, Illinois, Louisiana, and Texas), the average premium per covered life exceeded the national average for all 10 standard plan types while in six states (i.e., Hawaii, Montana, New Hampshire, New Jersey, Utah, and Vermont), the average premium per covered life always fell below the national average. Beneficiaries in the same state may also face widely varying premiums for a given plan type offered by different insurers. For example, our review of state consumer guides showed that in Texas, a 65-year-old consumer could pay an annual premium from $300 to as much as $1,683 for plan A, depending on the insurer. Similarly, in Ohio, plan F annual premiums for a 65-year-old ranged from $996 to $1,944 and in Illinois, plan J premiums ranged from $2,247 to $3,502. Table 5 provides premium ranges for a 65-year-old in five states with large Medicare populations. Some of the variation seen in table 5 is attributable to differences in the premium rating methodology used by different insurers. Insurers who "community rate" policies charge all applicants the same premium, regardless of their age. Those who use an "issue-age-rated" methodology base premiums on an applicant's age when first purchasing the policy, and although the premium can increase for inflation, any such increase should not be attributable to the aging of the individual. In addition to increases for inflation, an "attained-age-rated" policy's premium also increases as a beneficiary ages. Although attained-age policies are generally cheaper when initially purchased, they may become more expensive than issue-age-rated policies at older ages. For example, a Pennsylvania insurer that sells both attained-age and issue-age policies for plan C charges a 65-year-old male $869 for an attained-age policy or $1,347 for an issue-age policy.
But over time, under the attained-age-rated policy, this individual would have premium increases both due to inflation and the higher cost the insurer anticipates as the policyholder ages. However, under the issue-age-rated policy, rate increases would only reflect inflation because the higher anticipated costs resulting from aging have already been included in the premium rate. By age 80, excluding increases attributable to inflation, the attained-age policy would cost $1,580 but the issue-age policy would remain at $1,347. Individuals who did not anticipate premium increases over time for attained-age policies may find it increasingly difficult to afford continued coverage and consequently may let their Medigap coverage lapse. Or, if they are still in good health as their premiums increase, individuals may switch to plans sold by insurers that charge the same premium regardless of age, thus creating the potential for these insurers to have a disproportionate share of older beneficiaries. For individuals not in their open-enrollment period or otherwise eligible for a guaranteed-issue product, insurers may also adjust premium prices based on the health status of individuals. Because the consumer guides show that insurers offer the same Medigap plan type for a wide range of premiums, some plans with higher premiums are unlikely to have high enrollment. Nonetheless, insurers may have an incentive to continue offering higher-cost plans despite low enrollment because states prohibit insurers that stop marketing a plan type from reentering the market and selling that particular plan for a 5-year period. Insurers that may not want to completely exit a market may continue to offer a plan type with a premium higher than the market rate, thereby discouraging enrollment but ensuring their continued presence in the market.
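The Pennsylvania plan C example above can be used to estimate when the attained-age policy overtakes the issue-age policy in price. This sketch assumes, purely for illustration, that the age-related increase is geometric between the two quoted endpoints ($869 at age 65, $1,580 at age 80, inflation excluded); the insurer's actual rating schedule is not given in the report.

```python
ISSUE_AGE_PREMIUM = 1347.0  # flat apart from inflation

# Implied annual age-related increase from $869 at 65 to $1,580 at 80
# under the (assumed) geometric-growth simplification.
rate = (1580.0 / 869.0) ** (1.0 / 15) - 1

def attained_age_premium(age):
    """Attained-age premium at a given age, excluding inflation."""
    return 869.0 * (1 + rate) ** (age - 65)

# Age at which the attained-age policy first becomes the costlier one.
crossover = next(a for a in range(65, 81)
                 if attained_age_premium(a) > ISSUE_AGE_PREMIUM)
print(crossover)                        # 76 under these assumptions
print(round(attained_age_premium(80)))  # 1580
```

Under this simplification the buyer of the cheaper attained-age policy starts paying more at about age 76, which illustrates why such policyholders may later let coverage lapse or switch insurers.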
However, federal law requiring Medigap plans to pay at least 65 percent of premiums earned for beneficiaries' medical expenses for individually purchased policies limits insurers' ability to charge rates excessively higher than the market rates. Despite purchasing Medigap policies to help cover Medicare cost-sharing requirements and other costs for health care services that the beneficiary would have to pay directly out of pocket, Medigap purchasers still pay higher out-of-pocket costs than do other Medicare beneficiaries. Our analysis of the 1998 MCBS showed that out-of-pocket costs for health care services, excluding long-term facility care costs, averaged $1,392 for those purchasing individual Medigap policies with prescription drug coverage and $1,369 for those purchasing individual Medigap policies without prescription drug coverage--significantly higher than the $1,056 average for all Medicare beneficiaries. (See fig. 2.) Furthermore, Medigap purchasers had higher total expenditures for health care services ($7,631, not including the cost of the insurance) than Medicare beneficiaries without supplemental coverage from any source ($4,716) in 1998. These higher expenditures for individuals with Medigap may be due in large part to higher utilization rates for individuals with supplemental coverage. In addition, Medigap's supplemental coverage of prescription drugs is less comprehensive than typically provided through employer-sponsored supplemental coverage and therefore may leave beneficiaries with higher out-of-pocket costs. Differences in health status do not appear to account for higher out-of-pocket costs and expenditures for those with Medigap. We found that Medicare beneficiaries with Medigap coverage reported a health status similar to those without supplemental coverage. Supplemental coverage can offset the effects of cost-sharing requirements intended to encourage prudent use of services and thus control costs. 
Providing "first-dollar coverage" by eliminating beneficiaries' major cost requirements for health care services, including deductibles and coinsurance for physicians and hospitals, in the absence of other utilization control methods can result in increased utilization of discretionary services and higher total expenditures. One study found that Medicare beneficiaries with Medigap insurance had 28 percent more outpatient visits and inpatient hospital days relative to beneficiaries who did not have supplemental insurance, but were otherwise similar in terms of age, gender, income, education, and health status. Service use among beneficiaries with employer-sponsored supplemental insurance (which often reduces, but does not eliminate, cost sharing and is typically managed through other utilization control methods) was approximately 17 percent higher than the service use of beneficiaries with Medicare fee-for-service coverage only. Medigap covers some health care expenses for policyholders but also leaves substantial out-of-pocket costs in some areas, particularly for prescription drugs. Our analysis of the 1998 MCBS shows that Medigap paid about 13 percent of the $7,631 in average total health care expenditures (including Medicare payments) for beneficiaries with Medigap. Even with Medigap, beneficiaries still paid about 18 percent of their total costs directly out of pocket, with prescription drugs being the largest out-of-pocket cost. (See table 6.) Among Medigap policyholders with prescription drug coverage, Medigap covered 27 percent ($239) of prescription drug costs, leaving the beneficiary to incur 61 percent ($548) of the costs out of pocket. For Medigap policyholders without drug coverage, beneficiaries incurred 82 percent ($618) of prescription drug costs. Out-of-pocket costs for prescription drugs were higher for Medigap policyholders than any other group of Medicare beneficiaries, including those with employer-sponsored supplemental coverage ($301).
Higher out-of-pockets costs for prescription drugs may be attributable to differences in supplemental coverage. Medigap policyholders with prescription drug coverage have high cost-sharing requirements (a $250 deductible and 50-percent coinsurance with a maximum annual benefit of $1,250 or $3,000 depending on the plan selected) in contrast to most employer-sponsored supplemental plans that provide relatively comprehensive prescription drug coverage. Employer-sponsored supplemental plans typically require small copayments of $8 to $20 or coinsurance of 20 to 25 percent, and provide incentives for enrollees to use selected, less costly drugs, such as generic brands or those for which the plan has negotiated a discount. Further, few employer-sponsored health plans have separate deductibles or maximum annual benefits for prescription drugs. As Congress continues to examine potential changes to the Medicare program, it is important to consider the role that Medigap supplemental coverage has on beneficiaries' use of services and expenditures. Medicare beneficiaries who purchase Medigap plans have coverage for essentially all major Medicare cost-sharing requirements, including coinsurance and deductibles. But offering this "first-dollar" coverage may undermine incentives for prudent use of Medicare services, especially with regard to discretionary services, which could ultimately increase costs for beneficiaries and the entire Medicare program. While the lack of coverage for outpatient prescription drugs through Medicare has led to various proposals to expand Medicare benefits, relatively few beneficiaries purchase standardized Medigap plans offering these benefits. 
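The cost-sharing contrast described above can be made concrete with a stylized comparison: Medigap drug coverage with a $250 deductible, 50-percent coinsurance, and a $3,000 benefit cap, versus a hypothetical employer-sponsored plan charging a flat 25-percent coinsurance with no drug deductible or cap. The specific spending levels below are illustrative, not drawn from the report.

```python
def medigap_drug_oop(spend, deductible=250, coinsurance=0.5, cap=3000):
    """Beneficiary's out-of-pocket drug cost under Medigap-style cost sharing."""
    plan_pays = min(max(0.0, spend - deductible) * coinsurance, cap)
    return spend - plan_pays

def employer_drug_oop(spend, coinsurance=0.25):
    """Out-of-pocket cost under a stylized employer plan: 25% coinsurance only."""
    return spend * coinsurance

# At every spending level, the Medigap policyholder pays more out of pocket:
for spend in (500, 2000, 8000):
    print(spend, medigap_drug_oop(spend), employer_drug_oop(spend))
```

At $8,000 in annual drug costs, for example, the Medigap cap has been exhausted and the beneficiary pays $5,000 out of pocket, versus $2,000 under the stylized employer plan, which is consistent with the report's finding that Medigap policyholders face the highest out-of-pocket drug costs.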
Low enrollment in these plans may be due to fewer plans being marketed with these benefits, their relatively high cost, and the limited nature of their prescription drug benefit, which still requires beneficiaries to pay more than half of their prescription drug costs while receiving a maximum of $3,000 in benefits. As a result, Medigap beneficiaries with prescription drug coverage continue to incur substantial out-of-pocket costs for prescription drugs and other health care services. We did not seek agency comments on this report because it does not focus on agency activities. However, we shared a draft of this report with experts in Medigap insurance at CMS and NAIC for their technical review. We incorporated their technical comments as appropriate. We will send copies of this report to the Administrator of CMS and other interested congressional committees and members and agency officials. We will also make copies available to others on request. Please call me at (202) 512-7118 or John Dicken, Assistant Director, at (202) 512-7043 if you have any questions. Rashmi Agarwal, Susan Anthony, and Carmen-Rivera Lowitt also made major contributions to this report. To assess issues related to Medigap plan enrollment and premiums incurred by beneficiaries who purchase Medigap plans, we analyzed data collected on the National Association of Insurance Commissioners' (NAIC) 1999 Medicare Supplement Insurance Experience Exhibit. We also analyzed the 1998 Medicare Current Beneficiary Survey (MCBS) to examine out-of-pocket costs paid by Medicare beneficiaries with Medigap policies. To assess the availability of Medigap plans across states and to individuals who are not in their open-enrollment periods, we examined consumer guides for Medicare beneficiaries published by many states and by the Health Care Financing Administration (HCFA). (Appendix II further discusses our review of these consumer guides and the number of insurers offering standardized Medigap plans.)
Additionally, we interviewed researchers and representatives from insurers, HCFA, NAIC, and several state insurance regulators. We conducted our work from March 2001 through July 2001 in accordance with generally accepted government auditing standards. We relied on data collected on the NAIC's 1999 Medicare Supplement Insurance Experience Exhibit for information on Medigap enrollment by plan type and premiums per covered life by plan type and across states. Under federal and state statutes, insurers selling Medigap plans annually file reports, known as the Medicare Supplement Insurance Experience Exhibit, with the NAIC. NAIC then distributes the exhibit information to the states. These exhibits are used as preliminary indicators, in conjunction with other information, as to whether insurers meet federal requirements that at least a minimum percentage of premiums earned are spent on beneficiaries' medical expenses, referred to as loss ratios. Additionally, insurers report information on various aspects of Medigap plans including plan type, premiums earned, the number of covered lives, as well as other plan characteristic information and a contact for the insurer. We relied on NAIC data containing filings as of December 31, 1999, for the 50 states and the District of Columbia. These data represent policies in force as of 1999, including pre-standardized policies, standardized policies, and policies for individuals living in three states in which insurers are exempt from the federal standardized policies (i.e., Massachusetts, Minnesota, and Wisconsin). An initial analysis of the 1999 data set revealed that several insurers failed to include or did not designate a valid plan type on their filings. As part of our data cleaning, we reclassified some of these filings to include or correct the plan type based on information reported in other sections of the insurance exhibit. 
We also called 37 insurers that covered more than 5,000 lives and had not included a valid plan type on their filing. During these calls, we asked for plan type information, verified whether the insurer sold a Medicare Select plan that included incentives for beneficiaries to use a network of health care providers, and corrected the data in the database. After the data-cleaning process, approximately 8 percent of the 10.7 million covered lives still had an unknown plan type and less than 1 percent had missing information about whether the plan was sold as a Medicare Select policy. NAIC does not formally audit the data that insurers report, but it does conduct quality checks before making the data publicly available. We did not test the accuracy of the data beyond the data-cleaning steps mentioned above. During our phone calls to insurers, we found that some insurers failed to report separate filings for the various Medigap plan types they sell and instead reported aggregate information across multiple plan types. Since plan type information was unavailable for these plans, information for these insurers was excluded from our enrollment and premium estimates for standardized plans. We relied on HCFA's 1998 MCBS for information on expenditures for health care services by payer for Medicare beneficiaries. Specifically, we examined (1) the out-of-pocket costs incurred by beneficiaries with a Medigap plan in comparison to other beneficiaries and (2) the out-of-pocket costs for beneficiaries with a Medigap plan as a share of total expenditures for health care services, including payments by Medicare and other payers. The MCBS is a multipurpose survey of a representative sample of the Medicare population. The 1998 MCBS collected information on a sample of 13,024 beneficiaries, representing about a 72-percent response rate. Because the MCBS is based on a sample, any estimates derived from the survey are subject to sampling errors.
A sampling error indicates how closely the results from a particular sample would be reproduced if a complete count of the population were taken with the same measurement methods. To minimize the chances of citing differences that could be attributable to sampling errors, we highlight only those differences that are statistically significant at the 95-percent confidence level. We analyzed the MCBS' cost-and-use file representing persons enrolled in Medicare as of January 1, 1997, and 1998. The cost-and-use file contains a combination of survey-reported data from the MCBS and Medicare claims and other data from HCFA administrative files. The survey also collects information on services not covered by Medicare, including prescription drugs and long-term facility care. HCFA notes that there may be some underreporting of services and costs by beneficiaries. To compensate in part for survey respondents who may not know how much an event of care costs or how the event was paid for, HCFA used Medicare administrative data to adjust or supplement survey responses for some information, including cost information. We did not verify the accuracy of the information in the computerized file. Because some Medicare beneficiaries may have supplemental coverage from several sources, we prioritized the source of insurance individuals reported to avoid double counting. That is, if individuals reported having coverage during 1998 from two or more kinds of supplemental coverage, we assigned them to one type to estimate enrollment and costs without including the same individuals in multiple categories. We initially separated beneficiaries enrolled in a health maintenance organization (HMO) contracting with the Medicare program (a Medicare HMO) from beneficiaries in the traditional fee-for-service Medicare program. 
Then, we used the following hierarchy of supplemental insurance categories: (1) employer-sponsored, (2) individually purchased (that is, a Medigap policy) with prescription drug coverage, (3) individually purchased without prescription drug coverage, (4) private HMO, (5) Medicaid, and (6) other public health plans (including coverage through the Department of Veterans Affairs and state-sponsored drug plans). Finally, those without any supplemental coverage were categorized as having Medicare fee-for-service only. For example, a beneficiary with Medicare HMO coverage sponsored by an employer would be included within the Medicare HMO category. Table 7 shows the number and percent of beneficiaries in each insurance category. Table 8 shows the extent to which health insurers offer the 10 standardized Medigap policies to 65-year-olds during the initial open-enrollment period. The table lists information for 47 states and the District of Columbia where insurers sell these plans. Three states--Massachusetts, Minnesota, and Wisconsin--are not included in the table because insurers in these states are exempt from federal Medigap standardized requirements. To determine the extent to which Medigap standardized plans are available in each state, we primarily relied on state consumer guides and information available from the Health Care Financing Administration's (HCFA) web site. For states that did not have available information in consumer guides or Internet sites, we obtained information from their state insurance departments and insurers. We also contacted state insurance departments and insurers to verify state consumer guide information for states reporting three or fewer insurers offering any plan type to ensure that we did not understate the availability of Medigap plans in these states.
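The mutually exclusive category assignment described in the methodology above--Medicare HMO first, then the six-level supplemental hierarchy, then fee-for-service only--can be sketched as follows. The category labels and field shapes are illustrative; the actual MCBS variables differ.

```python
# Priority order for beneficiaries not in a Medicare HMO; first match wins.
HIERARCHY = [
    "employer-sponsored",
    "medigap with drug coverage",
    "medigap without drug coverage",
    "private HMO",
    "medicaid",
    "other public",
]

def assign_category(coverages, in_medicare_hmo=False):
    """Assign a beneficiary to exactly one insurance category,
    avoiding double counting when multiple coverages are reported."""
    if in_medicare_hmo:               # Medicare HMO takes precedence over all
        return "medicare HMO"
    for category in HIERARCHY:        # highest-priority reported coverage wins
        if category in coverages:
            return category
    return "medicare fee-for-service only"   # no supplemental coverage

# A beneficiary reporting both Medicaid and employer coverage is counted once:
print(assign_category({"medicaid", "employer-sponsored"}))  # employer-sponsored
```

This kind of prioritization ensures each beneficiary contributes to exactly one row of a table like table 7, even when several kinds of coverage were reported during the year.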
Information from consumer guides and HCFA data may not contain comprehensive data on insurers operating in a state at a given point in time because (1) in some states, insurers voluntarily submit data to insurance departments and do not always report on the Medigap policies they offer and (2) data may not reflect recent changes such as companies that stop selling a product or new insurers that the states certify to sell Medigap plans. Also, in some states, such as Michigan, some insurers may be licensed to sell Medigap in the state but are not actively marketing the plan to new enrollees. We did not independently confirm information reported by state insurance departments and insurers. Table 9 presents information from the National Association of Insurance Commissioners' (NAIC) 1999 Medicare Supplement Insurance Experience Exhibit on premiums per covered life for standardized Medigap plans among the states and the District of Columbia offering the federally standardized Medigap plans. Nationally, the average premium per covered life in 1999 for the standardized plans was $1,185, and ranged from $706 in Utah to $1,600 in California.

Medicare: Cost-Sharing Policies Problematic for Beneficiaries and Program (GAO-01-713T, May 9, 2001)
Retiree Health Benefits: Employer-Sponsored Benefits May Be Vulnerable to Further Erosion (GAO-01-374, May 1, 2001)
Medicare+Choice: Plan Withdrawals Indicate Difficulty of Providing Choice While Achieving Savings (GAO/HEHS-00-183, Sept. 7, 2000)
Medigap: Premiums for Standardized Plans That Cover Prescription Drugs (GAO/HEHS-00-70R, Mar. 1, 2000)
Prescription Drugs: Increasing Medicare Beneficiary Access and Related Implications (GAO/T-HEHS/AIMD-00-100, Feb. 16, 2000)
Medigap Insurance: Compliance With Federal Standards Has Increased (GAO/HEHS-98-66, Mar. 6, 1998)
Medigap Insurance: Alternatives for Medicare Beneficiaries to Avoid Medical Underwriting (GAO/HEHS-96-180, Sept. 10, 1996)
To protect themselves against large out-of-pocket expenses and help fill gaps in Medicare coverage, most beneficiaries buy supplemental insurance, known as Medigap; contribute to employer-sponsored health benefits to supplement Medicare coverage; or enroll in private Medicare+Choice plans rather than traditional fee-for-service Medicare. Because Medicare+Choice plans are not available everywhere and many employers do not offer retiree health benefits, Medigap is sometimes the only supplemental insurance option available to seniors. Medicare beneficiaries who buy Medigap plans have coverage for essentially all major Medicare cost-sharing requirements, including coinsurance and deductibles. But this "first-dollar" coverage may undermine incentives for prudent use of Medicare services, which could ultimately boost costs for the Medicare program. Although various proposals have been made to add a prescription drug benefit to Medicare, relatively few beneficiaries buy standardized Medigap plans with this benefit. Low enrollment in these plans may be due to the fact that fewer plans are being marketed with these benefits; their relatively high cost; and the limited nature of their prescription drug benefit, which still requires beneficiaries to pay more than half of their prescription drug costs. Most plans have a $3,000 cap on benefits. As a result, Medigap beneficiaries with prescription drug coverage continue to incur substantial out-of-pocket expenses for prescription drugs and other health care services.
GAO is currently conducting several reviews related to first responder grants. One of these reviews, to be published within the next few weeks, addresses issues of coordinated planning and the use of federal grant funds for first responders in the National Capital Region, which encompasses the District of Columbia and 11 surrounding jurisdictions. Another effort is focused on intergovernmental efforts to manage fiscal year 2002 and 2003 grants administered by the Office for Domestic Preparedness (ODP) within the Department of Homeland Security (DHS). Because much of our work in this area is ongoing and our findings remain preliminary, my testimony today will focus principally on the major findings of the reports on preparedness funding issued by the DHS OIG and the House Select Committee, supplemented by some examples from our work in four selected locations in three states. Our analysis focused on three ODP grant programs: the State Domestic Preparedness Grant Program of fiscal year 2002, with $315,440,000 in appropriations, and the fiscal year 2003 State Homeland Security Grant Programs, Parts I and II, with appropriations of $566,295,000 and $1,500,000,000, respectively. The purpose of this work was to document the flow of selected fiscal year 2002 and 2003 grant monies from ODP to local governments and the time required to complete each step in the process. In doing this work, we met with state and local officials in each state and obtained and reviewed federal, state, and local documentation. We did this work between December 2003 and February 2004 in accordance with generally accepted government auditing standards. In recent months, the Conference of Mayors, members of Congress, and others have expressed understandable concerns about delays in the process by which congressional appropriations for first responders reach the local fire fighter, police officer, or other first responder.
The reports by DHS OIG and the House Select Committee examined the distribution of homeland security grant funding to states and local governments to understand what obstacles--if any--prevent the expeditious flow of grant funding from the federal government to state and local governments. In March 2003, ODP was moved from the Department of Justice to the DHS. In fiscal years 2002 and 2003, ODP managed about $3.5 billion under 16 separate grant programs. Generally, states and local grant recipients could use these funds for some combination of training, new equipment, exercise planning and execution, general planning efforts, and administration. The largest of these grants were the State Homeland Security Grant Programs and the Urban Area Security Initiative grants. In both grant programs, states may retain 20 percent of total state grant funding but must distribute the remaining 80 percent to local governments within the state. Before discussing some of the issues that have been raised about the distribution of federal grant funds to first responders, I would like briefly to discuss some basic issues associated with using those funds effectively. A key goal of first responder funding should be developing and maintaining the capacity and ability of first responders to respond effectively to and mitigate incidents that require the coordinated actions of first responders. These incidents encompass a wide range of possibilities, including daily auto accidents, truck spills, and fires; major natural disasters such as floods, hurricanes, and earthquakes; or a terrorist attack that involves thousands of injuries. Effectively responding to such incidents requires well-planned, well-coordinated efforts by all participants. Major events, such as natural disasters or terrorist attacks, may require the coordinated response of first responders from multiple jurisdictions within a region, throughout a state or among states. 
Thus, it follows that developing a coordinated plan for such events should generally involve participants from the multiple jurisdictions that would be involved in responding to the event. However, a major challenge in administering first responder grants is balancing two goals: (1) minimizing the time it takes to distribute grant funds to state and local first responders and (2) ensuring appropriate planning and accountability for effective use of the funds. In fiscal years 2002 and 2003, at least 16 federal grants were available for first responders, each with somewhat different requirements. Previously, we have noted that substantial problems occur when state and local governments attempt to identify, obtain, and use the fragmented grants-in-aid system to meet their needs. Such a proliferation of programs leads to administrative complexities that can confuse state and local grant recipients. Congress is aware of the challenges facing grantees and is considering several bills that would restructure first responder grants. Much of the concern about delays in distributing federal grant funds to local first responders has involved the State Homeland Security grants, which are distributed to states on the basis of a formula. Each state received 0.75 percent of the total grant appropriation, with the remaining funds distributed according to state population. There are a number of sequential steps common to the distribution of ODP State Homeland Security Grants from ODP to the states and from the states to local governments. They include the following:

1. Congress appropriates funds.
2. ODP issues grant guidance to states.
3. State submits application, including spending plans, to ODP.
4. ODP makes award to states noting any special conditions that must be cleared before the funds can be used.
5. State meets and ODP lifts special conditions, if applicable.
6. State subgrants at least 80 percent of its funds to local governments.
7. Local governments purchase equipment directly or through the state.
8. Local governments submit receipts to the state for reimbursement.
9. State draws down grant funds to reimburse local governments.

The total time required to complete these steps is dependent upon ODP requirements and state and local laws, requirements, regulations, and procedures. Generally, the DHS OIG report and the report of the House Select Committee on Homeland Security found similar causes of delays in getting funds to local governments and first responder agencies. These included delays in completing state and local planning requirements and budgets; legal requirements for the procedures to be used by local governments in accepting state grant allocations; the need to establish procedures for the use of the funds, such as authority to buy equipment and receive reimbursement later; and procurement requirements, such as bidding procedures. Generally, neither the OIG report nor the House Select Committee report found that the delays were principally due to ODP's grant management procedures and processes. Both the DHS OIG report and the House Select Committee report found that ODP's grant application process was not a major factor in delaying the distribution of funds to states. The DHS OIG found that in fiscal years 2002 and 2003, ODP reduced the time it took to make on-line grant application guidance and applications available to states, process grant applications, and award the grants to states after applications were submitted. The DHS OIG found that the total number of days from enactment of the grant legislation to award of grants to states declined from an average of 292 days for fiscal year 2002 grants to an average of 77 days for fiscal year 2003 grants. 
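The formula allocation described above (a base share of 0.75 percent of the total appropriation per state, with the remainder distributed in proportion to state population) can be sketched in a few lines of Python. This is an illustrative sketch only: the number of recipient jurisdictions and the population figures below are assumptions for demonstration, not figures from the testimony.

```python
def state_allocation(appropriation, state_pop, total_pop, num_recipients):
    """Allocation under the formula described in the testimony: each
    recipient receives a base share of 0.75 percent of the total
    appropriation, and the remainder is distributed by population share.

    num_recipients is an assumption -- the testimony does not state how
    many jurisdictions share the base allocation."""
    base = 0.0075 * appropriation
    remainder = appropriation - num_recipients * base
    return base + remainder * (state_pop / total_pop)


# Hypothetical example: a $100 million appropriation split among three
# recipients (populations are invented for the sketch).
populations = {"A": 1_000_000, "B": 2_000_000, "C": 5_000_000}
total_pop = sum(populations.values())
awards = {
    state: state_allocation(100_000_000, pop, total_pop, len(populations))
    for state, pop in populations.items()
}
```

One property worth noting: because every recipient gets the same fixed base share, the population-driven portion shrinks as the number of recipients grows, which is why small-population states fare relatively well under this kind of formula.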
For the three states we examined, we found that the time between the enactment of the appropriation and ODP's award of the grant to these states declined from 8 months in fiscal year 2002 to 3 months for the fiscal year 2003 State Homeland Security Grant Program, Part I, and 2 months for the fiscal year 2003 State Homeland Security Grant Program, Part II. One factor that did delay the states' ability to use ODP grant funds was the imposition of special conditions. In fiscal years 2002 and 2003, ODP imposed special conditions for the state homeland security formula grants if the state had failed to adequately complete one of the requirements of the grant application. For example, in fiscal years 2002 and 2003, to receive funding states had to submit detailed budget worksheets to identify how grant funds would be used for equipment, training, and exercises. To accelerate the grant distribution process, ODP would award funds to states that had not completed the budget detail worksheets, with the special condition that states and localities would be essentially unable to use the funds until the required budgets were submitted and approved. Thus, the time it took to lift the special conditions was largely dependent upon the time it took state and local governments to submit the required documentation. States could not begin to draw down the grant funds until the special conditions were met. In one state we reviewed, ODP notified the state of the special conditions on May 28, 2003, and the conditions were removed on August 6, 2003, after the state had met those conditions. In another state, ODP notified the state of the special conditions on September 13, 2002, and the conditions were removed on March 18, 2003. ODP imposed special conditions on both the fiscal year 2002 State Domestic Preparedness Grant Program and the fiscal year 2003 State Homeland Security Grant Program, Part I, but not on the State Homeland Security Grant Program, Part II. 
After ODP makes its initial award, the state must subgrant at least 80 percent of its grant award to local units of government. In fiscal year 2003, the states had to certify to ODP within 45 days that they had made these subgrants. The subgrant entities and procedures can vary with each state, making it hard to generalize about this phase of the distribution process. In our work, we found that some states subgranted the funds to the county level, while another subgranted to regional task forces composed of several counties. Subgrantees also varied in their procedures to distribute funds to local governments. Some subgrantees managed the grant process themselves, while others chose to pass funds further down, to a county or city within the jurisdictional area. As reported by the DHS OIG, Congress adopted appropriation language for the fiscal year 2003 State Homeland Security Grant, Part II, that required states to transfer 80 percent of first responder grant funds to local jurisdictions within 45 days of the funds being awarded by ODP. This requirement was included in the appropriation bill to ensure that states pass funds down to local jurisdictions quickly, and ODP incorporated this requirement into its grant application guidance. However, according to the DHS OIG report, this action had a limited effect because most states met the 45-day deadline by counting funds as transferred when the states agreed to allocate a specific amount of the grant to a local jurisdiction, even if the state had not determined how the funds would be spent or when contracts for goods and services would be let. Additionally, many states and local jurisdictions delayed spending of prior year grant funds in order to meet the fiscal year 2003 requirement. 
The House Select Committee staff also reported that nearly all states met this 45-day requirement with respect to 2003 funding as of February 2004, but noted that this may not reflect the actual availability of funds for expenditures by local jurisdictions. The committee report cited the example of Seattle, Washington. While it had been awarded $30 million in May 2003, Seattle received authorization to spend these funds only shortly before the April 2004 release of the committee's report. In the three states we examined, we also found that states had certified they had allocated funds to local jurisdictions within the required 45-day period. According to the DHS OIG, state and local governments were sometimes responsible for delaying the delivery of fiscal year 2002 grant funds to first responders because various governing and political bodies within the states and local jurisdictions had to approve and accept the grant funds. Six out of the 10 states included in the DHS OIG's sample reported that their own review and approval process was one of the top three reasons that the funds had not been spent by the time the report was published. For example, one of three states for which data were available took 22 days to accept ODP's award and 51 days to award a subgrant to one of its local jurisdictions; the local jurisdiction did not accept the grant for another 92 days. Another state took 25 days to accept ODP's grant award and up to 161 days to award the funds to its local jurisdictions. Local jurisdictions then took up to 50 days to accept the awards. Our work showed similar results. One city was notified on July 17, 2003, that grant funds were available for use, but the city council did not vote to accept the funds until November 7, 2003. 
The House Select Committee reported that, in over half of the states they reviewed, local jurisdictions had not submitted detailed spending plans to the states prior to the time the states had transferred grant funds to them. Specifically, they found that often, even though a reasonable estimate of the available award amount was available months earlier, many local jurisdictions waited to initiate their planning efforts until they were officially notified of their grant awards. Because ODP imposed special conditions in some grant years, these local jurisdictions could not begin to draw down funds until they provided the detailed budget documentation, outlining how the funds would be spent, as required by ODP. For the fiscal year 2002 statewide homeland security grants, local jurisdictions and state agencies were required to prepare, submit, and receive approval of detailed budget worksheets that specifically accounted for all grant funds provided. This specific detailing of items included not only individual equipment items traditionally accounted for as long-term capital equipment, but also all other items ordinarily recorded in accounting records as consumable items, such as disposable plastic gloves, that usually need not be accounted for individually. This detailed budget information took time for local jurisdictions to prepare and for the states and ODP to review and approve. Since the first round of fiscal year 2003 state homeland security grants, ODP has not linked the submission and approval of detailed budget information to the release of grant funds. ODP required the submission and approval of the same detailed budget worksheets for the fiscal year 2003 statewide grants, but did not condition the release of funds on their submission and approval. For the fiscal year 2004 grants, ODP still requires the submission of detailed budget worksheets by local jurisdictions to the state, but not to ODP, for its approval. 
The DHS OIG also found that there were numerous reasons for delays in spending grant funds. Some were unavoidable, while others were found to be remediable. In general, the DHS OIG found identifying the highest priority for spending grant funds to be a difficult task. Most states the DHS OIG visited were not satisfied with the needs analysis that they had done prior to September 11, 2001. Some states took the time to update their homeland security strategies, and one state delayed fiscal year 2002 grant spending until it had completed a new strategy using ODP's fiscal year 2003 needs assessment tool. The DHS OIG also found little consistency in how the states manage the grant process. The states used various methods for identifying and prioritizing needs and allocating grant funds. States may rely on the work of regional task forces, statewide committees, county governments, mutual aid groups, or local fire and police organizations to identify and prioritize grant spending. Both the DHS OIG report and House Select Committee report noted that state and local procurements have, in some cases, been affected by delays resulting from specific procurement requirements. Some states purchase equipment centrally for all jurisdictions, while others subgrant funds to local jurisdictions that make their own purchases. In these latter instances, local procurement regulations can affect the issuance of equipment purchase orders. The House Select Committee report discussed how state and local procurement processes and regulations could slow the expenditure of grant funds. For example, in Kentucky, an effort was made to organize bidding processes for localities and to provide them with pre-approved equipment and services lists. However, state and local laws require competitive bidding for any purchases above $20,000, a requirement that can delay actual procurements. 
Moreover, if bids had been requested for a proposal and those bid specifications were not met, then the bidding process had to start over again. As Kentucky's Emergency Managing Director explained, "There is a process and procedure that must be gone through before localities can actually spend the funds, and the state has not identified funds that are exempt from these local rules of procedure that are in place." In one of the jurisdictions for which we obtained documentation, we also found that procurement regulations may require that funds be available prior to the issuance of equipment purchase orders. Meeting this requirement took from June 18, 2003, to September 4, 2003, before purchase orders could be issued. In the individual jurisdictions in the three states for which we obtained documentation, we also found that some apparent delays in obligating grant funds resulted from the time normally required by local jurisdictions to purchase and contract for items--to prepare requests for proposals, evaluate them once received, and have purchase orders approved by legal departments and governing councils and boards. In one case, the time between the city controller's release of funds to the issuance of the first purchase order was about 3 months, from September 4, 2003, to December 15, 2003. The reports by the DHS OIG and by the Select Committee, as well as the preliminary work we have undertaken, support the conclusion that local first responders may not have anticipated the natural delays that should have been expected in the complex process of distributing dramatically increased funding through multiple governmental levels while maintaining procedures to ensure proper standards of accountability at each level. The evidence available suggests that the process is becoming more efficient and that all levels of government are discovering and institutionalizing ways to streamline the grant distribution system. 
These increased efficiencies, however, will not continue to occur unless federal, state, and local governments each continue to examine their processes for ways to expedite funding for the equipment and training needed by the nation's first responders. At the same time, it is important that the quest for speed in distributing funds does not hamper the planning and accountability needed to ensure that the funds are spent on the basis of a comprehensive, well-coordinated plan to provide first responders with the equipment, skills, and training needed to be able to respond quickly and effectively to a range of emergencies, including, where appropriate, major natural disasters and terrorist attacks. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The terrorist attacks of September 11, 2001, highlighted the critical role first responders play at the state and local level when a disaster or emergency strikes. In fiscal years 2002 and 2003, Congress appropriated approximately $13.9 billion for domestic preparedness. A large portion of these funds was for the nation's first responders to enhance their ability to address future emergencies, including potential terrorist attacks. These funds are primarily to assist with planning, equipment purchases, training and exercises, and administrative costs. They are available to first responders mainly through the State Homeland Security Grant Programs and Urban Area Security Initiative grants. Both programs are administered through the Department of Homeland Security's Office for Domestic Preparedness. In this testimony, GAO addressed the need to balance expeditious distribution of first responder funds to states and localities with accountability for effective use of those funds and summarized major findings related to funding distribution delays and delays involving funds received by local governments, as presented in reports issued by the Department of Homeland Security Office of Inspector General and the House Select Committee on Homeland Security. The testimony incorporated supporting evidence on first-responder funding issues based on ongoing GAO work in selected states. The reports of the Department of Homeland Security Office of Inspector General (OIG) and the House Select Committee on Homeland Security examined the distribution of funds to states and localities. Both reports found that although there have been delays in getting federal first-responder funds to local governments and first-responder agencies, the grant management requirements, procedures, and processes of the Office for Domestic Preparedness (ODP) were not the principal cause. 
According to the OIG's report, in fiscal years 2002 and 2003, ODP reduced the time required to provide on-line grant application guidance to states, process grant applications, and make grant awards. For example, for fiscal year 2002 grants, it took 292 days, on average, from the time the grant legislation was enacted to the awarding of grants to states. For fiscal year 2003 grants, the total cycle was reduced to 77 days, on average. According to the reports, most states met deadlines for subgranting first-responder funds to local jurisdictions. The fiscal year 2003 State Homeland Security Grant Programs and Urban Area Security Initiative required states to transfer 80 percent of first-responder grant funds to local jurisdictions within 45 days of the funds being awarded by ODP. Most states met that deadline by counting funds as transferred when states agreed to allocate a specific amount of the grant to a local jurisdiction, the OIG's report found. The House Select Committee staff concurred, and in the three states GAO examined, states certified they had allocated funds to local jurisdictions within the 45-day period. Delays in allocating grant funds to first responder agencies are frequently due to local legal and procedural requirements, the OIG's report found. State and local governments sometimes delayed delivery of fiscal year 2002 grant funds, for example, because governing and political bodies within the states and local jurisdictions had to approve and accept the grant funds. GAO's work indicated a similar finding. In one city GAO reviewed, roughly four months elapsed from the date the city was notified that grant funds were available to the date when the city council voted to accept the funds. Both reports GAO reviewed found that state and local procurement processes have, in some cases, been affected by delays resulting from specific procurement requirements. 
While some states purchase first-responder equipment centrally for all jurisdictions, in some instances, those purchases are made locally and procurement may be delayed by competitive bidding rules, among other things. It is important to note that those who manage homeland security grants to states and local governments must balance two sometimes competing goals: (1) getting funds to states and localities expeditiously and (2) assuring that there is appropriate planning and accountability for the effective use of the funds.
ATF is the chief enforcer of explosives laws and regulations in the United States and is responsible for licensing and regulating explosives manufacturers, importers, dealers, and users. ATF is also responsible for regulating most, but not all, explosives storage facilities. Under federal explosives regulations, a license is required for persons who manufacture, import, or deal in explosives and, with some exceptions, for persons who intend to acquire explosives for use. No license is required solely to operate an explosives storage facility. State and local government agencies are not required to obtain an explosives license to use and store explosives. However, all persons who store explosive materials (including state and local entities) must conform with applicable ATF storage regulations, irrespective of whether they are required to obtain an explosives license for other purposes. According to ATF data, as of February 2005 there were 12,028 federal explosives licensees in the United States. Roughly 7,500 of these had some kind of explosives storage facility, consisting of 22,791 permanent or mobile storage magazines. ATF storage regulations include requirements relating to the safety and security of explosives storage magazines--that is, any building or structure (other than an explosives manufacturing building) used for storage of explosive materials. Regarding safety, the storage regulations include requirements related to location, construction, capacity, housekeeping, interior lighting, and magazine repairs, as well as a requirement that the local fire safety authority be notified of the location of each storage magazine. Regarding security, the ATF storage regulations include the following requirements:

Explosives handling. All explosive materials must be kept in locked magazines unless they are in the process of manufacture, being physically handled in the operating process of a licensee or user, being used, or being transported to a place of storage or use. Explosives are not to be left unattended when in portable storage magazines.

Magazine construction. Storage magazines must be theft-resistant and must meet specific requirements dealing with such things as mobility, exterior construction, door hinges and hasps, and locks.

Magazine inspection. Storage magazines must be inspected at least every 7 days. This inspection need not be an inventory, but it must be sufficient to determine if there has been an unauthorized entry or attempted entry into the magazines, or unauthorized removal of the magazine contents.

Magazine inventory. Within the magazine, containers of explosive materials are to be stored so that marks are visible. Stocks of explosive materials are to be stored so they can be easily counted and checked.

Notwithstanding the security requirements described above, ATF storage regulations do not require explosives storage facilities to have any of the following physical security features--fences, restricted property access, exterior lighting, alarm systems, or electronic surveillance. Also, while ATF licensing regulations require explosives licensees to conduct a physical inventory at least annually, there is no similar inventory requirement in the storage regulations applicable to other persons who store explosives. According to ATF data, the number of reported state and local government thefts is relatively small when compared with the total number of thefts that have occurred nationwide. During a recent 3-year period (January 2002--February 2005), ATF received reports of 205 explosives thefts from all sources nationwide. 
By comparison, during this same period, only 9 thefts were reported that involved state and local government storage facilities--5 involving state and local law enforcement agencies, 3 involving state government entities (all universities), and 1 involving a county highway department. The amounts of explosives reported stolen or missing from state and local government facilities are relatively small when compared with the total amounts of stolen and missing explosives nationwide. During a recent 10-month period for which data were available (March 2003 through December 2003), there were a total of 76 theft incidents nationwide reported to ATF, amounting to a loss of about 3,600 pounds of high explosives, 3,100 pounds of blasting agents, 1,400 detonators, and 2,400 feet of detonating cord and safety fuse. By comparison, over an entire 10-year period (January 1995 through December 2004), ATF received only 14 reports of theft from state and local law enforcement storage magazines. Reported losses in these cases were about 1,000 pounds of explosive materials, and in 10 of the incidents less than 50 pounds of explosives was reported stolen or missing. While the ATF theft data indicate that thefts from state and local facilities make up only a small part of the overall thefts nationwide, these reports could be understated by an unknown amount. There are two federal reporting requirements relating to the theft of explosives. One is specific to all federal explosives licensees (and permittees) and requires any theft or loss of explosives to be reported to ATF within 24 hours of discovery. The second reporting requirement generally requires any other "person" who has knowledge of the theft or loss of any explosive materials from his stock to report to ATF within 24 hours. 
Although the term "person" as defined in law and regulation does not specifically include state and local government agencies, ATF has historically interpreted this requirement as applying to nonlicensed state and local government explosives storage facilities. However, ATF officials acknowledged that some state and local government entities could be unsure as to their coverage under the theft reporting requirements and, as a result, may not know they are required to report such incidents to ATF. Indeed, during our site visits and other state and local contacts, we identified five state and local government entities that had previously experienced a theft or reported missing explosives--two involving local law enforcement agencies, two involving state universities, and one involving a state department of transportation. However, one of these five incidents did not appear in ATF's nationwide database of reported thefts and missing explosives. Based on these findings, the actual number of thefts occurring at state and local government storage facilities nationwide could be more than the number identified by ATF data. There is no ATF oversight mechanism in place to ensure that state and local government facilities comply with federal explosives regulations. With respect to private sector entities, ATF's authority to oversee and inspect explosives storage facilities is primarily a function of its licensing process. However, state and local government entities are not required to obtain a federal license to use and store explosives. In addition, ATF has no specific statutory authority to conduct regulatory inspections at state and local government storage facilities. Under certain circumstances, ATF may inspect these facilities--for example, voluntary inspections when requested by a state or local entity, and mandatory annual inspections at locations where ATF shares space inside a state or local storage magazine. 
Regarding those state and local government facilities that ATF does not inspect, ATF officials acknowledged they had no way of knowing the extent to which these facilities are complying with federal explosives regulations. ATF officials stated that if the agency were to be required to conduct mandatory inspections at all state and local government storage facilities, they would likely need additional resources to conduct these inspections because they are already challenged to keep up with inspections that are mandated as part of the explosive licensing requirements. Under provisions of the Safe Explosives Act, ATF is generally required to physically inspect a license applicant's storage facility prior to issuing a federal explosives license--which effectively means at least one inspection every 3 years. At the same time, however, ATF inspectors are also responsible for conducting inspections of federal firearms licensees. The Department of Justice Inspector General reported that ATF has had to divert resources from firearms inspections to conduct explosives storage facility inspections required under the Safe Explosives Act. Despite recent funding increases for ATF's explosives program, giving ATF additional responsibility to oversee and inspect state and local government storage facilities could further tax the agency's inspection resources. According to ATF officials, because inspection of explosives licensees is legislatively mandated, the effect of additional state and local government explosives responsibilities (without related increases in inspector resources) could be to reduce the number of firearms inspections that ATF would be able to conduct. ATF does not collect nationwide information on the number and location of state and local government explosives storage facilities, nor does the agency know the types and amounts of explosives being stored in these facilities. 
Since data collection is a function of the licensing process and state and local facilities are not required to be licensed, no systematic information about these facilities is collected. With respect to private sector licensees, ATF collects descriptive information concerning explosive storage facilities as part of the licensing process. ATF license application forms require applicants to submit information about their storage capabilities, including specific information about the type of storage magazine, the location of the magazine, the type of security in place, the capacity of the magazine, and the class of explosives that will be stored. ATF also collects information about licensed private sector storage facilities during mandatory inspections, through examination of explosives inventory and sales records and verification that storage facilities meet the standards of public safety and security as prescribed in the regulations. During the course of our audit work, we compiled some data on state and local government entities that used and stored explosives. At the 13 state and local law enforcement bomb squads we visited, there were 16 storage facilities and 30 storage magazines. According to Federal Bureau of Investigation data, there are 452 state and local law enforcement bomb squads nationwide. However, because of the limited nature of our fieldwork, we cannot estimate the total number of storage facilities or magazines that might exist at other bomb squad locations. Moreover, other state and local government entities (such as public universities and state and local departments of transportation) in addition to law enforcement bomb squads also have explosives storage facilities. At the one public university we visited, there were 2 storage facilities and 4 storage magazines. 
Again, however, because of the limited nature of our fieldwork, we cannot estimate the total number of storage facilities and magazines that exist at these other state and local government entities nationwide. We found that security measures varied at the 14 state and local government entities we visited. Overall, we visited 2 state bomb squads, 11 city or county bomb squads (including police departments and sheriffs' offices), and 1 public university. Four of the 14 state and local entities had 2 separate storage areas, resulting in a total of 18 explosives storage facilities among the 14 entities. Three of these storage facilities were located on state property, 7 were located at city or county police training facilities, 7 were located on other city or county property, and 1 was located at a metropolitan airport. Eleven of the 18 explosives storage facilities we visited contained multiple magazines for the storage of explosives. As a result, these 18 facilities comprised a total of 34 storage magazines. All of the 18 facilities contained a variety of high explosives, including C-4 plastic explosive, detonator cord, TNT, binary (two-part) explosives, and detonators. Estimates of the amount of explosives being stored ranged from 10 to 1,000 pounds, with the majority of the entities (9) indicating they stored 200 pounds or less. At each of the 14 state and local storage entities we visited, we observed the types of security measures in place at their explosives storage facilities. Our criteria for identifying the type of security measures in place included existing federal explosives storage laws and regulations (27 C.F.R., Part 555, Subpart K) and security guidelines issued by the explosives industry (the Institute of Makers of Explosives). Most of these security measures (fencing, vehicle barriers, and electronic surveillance, for example) are not currently required under federal storage regulations. 
However, we are presenting this information in order to demonstrate the wide range of security measures actually in place at the time of our visits. Physical security. Thirteen of the 18 storage facilities restricted vehicle access to the facility grounds by way of a locked exterior security gate or (in one case) by virtue of being located indoors. Five of the 13 facilities restricted vehicle access after normal working hours (nights or nights and weekends). Officials at 7 other facilities said that vehicle access to the facilities was restricted at all times, including the 1 indoor facility that was located in the basement of a municipal building. Six of the 18 storage facilities had an interior barrier--consisting of a chain-link fence with a locked gate--immediately surrounding their storage magazines to prevent direct access by persons on foot. One other facility (the indoor basement facility) relied on multiple locked doors to prevent access by unauthorized personnel. Conversely, at 1 facility we visited, the storage magazine could be reached on foot or by vehicle at any time because it did not have fencing or vehicle barriers to deter unauthorized access. In addition to restricted access to storage facilities, officials at all of the 18 storage facilities we visited told us that official personnel--either bomb squad or other police officers--patrolled or inspected the storage facility on a regular basis. And, at 9 of the 18 storage facilities we visited, officials said that state or local government employees--police training personnel, jail or correctional personnel, or other city/county employees--maintained a 24-hour presence at the facilities. Electronic security. Four of the 18 explosives storage facilities had either an alarm or video monitoring system in place. 
Two storage facilities with video surveillance took advantage of existing monitoring systems already in place at their storage locations--one located at a county correctional facility and one located inside a municipal/police building. Officials at 4 storage facilities told us they had alarm systems planned (funding not yet approved), and officials at 3 facilities said they had alarm systems pending (funding approved and awaiting installation). Officials at 2 facilities also told us they planned to install video monitoring. Regarding the feasibility of installing electronic monitoring systems, 4 officials noted that storage facilities are often located in remote areas without easy access to electricity. Regarding the possibility of new federal regulations that would require electronic security at storage magazines, 9 officials told us they would not object as long as it did not create an undue financial burden. Inventory and oversight issues. Officials at all 14 of the entities we visited told us they performed periodic inventories of the contents of their explosives storage magazines in order to reconcile the contents with inventory records. In addition, 9 entities said they had received inspections of their storage facilities, primarily by ATF. Six entities told us they received the inspections on a periodic basis, with another 3 entities having received a one-time inspection. Regarding oversight by multiple regulatory authorities, one entity had been inspected by both ATF and a local government authority, while another entity was inspected on a recurring basis by both ATF and a state government authority. Five of the 14 entities we visited told us they were required to obtain a license from state regulatory authorities to operate their explosives storage facilities. One of these entities was also required (by the state regulatory authority) to obtain a federal explosives license issued by ATF. 
Officials at 13 entities we visited said they did not object to the possibility of federal licensing or inspection of their explosives storage facilities. Officials at 3 state and local entities noted that additional federal oversight was not a concern as long as they were not held to a higher standard of security and safety than ATF requires of private industry. Thefts and compliance issues. Two of the five thefts we documented during our site visits and other state and local contacts occurred at state and local entities we visited. At one storage facility, officials told us that criminals had once used a cutting torch to illegally gain entry to an explosives storage magazine. At another storage facility, officials said that an unauthorized individual had obtained keys to a storage magazine and taken some of the explosives. In both incidents, the perpetrators were apprehended and the explosives recovered. However, one of these incidents did not appear in ATF's nationwide database of reported thefts and missing explosives. We also observed storage practices at four facilities that may not be in compliance with federal explosives regulations. However, these circumstances appeared to be related to storage safety issues, rather than storage security. In April 2005, the National Bomb Squad Commanders Advisory Board--which represents more than 450 law enforcement bomb squads nationwide--initiated a program encouraging bomb squads to request a voluntary ATF inspection, maintain an accurate explosives inventory, and assess the adequacy of security at their explosive storage facilities to determine if additional measures might be required (such as video monitoring, fencing, and alarms). This is a voluntary program and it is too soon to tell what effect, if any, it will have towards enhancing security at state and local law enforcement storage facilities and reducing the potential for thefts. 
The overall number of state and local government explosives storage facilities, the types of explosives being stored, and the number of storage magazines associated with these facilities are currently not known by ATF. ATF has no authority to oversee state and local government storage facilities as part of the federal licensing process, nor does it have specific statutory authority to conduct regulatory inspections of these facilities. As a result, ATF's ability to monitor the potential vulnerability of these facilities to theft or assess the extent to which these facilities are in compliance with federal explosives storage regulations is limited. According to ATF's interpretation of federal explosives laws and regulations, state and local government agencies--including law enforcement bomb squads and public universities--are required to report incidents of theft or missing explosives to ATF within 24 hours of an occurrence. Because this reporting requirement applies to any "person" who has knowledge of a theft from his stock and the definition of "person" does not specifically include state and local government agencies, ATF officials acknowledged that these entities may be unsure as to whether they are required to report under this requirement. If state and local government entities are unsure about whether they are required to report thefts and missing explosives, ATF's ability to monitor these incidents and take appropriate investigative action may be compromised by a potential lack of information. Further, the size of the theft problem, and thus the risk, at state and local government storage facilities will remain unclear. 
To allow ATF to better monitor and respond to incidents of missing or stolen explosives, the report we are releasing at this hearing recommends that the Attorney General direct the ATF Director to clarify the explosives incident reporting regulations to help ensure that all persons and entities who store explosives, including state and local government agencies, understand their obligation to report all thefts or missing explosives to ATF within 24 hours of an occurrence. The Department of Justice agreed with our recommendation and said it would take steps to implement it. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or members of the subcommittee may have. For information about this testimony, please contact Laurie E. Ekstrand, Director, Homeland Security and Justice Issues, at (202) 512-8777, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other individuals making key contributions to this testimony include William Crocker, Assistant Director; Philip Caramia; and Michael Harmond. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
More than 5.5 billion pounds of explosives are used each year in the United States by private sector companies and government entities. The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has authority to regulate explosives and to license privately owned explosives storage facilities. After the July 2004 theft of several hundred pounds of explosives from a local government storage facility, concerns arose about vulnerability to theft. This testimony provides information about (1) the extent of explosives thefts from state and local government facilities, (2) ATF's authority to regulate and oversee state and local government storage facilities, and (3) security measures in place at selected state and local government storage facilities. This information is based on a report GAO is releasing today on these issues. Judging from available ATF data, there have been few thefts of explosives from state and local government storage facilities. From January 2002 to February 2005, ATF received 9 reports of thefts or missing explosives from state and local facilities, compared with a total of 205 explosives thefts reported from all sources nationwide during this same period. During the course of the audit, GAO found evidence of 5 thefts from state and local government facilities, 1 of which did not appear in ATF's national database of thefts and missing explosives. Thus, the actual number of thefts occurring at state and local facilities could be higher than that identified by ATF data. ATF has no authority to oversee or inspect state and local government explosives storage facilities. State and local agencies are not required to obtain a license from ATF to use and store explosives, and only licensees--such as private sector explosives storage facilities--are subject to mandatory oversight. Thus, ATF has no means to ensure that state and local facilities comply with federal regulations. 
Further, ATF does not collect nationwide information on the number and location of state and local storage facilities, nor does the agency know the types and amounts of explosives being stored in these facilities. Because this data collection is a function of the licensing process and state and local facilities are not required to be licensed, no systematic information about these facilities is collected. By comparison, all licensed private sector facilities must submit a variety of information about their facilities--including location and security measures in place--to ATF during the licensing process. ATF also collects information about these facilities during mandatory inspections. At the 18 state and local government storage facilities GAO visited, a variety of security measures were in place, including locked gates, fencing, patrols, and in some cases electronic surveillance. All the facilities' officials told GAO that they conducted routine inventories. But most of the state and local government entities GAO visited were not required to be licensed or inspected by state or local regulatory agencies. GAO identified several instances of possible noncompliance with federal regulations, but these were related primarily to storage safety issues rather than security.
The WOTC is intended to encourage employers to hire individuals from eight targeted groups that have consistently high unemployment rates. The targeted groups are individuals in families currently or previously receiving welfare benefits under the Temporary Assistance for Needy Families (TANF) program or its precursor, the Aid to Families With Dependent Children (AFDC) program; veterans in families currently or previously receiving assistance under a food stamp program; food stamp recipients--aged 18 through 24 years--in families currently or previously receiving assistance under a food stamp program; youth--aged 18 through 24 years--who live within an empowerment zone or enterprise community; youth--aged 16 and 17 years--who live within an empowerment zone or enterprise community and are hired for summer employment only; ex-felons in low-income families; individuals currently or previously receiving Supplemental Security Income; and individuals currently or previously receiving vocational rehabilitation services. Additional eligibility criteria apply to these groups. For example, welfare recipients must have received AFDC or TANF benefits for any 9 months during the 18-month period ending on the hiring date in order to be eligible for the program. The amount of tax credit that employers can claim under this program depends upon how long they retain credit-eligible employees and the amount of wages they pay to WOTC-certified employees. Employers who retain certified employees for at least 120 but less than 400 hours qualify for a credit of 25 percent of up to $6,000 in wages, for a maximum credit of $1,500. Employers who retain certified employees for 400 hours or more qualify for a credit equal to 40 percent of up to $6,000 in wages, for a maximum credit of $2,400. The credit is calculated using the actual first-year wages paid or incurred. Employers must reduce their tax deductions for wages and salaries by the amount of the credit. 
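The retention-tiered credit arithmetic described above can be sketched in code. This is an illustrative calculation only; the function name and the simplifying assumption of a single annual wage figure are ours, not part of the program's rules:

```python
def wotc_credit(hours_worked, first_year_wages):
    """Illustrative sketch of the WOTC amount an employer could claim.

    Rules as described in the text: qualifying wages are capped at
    $6,000; the rate is 25 percent at 120-399 hours of retention and
    40 percent at 400 or more hours; below 120 hours no credit is earned.
    """
    qualified_wages = min(first_year_wages, 6000)
    if hours_worked >= 400:
        return 0.40 * qualified_wages   # maximum credit: $2,400
    if hours_worked >= 120:
        return 0.25 * qualified_wages   # maximum credit: $1,500
    return 0.0

# Examples: the two tier maximums and the below-threshold case.
print(wotc_credit(450, 8000))  # wages capped at $6,000 -> 2400.0
print(wotc_credit(150, 6000))  # 25 percent tier -> 1500.0
print(wotc_credit(100, 6000))  # under 120 hours -> 0.0
```

Note that, as the text states, an employer claiming the credit must also reduce its deduction for wages and salaries by the credit amount; that interaction with taxable income is not modeled here.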
In addition, as part of the general business credit, the WOTC is subject to a yearly cap. However, excess WOTC can be used to offset tax liabilities in the preceding year or in any of 20 succeeding years. The WOTC was first authorized in the Small Business Job Protection Act of 1996 to improve upon and replace a similar, expired program--the Targeted Jobs Tax Credit program. The WOTC was designed to mitigate some shortcomings that had been identified in the previous credit program--specifically, that it gave employers windfalls for hiring employees that they would have hired anyway and that too many credit- eligible employees left their jobs before they received much work experience. Some target groups were reformulated with the intention of focusing narrowly on those who truly need a credit for firms to risk hiring them. In addition, the minimum employment period for receiving the higher rate of credit was lengthened. The WOTC became effective beginning in October 1996 and has since been reauthorized. It is due to expire in December 2001. In fiscal year 1999, 335,707 individuals were certified as members of the targeted groups, making their employers eligible for the credit if the workers remained on the job for at least 120 hours. Individuals in the welfare target group made up 54 percent of the individuals certified. Youth in the food stamp target group made up another 20 percent of the individuals certified. The other six target groups each accounted for 1 to 8 percent of the remaining certifications. Federal and state agencies share responsibility for administering the WOTC program. The Department of the Treasury, through the Internal Revenue Service (IRS), is responsible for the tax provisions of the credit. The Department of Labor, through the Employment and Training Administration, is responsible for developing policy and program guidance and providing oversight of the WOTC program. 
In addition, the Department of Labor awards grants to states for administering the eligibility determination and certification provisions of the program. State agencies verify and report to the Department of Labor on state certification activities. All 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands participate in the program. Neither the Department of the Treasury nor the Department of Labor regulations require these agencies to take any actions regarding displacement or churning. The State of New York and the Department of Labor have undertaken studies that may have findings relevant to whether employers engage in displacement or churning practices. The New York study, which was issued in 1998, concluded, among other things, that employer windfalls from churning employees are minimal. This conclusion was based on analysis of state WOTC and Wage Reporting databases with records on 12,609 individuals in New York covering the fourth quarter of 1996 through the first quarter of 1998. The study did not address displacement. The Department of Labor study is ongoing, so its results are not yet available. The study is using in-depth interviews with 16 employers who hire a large number of employees under the WOTC program to examine the hiring, retention, and career advancement experiences of WOTC employers and employees. To obtain information on the characteristics of employers, we analyzed national tax data from the IRS' Statistics of Income Division for 1997, the most recent year that data were available, and state WOTC data from agencies in California and Texas for 1997 through 1999. To obtain information relating to the extent of displacement and churning, we surveyed a stratified probability sample of employers who have participated in the WOTC program in California and Texas. 
The participating employers that we surveyed are those with repeated and recent experience in the program in that they hired at least one WOTC employee in 1999 and hired at least one WOTC employee in another year. Our sample is projectible to the entire population of 1,838 employers in California and Texas who met these hiring criteria. For information relating to churning, we also analyzed WOTC and unemployment insurance data for these states. With these data, we determined the total earnings and length of employment of WOTC- certified employees and examined this information for evidence concerning the extent and likelihood of churning. For additional information relating to displacement, we analyzed national employment data in the Commerce Department's Current Population Survey (CPS) for 1995 through 1999. We used the CPS data to estimate employment rates for members of groups targeted by the credit and members of groups not targeted by the credit but who may substitute in employment for target group members. The absence of a centralized database containing the necessary detailed information precluded a nationwide survey of employers and analysis of employment practices. We chose California and Texas because they are among the states that certified the largest number of employees to participate in the WOTC program in fiscal year 1999, have electronic databases of their WOTC program data, and provided a somewhat geographically diverse population. Together, California and Texas certified about 12 percent of WOTC- eligible individuals in fiscal year 1999, ranking them second and fifth, respectively, in WOTC certifications for that fiscal year. When reporting our estimates derived from the sample and our analysis of program and unemployment insurance data, we combined data from both states because the results in the two states were similar. 
Furthermore, the confidence intervals for all point estimates in the letter of this report are no more than 10 percentage points on either side of the estimate. Our survey and state agency data pertain only to participating employers in California and Texas. However, to assure ourselves that our findings are likely to apply to WOTC employers in the rest of the nation, we examined the federal laws and regulations related to the credit, surveyed state administrators responsible for the credit, and analyzed the data on participating employers. The federal tax benefits offered by the WOTC are the same across all states. Therefore, we have no reason to believe that employers in California and Texas respond differently to these incentives than employers in other states. We spoke to the officials who were responsible for administering the WOTC program in all 50 states, and they all confirmed that their states made no effort to either encourage or discourage displacement or churning. From the participating employer data, we determined that employers who operate in multiple states account for most of the WOTC hires in California and Texas. Moreover, we found no differences relevant to churning and displacement between employers in California and Texas in the results of our survey and agency data analyses, suggesting that our conclusions would be generalizable to employers in other states as well. We did not evaluate how effective or efficient the WOTC has been in increasing the employment and earnings of target group members. To do this, we would have had to determine the extent to which (1) the credit caused employers to hire workers that they would not otherwise have hired, (2) employees' experience with WOTC employers increases their current and future earnings, and (3) employers received "windfall" credits for employees whom they would have hired anyway. We did not address any of those issues in this report. We did not verify the state and federal databases we used. 
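As a rough illustration of the precision bound quoted above: for a simple random sample (ignoring the stratification the survey actually used, and any finite-population correction), the half-width of an approximate 95 percent confidence interval for an estimated proportion p from n respondents is z * sqrt(p(1-p)/n). The function below is our own sketch, not part of the report's methodology:

```python
import math

def wald_ci_halfwidth(p, n, z=1.96):
    """Half-width of an approximate 95% Wald CI for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is p = 0.5: roughly 100 respondents already hold the
# half-width near 10 percentage points, and larger n tightens it.
print(round(wald_ci_halfwidth(0.5, 96), 3))
print(round(wald_ci_halfwidth(0.5, 400), 3))
```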
However, agreements between the Department of Labor and state WOTC offices require the states to conduct audits of the accuracy of their WOTC records. A review of studies of the accuracy of unemployment insurance data, which was conducted for the National Research Council, concluded that the data appear to be accurate. The study notes that employers are required by law to report the data and that intentional inaccuracies are subject to penalties. This same review of studies found that the CPS data are a valuable source of information on the national low-income population, with broad and fairly accurate measures of income. However, the study noted that sample sizes may be small for some subpopulations (e.g., welfare recipients in particular states), and the percentage of some subpopulations covered by the survey appears to have declined modestly in recent years. The tax data from IRS' Statistics of Income Division undergo numerous quality checks but do not include information from amended tax returns. We conducted our review from January 2000 through December 2000 in accordance with generally accepted government auditing standards. Our scope, methodology, and the sources of the data we used are discussed further in appendix I. We requested comments on a draft of this report from the Department of Labor and asked cognizant agencies in California and Texas to review the draft's discussion of their WOTC efforts. The comments are discussed near the end of this letter. Employers who were large in terms of gross receipts earned most of the credit reported in 1997, the latest year for which data were available. Data from the agencies that certify WOTC employees in California and Texas showed that a relatively small number of employers did most of the hiring in the WOTC program from 1997 through 1999. 
Employers' participation in the program was greatly influenced by such factors as the opportunity to obtain the credit, address a labor shortage, and be a good corporate citizen. In 1997, nationwide, an estimated 4,465 corporations earned an estimated total of $134.6 million in tax credits. Approximately 66 percent of the credit was earned by corporations with gross receipts of $1 billion or more. Table 1 shows the amount of credit that businesses earned by amount of gross receipts. Most of the credit was reported by businesses engaged in nonfinancial services, such as hotel, motel, and other personal services, and retail trade. These industries accounted for 81 percent of the credit reported. Table 2 shows the credit amounts earned by businesses in each industry in 1997. The aggregate amount of WOTC earned by taxpayers is likely to have grown significantly between 1997 and 1999 because the number of WOTC certifications grew significantly nationwide over that period--from 126,113 to 335,707. However, based on the certification data we have from California and Texas, we believe that the percentage distribution of the credit by size of employer and by industry has not changed dramatically. The size distribution of employers measured by number of WOTC hires did not change significantly in either California or Texas during that period. The distribution of certifications by industry also changed little in Texas; we do not have industry information for California. Our analysis of WOTC certification data in California and Texas for 1997 through 1999 showed that a few employers did most of the hiring in the WOTC program. Employers who hired more than 100 WOTC-certified employees represented about 3 percent of all employers in the program but accounted for about 83 percent of all hires. About 65 percent of employers in the program made only one WOTC hire. The larger WOTC employers spent more time in the program. 
Employers who hired more than 100 WOTC-certified employees were in the program for an average of 10 or more quarters, while those hiring 5 or fewer employees were in the program for an average of less than 3 quarters. The larger WOTC employers also hired more frequently. Employers who hired in every year accounted for about 83 percent of total hires while representing about 8 percent of all employers. Table 3 shows the distribution of the number of employers, the number of WOTC-certified employees, and time in the program, by size of employers (in terms of WOTC-certified hires) for 1997 through 1999. The employers that we surveyed in two states reported that the opportunity to obtain a tax credit was by far the factor that most influenced their decisions to participate in the WOTC program, followed by the opportunity to address labor shortages and be a good corporate citizen. According to our survey, the opportunity to obtain the credit was the largest influence, with an estimated 85 percent of participating employers in California and Texas saying they were greatly influenced by this opportunity. Figure 1 shows the extent to which employers in the states we reviewed said that specific factors greatly influenced their participation in the program. Participation in the program appears often to have had support from high levels within the companies. For example, for an estimated 57 percent of California and Texas employers, the possibility of participating in the program was raised by someone inside the company rather than by an outside organization. In those situations, high-level management was responsible for raising the idea of participating in the WOTC program about three-quarters of the time, according to our survey-based projections. Displacement and churning are likely to be limited, if they occur at all, because, as our survey of employers in California and Texas indicates, most employers view these practices as having little or no cost- effectiveness. 
This view is consistent with the employers' estimate that the credit offsets less than half the costs of recruiting, hiring, and training credit-eligible employees. Our employer survey also indicates that most vacancies filled by credit-eligible employees occur for reasons unrelated to displacement and churning, such as voluntary separations. Furthermore, our survey indicates that most employers change at least one recruitment, hiring, or training practice, which, studies suggest, may make these employers more likely to retain new hires. Our analysis of program and employment data from state agencies supports what we learned from the survey regarding the low probability of churning. These data show that employment rarely ends near the earnings level that yields the maximum credit, and employees earning the maximum are no more likely to separate than are other WOTC-certified employees. The agency data do not allow us to perform similar tests for the occurrence of displacement. However, displacement is less likely to occur when employers are increasing their workforce--as has been the case since the introduction of the credit-- because they have less need to dismiss non-WOTC workers in order to hire WOTC workers. Most employers do not consider displacement and churning to be cost- effective employment practices. Based on our survey, we estimate that 93 percent of participating employers in California and Texas would agree that displacement is cost-effective to little or no extent. An estimated 93 percent of employers also hold that view regarding churning. Displacement and churning are not cost-effective if the cost of recruiting, hiring, and training a new employee exceeds the amount of WOTC that an employer expects to earn from that employee. Under those circumstances, the WOTC provides no incentive for that employer to dismiss an existing employee to hire a WOTC-certified one. 
According to our employer survey, on average, the tax credit offsets less than one-half (47 percent) of this cost. Furthermore, employers told us that it is important to reduce the turnover of WOTC-certified employees. Based on our survey, we estimate that for 71 percent of participating employers in those two states, retaining employees after the maximum tax credit has been secured is very important. An additional 20 percent would view retention of employees after the maximum tax credit is secured as somewhat important. For those employers who could tell us the reasons for the vacancies that were filled by WOTC-certified employees, an estimated average of 61 percent of such vacancies arose because the previous employees quit. On average, the next most frequent reasons for the vacancies were that the previous employees were terminated for cause and that the positions were newly created. Figure 2 shows the distribution by California and Texas employers' responses regarding the reasons for vacancies. None of the reasons given were related to displacement or churning. About 85 percent of employers in California and Texas have changed a recruiting, hiring, or training practice to secure the WOTC and better prepare credit-eligible new hires, according to estimates that are based on employer-reported information from our survey. Furthermore, an estimated 43 percent of employers in these two states have changed their practices in all three of these areas. A 1999 study conducted by Jobs for the Future found that employers who successfully employed welfare recipients--which is the largest targeted group in the WOTC program-- developed strategies to improve access, retention, and advancement of those individuals. 
The strategies used by employers in our survey included targeted recruitment; outreach and screening assistance from organizations that know and understand the targeted group; pre-employment training, such as training in communication skills; and mentors, among other strategies. These strategies are consistent with ones these researchers identified in other studies. Based on the results of our survey, we estimate that about two-thirds of participating employers in the two states changed at least one recruitment practice to secure the tax credit. The most frequent change in recruitment practice was that employers listed job openings with a public agency or partnership. An estimated 49 percent of participating employers in the two states took such an action. Figure 3 shows the extent to which participating employers changed recruitment practices to secure the credit. An estimated three-quarters of participating employers in the two states changed at least one hiring practice to secure the tax credit. Our survey indicated that the most frequent change in hiring practices was that employers began training their managers about the tax credit, with an estimated 66 percent of employers making that change. Figure 4 shows the extent to which participating employers changed hiring practices to secure the credit. Based on our survey, we estimate that about one-half of participating California and Texas employers changed at least one training practice to better prepare WOTC new hires. For example, an estimated 40 percent began providing mentors to their new hires. Figure 5 shows the extent to which participating employers changed training practices to secure the credit. Displacement is less likely to occur when employers are increasing their workforce because they have less need to dismiss non-WOTC workers in order to hire WOTC workers. Since the introduction of the credit in the last quarter of 1996, employment in the U.S. 
economy has grown robustly, even for low-skilled workers. Using the CPS data, we found that employment rates grew over the period for certain target group members and closely related nontarget group members that may substitute in employment for the target groups. For example, we estimated employment rates for welfare recipients in the CPS (those on welfare for 9 or more months in the previous year) who would be members of the group targeted by the credit. We also estimated employment rates for welfare recipients who would not be target group members (those on welfare less than 9 months of the previous year). The employment rate of the target group welfare recipients grew by 47 percent and nontarget welfare recipients by 12 percent from 1995 through 1999. Figure 6 shows employment rates over the period for members of the targeted and nontargeted welfare groups. Our analysis of the WOTC and unemployment insurance data in California and Texas showed that most certified employees do not earn enough income while working for WOTC employers for churning to make sense for those employers. Sixty- seven percent of certified employees separated from their employers after earning less than $3,000. Furthermore, only a relatively small number of certified employees earned incomes in the range where churning may be most likely to occur. Employers wishing to maximize their credit would retain WOTC employees until they had earned a total of $6,000, the maximum earnings eligible for the credit. Only about 7 percent of certified employees separated after earning incomes between $5,000 and $7,000 (a range of earnings within $1,000 of the credit maximizing level). If employers did not churn when employees reached this level of earnings, it seems less likely that they would churn at other levels of earnings. Figure 7 shows the percentage of employees separating after earning a given amount of income. 
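The credit-maximizing earnings level of $6,000 discussed above can be sketched with a short calculation. This is a minimal illustration only: the 25- and 40-percent rates and the 120- and 400-hour thresholds reflect our understanding of the program rules of the period and should be treated as assumptions, not a definitive statement of the statute.

```python
# Hypothetical sketch of the credit arithmetic behind the $6,000
# credit-maximizing earnings level. Rates and hour thresholds are
# assumptions for illustration.

MAX_QUALIFIED_WAGES = 6000  # maximum earnings eligible for the credit

def wotc_credit(qualified_wages, hours_worked):
    """Employer's credit for one WOTC-certified employee."""
    wages = min(qualified_wages, MAX_QUALIFIED_WAGES)
    if hours_worked >= 400:
        return 0.40 * wages  # full-rate credit (assumed rate)
    if hours_worked >= 120:
        return 0.25 * wages  # reduced-rate credit (assumed rate)
    return 0.0               # too few hours to qualify

# Earnings beyond $6,000 add nothing to the credit, which is why an
# employer bent on churning would separate employees near that level.
full_credit = wotc_credit(6000, 400)
```

Because the credit stops growing at $6,000 of earnings, churning would be most profitable, if it occurred at all, near that level, which motivates the $5,000-to-$7,000 window examined in the analysis.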
In addition to determining the percentage of WOTC-certified employees who separated near the maximum earnings level, we also analyzed the effect of reaching the maximum earnings level on the likelihood of separation. We used a statistical technique to measure the likelihood of separation of WOTC-certified employees who reach the maximum earnings level in a given quarter relative to the likelihood of separation of WOTC-certified employees who do not reach the maximum. The technique that we used allows us to measure the effect on the likelihood of separation, while controlling for the effects of other employee characteristics, such as membership in a particular target group. The measured effect is, therefore, the net effect on the likelihood of separation (i.e., net of the effects of the other characteristics). Using this technique, our analysis showed that WOTC-certified employees who reach the maximum earnings in a given quarter (i.e., those whose cumulative earnings are between $5,000 and $7,000) are no more likely to separate from their WOTC employers than those employees who do not reach the maximum. In addition, the analysis showed that reaching the maximum has no effect on the likelihood of separation across most target groups. For example, members of the welfare target group are no more likely to separate in the quarter in which they reach the maximum than are members of other target groups who reach the maximum. Besides differences in target group membership, this analysis also controlled for differences in the occupation of employees, size of employers in terms of total employment, and other factors. This analysis is described in more detail in appendix III. The fact that an overwhelming majority of WOTC employers whom we surveyed in California and Texas considered displacement and churning to have little or no cost-effectiveness leads us to conclude that few of them would engage in these practices. 
Our analyses of WOTC employment data compiled by the two states provide further support for this conclusion with respect to churning. Further, although our survey and state agency data pertain only to participating employers in California and Texas, we believe that our conclusions regarding the occurrence of displacement and churning are likely to hold true in the remainder of the nation. The federal tax benefits offered by the WOTC are the same across all states. Therefore, we have no reason to believe that employers in California and Texas would be less responsive to those incentives than employers in other states. Moreover, employers that operate in multiple states account for most of the WOTC hires in California and Texas. We spoke to the officials who were responsible for administering the WOTC program in all 50 states, and they all confirmed that their states made no efforts to either discourage or encourage displacement or churning. The fact that there were no differences relevant to displacement and churning between the results of our survey and agency data analyses for California and those for Texas also gives credence to the generalizability of our conclusions. The Department of Labor sent e-mail comments on a draft of this report to us on March 1, 2001. The Department of Labor made suggestions for clarifying information in the report. We modified the report where appropriate. The Department of Labor also stated that, given the wealth of evidence in our report indicating that displacement and churning are limited, our conclusions regarding the use of these practices could be stronger. We did not strengthen our characterization of the extent to which displacement and churning may be occurring because we believe that our conclusion appropriately reflects the strength of our methodology and resulting data. 
Agencies in California and Texas responsible for the WOTC program also reviewed our draft report regarding our description of the credit program in their state and our analysis of state data. The agencies stated that they had no suggestions for changes in our report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we are sending copies of this report to Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; the Honorable Elaine L. Chao, Secretary of Labor; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; Mark Heilman, Chief, Job Services Division, California Employment Development Department; and John Carlson, WOTC Coordinator, Texas Workforce Commission. Copies of this report will be made available to others upon request. If you have any questions regarding this report, please contact me or James Wozny at (202) 512-9110. Key contributors to this report are acknowledged in appendix IV. The objectives of this report were to determine (1) the characteristics of employers who have participated in the WOTC program and (2) the extent, if any, to which employers have practiced displacement and churning. To obtain information on the characteristics of employers, we analyzed national tax data from the Statistics of Income Division of the Internal Revenue Service for 1997, the most recent year that data were available, and state WOTC data from agencies in California and Texas for 1997 through 1999. To obtain information relating to the extent of displacement and churning, we surveyed a stratified probability sample of employers who have participated in the WOTC program in California and Texas. Our survey of employers is discussed in more detail below. 
For information relating to churning, we also analyzed WOTC and unemployment insurance data for California and Texas. With these data, we determined the total earnings and length of employment of WOTC-certified employees and analyzed this information for evidence concerning the extent and likelihood of churning. Our methodology for this analysis is discussed in detail in appendix III. For additional information relating to displacement, we analyzed national employment data in the Commerce Department's Current Population Survey (CPS) for 1995 through 1999. We used the CPS to estimate employment rates for members of groups targeted by the credit and members of groups not targeted by the credit but who may substitute in employment for target group members. To obtain information relating to the extent of displacement and churning, we identified participating employers from databases of employees who had applied for certification under the WOTC program. These databases are maintained by the state agencies in California and Texas that are responsible for determining the eligibility of employees as members of targeted groups and issuing certifications of eligibility to employers. Our desired survey population initially was managers who were hiring WOTC program employees nationwide. However, since this information is kept by each state office in various forms, it was not feasible to assemble a national sampling frame. Therefore, we used data from two of the five states with the largest numbers of WOTC employee participants in 1999. California and Texas were the two states of the five largest with manageable electronic databases of WOTC employees in 1999. We identified employers from these lists by their unique employer identification numbers (EIN), which are used by IRS. 
In order to have a population of employers with repeated and recent experience with the program, we included only those who had hired at least one certified employee, hired at least once in 1999, and hired at least once in 1997 or 1998. To identify employers from the databases of WOTC-eligible employees, we aggregated the employees according to their employer's EIN. For the purposes of our sample, we defined "employer" as a unique EIN and selected a stratified random sample of 157 employers from the 975 total employers in California and 148 employers from the 863 total employers in Texas. The strata were defined by how many WOTC employees the employer hired. Because employers who had more than 100 WOTC hires accounted for 80 percent of the total WOTC hires, those employers hiring more than 100 employees were a separate stratum from those hiring between 2 and 100 WOTC employees. In this way, we were able to sample more employers with larger numbers of WOTC hires. Table 4 shows the breakdown by state and stratum of the number of employers in the population, the number selected into the sample, and the number who responded to the survey. In total, we sampled 305 employers and received responses from 225, for an overall response rate of 74 percent. In addition to the EINs for the employers associated with WOTC-eligible employees, the databases included limited information for a contact person. To try to ensure that our surveys reached the correct person at the employer site, we contacted every sampled employer by phone first. In this initial phone call, we explained the purpose of the survey, the kinds of questions we would be asking, and the location for which we were interested in obtaining information, and we asked for the name of the most appropriate respondent. Most initial contacts indicated that they were the most appropriate respondent or that they would receive the survey and forward it as necessary. 
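The frame construction and stratified draw described above can be sketched as follows. The record layout, EINs, and sample sizes are invented for illustration; only the logic (the repeated-and-recent-hiring filter and the separate strata for employers with up to 100 versus more than 100 WOTC hires) mirrors the text.

```python
# Illustrative sketch of the sampling-frame filter and stratified
# random draw described in the methodology. All data are hypothetical.
import random
from collections import defaultdict

def build_frame(certified_hires):
    """Map each employer EIN to its list of WOTC hire years, keeping
    only employers with a 1999 hire and a 1997 or 1998 hire."""
    years_by_ein = defaultdict(list)
    for ein, hire_year in certified_hires:
        years_by_ein[ein].append(hire_year)
    return {ein: yrs for ein, yrs in years_by_ein.items()
            if 1999 in yrs and (1997 in yrs or 1998 in yrs)}

def stratified_sample(frame, n_small, n_large, seed=1):
    """Draw separately from the small-hirer and large-hirer strata."""
    rng = random.Random(seed)
    small = [e for e, yrs in frame.items() if len(yrs) <= 100]
    large = [e for e, yrs in frame.items() if len(yrs) > 100]
    return (rng.sample(small, min(n_small, len(small))),
            rng.sample(large, min(n_large, len(large))))
```

Sampling the large-hirer stratum separately, as the report describes, ensures that the employers accounting for 80 percent of WOTC hires are well represented despite being few in number.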
Approximately 4 weeks after the initial mailout, we conducted a second mailout to those who had not yet responded. Approximately 4 weeks after that, we followed up with all remaining nonrespondents by telephone, reminding them that they had not responded and asking them to complete a shorter version of the questionnaire over the telephone. Because the survey results come from a sample, all results are estimates that are subject to sampling errors. These sampling errors measure the extent to which samples of these sizes and structure are likely to differ from the populations they represent. Each of the sample estimates is surrounded by a 95-percent confidence interval, indicating that we can be 95-percent confident that the interval contains the actual population value. Unless otherwise noted, the 95-percent confidence intervals for all percent estimates in the letter of the report do not exceed plus or minus 10 percentage points around the estimate. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of error, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted may introduce variability into our survey results that is difficult to measure. We conducted pretests of the survey to evaluate the wording of the questions. One particular source of nonsampling error unique to this survey involves the location to which that respondent's answers refer. In some cases, the employer or EIN that we selected corresponded to a very large corporation, and our contact was in a hiring division located outside the state or local office of interest. In the initial phone calls, the location of interest was specified; however, the respondent may have responded with a different location in mind or may have been unable to take into account variation in hiring practices across several local offices. 
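The plus-or-minus-10-percentage-point bound quoted above can be illustrated with the standard large-sample interval for an estimated proportion. This is a simplified sketch: it ignores the finite-population and stratification adjustments a full survey analysis would apply, and the numbers below are illustrative only.

```python
# Simplified 95-percent confidence interval for a sampled proportion.
# Omits finite-population and stratification corrections.
import math

def proportion_ci(p_hat, n, z=1.96):
    """95-percent confidence interval for an estimated proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# With 225 respondents, an estimate near 71 percent carries a margin
# of roughly 6 percentage points, inside the 10-point bound cited.
low, high = proportion_ci(0.71, 225)
```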
Careful pretesting of the survey did not uncover such issues, but this possibility may lead to additional variation in our survey results. Our survey and state agency data pertain only to participating employers in California and Texas. However, to assure ourselves that our findings are likely to apply to WOTC employers in the rest of the nation, we examined federal laws and regulations related to the credit, surveyed state administrators responsible for the credit program, and analyzed data on the participating employers. The federal tax benefits offered by the WOTC are the same across all states. Therefore, we have no reason to believe that employers in California and Texas respond differently to these incentives than employers in other states. We spoke to the officials who were responsible for administering the WOTC program in all 50 states, and they all confirmed that their states made no effort to either encourage or discourage displacement or churning. Moreover, employers that operate in multiple states account for most of the WOTC hires in California and Texas. We found no significant differences between employers in California and Texas in the results of our survey and agency data analyses, suggesting that our conclusions will be generalizable to employers in other states as well. We did not verify the state and federal databases we used. However, agreements between the Department of Labor and state WOTC offices require the states to conduct audits of the accuracy of state WOTC records. A review of studies of the accuracy of unemployment insurance data conducted for the National Research Council concluded that the data appear to be accurate. The review noted that employers are required by law to report the data, and intentional inaccuracies are subject to penalties. This same review of studies found that the CPS data are a valuable source of information on the national low-income population, with broad and fairly accurate measures of income. 
However, the study noted that sample sizes might be small for some subpopulations (e.g., welfare recipients in particular states) and the percentage of some subpopulations covered by the survey appears to have declined modestly in recent years. The sample size for the targeted and nontargeted groups in our analysis was sufficiently large that the confidence intervals for the estimated employment rates were no more than 6 percentage points on either side of the estimate. We concluded that the slight decline in coverage of welfare recipients is unlikely to affect our analysis of trends in employment rates over the period. As noted, we analyzed the tax data from IRS' Statistics of Income Division. These data undergo numerous quality checks but do not include information from amended tax returns (i.e., revisions made by taxpayers themselves after their initial filings). To investigate whether reaching the maximum earnings in a given quarter affects the likelihood that employees will separate from their WOTC employers, we used state WOTC and unemployment insurance data on total earnings and duration of employment. We also used data from these sources on other employee characteristics, such as target group and occupation, and employer characteristics, such as total employment and the industry of the employer. The data were collected for 108,935 WOTC-certified employees and 5,347 employers in California and Texas for the years 1997 through 1999. We used a logistic regression model to quantify the effect of reaching the maximum earnings on the probability that the employee separates from the employer. We also used the model to estimate the effect of other employee characteristics, such as current wages (total earnings in a given quarter) and membership in a target group, on the probability of separation. The results of this analysis are presented as odds ratios in table 5. 
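The link between a fitted logistic regression and the odds ratios reported in table 5 can be sketched briefly. The coefficient values below are hypothetical; the point is only the mechanics: the model expresses the log-odds of separation as a linear function of employee characteristics, and exponentiating a coefficient yields the odds ratio for that characteristic.

```python
# Minimal sketch of the logistic-regression mechanics, with invented
# coefficients. exp(coefficient) is the odds ratio for a characteristic.
import math

def separation_probability(intercept, coef, has_characteristic):
    """P(separation) from log-odds = intercept + coef * indicator."""
    log_odds = intercept + coef * has_characteristic
    return 1.0 / (1.0 + math.exp(-log_odds))

# A coefficient of 0 gives an odds ratio of exp(0) = 1, i.e., the
# characteristic has no effect on the likelihood of separation.
coef_max_earnings = 0.0  # hypothetical: reaching maximum earnings
odds_ratio_max_earnings = math.exp(coef_max_earnings)
```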
An odds ratio is a measure of relative risk of the occurrence of an event--in this case, the separation from an employer. The reported odds ratios indicate the effect of a particular characteristic (e.g., reaching the maximum earnings) on the probability of separation, controlling for the effects of other characteristics included in the analysis. The estimate of the effect, represented by the odds ratio, is the net effect of the characteristic (i.e., net of the effects of all other characteristics). If the characteristic increases the probability of separation, the odds ratio will be greater than 1, and if it decreases the probability of separation, the odds ratio will be less than 1. This interpretation is slightly different when the characteristics are different categories. An example of such a "categorical" characteristic is membership in a target group where the categories are welfare recipients, veterans, food stamp youth, and so on. In such cases, the analysis omits one of the categories (called the "reference group") and tests whether the included categories have greater or less chance of separation relative to the omitted category. An odds ratio of greater than 1 indicates greater probability of separation, while an odds ratio of less than 1 indicates less probability of separation. Table 5 shows that reaching the maximum earnings has no statistically significant effect on the odds that employees will separate from their employers. The variable called "maximum earnings" indicates the quarter in which an employee's cumulative earnings are between $5,000 and $7,000. This interval includes $6,000 as its midpoint and indicates that reaching the maximum occurs in the quarter when the employee is within $1,000, more or less, of the maximum earnings eligible for the credit. 
The odds ratio for this variable is not significantly different from 1, meaning that employees whose earnings are within $1,000 of the maximum in a quarter are no more likely to separate than employees whose earnings are outside this range. Table 5 shows that reaching the maximum has no effect on the likelihood of separation across most target groups as well. For example, members of the welfare target group are no more likely to separate in the quarter in which they reach the maximum than are members of other target groups who reach the maximum. We also used the logistic regression model to analyze the effect of reaching the maximum earnings separately for each state. The separate analysis permitted more characteristics of the employees and employers to be included because data on characteristics were not always available for both states. We analyzed the likelihood of separation in each state using only the characteristics in table 5, and then expanded the analysis to include the additional variable characteristics available in each state. This analysis shows that the conclusion about the effect of reaching the maximum on separation does not change when additional characteristics are added to the model. When variables indicating the occupation of the employee are added to the analysis in California, reaching the maximum earnings continues to have no effect on separation. When variables indicating the employer's industry and size in terms of total employment are added to the analysis in Texas, reaching maximum earnings is significant, but employees reaching the maximum are still slightly less likely to separate. Specifically, they are 9 percent less likely to separate than are employees who do not reach maximum earnings. In addition to those named above, Kerry Dunn, Tre Forlano, Wendy Ahmed, Sam Scrutchins, Stuart Kaufman, Barry Seltser, and Cheryl Peterson made key contributions to this report.
In 1997, 4,369 corporations earned a total of $135 million in Work Opportunity Tax Credits (WOTC). The employers who earned most of the credit were large companies with gross receipts exceeding $1 billion and engaged in nonfinancial services and retail trade. GAO's analysis of state agency data for California and Texas from 1997 through 1999 showed that 3 percent of participating employers accounted for 82 percent of all hires of WOTC-certified workers. Many employers who participated in the tax credit program in those two states in 1999 said that, besides the opportunity to obtain the credit, their participation in the program was also greatly influenced by such factors as the need to address a labor shortage and the opportunity to be a good corporate citizen. The results of GAO's two-state analysis indicate a low probability of replacing employees who were not eligible for the tax credit.
Medicare, authorized in 1965 under Title XVIII of the Social Security Act, is a federal health insurance program providing coverage to individuals 65 years of age and older and to many of the nation's disabled. HCFA uses about 70 claims-processing contractors, called intermediaries and carriers, to administer the Medicare program. Intermediaries primarily handle part A claims (those submitted by hospitals, skilled nursing facilities, hospices, and home health agencies), while carriers handle part B claims (those submitted by providers, such as physicians, laboratories, equipment suppliers, outpatient clinics, and other practitioners). The use of incorrect billing codes is a problem faced both by public and private health insurers. Medicare pays part B providers a fee for each covered medical service identified by the American Medical Association's uniformly accepted coding system, called the physicians' Current Procedural Terminology (CPT). The coding system is complicated, voluminous, and undergoes annual changes; as a result, physicians and other providers often have difficulty identifying the codes that most accurately describe the services provided. Not only can such complexities lead providers to inadvertently submit improperly coded claims, in some cases it makes it easier to deliberately abuse the billing system, resulting in inappropriate payment. The examples in table 1 illustrate several coding categories commonly used in inappropriate ways. Commercial claims-auditing systems for detecting inappropriate billing have been available for a number of years; as early as 1991, commercial firms marketed specialized auditing systems that identify inappropriately coded claims. The potential value of such a system to Medicare has been noted both by the HHS Inspector General (in 1991) and by us (in 1995). In fact, both the Inspector General and we noted that such a tool could save the Medicare program hundreds of millions of dollars annually. 
Recognizing its need to address the inappropriate billing problem, HCFA directed its carriers to begin developing claims auditing edits in February 1991. In August 1994, it awarded a contract to further develop these claims auditing edits, called CCI, which it now owns and operates. According to HCFA, the CCI edits helped Medicare save about $217 million in 1996 by successfully identifying inappropriate claims. Nevertheless, inappropriate coding and resulting payments continue to plague Medicare. Last summer, HHS' Office of Inspector General reported that about $23 billion of Medicare's fee-for-service payments in fiscal year 1996 were improper, and that about $1 billion of this amount was attributable to incorrect coding by physicians. On September 30, 1996, HCFA initiated action to improve its capability to detect inappropriate claims and payment. It awarded a contract to HBO & Company (HBOC), a vendor marketing a claims-auditing system, to test the vendor's system in Iowa and evaluate whether it could be effectively used throughout the Medicare program. Our objective was to determine if HCFA was using an adequate methodology for testing the commercial claims auditing system in Iowa for potential implementation with its Medicare claims processing systems. To do this, we analyzed documents related to HCFA's test, including the test contract, test plans and methodologies, test results and status reports, and task orders. This analysis included assessing the limitations of the test contract, size of the test claims processing sample, representation of users involved with the test, and information provided to management in its oversight role. We also met with HCFA staff responsible for conducting the test to obtain further insight into HCFA's test methodology. While we reviewed the reports of HCFA's estimated savings, we did not independently validate the reported savings by validating the sample of paid claims used as the basis for projecting them. 
However, the magnitude of HCFA's estimated savings is in line with our earlier estimate of potential annual savings from such systems. We observed operations at the test site in Des Moines, Iowa, and assessed the carrier officials' role in the test. We visited HBOC offices in Malvern, Pennsylvania, and the Plano, Texas, headquarters of Electronic Data Systems (EDS), the part B system maintainer, into whose system the claims-auditing system was integrated. During these visits, we documented these companies' roles and responsibilities in testing the system. Also, in August 1997 at a 3-day conference at HCFA headquarters, we observed the test team's effectiveness and objectivity in discussing the progress made to date and in developing solutions to issues still needing resolution. We compared the adequacy of HCFA's test methodology with the methodologies used by other public health care insurers to test and integrate a commercial claims-auditing system. We visited offices of these insurers and analyzed documents describing their test and integration approach. Finally, we compared the approach used by these insurers with HCFA's. The insurers whose methodologies we analyzed consisted of the Department of Defense's TRICARE support office (formerly called the Civilian Health and Medical Program of the Uniform Services (CHAMPUS)) in Aurora, Colorado; Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA) in Denver, Colorado; and the Kansas and Mississippi state Medicaid agencies in Topeka, Kansas, and Jackson, Mississippi, respectively. To evaluate HCFA's decisions regarding national implementation of a commercial claims-auditing system, we reviewed the contract and other documents related to the test and evaluated their impact on HCFA's ability to implement a claims-auditing system nationally. We also discussed HCFA's rationale for these decisions with senior HCFA officials. 
Finally, to assess HCFA's experience in acquiring and using the HCFA-owned CCI claims auditing edits, we reviewed the CCI contract (and related documents). We discussed this project and its results with cognizant HCFA officials. We performed our work from July 1997 through March 1998, in accordance with generally accepted government auditing standards. HCFA provided written comments on a draft of this report. These comments are presented and evaluated in the "Agency Comments and Our Evaluation" section of this report, and are included in appendix I. HCFA used a test methodology that was comparable with processes followed by other public insurers who have successfully tested and implemented such commercial systems. HCFA's test showed that commercial claims auditing edits could achieve significant savings. Other public insurers--CHAMPVA, TRICARE, and the Kansas and Mississippi Medicaid offices--each used four key steps to test their claims-auditing systems prior to implementation. Specifically, they (1) performed a detailed comparison of their payment policies with the system's edits to determine where conflicts existed, (2) modified the commercial system's edits to comply with their payment policies, (3) integrated the system into their claims payment systems, and (4) conducted operational tests to ensure that the integrated systems properly processed claims. These insurers' activities were comprehensive and required significant time to complete. CHAMPVA took about 18 months to integrate the commercial system at one claims processing site. TRICARE took about 18 months to integrate the system at two sites. It allowed about 2 years to implement the modified system at its nine remaining sites. HCFA's methodological approach was similar. 
From the contract award on September 30, 1996, through its conclusion on December 29, 1997, HCFA and contractor staff made significant progress in integrating the test commercial system at the Iowa site and evaluating its potential for Medicare use nationwide. HCFA used two teams to concentrate separately on the policy evaluation and technical aspects of the test. The policy evaluation team consisted of HCFA headquarters individuals and Kansas City (Missouri) and Dallas regional office staff knowledgeable of HCFA policies and the CPT billing codes, as well as individuals representing the Iowa carrier and HBOC. This team conducted a detailed comparison of the commercial system's payment policy manuals with Medicare policy manuals to identify conflicting edits. The reviews identified inconsistencies that both increased and decreased the amount of Medicare payments. For example, the commercial system pays for the higher cost procedure of those deemed mutually exclusive, while Medicare policy dictates paying for the lower cost procedure. Conversely, the commercial claims-auditing system denies certain payments for assistant surgeons, whereas Medicare policy allows these payments. These and all other conflicts identified were provided to the vendor, who modified the system's edits to be consistent with HCFA policy. The technical team consisted of staff from HCFA's headquarters and its Kansas City (Missouri) and Dallas regional offices; HBOC; EDS; and the Iowa carrier. This team prepared and carried out three critical tasks. First, it developed the design specifications and related computer code necessary for integrating the commercial system into the Medicare claims-processing software. Second, it integrated the claims auditing system into the Medicare part B claims-processing system. Finally, the team conducted numerous tests of the integrated system to determine its effect on processing times and its ability to properly process claims. 
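The kind of payment-policy conflict the evaluation team uncovered can be sketched with a simplified "mutually exclusive procedure" edit. The procedure codes and fees below are invented; the point is only the policy difference the text describes: Medicare pays the lower-cost member of a mutually exclusive pair, while the commercial system as delivered paid the higher-cost member.

```python
# Hypothetical sketch of a mutually exclusive procedure edit.
# Codes and fee amounts are invented for illustration.

MUTUALLY_EXCLUSIVE_PAIRS = [("10001", "10002")]
FEES = {"10001": 120.00, "10002": 85.00}

def audit_claim(procedure_codes, pay_lower_cost=True):
    """Return the procedure codes allowed for payment on one claim."""
    allowed = set(procedure_codes)
    for a, b in MUTUALLY_EXCLUSIVE_PAIRS:
        if a in allowed and b in allowed:
            higher = a if FEES[a] >= FEES[b] else b
            lower = b if higher == a else a
            # Medicare policy (pay_lower_cost=True) denies the
            # higher-cost procedure; the unmodified commercial edit
            # denied the lower-cost one instead.
            allowed.discard(higher if pay_lower_cost else lower)
    return sorted(allowed)
```

Resolving each such conflict meant flipping the edit's behavior to the Medicare rule, which is the kind of vendor modification the policy evaluation team's comparison drove.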
HCFA management was kept apprised of the status of the test through biweekly progress reports and frequent contact with the project management team. HCFA reported that the edits in this commercial system could save Medicare up to $465 million annually by identifying inappropriate claims. Specifically, the analysis showed that the system's mutually exclusive and incidental procedure edits could save about $205 million, and the diagnosis-to-procedure edits would save about $260 million. HCFA's analysis was based on a national sample of paid claims that had already been processed by the Medicare part B systems and audited for inappropriate coding with the HCFA-owned CCI edits. While we reviewed the reports of HCFA's estimated savings, we did not independently verify the national sample from which these savings were derived. However, the magnitude of savings when added to the savings from CCI, which HCFA reported to be about $217 million in 1996, is in line with our earlier estimate that about $600 million in annual savings are possible. Test officials also concluded that the claims-processing portion of the test system's software provides little, if any, added value since the existing part B claims processing system already handles this function. Further, the test showed that integrating the commercial system's claims-processing function with the existing claims processing system could significantly increase processing time and delay payment. On November 25, 1997, HCFA officials notified the administrator about the success of the commercial system test. They reported that the test showed that the system's claims auditing edits could save Medicare up to $465 million annually, which is in addition to the savings provided by the CCI edits. Despite the success of the test, two key management decisions, if left unchanged, could have significantly delayed national implementation. 
One decision was to limit the test contract to the test itself, without a provision for nationwide implementation, thus delaying the introduction of commercial claims auditing edits into the Medicare program. The second--HCFA's initial plan following the test to award a contract to develop its own edits rather than acquiring commercial edits such as those used in the test--would not only have required additional time before implementation but could well have resulted in a system less comprehensive than commercially available edits. In March 1998, the Administrator of HCFA told us that HCFA's plans have changed. She said HCFA (1) is evaluating legal options for expediting the contracting process and (2) now plans to begin immediately to acquire commercial claims auditing edits. HCFA limited the use of the test system to its Iowa testing site--just one of its 23 Medicare part B claims-processing sites--and did not include a provision for implementation throughout the Medicare program. As a result, additional time will be needed to award another contract to implement either the test system's claims auditing edits or any other approach throughout the Medicare program. A contracting official estimated that it could take as much as a year to award another contract using "full and open" competition--the contracting method normally used for such implementation. This would involve preparing for and issuing a request for proposals, evaluating the resulting bids, and awarding the contract. HCFA's estimated savings of up to $465 million per year demonstrate the costs associated with delays in implementing such payment controls nationwide. Awarding a new contract could also result in additional expense, either to develop new edits or to substantially rework the new system's edits to fit HCFA's payment policy, if a contractor other than the one performing the original test wins the competition.
If another contractor became involved, this would mean that much of the work HCFA performed during the 15-month test would have to be redone. Specifically, this would involve evaluating the new claims auditing edits for conflict with agency payment policy. Instead of limiting the test contract to the test site, HCFA could have followed the approach used by TRICARE, which awarded a contract that provided for a phased, 3-year implementation at its 11 processing sites following successful testing. In March 1998, HCFA's administrator told us that HCFA is doing what it can to avoid any delay resulting from this limited test contract. She said HCFA is evaluating legal options to determine if other contracting avenues are available, which would allow HCFA to expedite national implementation of commercial claims auditing edits. In reporting the test results, HCFA representatives recommended that the HCFA administrator award a contract to develop HCFA-owned claims-auditing edits, which would supplement CCI, rather than to acquire these edits commercially. They provided the following key reasons for this position. First, they said this approach could cost substantially less than commercial edits because (1) HCFA would not always be required to use the same contractor to keep the edits updated, (2) it would not be required to pay annual licensing fees, and (3) the developmental cost would be much less than using commercial edits. Second, they said this approach would result in HCFA-owned claims-auditing edits, which are in the public domain, allowing HCFA to continue to disclose all policies and coding combinations to providers--as is currently done with the CCI edits. They also explained that if a vendor of a commercial claims auditing system chooses to bid, wins this contract, and agrees to allow its claims auditing edits to be in the public domain as they are with CCI, HCFA will allow the vendor to start with its existing edits, which should shorten the development time. 
We do not agree that this approach is the most cost-effective. First, upgrading the edits by moving from the contractor who develops the original edits to one unfamiliar with them would not be easy and could be costly because this is a major task, which is facilitated by a thorough clinical knowledge of the existing edits. For example, the Iowa test system contains millions of edits, which would have to be compared against annual changes in the CPT codes. Second, the annual licensing fees that HCFA would avoid with HCFA-owned edits would be offset somewhat by the need to pay a contractor with the clinical expertise offered by commercial vendors to keep the edits current. Third, while the commercial edits could cost more than HCFA-owned ones, this increased cost has been justified by HCFA's test results, which demonstrated that commercial edits provide significantly more Medicare savings than HCFA-developed edits. Regarding HCFA's initial plan to fully disclose the HCFA-owned edits as they are with CCI, this policy is not mandated by federal law or explicit Medicare policies, nor is it followed by other public insurers, and it could result in potential contractors declining to bid. In a May 1995 memorandum from HHS to HCFA, the HHS Office of General Counsel concluded that federal law and regulations do not preclude HCFA from protecting the proprietary edits and related computer logic used in commercial claims auditing systems. Further, according to HCFA's deputy director, Provider Purchasing and Administration Group, HCFA has no explicit Medicare policies that require it to disclose the specific edits used to audit providers' claims. Likewise, other public health care insurers, including CHAMPVA, TRICARE, and the two state Medicaid agencies we visited, do not have such a policy, and are indeed using commercial claims-auditing systems without disclosing the details of the edits. 
Rather than disclose the edits, these insurers notified providers that they were implementing the system and provided examples of the categories of edits that would be used to check for such disparities as mutually exclusive claims. This approach protects the proprietary nature of the commercial claims auditing edits. Finally, the development time would likely be shortened if a commercial claims auditing vendor is awarded this contract and uses its existing edits as a starting point. However, if the request for proposals requires that these edits be in the public domain, it is doubtful that such vendors would bid on this contract using their already developed edits. An executive of a vendor that has already developed a claims auditing system told us that his company would not enter into such a contractual agreement if HCFA insists on making the edits public, because this would result in the loss of the proprietary rights to his company's claims auditing edits. Although HCFA's then director of the Center for Health Plans and Providers recommended that HCFA develop its own edits, he also acknowledged that this approach could result in a less effective system than use of a commercial one. In a November 25, 1997, memorandum to the administrator assessing the results of the commercial test, the director stated that there were several "cons" to developing HCFA-owned edits. He concluded that "the magnitude of edits approved for national implementation could potentially be less, depending on the number of edits developed and reviewed for acceptance prior to the implementation date." He also stated that "there could be a perception that HCFA is unwilling to take full advantage of the technology and clinical expertise offered by vendors." Furthermore, HCFA's initial plan to develop its own claims-auditing edits was inconsistent with Office of Management and Budget (OMB) policy in acquiring information resources.
OMB Circular A-130, 8b(5)(b) states that in acquiring information resources, agencies shall "acquire off-the-shelf software from commercial sources, unless the cost-effectiveness of developing custom software to meet mission needs is clear and has been documented." HCFA has not demonstrated that its plan to develop HCFA-owned claims auditing edits is cost-effective. A key factor showing otherwise is HCFA's estimate that for every year it delays implementing claims auditing edits of the caliber of those used in the commercial test system in Iowa, about $465 million in savings could be lost. Developing comprehensive HCFA-owned claims auditing edits could take years, during which time hundreds of millions of dollars could be lost annually due to incorrectly coded claims. To illustrate: HCFA began developing its CCI database of edits in 1991 and has continued to improve it over the past 6 years. While HCFA reported that CCI identified about $217 million in savings (in the mutually exclusive and incidental procedure categories) in 1996, CCI did not identify an additional $205 million in those categories identified by the test edits, nor does it address the diagnosis-to-procedure category, where the test edits identified an additional $260 million in possible savings. Furthermore, HCFA has no assurance that the HCFA-owned edits would be as effective as available commercial edits. In March 1998, after considering our findings and other factors, the Administrator of HCFA told us that she now plans to take an approach consistent with the test results. She said she plans to acquire and implement commercial claims auditing edits. HCFA followed an approach in testing and evaluating the commercial claims auditing system that was consistent with the approach used by other public health care insurers. This test showed that using this system's edits in the Medicare program can save up to $465 million annually.
However, the Medicare program is losing millions each month that HCFA delays implementing such comprehensive claims auditing edits. Two critical HCFA decisions could have unnecessarily delayed implementation for several years and prevented HCFA from taking full advantage of the substantial savings offered by this technology. These decisions--to limit the test contract to the test and not include a provision for national implementation, and to develop HCFA's own edits rather than acquiring commercial ones--would have resulted in costly delays and could have resulted in an inferior system. However, we believe these decisions were appropriately changed by the administrator in March 1998. The administrator's current plans for expediting national implementation and acquiring commercial claims auditing edits should, if successfully implemented, help HCFA take full advantage of the potential savings demonstrated by the commercial test. To implement HCFA's current plans to expeditiously realize dollar savings in the Medicare program through the use of claims auditing edits, we recommend that the Administrator, Health Care Financing Administration proceed immediately to purchase or lease existing comprehensive commercial claims auditing edits and begin a phased national implementation, and require, in any competition, that vendors have comprehensive claims auditing edits, which at a minimum address the mutually exclusive, incidental procedure, and diagnosis-to-procedure categories of inappropriate billing codes. HCFA agreed with our recommendations in this report and stated that it is proceeding immediately with a two-phased approach for procuring and implementing commercially developed edits for the Medicare program. During the first phase, HCFA plans to immediately implement procedure-to-procedure edits, such as those described in the mutually exclusive and incidental procedure categories in table 1. 
According to HCFA, the second phase will be used to complete its determination of the consistency of diagnosis-to-procedure edits with Medicare coverage policy--which was begun during the test--and then implement the edits as quickly as possible. HCFA added that, as part of this process, it will also consider modifying national coverage policy, where appropriate, to meet program goals. It cautioned that the amount of the projected savings from the commercial test may decrease once its full analysis is complete. We are encouraged that HCFA concurs with our recommendations and is proceeding immediately to take advantage of this commercial claims auditing tool, which can save Medicare hundreds of millions of dollars annually. HCFA's comments and our detailed evaluation of them are in appendix I. As agreed with your offices, unless you publicly announce its contents earlier, we will not distribute this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Health and Human Services; the Administrator, Health Care Financing Administration; the Director, Office of Management and Budget; the Ranking Minority Members of the House Committee on Commerce and the Senate Special Committee on Aging; and other interested congressional committees. We will also make copies available to others upon request. If you have any questions, please call me at (202) 512-6253, or Mark Heatwole, Assistant Director, at (202) 512-6203. We can also be reached by e-mail at [email protected] and [email protected], respectively. Major contributors to this report are listed in appendix II. The following are GAO's comments on the Health Care Financing Administration's letter responding to a draft of this report. 1. We are encouraged that HCFA concurs with our recommendations and is proceeding immediately to take advantage of this commercial claims auditing tool.
If effectively implemented, according to test results, commercial claims auditing edits should save Medicare hundreds of millions of dollars annually. Further, we are pleased that, in addition to determining that the commercial edits are consistent with HCFA policy, HCFA also plans to evaluate its national coverage policy to determine if it also needs modification. This dual assessment should improve the overall effectiveness of the final implemented edits. Finally, although the amount of HCFA's projected savings may decrease once its full analysis is complete, its projected annual savings of $465 million is so large that, most likely, even a reduced figure will still be significant. 2. As stated, the HHS Office of the Inspector General identified its findings through a manual review. The Inspector General's report findings included examples of improper billing for incidental procedures. Thus, commercial systems could have detected some of the errors identified in the Inspector General's report. While HCFA is correct in asserting that other identified problems would not typically be identified by the type of commercial claims editing system discussed in this report, other types of automated analytical claims analysis systems are available to examine profiles of provider-submitted claims for targeting investigations of potential fraud. See our reports titled Medicare: Antifraud Technology Offers Significant Opportunity to Reduce Health Care Fraud (GAO/AIMD-95-77, Aug. 11, 1995) and Medicare Claims: Commercial Technology Could Save Billions Lost to Billing Abuse (GAO/AIMD-95-135, May 5, 1995). 3. We considered HCFA's suggested wording changes and have incorporated them as appropriate. John B. Mollet, Senior Evaluator; John G. Snavely, Staff Evaluator
Pursuant to a congressional request, GAO reviewed whether the Health Care Financing Administration (HCFA) used an adequate methodology for testing the commercial claims auditing system for potential nationwide implementation with its Medicare claims processing system. GAO noted that: (1) the test methodology HCFA used in Iowa was consistent with the approach used by other public health care insurers who have already implemented a commercial claims auditing system; (2) HCFA's test covered 15 months and included extensive work, such as modifying the system's software to comply with Medicare payment policies; (3) the test showed that the commercial claims auditing system could save Medicare up to $465 million annually with claims auditing edits that detect inappropriately coded claims; (4) these savings are in addition to any results from the correct coding initiative which, according to HCFA, saved Medicare about $217 million in 1996; (5) while HCFA used an adequate methodology to test the system and demonstrated that commercial claims auditing edits could result in significant savings, two critical management decisions would have unnecessarily delayed implementation for several years, resulting in potentially hundreds of millions of dollars in lost savings annually; (6) first, HCFA limited its 1996 test contract to the test, and did not include a provision for implementing the commercial system throughout the Medicare program; (7) thus, to acquire a commercial system for nationwide implementation, up to an additional year may be required to complete all activities necessary to plan for and award another contract; (8) this could also result in substantial rework to adapt the system if a different contractor were to win the new contract; (9) HCFA's administrator told GAO that HCFA is evaluating legal options for expediting the contracting process; (10) second, in addition to the potential delay from the test contract limitation, following the test HCFA initially 
planned to develop its own claims auditing edits rather than acquire commercial edits, such as those used in the test; (11) under this plan, HCFA would have obtained a development contractor that may, or may not, have existing claims auditing edits; (12) if the winning contractor did not have existing edits on which to build, it could take years to complete the HCFA-owned edits; (13) near the conclusion of GAO's review HCFA representatives told GAO this approach would have allowed them to make the edits available to the public and avoid being obligated to one vendor's commercial edits and related fees; and (14) public health care insurers for the Department of Defense and the Department of Veterans Affairs and several state Medicaid agencies did not take this approach, opting to lease commercial systems instead of owning the claims auditing edits.
In May 2003, the Office of Force Transformation began funding small experimental satellites to enhance the responsiveness to the warfighter and to create a new business model for developing and employing space systems. As we have reported over the past two decades, DOD's space portfolio has been dominated by larger space system acquisitions, which have taken longer, cost more, and delivered fewer quantities and capabilities than planned. The ORS initiative is a considerable departure from DOD's large space acquisition approach. The initiative aims to quickly deliver low cost, short-term tactical capabilities to address unmet needs of the warfighter. Unlike traditional large satellite programs, the ORS initiative is intended to address only a small number of unmet tactical needs--one or two--with each delivery of capabilities. It is not designed to replace current satellite capabilities or major space programs in development. Also, the initiative potentially aims to identify and facilitate ways to reduce the time and cost for all future space development efforts. As we have previously reported, managing requirements so that their development is matched with resources offers an opportunity to mature technologies in the science and technology environment--a best acquisition practice. We also have reported that the ORS initiative could provide opportunities for small companies--who often have a high potential to introduce novel solutions and innovations into space acquisitions--to compete for DOD contracts. Consolidations within the defense industrial base for space programs have made it difficult for such companies to compete. ORS could broaden the defense industrial base and thereby promote competition and innovation. Since we last reported on DOD's ORS efforts in 2006, the department has taken several steps toward establishing a program management structure for ORS and executing research and development efforts. 
Despite this progress, it is too early to determine the overall success of these efforts because most are still in their initial phases. Congress directed that DOD submit a report that sets forth a plan for the quick acquisition of low cost space capabilities and establish a Joint ORS Office to coordinate and manage the ORS initiative. In the first half of 2007, DOD delivered an ORS plan to Congress and established a Joint ORS Office. DOD created the Joint ORS Office to coordinate and manage specific science and technology efforts to fulfill joint military operational requirements for on-demand space support and reconstitution. In addition, DOD is working with other government agencies to staff the office, developing an implementation plan, and establishing a process for determining which existing requirements for short-term tactical capabilities the office should pursue. Responsiveness is an attribute desired by the entire space community, including the National Aeronautics and Space Administration and the military service laboratories. Most of the efforts under the ORS initiative are being executed by science and technology organizations and other DOD agencies. The office will be responsible for coordinating, planning, acquiring, and transitioning those efforts. Its work is to be guided by an executive committee, comprised of senior officials from DOD, the military services, the intelligence community, and other government agencies. Most requirements for needed short-term tactical capabilities are expected to come through the U.S. Strategic Command. To respond to unmet warfighter needs, ORS requirements will be based on existing validated requirements. Table 1 summarizes the status of some of DOD's efforts related to the management structure. DOD is continuing to make progress in developing TacSats--its small experimental satellite projects. 
In addition, DOD is funding research efforts by industry to facilitate the development of future capabilities and is working with industry and academia to develop standards for building satellite components. Finally, DOD is working to improve the capabilities of existing small launch vehicles and providing some funding for future launch vehicles. The TacSat experiments aim to quickly provide the warfighter with a capability that meets an identified need within available resources--time, funding, and technology. Limiting the TacSats' scope allows DOD to trade off higher reliability and performance for speed, responsiveness, convenience, and customization. Once each TacSat satellite is launched, DOD plans to test its level of utility to the warfighter in theater. If military utility is established, DOD will assess the acquisition plan required to procure and launch numerous TacSats--forming constellations--to provide wider coverage over a specific theater. As a result, each satellite's capability does not need to be as complex as that of DOD's larger satellites and does not carry with it the heightened consequence of failure as if each satellite alone were providing total coverage. DOD currently has five TacSat experiments in different stages of development (see table 2). In addition, DOD is sponsoring the development of new capabilities provided mostly by the small satellite industry. These efforts include the ORS Payload Technology Initiative, which awarded 15 contracts to satellite industry contractors for payload technology concepts that may be developed in the future. The Air Force has been funding additional research conducted by small technology companies that could provide ORS capabilities, such as faster ways of designing satellites, and identifying the types and characteristics of components based on mission requirements. 
DOD is also working to establish standards for the "bus"--the platform that provides power, attitude, temperature control, and other support to the satellite in space. Establishing interface standards for bus development would allow DOD to create a "plug and play" approach to building satellites--similar to the way personal computers are built. According to DOD officials, interface standards would allow the development of modular or common components and would facilitate building satellites--both small and large--more quickly and at a lower cost. DOD's service laboratories, industry, and academia have made significant progress to develop satellite bus standards. The service labs expect to test some standardized components on the TacSat 3 bus and system standards on the TacSat 4 bus. Table 3 provides a description of the bus standardization efforts and their status. To get new tactical space capabilities to the warfighter sooner, DOD must secure a small, low cost launch vehicle on demand. Current alternatives include Minotaur launch vehicles, ranging in cost from about $21 million to $28 million, and an Evolved Expendable Launch Vehicle--DOD's primary satellite launch vehicles--at an average cost of roughly $65 million (for medium and intermediate launchers). DOD is looking to small launch vehicles, unlike current systems, that could be launched in days, if not hours, and whose cost would better match the small budgets of experiments. Both DOD and private industry are working to develop small, low cost, on-demand launch vehicles. Notably, DOD expects the Defense Advanced Research Projects Agency's (DARPA) FALCON launch program to flight-test hypersonic technologies and be capable of launching small satellites such as TacSats. In addition to securing low cost launch vehicles, DOD plans to acquire a more responsive, reliable, and affordable launch tracking system to complement the existing launch infrastructure. 
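The "plug and play" idea can be illustrated with a short sketch: components written against a shared interface can be integrated onto any compliant bus, much as a USB device works in any personal computer. All class and method names below are invented for illustration; the actual bus standards under development define electrical and data interfaces, not software classes.

```python
# Hypothetical illustration of interface standardization for satellite buses.
# Names and numbers are invented; real standards specify hardware interfaces.
from abc import ABC, abstractmethod

class PayloadInterface(ABC):
    """Common interface every 'plug and play' payload must implement."""

    @abstractmethod
    def power_draw_watts(self) -> float: ...

    @abstractmethod
    def handle_command(self, command: str) -> str: ...

class ImagerPayload(PayloadInterface):
    """An example payload built to the common interface."""

    def power_draw_watts(self) -> float:
        return 45.0

    def handle_command(self, command: str) -> str:
        return f"imager ack: {command}"

class SatelliteBus:
    """A compliant bus accepts any payload built to the interface,
    checking only shared constraints such as the power budget."""

    def __init__(self, power_budget_watts: float):
        self.power_budget_watts = power_budget_watts
        self.payloads: list[PayloadInterface] = []

    def integrate(self, payload: PayloadInterface) -> bool:
        used = sum(p.power_draw_watts() for p in self.payloads)
        if used + payload.power_draw_watts() <= self.power_budget_watts:
            self.payloads.append(payload)
            return True
        return False
```

Because the bus depends only on the interface, a new payload type can be hosted without modifying the bus, which is the cost and schedule advantage DOD officials attribute to interface standards.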
Table 4 describes DOD's efforts to develop a launch infrastructure and their status. DOD faces several challenges in succeeding in its ORS efforts. With relatively modest resources, the Joint ORS Office must quickly respond to the warfighter's urgent needs, including gaps in capabilities, as well as continue its longer-term research and development efforts that are necessary to help reduce the cost and time of future space acquisitions. As the office negotiates these priorities, it will need to coordinate its efforts with a broad array of programs and agencies in the science and technology, acquisition, and operational communities. Historically, it has been difficult to transition programs initiated in the science and technology environment to the acquisition and operational environment. At this time, DOD lacks tools that would help the program office navigate within this environment--primarily, a plan that lays out how the office will direct its investments to meet current operational needs while at the same time pursuing innovative approaches and new technologies. The Joint ORS Office has a budget totaling about $646 million for fiscal years 2008 through 2013 and no more than 20 government staff. These resources are relatively modest when compared with the resources provided to major space programs. For example, the ORS fiscal year 2008 budget represents less than 12 percent of the budget of the Transformational Satellite Communications System program, which is in the concept development phase, and ORS staffing is about a quarter of that program's staff. While the Joint ORS Office's responsibilities are not the same as those of large, complex acquisition programs, it is expected to address urgent tactical needs that have not been met by the larger space programs. At this time, for example, the office has been asked to develop a solution to meet current communications shortfalls that cannot be met by the current Ultra High Frequency Follow-On satellite system.
And, while the office has not yet been asked, officials have told us that ORS could potentially satisfy a gap in early missile warning capabilities because of delays in the Space Based Infrared Systems program, as well as gaps in communications and navigation capabilities. Taking on any one of these efforts will be challenging for ORS as there are constraints in available technologies, time, money, and other resources that can be used to fill capability gaps. At the same time, the Joint ORS Office will be pressured to continue to sponsor longer term research and development efforts. According to the Air Force Research Laboratory, the average cost of a small satellite is about $87 million. This is substantially higher than the target acquisition cost of about $40 million for an integrated ORS satellite in the 2007 National Defense Authorization Act. In addition, concerns are being expressed that not enough funding and support are being devoted to acquiring low cost launch capabilities. Some government and industry officials believe that achieving such capabilities is a linchpin to reducing satellite development costs in the future. The current alternatives for launching ORS satellites--an Evolved Expendable Launch Vehicle and Minotaur launch vehicles--do not meet DOD's low cost goal. DARPA expects its responsive launch capabilities, currently in development, will total about $5 million to produce--a significantly lower cost than that of current capabilities. However, in order to achieve the lower cost launch capability, DOD will have to continue to fund research beyond the $15.6 million already spent on advanced technology development, facilities, test- range and mission support, and program office support. To execute both its short- and long-term efforts, the Joint ORS Office will also need to gain cooperation and consensus from a diverse array of officials and organizations. These include science and technology organizations, the acquisition community, the U.S. 
Strategic Command, the intelligence community, and industry. We have previously reported on difficulties DOD has encountered in bringing these organizations together, particularly when it comes to setting requirements for new acquisitions and transitioning technologies from the science and technology community to acquisition programs. As a new and relatively small organization, the Joint ORS Office may well find it does not have the clout to gain cooperation and consensus on what short- and long-term projects should get the highest priority. Despite the significant expectations placed on the Joint ORS Office and the challenges it faces, DOD does not have an investment plan to guide its ORS decisions. DOD has begun to develop an ORS strategy that is to identify the investments needed to achieve future capabilities. However, the strategy is not intended to become a formalized investment plan that would (1) help DOD identify how to achieve these capabilities, (2) prioritize funding, and (3) identify and implement mechanisms to enforce the plan. At the same time, there are other science and technology projects in DOD's overall space portfolio competing for the same resources, including those focused on discovering and developing technologies and materials that could enhance U.S. superiority in space. Further, as DOD's major space acquisition programs continue to experience cost growth and schedule delays, DOD could be pressured to divert funds from ORS. We have previously recommended that DOD prioritize investments for both its acquisitions and science and technology projects--the ORS plan could be seamlessly woven into an overall DOD investment plan for space. However, DOD has yet to develop this overall investment plan. Providing the warfighter with needed space capabilities in a fiscally constrained and rapidly changing technological environment is a daunting task. 
ORS provides DOD with a unique opportunity to work outside the typical acquisition channels to deliver these capabilities more quickly and less expensively. However, even at lower costs, ORS funding will face pressure from the competition for DOD's resources. As DOD moves forward, decisions on using constrained resources to meet competing demands will need to be made and reevaluated on a continuing basis. Until DOD develops an investment plan, it risks forgoing an opportunity to sustain the success of the ORS initiative. To better ensure that DOD meets the ORS initiative's goal, we recommend that the Secretary of the Air Force develop an investment plan to guide the Joint ORS Office as it works to meet urgent needs and develops a technological foundation to meet future needs. The plan should be approved by the stakeholders and should identify how to achieve future capabilities, establish funding priorities, and identify and implement mechanisms to ensure progress is being achieved. We provided a draft of this report to DOD for review and comment. DOD concurred with our findings and our recommendation but clarified that the Secretary of the Air Force, specifically the Executive Agent for Space, would be responsible for developing an investment plan since the Under Secretary of the Air Force position is vacant. Full comments can be found in appendix I. To assess DOD's progress to date in implementing its ORS goal and addressing associated challenges, we interviewed and reviewed documents from officials in Washington, D.C., at the Office of the Deputy Under Secretary of Defense for Advanced Systems and Concepts; National Security Space Office; Office of the Director of Defense Research and Engineering; Office of the Director of Program Analysis and Evaluation; Office of the Joint Chiefs of Staff; the U.S. Naval Research Laboratory; and the Office of the Assistant Secretary of the Navy for Research, Development and Acquisition. 
We also interviewed and reviewed documents from officials in Virginia at the Office of the Assistant Secretary of Defense for Networks and Information Integration; Office of the Under Secretary of the Air Force; Defense Advanced Research Projects Agency; and U.S. Army Space and Missile Defense Command. In addition, we interviewed and reviewed documents from officials at the Navy Blossom Point Satellite Tracking Facility in Maryland; Air Force Space Command, Peterson Air Force Base, Colorado; Space and Missile Systems Center, Los Angeles Air Force Base, California; the U.S. Strategic Command, Offutt Air Force Base, Nebraska; and the Air Force Research Laboratory and Joint Operationally Responsive Space Office, Kirtland Air Force Base, New Mexico. We also interviewed officials from the National Aeronautics and Space Administration, Washington, D.C., and industry representatives involved in developing small satellites and commercial launch vehicles. We reviewed and analyzed the documents that we received. We will send copies of this report to the Department of Defense and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4859 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Art Gallegos, Assistant Director; Maria Durant; Jean Harker; Arturo Holguin; and Karen Sloan.
The Department of Defense (DOD) invests heavily in space assets to provide the warfighter with intelligence, navigation, and other information critical to conducting military operations. In fiscal year 2008 alone, DOD expects to spend over $22 billion on space systems. Despite this investment, senior military commanders have reported shortfalls in tactical space capabilities in each major conflict over the past decade. To provide short-term tactical capabilities as well as identify and implement long-term solutions to developing low cost satellites, DOD initiated operationally responsive space (ORS). Following a 2006 GAO review of ORS, the Congress directed DOD to submit a report that sets forth a plan for providing quick acquisition of low cost space capabilities. This report focuses on the status of DOD's progress in responding to the Congress and is based on GAO's review and analyses of ORS documentation and interviews with DOD and industry officials. Since GAO last reported on DOD's ORS efforts in 2006, the department has taken several steps toward establishing a program management structure for ORS and executing research and development efforts. On the programmatic side, DOD provided Congress with a plan that lays out an organizational structure, defines the responsibilities of the newly created Joint ORS Office, and describes an approach for satisfying warfighters' needs. DOD has also begun staffing the office. On the research and development side, DOD has launched one of its TacSat satellites--small experimental satellites intended to quickly provide a capability that meets an identified need within available resources--and has begun developing several others. It has also made progress in developing interface standards for satellite buses--the platform that provides power, attitude control, temperature control, and other support to the satellite in space--and continued its sponsorship of efforts aimed at acquiring low cost launch vehicles. 
Despite this progress, it is too early to determine the overall success of these efforts because most are still in their initial phases. Achieving success in ORS will be challenging. With relatively modest resources, the Joint ORS Office must quickly respond to the warfighter's urgent needs, while continuing research and development efforts that are necessary to help reduce the cost and time of future space acquisitions. As it negotiates these priorities, the office will need to coordinate its efforts with a broad array of programs and agencies in the science and technology, acquisition, and operational communities. Historically, it has been difficult to transition programs from the science and technology environment to the acquisition and operational environment. At this time, DOD lacks a plan that lays out how it will direct its investments to meet current operational needs while pursuing innovative approaches and new technologies.
The World Bank and IMF have classified 42 countries as heavily indebted and poor; three quarters of these are in Africa. In 1996, creditors agreed to create the HIPC Initiative to address concerns that some poor countries would have debt burdens greater than their ability to pay, despite debt relief from bilateral creditors. In 1999, in response to concerns about the continuing vulnerability of these countries, the World Bank and the IMF agreed to enhance the HIPC Initiative by more than doubling the estimated amount of debt relief and increasing the number of potentially eligible countries. A major goal of the HIPC Initiative is to provide recipient countries with a permanent exit from unsustainable debt burdens. To date, 27 poor countries have reached their decision points, and 11 of these have reached their completion points. In 1996, to help multilateral creditors meet the cost of the HIPC Initiative, the World Bank established a HIPC Trust Fund with contributions from member governments and some multilateral creditors. The HIPC Trust Fund has received about $3.4 billion (nominal) in bilateral pledges and contributions, including $750 million in pledges from the U.S. government. The World Bank, AfDB, and IaDB face a combined financing shortfall of $7.8 billion in present value terms under the existing HIPC Initiative (see table 1). Financing the enhanced HIPC Initiative remains a major challenge for the World Bank. The total cost of the enhanced HIPC Initiative to the World Bank for 34 countries is estimated at $9.5 billion. As of June 30, 2003, the World Bank had identified $3.5 billion in financing, resulting in a gap of about $6 billion (see table 1). Donor countries will be reviewing the financing gap during the IDA-14 replenishment discussions beginning in spring 2004. If donor countries close the financing gap through future replenishments, we estimate that the U.S. 
government could be asked to contribute $1.2 billion, which is based on its historical replenishment rate of 20 percent to IDA. Over 70 percent of the funds IDA has identified thus far come from transfers of IBRD's net income to IDA. Although IBRD has not committed any of its net income for HIPC debt relief beyond 2005, we estimate that the financing gap of $6 billion could be reduced to about $3.5 billion, or by about 42 percent, if the net income transfers from the IBRD continue. Similarly, the potential U.S. share decreases by the same percentage, from $1.2 billion to about $700 million. However, transferring more of IBRD's net income to HIPC debt relief could come at the expense of other IBRD priorities. The total cost of the enhanced HIPC Initiative to the AfDB for its 32 member countries is estimated at about $3.5 billion (see table 1). As of September 2003, the AfDB had identified financing of approximately $2.3 billion, including $2 billion from the HIPC Trust Fund and about $300 million from its own resources. Thus, the AfDB faces a financing shortfall of about $1.2 billion in present value terms. We estimate that the AfDB will need about $400 million to cover its shortfall for its 23 eligible countries, as well as about $800 million for its 9 potentially eligible countries. In addition, we estimate that the U.S. share of the AfDB's financing shortfall is between $132 million and $348 million, depending on the method used to close the $1.2 billion shortfall. The IaDB expects to provide about $1.4 billion for HIPC debt relief to four countries--Bolivia, Guyana, Honduras, and Nicaragua. Most of the relief is for debt owed to the Fund for Special Operations (FSO), the concessional lending arm of the IaDB that provides financing to the bank's poorer members. As of January 2004, the IaDB had identified financing for the full $1.4 billion, about $200 million from donor contributions through the HIPC Trust Fund and $1.2 billion through its own resources. 
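The IDA gap-reduction arithmetic above can be cross-checked with a brief calculation. The dollar figures and the 20-percent historical replenishment rate come from the text; the rounding conventions are our own.

```python
# Hedged sketch reproducing the IDA financing-gap arithmetic described above.
# Figures are in billions of dollars (present value), taken from the text.

ida_gap = 6.0              # World Bank (IDA) financing gap
gap_with_ibrd = 3.5        # remaining gap if IBRD net income transfers continue
us_ida_rate = 0.20         # historical U.S. replenishment rate to IDA

reduction = 1 - gap_with_ibrd / ida_gap
print(f"Gap reduced by about {reduction:.0%}")  # about 42 percent

print(f"U.S. share falls from ${ida_gap * us_ida_rate:.1f} billion "
      f"to ${gap_with_ibrd * us_ida_rate:.1f} billion")  # $1.2 billion to $0.7 billion
```

The same 20-percent rate drives both U.S. figures, which is why the U.S. share falls by the same percentage as the gap itself.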
Although the IaDB is able to cover its full participation in the HIPC Initiative, the institution faces a reduction of about $600 million in the lending resources of its FSO lending program from 2009 through 2019 as a direct consequence of providing HIPC debt relief. According to IaDB officials, the FSO will not have enough money to lend from 2009 through 2013. To eliminate this shortfall, donor countries may be asked to provide the necessary funds through a future replenishment contribution. Assuming that donor countries agree to close the financing gap, we estimate that the U.S. government could be asked to contribute about $300 million so that the FSO can continue lending to poor countries after 2008. This estimate is based on the 50-percent rate at which the United States has historically contributed to the FSO. The $7.8 billion shortfall for the three MDBs is understated for two reasons. First, the estimated financing shortfall for two institutions--IDA and the AfDB--is understated because the data for four likely recipient countries--Laos, Liberia, Somalia, and Sudan--are unreliable. The World Bank considers existing estimates of these countries' total debt and outstanding arrears to be incomplete and subject to significant change, and it is uncertain when the countries will reach their decision points. Similarly, the estimated costs of debt relief for three of the AfDB's countries--Liberia, Somalia, and Sudan--are likely understated due to data reliability concerns. Second, the financing shortfall does not include any additional relief that may be provided to countries because their economies have deteriorated since they originally qualified for debt relief. Under the enhanced HIPC Initiative, creditors and donors could provide countries with additional debt relief above the amounts agreed to at their decision points, referred to as "topping up." 
This relief could be provided when external factors, such as movements in currency exchange rates or declines in commodity prices, cause countries' economies to deteriorate, thereby affecting their ability to achieve debt sustainability. The World Bank and IMF project that seven to nine countries may be eligible for additional debt relief, and their preliminary estimates range from $877 million to about $2.3 billion, depending on whether additional bilateral relief is included or excluded from the calculation. The additional cost to the U.S. government could range from $106 million to $207 million for assistance to the World Bank and AfDB, based on the U.S. historical replenishment rates to these banks. Furthermore, the topping-up estimate considered only the 27 countries that have reached their decision or completion point; the estimate may rise as additional countries reach their decision points. Even if the $7.8 billion shortfall is fully financed, we estimate that, if exports grow slower than the World Bank and IMF project, the 27 countries that have qualified for debt relief may need more than $375 billion in additional assistance to help them achieve their economic growth and debt relief targets through 2020. This $375 billion consists of $153 billion in expected development assistance, $215 billion in assistance to fund shortfalls from lower export earnings, and at least $8 billion for debt relief (see fig. 1). If the United States decides to help fund the $375 billion, we estimate it would cost approximately $52 billion over 18 years. According to our analysis of World Bank and IMF projections, the expected level of development assistance for the 27 countries is $153 billion through 2020. This estimate assumes that the countries will follow their World Bank and IMF development programs, including undertaking recommended reforms. 
It also assumes that countries achieve economic growth rates consistent with reducing poverty and maintaining long-term debt sustainability. These conditions will help countries meet their development objectives, including the Millennium Development Goals that world leaders committed to in 2000. These goals include reducing poverty, hunger, illiteracy, gender inequality, child and maternal mortality, disease, and environmental degradation. Another goal calls on rich countries to build stronger partnerships for development and to relieve debt, increase aid, and give poor countries fair access to their markets and technology. We estimate that 23 of the 27 HIPC countries will earn about $215 billion less from their exports than the World Bank and IMF project. The World Bank and IMF project that all 27 HIPC countries will become debt sustainable by 2020 because their exports are expected to grow at an average of 7.7 percent per year. However, as we have previously reported, the projected export growth rates are overly optimistic. We estimate that export earnings are more likely to grow at the historical annual average of 3.1 percent per year--less than half the rate the World Bank and IMF project. Under lower, historical export growth rates, countries are likely to have lower export earnings and unsustainable debt levels (see table 2). We estimate the total amount of the potential export earnings shortfall over the 2003 to 2020 projection period to be $215 billion. High export growth rates are unlikely because HIPC countries rely heavily on primary commodities such as coffee, cotton, and copper for much of their export revenue. Historically, the prices of these commodities have fluctuated, often downward, resulting in lower export earnings and worsening debt indicators. A 2003 World Bank report found that the World Bank/IMF growth assumptions had been overly optimistic and recommended more realistic economic forecasts when assessing debt sustainability. 
Since HIPC countries are assumed to follow their World Bank and IMF reform programs, any export shortfalls are considered to be caused by factors outside their control such as weather and natural disasters, lack of access to foreign markets, or declining commodity prices. Although failure to follow the reform program could result in the reduction or suspension of development assistance, export shortfalls due to outside factors would not be expected to have this result. Therefore, if countries are to achieve economic growth rates consistent with their development goals, donors would need to fund the $215 billion shortfall. Without this additional assistance, countries would grow more slowly, resulting in reduced imports, lower gross domestic product (GDP), and lower government revenue. These conditions could undermine progress toward poverty reduction and other goals. Even if donors make up the export earnings shortfall, more than half of the 27 countries will experience unsustainable debt levels. We estimate that these countries will require $8.5 to $19.8 billion more to achieve debt sustainability and debt-service goals. After examining 40 strategies for providing debt relief, we narrowed our analysis to three specific strategies: (1) switching the minimum percentage of loans to grants for future multilateral development assistance for each country to achieve debt sustainability, (2) paying debt service in excess of 5 percent of government revenue, and (3) combining strategies (1) and (2). We chose these strategies because they maximize the number of countries achieving debt sustainability while minimizing costs to donors. We found that, with this debt relief, as many as 25 countries could become debt sustainable and all countries would achieve a debt service-to-revenue ratio below 5 percent over the entire 18-year projection period (see table 3). 
In the first strategy, multilateral creditors switch the minimum percentage of loans to grants for each country to achieve debt sustainability in 2020. We estimate that the additional cost of this strategy would be $8.5 billion. The average percentage of loans switched to grants for all countries under this strategy would be 33.5 percent. Twelve countries are projected to be debt sustainable with no further assistance. In addition, 13 countries would achieve sustainability by switching between 2 percent (Benin) and 96 percent (Sao Tome and Principe) of new loans to grants. A total of 25 countries could be debt sustainable by 2020, although only 2 countries would achieve the 5-percent debt service-to-revenue target over the entire period. The second strategy is aimed at reducing each country's debt-service burden. Under this strategy, donors would provide assistance to cover annual debt service above 5 percent of government revenue. We estimate that this strategy would cost an additional $12.6 billion to achieve the goal of 5-percent debt service to revenue for all countries throughout the projection period. Under this strategy, no additional countries become debt sustainable other than the 12 that are already projected to be debt sustainable with no further assistance. While this strategy would free significant resources for poverty reduction expenditures, it could provide an incentive for countries to pursue irresponsible borrowing policies. By guaranteeing that no country would have to pay more than 5 percent of its revenue in debt service, this strategy would separate the amount of a country's borrowing from the amount of its debt repayment. Consequently, it could encourage countries to borrow more than they are normally able to repay, increasing the cost to donors and reducing the resources available for other countries. The third strategy combines strategies 1 and 2 to achieve both debt sustainability and a lower debt-service burden. 
Under this strategy, multilateral creditors would first switch the minimum percentage of loans to grants to achieve debt sustainability, and then donors would pay debt service in excess of 5 percent of government revenue. We estimate that this strategy would cost an additional $19.8 billion, including $8.5 billion for switching loans to grants, and $11.3 billion for reducing debt service to 5 percent of revenue. Under this strategy, 25 countries would achieve debt sustainability in 2020--that is, 13 countries in addition to the 12 that are projected to be debt sustainable with no further assistance. All 27 countries would reach the 5-percent debt-service goal for the duration of the projection period. However, similar to the debt-service strategy above, this strategy dissociates borrowing from repayment and could encourage irresponsible borrowing policies. If the United States decides to help fund the $375 billion, we estimate that it could cost approximately $52 billion over 18 years, both in bilateral grants and in contributions to multilateral development banks. This amount consists of $24 billion, which represents the U.S. share of the $153 billion in expected development assistance projected by the World Bank and IMF, as well as approximately $28 billion for the increased assistance to the 27 countries. Historically, the United States has been the largest contributor to the World Bank and IaDB, and the second largest contributor to the AfDB, providing between 11 and 50 percent of their funding. The U.S. share of bilateral assistance to the 27 countries we examined has historically been about 12 percent. We also analyzed the impact of fluctuations in export growth on the likelihood of these countries achieving debt sustainability. The export earnings of HIPC countries experience large year-to-year fluctuations due to their heavy reliance on primary commodities, as well as weather extremes, natural disasters, and other factors. 
We found that the higher a country's export volatility, the lower its likelihood of achieving debt sustainability. For example, Honduras has low export volatility, resulting in little impact on its debt sustainability. In contrast, Rwanda has very high export volatility, which greatly lowers its probability of achieving debt sustainability. Since volatility in export earnings reduces countries' likelihood of achieving debt sustainability, it is also likely to further increase donors' cost as countries may require an even greater than expected level of debt relief to achieve debt sustainability. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For additional information about this testimony, please contact Thomas Melito, Acting Director, International Affairs and Trade, at (202) 512-9601 or Cheryl Goodman, Assistant Director, International Affairs and Trade, at (202) 512-6571. Other individuals who made key contributions to this testimony included Bruce Kutnick, Barbara Shields, R.G. Steinman, Ming Chen, Robert Ball, and Lynn Cothern. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Heavily Indebted Poor Countries (HIPC) Initiative, established in 1996, is a bilateral and multilateral effort to provide debt relief to poor countries to help them achieve economic growth and debt sustainability. Multilateral creditors are having difficulty financing their share of the initiative, even with assistance from donors. Under the existing initiative, many countries are unlikely to achieve their debt relief targets, primarily because their export earnings are likely to be significantly less than projected by the World Bank and International Monetary Fund (IMF). In a recently issued report, GAO assessed (1) the projected multilateral development banks' funding shortfall for the existing initiative and (2) the amount of funding, including development assistance, needed to help countries achieve economic growth and debt relief targets. The Treasury, World Bank, and African Development Bank commented that historical export growth rates are not good predictors of the future because significant structural changes are under way in many countries that could lead to greater growth. We consider these historical rates to be a more realistic gauge of future growth because of these countries' reliance on highly volatile primary commodities and other vulnerabilities such as HIV/AIDS. The three key multilateral development banks we analyzed face a funding shortfall of $7.8 billion in 2003 present value terms, or 54 percent of their total commitment, under the existing HIPC Initiative. The World Bank has the most significant shortfall--$6 billion. The African Development Bank has a gap of about $1.2 billion. Neither has determined how it would close this gap. The Inter-American Development Bank is fully funding its HIPC obligation by reducing its future lending resources to poor countries by $600 million beginning in 2009. We estimate that the cost to the United States, based on its rate of contribution to these banks, could be an additional $1.8 billion. 
However, the total estimated funding gap is understated because (1) the World Bank does not include costs for four countries for which data are unreliable and (2) all three banks do not include estimates for additional relief that may be required because countries' economies deteriorated after they qualified for debt relief. Even if the $7.8 billion gap is fully financed, we estimate that the 27 countries that have qualified for debt relief may need an additional $375 billion to help them achieve their economic growth and debt relief targets by 2020. This $375 billion consists of $153 billion in expected development assistance, $215 billion to cover lower export earnings, and at least $8 billion in debt relief. Most countries are likely to experience higher debt burdens and lower export earnings than the World Bank and IMF project, leading to an estimated $215 billion shortfall over 18 years. To reach debt targets, we estimate that countries will need between $8 billion and $20 billion, depending on the strategy chosen. Under these strategies, multilateral creditors switch a portion of their loans to grants and/or donors pay countries' debt service that exceeds 5 percent of government revenue. Based on its historical share of donor assistance, the United States may be called upon to contribute about 14 percent of this $375 billion, or approximately $52 billion over 18 years.
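The U.S. share arithmetic summarized above can be reproduced with a short calculation. The component figures come from the testimony; the rounding is our own assumption.

```python
# Hedged check of the U.S. cost estimates described above.
# All figures are in billions of dollars, taken from the text.

expected_assistance = 153   # projected development assistance through 2020
export_shortfall = 215      # assistance to cover lower export earnings
debt_relief_low = 8.5       # cheapest debt relief strategy (loans to grants)

total_need = expected_assistance + export_shortfall + debt_relief_low
print(f"Total need: about ${total_need:.0f} billion")  # "more than $375 billion"

us_assistance_share = 24    # U.S. share of the $153 billion (per the text)
us_added_share = 28         # U.S. share of the increased assistance (per the text)
us_total = us_assistance_share + us_added_share
print(f"U.S. total: ${us_total} billion, "
      f"about {us_total / 375:.0%} of $375 billion")   # $52 billion, about 14 percent
```

Using the most expensive strategy ($19.8 billion rather than $8.5 billion) would raise the total need but leaves the roughly 14-percent U.S. share, and hence the $52 billion figure, essentially unchanged.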
Most Navy cardholders properly used their travel cards and paid amounts owed to Bank of America in a timely manner. However, as shown in figure 1, the Navy's average delinquency rate was nearly identical to the Army's, which, as we have previously testified, is the highest delinquency rate in the government. The Navy's quarterly delinquency rate fluctuated from 10 to 18 percent, and on average was about 6 percentage points higher than that of federal civilian agencies. As of March 31, 2002, over 8,400 Navy cardholders had $6 million in delinquent debt. We also found substantial charge-offs of Navy travel card accounts. Since the inception of the travel charge card task order between DOD and Bank of America on November 30, 1998, Bank of America has charged off over 13,800 Navy travel card accounts with $16.6 million of bad debt. Recent task order modifications allow Bank of America to institute a salary offset against DOD military personnel whose travel card accounts were previously charged off or are more than 120 days past due. As a result, as of July 31, 2002, Bank of America had recovered $5.2 million in Navy government travel card bad debts. The high level of delinquencies and charge-offs has also cost the Navy millions of dollars in lost rebates, higher fees, and substantial resources spent pursuing and collecting past due accounts. For example, we estimate that in fiscal year 2001, delinquencies and charge-offs cost the Navy $1.5 million in lost rebates, and will cost about $1.3 million in increased automated teller machine (ATM) fees annually. As shown in figure 2, the travel cardholder's rank or grade (and associated pay) is a strong predictor of delinquency problems. We found that the Navy's overall delinquency and charge-off problems are primarily associated with young, low- and mid-level enlisted military personnel with basic pay levels ranging from $12,000 to $27,000. 
According to Navy officials, low- and mid-level enlisted military personnel comprise the bulk of the operational forces and are generally young, often deployed, and have limited financial experience and resources. It is therefore not surprising to see a higher level of outstanding balances and delinquent amounts due for these personnel. Figure 2 also shows that, in contrast, the delinquency rate for civilians employed by the Navy is substantially lower. As of September 30, 2001, the delinquency rate of low- and mid-level enlisted personnel was almost 22 percent, compared to a Navy civilian rate of slightly more than 5 percent. This rate is comparable to the non-DOD civilian delinquency rate of 5 percent. The case study sites we audited exhibited this pattern. For example, at Camp Lejeune, a principal training location for Marine air and ground forces, over one-half of the cardholders are enlisted personnel. Representative of the Navy's higher delinquency rate, Camp Lejeune's quarterly delinquency rate for the 18-month period ending March 31, 2002, averaged over 15 percent and was close to 10 percent as of March 31, 2002. In contrast, at Puget Sound Navy Shipyard, where the mission is to repair and modernize Navy ships, civilian personnel earning more than $38,000 a year made up 84 percent of total government travel card holders and accounted for 86 percent of total fiscal year 2001 travel card transactions. This site's delinquency rate had declined to below 5 percent as of March 31, 2002. In combination with these demographic factors, a weak overall control environment, flawed policies and procedures, and a lack of adherence to valid policies and procedures contributed to the significant delinquencies and charge-offs. Further discussion of these breakdowns is provided later in this testimony. Our work identified numerous instances of potentially fraudulent and abusive activity related to the travel card. 
During fiscal year 2001 and the first 6 months of fiscal year 2002, over 5,100 Navy employees wrote at least one nonsufficient funds (NSF), or "bounced," check to Bank of America as payment for their travel card bills. Of these, over 250 wrote 3 or more NSF checks, a potentially fraudulent act. Appendix III provides a table summarizing 10 examples, along with more detailed descriptions of selected cases in which cardholders might have committed fraud by writing 3 or more NSF checks to Bank of America. These 10 accounts were subsequently charged off or placed in salary offset or voluntary fixed payment agreements with Bank of America. We also found that the government cards were used for numerous abusive transactions that were clearly not for the purpose of government travel. As discussed further in appendix I, we used data mining tools to identify transactions we believed to be potentially fraudulent or abusive based upon the nature, amount, merchant, and other identifying characteristics of the transaction. Through this procedure, we identified thousands of suspect transactions. Table 1 illustrates a few of the types of abusive transactions and the amounts charged to the government travel card in fiscal year 2001 and the first 6 months of fiscal year 2002 that were not for valid government travel. Government travel cards were used for purchases in categories as diverse as legalized prostitution services, jewelry, gentlemen's clubs, gambling, cruises, and tickets to sporting and other events. The numbers of instances and amounts shown include both cases in which the cardholders paid the bills and cases in which they did not. We found that 50 cardholders used their government travel card to purchase over $13,000 in prostitution services from two legalized brothels in Nevada. 
Charges were processed by these establishments' merchant bank, and authorized by Bank of America, in part because a control afforded by the merchant category code (MCC), which identifies the nature of the transactions and is used by DOD and other agencies to block improper purchases, was circumvented by the establishments. In these cases, the transactions were coded to appear as restaurant and dining or bar charges. For example, the merchant James Fine Dining, which actually operates as a brothel known as Salt Wells Villa, characterizes its services as restaurant charges, which are allowable and not blocked by the MCC control. According to one assistant manager at the establishment, this is done to protect the confidentiality of its customers. Additionally, the account balances for 11 of the 50 cardholders purchasing services from these establishments were later charged off or put into salary offset. For example, one sailor, an E-2 seaman apprentice, charged over $2,200 at this brothel during a 30-day period. The sailor separated from the Navy, and his account balance of more than $3,600 was eventually charged off. We also found instances of abusive travel card activity where Navy cardholders used their cards at establishments such as gentlemen's clubs, which provide adult entertainment. Further, these clubs were used to convert the travel card to cash by supplying cardholders with actual cash or "club cash" for a 10 percent fee. For example, we found that an E-5 second class petty officer circumvented ATM cash withdrawal limits by charging, in a single transaction, $2,420 to the government travel card and receiving $2,200 in cash. Subsequently, the club received payment from Bank of America for a $2,420 restaurant charge. Another cardholder, an E- 7 chief petty officer, obtained more than $7,000 in cash from these establishments. For fiscal year 2001 and through March 2002, 137 Navy cardholders made charges totaling almost $29,000 at these establishments. 
These transactions represented abusive use of the travel cards that were clearly unrelated to official government travel. There should be no misunderstanding by Navy personnel that personal use of the card is not permitted. In fact, the standard government travel card used by most Navy personnel is clearly marked "For Official Government Travel Only" on the face of the card. Additionally, upon receipt of their travel cards, all Navy cardholders are required to sign a statement of understanding that the card is to be used only for authorized official government travel expenses. However, as part of our statistical sampling results at the three sites we audited, we estimated that personal use of the government travel card ranged from almost 7 percent of fiscal year 2001 transactions at one site to over 26 percent at another site. As shown in appendix V, cardholders who abused the card but paid the bill also used the government travel cards for the same transaction types discussed in table 1. Personal use of the card also increases the risk of charge-offs related to abusive purchases, which are costly to the government and the taxpayer. Our work found that charged-off accounts included both those of (1) cardholders who were reimbursed by the Navy for official travel expenses but failed to pay Bank of America for the related charges, thus pocketing the reimbursement, and (2) those who used their travel cards for personal purchases for which they did not pay Bank of America. Appendix IV provides a summary table and supporting narrative describing examples of abusive travel card activity where the account was charged off or placed in salary offset or voluntary fixed payment agreements with Bank of America. Furthermore, as detailed by the 10 examples in appendix V, we also found instances in which cardholders used their travel cards for personal purposes, but paid their travel card bills when they became due. 
For example, an E-5 second class petty officer reservist, whose civilian job was with the U.S. Postal Service, admitted making phony charges of over $7,200 to operate his own limousine service. In these transactions, the sailor used the travel card to pay for bogus services from his own limousine company during the first few days of the card statement cycle. By the second day after the charges were posted, Bank of America would have deposited funds--available for the business' immediate use--into the limousine business' bank account. Then, just before the travel card bill became due, the limousine business credited the charge back to the sailor's government travel card and repaid the funds to Bank of America. This series of transactions had no impact on the travel card balance, yet allowed the business to have an interest-free loan for a period. This pattern was continued over several account cycles. Navy officials were unaware of these transactions until we brought them to their attention and are currently considering what, if any, action should be taken against the cardholder. We did not always find documented evidence of disciplinary actions taken by Navy commanders and supervisors against cardholders who wrote NSF checks or had their accounts charged off or placed in salary offset. Of the 57 cardholders fitting these categories that we selected through data mining, we did not find any documented evidence that 37 had been disciplined. For example, a lieutenant commander (O-4) with the Naval Air Reserve used his travel card for personal purchases in California and frequent personal trips to Mexico. The individual did not pay his account when due and was placed in salary offset in October 2001. 
Although the agency program coordinator (APC) responsible for program oversight had apprised management of this officer's abuse of the travel card, and had initiated actions to take away the cardholder's security clearance, management had not taken any administrative action against this cardholder. In addition, of the 10 individuals who abused the card but paid their bills, only 1 was disciplined. Appendixes III, IV, and V provide further details of the extent of disciplinary actions taken against some of the cardholders we examined. In addition, we found that 27 of these same 57 travel cardholders we examined whose accounts were charged off or placed in salary offset as of March 31, 2002, still had active secret or top-secret security clearances in August 2002. Some of the Navy personnel holding security clearances who have had difficulty paying their travel card bills may present security risks to the Navy. DOD rules provide that an individual's finances are one of the factors to be considered in whether an individual should be entrusted with a security clearance. However, we found that Navy security officials were unaware of these financial issues and consequently could not consider their potential effect on whether these individuals should continue to receive a security clearance. We have referred cases identified from our audit to the U.S. Navy Central Adjudication Facility (commonly referred to as Navy CAF) for its continued investigation. For fiscal year 2001, we identified significant breakdowns in key internal controls over individually billed travel cards. The breakdowns stemmed from a weak overall control environment, a lack of focus on oversight and management of the travel card program, and a lack of adherence to valid policies and procedures. These breakdowns contributed to the significant delinquencies and charge-offs of Navy employee account balances and potentially fraudulent and abusive activity related to the travel card. 
In contrast, one Navy unit we audited with a low average delinquency rate (4 percent) attributed its relative success to constant monitoring of delinquencies and to some monitoring of inappropriate travel card use. We found that in fiscal year 2001, management at the three case study locations we audited focused primarily on reducing delinquencies. In general, management placed little emphasis on controls designed to prevent, or provide for early detection of, travel card misuse. In addition, we identified two key overall control environment weaknesses: (1) the lack of clear, sufficiently detailed Navy travel card policies and procedures and (2) limited internal travel card audit and program oversight. First, the units we audited used DOD's travel management regulations (DOD Financial Management Regulation, volume 9, chapter 3) as the primary source of policy guidance for management of Navy's travel card program. In many areas, the existing guidance was not sufficiently detailed to provide clear, consistent travel management procedures to be followed. Second, as recognized in the DOD Inspector General's March 2002 summary report on the DOD travel card program, "Because of its dollar magnitude and mandated use, the DOD travel card program requires continued management emphasis, oversight, and improvement by the DOD. Independent internal audits should continue to be an integral component of management controls." However, no internal review report had been issued since fiscal year 1999 concerning the Navy's travel card program. We found that this overall weak control environment contributed to design flaws and weaknesses in a number of management control areas needed for an effective travel card program. For example, many problems we identified were the result of ineffective controls over issuance of travel cards.
Although DOD's policy allows an exemption from the requirement to use travel cards for certain groups or individuals with poor credit histories, we found that the Navy's practice was to facilitate Bank of America issuing travel cards--with few credit restrictions--to all applicants regardless of whether they have a history of credit problems. For the cases we reviewed, we found a significant correlation between travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. The prior and current credit problems we identified for Navy travel card holders included charged-off credit cards, bankruptcies, judgments, accounts in collections, and repeated use of NSF checks. Also, a key element of internal control, which, if effectively implemented, may reduce the risk and occurrence of delinquent accounts, is frequent account monitoring by the APC. However, some APCs, who have the key responsibility for managing and overseeing travel card holders' activities, were essentially set up to fail in their duties. Some were assigned APC responsibilities as collateral duties and given little time to perform these duties, while other full-time APCs had responsibilities for a large number of cardholders. When an APC is unable to focus on managing travel card usage because of the high number of cardholders or the extent of other duties, the rate of delinquency and potentially abusive and fraudulent transactions is adversely affected. For example, at Camp Lejeune, where the delinquency rate was over 15 percent, the six APCs we interviewed were given the role as "other duty as assigned," with most spending less than 20 percent of their available time to perform their APC responsibilities. In addition, a lack of management focus and priority on ensuring proper training for APCs resulted in some APCs being unfamiliar with the capabilities of Bank of America's credit card database that would help them to manage and oversee the travel card program. 
For example, one APC did not know that she could access reports that would help identify credit card misuse and thus enable the responsible supervisors or commanders to counsel cardholders before they became delinquency problems. With the large span of control, minimal time allotted to perform this duty, and lack of adequate training, we found that APCs generally were ineffective in carrying out their key travel card program management and oversight responsibilities. In contrast, a Navy unit we visited--Patuxent River--showed that constant monitoring of delinquency by a knowledgeable APC contributed to a lower delinquency rate. The APC at this unit had responsibility for approximately 1,200 to 1,500 active travelers monthly, but APC duties were her only responsibility. The APC informed us that she constantly monitored the government travel card program. For example, she reviewed delinquency reports several times a month to identify and promptly alert cardholders and supervisors about the status of delinquent accounts. She also told us that less frequently, but still on a monthly basis, she monitored transactions in the Bank of America database for improper and abusive uses of the card and sent out notices to the cardholders and the cardholders' supervisors if such transactions were identified. She also emphasized the use of the split disbursement payment process (split disbursements) whenever possible. Consequently, the delinquency rate for this unit was consistently lower than the Navy-wide rate and the civilian agency rate. Another area of weakness in internal controls relates to the process over the cancellation and/or deactivation of cards in case of death, retirement, or separation from the service. These ineffective controls allowed continued use of the government travel card for personal purposes, which in some instances led to charge-offs, thereby contributing to increased costs to the government. 
For example, in one Navy unit, a cardholder died in October 1999. However, ineffective controls over the notification process resulted in the APC not being aware that this had occurred. Therefore, the APC did not take actions to cancel this individual's government travel card account. Consequently, in October 2000, when the old card was about to expire, Bank of America mailed a new card to the address of record. When the card was returned with a forwarding address, the bank remailed the card and the personal identification number used to activate the card to the new address without performing other verification procedures. The card was activated in mid-December 2000, and within a month, 81 fraudulent transactions for hotel, food, and gas totaling about $3,600 were charged to the card. In January 2001, in the course of her monthly travel card monitoring, the APC noticed suspicious charges in the vicinity of the cardholder's post-of-duty. The APC took immediate action to deactivate the card, thus preventing additional charges from occurring. Upon learning of the cardholder's death from further discussion with the cardholder's unit, the APC immediately reported the case to a Bank of America fraud investigator. Investigations revealed that a family member of the cardholder might have made these charges. No payment was ever made on this account, and the entire amount was subsequently charged off. We referred this case to the U.S. Secret Service Credit Card Task Force for further investigation and potential prosecution. A chief warrant officer (W-3) at Naval Air Systems Command Atlantic repeatedly used his travel card after his retirement on December 1, 2000. The cardholder currently works for a private company. Since his retirement, the cardholder used the government travel card to make charges totaling $44,000 for hotels, car rentals, restaurants, and airline tickets.
In a number of instances, the cardholder was able to obtain the government rate--which can be substantially lower than the commercial rate--for lodging in San Diego, Philadelphia, and Cincinnati. Because the Navy does not routinely monitor cardholders' transaction reports for abusive activity and because this particular account was always paid in full, the Navy did not detect the abusive activity. Bank of America data showed that the cardholder's account was still open in early September 2002 and thus available for further charges. In another instance, a mechanic trainee at the Puget Sound Naval Shipyard was convicted of a felony for illegal possession of a firearm in October 2000 and placed on indefinite suspension by his employer in November 2000. However, neither the security office, which took action against the employee, nor the office where the individual worked notified the APC to cancel or deactivate the cardholder's government travel card account. Following his suspension, the cardholder used the government travel card to make numerous cash withdrawals and gas purchases totaling almost $4,700. The APC was not aware of these abusive charges until the monthly delinquency review identified the account as being delinquent. The account balance of $1,600 was subsequently charged off in January 2002. Although security officers at the Puget Sound Naval Shipyard referred the case to Navy CAF in October 2000, our work indicated that, as of August 2002, the suspended employee continued to maintain a secret clearance, despite the account charge-off and felony conviction. Table 2 summarizes our statistical tests of four key control activities related to basic travel transaction and voucher processing at three Navy locations. We concluded that the control was effective if the projected failure rate was from 0 to 5 percent. If the projected failure rate was from 6 to 10 percent, we concluded that the control was partially effective.
We considered controls with projected failure rates greater than 10 percent to be ineffective. Although we found significant failure rates at all three case study sites for the requirement that vouchers be filed within 5 working days of travel completion, this did not have an impact on these units' delinquency rates. However, we found substantial errors in travel voucher processing that resulted in both overpayment and underpayment of the amount that cardholders should have received for their official travel expenses. At times, these errors were substantial in comparison with the total voucher amounts. For example, we found data entry errors that resulted, in one case, in an overpayment of more than $1,700 to the traveler. In another case, failure to carefully scrutinize supporting documentation resulted in an overpayment to a traveler of more than $1,000 for cell phone calls, for which the traveler did not submit detailed documentation to support what were claimed to be calls made for business purposes. As a result of our work, the Navy unit has taken actions to recover these overpayments. DOD has taken a number of actions focused on reducing delinquencies. For example, the Department of the Navy had established a goal of a delinquency rate of no more than 4 percent. Beginning in November 2001, DOD implemented a system of wage and retirement payment offset for many employees. It also began encouraging the use of split disbursements--a payment process by which cardholders elect to have all or part of their reimbursements sent directly to Bank of America. This payment method is a standard practice of many private sector employers. Although split disbursements have the potential to significantly reduce delinquencies, this payment process is strictly voluntary at DOD. According to Bank of America, split disbursements accounted for 30 percent of total payments made by Navy employees in June 2002. 
This rate represented a large increase over fiscal year 2001, when only 16 percent of Navy payments were made through split disbursements. As a result of these actions, the Navy experienced a significant drop in charged-off accounts in the first half of fiscal year 2002. The Navy has also initiated actions to improve the management of travel card usage. The Navy has a three-pronged approach to address travel card issues: (1) provide clear procedural guidance to APCs and travelers, available on the Internet, (2) provide regular training to APCs, and (3) enforce proper use and oversight of the travel card through data mining to identify problem areas and abuses. Further, to reduce the risk of card misuse, the Navy has also begun to deactivate cards while travelers are not on travel status and close a number of inactive cards, and plans to close inactive cards semi-annually to eliminate credit risk exposure. The Navy is also pursuing the use of "pre-funded" debit or stored value cards for high- risk travelers--funds would be available on the cards when travel orders were issued in an amount authorized on the order. Further, the DOD Comptroller created a DOD Charge Card Task Force to address management issues related to DOD's purchase and travel card programs. We met with the task force in June 2002 and provided our perspectives on both programs. The task force issued its final report on June 27, 2002. To date, many of the actions that DOD has taken primarily address the symptoms rather than the underlying causes of the problems with the program. Specifically, actions to date have focused on dealing with accounts that are seriously delinquent, which are "back-end" or detective controls rather than preventive controls. To effectively reform the travel program, DOD and the Navy will need to work to prevent potentially fraudulent and abusive activity and severe credit problems with the travel card. 
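The split disbursement mechanism described above divides each travel reimbursement between the card issuer and the traveler. The following sketch assumes a simple arrangement in which the outstanding card balance is paid first; the function and field names are hypothetical, not any actual DOD or Bank of America interface.

```python
def split_disbursement(reimbursement, card_balance):
    """Send up to the outstanding travel card balance directly to the
    card issuer; pay any remainder to the traveler. This models the
    voluntary split disbursement payment process described above."""
    to_issuer = min(reimbursement, card_balance)
    return {"to_issuer": to_issuer, "to_traveler": reimbursement - to_issuer}
```

Because the issuer is paid directly out of the reimbursement, the cardholder never holds funds owed to the bank, which is why this process can significantly reduce delinquencies when used.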
We are encouraged that the DOD Comptroller recently took action to deactivate the travel cards of all cardholders who have not been on official government travel within the last 6 months. However, additional preventive solutions are necessary if DOD is to effectively address these issues. To that end, we will be issuing a related report in this area with specific recommendations, including a number of preventive actions that, if effectively implemented, should substantially reduce delinquencies and potentially fraudulent and abusive activity related to Navy travel cards. For example, we plan to include recommendations that will address actions needed in the areas of exempting individuals with histories of financial problems from the requirement to use a travel card; providing sufficient infrastructure to effectively manage and provide day-to-day monitoring of travel card activity related to the program; deactivating cards when employees are not on official travel; taking appropriate disciplinary action against employees who commit fraud or abuse of the travel card; ensuring that information on travel card fraud or abuse of cardholders with secret or top-secret security clearances is provided to appropriate security officials for consideration in whether such clearances should be suspended or revoked; and moving towards mandating use of the split disbursement payment process. The defense authorization bill for fiscal year 2003 passed by the Senate reflected a move in this direction. This bill would change the voluntary use of split disbursements by authorizing the Secretary of Defense to require that any part of an employee's travel allowance be disbursed directly to the employee's travel card issuer for payment of official travel expenses. The defense authorization bill for fiscal year 2003 passed by the House does not contain comparable authority. As of September 12, 2002, the bill (H.R. 4546) was in conference. Mr. 
Chairman, Members of the Subcommittee, and Senator Grassley, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information regarding this testimony, please contact Gregory D. Kutz at (202) 512-9505 or [email protected] or John J. Ryan at (202) 512-9587 or [email protected]. We used as our primary criteria applicable laws and regulations, including the Travel and Transportation Reform Act of 1998, the General Services Administration's Federal Travel Regulation, and the DOD Financial Management Regulations, Volume 9, Travel Policies and Procedures. We also used as criteria our Standards for Internal Control in the Federal Government and our Guide to Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in our internal control standards to the practices followed by management at our three case study locations. To assess the magnitude and impact of delinquent and charged-off accounts, we compared the Navy's delinquency and charge-off rates to those of other DOD services and agencies and federal civilian agencies. We also analyzed the trends in the delinquency and charge-off data from the third quarter of fiscal year 2000 through the first half of fiscal year 2002. In addition, we obtained and analyzed Bank of America data to determine the extent to which Navy travel card holders wrote NSF checks to pay their travel card bills. We also obtained documented evidence of disciplinary action against cardholders with accounts that were in charge-off or salary offset status or had NSF checks written in payment of those accounts. We accepted hard copy file information and verbal confirmation by independent judge advocate general officials as documented evidence of disciplinary action. We also used data mining to identify Navy individually billed travel card transactions for audit.
Our data mining procedures covered the universe of individually billed Navy travel card activity during fiscal year 2001 and the first 6 months of fiscal year 2002, and identified transactions that we believed were potentially fraudulent or abusive. However, our work was not designed to identify, and we did not determine, the extent of any potentially fraudulent or abusive activity related to the travel card. To assess the overall control environment for the travel card program at the Department of the Navy, we obtained an understanding of the travel process, including travel card management and oversight, by interviewing officials from the Office of the Undersecretary of Defense (Comptroller), Department of the Navy, Defense Finance and Accounting Service (DFAS), Bank of America, and the General Services Administration, and by reviewing applicable policies and procedures and program guidance they provided. We visited three Navy units to "walk through" the travel process, including the management of travel card usage and delinquency, and the preparation, examination, and approval of travel vouchers for payment. We also assessed actions taken to reduce the severity of travel card delinquencies and charge-offs. Further, we contacted one of the three largest U.S. credit bureaus to obtain credit history data and information on how credit scoring models are developed and used by the credit industry for credit reporting. To test the implementation of key controls over individually billed Navy travel card transactions processed through the travel system--including the travel order, travel voucher, and payment processes--we obtained and used the database of fiscal year 2001 Navy travel card transactions to review random samples of transactions at three Navy locations. 
Because our objective was to test controls over travel card expenses, we excluded credits and miscellaneous debits (such as fees) from the population of transactions used to select a random sample of travel card transactions to audit at each of the three Navy case study units. Each sampled transaction was subsequently weighted in the analysis to account statistically for all charged transactions at each of the three units, including those that were not selected. We selected three Navy locations for testing controls over travel card activity based on the relative amount of travel card activity at the three Navy commands and at the units under these commands, the number and percentage of delinquent accounts, and the number and percentage of charged-off accounts. Each of the units within the commands was selected because of the relative size of the unit within the respective command. Table 3 presents the sites selected and the universe of fiscal year 2001 transactions at each location. We performed tests on statistical samples of travel card transactions at each of the three case study sites to assess whether the system of internal control over the transactions was effective, as well as to provide an estimate, by unit, of the percentage of transactions that were not for official government travel. For each transaction in our statistical sample, we assessed whether (1) there was an approved travel order prior to the trip, (2) the travel voucher payment was accurate, (3) the travel voucher was submitted within 5 days of the completion of travel, and (4) the travel voucher was paid within 30 days of the submission of an approved travel voucher. We considered transactions not related to authorized travel to be abuse and incurred for personal purposes. 
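The sampling analysis described above projects a failure rate for each control attribute from the weighted sample and then classifies the control against the thresholds stated earlier (0 to 5 percent effective, 6 to 10 percent partially effective, greater than 10 percent ineffective). A minimal sketch of that arithmetic follows; the record field names are hypothetical assumptions, and real projections would also carry confidence intervals.

```python
def weighted_failure_rate(sample):
    """Project a control's failure rate (in percent) from a weighted
    random sample: each transaction's weight statistically accounts for
    the unsampled transactions it represents at that unit."""
    total = sum(t["weight"] for t in sample)
    failed = sum(t["weight"] for t in sample if not t["passed"])
    return 100.0 * failed / total

def classify_control(failure_rate_pct):
    """Apply the thresholds stated earlier: 0-5 percent effective,
    6-10 percent partially effective, over 10 percent ineffective."""
    if failure_rate_pct <= 5:
        return "effective"
    if failure_rate_pct <= 10:
        return "partially effective"
    return "ineffective"
```

As the testimony notes, such projections apply only to the population of transactions at the sampled case study site, not to all Navy cardholders.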
The results of the samples of these control attributes, as well as the estimate for personal use--or abuse--related to travel card activity, can be projected to the population of transactions at the respective test case study site only, not to the population of travel card transactions for all Navy cardholders. Table 4 shows the results of our test of the key control related to the authorization of travel (approved travel orders were prepared prior to dates of travel). Table 5 shows the results of our test for effectiveness of controls in place over the accuracy of travel voucher payments. Table 6 shows the results of our tests of two key controls related to timely processing of claims for reimbursement of expenses related to government travel--timely submission of the travel voucher by the employee and timely approval and payment processing. To determine if cardholders were reimbursed within 30 days, we used payment dates provided by DFAS. We did not independently validate the accuracy of these reported payment dates. We briefed DOD managers, Navy managers, including the Assistant Secretary of the Navy (Financial Management and Comptroller) officials, unit commanders, and APCs; and Bank of America officials on the details of our audit, including our findings and their implications. We incorporated their comments where appropriate. We did not audit the general or application controls associated with the electronic data processing of Navy travel card transactions. We conducted our audit work from January 2002 through September 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President's Council on Integrity and Efficiency. Following this testimony, we plan to issue a report, which will include recommendations to DOD and the Navy for improving internal controls over travel card activity. 
Tables 7, 8, and 9 show the grade, rank (where relevant), and the associated basic pay rates for fiscal year 2001 for the Navy's and Marine Corps' military personnel and civilian personnel. Table 12 shows cases of travel card use for personal expenses where the cardholder paid the bill.
This testimony discusses the Department of the Navy's internal controls over the government travel card program. The Navy's average delinquency rate of 12 percent over the last 2 years is nearly identical to the Army's, which has the highest delinquency rate in the Department of Defense, and is 6 percentage points higher than that of federal civilian agencies. The Navy's overall delinquency and charge-off problems, which have cost the Navy millions in lost rebates and higher fees, are primarily associated with lower-paid, enlisted military personnel. In addition, lack of management emphasis and oversight has resulted in management failure to promptly detect and address instances of potentially fraudulent and abusive activities related to the travel card program. During fiscal year 2001 and the first 6 months of fiscal year 2002, over 250 Navy personnel might have committed bank fraud by writing three or more nonsufficient fund checks to Bank of America, while many others abused the travel card program by failing to pay Bank of America charges or using the card for inappropriate transactions such as prostitution and gambling. However, because Navy management was often not aware of these activities, disciplinary actions were not consistently taken against these cardholders. GAO also found a significant relationship between travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. Many cardholders whose accounts were charged off or put in salary offset had bankruptcies and accounts placed in collection prior to receiving the card. The Navy's practice of authorizing a travel card to be issued to virtually anyone who asked for it compounded an already existing problem by giving those with a history of bad financial management additional credit.
Although GAO found that Navy management had taken some corrective actions to address delinquencies and misuse, additional preventive solutions are necessary if Navy is to effectively address these issues.
Ex-Im operates under the authority of the Export-Import Bank Act of 1945, as amended. It is an independent agency of the U.S. government. Ex-Im's mission is to support jobs in the United States by facilitating the export of U.S. goods and services. In fiscal year 2012, Ex-Im authorized about $35.8 billion, for 3,796 transactions, to support U.S. exports. Ex-Im is part of the U.S. Trade Promotion Coordinating Committee, an interagency committee chaired by Commerce and tasked with coordinating the export promotion and financing activities of federal agencies. Ex-Im is also a key participant in the National Export Initiative, a strategy announced in 2010 to double U.S. exports by 2015 to support U.S. employment. Ex-Im provides four types of financing: direct loans, loan guarantees, working capital guarantees, and export credit insurance.

Direct loans: Medium- and long-term fixed-rate loans Ex-Im provides directly to foreign buyers of U.S. goods and services.

Loan guarantees: Medium- and long-term guarantees under which Ex-Im will pay the lender if the foreign buyer of U.S. goods and services, who received the loan, defaults.

Working capital guarantees: Guarantees to lenders that enable U.S.-based companies to obtain short-term loans that facilitate the export of goods and services. Working capital guarantee loans may be approved for a revolving line of credit that supports multiple export sales or a single loan that supports a specific export contract.

Insurance: Short- and medium-term insurance Ex-Im provides to U.S. exporters to protect them against the risk of nonpayment by foreign buyers for commercial or political reasons.

To balance the interests of multiple stakeholders and Ex-Im's mission to support U.S. jobs through export financing, Ex-Im has a domestic content policy regarding the amount of U.S. content directly associated with the goods and services exported from the United States.
Ex-Im's short-term transaction content policy requires at least 50 percent U.S. content. For medium- and long-term transactions, there is no minimum U.S. content requirement to receive a portion of financing, but Ex-Im's support is limited to the lesser of (1) 85 percent of the total value of all eligible goods and services in the U.S. export transaction, or (2) 100 percent of the value of the domestic content in all eligible goods and services in the U.S. export transaction. To be eligible for support, goods must be shipped from the United States. Other industrial countries have their own export credit agencies. For example, the other Group of Seven (G-7) countries all have at least one export credit agency. G-7 agencies differ in the magnitude and types of their activities. All offer medium- and long-term officially supported export credits. Export credit agencies also can provide other products and services that can complicate comparisons among institutions. Ex-Im's mission of supporting domestic jobs through exports is unique among the G-7 agencies. Ex-Im's charter states that the bank's objectives are to contribute to maintaining or increasing the employment of U.S. workers by financing and facilitating exports through loans, guarantees, insurance, and credits. This mission underlies certain Ex-Im policies, such as its economic impact analysis requirement and its domestic content policy. Other export credit agencies' missions range from promoting and supporting domestic exports to securing natural resources. To estimate the number of U.S. jobs associated with the exports it helps finance, Ex-Im uses a methodology based on the input-output approach. To apply this methodology, Ex-Im uses a BLS data product, known as employment requirements tables (ERT), which are based on the input-output methodology. It is important to understand the ERT because these tables play an essential role in Ex-Im's jobs calculation process.
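The medium- and long-term financing cap described above reduces to a lesser-of rule. The sketch below illustrates it; the dollar amounts are hypothetical, and `max_exim_support` is an illustrative name, not part of any Ex-Im system:

```python
def max_exim_support(total_eligible_value, domestic_content_value):
    """Maximum Ex-Im support for a medium- or long-term transaction:
    the lesser of (1) 85 percent of the total value of all eligible
    goods and services or (2) 100 percent of the domestic content value.
    (Hypothetical sketch of the policy described in the text.)"""
    return min(0.85 * total_eligible_value, domestic_content_value)

# $10 million of eligible exports with $9 million of U.S. content:
# the 85-percent-of-total limit ($8.5 million) binds.
print(max_exim_support(10_000_000, 9_000_000))

# The same exports with only $6 million of U.S. content:
# the domestic content limit ($6 million) binds.
print(max_exim_support(10_000_000, 6_000_000))
```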
The ERT provide the total number of jobs (on average) supported by production in each industry. These BLS data allow Ex-Im to produce a measure that translates the value of the exports it supports in each industry into an employment estimate for that industry. In order to use the ERT, Ex-Im must rely on data from its own system. Ex-Im's four-step process estimates the value of all the exports it supports, broken out by the industries associated with those exports. By combining data from the ERT and its own system and aggregating across industries, Ex-Im produces an estimate of the total jobs its financing supported. The methodology Ex-Im uses relies on a basic input-output approach. According to Ex-Im and Commerce officials, the basic input-output approach was designated as the standard for U.S. government agencies by the Trade Promotion Coordinating Committee and has the advantage of generating a uniform jobs calculation methodology across the federal government. The logic underlying the input-output modeling approach assumes that the production of goods and services in an economy uses inputs (such as labor) in fixed proportions. Consequently, it is possible to determine the quantity of labor required for a given level of production. To apply this methodology, Ex-Im uses the ERT, data tables created by BLS, to estimate the number of jobs associated with the specific value of exports Ex-Im supports, rather than the value of total U.S. exports. The ERT are derived from a set of data showing the relationship between industries, known as input-output tables. For researchers using an input-output approach, the ERT can be used for analyses that attempt to estimate the employment effects of exports. BLS develops the ERT so that users can analyze the job impact of various types of expenditures, such as exports.
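The fixed-proportions logic behind the ERT can be illustrated with a stylized two-industry example. The numbers below are hypothetical, and BLS's actual construction of the ERT is considerably more involved; the sketch only shows how direct jobs-per-dollar ratios combine with an input-output (Leontief) structure to yield total jobs ratios that include supply-chain employment:

```python
import numpy as np

# Stylized two-industry economy (hypothetical numbers, not BLS data).
# A[i, j] = dollars of industry i's output used to make $1 of industry j's output.
A = np.array([[0.10, 0.20],
              [0.15, 0.05]])

# Direct jobs per $1 million of each industry's own output.
direct_jobs = np.array([4.0, 7.0])

# With fixed input proportions, total output needed per $1 of final demand
# is given by the Leontief inverse (I - A)^-1; weighting it by the direct
# ratios yields jobs per $1 million of final demand, direct plus indirect.
total_requirements = np.linalg.inv(np.eye(2) - A)
jobs_ratios = direct_jobs @ total_requirements

print(jobs_ratios)  # each entry exceeds the direct ratio because of supply-chain jobs
```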
The ERT contain, for 195 industries, the number of jobs required to produce one million dollars of value in each industry (this report refers to this factor as the "jobs ratio"). Because industries may vary widely in how many jobs they support per million dollars of expenditure, it is important for Ex-Im to correctly identify the industry associated with each export transaction it finances. BLS produces two types of ERT, one that includes the employment effects of both domestic and imported production and another that removes the employment effects of imports so that only domestic production is captured. Ex-Im uses the domestic ERT to estimate the number of U.S. jobs associated with its exports. While annual versions of the ERT are produced, the most current year available as of May 2013 is 2010. Using the ERT, it is possible to obtain either the jobs supported directly in a particular industry, or in a particular industry plus the industries that support its production. For example, construction directly supports jobs in the construction industry but also indirectly supports jobs in industries that supply the material necessary for construction, such as the steel industry. Ex-Im uses the value that also includes employment in supporting industries, which produces a larger jobs ratio. Sometimes this larger estimate is called the "direct plus indirect effect" or "supply chain." Ex-Im's process for using the ERT has four steps. First, it determines the industry associated with each transaction. In some cases, there could be multiple industries associated with a transaction, if Ex-Im financed multiple products in the transaction. Second, it determines the total value of exports Ex-Im supports for each industry. Third, it multiplies these export values by BLS's jobs ratio for each industry to obtain the jobs for that industry. Finally, it aggregates across all industries to produce an overall estimate. Figure 1 depicts each step of the process. 
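Steps 2 through 4 of the process described above amount to a group-and-multiply aggregation. The sketch below uses hypothetical NAICS codes, export values, and jobs ratios (the real ERT cover 195 industries):

```python
# Hypothetical transactions (NAICS industry code, export value in dollars)
# and hypothetical jobs ratios (jobs per $1 million of output) standing in
# for the BLS employment requirements tables.
transactions = [
    ("336411", 120_000_000),   # aircraft manufacturing
    ("336411", 80_000_000),
    ("333132", 25_000_000),    # oil and gas field machinery
]
jobs_ratios = {"336411": 5.2, "333132": 6.1}

# Step 2: sum the export value Ex-Im supports for each industry.
value_by_industry = {}
for naics, value in transactions:
    value_by_industry[naics] = value_by_industry.get(naics, 0) + value

# Steps 3 and 4: multiply each industry's export value (in millions) by its
# jobs ratio, then aggregate across industries into one estimate.
jobs_estimate = sum(
    (value / 1_000_000) * jobs_ratios[naics]
    for naics, value in value_by_industry.items()
)
print(jobs_estimate)
```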
In step 1, Ex-Im either uses the industry code provided by the applicant (the exporter or the lender) or relies on its engineers (whom Ex-Im considers its in-house industry experts) to identify the appropriate North American Industry Classification System (NAICS) code for the contracts associated with each transaction Ex-Im finances or supports. Ex-Im translates its data on transactions into the same industry groups (i.e., NAICS-based codes) used by BLS. The method by which Ex-Im obtains the NAICS code varies by length of repayment term. For short- and medium-term financing and working capital credit, the applicant (either the exporter or the lender) provides the NAICS code. For long-term financing, Ex-Im engineers work with the exporters and project sponsor to determine the NAICS code. According to Ex-Im officials, to verify and assign NAICS codes in long-term financing, Ex-Im uses both the guidance provided by the codebook for assigning NAICS codes and the experience of the engineer. In step 2, Ex-Im estimates a dollar value of exports it supports, referred to as the export value. It does this for each transaction it finances. However, according to Ex-Im officials, because Ex-Im provides different types of financial products, it uses two different methods to determine the export values. 1. For some financial products, such as direct loans and loan guarantees, Ex-Im determines the export value after authorization-- but before disbursement--by using information provided on the exporter's certificate. Specifically, Ex-Im determines the export value by using the net contract price--the aggregate price of all goods and services to be exported (i.e., U.S. content plus eligible foreign content that does not include local costs). Ex-Im includes the value of the purchase of goods and services that were financed by entities other than Ex-Im. In other words, the export value is the value of exports in purchase orders that were at least partially financed by Ex-Im. 
According to Ex-Im officials, they generally provided approximately 83 percent of the financing for medium- and long-term transactions for fiscal year 2010 through fiscal year 2012. 2. For other financial products, such as short-term insurance or working capital, Ex-Im uses the entire value of the credit or the insurance policy as the proxy for the export value. Because the export value is not known at the time of authorization, Ex-Im cannot use the net contract price to determine the export value. These products include revolving lines of credit that may be drawn down multiple times during the available period. Under this type of support, a domestic exporter can access the credit to make purchases and later repay the debt, thereby making additional credit available. According to Ex-Im, this approach may result in an understatement of the total value of the exports, as multiple purchases can occur without ever reaching the limit. However, Ex-Im also confirmed that using the entire value of the credit or insurance policy could result in an overstatement, if all the credit is not used. At the end of step 2, Ex-Im creates a summary table, where each row contains the sum of export value in an industry. In step 3, Ex-Im multiplies the export values for each industry by the appropriate jobs ratio from the ERT. Finally, in step 4 it sums across all of the industries to obtain a single estimate for the number of jobs it supports. Using this process, Ex-Im estimated 255,000 jobs supported in 2012. To illustrate, Ex-Im used the following steps: Ex-Im determined that it supported approximately $40 billion of exports. On average, in fiscal year 2012, every million dollars of exports supported by Ex-Im was associated with 6.5 jobs (based on the industries that used Ex-Im financing, and the ERT). Finally, multiplying approximately 40 billion dollars of exports by 6.5 jobs per million results in approximately 255,000 jobs.
In order to verify our understanding of Ex-Im's jobs calculation process, we obtained the individual transaction level data from Ex-Im, including the export values and industry codes for each transaction. We then merged those data with the most recent ERT from BLS and summed across all transactions. Using these data, we were able to obtain a value close to Ex-Im's exact value for the total number of jobs supported, thus confirming the process that Ex-Im described to us. For more detail about our analysis, see appendix I. The basic methodology used by Ex-Im has recognized limitations, and Ex-Im also makes certain assumptions about its data. However, in its reports, Ex-Im does not describe limitations or fully detail assumptions that are inherent to the methodology. As a result, stakeholders may not fully understand what the job number represents or how to interpret it in the proper context. Although the input-output approach on which the ERT are based is a commonly used methodology, this approach has several limitations. Some of these limitations are inherent to the ERT. Additional limitations result from assumptions Ex-Im makes about its data on the industry codes and export values for the export transactions it finances. The limitations specific to the ERT are outside of Ex-Im's direct control. For example, officials from Commerce and Ex-Im said that the data in the ERT cannot be used to distinguish between jobs that were newly created and those that were maintained. The ERT simply show the direct and indirect (also known as supply chain) employment per $1 million of sales of goods to final users for each commodity, not whether these are "jobs created" (employing previously unemployed people or people out of the labor force, such as students), or "jobs maintained" (continuing pre-existing employment). According to BLS officials, it would be challenging to find data that can distinguish between newly created and maintained jobs.
Obtaining data detailed enough to allow a researcher to make that distinction would require many more resources than are currently available to BLS, according to these officials. They added that this is a general limitation of the input-output methodology, upon which the ERT are based, and which is a standard methodology used to calculate average employment and other inputs needed for a certain level of production. Because of the lack of specificity and limitations, Ex-Im officials report that the jobs are "associated with" or "supported by" Ex-Im financing. Moreover, the documentation accompanying the ERT also describes several limitations and assumptions to those data, including the following: The employment data are a count of jobs, not of persons employed, and treat full-time, part-time, and seasonal jobs equally. Persons who hold multiple jobs show up multiple times in the employment data. The age of the data underlying the ERT is a general limitation of BLS's employment requirements tables. The ERT incorporate a large amount of data, which takes time to collect and put in the ERT framework, according to BLS officials. Ex-Im is using the latest available ERT, the 2010 ERT; however, the industry relationships on which the ERT are based come from 2002 BEA data. BLS officials stated that the current economy may be very different from the economy in 2002, and the relationships reflected in the latest available ERT are a decade old. BLS officials acknowledged, however, that these data are the best currently available for Ex-Im to use. Furthermore, the ERT data assume average industry relationships; however, Ex-Im's clients could be different from the typical firm in the same industry. For example: The ERT that are adjusted to reflect only domestic employment assume that each industry's share of domestic versus international use of a particular input is constant across industries.
For example, these ERT assume that the automobile industry uses the same proportion of imported steel as the construction industry. Because of Ex-Im's domestic content policy, agency officials said that Ex-Im does not consider the exports supported by its financing to contain the same level of imports as the industry averages. Ex-Im officials agreed that this is a limitation but said that using BLS's adjusted ERT helps ensure that imported content is accounted for to some extent. Ex-Im officials told us they had not assessed the extent to which this limitation affects the overall jobs estimate. In addition, officials from Export Development Canada and Ex-Im and a trade policy researcher said that using input-output methodology to calculate employment estimates for specific transactions is also a limitation, since a particular export may be different from the average for that industry. The ERT also exclude the impact of spending that results from income generated by Ex-Im supported jobs, sometimes called the multiplier effect. For example, an increase in employment in a factory may result in employment at a nearby restaurant. According to BLS, including these additional consumer expenditures would result in a larger employment impact. Some limitations stem from Ex-Im's process for determining the industry and export value. As discussed previously, during step 1 (as shown in fig. 1), Ex-Im determines the industry associated with each transaction. However, in some cases, Ex-Im has been unable to determine the industry code. In cases where the NAICS code is missing for transactions, Ex-Im has used the average across all of its other industries as the jobs ratio. In almost all of the cases we identified with missing NAICS codes (and positive export values), the type of support was short-term insurance.
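The fallback for missing NAICS codes described above can be sketched as follows; the function name and the ratios are illustrative, not Ex-Im's actual code:

```python
def jobs_ratio_for(naics, jobs_ratios):
    """Return the jobs ratio for a transaction's industry, falling back to
    the average across all known industries when the NAICS code is missing
    (hypothetical sketch of the fallback described in the text)."""
    if naics in jobs_ratios:
        return jobs_ratios[naics]
    return sum(jobs_ratios.values()) / len(jobs_ratios)

# Hypothetical jobs ratios (jobs per $1 million of output).
ratios = {"336411": 5.2, "333132": 6.1, "311111": 7.3}

print(jobs_ratio_for("336411", ratios))  # known industry: its own ratio
print(jobs_ratio_for(None, ratios))      # missing code: average of all ratios
```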
According to Ex-Im, in short-term insurance, the lender may not know at the time of authorization which exporter will benefit from the insurance coverage, and this may explain why the NAICS code is not identified. Ex-Im's jobs calculation methodology is also sensitive to certain assumptions about how it determines the export value based on its financing. For example, as discussed previously in step 2, using the authorized amount as the export value for short-term insurance transactions could overstate or understate the actual export value. In addition, according to Ex-Im officials, the export value includes the value of the purchase of goods and services that were financed by entities other than Ex-Im. Finally, according to government officials and trade policy researchers, the methodology that Ex-Im uses does not answer the question of what would have happened without Ex-Im financing. A Commerce report and trade policy researchers we consulted noted that in a high unemployment economy, additional exports may result in additional jobs. However, in a low unemployment economy, additional exports may result in jobs shifting from one firm to another, without an increase in total employment. Ex-Im reports the number of jobs its financing supports and the methodology it uses but does not describe the limitations or fully detail the assumptions related to its data or methodology. Ex-Im first reported the total number of jobs it supports in its 2010 annual report and started providing an overview of its methodology in its 2011 report. The 2012 report states that the Trade Promotion Coordinating Committee identified this basic methodology as the official U.S. government calculation of jobs supported through exports. 
The report further states that Ex-Im uses the latest available domestic ERT from BLS (which is based on input-output tables from BEA), National Income and Product Accounts data (also from BEA), and BLS industry employment data to calculate the number of jobs associated with Ex-Im supported exports of goods and services. Ex-Im has also reported the number of jobs it supports in various other documents, including reporting to comply with the Government Performance and Results Act, the Chairman's statements to Congress, its website, and press releases. Some press releases that announce new transactions also state the number of jobs associated with a specific transaction. Most of the press releases we reviewed provide only a brief statement about how Ex-Im calculates its job estimate. For example, an October 2, 2012, press release announcing $105 million in financing for an aquarium in Brazil states: "The transaction will support approximately 700 American jobs, according to bank estimates derived from Departments of Commerce and Labor data and methodology." Ex-Im officials told us they use the results of its jobs calculations for reporting purposes only. According to Ex-Im officials, Ex-Im calculates the number of jobs supported for the transactions reviewed by Ex-Im's Board of Directors, at the request of one of its board members. Ex-Im board members stated that the purpose of reporting these numbers is to give Congress a sense of the employment effects of Ex-Im activities; they do not use them for decision making. Board members also told us that the chief consideration when making a financing decision is the credit worthiness of the firm. Officials stated that they do not make decisions based on how many jobs would be supported by a particular transaction. However, none of Ex-Im's reporting discusses limitations or fully details the assumptions in its data or in the methodology it uses. 
Most of the limitations and assumptions are not specific to Ex-Im, but are common to the methodology. For example, Ex-Im's brief discussion of the methodology in its 2012 annual report does not explain that the methodology does not allow it to differentiate between the number of new jobs that were created and the number of jobs maintained as a result of its financing. In addition, Ex-Im does not specify that jobs associated with the multiplier effect are not captured in its jobs estimates. Further, the report does not state that the employment estimate is an overall count of jobs, not full-time equivalents. Thus, the number of jobs that Ex-Im says it supports can include part-time and seasonal jobs. Similarly, its press releases that include the number of jobs associated with a specific transaction also do not state the limitations and assumptions associated with the methodology. Officials said that, in reporting the number of jobs associated with Ex-Im financing, they clearly state that it is an estimate. Because it is a nonfinancial and unaudited number, the caveat of "estimate" seemed sufficient, according to Ex-Im officials. According to GAO's Standards for Internal Control in the Federal Government, effective communications should occur in a broad sense with information flowing down, across, and up the organization. Management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. By not including more information in its report, Ex-Im does not allow readers, including Congressional and public stakeholders, to fully understand what the jobs number represents or how to interpret it in the proper context. Although alternative methodologies may address some of the limitations in Ex-Im's jobs calculation methodology, these alternatives have their own limitations. 
Trade policy researchers we spoke to suggested alternative methodologies that Ex-Im could potentially use to calculate the effects of its financing on employment. However, these methodologies have their own limitations, such as not including the effects of Ex-Im financing on indirect jobs (the supply chain) and would require a significant amount of data collection by Ex-Im that would be time consuming, require more technical expertise, and cost more. One trade policy researcher we spoke to suggested that Ex-Im could conduct an assessment of firms that received Ex-Im financing in comparison to firms that did not receive Ex-Im financing. This approach, using firm-specific data, could potentially estimate whether the jobs would have existed without Ex-Im financing. For example, the German Ministry of Economics and Technology commissioned a study by the University of Munich on the employment effects of the export credit guarantees provided by the German export credit agency. This 6-month study used econometric analysis to examine firm-level data while taking into account other potential causes of export success and found that the German export credit agency's guarantees had increased exports and created jobs. According to their report, their estimate of jobs created using this approach was comparable to estimates derived from an input-output approach. However, Ex-Im officials noted that the type of data used in the German study may not be readily available in the United States to Ex-Im. Another trade policy researcher suggested a different approach using firm-level data from Census or BLS to examine job creation and destruction over time. This approach could be potentially informative of changes in the labor market not captured by a total jobs number, such as whether these are new jobs, or whether firms supported by Ex-Im are less likely to reduce employment. 
In contrast, the current input-output method used by Ex-Im provides a static look at the number of jobs supported by Ex-Im financing and does not show how the economy has gained or lost jobs over time. While the approach of using firm-level data may yield information about the creation and destruction of jobs, it may not yield a static estimate of the number of jobs supported. In addition, BLS officials stated that such an analysis would only identify whether a firm's total employment increased or decreased over time, but would not identify a new set of jobs in the firm and would not control for factors other than Ex-Im financing that could cause a change in employment. An Export Development Canada official also stated that such a methodology introduces the potential for selection bias. Furthermore, Commerce officials stated that such an analysis would be too time consuming to conduct every year. Two trade policy researchers and Ex-Im and Export Development Canada officials we spoke with said that these alternative approaches that rely on firm-level data would require more resources for data collection and analysis than does Ex-Im's current input-output based methodology. In particular, a methodology using firm-level data would require a significant amount of data collection by Ex-Im that would be time consuming, require more technical expertise, and have a monetary cost. Moreover, these alternatives may not capture the indirect (the supply chain) effect of Ex-Im financing. These trade policy researchers said that the input-output approach is appropriate given Ex-Im's limited resources and how the agency uses the number of jobs supported. Export Development Canada officials said they use an input-output based approach, which also captures the indirect (the supply chain) effect, similar to the methodology used by Ex-Im to calculate the number of jobs supported by its financing.
However, for insurance products, Export Development Canada uses the contracts for the exports it is supporting to calculate the export value. This approach allows this agency to capture export values that differ from authorized amounts since the authorized amount could overstate or understate the actual export value. Ex-Im officials said they lack the staff and resources to adopt Export Development Canada's method and that Ex-Im faced some limitations with its data systems. Additionally, using the authorized value for the short-term insurance products, Ex-Im officials said, ensures that the value is only counted once in the fiscal year it was authorized and is not counted again in subsequent fiscal years. Prior to the use of the input-output based approach, Ex-Im, as well as Export Development Canada, tried to collect information on the number of jobs associated with their financing directly from the companies that received the financing. Officials from both agencies said that they had problems with the data they received from the companies. An official from Export Development Canada also said that smaller companies found this process burdensome. According to Ex-Im, surveyed firms responded in inconsistent ways, such as claiming all employed workers at a firm were supported by the exports. They also reported that since financial intermediaries or foreign buyers often submit the applications for Ex-Im financing, they do not know the jobs impact for the U.S. exporter or service provider. Moreover, any jobs-impact information from applicants does not account for indirect jobs created in the supply chain, which the input-output approach does include. Ex-Im's primary mission is to support U.S. jobs through the exports that it finances, and it estimates the number of jobs supported by its financing in order to provide Congress and the public with a broad sense of its impact on U.S. employment. 
The jobs number reported by Ex-Im is an estimate, used by Ex-Im as an indicator of how the agency is fulfilling its mission to support U.S. employment. Although the methodology Ex-Im uses does not distinguish between jobs that were newly created or jobs that were maintained, its current methodology has certain advantages. For example, it is based on the input-output approach commonly used in economic analysis; it includes indirect jobs in the supply chain; and it can be performed using limited resources. Providing a precise accounting of the jobs supported by Ex-Im's financing may not be feasible because of the complexity and cost of doing so. While trade policy researchers we consulted identified other methodologies, they also identified limitations of those methodologies. For example, these methodologies would require more resources to conduct, would be difficult to perform on a regular basis, and would not include indirect jobs in the supply chain. Nonetheless there are important limitations and assumptions that affect Ex-Im's estimate of the number of jobs supported by its financing. While Ex-Im's reporting includes a brief overview of its methodology, it has not included a discussion of the limitations or fully detailed the assumptions of the methodology and data. The lack of detailed reporting reduces the ability of congressional and public stakeholders to fully understand what the jobs number represents and the extent to which Ex-Im's financing may have affected U.S. employment. To ensure better understanding of its jobs calculation methodology, the Chairman of Ex-Im Bank should increase transparency by improving reporting on the assumptions and limitations in the methodology and data used to calculate the number of jobs Ex-Im supports through its financing. We provided a draft of this report to Ex-Im, Commerce, and Labor for comment. We also provided relevant sections to Export Development Canada for technical comment. 
In its written comments, which are reproduced in appendix II, Ex-Im stated that it agrees with GAO's recommendation and will provide greater detail on the assumptions and limitations associated with its jobs calculation methodology. Ex-Im further stated that it will begin implementation of the recommendation this fiscal year with its 2013 annual report, which will include greater information on the assumptions and limitations of its methodology. Ex-Im will provide this information in annual reports and on its website. Commerce stated that it agrees with GAO's recommendation for improved reporting on how Ex-Im calculates the number of jobs that are supported by exports for which it provides financing. Commerce also recommended that Ex-Im make it clear that its jobs estimate is indicative of jobs supported by Ex-Im financing and is different than the estimate of jobs supported by total U.S. exports that Commerce publishes as the official estimate of the U.S. government. Commerce's comments are reproduced in appendix III. Export Development Canada stated that it recognizes the deficiencies in the input-output approach, but that it believes that compared with other potential methodologies, this approach provides the best solution. According to Export Development Canada, the input-output approach uses a simple method to capture the indirect impact of the supply chain on domestic employment. In addition, Export Development Canada said that while using firm-level data to estimate the effect of financing might offer other insights, it would also be complex to analyze, and could introduce another bias. Further, Export Development Canada said that in its experience, surveying firms directly may not lead to reliable information, and also could be burdensome to smaller firms. Ex-Im, Commerce, and Export Development Canada also provided technical comments that were incorporated, as appropriate. We received no comments from Labor. 
We are sending copies of this report to interested congressional committees, the Chairman of the Export-Import Bank of the United States, and the Secretaries of Commerce and Labor. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

The objectives of this report were to (1) describe the methodology and processes the Export-Import Bank of the United States (Ex-Im) uses to calculate the effects of its financing on employment in the United States, (2) examine the limitations of Ex-Im's approach and how Ex-Im reports on its methodology, and (3) describe alternative methodologies and their limitations. To describe the methodology and processes Ex-Im uses to calculate the effects of its financing on employment in the United States, we interviewed Ex-Im staff involved with producing the estimate and reviewed descriptions of the estimate in the most recent annual reports and other documentation provided by Ex-Im. Because Ex-Im's method uses the Bureau of Labor Statistics' (BLS) employment requirements tables (ERT), we interviewed BLS staff and reviewed technical documentation on the ERT. In addition, we reviewed the Microsoft Excel spreadsheet Ex-Im uses to perform the estimate, examining the formulas used to produce the estimate. Because Ex-Im provided the underlying raw data used by the spreadsheet, we were able to combine those data with the ERT data and replicate Ex-Im's jobs estimate through the following steps. First, we downloaded the ERT directly from the BLS website. Then, we merged the ERT with the raw data provided by Ex-Im, by industry. Finally, we multiplied the jobs ratio by the export value in Ex-Im's data (for the appropriate industry) and aggregated across transactions. Following this procedure, we obtained a value close to Ex-Im's reported jobs estimate.
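As a rough illustration, the merge-multiply-aggregate steps above can be sketched in a few lines of code. The industries, jobs ratios, and export values below are hypothetical stand-ins, not actual ERT or Ex-Im figures.

```python
# Illustrative sketch of the jobs-estimate calculation described above.
# Jobs ratios (jobs per $1 million of exports, by industry) and export
# values are hypothetical stand-ins for the BLS ERT and Ex-Im data.

# Hypothetical ERT: jobs supported per $1 million in exports, by industry
jobs_ratio = {
    "aircraft": 5.2,
    "machinery": 6.8,
    "agriculture": 9.1,
}

# Hypothetical Ex-Im transactions: (industry, export value in $ millions)
transactions = [
    ("aircraft", 400.0),
    ("machinery", 150.0),
    ("aircraft", 250.0),
    ("agriculture", 80.0),
]

def estimate_jobs(transactions, jobs_ratio):
    """Total export value by industry, multiply by each industry's
    jobs ratio, then aggregate across all industries."""
    exports_by_industry = {}
    for industry, value in transactions:
        exports_by_industry[industry] = exports_by_industry.get(industry, 0.0) + value
    return sum(jobs_ratio[ind] * val for ind, val in exports_by_industry.items())

total_jobs = estimate_jobs(transactions, jobs_ratio)
print(round(total_jobs))  # estimated jobs supported, for these illustrative inputs
```

In practice the merge step joins on industry codes rather than names, but the arithmetic is the same: export value times jobs ratio, summed across industries.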
Replicating Ex-Im's estimate helped to verify that Ex-Im followed the process and used the specific ERT that it stated it did, and that all of the raw data were reflected in its jobs estimate. We performed our replication using SAS, a computer program distinct from Excel. Based on our interviews with knowledgeable agency officials, review of relevant documentation, and replication of Ex-Im's calculation, we determined the data were sufficiently reliable for the purposes of our report. To examine the limitations of Ex-Im's approach, how Ex-Im reports on its methodology, and alternative methodologies, we reviewed relevant documentation related to Ex-Im, including recent annual reports, descriptions of Ex-Im's jobs calculation methodology, and press releases that included information on jobs supported by Ex-Im financing. We also reviewed recent GAO reports on Ex-Im and export credit agencies, and literature related to input-output methodology. In addition, we interviewed Ex-Im officials from various divisions of the organization about how they calculate the number of jobs supported by Ex-Im's financing, how they obtain data about Ex-Im's transactions, and how the jobs number is used. We also interviewed officials from the BLS at the Department of Labor to discuss the employment requirements tables (ERT). In addition, we interviewed officials from the Department of Commerce, specifically from the Bureau of Economic Analysis--which develops the data in the input- output tables that BLS uses in its ERT--and from the International Trade Administration--which also calculates the number of jobs supported by U.S. exports overall. We reviewed relevant documentation from these agencies such as technical documentation on the ERT. 
We also spoke with officials from four other countries' export credit agencies to obtain information on their efforts to determine the number of jobs associated with their financing, including the export credit agencies of Canada, Japan, France, and the United Kingdom. We selected these countries' export credit agencies because GAO had consulted with them on prior engagements based on their similarities to Ex-Im. We obtained information on a study that analyzed the employment effects of Germany's export credit agency as an example of an alternative methodology. We met with three selected trade policy researchers to obtain their perspectives on Ex-Im's methodology and discuss potential alternative methodologies to calculate the effect of Ex-Im's financing on employment. We selected these researchers because GAO had consulted with them on prior engagements related to export credit agencies based on their knowledge of the issues, or they had been recommended to us through interviews with knowledgeable government officials due to their expertise in the area. In addition, we reviewed GAO's Standards for Internal Control in the Federal Government to assess Ex- Im's communication regarding its jobs calculation methodology. We conducted this performance audit from August 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the person named above, Jose Alfredo Gomez (Director), Juan Gobel (Assistant Director), Christina Werth, Rachel Girshick, and Benjamin Bolitzer made key contributions to this report. 
Also contributing to this report were Karen Deans, Susan Offutt, Martin de Alteriis, Etana Finkler, Robert Alarapon, and Ernie Jackson.
Ex-Im provides loans, guarantees, and insurance to U.S. exporters. One of Ex-Im's primary missions is to support U.S. jobs through exports. In its 2012 annual report, Ex-Im stated that its financing helped support an estimated 255,000 export-related U.S. jobs. In 2012, Congress passed the Export-Import Bank Reauthorization Act of 2012. The act required GAO to report on the process and methodology used by Ex-Im to calculate the effects of export financing on U.S. employment. This report (1) describes the methodology and processes Ex-Im uses to calculate the effects of its financing on U.S. employment and (2) examines the limitations of Ex-Im's approach and how Ex-Im reports on its methodology, and provides additional related information. To address these objectives, GAO reviewed relevant Ex-Im documents, obtained and reviewed the data Ex-Im uses for its calculations, and interviewed agency officials and trade policy researchers.

The U.S. Export-Import Bank's (Ex-Im) methodology to calculate the number of U.S. jobs associated with the exports it helps finance has four key steps. First, Ex-Im determines the industry associated with each transaction it finances. Second, Ex-Im calculates the total value of exports it supports for each industry. Ex-Im implements these first two steps using its own data. Third, Ex-Im multiplies the export value for each industry by the Bureau of Labor Statistics (BLS) ratio of jobs needed to support $1 million in exports in that industry--a figure known as the "jobs ratio." Finally, Ex-Im aggregates across all industries to produce an overall estimate. Ex-Im reports the number of jobs its financing supports and the methodology it uses but does not describe limitations of the methodology or fully detail its assumptions. Although the BLS data tables that Ex-Im relies on are based on a commonly used methodology, this methodology has limitations.
For example, the employment data are a count of jobs that treats full-time, part-time, and seasonal jobs equally. In addition, the data assume average industry relationships, but Ex-Im's clients could be different from the typical firm in the same industry. Further, the underlying approach cannot answer the question of what would have happened without Ex-Im financing. Ex-Im does not report these limitations or fully detail the assumptions related to its data or methodology. GAO's Standards for Internal Control in the Federal Government states that, in addition to internal communication, management should ensure adequate communication with external stakeholders, which could include Congress and the public. Because of a lack of reporting on the assumptions and limitations of its methodology and data, congressional and public stakeholders may not fully understand what the jobs number that Ex-Im reports represents and the extent to which Ex-Im's financing may have affected U.S. employment. To ensure better understanding of its jobs calculation methodology, GAO recommends that Ex-Im improve reporting on the assumptions and limitations in the methodology and data used to calculate the number of jobs Ex-Im supports through its financing. Ex-Im agreed with the recommendation and stated that it would begin reporting more detailed information in its fiscal year 2013 annual report.
ACI's estimate of planned capital development costs is considerably larger than FAA's because it reported a broader base of projects. According to FAA's estimate, which includes only projects that are eligible for AIP grants, the total cost of airport development will be about $41 billion, or about $8.2 billion per year, for 2007 through 2011. (See table 1.) ACI estimates costs of about $78 billion, or about $15.6 billion per year, for the same period. These estimates differ mainly because ACI's estimate includes all future projects, whether or not they have an identified funding source or are eligible for federal funding, and also because the two are based on different estimating approaches. Projects that are eligible for AIP grants include runways, taxiways, and noise mitigation and reduction efforts; projects that are not eligible for AIP funding include parking garages, hangars, and expansions of commercial space in terminals. Several factors account for the differences between the FAA and ACI estimates of future development costs. The biggest difference stems from ACI's inclusion of projects that are not eligible for AIP grants, while FAA's estimate includes only AIP-eligible projects (see table 2). However, even when comparing just the AIP-eligible portions of the respective estimates, ACI's estimate is 20 percent greater ($8 billion in total, or $1.6 billion annually). This points to differences in how the two estimates are formed. One difference is the estimating approach: FAA's estimates cover projects for every airport in the national system, while ACI surveyed the 100 largest airports (mostly large and medium hub airports) and then extrapolated a total based on cost-per-enplanement calculations for small, medium, and large hub airports that did not respond.
Further analysis at the project level shows variances related to three other factors:

Definition--FAA data are based on planned project information taken from airport master plans and state system plans, minus projects that already have an identified funding source, while ACI includes all projects, whether or not funding has been identified. For example, ACI's estimate for Washington Dulles airport includes $278 million for an automated people mover, but FAA's estimate does not because that project is being funded by a PFC approved in 2006.

Measurement--FAA data include only the portion of a project that is eligible for AIP, while ACI estimates the total project cost. For a terminal construction project at Dulles International Airport, ACI estimated total construction costs of $1.6 billion, but FAA did not report any amount because under AIP rules only a small portion ($20 million) was eligible and the airport had exhausted the AIP funds that could be used for this type of project.

Timing--ACI and FAA estimated planned development costs for the same 5-year period, but the estimates were made at different times: the ACI survey was completed in early 2007, while FAA's estimate is based on information collected in early 2006. Further, the ACI estimate includes projects that FAA does not believe will be commissioned during the next 5 years. At Fort Lauderdale International Airport, for example, ACI reported a $700 million runway project, but FAA reports less than $200 million for the same project; according to FAA, the remaining costs fall beyond 2011.

Neither the FAA nor the ACI estimate accounts for future cost increases, such as rising construction costs. Going forward, these costs may increase, especially construction costs, which have jumped 26 percent in 30 major U.S. cities over the past three years. Industry experts predict that construction costs will continue to increase project costs.
FAA acknowledges that its development estimates may not fully reflect cost increases stemming from construction uncertainty and that annual cost increases are not captured. From 2001 to 2005, the 3,364 active airports that make up the national airport system received an average of about $13 billion per year for planned capital development from a variety of funding sources. These funds are used for both AIP-eligible and ineligible projects. The single largest source of these funds was bond proceeds, backed primarily by airport revenues, followed by AIP grants, PFCs, and state and local contributions (see table 3). The amount and source of funding vary with the size of airports. The nation's 67 larger airports, which handled almost 90 percent of the passenger traffic in 2005, accounted for 72 percent of all funding ($9.4 billion annually), while the 3,297 other smaller commercial and general aviation airports that make up the rest of the national system accounted for the other 28 percent ($3.5 billion annually). As shown in figure 1, airports' reliance on federal grants is inversely related to their size--federal grants contributed a little over $1.3 billion annually to larger airports (14 percent of their total funding) and $2.3 billion annually to smaller airports (64 percent of their total funding). Based on past funding levels, airports' funding is about $1 billion per year less than estimated planned capital development costs. If the $13 billion annual average funding continues over the next 5 years and were applied only to AIP-eligible projects, it would cover all of the projects in FAA's estimate. However, much of the funding available to airports is for AIP-ineligible projects that can attract private bond financing. We could not determine how much of this financing is directed to AIP-eligible versus ineligible projects.
Figure 2 compares the $13 billion average annual funding airports received from 2001 through 2005 (adjusted for inflation to 2006 dollars) with the $14 billion in annual planned development costs for 2007 through 2011. The $14 billion is the sum of FAA's estimated AIP-eligible costs of $8.2 billion annually and ACI's estimated ineligible costs of $5.8 billion annually. The overall difference of about $1 billion annually is not an absolute predictor of future funding shortfalls; both funding and planned development may change in the future. The difference between current funding and planned development costs for larger airports is about $600 million if both AIP-eligible and ineligible projects are considered. From 2001 through 2005, larger airports collected an average of about $9.4 billion a year for capital development, as compared to over $10 billion in annual planned development costs. Figure 3 shows the comparison of average annual funding versus planned development costs for larger airports. At $5.7 billion annually, the ineligible portion of costs is 57 percent of the total planned development costs. The difference between past funding and planned development costs for smaller airports is roughly $400 million annually. At smaller airports, average annual funding from 2001 through 2005 was about $3.6 billion a year (expressed in 2006 dollars). Annual planned development costs for smaller airports from 2007 through 2011 are estimated at about $4 billion. Figure 4 compares average annual funding to planned development costs. As the figure shows, the portion of smaller airports' project costs not eligible for AIP funding is relatively small--about $75 million annually, or about 2 percent of total planned development costs. The financial health of airports is strong and has generally improved since September 11, 2001, especially for larger airports. Passenger traffic has rebounded to 2000 levels and bond ratings have improved.
Following September 11, many airports cut back on their costs and deferred capital projects. However, credit rating agencies and financial experts now agree that larger airports are generally financially strong and have ready access to capital markets. A good indicator of airports' financial strength is the number and scale of underlying bond ratings provided by bond rating agencies. More bonds were rated in 2007 than in 2002, and more were rated at the higher end of the rating scale in 2007, meaning that the rating agencies consider them less of a risk today. Furthermore, larger airports tended to have higher ratings than smaller airports. The administration's reauthorization proposal for AIP would increase funding for larger airports, but its effect on smaller airports is uncertain because of the overall reduction in AIP and the proposed changes in how AIP grants are allocated between larger and smaller airports. The administration's fiscal year 2008 budget would reduce AIP funding from its past level of $3.5 billion in fiscal years 2006 and 2007 to $2.75 billion in 2008. The proposal also would eliminate entitlement, otherwise known as apportionment, grants for larger airports while increasing the PFC ceiling from $4.50 to $6 per passenger. While larger airports, which account for 90 percent of all passengers, will come out ahead, an increased PFC may not compensate smaller airports for the overall reduction in AIP, even with the proposed changes in how AIP is allocated between larger and smaller airports. As a separate issue, the administration's reauthorization proposal would change the way that AIP and other FAA programs are funded and may not provide enough monies for these programs, even at the reduced levels proposed by the administration. The administration's 2008 FAA reauthorization proposal would reduce AIP, change how AIP is allocated, and increase the PFC available to commercial airports. (Key changes in the proposal's many elements are outlined in appendix I.)
Unlike previous reauthorization proposals, which made relatively modest changes in the structure of the AIP program, this proposal contains some fundamental changes in the funding and structure of the AIP program. Notably, following the pattern set by the 2000 FAA reauthorization, which required larger airports to return a certain percentage of their entitlement funding in exchange for an increase in the PFC, the administration proposes eliminating entitlement grants for larger airports altogether while allowing those airports to charge a higher PFC. The reauthorization proposal would also eliminate some set-aside programs and increase the proportion of discretionary grant funds available to FAA at higher AIP funding levels. Table 4 compares AIP funding allocations under the current funding formulas to the proposed reauthorization allocations at both the current $3.5 billion level and at the proposed $2.75 billion level. Changes to the entitlement formulas--for example, removing the funding trigger in current law that doubles the amount of entitlement funds airports receive if the overall AIP funding level is above $3.2 billion--are intended to make more discretionary funding available. According to FAA officials, their objective is to increase the amount of discretionary funding for airports so that higher priority projects can be funded; however, that is achieved only when total AIP funds are greater than the $2.75 billion budgeted by the administration. For example, at $2.75 billion in AIP, the current law would generate $967 million in discretionary grants versus $866 million under the proposed reauthorization. This reverses at $3.5 billion in AIP funding, at which the proposal generates $1.328 billion in discretionary grants versus $845 million under current law.
The administration's proposed reauthorization would allow airports to increase their PFC to a maximum of $6 and to use their collections for any airport projects while forgoing their entitlement funds. A $6 PFC could generate an additional $1.1 billion for larger airports that currently have a PFC in place, far exceeding the $247 million in entitlements that FAA estimates they would forgo under this reauthorization proposal (see table 5). However, the impact on smaller airports is uncertain because they collect far less in PFCs and are more reliant on AIP for funding. A change to a $6 PFC would yield an additional $110 million for small hub airports based on airports that currently have a PFC in place, and $132 million if every one of the small hub airports had a $6 PFC. It is uncertain whether the proposed allocation of AIP under the administration's proposal would shift a greater proportion of funds to smaller airports to compensate for the overall reduction in AIP. The reauthorization proposal would also relax project eligibility criteria to allow airports to use their collections in the same way as they use internally generated revenue, including for off-airport intermodal transportation projects. The application and review process would also be streamlined; as a result, FAA would no longer approve collections but rather ensure compliance with PFC and airport revenue rules. The administration's proposal would modify the current pilot program on private ownership of airports in two key ways. First, the proposed modifications would expand eligibility from the current statutory limit of 5 airports to 15 airports. Restrictions limiting participation in the pilot program to specific airport size categories would also be eliminated. Second, the pilot program would be amended to eliminate the veto power that airlines can exercise under current law to prevent privatization transactions at commercial airports.
Under current law, the sale of an airport to private interests may only proceed if a super-majority of the airlines at that airport approve of the sale or lease. Additionally, the airline veto power to prevent fee increases higher than inflation rates would be repealed. In place of these veto powers, the airport sponsor would need to demonstrate to the Secretary of Transportation that the airlines using that airport were consulted prior to the transaction proceeding. Congress established the Airport Privatization Pilot Program in October 1996 to determine if privatization could produce alternative sources of capital for airport development and provide benefits such as improvements in customer service. It also hoped to determine if new investment and capital from the private sector could be attracted through innovative financial arrangements. Proponents of privatization believe that the privatization of airports can lead to capacity-increasing investment in airports through the commitment of private capital, lower operating costs, and greater efficiency and that privatization can increase customer satisfaction. Overall, there has been relatively little interest in the current pilot program. Six airports have applied for participation in the program and three of those airports withdrew their applications in 2001. To date, Stewart International Airport, located in Newburgh, New York, is the only airport accepted into the pilot program. The airport received this exemption in March 2005, but is currently being purchased back by a public owner, the Port Authority of New York and New Jersey. In September 2006, the City of Chicago submitted a preliminary application for Chicago Midway International Airport. FAA completed its review of the Midway preliminary application and determined that it meets the procedural requirements for participation in the pilot program. 
Consequently, the City of Chicago can now proceed to select a private operator, negotiate an agreement, and submit a final application to FAA for exemption. In addition to concerns about the level and allocation of AIP funds, another concern is that the fuel tax revenues that the administration's reauthorization proposal has designated to largely fund AIP after 2009 may not be as great as anticipated. Currently, AIP and other FAA programs are principally funded by the Airport and Airway Trust Fund (trust fund), which receives revenue from passenger ticket taxes and segment taxes, airline and general aviation fuel taxes, and other taxes. The administration's reauthorization proposal would fund air traffic control through user fees for commercial aircraft and fuel taxes for general aviation while limiting the sources of revenue for the trust fund and its uses. Under the proposal, beginning in 2009, the trust fund would continue but only to fund three programs--AIP, Research, Engineering and Development (RE&D), and Essential Air Service (EAS)--and would be funded solely by an equal fuel tax on commercial and general aviation fuel purchases and an international arrival and departure tax. FAA officials confirmed for us that in estimating fuel tax revenues they did not take into account possible reductions in fuel purchases due to the increase in the tax rates. Although we do not know by how much such purchases would decline, conventional economic reasoning, supported by the opinions of industry stakeholders, suggests that some decline would take place. Therefore, the tax rate should be set taking into consideration effects on use and the resulting impact on revenue. FAA officials told us that they believe that these effects would be small because the increased tax burden is a small share of aircraft operating costs and therefore there was no need to take its impact into account. 
Representatives of general aviation, however, have said that the impact could be more substantial. If consumption falls short of projections or Congress appropriates more funds for AIP, RE&D, or EAS than currently proposed, then fuel tax rates and the international arrival and departure tax would have to be increased, or additional funding from another source, such as the trust fund's uncommitted balance or the General Fund, would be needed.

In conclusion, Mr. Chairman, airports have rebounded financially from the September 2001 terrorist attacks. We expect the demand for air travel to continue to increase, the system capacity to be stretched, and airports to increase their demand for capital improvements to relieve congestion and improve their services. As Congress moves forward with reauthorizing FAA, it will have to decide on several key issues, including how it wants to fund and distribute grants under the AIP. While some elements of the administration's proposal are to be commended--for example, simplifying the funding formulas and giving FAA more discretion to fund high priority projects--other parts of the proposal raise concerns. For example, the extent to which the administration's proposed cuts in AIP funding will affect development at smaller airports is unclear.

For further information on this statement, please contact Dr. Gerald Dillingham at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were Paul Aussendorf, Jay Cherlow, Jessica Evans, David Hooper, Nick Nadarski, Edward Laughlin, Minette Richardson, and Stan Stenersen.

Key changes in the administration's proposal, compared with current law:

Trust fund revenues. Current law: The trust fund for all capital programs is funded by an airline ticket tax, a segment tax, international departure and arrival taxes, varying rates of fuel taxes, and other taxes; funding for AIP is appropriated from the trust fund. Proposed: The trust fund is funded by a fuel tax of 13.6 cents per gallon for commercial and general aviation and a reduced international arrival and departure tax; funding for AIP is appropriated from the trust fund. If AIP is increased, the tax rates would have to be increased, the trust fund's uncommitted balance would have to be drawn down, or another funding source would have to be found.

Large and medium hub entitlements. Current law: Up to 75 percent of entitlements for large and medium hub airports collecting a PFC are turned back to the small airport fund. Proposed: Entitlements for large and medium hub airports are eliminated by 2010.

Entitlement doubling. Current law: If AIP is greater than $3.2 billion, primary airport entitlements are doubled. Proposed: The $3.2 billion trigger for doubling entitlements is eliminated except for small and nonhub primary airports.

State apportionment. Current law: 20 percent of AIP (18.5 percent if AIP is less than $3.2 billion). Proposed: Set at the greater of 10 percent of AIP or $300 million.

Nonprimary airport entitlement. Current law: Up to $150,000 per airport. Proposed: The $150,000 minimum entitlement is eliminated and replaced by a tiered system of entitlements ranging from $400,000 for large general aviation airports to $100,000 for smaller general aviation airports; the 750 airports that have fewer than 10 operational and registered based aircraft are guaranteed nothing.

Reliever and military airport set-asides. Current law: Minimum discretionary funding set at $148 million. Proposed: The set-aside for reliever and military airports is eliminated.

Small airport fund. Current law: Funded by large and medium hub airport PFC turnbacks of up to 75 percent of PFC collections; minimum discretionary funding set at $520 million. Proposed: Small airport fund equal to 20 percent of discretionary funds.

AIP eligibility. Current law: Most types of airfield projects, excluding interest costs, nonrevenue-producing terminal space, and on-airport access project costs; general aviation airports may use their entitlement funds for some revenue-producing activities (e.g., hangars). Proposed: Expanded to include additional revenue-producing aeronautical support facilities (e.g., self-service fuel pumps) at general aviation airports.

Federal share. Current law: Government share set at 95 percent for smaller airports through 2007 and 75 percent for large and medium hub airports (80 percent for noise projects). Proposed: Eliminates the 95 percent government share except for the very smallest airports; the maximum share becomes a flexible amount with a ceiling of 90 percent, and airfield rehabilitation projects are lowered to a 50 percent maximum at large and medium hubs.

PFC ceiling. Current law: Maximum rate of $4.50 per passenger. Proposed: Maximum rate of $6 per passenger.

PFC review. Current law: All applications subject to FAA review. Proposed: Review and approval are streamlined.

PFC eligibility. Current law: PFCs can be used for all AIP-eligible projects, as well as for interest costs on airport bonds, terminal gates and related areas, and noise mitigation. Proposed: Eligibility expanded to include almost any airport-related project, including off-airport intermodal projects; up to 10 large and medium hub airports willing to assume the cost of air navigation facilities are allowed a $7 PFC.

Privatization pilot program. Current law: Up to five airports, one of each size, with strict limits on rates and charges; approval by 65 percent of airlines required. Proposed: Up to 15 airports of any size, with no limit on rates and charges and no airline veto, but subject to DOT review and approval.

To determine how much planned development would cost over the next 5 years, we obtained planned development data from the Federal Aviation Administration (FAA) and Airports Council International-North America (ACI). To determine how much airports of various sizes are spending on capital development and from which sources, we sought data on airports' capital funding because comprehensive airport spending data are limited and because, over time, funding and spending should roughly equate. We obtained capital funding data from the FAA, ACI, the National Association of State Aviation Officials (NASAO), and Thomson Financial--a firm that tracks all municipal bonds.
We screened each of these databases for accuracy to ensure that airports were correctly classified, and we compared funding streams across databases where possible. We did not, however, audit how the databases were compiled or test their overall accuracy, except in the case of state grant data from NASAO and some of the Thomson Financial bond data, which we independently confirmed. We determined the data to be sufficiently reliable for our purposes. We subtotaled each funding stream by year and airport category and added other funding streams to determine the total funding. We met with FAA, bond rating agencies, bond underwriters, airport financial consultants, and airport and airline industry associations and discussed the data and our conclusions to verify their reasonableness and accuracy. To determine whether current funding is sufficient to meet planned development for the 5-year period from 2007 through 2011 for each airport category and overall, we compared total funding to planned development. We correlated each funding stream with each airport's size, as measured by activity, and with other funding streams to better understand airports' varying reliance on them and the relationships among sources of finance. We then discussed our findings with FAA, bond rating agencies, bond underwriters, airport financial consultants, and airport and airline industry associations to determine how our findings compared with their knowledge and experiences. To determine some of the potential effects of changes to how airport development is funded under the administration's proposed FAA reauthorization legislation, we first analyzed the suggested changes to the Airport Improvement Program's (AIP) funding and allocation. In particular, we analyzed the effect of various funding levels on how the program funds would be allocated.
Second, we evaluated the effects of raising the passenger facility charge (PFC) ceiling, as the administration proposal suggests, by estimating the potential PFC collections under a $6 PFC on the basis of 2005 enplanements and collection rates assuming all airports imposed a $6 PFC. Third, we determined the status of FAA's pilot program for airport privatization. Moreover, we discussed the impact of all of the proposed changes (funding/allocation, $6 PFC, and privatization) with FAA, bond rating agencies, bond underwriters, airport financial consultants, and airport and airline industry associations. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To address the strain on the aviation system, the Federal Aviation Administration (FAA) has proposed transitioning to the Next Generation Air Transportation System (NextGen). To finance this system and to make its costs to users more equitable, the administration has proposed fundamental changes in the way that FAA is financed. As part of the reauthorization, the administration proposes major changes in the way that grants through the Airport Improvement Program (AIP) are funded and allocated to the 3,400 airports in the national airport system. In response, GAO was asked for an update on current funding levels for airport development and the sufficiency of those levels to meet planned development costs. This testimony presents capital development estimates made by FAA and Airports Council International (ACI), the chief industry association; analyzes how much airports have received for capital development and whether this is sufficient to meet future planned development; and summarizes the effects of proposed changes in funding for airport development. This testimony is based on ongoing GAO work. Airport funding and planned development data are drawn from the best available sources and have been assessed for their reliability. This testimony does not contain recommendations.

ACI's estimate for planned development costs is considerably larger than FAA's, reflecting a broader range of projects included as well as differences in when and how the estimates are made. For 2007 through 2011, FAA estimated annual planned capital development costs at $8.2 billion, while ACI estimated annual costs at $15.6 billion. The estimates differ primarily because FAA's estimate only includes projects that are eligible for AIP grants, while ACI's covers all projects, including $5.8 billion for projects not eligible for federal funding, such as parking garages. From 2001 through 2005, airports received an average of about $13 billion a year for planned capital development.
This amount covers all types of projects, including those not eligible for federal grants. The primary source of this funding was bonds, which averaged almost $6.5 billion per year, followed by federal grants and passenger facility charges (PFC), which accounted for $3.6 billion and $2.2 billion, respectively (see figure below). If airports continue to attract this level of funding for planned capital development, this amount would annually fall about $1 billion short of the $14 billion in total planned development costs (the sum of FAA's estimated $8.2 billion in eligible costs and the industry's $5.8 billion in ineligible costs). Larger airports foresee a shortfall of about $600 million annually, while smaller airports foresee a shortfall of $400 million annually. FAA's reauthorization proposal would reduce the size of AIP by $750 million but increase the amount that airports can collect from PFCs. However, the benefit from increased PFCs would accrue mostly to larger airports and may not offset a reduced AIP grants program for smaller airports. The proposal would also change the way that AIP and other FAA programs are funded. The new fuel taxes that FAA has proposed may not provide the revenues for AIP that FAA anticipates.
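The arithmetic behind these shortfall figures can be checked with a short calculation. The dollar amounts below come directly from the estimates above (in billions per year); this is an illustrative back-of-the-envelope reconciliation, not GAO's or FAA's actual model.

```python
# Back-of-the-envelope check of the annual airport funding shortfall
# described above (all figures in billions of dollars per year).

faa_eligible_costs = 8.2   # FAA estimate: AIP-eligible planned development
ineligible_costs = 5.8     # ACI estimate: projects not eligible for federal funding
total_planned = faa_eligible_costs + ineligible_costs  # $14.0 billion

historical_funding = 13.0  # average annual funding received, 2001 through 2005

shortfall = total_planned - historical_funding
print(f"Total planned development: ${total_planned:.1f} billion/year")
print(f"Projected annual shortfall: ${shortfall:.1f} billion/year")

# The reported split of the shortfall by airport size sums to the total:
larger_airports = 0.6   # about $600 million annually
smaller_airports = 0.4  # about $400 million annually
assert abs((larger_airports + smaller_airports) - shortfall) < 0.05
```

The check confirms that the $600 million and $400 million shortfalls reported for larger and smaller airports account for the roughly $1 billion annual gap between the $14 billion in total planned costs and the $13 billion in historical funding.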
Community colleges serve almost 40 percent of undergraduate students in the United States. Because most community colleges have a commitment to open access admissions policies--allowing anyone to enroll in classes--their student populations often have varied needs. For example, community colleges have a long history of serving older and part-time students by offering affordable tuition, convenient locations, and flexible course schedules (see table 1). Among their many goals, community colleges aim to prepare students who will transfer to 4-year institutions, provide workforce development and skills training, and offer noncredit programs ranging from English as a second language to skills retraining. Upon enrollment, students typically take a placement test in reading, writing, and math so that community college administrators can assess their skill level. Depending on their performance on the test, students who are not considered college-ready in these subject areas are placed into developmental education courses. Based on their assessed skills, students could be placed in one developmental education course or several. If placed, these courses will add to the time it takes these students to complete their certificate or degree, and generally do not qualify for college credit. While developmental education is a category of coursework and not a specific federal program, community colleges use a variety of federal funding sources, such as federal grants, to help fund their programs. Additionally, many community college developmental education students access federal student aid to pay for these and other classes. Generally, a student enrolled in developmental education courses is eligible for federal student aid for up to 1 academic year's worth of courses in a program leading to a degree, credential, or certificate at an eligible institution. 
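The placement process described above can be sketched as a simple cutoff rule: students scoring below a college-ready threshold in a subject are placed into that subject's developmental course. The subjects and cutoff scores below are hypothetical illustrations, not any college's actual cut scores; real placement policies vary by institution and test.

```python
# Illustrative sketch of placement-test logic at a community college.
# Cutoff scores are hypothetical, chosen only to show the mechanics.

COLLEGE_READY_CUTOFF = {"reading": 70, "writing": 70, "math": 65}

def assign_courses(scores: dict[str, int]) -> list[str]:
    """Return the developmental courses a student is placed into.

    A student scoring below the cutoff in a subject is placed into that
    subject's developmental course; an empty list means the student is
    considered college-ready in all three areas.
    """
    return [
        f"developmental {subject}"
        for subject, cutoff in COLLEGE_READY_CUTOFF.items()
        if scores.get(subject, 0) < cutoff
    ]

# A student below the cutoff in math only is placed into one course:
print(assign_courses({"reading": 85, "writing": 74, "math": 52}))
# A student above all cutoffs is placed into no developmental courses:
print(assign_courses({"reading": 90, "writing": 88, "math": 80}))
```

As the text notes, a student can be placed into one developmental course or several, and each placement lengthens the path to a certificate or degree without generally earning college credit.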
Education, the federal agency responsible for overseeing programs authorized under the Higher Education Act of 1965, as amended (HEA), provided approximately $2.2 billion in federal student aid to community college students in the 2007-2008 academic year. In the 2007-2008 school year, 36 percent of community college students who were enrolled in developmental education courses were receiving federal student aid. Education also provides national statistics and conducts national research on various outcomes related to postsecondary education. All of the community college and state education officials we interviewed described strategies that they have used related to curriculum, placement, and working with high schools to help improve developmental education outcomes for students. Curriculum changes focused on shortening the total amount of time students spend in developmental education and making developmental coursework relevant to a student's career or academic area of study. Several officials and stakeholders with whom we spoke told us that, based on their experience, the longer students spend in developmental education, the less likely they are to move on to college-level classes. Additionally, these officials and stakeholders stated that they often observed that students who spent multiple semesters in developmental education dropped out, in large part because the students did not see the immediate benefits of the developmental coursework on their academic or career goals. Accelerating developmental coursework could also reduce financial costs for students, since they will potentially finish their coursework in a shorter period of time.
Reducing the time spent in developmental education is particularly important given some of the recent changes to federal financial aid that shorten the amount of time certain aid is available to students. Lastly, most of the community college officials told us that initiatives related to better placement of students and working with high schools on preparing students can lead to less time in developmental education or perhaps prevent the need for it altogether. The following provides examples of strategies that are being implemented by some states and community colleges we visited:

Shortening the time in developmental education: All of the states and community colleges we visited implemented a number of initiatives to shorten the amount of time students spent in developmental education. Nearly all of the community colleges we visited implemented initiatives that broke developmental education classes that would otherwise last a full term into smaller, shorter component modules. Virginia officials described their statewide developmental math redesign as one that segmented classes into one-credit modules, requiring students to take only those modules that they needed based on the results of an assessment. Prior to the redesign, a single developmental education math course carried a credit load of four or five credits, which could have led to students taking coursework they did not need over a longer period of time. Another initiative used to shorten the amount of time students spend in developmental education involved compressing the developmental curriculum to allow students to complete more than one class in a single term. For example, two community colleges we visited offered fast-track math classes that allowed students to complete two classes in one semester.
Additionally, officials in two states we visited told us that they had implemented a statewide curriculum combining developmental education reading and writing coursework, thus reducing a two-class requirement to one during a term. Lastly, several community colleges we visited had reexamined the developmental education content needed to prepare students for college-level classes in order to reduce the number of required courses. For example, one community college did this by eliminating the overlap between developmental math classes and the college-level classes, which led them to reduce the number of developmental education math classes in the sequence from five to three.

Making coursework applicable to academic or career goals: One state and most of the community colleges we visited were making their developmental education coursework more applicable to students' academic or career goals so that students could see the relevance of the developmental course content immediately, while also earning college credits. All of the community colleges we visited in Washington are integrating developmental education instruction into their college-level classes. Washington's Integrated Basic Education and Skills Training (I-BEST) program places students directly into career and technical or college-level academic classes with two instructors: one to teach the subject matter and the other to teach developmental education in the context of the class. Another initiative used by a few of the community colleges we visited to make the coursework more relevant to students' goals involved linking college-level and developmental education classes. For example, in one community college we visited, students can enroll in a college-level history or psychology class while concurrently taking a developmental reading class that integrates the content of the college-level class into its coursework.
Lastly, several of the community colleges we visited offered alternative pathways for developmental math students, because traditional developmental math prepares students for higher levels of college math that they may not need for certain fields. In one community college, the developmental math coursework has several pathways: students in Science, Technology, Engineering, or Mathematics (STEM) fields could take a path that leads them to the types of math they need for their field, such as calculus, while students in the social sciences or liberal arts could take a path that leads them to different types of statistics courses that may be more relevant to their fields of study.

Rethinking placement: Community colleges we visited are changing how students are placed into developmental courses so that students might spend less time taking such courses. Several of the community colleges we visited are providing preparatory classes or online test preparation software to better prepare students and sharpen their skills for the placement test. A few community college officials told us that students may need only a quick refresher on material they have already mastered but may not have used in some time. With the refresher course, students could place into a higher-level developmental education class or be placed directly into college-level courses. Several officials told us that these refreshers provided by preparatory classes or online test preparation are especially helpful for students who have been out of an academic setting for an extended amount of time. Additionally, several community colleges we spoke with are also considering a student's high school grades or grade point average when determining placement.
For example, one community college we visited reviews students' transcripts and uses students' grades in specified math classes at local high schools--or the results of their placement test, whichever is higher--to determine their direct placement into a developmental or college-level math class.

Preventing the need for developmental education: Most of the community colleges we visited partnered with local K-12 schools to align their curricula to help ensure that students graduating from the local high schools were ready for college. For example, one Texas community college established vertical teams that brought together high school and community college faculty in science, math, and social studies to discuss students' academic needs. In another example, Washington state officials told us that, starting in 2015, the state plans to offer a college assessment test in the 11th grade to identify and provide additional instruction to students who may have remediation needs so that when these students graduate, they will be ready for college.

Researchers are reviewing some of the initiatives that community colleges are instituting to improve outcomes for developmental education students, but the evidence base is limited. One program that is showing early promise is Washington's I-BEST. In a study conducted by the Community College Research Center (CCRC), an independent research organization housed at Columbia University's Teachers College, I-BEST was regarded as an effective model for increasing the rate at which students enter and succeed in postsecondary career education overall. Additionally, a few community college officials told us they are planning to conduct evaluations of their initiatives in the future to understand the outcomes of their activities.
However, according to a few stakeholders and a community college official we spoke with, there is limited information available on a national basis for community colleges to have confidence in the impacts of their initiatives. Most of the community colleges and other stakeholders with whom we spoke stated that more research is needed to determine if developmental education initiatives work. (See fig. 1.) Some stakeholders told us that additional research is needed to help community college officials understand the context in which community colleges and states are using developmental education models and how they are resolving issues, such as helping students transition into regular credit-bearing courses more quickly. Community college officials also expressed concerns with promoting some strategies without fully understanding the long-term outcomes, particularly on certain populations. For example, a few community college officials worried about the impact of using accelerated developmental education classes. These officials were concerned that the fast-paced nature of an accelerated program would increase a student's risk of not completing a course or program. For example, part-time students enrolled in an accelerated program may have additional stress when trying to balance personal responsibilities, such as child care or work demands, while enrolled in an accelerated course and may end up dropping out of the college altogether. Additionally, another official told us that knowing how to scale up pilot initiatives was a challenge because initiatives that were successful with one population of students may not be successful with other students. Obtaining faculty support for unproven reforms was also cited by several community college and state officials as a challenge.
Officials at one community college told us that it was difficult for staff to buy into changes to developmental education at their community college because there was not much training provided and initiatives were unproven. A literature review conducted by a stakeholder organization on acceleration strategies, for example, noted that faculty may resist working on reforms and that there is limited research to help "quell the skepticism." Recent literature also suggests that faculty support is a key factor to bringing effective practices to scale. Officials at one community college explained that new models of learning can be a radical change for some faculty, and many find it difficult to change their teaching styles to adapt to the unproven curriculum. Officials at this community college also told us that some faculty members at their college are resistant and skeptical because they may have different philosophical views about how courses should be taught. To address these issues, officials in one state we visited created a task force that included community college and K-12 representatives and sought input from faculty, students, and staff. Additionally, they relied on the limited research available to help guide their discussions with faculty and make decisions about the redesign, all of which helped move the statewide redesign forward with little resistance. The Department of Education is taking steps to address some of the challenges cited by community colleges and states in improving developmental education by funding a new research center on this topic. Education officials confirmed that not enough information was available about successful developmental education strategies. Education officials further explained that most initiatives did not yet have sufficient data--2 years' worth of data or less--to determine what worked. In its Annual Performance Plan for Fiscal Year 2013, Education stated that one of its goals is enhancing the U.S.
education system's ability to continuously improve through better and more widespread use of data, research and evaluation, transparency, innovation, and technology. In light of this goal and to help further community colleges' understanding of what works in developmental education, in May 2013, Education requested proposals for a National Research Center on Developmental Education Assessment and Instruction. Education plans for this research center to focus exclusively on developmental education assessment and instruction in order to help policy makers and practitioners improve student outcomes. The goals of the research center are (1) to convene policy makers, practitioners, and researchers interested in developmental education reform; (2) to identify promising reforms and support further innovations; (3) to conduct rigorous evaluations on the effectiveness and cost- effectiveness of models that have the potential to be expanded; and (4) to bolster efforts by states, colleges, and universities to bring effective developmental education reforms to scale. An Education official stated that the Department, through the Center's research, will first collect a nationwide inventory on what approaches are being used and then evaluate different approaches to teaching developmental education. The Center could address the research needs cited by community college and state officials to improve developmental education and help administrators with obtaining faculty buy-in. The research center is expected to launch in 2014. Meeting the national goal of increasing the rates of attainment of post- secondary degrees and certificates may be hampered by the significant numbers of students who enter developmental education and fail to move toward that outcome. Community colleges and states are initiating new strategies to address this problem, but the limited research available to them on what strategies work and for whom is proving challenging. 
Education's research center will serve as a much needed resource for community colleges and states as they continue to experiment with new strategies, but only if it is successful in uncovering what works and helping colleges to put into practice what the Center learns through its research. Otherwise, community college students entering developmental education will continue to face hurdles in reaching their goals. We provided a draft of this report to the Department of Education for review and comment. Education provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The objectives of this report were to determine (1) what strategies select states and community colleges are using to improve developmental education for community college students and (2) what challenges, if any, community colleges have identified while implementing these developmental education strategies. To address the first objective, we interviewed nonprofit stakeholders with knowledge of community college issues. We conducted site visits to Texas, Virginia, and Washington, which had been identified by experts as doing innovative work in improving developmental education. They also represent regional diversity. While on those site visits, we interviewed officials from 10 community colleges as well as representatives from each state's education office. 
We also visited a community college in California, which brought the total number of schools to 11. These 11 community colleges were identified by state officials and our own research as colleges that were implementing changes to developmental education. Since school officials were selected based on their school's participation in developmental education reform efforts, the high incidence of these initiatives among the interviewed schools should not be interpreted as an indicator of the incidence of such programs among community colleges broadly. In addition, we conducted a group interview with community college officials and other stakeholders--who were identified by the conference sponsors as being knowledgeable about developmental education--at a national conference focused on reforming community college student outcomes. (See table 2 for a full list of stakeholders, state offices, and community colleges we interviewed individually and as part of our group interview.) Additionally, we reviewed selected literature on the topic. To address the second objective, in addition to the information gathered in the interviews and literature review addressed above, we interviewed officials at the Department of Education. The officials were from the following offices within the Department of Education: the Office of Vocational and Adult Education; the Office of Federal Student Aid; the National Center for Education Statistics; and the Office of Planning, Evaluation, and Policy Development. Additionally, we reviewed pertinent agency documents, including budget proposals, Requests for Application, and a list of Education's current initiatives for community colleges, as well as relevant laws, regulations, and guidance. Given that we were examining strategies of a few selected states and schools, we do not intend for the options and challenges identified by the stakeholders, state, or community college officials we interviewed to be an exhaustive list.
In addition, we did not assess or evaluate the initiatives that were proposed to improve developmental education, nor do we necessarily recommend any such initiatives. We use indefinite quantifiers when describing the number of stakeholders or community colleges whose representatives mentioned the topic referenced in the respective sentence. In using the indefinite quantifiers, we are only including the 11 community colleges we visited directly as part of our site visits and the 11 stakeholder organizations whose representatives we spoke with individually. The community colleges or stakeholders referenced by the indefinite quantifiers were not part of our group interview. The indefinite quantifier categories are listed in table 3.

We conducted this performance audit from August 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Janet Mascia (Assistant Director), David Reed, Vernette Shaw, and Anjali Tekchandani made significant contributions to this report. Kirsten Lauber, Jessica Botsford, Deborah Bland, and Holly Dye also contributed to this report.
Education reported that approximately 42 percent of entering community college students were not sufficiently prepared for college-level courses and enrolled in at least one developmental education course. Researchers also estimate that fewer than 25 percent of developmental education students will complete a degree or certificate. Improving developmental education is key to increasing degree and certificate completion. Some community colleges and states are instituting various initiatives to improve the outcomes of students placed into developmental education. GAO was asked to examine current developmental education efforts. This report addresses the following questions: (1) What strategies are selected states and community colleges using to improve developmental education for community college students; and (2) what challenges, if any, have community colleges identified while implementing these developmental education strategies? GAO conducted site visits to community colleges and state education offices in Texas, Virginia, and Washington, which were identified by experts and the literature as states initiating innovative changes in developmental education coursework. GAO interviewed Education officials, as well as stakeholders from non-profit and research organizations focused on community college issues. In addition, GAO reviewed relevant laws, regulations, and guidance. States and community colleges GAO visited have implemented several strategies to improve developmental education--which is remedial coursework in math, reading, or writing for students who are assessed not to be ready for college-level classes. Many initiatives involved shortening the amount of time for developmental education and better targeting material to an individual student's needs. For example, two community colleges have implemented fast track classes that enable students to take two classes in one semester instead of in two semesters. 
One developmental education program in Washington places students directly into college-level classes that also teach developmental education as part of the class. Community colleges are also using tools such as test preparatory classes to help students prepare for placement tests that determine if they will need to take developmental education courses. According to community college officials GAO spoke with, these classes help familiarize students with prior coursework and, in some cases, help them place directly into college-level courses. Additionally, most community colleges GAO visited have worked to align their curriculum with local high schools so that graduating seniors are ready for college. Little research has been published on these developmental education initiatives and whether they are leading to successful outcomes. Most community college officials with whom GAO spoke noted that the limited availability of research in this area is a challenge to implementing strategies to improve developmental education programs. Specifically, they noted that it is difficult to determine whether new programs are working, and to gain faculty support for unproven models of teaching. Department of Education (Education) officials confirmed that research regarding successful developmental education strategies is insufficient. In response, Education has announced the availability of grant funds for a National Research Center on Developmental Education Assessment and Instruction. The Center will focus exclusively on developmental education assessment and instruction to inform policymakers and instructors on improving student outcomes. The Center is expected to launch in 2014. GAO is making no recommendations in this report.
In support of the President's annual budget request for VA health care services, which includes a request for advance appropriations, VA develops a budget estimate of the resources needed to provide such services for 2 fiscal years. Typically, VHA starts to develop a health care budget estimate approximately 10 months before the President submits the budget request to Congress in February. This is approximately 18 months before the start of the fiscal year to which the request relates and about 30 months prior to the start of the fiscal year to which the advance appropriations request relates. VA's health care budget estimate includes estimates of the total cost of providing health care services as well as costs associated with management, administration, and maintenance of facilities. VA develops most of its budget estimate for health care services using the Enrollee Health Care Projection Model. VA uses other methods to estimate needed resources for long-term care, other services, and health-care-related initiatives proposed by the Secretary of Veterans Affairs or the President. After determining the amount of VA's appropriations, Congress provides VA resources for health care through three accounts: Medical Services, which funds health care services provided to eligible veterans and beneficiaries in VA's medical centers, outpatient clinic facilities, contract hospitals, state homes, and outpatient programs on a fee basis; Medical Facilities, which funds the operation and maintenance of the VA health care system's capital infrastructure, including costs associated with NRM and non-NRM activities, such as utilities, facility repair, laundry services, and grounds keeping; and Medical Support and Compliance, which funds the management and administration of the VA health care system, including financial management, human resources, and logistics. 
VA allocates most of its health care resources for these three accounts through VERA--a national, formula-driven system--at the beginning of each fiscal year and allocates additional resources throughout the year. VA allocates about 80 percent of the health care appropriations to its 21 health care networks through VERA. VA uses methods other than VERA to allocate the remaining resources to networks and medical centers for such programs as prosthetics, homeless grants, and state nursing homes. VA may also use methods other than VERA to allocate any additional resources it may receive from Congress during the year. The networks in turn allocate resources received through VERA and other methods to their respective medical facilities, as part of their role in overseeing all medical facilities within their networks. In addition to amounts allocated to networks and medical facilities at the beginning of the fiscal year, VA also sets aside resources from each of VA's three health care appropriations accounts--in what is known as a national reserve--so that resources are available for contingencies that may arise during the year. In general, VA allocates resources from the national reserve to match network spending needs for each appropriations account. Within each appropriations account, VA also has flexibility as to how the resources are used. For example, within the Medical Services account, VA has the authority to use resources for outpatient services instead of hospital services, should the demand for hospital services be lower than expected and demand for outpatient services be higher. In a similar manner, VA has the authority to use resources in the Medical Facilities account for NRM instead of non-NRM activities--such as utilities--should spending for those activities be less than estimated. In June 2012, we reported that VA's NRM spending has consistently exceeded the estimates reported in VA's budget justifications from fiscal years 2006 to 2011. 
This pattern continued in fiscal year 2012 when VA spent about $1.5 billion for NRM, which was $622 million more than estimated. (See fig. 1.) To help inform its budget estimates for NRM, VA collects information on facility repair and maintenance needs as part of an ongoing process to evaluate the condition of its medical facilities. VA conducts facility condition assessments (FCA) at each of its medical facilities at least once every 3 years. VA uses contractors to conduct FCAs, and these contractors are responsible for inspecting all major systems (e.g., structural, mechanical, plumbing, and others) and assigning each a grade of A (for a system in like-new condition) through F (for a system in critical condition that requires immediate attention). As part of this assessment, the contractors use an industry cost database to estimate the correction costs for each system graded D or F. According to VA officials, the agency's reported NRM backlog represents the total cost of correcting these FCA-identified deficiencies. Our analysis of data for fiscal years 2006 through 2012 found that in each of these years VA had higher than estimated resources available in its Medical Facilities account, which VA used to increase NRM spending by about $4.9 billion. These resources derived from two sources: (1) lower than estimated non-NRM spending, which made more resources available for NRM, and (2) higher than estimated budget resources, which included annual appropriations, supplemental appropriations, reimbursements, transfers, and unobligated balances. As figure 2 shows, after fiscal year 2008, lower than estimated spending on non-NRM activities accounted for most of VA's spending on NRM that exceeded VA's budget estimates. Lower than estimated non-NRM spending.
VA spent fewer resources from the Medical Facilities account on non-NRM activities than it estimated, which allowed the agency to spend over $2.5 billion more on NRM than it originally estimated in fiscal years 2009 through 2012. When we asked why VA spent more on NRM projects than estimated, VA officials said one reason was that the agency spent less than it estimated on non-NRM activities and that the most practical use of these unspent resources was to increase spending on NRM because of the large backlog of FCA-identified deficiencies. VA officials further explained that VA spent less for non-NRM activities than anticipated because of a decrease in the demand for utilities and other weather-dependent non-NRM activities because of mild weather patterns during the last four winters. However, lower spending on these weather-dependent activities only accounted for $460 million--18 percent--of the resources eventually used for NRM. The remaining 82 percent eventually used for NRM came from resources originally intended to be used for various other activities, including administrative functions and rent. VA has consistently overestimated spending for these non-NRM activities, and if the agency continues to determine estimates for such activities in the same way, its future budget estimates of spending for non-NRM may not be reliable. Higher than estimated budget resources. VA had more budget resources available in its Medical Facilities account than the agency estimated it would have, and this allowed VA to spend over $2.3 billion more on NRM than it originally estimated. When we asked why VA spent more on NRM projects than estimated, VA officials said that in addition to spending less on non-NRM activities the agency also received higher annual appropriations than requested and unanticipated supplemental appropriations from Congress.
For example, in fiscal year 2009 VA received $300 million more than it requested in annual appropriations as well as $1 billion in supplemental appropriations included in the American Recovery and Reinvestment Act of 2009 (Recovery Act). VA also received $550 million in supplemental appropriations as part of the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007; these supplemental appropriations were specifically for NRM. In addition to higher annual appropriations and supplemental appropriations, we also found that VA used other budget resources to increase NRM spending. The budget resources included transfers from VA's other appropriations accounts, reimbursements for services provided under service agreements with the Department of Defense, and unobligated balances carried over from prior fiscal years. While, according to VA officials, the agency did not track the use of specific resources used to increase NRM spending, data provided by VA suggest that more than $1.8 billion from higher than requested appropriations and about $762 million from other budget resources were available for this spending in fiscal years 2006 through 2012. VA used VERA to perform an initial allocation of resources for NRM at the beginning of each fiscal year for fiscal years 2006 through 2012, allocating a total of about $4.6 billion over this time frame. In addition, for fiscal years 2009 through 2012, VA allocated $2.9 billion in total for NRM from higher than requested appropriations and its reserve account. Figure 3 shows the nearly $7.5 billion allocated for NRM using VERA and other methods, which included network estimated costs to maintain medical facilities in good working condition--that is, for sustainment--and costs to address the NRM backlog of FCA-identified deficiencies for fiscal years 2006 through 2012.
Over the course of allocating about $4.6 billion for NRM using VERA between fiscal years 2006 and 2012, VA changed VERA's NRM allocation formula from one based primarily on patient workload in the networks to one that primarily considers both sustainment of buildings and the NRM backlog in each network. Prior to fiscal year 2009, VA used VERA to allocate nearly $1.5 billion of NRM resources on the basis of patient workload, adjusted for the cost of construction. Under this formula, networks that treated the largest number of patients received the most resources for NRM, according to VA officials. Beginning in fiscal year 2009, VA used VERA to allocate resources for NRM primarily based on each network's estimated cost for sustainment and the cost for addressing the NRM backlog. VA used VERA to allocate about $2.6 billion from fiscal year 2009 through 2012--which according to VA officials resulted in more resources being allocated to networks with a higher proportion of more expensive building space. In addition, since fiscal year 2009, VA has also used VERA to allocate about $512 million to NRM projects that improve access or provide accommodations for specific health care services, such as research, women's health, and mental health, and certain other NRM projects. In fiscal years 2009 and 2010, VA allocated more than $1.4 billion of higher than requested Medical Facilities appropriations using methods other than VERA. Over the course of both fiscal years, VA allocated $1 billion of supplemental appropriations included in the Recovery Act mostly on the basis of each network's estimated cost of addressing FCA-identified deficiencies, according to VA officials. In both fiscal years, Congress also provided higher appropriations in the Medical Facilities account than the President requested, in part, to fund additional NRM projects. In providing these higher appropriations for NRM, Congress required VA to allocate a specific amount using methods other than VERA.
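The revised, formula-driven approach described above, which weights each network's estimated sustainment cost and NRM backlog, can be illustrated with a simple proportional-allocation sketch. This is a hypothetical illustration only: the report does not publish the actual VERA formula, and the network names and dollar figures below are invented.

```python
# Sketch of a formula-driven allocation in the spirit of VA's revised
# approach: each network's share of the total is proportional to its
# estimated sustainment cost plus its cost to address the NRM backlog.
# All names and figures are hypothetical, not VA's actual data.

def allocate(total, networks):
    """Split `total` across networks in proportion to sustainment + backlog costs."""
    weights = {name: sustain + backlog for name, (sustain, backlog) in networks.items()}
    pool = sum(weights.values())
    return {name: total * w / pool for name, w in weights.items()}

# (sustainment cost, backlog cost) per network, in millions of dollars.
networks = {
    "Network 1": (40, 60),   # higher proportion of expensive building space
    "Network 2": (30, 20),
    "Network 3": (20, 30),
}
shares = allocate(200.0, networks)
print(shares["Network 1"])  # 100.0
```

Under this weighting, a network with a larger sustainment requirement and backlog receives proportionally more, which matches the report's observation that more resources flowed to networks with a higher proportion of more expensive building space.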
In fiscal year 2009, VA allocated $300 million based on each network's estimated cost of addressing the NRM backlog of FCA-identified deficiencies; in fiscal year 2010, VA allocated funds on the basis of networks' estimated sustainment costs and their cost to address the NRM backlog. (See Pub. L. No. 110-329, div. E, tit. II, 122 Stat. 3574, 3705 (2008).) Medical Facilities appropriations in the reserve that are not used for non-NRM purposes are available for NRM. The Under Secretary for Health determines whether funds in the reserve are available and recommends allocations of those funds to the Secretary of Veterans Affairs, who approves the allocations. VA officials explained that the allocation of funds from the reserve for NRM was typically based on sustainment costs as well as the cost of addressing FCA-identified deficiencies and other VA NRM priorities, such as VA's energy investment "Green" initiatives. These allocations are also subject to the networks' ability to award the projects and obligate the additional funds prior to their expiration. In anticipation of the availability of such resources, networks typically identify in advance projects that can be implemented if additional funds become available, according to VA officials. Officials explained further that the networks do this to better address the NRM backlog. VA relied on its networks to prioritize all NRM spending until centralizing this process for more costly projects in fiscal year 2012. NRM projects VA funded were generally consistent with VA priorities. For fiscal years 2006 through 2011, VA relied on its networks to prioritize projects for NRM spending. Each fiscal year, networks provided VA headquarters with a list of prioritized NRM projects, known as NRM operating plans. According to officials from headquarters and the two selected networks, NRM operating plans represented all of the NRM projects that a network plans to fund and carry out in a given year.
VA officials told us that to prioritize NRM projects during this period, the networks used oral guidance, communicated during management meetings with VA headquarters, that encouraged the networks to prioritize projects addressing critical FCA-identified deficiencies and sustainment. Beginning in fiscal year 2012, VA changed its process for prioritizing more costly NRM projects. Specifically, VA headquarters assumed responsibility for prioritizing these NRM projects as part of VA's newly established Strategic Capital Investment Planning process, known as SCIP. Through SCIP, VA headquarters evaluates these more costly NRM projects and other types of capital investment projects using a set of weighted criteria in order to develop a list of prioritized projects to guide the agency's capital planning decisions. For fiscal year 2012, the threshold for including NRM projects in this centralized prioritization process was $1 million. VA used this process to identify 190 projects as the agency's highest NRM priorities for fiscal year 2012. Under SCIP, VA prioritizes NRM projects based on the extent to which they meet the following six criteria:
1. improve the safety and security of VA facilities by mitigating potential damage to buildings facing the risk of damage from natural disaster, improving compliance with safety and security laws and regulations, and ensuring that VA can provide service in the wake of a catastrophic event;
2. address selected key major initiatives and supporting initiatives identified in VA's strategic plan;
3. address existing deficiencies in its facilities that negatively affect the delivery of services and benefits to veterans;
4. reduce the time and distance a veteran has to travel to receive services and benefits, increase the number of veterans utilizing VA's services, and improve the services provided;
5. right-size VA's inventory by building new space, converting underutilized space, or reducing excess space; and
6. ensure cost-effectiveness and the reduction of operating costs for new capital investments.
While VA uses SCIP to prioritize more costly NRM projects, the networks remain responsible for prioritizing all other or "below-threshold" NRM projects. However, VA has not provided its networks with written policies on how to prioritize these projects. According to a VA official, in fiscal year 2012, below-threshold projects accounted for over $625 million, or 42 percent, of VA's NRM spending. Instead of providing written guidance, VA officials said they have orally encouraged the networks to apply the same criteria included in SCIP when prioritizing below-threshold NRM projects. VA's lack of written policies for prioritizing below-threshold NRM projects is inconsistent with federal internal control standards, which specify that agency policies should be documented and that all documentation should be properly managed and maintained. Without written policies that clearly document VA's guidance to networks for prioritizing these less costly NRM projects, there is an increased risk that networks may not apply, or may inconsistently apply, the criteria included in SCIP. Our review of VA data shows that for fiscal years 2006 through 2011 the majority of the NRM projects that were funded by the networks were projects that the networks had prioritized in their operating plans. Specifically, in each year during this period, at least 85 percent of the NRM projects the networks funded were listed in the networks' operating plans. For example, of the 2,905 NRM projects that networks funded in fiscal year 2011, over 2,400 projects (85 percent) were listed on the operating plans. When asked about funded projects that were not listed on networks' operating plans, VA officials told us that networks may fund NRM projects in response to emerging needs during the course of the year.
For fiscal year 2012, our analysis of VA data also shows that NRM projects funded that year were generally consistent with projects prioritized using SCIP and those prioritized by the networks in their operating plans. Specifically, in fiscal year 2012, 189 NRM projects that were prioritized through the SCIP process received funding. Moreover, as figure 4 shows, of the 1,909 NRM projects that were funded by the networks outside of the SCIP process in fiscal year 2012, 1,668 (87 percent) were listed on the networks' 2012 operating plans. This consistency notwithstanding, because VA has not provided its networks with written policies for prioritizing below-threshold projects, the agency faces an ongoing risk that NRM projects could be funded in a manner inconsistent with the SCIP criteria. Officials at VA headquarters have taken several steps in recent years to better monitor NRM spending to ensure that funded projects were consistent with the agency's priorities. In fiscal years 2009 and 2010, in compliance with congressional requirements, VA tracked and reported spending on NRM projects that used funding provided through the Recovery Act. Recognizing the value of such monitoring, VA headquarters officials decided to expand these efforts by tracking NRM spending by project on a monthly basis. Since fiscal year 2011, VA has used what it calls its capital assets database to manage and monitor NRM spending on a monthly basis. As part of these efforts, VA has instructed its project managers to update the information on each project on a monthly basis and review tracking reports to ensure that spending for each project is within its estimated cost. VA officials told us that there are new efforts under way to improve the data reliability of the capital assets database and to incorporate its tracking reports into the SCIP process.
Our work shows that VA has consistently spent more on NRM than estimated because of the availability of higher than estimated resources in its Medical Facilities account. These additional resources derived from lower than estimated spending for non-NRM activities and higher than requested appropriations. Further, our work shows that spending for administrative functions, utilities, and rent accounted for most of the lower than estimated non-NRM spending in recent years. Thus, given the overestimates for these activities, VA's future budget estimates for non-NRM activities in its budget justification may not be reliable if the agency continues to determine its estimates in the same way. VA has taken important steps in establishing a centralized process for prioritizing more costly NRM projects through SCIP, and during the period we reviewed, VA's funded NRM projects were generally consistent with agency priorities. However, VA does not have reasonable assurance that spending on NRM will be consistent with criteria included in SCIP. Our work shows that while networks remain responsible for prioritizing below-threshold NRM projects, VA has not provided its networks with written policies for prioritizing these less costly NRM projects. Spending on these projects is not insignificant: in fiscal year 2012, spending on projects below the threshold was over $625 million, or 42 percent, of VA's spending on NRM. Without written policies that clearly document VA's guidance to networks for prioritizing below-threshold NRM projects, VA faces a continued risk that its networks may not apply, or may inconsistently apply, the criteria included in SCIP when funding these projects.
We recommend the Secretary of Veterans Affairs take the following actions: to improve the reliability of information presented in VA's congressional budget justifications that support the President's budget request for VA health care, determine why recent justifications have overestimated spending for non-NRM activities and incorporate the results to improve future budget estimates for such activities; and to provide reasonable assurance that VA's networks prioritize NRM spending consistent with VA's overall NRM priorities, establish written policies for its networks for applying SCIP criteria when prioritizing the funding of NRM projects that are below the threshold for inclusion in VA's centralized prioritization process. We provided a draft of this report to the Secretary of Veterans Affairs for comment. In the agency's comments--reprinted in appendix I--VA concurred with both of our recommendations. In concurring with our first recommendation regarding improvements needed in its estimates for non-NRM activities, VA noted that the budget formulation process has been modified to include a better synchronization of events that play a significant role in the overestimated spending for non-NRM activities. VA stated that this modification has been incorporated in the fiscal year 2014 President's budget. In concurring with our second recommendation regarding written guidance on the application of SCIP criteria to prioritization of below-threshold NRM projects, VA noted that the NRM handbook and related guidance will be updated to direct facilities and networks to apply SCIP criteria when prioritizing below-threshold NRM projects. In addition, networks' Capital Asset Managers, who are responsible for monitoring and evaluating each network's NRM program, will be required to review below-threshold NRM projects included in a network's operating plan. 
VHA's Office of Capital Asset Management Engineering and Support will also review networks' operating plans to ensure compliance. We are sending copies of this report to the Secretary of Veterans Affairs and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, James C. Musselwhite, Assistant Director; Krister Friday; Aaron Holling; Lisa Motley; and Said Sariolghalam made key contributions to this report.
Veterans' Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification. GAO-12-908. Washington, D.C.: September 18, 2012.
Veterans' Health Care Budget: Transparency and Reliability of Some Estimates Supporting President's Request Could Be Improved. GAO-12-689. Washington, D.C.: June 11, 2012.
VA Health Care: Estimates of Available Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 20, 2012.
Veterans Affairs: Issues Related to Real Property Realignment and Future Health Care Costs. GAO-11-877T. Washington, D.C.: July 27, 2011.
Veterans' Health Care Budget: Changes Were Made in Developing the President's Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011.
VA Health Care: Need for More Transparency in New Resource Allocation Process and for Written Policies on Monitoring Resources. GAO-11-426. Washington, D.C.: April 29, 2011.
VA Real Property: Realignment Progressing, but Greater Transparency about Future Priorities Is Needed. GAO-11-521T. Washington, D.C.: April 5, 2011.
VA Real Property: Realignment Progressing, but Greater Transparency about Future Priorities Is Needed. GAO-11-197. Washington, D.C.: January 31, 2011.
Veterans' Health Care: VA Uses a Projection Model to Develop Most of Its Budget Estimate to Inform President's Budget Request. GAO-11-205. Washington, D.C.: January 28, 2011.
VA Health Care: Overview of VA's Capital Asset Management. GAO-09-686T. Washington, D.C.: June 9, 2009.
VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009.
VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009.
VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009.
VA Health Care: VA Should Better Monitor Implementation and Impact of Capital Asset Alignment Decisions. GAO-07-408. Washington, D.C.: March 21, 2007.
VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006.
VA Health Care: Preliminary Findings on the Department of Veterans Affairs Health Care Budget Formulation for Fiscal Years 2005 and 2006. GAO-06-430R. Washington, D.C.: February 6, 2006.
VA operates about 1,000 medical facilities--such as hospitals and outpatient clinics--that provide services to more than 6 million patients annually. The operation and maintenance of its facilities, including NRM, is funded from VA's Medical Facilities appropriations account, one of three accounts through which Congress provides resources for VA health care services. In prior work, GAO found that VA's spending on NRM has consistently exceeded its estimates. GAO recommended that VA ensure that its NRM estimates fully account for this long-standing pattern, and VA agreed to implement this recommendation. GAO was asked to conduct additional work on NRM spending. In this report, GAO examines, for fiscal years 2006 through 2012, (1) what accounted for the pattern of NRM spending exceeding VA's budget estimates; (2) VA's allocation of resources for NRM to its health care networks; and (3) VA's process for prioritizing NRM spending and the extent to which NRM spending was consistent with these priorities. GAO reviewed VA's budget justifications and VA data and interviewed officials from headquarters and selected networks. During fiscal years 2006 through 2012, the Department of Veterans Affairs (VA) had higher than estimated resources available for facility maintenance and improvement--referred to as non-recurring maintenance (NRM); these resources accounted for the $4.9 billion in VA's NRM spending that exceeded budget estimates. The additional resources came from two sources. First, VA spent less than it estimated on non-NRM, facility-related activities such as administrative functions, utilities, and rent, which allowed VA to spend over $2.5 billion more than originally estimated. Lower spending for administrative functions, utilities, and rent accounted for most of the resources estimated but not spent on non-NRM activities. 
Given that VA has consistently overestimated the costs of such activities in recent years, VA's budget estimates for its non-NRM activities may not be reliable. Second, more than $2.3 billion of the higher than estimated spending on NRM can be attributed to VA having higher than estimated budget resources available. In some years VA received higher appropriations from Congress than requested and supplemental appropriations for NRM--such as those included in the American Recovery and Reinvestment Act of 2009. The additional budget resources VA used for NRM also included transfers of funds from the agency's appropriations account that funds health care services. VA allocated about $7.5 billion in resources for NRM to its 21 health care networks from fiscal year 2006 through fiscal year 2012. VA allocated about $4.6 billion of these resources at the beginning of each fiscal year through the Veterans Equitable Resource Allocation--its national, formula-driven system. In addition, VA allocated $2.9 billion during this period from higher than requested annual appropriations and its national reserve account, which is maintained to address contingencies that may develop each fiscal year. In anticipation of such resources, networks typically identify projects that can be implemented if additional funds become available. VA officials told us that they do this to better address the backlog of identified building deficiencies most recently estimated to cost over $9 billion. To prioritize NRM spending more centrally, VA established a new process for projects above a minimum threshold, and from fiscal years 2006 through 2012 spending on NRM was generally consistent with VA priorities. Prior to fiscal year 2012, VA provided oral guidance to networks for prioritizing NRM spending and relied on its 21 health care networks to prioritize NRM projects to maintain medical facilities in good working condition and address deficiencies. 
Beginning in fiscal year 2012, as part of VA's Strategic Capital Investment Planning (SCIP) process, VA headquarters assumed responsibility for prioritizing more costly NRM projects using a set of weighted criteria. For fiscal year 2012, the threshold for NRM projects to be included in this centralized process was $1 million, while networks remain responsible for prioritizing "below-threshold" NRM projects. NRM spending during fiscal years 2006 through 2012 was generally consistent with VA priorities: at least 85 percent of the projects funded in each year were identified by networks as priorities. However, VA has not provided written policies for networks on how to apply SCIP criteria to below-threshold projects, which represented over 40 percent of VA's fiscal year 2012 NRM spending. Without such written policies, VA does not have reasonable assurance that network spending for below-threshold NRM projects will be consistent with SCIP criteria. GAO recommends that VA determine why it has overestimated spending for non-NRM and use the results to improve future, non-NRM budget estimates. GAO also recommends that VA provide networks with written guidance for prioritizing below-threshold NRM projects. VA concurred with GAO's recommendations.
From April 24 through September 11, 2000, the U.S. Census Bureau surveyed a sample of about 314,000 housing units (about 1.4 million census and A.C.E. records in various areas of the country, including Puerto Rico) to estimate the number of people and housing units missed or counted more than once in the census and to evaluate the final census counts. Temporary bureau staff conducted the surveys by telephone and in-person visits. The A.C.E. sample consisted of about 12,000 "clusters" or geographic areas that each contained about 20 to 30 housing units. The bureau selected sample clusters to be representative of the nation as a whole, relying on variables such as state, race and ethnicity, owner or renter, as well as the size of each cluster and whether the cluster was on an American Indian reservation. The bureau canvassed the A.C.E. sample area, developed an address list, and collected response data for persons living in the sample area on Census Day (April 1, 2000). Although the bureau's A.C.E. data and address list were collected and maintained separately from the bureau's census work, A.C.E. processes were similar to those of the census. After the census and A.C.E. data collection operations were completed, the bureau attempted to match each person counted by A.C.E. to the list of persons counted by the census in the sample areas to determine the number of persons who lived in the sample area on Census Day. The results of the matching process, together with the characteristics of each person compared, provided the basis for statistical estimates of the number and characteristics of the population missed or improperly counted by the census. Correctly matching A.C.E. persons with census persons is important because errors in even a small percentage of records can significantly affect the undercount or overcount estimate. Matching over 1.4 million census and A.C.E. records was a complex and often labor-intensive process. 
Although several key matching tasks were automated and used prespecified decision rules, other tasks were carried out by trained bureau staff who used their judgment to match and code records. The four phases of the person matching process were (1) computer matching, (2) clerical matching, (3) nationwide field follow-up on records requiring more information, and (4) a second phase of clerical matching after field follow-up. Each subsequent phase used additional information and matching rules in an attempt to match records that the previous phase could not link. Computer matching took pairs of census and A.C.E. records and compared various personal characteristics such as name, age, and gender. The computer then calculated a match score for the paired records based on the extent to which the personal characteristics were aligned. Experienced bureau staff reviewed the lists of paired records, sorted by their match scores, and judgmentally assigned cutoff scores. The cutoff scores were break points used to categorize the paired records into one of three groups so that the records could be coded as a "match," "possible match," or one of a number of codes that defines them as not matched. Computer matching successfully assigned a match score to nearly 1 million of the more than 1.4 million records reviewed (about 66 percent). Bureau staff documented the cutoff scores for each of the match groups. However, they did not document the criteria or rules used to determine cutoff scores, the logic of how they applied them, and examples of their application. As a result, the bureau may not benefit from the possible lessons learned on how to apply cutoff scores. When the computer links few records as possible matches, clerks will spend more time searching records and linking them. In contrast, when the computer links many records as possible matches, clerks will spend less time searching for records to link and more time unlinking them.
Without documentation and knowledge of the effect of cutoff scores on clerical matching productivity, future bureau staff will be less able to determine whether to set cutoff scores to link few or many records together as possible matches. During clerical matching, three levels of matchers--including over 200 clerks, about 40 technicians, and 10 experienced analysts or "expert matchers"--applied their expertise and judgment to manually match and code records. A computer software system managed the workflow of the clerical matching stages. The system also provided access to additional information, such as electronic images of census questionnaires that could assist matchers in applying criteria to match records. According to a bureau official, a benefit of clerical matching was that records of entire households could be reviewed together, rather than just individually as in computer matching. During this phase over a quarter million records (or about 19 percent) were assigned a final match code. The bureau taught clerks how to code records in situations in which the A.C.E. and census records differed because one record contained a nickname and the other contained the birth name. The bureau also taught clerks how to code records with abbreviations, spelling differences, middle names used as first names, and first and last names reversed. These criteria were well documented in both the bureau's procedures and operations memorandums and clerical matchers' training materials, but how the criteria were applied depended on the judgment of the matchers. The bureau trained clerks and technicians for this complex work using as examples some of the most challenging records from the 1998 Dress Rehearsal person matching operation. In addition, the analysts had extensive matching experience.
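The computer-matching phase described above, in which paired records receive a score based on how well their personal characteristics align and judgmentally chosen cutoff scores sort pairs into "match," "possible match," or not-matched categories, can be sketched as follows. The field weights and cutoff values here are invented for illustration; the report does not give the bureau's actual scoring rules.

```python
# Minimal sketch of score-based record matching with cutoff categories,
# in the spirit of the bureau's computer-matching phase. All weights and
# cutoffs are hypothetical, not the bureau's actual parameters.

FIELD_WEIGHTS = {"name": 5, "age": 2, "gender": 1}  # hypothetical weights
POSSIBLE_CUTOFF, MATCH_CUTOFF = 4, 7                # hypothetical break points

def match_score(census_rec, ace_rec):
    """Sum the weights of the personal characteristics that agree."""
    return sum(w for field, w in FIELD_WEIGHTS.items()
               if census_rec.get(field) == ace_rec.get(field))

def categorize(score):
    """Assign a paired record to a match group using the cutoff scores."""
    if score >= MATCH_CUTOFF:
        return "match"
    if score >= POSSIBLE_CUTOFF:
        return "possible match"  # referred to clerical matchers for review
    return "not matched"

census = {"name": "JANE DOE", "age": 34, "gender": "F"}
ace    = {"name": "JANE DOE", "age": 35, "gender": "F"}
print(categorize(match_score(census, ace)))  # possible match
```

The placement of the cutoffs drives the clerical workload trade-off the report notes: lowering POSSIBLE_CUTOFF sends more pairs to clerks as possible matches to unlink, while raising it leaves clerks more unlinked records to search and link.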
For example, the 4 analysts we interviewed had an average of 10 years of matching experience on other decennial census surveys and were directly involved in developing the training materials for the technicians and clerks. The bureau conducted a nationwide field follow-up on over 213,000 records (or about 15 percent) for which the bureau needed additional information before it could accurately assign a match code. For example, sometimes matchers needed additional information to verify that possibly matched records were actually records of the same person, that a housing unit was located in the sample area on Census Day, or that a person lived in the sample area on Census Day. Field follow-up questionnaires were printed at the National Processing Center and sent to the appropriate A.C.E. regional office. Field follow-up interviewers from the bureau's regional offices were required to visit specified housing units and obtain information from a knowledgeable respondent. If the household member for the record in question still lived at the A.C.E. address at the time of the interview and was not available to be interviewed after six attempts, field follow-up interviewers were allowed to obtain information from one or more knowledgeable proxy respondents, such as a landlord or neighbor. The second phase of clerical matching used the information obtained during field follow-up in an attempt to assign a final match code to records. As in the first phase of clerical matching, the criteria used to match and code records were well documented in both the bureau's procedures and operations memorandums and clerical matchers' training materials. Nevertheless, in applying those criteria, clerical matchers had to use their own judgment and expertise. This was particularly true when matching records that contained incomplete and inconsistent information, as noted in the following examples.
Different household members provided conflicting information. The census counted one person--the field follow-up respondent. A.C.E. recorded four persons--including the respondent and her daughter. The respondent, during field follow-up, reported that all four persons recorded by A.C.E. lived at the housing unit on Census Day. During the field follow-up interview, the respondent's daughter came to the house and disagreed with the respondent. The interviewer changed the answers on the field follow-up questionnaire to reflect what the daughter said-- the respondent was the only person living at the household address on Census Day. The other three people were coded as not living at the household address on Census Day. According to bureau staff, the daughter's response seemed more reliable. An interviewer's notes on the field follow-up questionnaire conflicted with recorded information. The census counted 13 people--including the respondent and 2 people not matched to A.C.E. records. A.C.E. recorded 12 people--including the respondent, 10 other matched people, and the respondent's daughter who was not matched to census records. The field follow-up interview attempted to resolve the unmatched census and A.C.E. people. Answers to questions on the field follow-up questionnaire verified that the daughter lived at the housing address on Census Day. However, the interviewer's notes indicated that the daughter and the respondent were living in a shelter on Census Day. The daughter was coded as not living at the household address on Census Day, while the respondent remained coded as matched and living at the household address on Census Day. According to bureau staff, the respondent should also have been coded as a person that did not live at the household address on Census Day, based on the notes on the field follow-up questionnaire. A.C.E., census, or both counted people at the wrong address. 
The census counted two people--the respondent and her husband--twice; once in an apartment and once in a business office that the husband worked in, both in the same apartment building. The A.C.E. did not record anyone at either location, as the residential apartment was not in the A.C.E. interview sample. The respondent, during field follow-up, reported that they lived at their apartment on Census Day and not at the business office. The couple had responded to the census on a questionnaire delivered to the business office. A census enumerator, following up on the "nonresponse" from the couple's apartment, had obtained census information from a neighbor about the couple. The couple, as recorded by the census at the business office address, was coded as correctly counted in the census. The couple, as recorded by the census at the apartment address, was coded as living outside the sample block. According to bureau staff, the couple recorded at the business office address were correctly coded, but the couple recorded at the apartment should have been coded as duplicates. An uncooperative household respondent provided partial or no information. The census counted a family of four--the respondent, his wife, and two daughters. A.C.E. recorded a family of three--the same husband and wife, but a different daughter's name, "Buffy." The field follow-up interview covered the unmatched daughters--two from census and one from A.C.E. The respondent confirmed that the four people counted by the census were his family and that "Buffy" was a nickname for one of his two daughters, but he would not identify which one. The interviewer wrote in the notes that the respondent "was upset with the number of visits" to his house. "Buffy" was coded as a match to one of the daughters; the other daughter was coded as counted in the census but missed by A.C.E. 
According to bureau staff, since the respondent confirmed that "Buffy" was a match for one of his daughters--although not which one--and that four people lived at the household address on Census Day, they did not want one of the daughters coded so that she was possibly counted as a missed census person. Since each record had to have a code identifying whether it was a match by the end of the second clerical matching phase, records that did not contain enough information after field follow-up to be assigned any other code were coded as "unresolved." The bureau later imputed the match code results for these records using statistical methods. While imputation for some situations may be unavoidable, it introduces uncertainty into estimates of census over- or undercount rates. The following are examples of situations that resulted in records coded as "unresolved." Conflicting information was provided for the same household. The census counted four people--a woman, an "unmarried partner," and two children. A.C.E. recorded three people--the same woman and two children. During field follow-up, the woman reported to the field follow- up interviewer that the "unmarried partner" did not really live at the household address, but just came around to baby-sit, and that she did not know where he lived on Census Day. According to bureau staff, probing questions during field follow-up determined that the "unmarried partner" should not have been coded as living at the housing unit on Census Day. Therefore, the "unmarried partner" was coded as "unresolved." A proxy respondent provided conflicting or inaccurate information. The census counted one person--a female renter. A.C.E. did not record anyone. The apartment building manager, who was interviewed during field follow-up, reported that the woman had moved out of the household address sometime in February 2000, but the manager did not know the woman's Census Day address. 
The same manager had responded to an enumerator questionnaire for the census in June 2000 and had reported that the woman did live at the household address on Census Day. The woman was coded as "unresolved." The bureau employed a series of quality assurance procedures for each phase of person matching. The bureau reported that person matching quality assurance was successful at minimizing errors because the quality assurance procedures found error rates of less than 1 percent. Clerks were to review all of the match results to ensure, among other things, that the records linked by the computer were not duplicates and contained valid and complete names. Moreover, according to bureau officials, the software used to link records had proven itself during a similar operation conducted for the 1990 Census. The bureau did not report separately on the quality of computer matched records. Although there were no formal quality assurance results from computer matching, at our request the bureau tabulated the number of records that the computer had coded as "matched" that had subsequently been coded otherwise. According to the bureau, the subsequent matching process resulted in a different match code for about 0.6 percent of the almost 500,000 records initially coded as matched by the computer. Of those records having their codes changed by later matching phases, over half were eventually coded as duplicates and almost all of the remainder were rematched to someone else. Technicians reviewed the work of clerks, and analysts reviewed the work of technicians, primarily to find clerical errors that (1) would have prevented records from being sent to field follow-up, (2) could cause a record to be incorrectly coded as either properly or erroneously counted by the census, or (3) would cause a record to be incorrectly removed from the A.C.E. sample. Analysts' work was not reviewed.
Clerks and technicians with error rates of less than 4 percent had a random sample of about 25 percent of their work reviewed, while clerks and technicians exceeding the error threshold had 100 percent of their work reviewed. About 98 percent of clerks in the first phase of matching had only a sample of their work reviewed. According to bureau data, less than 1 percent of match decisions were revised during quality assurance reviews, leading the bureau to conclude that clerical matching quality assurance was successful. Under certain circumstances, technicians and analysts performed additional reviews of clerks' and technicians' work. For example, if during the first phase of clerical matching a technician had reviewed and changed more than half of a clerk's match codes in a given geographic cluster, the cluster was flagged for an analyst to review all of the clerk and technician coding for that area. During the second phase, analysts were required to make similar reviews when only one of the records was flagged for their review. This is one of the reasons why, as illustrated in figure 2, these additional reviews made up a much more substantial part of the clerks' and technicians' workload that was subsequently reviewed by more senior matchers. The total percentage of workload reviewed ranged from about 20 to 60 percent across phases of clerical matching, far in excess of the 11-percent quality assurance level for the bureau's person interviewing operation. The quality assurance plan for the field follow-up phase had two general purposes: (1) to ensure that questionnaires had been completed properly and legibly and (2) to detect falsification. Supervisors initially reviewed each questionnaire for legibility and completeness. These reviews also checked the responses for consistency. Office staff were to conduct similar reviews of each questionnaire.
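The review-sampling rule for clerical matchers described above (a roughly 25-percent random sample for matchers with error rates under 4 percent, full review otherwise) can be sketched as follows. The threshold and sample rate come from the text; the function name and the random-sampling mechanics are illustrative assumptions.

```python
import random

ERROR_THRESHOLD = 0.04  # matchers at or above 4 percent errors get full review
SAMPLE_RATE = 0.25      # otherwise roughly a 25 percent random sample is reviewed

def select_for_review(work_items, matcher_error_rate, rng=None):
    """Return the portion of a matcher's work that a senior matcher re-checks."""
    if matcher_error_rate >= ERROR_THRESHOLD:
        return list(work_items)  # 100-percent review for high-error matchers
    rng = rng or random.Random(0)  # fixed seed for a reproducible illustration
    return [w for w in work_items if rng.random() < SAMPLE_RATE]
```

Under this kind of rule, a matcher's own accuracy history determines the review burden, which is consistent with the bureau's report that about 98 percent of first-phase clerks had only a sample of their work reviewed.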
To detect falsification, the bureau was to review and edit each questionnaire at least twice and recontact a random sample of 5 percent of the respondents. As shown in figure 3, all 12 of the A.C.E. regional offices exceeded the 5 percent requirement by selecting more than 7 percent of their workload for quality assurance review, and the national rate of quality assurance review was about 10 percent. At the local level, however, there was greater variation. There are many reasons why the quality assurance coverage can appear to vary locally. For example, a local census area could have a low quality assurance coverage rate because interviewers in that area had their work reviewed in other areas, or the area could have had an extremely small field follow-up workload, making the difference of just one quality assurance questionnaire constitute a large percentage of the local workload. Seventeen local census office areas (out of 520 nationally, including Puerto Rico) had 20 percent or more of field follow-up interviews covered by the quality assurance program, and, at the other extreme, 5 local census areas had 5 percent or less of the work covered by the quality assurance program. Less than 1 percent of the randomly selected questionnaires failed quality assurance nationally, leading the bureau to report this quality assurance operation as successful. When recontacting respondents to detect falsification by interviewers, quality assurance supervisors were to determine whether the household had been contacted by an interviewer, and if it had not, the record of that household failed quality assurance. According to bureau data, about 0.8 percent of the randomly selected quality assurance questionnaires failed quality assurance nationally. This percentage varied between 0 and about 3 percent across regions. The bureau carried out person matching as planned, with only a few procedural deviations. 
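The falsification check described above, recontacting a random sample of at least 5 percent of respondents and failing any questionnaire whose household reports no interviewer visit, can be sketched as follows. The 5-percent rate comes from the text; the function and variable names are illustrative, and the `confirms_visit` lookup stands in for the actual recontact interview.

```python
import random

RECONTACT_RATE = 0.05  # at least 5 percent of respondents were to be recontacted

def falsification_check(household_ids, confirms_visit, rng=None):
    """Recontact a random sample of households; a questionnaire fails
    quality assurance when the household reports it was never visited.

    confirms_visit maps household id -> bool (True if the household
    confirms an interviewer contact); this is a stand-in for the
    recontact interview itself."""
    rng = rng or random.Random(1)  # fixed seed for a reproducible illustration
    sample = [h for h in household_ids if rng.random() < RECONTACT_RATE]
    failed = [h for h in sample if not confirms_visit[h]]
    return sample, failed
```

The nationally reported figures fit this shape: about 10 percent of the workload was actually sampled, and about 0.8 percent of sampled questionnaires failed.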
Although the bureau took action to address these deviations, it has not determined how matching results were affected. As shown in table 1, these deviations included (1) census files that were delivered late, (2) a programming error in the clerical matching software, (3) printing errors in field follow-up forms, (4) regional offices that sent back incomplete questionnaires, and (5) the need for additional time to complete the second phase of clerical matching. It is unknown what, if any, cumulative effect these procedural deviations may have had on the quality of matching for these records or on the resultant A.C.E. estimates of census undercounts. However, bureau officials believe that the effect of the deviations was small based on the timely responses taken to address them. The bureau conducted reinterviewing and re-matching studies on samples of the 2000 A.C.E. sample and concluded that matching quality in 2000 was improved over that in 1990, but that error introduced during matching operations remained and contributed to an overstatement of A.C.E. estimates of the census undercounts. The studies provided some categorical descriptions of the types of matching errors measured, but did not identify the procedural causes, if any, for those errors. Furthermore, despite the improvement in matching reported by the bureau, A.C.E. results were not used to adjust the census due to these errors as well as other remaining uncertainties. The bureau has reported that additional review and analysis on these remaining uncertainties would be necessary before any potential uses of these data can be considered. The computer matching phase started 3 days later than scheduled and finished 1 day late due to the delayed delivery of census files. In response, bureau employees who conducted computer matching worked overtime hours to make up lost time. Furthermore, A.C.E. regional offices did not receive clusters in the prioritized order that they had requested. 
The reason for prioritizing the clusters was to provide as much time as possible for field follow-up on clusters in the most difficult areas. Examples of areas that were expected to need extra time were those with staffing difficulties, larger workloads, or expected weather problems. Based on the bureau's Master Activities Schedule, the delay did not affect the schedule of subsequent matching phases. Also, bureau officials stated that although clusters were not received in prioritized order, field follow-up was not greatly affected because the first clerical matching phase was well staffed and sent the work to regional offices quickly. On the first full day of clerical matching, the bureau identified a programming error in the quality assurance management system, which made some clerks and technicians who had not passed quality assurance reviews appear to have passed. In response, bureau officials manually overrode the system. Bureau officials said the programming error was fixed within a couple of days, but could not explain how the programming error occurred. They stated that the software system used for clerical matching was thoroughly tested, although it was not used in any prior censuses or census tests, including the Dress Rehearsal. As we have previously noted, programming errors that occur during the operation of a system raise questions about the development and acquisition processes used for that system. A programming error caused last names to be printed improperly on field follow-up forms for some households containing multiple last names. In situations in which regional office staff may not have caught the printing error and interviewers may have been unaware of the error--such as when those questionnaires were completed before the problem was discovered-- interviews may have been conducted using the wrong last name, thus recording misleading information. 
According to bureau officials, in response, the bureau (1) stopped printing questionnaires on the date officials were notified about the misprinted questionnaires, (2) provided information to regional offices that listed all field follow-up housing units with multiple names that had been printed prior to the date the problem was resolved, and (3) developed procedures for clerical matchers to address any affected questionnaires being returned that had not been corrected by regional office staff. While resolving the problem, productivity was initially slowed in the A.C.E. regional offices for approximately 1 to 4 days, yet field follow-up was completed on time. Bureau officials inadvertently introduced this error when they addressed a separate programming problem in the software. Bureau officials stated that they tested this software system; however, the system was not given a trial run during the Census Dress Rehearsal in 1998. According to bureau officials, the problem did not affect data quality because it was caught early in the operation and follow-up forms were edited by regional staff. However, the bureau could not determine the exact day of printing for each questionnaire and thus did not know exactly which households had been affected by the problem. According to bureau data, the problem could have potentially affected over 56,000 persons, or about 5 percent of the A.C.E. sample. In addition to the problem printing last names, the bureau experienced other printing problems. According to bureau staff, field follow-up received printed questionnaires that were (1) missing pages, (2) missing reference notes written by clerical matchers, and (3) missing names and/or having some names printed more than once for some households of about nine or more people. According to bureau officials, these problems were not resolved during the operation because they were reported after field follow-up had started and the bureau was constrained by deadlines. 
Bureau officials stated that they believed that these problems would not significantly affect the quality of data collected or match code results, although bureau officials were unable to provide data that would document either the extent, effect, or cause of these problems. The bureau's regional offices submitted questionnaires containing an incomplete "geocoding" section. This section was to be used in instances when the bureau needed to verify whether a housing unit (1) existed on Census Day and (2) was correctly located in the A.C.E. sample area. Although the bureau returned 48 questionnaires during the first 6 days of the operation to the regional offices for completion, bureau officials stated that after that they no longer returned questionnaires to the regional offices because they did not want to delay the completion of field follow-up. A total of over 10,000 questionnaires with "geocoding" sections were initially sent to the regional offices. The bureau did not have data on the number, if any, of questionnaires that the regional offices submitted incomplete beyond the initial 48. The bureau would have coded as "unresolved" the persons covered by any incomplete questionnaires. As previously stated, the bureau later imputed the match code results for these records using statistical methods, which could introduce uncertainty into estimates of census over- or undercount rates. According to bureau officials, this problem was caused by (1) not printing a checklist of all sections that needed to be completed by interviewers, (2) no link from any other section of the questionnaire to refer interviewers to the "geocoding" section, and (3) field supervisors following the same instructions as interviewers to complete their reviews of field follow-up forms. However, bureau officials believed that the mistake should have been caught by regional office reviews before the questionnaires were sent back for processing. 
About a week after the second clerical matching phase began, officials requested an extension, which was granted for 5 days, to complete the second clerical matching phase. According to bureau officials, the operation could have been completed by the November 30, 2000, deadline as planned, but they decided to take extra steps to improve data quality that required additional time. According to bureau officials, the delay in completing person matching had no effect on the final completion schedule, only the start of subsequent A.C.E. processing operations. Matching A.C.E. and census records was an inherently complex and labor- intensive process that often relied on the judgment of trained staff, and the bureau prepared itself accordingly. For example, the bureau provided extensive training for its clerical matchers, generally provided thorough documentation of the process and criteria to be used in carrying out their work, and developed quality assurance procedures to cover its critical matching operations. As a result, our review identified few significant operational or procedural deviations from what the bureau planned, and the bureau took timely action to address them. Nevertheless, our work identified opportunities for improvement. These opportunities include a lack of written documentation showing how cutoff scores were determined and programming errors in the clerical matching software and software used to print field follow-up forms. Without written documentation, the bureau will be less likely to capture lessons learned on how cutoff scores should be applied, in order to determine the impact on clerical matching productivity. Moreover, the discovery of programming errors so late in the operation raises questions about the development and acquisition processes used for the affected A.C.E. computer systems. 
In addition, one lapse in procedures may have resulted in incomplete geocoding sections verifying that the person being matched was in the geographic sample area. The collective effect that these deviations may have had on the accuracy of A.C.E. results is unknown. Although the bureau has concluded that A.C.E. matching quality improved compared to 1990, the bureau has reported that error introduced during matching operations remained and contributed to an overstatement of the A.C.E. estimate of census undercounts. To the extent that the bureau employs an operation similar to A.C.E. to measure the quality of the 2010 Census, it will be important for the bureau to determine the impact of the deviations and explore operational improvements, in addition to the research it might carry out on other uncertainties in the A.C.E. results. As the bureau documents its lessons learned from the 2000 Census and continues its planning efforts for 2010, we recommend that the secretary of commerce direct the bureau to take the following actions: 1. Document the criteria and the logic that bureau staff used during computer matching to determine the cutoff scores for matched, possibly matched, and unmatched record pairs. 2. Examine the bureau's system development and acquisition processes to determine why the problems with A.C.E. computer systems were not discovered prior to deployment of these systems. 3. Determine the effect that the printing problems may have had on the quality of data collected for affected records, and thus the accuracy of A.C.E. estimates of the population. 4. Determine the effect that the incomplete geocoding section of the questionnaires may have had on the quality of data collected for affected records, and thus the accuracy of A.C.E. estimates of census undercounts. The secretary of commerce forwarded written comments from the U.S. Census Bureau on a draft of this report. (See appendix II.) 
The bureau had no comments on the text of the report and agreed with, and is taking action on, two of our four recommendations. In responding to our recommendation to document the criteria and the logic that bureau staff used during computer matching to determine cutoff scores, the bureau acknowledged that such documentation may be informative and that such documentation is under preparation. We look forward to reviewing the documentation when it is complete. In responding to our recommendation to examine system development and acquisition processes to determine why problems with the A.C.E. computer systems were not discovered prior to deployment, the bureau responded that despite extensive testing of A.C.E. computer systems, a few problems may remain undetected. The bureau plans to review the process to avoid such problems in 2010, and we look forward to reviewing the results of their review. Finally, in response to our two recommendations to determine the effects that printing problems and incomplete questionnaires had on the quality of data collected and the accuracy of A.C.E. estimates, the bureau responded that it did not track the occurrence of these problems because the effects on the coding process and accuracy were considered to be minimal since all problems were identified early and corrective procedures were effectively implemented. In our draft report we recognized that the bureau took timely corrective action in response to these and other problems that arose during person matching. Yet we also reported that bureau studies of the 2000 matching process had concluded that matching error contributed to error in A.C.E. estimates without identifying procedural causes, if any. Again, to the extent that the bureau employs an operation similar to A.C.E. to measure the quality of the 2010 Census, it will be important for the bureau to determine the impact of the problems and explore operational improvements as we recommend. 
We are sending copies of this report to other interested congressional committees. Please contact me at (202) 512-6806 if you have any questions. Key contributors to this report are included in appendix III. To address our three objectives, we examined relevant bureau program specifications, training manuals, office manuals, memorandums, and other progress and research documents. We also interviewed bureau officials at bureau headquarters in Suitland, Md., and the bureau's National Processing Center in Jeffersonville, Ind., which was responsible for the planning and implementation of the person matching operation. In addition, to review the process and criteria involved in making an A.C.E. and census person match, we observed the match clerk training at the National Processing Center and a field follow-up interviewer training session in Dallas, Tex. To identify the results of the quality assurance procedures used in key person matching phases, we analyzed operational data and reports provided to us by the bureau, as well as extracts from the bureau's management information system, which tracked the progress of quality assurance procedures. Other independent sources of the data were not available for us to use to test the data that we extracted, although we were able to corroborate data results with subsequent interviews of key staff. Finally, to examine how, if at all, the matching operation deviated from what was planned, we selected 11 locations in 7 of the 12 bureau census regions (Atlanta, Chicago, Dallas, Denver, Los Angeles, New York, and Seattle). At each location we interviewed A.C.E. workers from November through December 2000. The locations selected for field visits were chosen primarily for their geographic dispersion (i.e., urban or rural), variation in type of enumeration area (e.g., update/leave or list enumerate), and the progress of their field follow-up work.
In addition, we reviewed the match code results and field follow-up questionnaires from 48 sample clusters. These clusters were chosen because they corresponded to the local census areas we visited and contained records reviewed during every phase of the person matching operation. The results of our field visits and our cluster review are not generalizable nationally to the person matching operation. We performed our audit work from September 2000 through September 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Ty Mitchell, Lynn Wasielewski, Steven Boyles, Angela Pun, J. Christopher Mihm, and Richard Hung contributed to this report.
The U.S. Census Bureau conducted the Accuracy and Coverage Evaluation (ACE) survey to estimate the number of people missed, counted more than once, or otherwise improperly counted in the 2000 Census. On the basis of uncertainty in the ACE results, the Bureau's acting director decided that the 2000 Census tabulations should not be adjusted in order to redraw the boundaries of congressional districts or to distribute billions of dollars in federal funding. Although ACE was generally implemented as planned, the Bureau found that it overstated census undercounts because of an error introduced during matching operations and other uncertainties. The Bureau concluded that additional review and analysis of these uncertainties would be needed before the data could be used. Matching more than 1.4 million census and ACE records involved the following four phases, each with its own matching procedures and multiple layers of review: computer matching, clerical matching, field follow-up, and clerical matching. The Bureau applied quality assurance procedures to each phase of person matching. Because the quality assurance procedures had failure rates of less than one percent, the Bureau reported that person matching quality assurance was successful at minimizing errors. Overall, the Bureau carried out person matching as planned, with few procedural deviations. GAO identified areas for improving future ACE efforts, including more complete documentation of computer matching decisions and better assurance that problems do not arise with the Bureau's automated systems.
The Cassini Program, sponsored by NASA, the European Space Agency, and the Italian Space Agency, began in fiscal year 1990. NASA's Jet Propulsion Laboratory (JPL), which is operated under contract by the California Institute of Technology, manages the Cassini Program. The spacecraft is expected to arrive at Saturn in July 2004 and begin a 4-year period of scientific observations to obtain detailed information about the composition and behavior of Saturn and its atmosphere, magnetic field, rings, and moons. Power for the Cassini spacecraft is generated by three radioisotope thermoelectric generators (RTG) that convert heat from the natural radioactive decay of plutonium dioxide into electricity. The spacecraft also uses 117 radioisotope heater units to provide heat for spacecraft components. The spacecraft carries 72 pounds of radioactive plutonium dioxide in the RTGs and 0.7 pounds in the heater units. The Department of Energy (DOE) provided the RTGs and their plutonium dioxide fuel, and the Department of Defense (DOD) provided the Titan IV/Centaur rocket to launch the spacecraft. According to NASA and JPL officials, most deep space missions beyond Mars, including the Cassini mission, must use RTGs to generate electrical power. The only proven non-nuclear source of electrical power for spacecraft is photovoltaic cells, also called solar arrays. However, the energy available from sunlight decreases with the square of the distance from the sun. Thus, existing solar arrays cannot produce sufficient electricity beyond Mars' orbit to operate most spacecraft and their payloads. Before a spacecraft carrying radioactive materials is launched, regulations implementing federal environmental laws require the sponsoring agency, in this instance NASA, to assess and mitigate the potential risks and effects of an accidental release of radioactive materials during the mission. 
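The point above about sunlight weakening with distance can be illustrated with a short back-of-the-envelope calculation. The solar-constant and planetary-distance values below are standard reference figures, not numbers taken from this report, and the sketch is ours rather than NASA's:

```python
# Solar flux falls off with the square of distance from the sun.
# All values are standard reference figures (not from the GAO report).
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth's distance (1 astronomical unit)

DISTANCES_AU = {"Earth": 1.0, "Mars": 1.52, "Jupiter": 5.2, "Saturn": 9.5}

def solar_flux(distance_au):
    """Approximate solar flux (W/m^2) at a given distance from the sun."""
    return SOLAR_CONSTANT / distance_au ** 2

for planet, d in DISTANCES_AU.items():
    print(f"{planet:8s} {solar_flux(d):7.1f} W/m^2")
```

At Saturn's distance the flux is roughly 15 W/m^2, about one-ninetieth of what the same array would receive at Earth, which is why arrays sized for the inner solar system fall far short beyond Mars' orbit.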
As part of any such assessments, participating agencies perform safety analyses in accordance with administrative procedures. To obtain the necessary presidential approval to launch space missions carrying large amounts of radioactive material, such as Cassini, NASA is also required to convene an interagency review of the nuclear safety risks posed by the mission. RTGs have been used on 25 space missions, including Cassini, according to NASA and JPL officials. Three of these missions failed due to problems unrelated to the RTGs. Appendix I describes those missions and the disposition of the nuclear fuel on board each spacecraft. The processes used by NASA to assess the safety and environmental risks associated with the Cassini mission reflected the extensive analysis and evaluation requirements established in federal laws, regulations, and executive branch policies. For example, DOE designed and tested the RTGs to withstand likely accidents while preventing or minimizing the release of the RTGs' plutonium dioxide fuel, and a DOE administrative order required the agency to estimate the safety risks associated with the RTGs used for the Cassini mission. Also, federal regulations implementing the National Environmental Policy Act of 1969 required NASA to assess the environmental and public health impacts of potential accidents during the Cassini mission that could cause plutonium dioxide to be released from the spacecraft's RTGs or heater units. In addition, a directive issued by the Executive Office of the President requires an ad hoc interagency Nuclear Safety Review Panel, supported by technical experts from NASA, other federal agencies, national laboratories, and academia, to review the nuclear safety analyses prepared for the Cassini mission. 
After completion of the interagency review process, NASA requested and was given nuclear launch safety approval by the Office of Science and Technology Policy, within the Executive Office of the President, to launch the Cassini spacecraft. In addition to the risks associated with a launch accident, there is also a small chance that the Cassini spacecraft could release nuclear material either during an accidental reentry into Earth's atmosphere when the spacecraft passes by Earth in August 1999 or during the interplanetary journey to Saturn. Potential reentry accidents were also addressed during the Cassini safety, environmental impact, and launch review processes. DOE originally developed the RTGs used on the Cassini spacecraft for NASA's previous Galileo and Ulysses missions. Figure 1 shows the 22-foot, 12,400-pound Cassini spacecraft and some of its major systems, including two of the spacecraft's three RTGs. DOE designed and constructed the RTGs to prevent or minimize the release of plutonium dioxide fuel from the RTG fuel cells in the event of an accident. DOE performed physical and analytical testing of the RTG fuel cells, known as general-purpose heat source units, to determine their performance and assess the risks of accidental fuel releases. Under an interagency agreement with NASA, DOE constructed the RTGs for the Cassini spacecraft and assessed the mission risks as required by a DOE administrative order. DOE's final safety report on the Cassini mission, published in May 1997, documents the results of the test, evaluation, and risk assessment processes for the RTGs. The RTG fuel cells have protective casings composed of several layers of heat- and impact-resistant shielding and a strong, thin metal shell around the fuel pellets. According to NASA and DOE officials, the shielding will enable the fuel cells to survive likely types of launch or orbital reentry accidents and prevent or minimize the release of plutonium dioxide fuel. 
In addition to the shielding, the plutonium dioxide fuel itself is formed into ceramic pellets designed to resist reentry heat and breakage caused by an impact. If fuel is released from an impact-damaged fuel cell, the pellets are designed to break into large pieces to avoid inhalation of very small particles, which is the primary health risk posed by plutonium dioxide. Federal regulations implementing the National Environmental Policy Act of 1969 required NASA to prepare an environmental impact statement for the Cassini mission. To meet these requirements, NASA conducted quantitative analyses of the types of accidents that could cause a release of plutonium dioxide from the RTGs and the possible health effects that could result from such releases. NASA also used DOE's RTG safety analyses and Air Force safety analyses of the Titan IV/Centaur rocket, which launched the Cassini spacecraft. NASA published a final environmental impact statement for the Cassini mission in June 1995. In addition to the analyses of potential environmental impacts and health effects, the document included and responded to public comments on NASA's analyses. NASA also published a final supplemental environmental impact statement for the Cassini mission in June 1997. According to NASA officials, NASA published the supplemental statement to keep the public informed of changes in the potential impacts of the Cassini mission based on analyses conducted subsequent to the publication of the final environmental impact statement. The supplemental statement used DOE's updated RTG safety analyses to refine the estimates of risks for potential accidents and document a decline in the overall estimate of risk for the Cassini mission. The environmental impact assessment process for the Cassini mission ended formally in August 1997 when NASA issued a Record of Decision for the final supplemental environmental impact statement. 
However, if the circumstances of the Cassini mission change and affect the estimates of accident risks, NASA is required to reassess the risks and determine the need for any additional environmental impact documentation. Agencies planning to transport nuclear materials into space are required by a presidential directive to obtain approval from the Executive Office of the President before launch. To prepare for and support the approval decision, the directive requires that an ad hoc Interagency Nuclear Safety Review Panel review the lead agencies' nuclear safety assessments. Because the Cassini spacecraft carries a substantial amount of plutonium, NASA convened a panel to review the mission's nuclear safety analyses. NASA formed the Cassini Interagency Nuclear Safety Review Panel shortly after the program began in October 1989. The panel consisted of four coordinators, from NASA, DOE, DOD, and the Environmental Protection Agency, and a technical advisor from the Nuclear Regulatory Commission. The review panel, supported by approximately 50 technical experts from these and other government agencies and outside consultants, analyzed and evaluated NASA, JPL, and DOE nuclear safety analyses of the Cassini mission and performed its own analyses. The panel reported no significant differences between the results of its analyses and those done by NASA, JPL, and DOE. The Cassini launch approval process ended formally in October 1997 when the Office of Science and Technology Policy, within the Executive Office of the President, gave its nuclear launch safety approval for NASA to launch the Cassini spacecraft. NASA officials told us that, in deciding whether to approve the launch of the Cassini spacecraft, the Office of Science and Technology Policy reviewed the previous NASA, JPL, DOE, and review panel analyses and obtained the opinions of other experts. 
NASA, JPL, and DOE used physical testing and computer simulations of the RTGs under accident conditions to develop quantitative estimates of the accident probabilities and potential health risks posed by the Cassini mission. To put the Cassini risk estimates in context, NASA compares them with the risks posed by exposure to normal background radiation. In making this comparison, NASA estimates that, over a 50-year period, the average person's risk of developing cancer from exposure to normal background radiation is on the order of 100,000 times greater than from the highest risk accident for the Cassini mission. For the launch portion of the Cassini mission, NASA estimated that the probability of an accident that would release plutonium dioxide was 1 in 1,490 during the early part of the launch and 1 in 476 during the later part of the launch and Earth orbit. The estimated health effect of either type of accident is that, over the succeeding 50-year period, less than one more person would die of cancer caused by radiation exposure than if there were no accident. Although the Titan IV/Centaur rocket is the United States' most powerful launch vehicle, it does not have enough energy to propel the Cassini spacecraft on a direct route to Saturn. Therefore, the spacecraft will perform two swingby maneuvers at Venus in April 1998 and June 1999, one at Earth in August 1999, and one at Jupiter in December 2000. In performing the maneuvers, the spacecraft will use the planets' gravity to increase its speed enough to reach Saturn. Figure 2 illustrates the Cassini spacecraft's planned route to Saturn. NASA estimates that there is less than a one in one million chance that the spacecraft could accidentally reenter Earth's atmosphere during the Earth swingby maneuver. To verify the estimated probability of an Earth swingby accident, NASA formed a panel of independent experts, which reported that the probability estimates were sound and reasonable. 
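The launch-risk figures above pair a small probability with a small consequence. A quick expected-value calculation (probability times consequence) makes the scale concrete. This is illustrative arithmetic on the report's published numbers only, not a reproduction of NASA's risk methodology; the report states the consequence as "less than one" excess cancer death, so one death is used here as an upper bound:

```python
# Expected-value sketch for the launch accident scenarios in the report:
# probability of a release accident x upper-bound consequence.
# Illustrative only; NASA's actual risk assessment is far more detailed.
launch_scenarios = {
    "early launch": 1 / 1_490,
    "late launch / Earth orbit": 1 / 476,
}
MAX_EXCESS_DEATHS = 1.0  # "less than one" excess cancer death, per the report

for phase, prob in launch_scenarios.items():
    expected = prob * MAX_EXCESS_DEATHS
    print(f"{phase}: expected excess deaths < {expected:.4f}")
```

Even using the upper bound, the expected toll is on the order of a few thousandths of one statistical death per launch.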
If such an accident were to occur, the estimated health effect is that, during the succeeding 50-year period, 120 more people would die of cancer than if there were no accident. If the spacecraft were to become unable to respond to guidance commands during its interplanetary journey, the spacecraft would drift in an orbit around the sun, from which it could reenter Earth's atmosphere in the future. However, the probability that this accident would occur and release plutonium dioxide is estimated to be one in five million. The estimated health effect of this accident is the same as for an Earth swingby accident. Due to the spacecraft's high speed, NASA and DOE projected that an accidental reentry during the Earth swingby maneuver would generate temperatures high enough to damage the RTGs and release some plutonium dioxide. As a safety measure, JPL designed the Earth swingby trajectory so that the spacecraft will miss Earth by a wide margin unless the spacecraft's course is accidentally altered. About 50 days before the swingby, Cassini mission controllers will begin making incremental changes to the spacecraft's course, guiding it by Earth at a distance of 718.6 miles. According to NASA and JPL officials, the Cassini spacecraft and mission designs incorporate other precautions to minimize the possibility that an accident could cause the spacecraft to reenter during either the Earth swingby maneuver or the interplanetary portion of its journey to Saturn. NASA regulations require that, as part of the environmental analysis, alternative power sources be considered for missions planning to use nuclear power systems. JPL's engineering study of alternative power sources for the Cassini mission concluded that RTGs were the only practical power source for the mission. The study stated that, because sunlight is so weak at Saturn, solar arrays able to generate sufficient electrical power would have been too large and heavy for the Titan IV/Centaur to launch. 
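To make "too large and heavy" concrete, a rough sizing estimate follows from the inverse-square falloff of sunlight. Every input below is an illustrative assumption (the solar constant, Saturn's distance, a power demand of roughly 880 watts, which is approximately what Cassini's three RTGs supply, and a 19 percent cell efficiency typical of the era); none of these figures is taken from this report:

```python
# Rough solar-array sizing at Saturn. All inputs are illustrative
# assumptions, not figures from the GAO report.
SOLAR_CONSTANT_W_M2 = 1361.0  # solar flux at Earth's distance (1 AU)
SATURN_DISTANCE_AU = 9.5
POWER_DEMAND_W = 880.0        # approx. output of Cassini's three RTGs (assumed)
CELL_EFFICIENCY = 0.19        # typical 1990s-era photovoltaic cell (assumed)

def required_array_area(distance_au, efficiency, power_w):
    """Array area (m^2) needed to deliver power_w at distance_au from the sun."""
    flux = SOLAR_CONSTANT_W_M2 / distance_au ** 2
    return power_w / (flux * efficiency)

area_earth = required_array_area(1.0, CELL_EFFICIENCY, POWER_DEMAND_W)
area_saturn = required_array_area(SATURN_DISTANCE_AU, CELL_EFFICIENCY, POWER_DEMAND_W)
print(f"At Earth:  {area_earth:6.1f} m^2")
print(f"At Saturn: {area_saturn:6.1f} m^2 ({area_saturn / area_earth:.0f}x larger)")
```

Under these assumptions, the array at Saturn would need roughly 300 square meters, larger than a tennis court, before accounting for support structure, deployment hardware, or cell degradation over the mission.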
The study also noted that, even if the large arrays could have been launched to Saturn on the Cassini spacecraft, they would have made the spacecraft very difficult to maneuver and increased the mission's risk of failure due to the arrays' uncertain reliability over the length of the 12-year mission. Figure 3 compares the relative sizes of solar arrays required to power the Cassini spacecraft at various distances from the sun, including Saturn. Since 1968, NASA, DOE, and DOD have together invested more than $180 million in solar array technology, according to a JPL estimate. The agencies are continuing to invest in improving both solar and nuclear spacecraft power generation systems. For example, in fiscal year 1998, NASA and DOD will invest $10 million for research and development of advanced solar array systems, and NASA will invest $10 million for research and development of advanced nuclear-fueled systems. NASA officials in charge of developing spacecraft solar array power systems said that the current level of funding is prudent, given the state of solar array technology, and that it meets the needs of current agency research programs. The fiscal year 1998 budget of $10 million for solar array systems exceeds the estimated 30-year average annual funding level of $6 million (not adjusted for inflation). According to NASA and JPL officials, solar arrays offer the most promise for future non-nuclear-powered space missions. Two improvements to solar array systems that are currently being developed could extend the range of some solar array-powered spacecraft and science operations beyond the orbit of Mars. New types of solar cells and arrays under development will more efficiently convert sunlight into electricity. Current cells operate at 18 to 19 percent efficiency, and the most advanced cells under development are intended to achieve 22 to possibly 30 percent efficiency. 
Although the improvement in conversion efficiency will be relatively small, it could enable some spacecraft to use solar arrays to operate as far out as Jupiter's orbit. Another improvement to solar arrays under development will add lenses or reflective surfaces to capture and concentrate more sunlight onto the arrays, enabling them to generate more electricity. NASA's technology demonstration Deep Space-1 spacecraft, scheduled for launch in July 1998, will include this new technology. Over the long term, limitations inherent to solar array technology will preclude its use on many deep space missions. The primary limitation is the diminishing energy in sunlight as distance from the sun increases. No future solar arrays are expected to produce enough electricity to operate a spacecraft farther than Jupiter's orbit. Another key limitation is that solar arrays cannot be used for missions requiring operations in extended periods of darkness, such as those on or under the surface of a planet or moon. Other limitations of solar arrays, including their vulnerability to damage from radiation and temperature extremes, make the cells unsuitable for missions that encounter such conditions. NASA and DOE are working on new nuclear-fueled generators for use on future space missions. NASA and DOE's Advanced Radioisotope Power Source Program is intended to replace RTGs with an advanced nuclear-fueled generator that will more efficiently convert heat into electricity and require less plutonium dioxide fuel than existing RTGs. NASA and DOE plan to flight test a key component of the new generator on a space shuttle mission. The test system will use electrical power to provide heat during the test. If development of this new generator is successful, it will be used on future missions. NASA is currently studying eight future space missions between 2000 and 2015 that will likely use nuclear-fueled electrical generators. 
These missions are Europa Orbiter, Pluto Express, Solar Probe, Interstellar Probe, Europa Lander, Io Volcanic Observer, Titan Organic Explorer, and Neptune Orbiter. On the basis of historical experience, NASA and DOE officials said that about one-half of such missions typically obtain funding and are launched. In addition, several planned Mars missions would carry from 5 to 30 radioisotope heater units to keep spacecraft components warm. Each heater unit would contain about 0.1 ounces of plutonium dioxide. In accordance with NASA's current operating philosophy, spacecraft for future space science missions will be much smaller than those used on current deep space missions. Future spacecraft with more efficient electrical systems and reduced demands for electrical power, when coupled with the advanced nuclear-fueled generators, will require significantly less plutonium dioxide fuel. For example, the new nuclear-fueled generator that NASA studied for use on the Pluto Express spacecraft is projected to need less than 10 pounds of plutonium dioxide compared with 72 pounds on the Cassini spacecraft. According to NASA and DOE officials, spacecraft carrying much smaller amounts of radioactive fuel will reduce human health risks because less plutonium dioxide could be released in the event of an accident. NASA and JPL officials also pointed out that planned future missions may not need to use Earth swingby trajectories. Depending on the launch vehicle used, the smaller spacecraft planned for future missions may be able to travel more direct routes to their destinations without the need to use Earth swingby maneuvers to increase their speed. In written comments on a draft of this report, NASA said that the report fairly represents NASA's environmental and nuclear safety processes for the Cassini space mission (see app. II). 
In addition, NASA and DOE also provided technical and clarifying comments for this report, which we incorporated as appropriate. To obtain information about the processes used by NASA to assess the safety and environmental risks of the Cassini mission, NASA's efforts and costs to develop non-nuclear power sources for deep space missions, and future space missions for which nuclear-fueled power sources will be used, we interviewed officials at NASA Headquarters in Washington, D.C.; JPL in Pasadena, California; and DOE's Office of Nuclear Energy, Science, and Technology in Germantown, Maryland. We reviewed the primary U.S. legislation and regulations applicable to the use of nuclear materials in space and NASA, JPL, and DOE documents pertaining to the safety and environmental assessment processes that were used for the Cassini mission. We reviewed the Cassini Safety Evaluation Report prepared by the Cassini Interagency Nuclear Safety Review Panel. We also reviewed NASA and JPL documents on the development of improved non-nuclear and nuclear electrical power sources for spacecraft and studies for future nuclear-powered space missions. We did not attempt to verify NASA and DOE estimates of risks associated with the Cassini mission or the financial and other data provided by the agencies. We performed our work from September 1997 to February 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Director of the Office of Management and Budget, the Administrator of NASA, the Secretary of Energy, and appropriate congressional committees. We will also make copies available to other interested parties on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are Jerry Herley and Jeffery Webster. Since 1961 the United States has launched 25 spacecraft with radioisotope thermoelectric generators (RTG) on board. 
Three of the missions failed, and the spacecraft reentered Earth's atmosphere. However, none of the failures were due to problems with the RTGs. In 1964, a TRANSIT 5BN-3 navigational satellite malfunctioned. Its single RTG, which contained 2.2 pounds of plutonium fuel, burned up during reentry into Earth's atmosphere. This RTG was intended to burn up in the atmosphere in the event of a reentry. In 1968, a NIMBUS-B-1 weather satellite was destroyed after its launch vehicle malfunctioned. The plutonium fuel cells from the spacecraft's two RTGs were recovered intact from the bottom of the Santa Barbara Channel near the California coast. According to National Aeronautics and Space Administration (NASA) and Department of Energy (DOE) officials, no radioactive fuel was released from the fuel cells, and the fuel was recycled and used on a subsequent space mission. Figure I.1 shows the intact fuel cells during the underwater recovery operation. In 1970, the Apollo 13 Moon mission was aborted due to mechanical failures while traveling to the moon. The spacecraft and its single RTG, upon return to Earth, were jettisoned into the Pacific Ocean, in or near the Tonga Trench. According to DOE officials, no release of radioactive fuel was detected. 
Pursuant to a congressional request, GAO reviewed the use of nuclear power systems for the Cassini spacecraft and other space missions, focusing on: (1) the processes the National Aeronautics and Space Administration (NASA) used to assess the safety and environmental risks associated with the Cassini mission; (2) NASA's efforts to consider the use of a non-nuclear power source for the Cassini mission; (3) the federal investment associated with the development of non-nuclear power sources for deep space missions; and (4) NASA's planned future nuclear-powered space missions. GAO noted that: (1) federal laws and regulations require analysis and evaluation of the safety risks and potential environmental impacts associated with launching nuclear materials into space; (2) as the primary sponsor of the Cassini mission, NASA conducted the required analyses with assistance from the Department of Energy (DOE) and the Department of Defense (DOD); (3) in addition, a presidential directive required that an ad hoc interagency panel review the Cassini mission safety analyses; (4) the directive also required that NASA obtain presidential approval to launch the spacecraft; (5) NASA convened the required interagency review panel and obtained launch approval from the Office of Science and Technology Policy, within the Office of the President; (6) while the evaluation and review processes can minimize the risks of launching radioactive materials into space, the risks themselves cannot be eliminated, according to NASA and Jet Propulsion Laboratory (JPL) officials; (7) as required by NASA regulations, JPL considered using solar arrays as an alternative power source for the Cassini mission; (8) engineering studies conducted by JPL concluded that the solar arrays were not feasible for the Cassini mission primarily because they would have been too large and heavy and had uncertain reliability; (9) during the past 30 years, NASA, DOE, and DOD have invested over $180 million in solar array 
technology, the primary non-nuclear power source; (10) in FY 1998, NASA and DOD will invest $10 million to improve solar array systems, and NASA will invest $10 million to improve nuclear-fueled systems; (11) according to NASA and JPL officials, advances in solar array technology may expand its use for some missions; however, there are no currently practical alternatives to using nuclear-fueled power generation systems for most missions beyond the orbit of Mars; (12) NASA is planning eight future deep space missions between 2000 and 2015 that will likely require nuclear-fueled power systems to generate electricity for the spacecraft; (13) none of these missions have been approved or funded, but typically about one-half of such planned missions are eventually funded and launched; (14) advances in nuclear-fueled systems and the use of smaller, more efficient spacecraft are expected to substantially reduce the amount of nuclear fuel carried on future deep space missions; and (15) thus, NASA and JPL officials believe these future missions may pose less of a health risk than current and prior missions that have launched radioisotope thermoelectric generators into space.
The structure of DHS's acquisition function creates ambiguity about who is accountable for acquisition decisions. A common theme in our work on DHS's acquisition management has been the department's struggle from the outset to provide adequate support for its mission components and resources for departmentwide oversight. Of the 22 components that initially joined DHS from other agencies, 7 came with their own procurement support. In January 2004, a year after the department was created, an eighth office, the Office of Procurement Operations, was created to provide support to a variety of DHS entities. To improve oversight, in December 2005, CPO established a departmentwide acquisition oversight program, designed to provide comprehensive insight into each component's acquisitions and disseminate successful acquisition management approaches throughout DHS. DHS has set a goal of integrating the acquisition function more broadly across the department. Prior GAO work has shown that implementing acquisition effectively across a large federal organization requires an integrated structure with standardized policies and processes, the appropriate placement of the acquisition function within the department, leadership that fosters good acquisition practices, and a general framework that delineates the key phases along the path for a major acquisition. An effective acquisition organization has in place knowledgeable personnel who work together to meet cost, quality, and timeliness goals while adhering to guidelines and standards for federal acquisition. DHS, however, relies on dual accountability and collaboration between the CPO and the heads of DHS's components. 
The October 2004 management directive for its acquisition line of business--the department's principal guidance for leading, governing, integrating, and managing the acquisition function--allows managers from each component organization to commit resources to training, development, and certification of acquisition professionals. It also highlights the CPO's broad authority, including management, administration, and oversight of departmentwide acquisition. However, we have reported that the directive may not achieve its goal of creating an integrated acquisition organization because it creates unclear working relationships between the CPO and the heads of DHS components. For example, some of the duties delegated to the CPO have also been given to the heads of DHS's components, such as recruiting and selecting key acquisition officials at the components, and providing appropriate resources to support the CPO's initiatives. Accountability for acquisitions is further complicated because, according to DHS, the Coast Guard and Secret Service were exempted from its acquisition management directive because of DHS's interpretation of the Homeland Security Act. We have questioned this exemption, and recently CPO officials have told us that they are working to revise the directive to make it clear that the Coast Guard and Secret Service are not exempt. Furthermore, for major investments--those exceeding $50 million--accountability, visibility, and oversight are shared among the CPO, the Chief Financial Officer, the Chief Information Officer, and other senior management. Recently, the DHS Inspector General's 2007 semiannual report stated an integrated acquisition system still does not exist, but noted that the atmosphere for collaboration between DHS and its component agencies on acquisition matters has improved. In addition, our work and the work of the DHS Inspector General have found acquisition workforce challenges across the department. 
In 2005, we reported on disparities in the staffing levels and workload among the component procurement offices. We recommended that DHS conduct a departmentwide assessment of the number of contracting staff, and if a workload imbalance were to be found, take steps to correct it by realigning resources. In 2006, DHS reported significant progress in providing staff for the component contracting offices, though much work remained to fill the positions with qualified, trained acquisition professionals. DHS has established a goal of aligning procurement staffing levels with contract spending at its various components by the last quarter of fiscal year 2009. Staffing of the CPO Office also has been a concern, but recent progress has been made. According to CPO officials, their small staff faces the competing demands of providing acquisition support for urgent needs at the component level and conducting oversight. For example, CPO staff assisted the Federal Emergency Management Agency in contracting for the response to Gulf Coast hurricanes Katrina and Rita. As a result, they needed to focus their efforts on procurement execution rather than oversight. In 2005, we recommended that the Secretary of Homeland Security provide the CPO with sufficient resources to effectively oversee the department's acquisitions. In 2006, we reported that the Secretary had supported an increase of 25 positions for the CPO to improve acquisition management and oversight. DHS stated that these additional personnel will significantly contribute to continuing improvement in the DHS acquisition and contracting enterprise. To follow up on some of these efforts, we plan to conduct additional work on DHS acquisition workforce issues in the near future. Our prior work has shown that in a highly functioning acquisition organization, the CPO is in a position to oversee compliance by implementing strong oversight mechanisms. 
Accordingly, in December 2005, the CPO established a departmentwide acquisition oversight program, designed to provide comprehensive insight into each component's acquisition programs and disseminate successful acquisition management approaches throughout DHS. The program is based in part on elements essential to an effective, efficient, and accountable acquisition process: organizational alignment and leadership, policies and processes, financial accountability, acquisition workforce, and knowledge management and information systems. The program includes four recurring reviews, as shown in table 1. In September 2006, we reported that the CPO's limited staff resources had delayed the oversight program's implementation, but the program is now well under way, and DHS plans to implement the full program in fiscal year 2007. Recently, the CPO has made progress in increasing staff to authorized levels, and as part of the department's fiscal year 2008 appropriation request, the CPO is seeking three additional staff, for a total of 13 oversight positions for this program. We plan to report on the program later this month. While this program is a positive step, we have reported that the CPO lacks the authority needed to ensure that the department's components comply with its procurement policies and procedures, such as the acquisition oversight program. We reported in September 2006 that the CPO's ability to effectively oversee the department's acquisitions and manage risks is limited, and we continue to believe that the CPO's lack of authority to achieve the department's acquisition goals is of concern. In 2003, DHS put in place an investment review process to help protect its major, complex investments. The investment review process is intended to reduce risk associated with these investments and increase the chances for successful outcomes in terms of cost, schedule, and performance. 
In March 2005, we reported that, in establishing this process, DHS had adopted a number of acquisition best practices that, if applied consistently, could help increase the chance for successful outcomes. However, we noted that incorporating additional program reviews and knowledge deliverables into the process could better position DHS to make well-informed decisions on its major, complex investments. Specifically, we noted that the process did not include two critical management reviews that would help ensure that (1) resources match customer needs prior to beginning a major acquisition and (2) program designs perform as expected before moving to production. We also noted that the review process did not fully address how program managers are to conduct effective contractor tracking and oversight. The investment review process is still under revision, and the department's performance and accountability report for fiscal year 2006 stated that DHS will incorporate changes to the process by the first quarter of fiscal year 2008. Our best practices work shows that successful investments reduce risk by ensuring that high levels of knowledge are achieved at these key points of development. We have found that investments that were not reviewed at the appropriate points faced problems--such as redesign--that resulted in cost increases and schedule delays. Concerns have been raised about the effectiveness of the review process for large investments at DHS. For example, in November 2006, the DHS Inspector General reported on Customs and Border Protection's Secure Border Initiative, noting that the department's existing investment oversight processes were sidelined in the urgent pursuit of SBInet's aggressive schedule. The department's investment review board and joint requirements council provide for deliberative processes to obtain the counsel of functional stakeholders. 
However, the DHS Inspector General reported that for SBInet, these prescribed processes were bypassed and key decisions about the scope of the program and the acquisition strategy were made without rigorous review and analysis or transparency. The department has since announced plans to complete these reviews to ensure the program is on the right track. To quickly get the department up and running and to obtain necessary expertise, DHS has relied extensively on other agencies' and its own contracts for a broad range of mission-related services and complex acquisitions. Governmentwide, increasing reliance on contractors has been a longstanding concern. In 2006, government, industry, and academic participants in a GAO forum on federal acquisition challenges and opportunities noted that many agencies rely extensively on contractors to carry out their basic missions. The growing complexity of contracting for technically difficult and sophisticated services increases challenges in terms of setting appropriate requirements and effectively monitoring contractor performance. With the increased reliance on contractors comes the need for an appropriate level of oversight and management attention to contracting for services and major systems. Our work to date has found that DHS faces challenges in managing services acquisitions through interagency contracting--a process by which agencies can use another agency's contracting services or existing contracts, often for a fee. In 2005, DHS spent over $6.5 billion on interagency contracts. We found that DHS did not systematically monitor or assess its use of interagency contracts to determine whether this method provides good outcomes for the department. Although interagency contracts can provide the advantages of timeliness and efficiency, use of these types of vehicles can also pose risks if they are not properly managed. GAO designated management of interagency contracting a governmentwide high-risk area in 2005. 
A number of factors can make these types of contracts high risk, including their use by some agencies that have limited expertise with this contracting method and their contribution to a much more complex procurement environment in which accountability has not always been clearly established. In an interagency contracting arrangement, both the agency that holds the contract and the agency that makes purchases against it share responsibility for properly managing its use. However, these shared responsibilities often have not been well defined. As a result, our work and that of some agency inspectors general have found cases in which interagency contracting has not been well managed to ensure that the government is getting good value. Government agencies, including DHS components, have turned to a systems integrator in situations such as when they believe they do not have the in-house capability to design, develop, and manage complex acquisitions. This arrangement creates an inherent risk, as the contractor is given more discretion to make certain program decisions. Along with this greater discretion comes the need for more government oversight and an even greater need to develop well-defined outcomes at the outset. Our reviews of the Coast Guard's Deepwater program have found that the Coast Guard had not effectively managed the program or overseen the systems integrator. Specifically, we expressed concerns and made a number of recommendations to improve the program in three areas: program management, contractor accountability, and cost control through competition. While the Coast Guard took some actions in response to some of our concerns, it has recently announced a series of additional steps to address problems with the Deepwater program, including taking on more program management responsibilities from the systems integrator. We also have ongoing work reviewing other aspects of DHS acquisition management. 
For example, we are reviewing DHS's contracts that closely support inherently governmental functions and the level of oversight given to these contracts. Federal procurement regulation and policy contain special requirements for overseeing service contracts that have the potential to influence the authority, accountability, and responsibilities of government officials. Agencies are required to provide greater scrutiny of these service contracts and an enhanced degree of management oversight--including assigning a sufficient number of qualified government employees to provide oversight--to better ensure that contractors do not perform inherently governmental functions. The risks associated with contracting for services that closely support the performance of inherently governmental functions are longstanding governmentwide concerns. We are also reviewing oversight issues related to DHS's use of performance-based services acquisitions. If this acquisition method is not appropriately planned and structured, there is an increased risk that the government may receive products or services that exceed cost estimates, are delivered late, or are of unacceptable quality. Since DHS was established in 2003, it has been challenged to integrate 22 separate federal agencies and organizations with multiple missions, values, and cultures into one cabinet-level department. Due to the complexity of its organization, DHS is likely to continue to face challenges in integrating the acquisition functions of its components and overseeing their acquisitions--particularly those involving large and complex investments. Given the size of DHS and the scope of its acquisitions, we are continuing to assess the department's acquisition oversight processes and procedures in ongoing work. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. 
For further information regarding this testimony, please contact John Hutton at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this product. Other individuals making key contributions to this testimony were Amelia Shachoy, Assistant Director; Tatiana Winger; William Russell; Heddi Nieuwsma; Karen Sloan; and Sylvia Schatz. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2006, the Department of Homeland Security (DHS) obligated $15.6 billion to support its broad and complex acquisition portfolio. Tasked with integrating 22 separate federal agencies and organizations into one cabinet-level department, DHS has been working to create an integrated acquisition organization while addressing its ongoing mission requirements and responding to natural disasters and other emergencies. Given the magnitude of this challenge, GAO designated the establishment of the department and its transformation as high risk in January 2003. This testimony discusses DHS's (1) challenges in creating an integrated acquisition function, (2) investment review process, and (3) reliance on contracting for critical needs. This testimony is based primarily on prior GAO reports and testimonies. The structure of DHS's acquisition function creates ambiguity about who is accountable for acquisition decisions because it depends on a system of dual accountability and collaboration between the Chief Procurement Officer (CPO) and the component heads. Further, a common theme in GAO's work on acquisition management has been DHS's struggle to provide adequate support for its mission components and resources for departmentwide oversight. In 2006, DHS reported significant progress in staffing for the components and the CPO, though much work remained to fill the positions. In addition, DHS has established an acquisition oversight program, designed to provide the CPO comprehensive insight into each component's acquisition programs and disseminate successful acquisition management approaches departmentwide. However, GAO continues to be concerned that the CPO may not have sufficient authority to effectively oversee the department's acquisitions. In 2003, DHS put in place an investment review process to help protect its major, complex investments. 
In 2005, GAO reported that this process adopted many acquisition best practices that, if applied consistently, could help increase the chances for successful outcomes. However, GAO noted that incorporating additional program reviews and knowledge deliverables into the process could better position DHS to make well-informed decisions. Concerns have been raised about how the investment review process has been used to oversee the department's largest acquisitions, and DHS plans to revise the process. DHS has contracted extensively for a broad range of services and complex acquisitions. The growing complexity of contracting for technically difficult and sophisticated services increases challenges in terms of setting appropriate requirements and effectively monitoring contractor performance. However, DHS has been challenged to provide the appropriate level of oversight and management attention to its contracting for services and major systems.
FDA is responsible for overseeing the safety and effectiveness of medical devices that are marketed in the United States, whether manufactured in domestic or foreign establishments. All establishments that manufacture medical devices for marketing in the United States must register with FDA. As part of its efforts to ensure the safety, effectiveness, and quality of medical devices, FDA is responsible for inspecting certain domestic and foreign establishments to ensure that they meet manufacturing standards established in FDA's quality system regulation. FDA does not have authority to require foreign establishments to allow the agency to inspect their facilities. However, FDA has the authority to prevent the importation of products manufactured at establishments that refuse to allow an FDA inspection. Unlike food, for which FDA primarily relies on inspections at the border, physical inspection of manufacturing establishments is a critical mechanism in FDA's process to ensure that medical devices and drugs are safe and effective and that manufacturers adhere to good manufacturing practices. Within FDA, CDRH assures the safety and effectiveness of medical devices. Among other things, CDRH works with ORA, which conducts inspections of both domestic and foreign establishments to ensure that devices are produced in conformance with federal statutes and regulations, including the quality system regulation. FDA may conduct inspections before and after medical devices are approved or otherwise cleared to be marketed in the United States. Premarket inspections are conducted before FDA will approve U.S. marketing of a new medical device that is not substantially equivalent to one that is already on the market. Premarket inspections primarily assess manufacturing facilities, methods, and controls and may verify pertinent records. 
Postmarket inspections are conducted after a medical device has been approved or otherwise cleared to be marketed in the United States and include several types of inspections: (1) Quality system inspections are conducted to assess compliance with applicable FDA regulations, including the quality system regulation to ensure good manufacturing practices and the regulation requiring reporting of adverse events. These inspections may be comprehensive or abbreviated, which differ in the scope of inspectional activity. Comprehensive postmarket inspections assess multiple aspects of the manufacturer's quality system, including management controls, design controls, corrective and preventative actions, and production and process controls. Abbreviated postmarket inspections assess only some of these aspects, but always assess corrective and preventative actions. (2) For-cause and compliance follow-up inspections are initiated in response to specific information that raises questions or problems associated with a particular establishment. (3) Postmarket audit inspections are conducted within 8 to 12 months of a premarket application's approval to examine any changes in the design, manufacturing process, or quality assurance systems. FDA determines which establishments to inspect using a risk-based strategy. High-priority inspections include premarket approval inspections for class III devices, for-cause inspections, inspections of establishments that have had a high frequency of device recalls, and inspections of other devices and manufacturers FDA considers high risk. The establishment's inspection history may also be considered. A provision in FDAAA may assist FDA in making decisions about which establishments to inspect because it authorizes the agency to accept voluntary submissions of audit reports addressing manufacturers' conformance with internationally established standards for the purpose of setting risk-based inspectional priorities. 
FDA's programs for domestic and foreign inspections by accredited third parties provide an alternative to the traditional FDA-conducted comprehensive postmarket quality system inspection for eligible manufacturers of class II and III medical devices. MDUFMA required FDA to accredit third persons--which are organizations--to conduct inspections of certain establishments. In describing this requirement, the House of Representatives Committee on Energy and Commerce noted that some manufacturers have faced an increase in the number of inspections required by foreign countries, and that the number of inspections could be reduced if the manufacturers could contract with a third-party organization to conduct a single inspection that would satisfy the requirements of both FDA and foreign countries. Manufacturers that meet eligibility requirements may request a postmarket inspection by an FDA-accredited organization. The eligibility criteria for requesting an inspection of an establishment by an accredited organization include that the manufacturer markets (or intends to market) a medical device in a foreign country and the establishment to be inspected must not have received warnings for significant deviations from compliance requirements on its last inspection. MDUFMA also established minimum requirements for organizations to be accredited to conduct third-party inspections, including protecting against financial conflicts of interest and ensuring the competence of the organization to conduct inspections. FDA developed a training program for inspectors from accredited organizations that involves both formal classroom training and completion of three joint training inspections with FDA. Each individual inspector from an accredited organization must complete all training requirements successfully before being cleared to conduct independent inspections. 
FDA relies on manufacturers to volunteer to host these joint inspections, which count as FDA postmarket quality system inspections. A manufacturer that is cleared to have an inspection by an accredited third party enters an agreement with the approved accredited organization and schedules an inspection. Once the accredited organization completes its inspection, it prepares a report and submits it to FDA, which makes the final assessment of compliance with applicable requirements. FDAAA added a requirement that accredited organizations notify FDA of any withdrawal, suspension, restriction, or expiration of a certificate of conformance with quality systems standards (such as those established by the International Organization for Standardization) for establishments they inspected for FDA. In addition to the Accredited Persons Inspection Program, FDA has a second program for accredited third-party inspections of medical device establishments. On September 7, 2006, FDA and Health Canada announced the establishment of PMAP. This pilot program was designed to allow qualified third-party organizations to perform a single inspection that would meet the regulatory requirements of both the United States and Canada. The third-party organizations eligible to conduct inspections through PMAP are those that FDA accredited for its Accredited Persons Inspection Program (and that completed all required training for that program) and that are also authorized to conduct inspections of medical device establishments for Health Canada. To be eligible to have a third-party inspection through PMAP, manufacturers must meet all criteria established for the Accredited Persons Inspection Program. As with the Accredited Persons Inspection Program, manufacturers must apply to participate and be willing to pay an accredited organization to conduct the inspection. FDA relies on multiple databases to manage its program for inspecting medical device manufacturing establishments. 
DRLS contains information on domestic and foreign medical device establishments that have registered with FDA. Establishments that are involved in the manufacture of medical devices intended for commercial distribution in the United States are required to register annually with FDA. These establishments provide information to FDA, such as establishment name and address and the medical devices they manufacture. As of October 1, 2007, establishments are required to register electronically through FDA's Unified Registration and Listing System and certain medical device establishments pay an annual establishment registration fee, which in fiscal year 2008 is $1,706. OASIS contains information on medical devices and other FDA-regulated products imported into the United States, including information on the establishment that manufactured the medical device. The information in OASIS is automatically generated from data managed by U.S. Customs and Border Protection, which are originally entered by customs brokers based on the information available from the importer. FACTS contains information on FDA's inspections, including those of domestic and foreign medical device establishments. FDA investigators enter information into FACTS following completion of an inspection. According to FDA data, more than 23,600 establishments that manufacture medical devices were registered as of September 2007, of which 10,600 reported that they manufacture class II or III medical devices. More than half--about 5,600--of these establishments were located in the United States. As of September 2007, there were more registered establishments in China and Germany reporting that they manufacture class II or III medical devices than in any other foreign countries. Canada, Taiwan, and the United Kingdom also had a large number of registered establishments. (See fig. 1.) Registered foreign establishments reported that they manufacture a variety of class II and III medical devices for the U.S. market. 
For example, common class III medical devices included coronary stents, pacemakers, and contact lenses. FDA has not met the statutory requirement to inspect domestic establishments manufacturing class II or III medical devices every 2 years. The agency conducted relatively few inspections of foreign establishments. The databases that provide FDA with data about the number of foreign establishments manufacturing medical devices for the U.S. market contain inaccuracies. In addition, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA--both in human resources and logistics. From fiscal year 2002 through fiscal year 2007, FDA primarily inspected establishments located in the United States, where more than half of the 10,600 registered establishments that reported manufacturing class II or III medical devices are located. In contrast, FDA inspected relatively few foreign medical device establishments. During this period, FDA conducted an average of 1,494 domestic and 247 foreign establishment inspections each year. This suggests that each year FDA inspects about 27 percent of registered domestic establishments that reported manufacturing class II or class III medical devices and about 5 percent of such foreign establishments. The inspected establishments were in the United States and 44 foreign countries. Of the foreign inspections, more than two-thirds were in 10 countries. Most of the countries with the highest number of inspections were also among those with the largest number of registered establishments that reported manufacturing class II or III medical devices. The lowest rate of inspections in these 10 countries was in China, where 64 inspections were conducted in this 6-year period and almost 700 establishments were registered. (See table 1.) 
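The approximate inspection rates cited above follow directly from the figures in this statement (an average of 1,494 domestic and 247 foreign inspections per year, against roughly 5,600 domestic and 5,000 foreign registered class II or III establishments). As a quick back-of-the-envelope check, for illustration only:

```python
# Check of the annual inspection rates implied by the figures in this statement:
# average inspections per year (FY2002-FY2007) over registered class II/III
# establishment counts (as of September 2007).
registered = {"domestic": 5616, "foreign": 4983}
avg_annual_inspections = {"domestic": 1494, "foreign": 247}

for location in registered:
    rate = avg_annual_inspections[location] / registered[location]
    print(f"{location}: about {rate:.0%} of registered establishments inspected per year")
# Prints roughly 27% for domestic and 5% for foreign establishments,
# matching the percentages stated above.
```

At these rates, the average interval between inspections of any given foreign establishment is on the order of two decades, consistent with the estimates FDA officials provide below for class II foreign manufacturers.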
Despite its focus on domestic inspections, FDA has not met the statutory requirement to inspect domestic establishments manufacturing class II or III medical devices every 2 years. For domestic establishments, FDA officials estimated that, on average, the agency inspects class II manufacturers every 5 years and class III manufacturers every 3 years. For foreign establishments--for which there is no comparable inspection requirement--FDA officials estimated that the agency inspects class II manufacturers every 27 years and class III manufacturers every 6 years. FDA's inspections of medical device establishments, both domestic and foreign, are primarily postmarket inspections. While premarket inspections are generally FDA's highest priority, relatively few have to be performed in any given year. Therefore, FDA focuses its resources on postmarket inspections. From fiscal year 2002 through fiscal year 2007, 95 percent of the 8,962 domestic establishment inspections and 89 percent of the 1,481 foreign establishment inspections were for postmarket purposes. (See fig. 2.) FDA's databases on registration and imported products provide divergent estimates regarding the number of foreign medical device manufacturing establishments. DRLS provides FDA with information about domestic and foreign medical device establishments and the products they manufacture for the U.S. market. According to DRLS, as of September 2007, 5,616 domestic and 4,983 foreign establishments that reported manufacturing a class II or III medical device for the U.S. market had registered with FDA. However, these data contain inaccuracies because establishments may register with FDA but not actually manufacture a medical device or may manufacture a medical device that is not marketed in the United States. FDA officials told us that their more frequent inspections of domestic establishments allow them to more easily update information about whether a domestic establishment is subject to inspection. 
In addition to DRLS, FDA obtains information on foreign establishments from OASIS, which tracks the import of medical devices. While not intended to provide a count of establishments, OASIS does contain information about the medical devices actually being imported into the United States and the establishments manufacturing them. However, inaccuracies in OASIS prevent FDA from using it to develop a list of establishments subject to inspection. OASIS contains duplicate records for a single establishment because of inaccurate data entry by customs brokers at the border. According to OASIS, in fiscal year 2007, there were as many as 22,008 foreign establishments that manufactured class II medical devices for the U.S. market and 3,575 foreign establishments that manufactured class III medical devices for the U.S. market. Despite the divergent estimates of foreign establishments generated by DRLS and OASIS, FDA does not routinely verify the data within each database. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, the databases cannot exchange information to be compared electronically and any comparisons are done manually. Efforts are underway that could improve FDA's databases. FDA officials suggested that, because manufacturers are now required to pay an annual establishment registration fee, manufacturers may be more concerned about the accuracy of the registration data they submit. They also told us that, because of the registration fee, manufacturers may be less likely to register if they do not actually manufacture a medical device for the U.S. market. In addition, FDA officials stated that the agency is pursuing various initiatives to try to address the inaccuracies in OASIS, such as providing a unique identifier for each foreign establishment to reduce duplicate entries for individual establishments. 
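The duplicate-record problem described above can be illustrated with a small sketch. The establishment names, addresses, and normalization rules below are hypothetical examples, not FDA's actual data or matching logic; the point is only that inconsistent data entry inflates raw record counts, while a normalized key (or, better, a unique identifier assigned at registration) collapses duplicates:

```python
# Hypothetical illustration: inconsistent data entry creates duplicate records
# for one establishment; a crude normalized key collapses them.
records = [
    {"name": "Acme Devices Ltd",  "address": "1 Harbour Rd, Shenzhen"},
    {"name": "ACME DEVICES LTD.", "address": "1 Harbour Road, Shenzhen"},
    {"name": "Beta Medical GmbH", "address": "5 Hauptstrasse, Berlin"},
]

def normalize(rec):
    # Case-fold, drop punctuation, unify one common abbreviation. Real entity
    # resolution needs far more care; this is only a sketch.
    key = f"{rec['name']} {rec['address']}".lower()
    for old, new in ((".", ""), (",", ""), ("road", "rd")):
        key = key.replace(old, new)
    return " ".join(key.split())

distinct = {normalize(r) for r in records}
print(f"{len(records)} raw records -> {len(distinct)} distinct establishments")
# The first two records collapse to one establishment: 3 raw -> 2 distinct.
```

A unique identifier assigned once per establishment, as FDA officials describe pursuing for OASIS, avoids this matching problem entirely by making the key explicit rather than inferred from free-text fields.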
Inspections of foreign establishments pose unique challenges to FDA--both in human resources and logistics. FDA does not have a dedicated cadre of investigators who conduct only foreign medical device establishment inspections; the staff who inspect foreign establishments also inspect domestic establishments. Among those qualified to inspect foreign establishments, FDA relies on staff to volunteer to conduct inspections. FDA officials told us that it is difficult to recruit investigators to voluntarily travel to certain countries. However, they added that if the agency could not find an individual to volunteer for a foreign inspection trip, it would mandate the travel. Logistically, foreign medical device establishment inspections are difficult to extend even if problems are identified, because the trips are scheduled in advance. These inspections are also logistically challenging because investigators do not receive independent translation support from FDA or the State Department and may rely on English-speaking employees of the inspected establishment or the establishment's U.S. agent to translate during an inspection. Few inspections of medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs--the Accredited Persons Inspection Program and PMAP. FDAAA specified several changes to the requirements for inspections by accredited third parties that could result in increased participation by manufacturers. Few inspections have been conducted through FDA's Accredited Persons Inspection Program since March 11, 2004--the date when FDA first cleared an accredited organization to conduct independent inspections. 
Through January 11, 2008, five inspections had been conducted independently by accredited organizations (two inspections of domestic establishments and three of foreign establishments), an increase of three since we reported on this program one year ago. As of January 11, 2008, 16 third-party organizations were accredited, and individuals from 8 of these organizations had completed FDA's training requirements and been cleared to conduct independent inspections. As of January 8, 2008, FDA and accredited organizations had conducted 44 joint training inspections. Fewer manufacturers have volunteered to host training inspections than were needed for all of the accredited organizations to complete their training. Moreover, scheduling these joint training inspections has been difficult. FDA officials told us that, when appropriate, staff are instructed to ask manufacturers to host a joint training inspection at the time they notify the manufacturers of a pending inspection. FDA schedules inspections a relatively short time prior to an actual inspection, and as we reported in January 2007, some accredited organizations have not been able to participate because they had prior commitments. As we reported in January 2007, manufacturers' decisions to request an inspection by an accredited organization might be influenced by both potential incentives and disincentives. According to FDA officials and representatives of affected entities, potential incentives to participation include the opportunity to reduce the number of inspections conducted to meet FDA and other countries' requirements. For example, one inspection conducted by an accredited organization was a single inspection designed to meet the requirements of FDA, the European Union, and Canada. 
Another potential incentive mentioned by FDA officials and representatives of affected entities is the opportunity to control the scheduling of the inspection by an accredited organization by working with the accredited organization. FDA officials and representatives of affected entities also mentioned potential disincentives to having an inspection by an accredited organization. These potential disincentives include bearing the cost for the inspection, doubts about whether accredited organizations can cover multiple requirements in a single inspection, and uncertainty about the potential consequences of an inspection that otherwise may not occur in the near future--consequences that could involve regulatory action. Changes specified by FDAAA have the potential to eliminate certain obstacles to manufacturers' participation in FDA's programs for inspections by accredited third parties that were associated with manufacturers' eligibility. For example, an eligibility requirement that foreign establishments be periodically inspected by FDA was eliminated. Representatives of the two organizations that represent medical device manufacturers with whom we spoke about FDAAA told us that the changes in eligibility requirements could eliminate certain obstacles and therefore potentially increase their participation. These representatives also noted that key incentives and disincentives to manufacturers' participation remain. FDA officials told us that they are currently revising their guidance to industry in light of FDAAA and expect to issue the revised guidance during fiscal year 2008. It is too soon to tell what impact these changes will have on manufacturers' participation. FDA officials acknowledged that manufacturers' participation in the Accredited Persons Inspection Program has been limited. In December 2007, FDA established a working group to assess the successes and failures of this program and to identify ways to increase participation. 
Representatives of the two organizations that represent medical device manufacturers with whom we recently spoke stated that they believe manufacturers remain interested in the Accredited Persons Inspection Program. The representative of one large, global manufacturer of medical devices told us that it is in the process of arranging to have 20 of its domestic and foreign device manufacturing establishments inspected by accredited third parties. As of January 11, 2008, two inspections, both of domestic establishments, had been conducted through PMAP, FDA's second program for inspections by accredited third parties. Although it is too soon to tell what the benefits of PMAP will be, the program is more limited than the Accredited Persons Inspection Program and may pose additional disincentives to participation by both manufacturers and accredited organizations. Specifically, inspections through PMAP would be designed to meet the requirements of the United States and Canada, whereas inspections conducted through the Accredited Persons Inspection Program could be designed to meet the requirements of other countries. In addition, two of the five representatives of affected entities noted that in contrast to inspections conducted through the Accredited Persons Inspection Program, inspections conducted through PMAP could undergo additional review by Health Canada. Health Canada will review inspection reports submitted through this pilot program to ensure they meet its standards. This extra review poses a greater risk of unexpected outcomes for the manufacturer and the accredited organization, which could be a disincentive to participation in PMAP that is not present with the Accredited Persons Inspection Program. Americans depend on FDA to ensure the safety and effectiveness of medical products, including medical devices, manufactured throughout the world. 
However, our findings regarding inspections of medical device manufacturers indicate weaknesses that mirror those presented in our November 2007 testimony regarding inspections of foreign drug manufacturers. In addition, they are consistent with the FDA Science Board's findings that FDA's ability to fulfill its regulatory responsibilities is jeopardized, in part, by information technology and human resources challenges. We recognize that FDA has expressed the intention to improve its data management, but it is too early to tell whether the intended changes will ultimately enhance the agency's ability to manage its inspection programs. We and others have suggested that the use of accredited third parties could improve FDA's ability to meet its inspection responsibilities. However, the implementation of its programs for inspecting medical device manufacturers has resulted in little progress. To date, its programs for inspections by accredited third parties have not assisted FDA in meeting its regulatory responsibilities, nor have they provided a rapid or substantial increase in the number of inspections performed by these organizations, as originally intended. Although recent statutory changes to the requirements for inspections by accredited third parties may encourage greater participation in these programs, the lack of meaningful progress raises questions about the practicality and effectiveness of establishing similar programs that rely on third parties to quickly help FDA fulfill other responsibilities. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the subcommittee may have at this time. For further information about this testimony, please contact Marcia Crosse at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Geraldine Redican-Bigott, Assistant Director; Kristen Joan Anderson; Katherine Clark; Robert Copeland; William Hadley; Cathy Hamann; Mollie Hertel; Julian Klazkin; Lisa Motley; Daniel Ries; and Suzanne Worth made key contributions to this testimony. In congressional testimony in November 2007, we presented our preliminary findings on the Food and Drug Administration's (FDA) program for inspecting foreign drug manufacturers. We found that (1) FDA's effectiveness in managing the foreign drug inspection program continued to be hindered by weaknesses in its databases; (2) FDA inspected relatively few foreign establishments; and (3) the foreign inspection process involved unique circumstances that were not encountered domestically. Our preliminary findings indicated that more than 9 years after we issued our last report on FDA's foreign drug inspection program, FDA's effectiveness in managing this program continued to be hindered by weaknesses in its databases. FDA did not know how many foreign establishments were subject to inspection. Instead of maintaining a list of such establishments, FDA relied on information from several databases that were not designed for this purpose. One of these databases contained information on foreign establishments that had registered to market drugs in the United States, while another contained information on drugs imported into the United States. One database indicated about 3,000 foreign establishments could have been subject to inspection in fiscal year 2007, while another indicated that about 6,800 foreign establishments could have been subject to inspection in that year. Despite the divergent estimates of foreign establishments subject to inspection generated by these two databases, FDA did not verify the data within each database. For example, the agency did not routinely confirm that a registered establishment actually manufactured a drug for the U.S. market. 
However, FDA used these data to generate a list of 3,249 foreign establishments from which it prioritized establishments for inspection. Because FDA was not certain how many foreign drug establishments were actually subject to inspection, the percentage of such establishments that had been inspected could not be calculated with certainty. We found that FDA inspected relatively few foreign drug establishments, as shown in table 2. Using the list of 3,249 foreign drug establishments from which FDA prioritized establishments for inspection, we found that the agency may inspect about 7 percent of foreign drug establishments in a given year. At this rate, it would take FDA more than 13 years to inspect each foreign drug establishment on this list once, assuming that no additional establishments are subject to inspection. FDA's data indicated that some foreign drug manufacturers had not received an inspection, but FDA could not provide the exact number of foreign drug establishments that had never been inspected. Most of the foreign drug inspections were conducted as part of processing a new drug application or an abbreviated new drug application, rather than as current good manufacturing practices (GMP) surveillance inspections, which are used to monitor the quality of marketed drugs. FDA used a risk-based process, based in part on data from its registration and import databases, to develop a prioritized list of foreign drug establishments for GMP surveillance inspections in fiscal year 2007. According to FDA, about 30 such inspections were completed in fiscal year 2007, and at least 50 were targeted for inspection in fiscal year 2008. Further, inaccuracies in the data on which this risk-based process depended limited its effectiveness. Finally, the very nature of the foreign drug inspection process involved unique circumstances that were not encountered domestically. 
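The back-of-envelope arithmetic behind the "more than 13 years" estimate can be checked from the figures cited above (a sketch only; the 7 percent annual inspection rate is an approximation, not an exact FDA statistic):

```python
# Back-of-envelope check of the inspection-backlog arithmetic cited above.
# Figures from the testimony: 3,249 foreign drug establishments on FDA's
# prioritized list, inspected at a rate of about 7 percent per year.
establishments = 3249
annual_rate = 0.07  # ~7% of the list inspected in a given year

inspections_per_year = establishments * annual_rate
years_to_cover_list = 1 / annual_rate  # years to inspect each establishment once

print(round(inspections_per_year))      # 227
print(round(years_to_cover_list, 1))    # 14.3 -- i.e., "more than 13 years"
```

This assumes no establishments are added to the list, as the testimony notes; any growth in the list would lengthen the cycle further.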
For example, FDA did not have a dedicated staff to conduct foreign drug inspections and relied on those inspecting domestic establishments to volunteer for foreign inspections. While FDA may conduct unannounced GMP inspections of domestic establishments, it did not arrive unannounced at foreign establishments. It also lacked the flexibility to easily extend foreign inspections if problems were encountered due to the need to adhere to an itinerary that typically involved multiple inspections in the same country. Finally, language barriers can make foreign inspections more difficult to conduct than domestic ones. FDA did not generally provide translators to its inspection teams. Instead, they may have had to rely on an English-speaking representative of the foreign establishment being inspected, rather than an independent translator.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of the Food and Drug Administration's (FDA) oversight of the safety and effectiveness of medical devices marketed in the United States, the agency inspects domestic and foreign establishments where these devices are manufactured. To help FDA address shortcomings in its inspection program, the Medical Device User Fee and Modernization Act of 2002 required FDA to accredit third parties to inspect certain establishments. In response, FDA has implemented two such voluntary programs. GAO previously reported on the status of one of these programs, citing concerns regarding its implementation and factors that may influence manufacturers' participation. (Medical Devices: Status of FDA's Program for Inspections by Accredited Organizations, GAO-07-157, January 2007.) This statement (1) assesses FDA's management of inspections of establishments--particularly those in foreign countries--manufacturing devices for the U.S. market, and (2) provides the status of FDA's programs for third-party inspections of medical device manufacturing establishments. GAO interviewed FDA officials; reviewed pertinent statutes, regulations, guidance, and reports; and analyzed information from FDA databases. GAO also updated its previous work on FDA's programs for inspections by accredited third parties. FDA has not met the statutory requirement to inspect certain domestic establishments manufacturing medical devices every 2 years, and the agency faces challenges inspecting foreign establishments. FDA primarily inspected establishments located in the United States. The agency has not met the biennial inspection requirement for domestic establishments manufacturing medical devices that FDA has classified as high risk, such as pacemakers, or medium risk, such as hearing aids. FDA officials estimated that the agency has inspected these establishments every 3 years (for high risk devices) or 5 years (for medium risk devices). 
There is no comparable requirement to inspect foreign establishments, and agency officials estimate that these establishments have been inspected every 6 years (for high risk devices) or 27 years (for medium risk devices). FDA faces challenges in managing its inspections of foreign medical device establishments. Two databases that provide FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market contain inaccuracies that create disparate estimates of establishments subject to FDA inspection. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, these databases cannot exchange information and any comparisons must be done manually. Finally, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA in human resources and logistics. Few inspections of medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs--the Accredited Persons Inspection Program and the Pilot Multi-purpose Audit Program (PMAP). From March 11, 2004--the date when FDA first cleared an accredited organization to conduct independent inspections--through January 11, 2008, five inspections have been conducted by accredited organizations through FDA's Accredited Persons Inspection Program. An incentive to participation in the program is the opportunity to reduce the number of inspections conducted to meet FDA and other countries' requirements. Disincentives include bearing the cost for the inspection, particularly when the consequences of an inspection that otherwise might not occur in the near future could involve regulatory action. The Food and Drug Administration Amendments Act of 2007 made several changes to program eligibility requirements that could result in increased participation by manufacturers. 
PMAP was established on September 7, 2006, and as of January 11, 2008, two inspections had been conducted by an accredited organization through this program, which is more limited than the Accredited Persons Inspection Program. The small number of inspections completed to date by accredited third-party organizations raises questions about the practicality and effectiveness of establishing similar programs that rely on third parties to quickly help FDA fulfill its responsibilities.
IRS founded the Problem Resolution Program (PRP) in 1976 to provide an independent means of ensuring that taxpayers' unresolved problems were promptly and properly handled. Initially, PRP units were established in IRS district offices and, in 1979, PRP was expanded to include the service centers. In late 1979, IRS created the position of Taxpayer Ombudsman to head PRP. In 1996, Congress replaced the Ombudsman's position with what is now the National Taxpayer Advocate. The goals of PRP are consistent with IRS' mission of providing quality service to taxpayers by helping them meet their tax responsibilities and by applying the tax laws fairly. PRP's first goal is to assist taxpayers who cannot get their problems resolved through normal IRS channels or who are suffering significant hardships. For example, local advocate offices can expedite tax refunds or stop enforcement actions for taxpayers experiencing significant hardships. During fiscal year 1998, PRP closed more than 300,000 cases, of which about 10 percent involved potential hardships. The second goal of PRP is to determine the causes of taxpayer problems so that systemic causes can be identified and corrected and to propose legislative changes that might help alleviate taxpayer problems. IRS commonly refers to this process as advocacy. The third goal of PRP is to represent the taxpayers' interests in the formulation of IRS' policies and procedures. IRS has a taxpayer advocate in each of its 4 regional offices and has local advocates in its 33 district offices, 30 former district offices, and 10 service centers. The National Taxpayer Advocate has responsibility for the overall management of PRP, and regional and local advocates have responsibility for managing PRP at their respective levels. The Office of the Taxpayer Advocate funds the advocate positions; the staff in advocate offices at all levels; and other resources for advocate offices. 
PRP assistance to taxpayers who cannot get their problems resolved through normal IRS channels is done by employees called caseworkers, who are not part of the Advocate's Office. They are in IRS' functional units--mainly customer service, collection, and examination--in the district offices and service centers. Most PRP resources, including caseworkers, are funded by the functions, and about 80 percent of the caseworkers report to functional managers--not local advocates. Some offices, however, had a centralized structure in which PRP casework was done by employees who were funded by the functions, but reported to the local taxpayer advocate. Formerly, regional and local advocates were selected by and reported to the director of the regional, district, or service center office where they worked. However, in response to a requirement in the IRS Restructuring and Reform Act of 1998, regional advocates are now selected by and report to the National Taxpayer Advocate or his or her designee; and local advocates are now selected by and report to regional advocates. Additionally, last October, IRS began moving to a more centralized reporting structure for the caseworkers--in which they would report to local advocates instead of functional management. IRS officially assigned those caseworkers who were already reporting to local advocates--about 20 percent of the caseworkers--to local advocate offices. In addition, IRS is developing an implementation plan to have the remaining 80 percent of the caseworker positions assigned to local advocate offices this year. IRS plans to submit budget requests that reflect these staffing changes by transferring funds for caseworkers to the Advocate's Office. During fiscal year 1998, the staffing level of the Advocate's Office increased from 428 to 584 authorized positions. Our survey showed that, as of June 1, 1998, the Advocate's Office had 508 on-board staff. 
At the same time, there were about 1,500 functional employees doing PRP casework in IRS' field offices. Advocate staff worked on, among other things, sensitive cases; cases involving taxpayer hardship; and advocacy work, such as identifying IRS procedures that cause recurring taxpayer problems. Caseworkers worked on resolving individual taxpayer problems as well as participating in some advocacy efforts. During times of high casework levels, many Advocate's Office staff are required to do casework in addition to their other duties. The first challenge facing IRS and the National Taxpayer Advocate is the need to address staffing and operational issues while ensuring the independence of the Advocate's Office. Staffing and operational issues, such as resource allocation, training, and staff selection, are commonplace in most organizations. However, dealing with these issues could prove more challenging for the Advocate's Office because of the need for PRP to be independent from the IRS operations that have been unsuccessful in resolving taxpayers' problems. Independence--actual and apparent--is important because, among other things, it helps promote taxpayer confidence in PRP. A key staffing and operational issue is developing an implementation plan for bringing all caseworkers into the Advocate's Office that includes operational mechanisms that will give PRP the potential benefits of both a reliance on the functions and a separate operation. According to IRS officials, having the caseworkers in the functions may have facilitated caseworker training and the handling of workload fluctuations; however, this arrangement may also have led to the perception that PRP was not an independent program. In addition, as we will discuss later, this organizational arrangement may have contributed to some of the other PRP staffing and operational issues. 
Another, but related, staffing and operational issue is capturing information about resource usage that advocates need to manage PRP. Some local advocates told us that the lack of control over PRP resources, including staff, made it difficult to manage PRP operations. Advocates do not know the full staffing levels or the total cost of resources devoted to PRP, because IRS does not have a standard system to track PRP resources. Instead, each function tracks its resources differently. The absence of this type of management information yields an incomplete picture of program operations, places limitations on decision-making, and hinders the identification of matters requiring management attention. In addition, having this basic program information would improve the National Taxpayer Advocate's ability to estimate the resources needed in the restructured Advocate's Office. Providing appropriate training is also an issue. It is important that caseworkers and other staff receive adequate training if they are going to be able to help taxpayers resolve their problems and effectively work on advocacy efforts. Our survey of IRS staff who were doing advocate office work showed that training has been inconsistent throughout the Advocate's Office and among PRP caseworkers. For example, as of June 1, 1998, more than half of the PRP caseworkers had not completed a formal PRP training course for their current position. Caseworkers should be trained in both functional responsibilities and PRP operations. Functional training, such as training in tax law changes, is important because resolving taxpayer problems requires that caseworkers understand the tax law affecting a particular case. Historically, because caseworkers were usually functional employees, they routinely received training in functional matters. The National Taxpayer Advocate is faced with ensuring that caseworkers continue to receive needed functional training even if they are no longer functional employees. 
In this regard, the National Taxpayer Advocate is considering whether to implement a cross-functional training program for caseworkers that would provide training in multiple IRS functions. IRS officials told us that this would broaden caseworker skills and might provide faster and more accurate service to taxpayers. Acquiring qualified PRP caseworkers has been an issue. In the past, the quality of caseworkers depended on the office and the function that assigned the caseworkers to PRP. Local advocates told us that they had no assurance that the functions would provide PRP with qualified staff. It is important for the Advocate's Office to develop mechanisms to ensure that qualified caseworkers are selected so that program goals are met. Once the Advocate's Office is no longer dependent upon the functions for its staff, it can implement a competitive selection process for PRP caseworkers that should help ensure that it gets the staff it needs. As IRS restructures the Advocate's Office, it must consider how best to handle workload fluctuations. Over the past 18 months, the Advocate's Office and PRP's workloads have increased. Factors that have affected and could continue to affect workload include increased media attention, the introduction of a toll-free telephone number for taxpayers to call PRP, and Problem Solving Days. Historically, PRP has relied on the functions to provide additional staff to cover workload increases. However, as the office is moving toward a structure that would place all caseworkers in the Advocate's Office, this source of additional caseworkers may no longer be available. Many local advocates told us that it would be difficult to handle workload fluctuations without the traditional ability to obtain additional caseworkers from functional units. Workload increases may also make it necessary for the Advocate's Office to decide which cases to address with PRP resources. 
That is, some taxpayers who seek help from PRP may have to be referred to other IRS offices. Local advocates told us that workload increases could compromise PRP's ability to help taxpayers. For example, an increase in the number of PRP cases could negatively affect the timeliness and quality of PRP casework. IRS has three criteria for deciding what qualifies as a PRP case. The first two criteria are specific--(1) any contact by a taxpayer on the same issue at least 30 days after the initial contact and (2) no response to the taxpayer by a promised date. However, the third criterion--any contact that indicates established systems have failed to resolve the taxpayer problem, or when it is in the best interest of the taxpayer or IRS to resolve the problem in PRP--is broad enough to encompass virtually any taxpayer contact. We understand why the Advocate's Office would not want to turn away any taxpayer. However, if PRP accepts cases that could be handled elsewhere in IRS, the program could be overburdened, potentially reducing PRP's ability to help taxpayers who have nowhere else to go to resolve their problems. The second challenge facing IRS and the National Taxpayer Advocate is to strengthen advocacy efforts within the Advocate's Office. Advocacy efforts are key to the success of the Advocate's Office because the improvements they generate can reduce the number of taxpayers who ultimately require help from PRP. Ideas for advocacy efforts are generated at the national, regional, and local levels. These efforts are aimed at eliminating deficiencies in IRS' processes and procedures that cause recurring problems. Through advocacy efforts, the National Taxpayer Advocate can recommend changes to the Commissioner, IRS functions, and Congress to improve IRS operations and address provisions in law that may be causing undue burden to taxpayers. 
The Advocate's Office has taken steps to promote advocacy, such as implementing regional advocacy councils and identifying strategies to increase awareness of advocacy within IRS. The Advocate's Office has encouraged the functions to play a greater role in assisting taxpayers and improving procedures to reduce taxpayer compliance burden. For example, the Advocate's Office is working with functional management through an executive level group--called the Taxpayer Equity Task Force--to develop ways to strengthen equity and fairness in tax administration. The Task Force consists of a cross section of executives from IRS' functions and staff from the Advocate's Office. It was established to "fast-track" potential administrative changes and legislative proposals recommended to the National Taxpayer Advocate. However, the Advocate's Office staff and PRP caseworkers told us that they were spending only a minimal amount of time on advocacy. In that regard, our survey showed that as of June 1, 1998, advocates and their staffs were spending about 10 percent of their time on advocacy, and PRP caseworkers were spending less than 1 percent of their time on advocacy. Advocate office staff and PRP caseworkers told us that increased casework limited the time they could spend on advocacy. We understand the need to give priority to casework over advocacy when there is not enough time to do both. The National Taxpayer Advocate's ability to deal with these competing priorities is hampered, however, by the absence of (1) a systematic and coordinated approach for conducting advocacy efforts and (2) data with which to prioritize potential advocacy work. To provide information on advocacy to field offices, the Advocate's Office has developed a list of ongoing advocacy projects. However, the list includes only national-level projects; there is no corresponding list of local efforts, even though those efforts could be addressing issues with agencywide implications. 
Advocacy staff told us that because there is no system for sharing information on local advocacy efforts, there is some duplication of effort among field offices. Additionally, field staff told us that there is no system that provides feedback on the status of advocacy recommendations. For example, in one district, staff told us that they forwarded the same recommendations to the Advocate's Office over the course of several years but never received feedback on what actions, if any, were taken on those recommendations. The Advocate's Office also has not identified its top advocacy priorities, and it has no way to determine the actual impact of its advocacy efforts. Without such information, the National Taxpayer Advocate does not know which advocacy efforts have the greatest potential to reduce taxpayers' compliance burden. The third challenge facing IRS and the National Taxpayer Advocate is to develop performance measures to be used in managing operations and assessing the effectiveness of the Office of the Taxpayer Advocate and PRP. Developing measures of effectiveness is a difficult undertaking for any organization because it requires that management shift its focus away from descriptive information on staffing, activity levels, and tasks completed. Instead, management must focus on the impact its programs have on its customers. Currently, the Advocate's Office uses four program measures, but they do not produce all of the information needed to assess program effectiveness. The first two measures--the average length of time it takes to process a PRP case and the currency of PRP inventory--describe program activity. While these two measures are useful for some program management decisions, such as the number of staff needed at a specific office, they do not provide information on how effectively PRP is operating. 
The third measure, PRP case identification and tracking, attempts to determine if potential PRP cases are properly identified from incoming service center correspondence and subsequently worked by PRP. This measure is an important tool to help the National Taxpayer Advocate know whether PRP actually serves those taxpayers who need and qualify for help from the program. However, a recent review of this measure by IRS' Office of Internal Audit found, among other things, that inconsistent data collection for the measure could affect the integrity and reliability of the measure's results. Also, the measure is designed for use only at service centers; there is no similar measure for use at district offices, resulting in an incomplete picture of whether taxpayers are being properly identified and subsequently referred to PRP. PRP's fourth measure--designed to determine the quality of PRP casework--provides some data on program effectiveness. This measure is based on a statistically valid sample of PRP cases and provides the National Taxpayer Advocate with data on timeliness and the technical accuracy of PRP cases. Among other things, selected PRP cases are checked to determine whether the caseworker contacted the taxpayer by a promised date, whether copies of any correspondence with the taxpayer appeared to communicate issues clearly, and whether the taxpayer's problem appeared to be completely resolved. Caseworkers and advocate staff in the field told us that the quality measure was helpful because the elements that are reviewed provide a checklist for working PRP cases. According to staff, this helps ensure that most cases are worked in a similar manner in accordance with standard elements. The quality measure, however, does not have a customer satisfaction component. The Advocate's Office is piloting a method for collecting customer satisfaction data, but the results of this effort are unknown. 
Because IRS does not collect customer satisfaction data from taxpayers who contacted PRP, the National Taxpayer Advocate does not know if taxpayers are satisfied with PRP services or whether taxpayers considered their problems solved. The National Taxpayer Advocate has the formidable task of developing measures that will provide useful data for improving program performance, increasing accountability, and supporting decisionmaking. To be comprehensive, these measures should cover the full range of Advocate Office operations, including taxpayer satisfaction with PRP services and the effectiveness of advocacy efforts in reducing taxpayer compliance burden.
Pursuant to a congressional request, GAO discussed the challenges facing the Internal Revenue Service's (IRS) Office of the Taxpayer Advocate, focusing on IRS' need to: (1) address complex staffing and operational issues within the Advocate's Office; (2) strengthen efforts within the Advocate's Office to determine the causes of taxpayer problems; and (3) develop performance measures that the National Taxpayer Advocate needs to manage operations and measure effectiveness. GAO noted that: (1) IRS and the National Taxpayer Advocate need to address staffing and operational issues while ensuring the independence of the Advocate's Office; (2) a key staffing and operational issue is developing an implementation plan for bringing all caseworkers into the Advocate's Office that includes operational mechanisms that will give the Problem Resolution Program (PRP) the potential benefits of both a reliance on the functions and a separate operation; (3) another staffing and operational issue is capturing information about resource usage that advocates need to manage PRP; (4) providing appropriate training is also an issue; (5) it is important that caseworkers and other staff receive adequate training; (6) caseworkers should be trained in both functional responsibilities and PRP operations; (7) it is important for the Advocate's Office to develop mechanisms to ensure that qualified caseworkers are selected so that the program goals are met; (8) as IRS restructures the Advocate's Office, it must consider how best to handle workload fluctuations; (9) IRS and the National Taxpayer Advocate need to strengthen advocacy efforts within the Advocate's Office; (10) the Advocate's Office has taken steps to promote advocacy, such as implementing regional advocacy councils and identifying strategies to increase awareness of advocacy within IRS; (11) the Advocate's Office has encouraged the functions to play a greater role in assisting taxpayers and improving procedures to reduce taxpayer 
compliance burden; (12) IRS and the National Taxpayer Advocate need to develop performance measures to be used in managing operations and assessing the effectiveness of the Taxpayer Advocate and PRP; (13) management must focus on the impact its programs have on its customers; (14) the National Taxpayer Advocate has the formidable task of developing measures that will provide useful data for improving program performance, increasing accountability, and supporting decisionmaking; and (15) to be comprehensive, these measures should cover the full range of Advocate Office operations, including taxpayer satisfaction with PRP services and the effectiveness of advocacy efforts in reducing taxpayer compliance burden.
MACs process and pay claims, conduct prepayment and postpayment claim reviews, and provide Medicare fee-for-service billing education to providers in their jurisdictions. For each type of Medicare claim, the number of jurisdictions and the number of MACs that handle that type of claim vary. For Medicare Part A and B claims--handled by A/B MACs--there are 12 jurisdictions in which 8 MACs operated at the time of our review. Three of these MACs also processed home health and hospice claims in addition to Medicare A/B claims and therefore served as MACs for the four home health and hospice (HH+H) jurisdictions. For durable medical equipment (DME), including orthotics, prosthetics, and supplies--handled by DME MACs--there are four jurisdictions in which two MACs operated at the time of our review. A MAC can operate in more than one jurisdiction and handle more than one type of Medicare claim. For example, a MAC can operate as an A/B MAC in one jurisdiction and a DME MAC in another. (For maps of the 20 jurisdictions, see app. I.) The provider education department is part of a MAC's provider customer service program, which is intended to provide timely information, education, and training to providers on Medicare fee-for-service billing, as outlined in CMS's provider customer service program manual. The costs for MACs' provider education departments average 2.1 to 3.3 percent of their total annual costs. MACs' provider education department efforts are aimed at educating providers and their staff on Medicare program fundamentals, national and local policies and procedures, new Medicare initiatives, significant changes to the Medicare program, and issues identified through data analyses. Provider education departments provide education through a variety of methods, such as webinars, online tutorials available on-demand, 'ask-the-contractor' teleconferences, seminars at national conferences and association meetings, and website articles. 
These efforts are designed to educate many providers at the same time or individual providers via one-to-one education. Attendance at provider education department events is voluntary on the part of the providers. MACs are required to report their provider education department efforts monthly into the Provider Customer Service Program Contractor Information Database that CMS oversees and maintains. CMS also requires the MACs to submit a semi-annual Provider Customer Service Program Activities Report that summarizes and recounts Provider Customer Service Program activities, process improvements, and best practices during the reporting period. MACs' medical review departments identify areas vulnerable to improper billing, review medical records to determine whether Medicare claims are medically necessary and properly documented, conduct one-to-one education as a result of claim reviews, and provide referrals to the provider education department for further education. This department frequently works with the provider education department to conduct educational efforts focusing on correcting provider billing (see fig. 1). CMS requires each MAC to identify areas vulnerable to improper billing in its jurisdiction(s) to guide MAC efforts in medical review and provider education. Areas identified by the MACs are listed in their IPRS reports. MACs' medical review departments identify these areas by analyzing various internal and external data, such as data from CMS's Comprehensive Error Rate Testing (CERT) program, issues identified by recovery auditors, Office of Inspector General reports, comparative billing reports, and internal MAC data. The objective of the CERT program is to estimate the payment accuracy of the Medicare fee-for-service program, which results in a Medicare fee-for-service improper payment rate. Improper payment rates are computed at multiple levels: nationally, by MAC, by service, and by provider type. 
According to CMS's provider customer service program manual, MACs with improper payment rates a certain percentage above HHS's target for determining progress toward one of its Government Performance and Results Act of 1993 (GPRA) goals may be required by CMS to submit quarterly or monthly provider education department status updates. However, CMS officials told us that they have never required any MAC to submit these quarterly or monthly status updates and they are considering removing this requirement from the manual. The probe and educate reviews are a CMS strategy to determine the extent to which providers understand recent policy changes for certain areas vulnerable to improper billing and help providers improve billing in these areas through a review of a sample of claims from every provider. Under the reviews, MAC medical review departments, with varying levels of coordination with the provider education departments, sample and review a certain number of claims from each provider to determine whether the claims were billed and documented properly. These reviews are resource intensive, because they involve manual review of associated medical records by trained medical review staff. Because of the resources involved, manual reviews are done infrequently in the Medicare program, with less than 1 percent of all Medicare claims receiving manual review. Following the first round of review, providers are informed of their results and those who billed and documented a specified percentage of claims improperly are offered voluntary one-to-one education to learn why each claim was approved or denied. Providers that billed and documented a specified percentage of claims properly are excluded from subsequent rounds of review, if any. MACs may repeat this process for subsequent rounds of review using a new sample of claims. (See fig. 2.) 
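The round structure of the probe and educate reviews described above can be sketched in outline. The following Python snippet is a simplified illustration only, not CMS's actual procedure; the 90 percent threshold and the provider data are hypothetical.

```python
def run_probe_round(providers, proper_share_threshold=0.9):
    """One simplified probe-and-educate round: providers whose share of
    properly billed sampled claims meets the (hypothetical) threshold are
    excluded from subsequent rounds; the rest are offered one-to-one
    education and re-reviewed with a new sample of claims."""
    passed = {p for p, share in providers.items()
              if share >= proper_share_threshold}
    remaining = {p: s for p, s in providers.items() if p not in passed}
    return passed, remaining

# Hypothetical providers mapped to the share of sampled claims billed properly.
providers = {"Provider A": 0.95, "Provider B": 0.60, "Provider C": 0.92}
passed, remaining = run_probe_round(providers)
print(sorted(passed))     # ['Provider A', 'Provider C']
print(sorted(remaining))  # ['Provider B']
```

In this sketch, only Provider B would remain under review and receive education in the next round, mirroring how the actual reviews narrow the pool of providers after each round.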
In addition to the areas vulnerable to improper billing identified by the MACs, CMS identified two areas vulnerable to improper billing--short-stay hospital visits and home health services--and required MACs to conduct probe and educate reviews for each of these areas. The first probe and educate review examined short-stay hospital claims to determine the extent to which certain hospitals were properly applying the "two-midnight rule" that CMS implemented effective October 1, 2013. Under the rule, hospital stays for Medicare beneficiaries spanning two or more midnights should generally be billed as inpatient hospital claims. Conversely, hospital stays not expected to span at least two midnights should generally be billed as outpatient hospital claims. From October 1, 2013, through September 30, 2015, 64,776 short-stay inpatient hospital claims were reviewed by the MACs over three rounds. Beginning on October 15, 2015, quality improvement organizations began conducting these reviews at the direction of CMS. At the direction of CMS, MACs began conducting probe and educate reviews of home health agency claims on October 1, 2015, for episodes of care that occurred on or after August 1, 2015. Round 1 was completed as of September 30, 2016, and the second round began on December 15, 2016. The purpose of these reviews is to ensure that home health agencies understand the new patient certification requirements that became effective January 1, 2015. These requirements stipulate that the referring physician, also referred to as the ordering or referring provider, must certify a patient's eligibility for home health services as a condition of payment. As part of the certification, the referring provider must document that a face-to-face patient encounter occurred within a certain time frame. In addition, the patient's medical record must support the certification of eligibility. 
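The basic decision under the two-midnight rule described above can be expressed compactly. This is a deliberately simplified sketch: the actual rule turns on the physician's expectation at admission and has exceptions, none of which is modeled here.

```python
def expected_billing_category(expected_midnights: int) -> str:
    """Simplified two-midnight rule: a stay expected to span two or more
    midnights should generally be billed as an inpatient hospital claim;
    a shorter stay should generally be billed as an outpatient claim.
    (Exceptions under the actual rule are not modeled.)"""
    return "inpatient" if expected_midnights >= 2 else "outpatient"

print(expected_billing_category(3))  # inpatient
print(expected_billing_category(1))  # outpatient
```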
MAC officials state that their provider education department efforts focus on areas vulnerable to improper billing. We found that these efforts are subject to limited oversight by CMS. Additionally, CMS does not require MACs to educate referring providers on documentation requirements for DME and home health services. MAC officials told us that their provider education departments focus education on areas vulnerable to improper billing, including those they've identified and listed in their annual IPRS reports. There were 278 areas listed in the IPRS reports we reviewed, and based on our analysis, some of these areas, such as skilled nursing facilities, ambulance services, and blood glucose monitors, were identified by a majority of MACs. A detailed description of the problem areas may also be identified in these IPRS reports, as illustrated by the examples below. Part A. A majority of Part A MACs reported claims from skilled nursing facilities and inpatient rehabilitation facilities as vulnerable to improper billing. Examples of reported problem areas within skilled nursing facilities included claims for individuals using an "ultrahigh" level of therapy and episodes of care greater than 90 days. Part B. A majority of Part B MACs reported claims for evaluation and management and ambulance services as areas vulnerable to improper billing. Examples of reported problem areas within the evaluation and management category included the incorrect level of coding for office visits, hospital visits, emergency room visits, and home visits for assisted living and nursing homes. DME. A majority of DME MACs reported claims for glucose monitors, urological supplies, continuous positive airway pressure (CPAP) devices, oxygen, wheelchair options and accessories, lower limb prosthetics, and immunosuppressive drugs as areas vulnerable to improper billing. 
An example of a reported problem area with oxygen billing was that the beneficiary medical record documentation did not provide support for symptoms that might be expected to improve with oxygen therapy. HH+H. Half of the HH+H MACs reported claims for home health therapy services and home health or hospice stays that were longer than average as areas vulnerable to improper billing. An example of a reported problem area with home health therapy services included claims from home health providers reporting a high average number of therapy visits for their patients as compared to their peers within the state and the MAC's jurisdiction. CMS collects limited information on MACs' provider education department efforts that focus on areas vulnerable to improper billing. CMS officials told us that they oversee the extent to which MACs' provider education department efforts focus on areas vulnerable to improper billing by reviewing MACs' IPRS reports. Although the IPRS reports focus mainly on how the medical review departments will address the areas identified as vulnerable to improper billing, CMS's instructions to the MACs state that they should also include information on related provider education department activities or provider education department referrals. However, the IPRS reports we reviewed lacked specifics indicating how provider education department efforts focused on 74 percent of the 278 MAC-identified areas vulnerable to improper billing. We considered a provider education department effort to be specific if it included one or more of the following: the month, day, and year the event occurred or would occur; the type or number of providers attending; or a description of the event. As an example of a provider education department description that met our definition of 'specific,' one MAC reported its provider education department would conduct webinars focused on the top 5 to 10 denial reasons for oxygen equipment in the upcoming year. 
This MAC's IPRS report in our analysis listed specific provider education department efforts for all areas vulnerable to improper billing. However, 74 percent of the areas vulnerable to improper billing listed in the 14 IPRS reports we reviewed lacked specifics--48 percent of the time the provider education department efforts listed were not specific and 26 percent of the time no provider education department efforts were included. As an example of a provider education department description that was not specific, one MAC reported that the medical review department would make provider referrals to its provider education department "as needed" for inpatient hospital and rehabilitation facilities admissions, but gave no additional detail (see fig. 3). According to CMS officials, they do not require IPRS reports to have a certain level of specificity regarding how provider education department efforts focus on areas vulnerable to improper billing because they do not want to be overly prescriptive regarding MACs' provider education department efforts. As a result, CMS receives limited and varying degrees of information on the extent to which provider education department efforts are focused on the MAC-identified areas vulnerable to improper billing. CMS's collection of limited information is inconsistent with federal internal control standards related to information and communications, which state that management should use quality information to achieve the entity's objectives--CMS's objective in this instance being the education of providers about proper billing. Unless CMS requires sufficient MAC provider education department reporting, it cannot ensure that MACs' provider education department efforts are focused on areas vulnerable to improper billing. 
CMS does not require A/B MACs to educate referring providers on documentation requirements for ordering DME and home health services because referring providers do not bill for any DME or home health services on these orders. DME suppliers and home health agencies are responsible for submitting a proper written order from the referring provider to receive payment, and DME and HH+H MACs are required to educate DME suppliers and home health agencies--but not the referring provider--on a proper written order. However, when a DME supplier or home health agency accepts a written order, its payment may be denied if the claim is reviewed and the referring provider's medical record documentation does not support the supply or service provided. See figure 4 for an example in the case of DME. Some MAC officials told us they have started working with other MACs voluntarily to provide education to referring providers regarding DME and home health services documentation requirements in some jurisdictions, although CMS has not specifically required this collaboration. As an example, officials from one DME MAC told us that they and three A/B MACs that operate within its jurisdiction co-hosted two webinars on documentation requirements when ordering durable medical equipment and prosthetics and orthotics in September 2015; these webinars focused on the medical records and orders that are part of the supplier requirement for documentation. However, this voluntary collaboration does not ensure that referring providers are always being educated. For example, two A/B MACs reported that they have done little collaboration with the HH+H MAC that serves their jurisdiction for referring providers on proper billing documentation for home health services. CMS officials stated that they have not explicitly required the MACs to work together on this activity because it has not risen to a level of significant concern. 
If education were provided, officials from two DME MACs told us there would still be a lack of incentive for referring providers to bill properly for DME and home health services because they do not experience any repercussions for insufficient documentation--one type of improper billing. Instead, when DME or home health claims are denied due to insufficient documentation, from either the supplier or the referring provider, the DME or home health provider loses the payment, while the referring provider does not. This education gap is problematic because insufficient documentation is the most common reason for improper payments for home health services and DME, which have high improper payment rates. As reported for fiscal year 2016, DME had a 46.3 percent improper payment rate with the Medicare program paying an estimated $3.7 billion improperly; home health services had a 42.0 percent improper payment rate with the program paying an estimated $7.7 billion improperly (see fig. 5). Of these improper payment amounts, 81 percent and 96 percent were the result of insufficient documentation for DME and home health services, respectively. Although the DME improper payment rate has decreased somewhat in recent years, both the home health and DME programs' improper payment rates remain higher than the overall Medicare fee-for-service improper payment rate of 11.0 percent. Because referring physicians do not receive education from MACs for the required documentation to support referrals for DME and home health services, the risk is increased that DME suppliers or home health agencies will improperly submit claims with insufficient documentation from referring providers. Although both the A/B and DME MAC contracts contain a requirement for the MACs to share ideas and coordinate their efforts as necessary, they do not explicitly require collaboration between these MACs to address this education gap for referring providers. 
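Taken together, the fiscal year 2016 figures above imply rough dollar amounts attributable to insufficient documentation. The short calculation below uses only the rounded numbers reported in this section, so the results are approximations.

```python
# Fiscal year 2016 improper payment figures from this section
# (amounts in billions of dollars; shares due to insufficient documentation).
figures = {
    "DME":                  {"improper_payments": 3.7, "insufficient_doc_share": 0.81},
    "home health services": {"improper_payments": 7.7, "insufficient_doc_share": 0.96},
}

for service, f in figures.items():
    from_docs = f["improper_payments"] * f["insufficient_doc_share"]
    print(f"{service}: ~${from_docs:.1f} billion of "
          f"${f['improper_payments']} billion traced to insufficient documentation")
```

By this arithmetic, roughly $3.0 billion of the DME improper payments and roughly $7.4 billion of the home health improper payments stemmed from insufficient documentation.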
The absence of a requirement for MACs to educate referring providers about proper documentation for DME and home health claims is inconsistent with federal internal control standards, which state that in order to achieve an entity's objectives, management should assign responsibility and delegate authority. Without explicitly requiring that MACs educate referring providers, the billing errors that result from referring providers' insufficient documentation may persist. Although CMS officials consider the MACs' short-stay hospital probe and educate reviews to be a success, they did not measure the effectiveness of this new strategy in reducing improper billing. CMS officials consider the reviews to be a success based on feedback from providers who were happy with the education they received and based on the reduction in the number of providers from the first to third rounds who were billing and documenting claims improperly. We found that the effectiveness of the MACs' short-stay hospital probe and educate reviews cannot be confirmed because CMS did not establish performance metrics to determine whether the probe and educate reviews were effective in reducing improper billing. Although CMS stated the objective of the reviews was to determine the extent to which providers understood recent policy changes for certain services and were billing properly for those services, CMS officials told us they did not establish performance metrics that defined their objectives in measurable terms and would allow them to evaluate whether they met those objectives--for instance, specifying the percentage decrease they would want to see in the number of providers reviewed from the first to the third round. 
This is inconsistent with federal internal control standards that specify management should define objectives in specific and measurable terms, establish appropriate performance measures for the defined objectives, and conduct ongoing monitoring to evaluate whether they are meeting those objectives. We reviewed the data provided by the MACs to CMS about the inpatient short-stay probe and educate reviews and found that the reviews may not have been a clear success. For instance, the percentage of providers who continued to require review remained high throughout the three rounds--over 90 percent. Additionally, the percentage of claims denied in each round also remained high throughout the three rounds (see table 1). CMS officials told us that because providers billing properly were removed after each round, they could not determine how much the overall denial rate effectively decreased from the first to third rounds, noting that the decrease in the claims denial rate could be greater than results indicate. However, the number of providers removed after each round was small. It is too early to say whether the home health probe and educate reviews are successful because only one round of reviews had been completed at the time of our review. CMS officials told us they have not established specific performance metrics for the home health reviews either. The probe and educate reviews are resource-intensive. Though their costs have not been quantified by CMS, the reviews require manual assessments of thousands of claims, as well as the offer of one-to-one education from the MACs to certain providers. The importance of measuring the effectiveness of these probe and educate reviews is highlighted by their resource-intensive nature, as well as by the fact that the percentage of providers requiring review and claims denied remained high throughout the three rounds of the probe and educate reviews of short inpatient hospital stays. 
Therefore, without performance metrics, CMS cannot determine whether future probe and educate reviews would be effective in reducing improper billing. The MACs' provider education departments play an important role in reducing the rate of improper payments by educating Medicare providers on coverage and payment policies so that they can bill properly. However, CMS has missed opportunities to improve the effectiveness and its oversight of those efforts. CMS needs sufficient reporting from the MACs to determine if their provider education department efforts are focusing on areas vulnerable to improper billing. Lack of detail in the MACs' IPRS reporting provides CMS with insufficient information for oversight. Without sufficient reporting, CMS cannot assure that the MACs are focusing their provider education department efforts on reducing areas vulnerable to improper billing. In order to reduce the high improper payment rates for home health and DME, education on proper documentation for providers who refer their patients for DME and home health services is necessary; however, MACs are not required to provide this education to the referring providers. To provide this education, collaboration is needed between the A/B MACs, which are the primary contacts for the referring providers, and the DME and HH+H MACs, which have expertise in the DME and home health billing areas. Without requiring MACs to work together to educate referring providers, CMS has little assurance that referring providers are being educated in order to help reduce improper billing in DME and home health services. Finally, CMS has not determined the effectiveness of the probe and educate reviews. CMS does not have sufficient information to indicate whether the reviews help to reduce improper billing; establishing performance metrics would help CMS determine if the reviews are effective in doing so. 
Without performance metrics, CMS has little assurance that the probe and educate reviews are effective in reducing improper billing and little basis for determining whether they should be used for additional areas vulnerable to improper billing in the future. To ensure MACs' provider education efforts are focused on areas vulnerable to improper billing and to strengthen CMS's oversight of those efforts, we recommend that CMS take the following three actions: 1. CMS should require sufficient detail in MAC reporting to allow CMS to determine the extent to which MACs' provider education department efforts focus on areas identified as vulnerable to improper billing. 2. CMS should explicitly require that A/B, DME, and HH+H MACs work together to educate referring providers on documentation requirements for DME and home health services. 3. For any future probe and educate reviews, CMS should establish performance metrics that will help the agency determine the reviews' effectiveness in reducing improper billing. We provided a draft of this product to HHS for comment. In its written comments, which are reprinted in appendix II, HHS concurred with our recommendations. HHS also provided technical comments, which we incorporated as appropriate. HHS acknowledged the role of referring providers in ensuring proper billing for Medicare services, stating it will ensure the MACs work together to educate referring providers on documentation requirements for DME and home health services. Further, HHS noted that it will work with the MACs on providing additional information related to their provider education department efforts. HHS also noted it is currently developing performance metrics to help measure the effectiveness of future probe and educate reviews. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Kathleen M. King, (202) 512-7114 or [email protected]. In addition to the contact named above, Lori Achman, Assistant Director; Teresa Tam, Analyst-in-Charge; Cathleen Hamann; Deborah Linares; Vikki Porter; and Jennifer Whitworth made key contributions to this report.
For fiscal year 2016, HHS reported an estimated 11 percent improper payment rate and $41.1 billion in improper payments in the Medicare fee-for-service program. To help ensure payments are made properly, CMS contracts with MACs to conduct provider education efforts. CMS cites the MACs' provider education department efforts as an important way to reduce improper payments. GAO was asked to examine MACs' provider education department efforts and the results of MACs' probe and educate reviews. This report examines (1) the focus of MACs' provider education department efforts to help reduce improper billing and CMS oversight of these efforts and (2) the extent to which CMS measured the effectiveness of the MAC probe and educate reviews. GAO reviewed and analyzed CMS and MAC documents and MAC probe and educate review data for 2013-2016; interviewed CMS and MAC officials; and assessed CMS's oversight activities against federal internal control standards. Medicare administrative contractors (MACs) process Medicare claims, identify areas vulnerable to improper billing, and develop general education efforts focused on these areas. MAC officials state that their provider education departments focus their educational efforts on areas vulnerable to improper billing; however, the Centers for Medicare & Medicaid Services' (CMS)--the agency within the Department of Health and Human Services (HHS) that administers Medicare--oversight and requirements for these efforts are limited. CMS collects limited information about how these efforts focus on the areas MACs identify as vulnerable to improper billing. According to CMS officials, the agency has not required the MACs to provide specifics on their provider education department efforts in their reports because it does not want to be overly prescriptive regarding MAC provider education department efforts. Federal internal control standards state that management should use quality reporting information to achieve the entity's objectives. 
Unless CMS requires sufficient MAC provider education department reporting, it cannot ensure that the departments' efforts are focused on areas vulnerable to improper billing. CMS does not require MACs to educate providers who refer patients for durable medical equipment (DME), including prosthetics, orthotics, and supplies, and home health services on proper billing documentation, nor does it explicitly require MACs to work together to provide this education. HHS has reported that a large portion of the high improper payment rates in these services is related to insufficient documentation. The absence of a requirement for MACs to educate referring providers about proper documentation for DME and home health claims is inconsistent with federal internal control standards, which state that in order to achieve an entity's objectives, management should assign responsibility and delegate authority. Without an explicit requirement from CMS to educate these referring providers, billing errors due to insufficient documentation may persist. Short-stay hospital and home health claims have been the focus of the MACs' probe and educate reviews--a CMS strategy to help providers improve billing in certain areas vulnerable to improper billing. Under the probe and educate reviews, MACs review a sample of claims from every provider and then offer individualized education to reduce billing errors. CMS officials consider the completed short-stay hospital reviews to be a success based on anecdotal feedback from providers. However, the effectiveness of these reviews cannot be confirmed because CMS did not establish performance metrics to determine whether the reviews were effective in reducing improper billing. Furthermore, GAO found the percentage of claims denied remained high throughout the three rounds of the review process, despite the offer of education after each round. 
Federal internal control standards state that management should define objectives in specific and measurable terms and evaluate results against those objectives. Without performance metrics, CMS cannot determine whether future probe and educate reviews would be effective in reducing improper billing. GAO recommends that CMS (1) require sufficient detail in MAC reporting to determine the extent to which MACs' provider education department efforts focus on vulnerable areas, (2) explicitly require MACs to work together to educate referring providers on proper documentation for DME and home health services, and (3) establish performance metrics for future probe and educate reviews. HHS concurred with GAO's recommendations.
Financial assistance to help students and families pay for postsecondary education has been provided for many years through student grant and loan programs authorized under Title IV of the Higher Education Act of 1965, as amended. Examples of these programs include Pell Grants for low-income students, PLUS loans to parents and graduate students, and Stafford loans. Much of this aid has been provided on the basis of the difference between a student's cost of attendance and an estimate of the ability of the student and the student's family to pay these costs, called the expected family contribution (EFC). The EFC is calculated based on information provided by students and parents on the Free Application for Federal Student Aid (FAFSA). Federal law establishes the criteria that students must meet to be considered independent of their parents for the purpose of financial aid and the share of family and student income and assets that are expected to be available for the student's education. In fiscal year 2007, the Department of Education made available approximately $15 billion in grants and another $65 billion in Title IV loan assistance. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. Table 1 provides brief descriptions of the Title IV programs that we reviewed in our 2005 report and includes two programs--Academic Competitiveness Grants and National Science and Mathematics Access to Retain Talent Grants--that were created since that report was issued. Postsecondary assistance also has been provided through a range of tax preferences, including postsecondary tax credits, tax deductions, and tax-exempt savings programs. 
For example, the Taxpayer Relief Act of 1997 allows eligible tax filers to reduce their tax liability by receiving, for tax year 2007, up to a $1,650 Hope tax credit or up to a $2,000 Lifetime Learning tax credit for tuition and qualified related expenses paid for a single student. According to the Office of Management and Budget, the fiscal year 2007 federal revenue loss estimate of the postsecondary tax preferences that we reviewed was $8.7 billion. Tax preferences discussed as part of our 2005 report and December 2006 testimony include the following: Lifetime Learning Credit--income-based tax credit claimed by tax filers on behalf of students enrolled in one or more postsecondary education courses. Hope Credit--income-based tax credit claimed by tax filers on behalf of students enrolled at least half-time in an eligible program of study and who are in their first 2 years of postsecondary education. Student Loan Interest Deduction--income-based tax deduction claimed by tax filers on behalf of students who took out qualified student loans while enrolled at least half-time. Tuition and Fees Deduction--income-based tax deduction claimed by tax filers on behalf of students who are enrolled in one or more postsecondary education courses and have either a high school diploma or a General Educational Development (GED) credential. Section 529 Qualified Tuition Programs--College Savings Programs and Prepaid Tuition Programs--non-income-based programs that provide favorable tax treatment to investments and distributions used to pay the expenses of future or current postsecondary students. Coverdell Education Savings Accounts--income-based savings program providing favorable tax treatment to investments and distributions used to pay the expenses of future or current elementary, secondary, or postsecondary students. As figure 1 demonstrates, the use of tax preferences has increased since 1997, both in absolute terms and relative to the use of Title IV aid. 
Postsecondary student financial assistance provided through programs authorized under Title IV of the Higher Education Act and the tax code differs in the timing of assistance, the populations that receive assistance, and the responsibility of students and families to obtain and use the assistance. Title IV programs and education-related tax preferences differ significantly in when eligibility is established and in the timing of the assistance they provide. Title IV programs generally provide benefits to students while they are in school. Education-related tax preferences, on the other hand, (1) encourage saving for college through tax-exempt saving, (2) assist enrolled students and their families in meeting the current costs of postsecondary education through credits and tuition deductions, and (3) assist students and families repaying the costs of past postsecondary education through a tax deduction for student loan interest paid. While Title IV programs and tax preferences assist many students and families, program and tax rules affect eligibility for such assistance. These rules also affect the distribution of Title IV aid and the assistance provided through tax preferences. As a result, the beneficiaries of Title IV programs and tax preferences differ. Title IV programs generally have rules for calculating grant and loan assistance that give consideration to family and student income, assets, and college costs in the awarding of financial aid. For example, Pell Grant awards are calculated by subtracting the student's EFC from the maximum Pell Grant award ($4,310 in academic year 2007--2008) or the student's cost of attendance, whichever is less. Because the EFC is closely linked to family income and circumstances (such as the size of the family and the number of dependents in school), and modest EFCs are required for Pell Grant eligibility, Pell awards are made primarily to families with modest incomes. 
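The Pell award calculation described above--the lesser of the maximum award or the cost of attendance, minus the EFC--can be sketched as follows. The function name and the zero floor for students whose EFC exceeds the base are illustrative assumptions; the $4,310 default is the 2007--2008 maximum cited above.

```python
def pell_award(efc, cost_of_attendance, max_award=4310):
    """Pell Grant award: the lesser of the maximum award or the student's
    cost of attendance, minus the expected family contribution (EFC).
    Floored at zero (an assumption) for students whose EFC exceeds the
    applicable base; max_award defaults to the 2007-2008 maximum."""
    return max(0, min(max_award, cost_of_attendance) - efc)

# A student with a $1,000 EFC attending a $12,000-per-year institution:
print(pell_award(1000, 12000))   # 3310
```

This makes concrete why Pell awards flow primarily to modest-income families: as the EFC rises with family income, the award shrinks dollar-for-dollar and reaches zero well before the cost of attendance does.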
In contrast, the maximum unsubsidized Stafford loan amount is calculated without direct consideration of financial need: students may borrow up to their cost of attendance, minus the estimated financial assistance they will receive. As table 2 shows, 92 percent of Pell financial support in 2003--2004 was provided to dependent students whose family incomes were $40,000 or below, and the 38 percent of Pell recipients in the lowest income category ($20,000 or below) received a higher share (48 percent) of Pell financial support. Because independent students generally have lower incomes and accumulated savings than dependent students and their families, patterns of program participation and dollar distribution differ. Participation of independent students in Pell, subsidized Stafford, and unsubsidized Stafford loan programs is heavily concentrated among those with incomes of $40,000 or less: from 74 percent (unsubsidized Stafford) to 95 percent (Pell) of program participants have incomes below this level. As shown in table 3, the distribution of award dollars follows a nearly identical pattern. Many education-related tax preferences have both de facto lower limits created by the need to have a positive tax liability to obtain their benefit and income ceilings on who may use them. For example, the Hope and Lifetime Learning tax credits require that tax filers have a positive tax liability to use them, and income-related phase-out provisions in 2007 began at $47,000 and $94,000 for single and joint filers, respectively. Furthermore, tax-exempt savings are more advantageous to families with higher incomes and tax liabilities because, among other reasons, these families hold greater assets to invest in these tax preferences and have a higher marginal tax rate, and thus benefit the most from the use of these tax preferences. 
Table 4 shows the income categories of tax filers claiming the three tax preferences available to current students or their families, along with the reduced tax liabilities from those preferences in 2005. The federal government and postsecondary institutions have significant responsibilities in assisting students and families in obtaining assistance provided under Title IV programs but only minor roles with respect to tax filers' use of education-related tax preferences. To obtain federal student aid, applicants must first complete the FAFSA, a form that requires students to complete up to 99 fields for the 2007--2008 academic year. Submitting a completed FAFSA to the Department of Education largely concludes students' and families' responsibility in obtaining aid. The Department of Education is responsible for calculating students' and families' EFC on the basis of the FAFSA, and students' educational institutions are responsible for determining aid eligibility and the amounts and packaging of awards. In contrast, higher education tax preferences require students and families to take more responsibility. Although postsecondary institutions provide students and the Internal Revenue Service (IRS) with information about higher education attendance, they have no other responsibilities for higher education tax credits, deductions, or tax-preferred savings. The federal government's primary role with respect to higher education tax preferences is the promulgation of rules; the provision of guidance to tax filers; and the processing of tax returns, including some checks on the accuracy of items reported on those tax returns. The responsibility for selecting among and properly using tax preferences rests with tax filers. 
Unlike with Title IV programs, tax filers must understand the rules, identify applicable tax preferences, understand how these tax preferences interact with one another and with federal student aid, keep records sufficient to support their tax filing, and correctly claim the credit or deduction on their return. According to our analysis of 2005 IRS data on the use of Hope and Lifetime Learning Credits and the tuition deduction, some tax filers appear to make less-than-optimal choices among them. The apparent suboptimal use of postsecondary tax preferences may arise, in part, from the complexity of these provisions. Making poor choices among tax preferences for postsecondary education may be costly to tax filers. For example, families may strand assets in a tax-exempt savings vehicle and incur tax penalties on their distribution if their child chooses not to go to college. They may also fail to minimize their federal income tax liability by claiming a tax credit or deduction that yields less of a reduction in taxes than a different tax preference or by failing to claim any of their available tax preferences. For example, if a married couple filing jointly with one dependent in his/her first 2 years of college had an adjusted gross income of $50,000, qualified expenses of $10,000 in 2007, and tax liability greater than $2,000, their tax liability would be reduced by $2,000 if they claimed the Lifetime Learning Credit but only $1,650 if they claimed the Hope Credit. In our analysis of 2005 IRS data for returns with information on education expenses incurred, we found that some people who appear to be eligible for tax credits or the tuition deduction did not claim them. We estimate that 2.1 million filers could have claimed a tax credit or tuition deduction and thereby reduced their taxes. However, about 19 percent of those filers, representing about 412,000 returns, failed to claim any of them. 
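The Hope versus Lifetime Learning comparison in the example above follows from the 2007 credit formulas: the Lifetime Learning Credit equals 20 percent of up to $10,000 in qualified expenses (the $2,000 maximum cited earlier), while the Hope Credit covered 100 percent of the first $1,100 of expenses plus 50 percent of the next $1,100 (the $1,650 maximum cited earlier). A minimal sketch, with the bracket amounts stated as assumptions about the 2007 rules:

```python
def lifetime_learning_credit(expenses):
    # 20 percent of up to $10,000 in qualified expenses (2007 rules)
    return 0.20 * min(expenses, 10_000)

def hope_credit(expenses):
    # Assumed 2007 brackets: 100% of the first $1,100 of qualified
    # expenses plus 50% of the next $1,100
    return min(expenses, 1_100) + 0.50 * min(max(expenses - 1_100, 0), 1_100)

# The couple in the example: $10,000 in qualified expenses
print(lifetime_learning_credit(10_000))  # 2000.0
print(hope_credit(10_000))               # 1650.0
```

Because the Hope Credit phases down after $1,100 of expenses while the Lifetime Learning Credit applies its flat 20 percent to up to $10,000, the better choice flips as expenses rise, which is one reason filers make suboptimal selections.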
The amount by which these tax filers failed to reduce their tax averaged $219; 10 percent of this group could have reduced their tax liability by over $500. In total, including both those who failed to claim a tax credit or tuition deduction and those who chose a credit or a deduction that did not maximize their benefit, we found that in 2005, 28 percent, or nearly 601,000, of these tax filers did not maximize their potential tax benefit. Regarding those making a poor choice among the provisions, for example, 27 percent of tax filers who claimed the tuition deduction could have further reduced their tax liability by an average of $220 by instead claiming the Lifetime Learning Credit; 10 percent of this group could have reduced their tax liabilities by over $630. Tax filers who claimed the Hope Credit when the Lifetime Learning Credit was a more optimal choice failed to reduce their tax liabilities by an average of $356. Suboptimal choices were not limited to tax filers who prepared their own tax returns. A possible indicator of the difficulty people face in understanding education-related tax preferences is how often the suboptimal choices we identified were found on tax returns prepared by paid tax preparers. We estimate that 50 percent of the returns we found that appear to have failed to optimally reduce the tax filer's tax liability were prepared by paid tax preparers. Generalized to the population of tax returns we were able to review, returns prepared by paid tax preparers represent about 301,000 of the approximately 601,000 suboptimal choices we found. Our April 2006 study of paid tax preparers corroborates the problem of confusion over which of the tax preferences to claim. Of the nine undercover investigation visits we made to paid preparers with a taxpayer with a dependent college student, three preparers did not claim the credit most advantageous to the taxpayer and thereby cost these taxpayers hundreds of dollars in refunds. 
In our investigative scenario, the expenses and the year in school made the Hope education credit far more advantageous to the taxpayer than either the tuition and fees deduction or the Lifetime Learning credit. The apparently suboptimal use of postsecondary tax preferences may arise, in part, because of the complexity of using these provisions. Tax policy analysts have frequently identified postsecondary tax preferences as a set of tax provisions that demand a particularly large investment of knowledge and skill on the part of students and families or expert assistance purchased by those with the means to do so. They suggest that this complexity arises from multiple postsecondary tax preferences with similar purposes, from key definitions that vary across these provisions, and from rules that coordinate the use of multiple tax provisions. Twelve tax preferences are outlined in IRS Publication 970, Tax Benefits for Education: For Use in Preparing 2007 Returns. The publication includes four different tax preferences for educational saving. Three of these preferences--Coverdell Education Savings Accounts, Qualified Tuition Programs, and U.S. education savings bonds--differ across more than a dozen dimensions, including the tax penalty that occurs when account balances are not used for qualified higher education expenses, who may be an eligible beneficiary, annual contribution limits, and other features. In addition to learning about, comparing, and selecting tax preferences, filers who wish to make optimal use of multiple tax preferences must understand how the use of one tax preference affects the use of others. The use of multiple education-related tax preferences is coordinated through rules that prohibit the application of the same qualified higher education expenses for the same student to more than one education- related tax preference, sometimes referred to as "anti-double-dipping rules." 
These rules are important because they prevent tax filers from underreporting their tax liability. Nonetheless, anti-double-dipping rules are potentially difficult for tax filers to understand and apply, and misunderstanding them may have consequences for a filer's tax liability. Many researchers and policy analysts support simplifying the existing federal grant, loan, and tax preference provisions in the belief that doing so would have a net positive effect on encouraging access. Indeed, suggestions put forth in recent years to combine the federal grants and tax credits, for example, may help address some of the challenges we identified regarding tax filers' suboptimal use of postsecondary tax preferences or the confusion created by the interactions between direct student aid programs, such as the Pell Grant, and existing tax preferences. In this case, reducing the number of choices students and their families have to make would likely reduce tax filers' confusion and mistakes. To date, we have not undertaken any studies of how current Title IV student aid programs or tax preferences could be simplified and, as a result, have not developed any such models or proposals. However, while different aspects of simplification may provide students and their families with various benefits, Congress would likely want to weigh those benefits against a number of potentially related costs. Simplifying the federal application for student aid--A better understanding is needed about whether or to what extent simplifying the application for federal aid would: (1) alter the administration of other federal, state and institutional student aid programs, (2) be capable of accommodating future federal policies designed to target aid, and (3) affect current programs that are specifically tied to Pell Grant eligibility. 
The current FAFSA is used to determine students' eligibility for various federal aid programs, including Pell Grants, Academic Competitiveness Grants, SMART Grants, Stafford and PLUS loans, Supplemental Educational Opportunity Grants (SEOG), Perkins Loans, and Federal Work-Study. In addition, many states and schools rely on the FAFSA when awarding state and institutional student aid. To the extent that other programs require FAFSA-like information from applicants to award financial aid, additional research is needed to determine whether simplifying the FAFSA may actually increase the number of applications students and families would be required to submit. Simplifying eligibility verification requirements--Both grants and tax credits are awarded based, in part, on students' and their families' incomes, which means students and families are required to document their income to receive the benefit. Under the current system, some students and families are eligible to apply for Title IV student aid even though they are not required to file a tax return; in such cases, eligibility is computed based upon information reported on the FAFSA. Any plan to consolidate some or all of the current federal grants and tax preferences would need to consider how to minimize burden on students and families while also controlling federal administrative costs, for example, by minimizing the use of multiple verification procedures that use multiple forms of documentation and that are administered by multiple agencies. Simplifying program administration while maintaining federal cost controls --Federal grant and loan programs are administered by the Department of Education while federal tax preferences are administered by IRS. 
Under a system where existing grant aid and tax credits are consolidated, it is unclear, without additional research, whether cost efficiency is better achieved through having the Department of Education or IRS assume federal budgeting and accounting responsibilities. In addition, the grant programs generally are subject to an annual appropriation which enables Congress to control overall federal expenditures by taking into account other federal priorities. In contrast, most tax preferences are like entitlement programs and their revenue losses can only be controlled by changing the statutory qualifications for the tax preference. Simplifying aid distribution--Policymakers will need to consider costs associated with the federal government recovering funds if students fail to maintain eligibility requirements over the course of an academic year. Families currently claim tax preferences after qualifying higher education expenses have been incurred but receive federal grant benefits to pay current expenses. Program simplifications that consolidate grants and tax preferences into a benefit paid before expenses are incurred likely will require the implementation of new cost recovery mechanisms or other means to allocate payments based on costs actually incurred. Simplifying eligible expenses--Room and board expenses are considered in the administration of the federal student aid programs authorized under Title IV of the Higher Education Act but not in all tax preferences, particularly the Hope and Lifetime Learning Credits. Careful analysis will be needed of how such expenses could be accounted for in a simplified scheme if it is changed to being structured as a tax preference rather than a grant. Room and board expenses vary based on where a school is located or whether a student lives on or off campus, and they can be a significant component of a student's cost of attendance, particularly at community colleges. 
While certain strategies might be employed to lessen tax filers' recordkeeping requirements and result in fewer tax filer compliance issues, further research is needed on how such an allowance would be optimally set. Establishing too high an allowance, for example, could result in some students receiving a benefit in excess of the costs they incur for room and board, especially for those students who choose to live with their parents. Alternatively, if tax assistance is provided in advance of incurring costs, but the assistance is to be limited to costs actually incurred, a cost recovery or other administrative mechanism would be needed as discussed above. Little is known about the effectiveness of federal grant and loan programs and education-related tax preferences in promoting attendance, choice, and the likelihood that students either earn a degree or continue their education (referred to as persistence). Many federal aid programs and tax preferences have not been studied, and for those that have been studied, important aspects of their effectiveness remain unexamined. In our 2005 report, we found no research on any aspect of effectiveness for several major Title IV federal postsecondary programs and tax preferences. For example, no research had examined the effects of federal postsecondary education tax credits on students' persistence in their studies or on the type of postsecondary institution they choose to attend, and there is limited research on the effectiveness of the Pell Grant program on students' persistence. One recently published study suggests that complexity in the federal grant and loan application processes may undermine their efficacy in promoting postsecondary attendance. The relative newness of most of the tax preferences also presents challenges because relevant data are just now becoming available. These factors may contribute to a lack of information concerning the effectiveness of the aid programs and tax preferences. 
GAO, Student Aid and Tax Benefits: Better Research and Guidance Will Facilitate Comparison of Effectiveness and Student Use, GAO-02-751 (Washington, D.C.: Sept. 13, 2002). The Department of Education has funded a research subtopic on improving "access to, persistence in, or completion of postsecondary education." Multiyear projects funded under this subtopic began in July 2007. However, none of the grants awarded to date appear to directly evaluate the role and effectiveness of Title IV programs and tax preferences in improving access to, persistence in, or completion of postsecondary education. As we noted in our 2002 report, more research into the effectiveness of different forms of postsecondary education assistance is important. Without such information, federal policymakers cannot make fact-based decisions about how to build on successful programs and make necessary changes to improve less-effective programs. The budget deficit and other major fiscal challenges facing the nation necessitate rethinking the base of existing federal spending and tax programs, policies, and activities by reviewing their results and testing their continued relevance and relative priority for a changing society. In light of the long-term fiscal challenge this nation faces and the need to make hard decisions about how the federal government allocates resources, this hearing provides an opportunity to continue a discussion about how the federal government can best help students and their families pay for postsecondary education. Some questions that Congress should consider during this dialog include the following: Should the federal government consolidate postsecondary education tax provisions to make them easier for the public to use and understand? Given its limited resources, should the government further target Title IV programs and tax provisions based on need or other factors? How can Congress best evaluate the effectiveness and efficiency of postsecondary education aid provided through the tax code? 
Can tax preferences and Title IV programs be better coordinated to maximize their effectiveness? Mr. Chairman and Members of the Subcommittee, this concludes our statement. We welcome any questions you have at this time. For further information regarding this testimony, please contact Michael Brostek at (202) 512-9110 or [email protected] or George Scott at (202) 512-7215 or [email protected]. Individuals making contributions to this testimony include David Lewis, Assistant Director; Sarah Farkas, Sheila R. McCoy, John Mingus, Danielle Novak, Daniel Novillo, Carlo Salerno, Andrew J. Stephens, and Jessica Thomsen. The federal government helps students and families save, pay for, and repay the costs of postsecondary education through grant and loan programs authorized under Title IV of the Higher Education Act of 1965, as amended, and through tax preferences--reductions in federal tax liabilities that result from preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferrals, and preferential tax rates. Assistance provided under Title IV programs includes Pell Grants for low-income students, the Academic Competitiveness and National Science and Mathematics Access to Retain Talent Grants, PLUS loans, which parents as well as graduate and professional students may apply for, and Stafford loans. While each of the three grants reduces the price paid by the student, student loans help to finance the remaining costs and are to be repaid according to varying terms. Stafford loans may be either subsidized or unsubsidized. The federal government pays the interest cost on subsidized loans while the student is in school, and during a 6-month period known as the grace period, after the student leaves school. For unsubsidized loans, students are responsible for all interest costs. Stafford and PLUS loans are provided to students through both the Federal Family Education Loan program (FFEL) and the William D. 
Ford Direct Loan Program (FDLP). The federal government's role in financing and administering these two loan programs differs significantly. Under the FFEL program, private lenders, such as banks, provide loan capital and make loans, and the federal government guarantees FFEL lenders a minimum yield on the loans they make and repayment if borrowers default. Under FDLP, the federal government makes loans to students using federal funds. The Department of Education and its private-sector contractors jointly administer the program. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. To receive Title IV aid, students (along with parents, in the case of dependent students) must complete a Free Application for Federal Student Aid form. Information from the FAFSA, particularly income and asset information, is used to determine the amount of money--called the expected family contribution--that the student and/or family is expected to contribute to the student's education. Federal law establishes the criteria that students must meet to be considered independent of their parents for the purpose of financial aid and the share of family and student income and assets that are expected to be available for the student's education. Once the EFC is established, it is compared with the cost of attendance at the institution chosen by the student. The cost of attendance comprises tuition and fees; room and board; books and supplies; transportation; certain miscellaneous personal expenses; and, for some students, additional expenses. If the EFC is greater than the cost of attendance, the student is not considered to have financial need, according to the federal aid methodology. 
If the cost of attendance is greater than the EFC, then the student is considered to have financial need. Title IV assistance that is made on the basis of the calculated need of aid applicants is called need-based aid. Key characteristics of Title IV programs are summarized in table 5 below. Prior to the 1990s, virtually all major federal initiatives to assist students with the costs of postsecondary education were provided through grant and loan programs authorized under Title IV of the Higher Education Act. Since the 1990s, however, new federal initiatives to assist families and students in paying for postsecondary education have largely been implemented through the federal tax code. The federal tax code now contains a range of tax preferences that may be used to assist students and families in saving for, paying, or repaying the costs of postsecondary education. These tax preferences include credits and deductions, both of which allow tax filers to use qualified higher education expenses to reduce their federal income tax liability. The tax credits reduce the tax filers' income tax liability on a dollar-for-dollar basis but are not refundable. Tax deductions permit qualified higher education expenses to be subtracted from income that would otherwise be taxable. To benefit from a higher education tax credit or tuition deduction, a tax filer must use tax form 1040 or 1040A, have an adjusted gross income below the provisions' statutorily specified income limits, and have a positive tax liability after other deductions and credits are calculated, among other requirements. Tax preferences also include tax-exempt savings vehicles. Section 529 of the tax code makes tax free the investment income from qualified tuition programs. There are two types of qualified tuition programs: savings programs established by states and prepaid tuition programs established either by states or by one or more eligible educational institutions. 
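The distinction drawn above between credits (which offset tax liability dollar-for-dollar but are not refundable) and deductions (which remove income from taxation) can be sketched numerically. The function names and the 25 percent marginal rate in the usage example are illustrative assumptions, not figures from the testimony.

```python
def savings_from_credit(credit, tax_liability):
    # A nonrefundable credit offsets tax liability dollar-for-dollar,
    # but cannot reduce liability below zero
    return min(credit, tax_liability)

def savings_from_deduction(deduction, marginal_rate):
    # A deduction removes income from taxation, so its value scales
    # with the filer's marginal tax rate
    return deduction * marginal_rate

# A $2,000 credit versus a $2,000 deduction at an assumed 25% marginal rate:
print(savings_from_credit(2000, 3000))      # 2000
print(savings_from_deduction(2000, 0.25))   # 500.0
```

This also illustrates why, as noted earlier, a filer must have a positive tax liability to benefit from a nonrefundable credit, and why deductions are worth more to filers in higher marginal brackets.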
Another tax-exempt savings vehicle is the Coverdell Education Savings Account. Tax penalties apply to both 529 programs and Coverdell savings accounts if the funds are not used for allowable education expenses. Key features of these and other education-related tax preferences are described below, in table 6. Our review of tax preferences did not include exclusions from income, which permit certain types of education-related income to be excluded from the calculation of adjusted gross income on which taxes are based. For example, qualified scholarships covering tuition and fees and qualified tuition reductions from eligible educational institutions are not included in gross income for income tax purposes. Similarly, student loans forgiven when a graduate goes into certain professions for a certain period of time are also not subject to federal income taxes. We did not include special provisions in the tax code that also extend existing tax preferences when tax filers support a postsecondary education student. For example, tax filers may claim postsecondary education students as dependents after age 18, even if the student has his or her own income over the limit that would otherwise apply. Also, gift taxes do not apply to funds used for certain postsecondary educational expenses, even for amounts in excess of the usual $12,000 limit on non-taxable gifts. In addition, funds withdrawn early from an Individual Retirement Account are not subject to the usual 10 percent penalty when used for either a tax filer's or his or her dependent's postsecondary educational expenses. 
For an example of how the use of college savings programs and the tuition deduction is affected by "anti-double-dipping" rules, consider the following: To calculate whether a distribution from a college savings program is taxable, tax filers must determine if the total distributions for the tax year are more or less than the total qualified educational expenses reduced by any tax-free educational assistance, i.e., their adjusted qualified education expenses (AQEE). After subtracting tax-free assistance from qualified educational expenses to arrive at the AQEE, tax filers multiply total distributed earnings by the fraction (AQEE / total amount distributed during the year). If parents of a dependent student paid $6,500 in qualified education expenses from a $3,000 tax-free scholarship and a $3,600 distribution from a tuition savings program, they would have $3,500 in AQEE. If $1,200 of the distribution consisted of earnings, then $1,200 x ($3,500 AQEE / $3,600 distribution) would result in $1,167 of the earnings being tax free, while $33 would be taxable. However, if the same tax filer had also claimed a tuition deduction, anti-double-dipping rules would require the tax filer to subtract the expenses taken into account in figuring the tuition deduction from AQEE. If $2,000 in expenses had been used toward the tuition deduction, then the taxable distribution from the section 529 savings program would rise to $700. For families such as these, anti-double-dipping rules increase the computational complexity they face and may result in unanticipated tax liabilities associated with the use of section 529 savings programs. We used two data sets for this testimony: Education's 2003-2004 National Postsecondary Student Aid Study and the Internal Revenue Service's 2005 Statistics of Income. Estimates from both data sets are subject to sampling errors, and the estimates we report are surrounded by a 95 percent confidence interval. 
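The anti-double-dipping computation above can be expressed compactly. The function below mirrors the worked example in the text; it is an illustrative simplification of the calculation, not an authoritative implementation of the IRS worksheet, and the function and parameter names are our own.

```python
def taxable_529_earnings(qualified_expenses, tax_free_assistance,
                         distribution, distributed_earnings,
                         tuition_deduction_expenses=0.0):
    """Taxable portion of section 529 distributed earnings.

    AQEE = qualified expenses - tax-free assistance - expenses already
    counted toward the tuition deduction (the anti-double-dipping rule).
    Tax-free earnings = earnings * (AQEE / total distribution);
    the remainder of the earnings is taxable.
    """
    aqee = (qualified_expenses - tax_free_assistance
            - tuition_deduction_expenses)
    tax_free = distributed_earnings * (aqee / distribution)
    return round(distributed_earnings - tax_free)

# Worked example from the text: $6,500 in expenses, a $3,000 scholarship,
# and a $3,600 distribution of which $1,200 is earnings.
print(taxable_529_earnings(6500, 3000, 3600, 1200))        # 33

# Same filer after claiming a $2,000 tuition deduction: AQEE falls to
# $1,500 and the taxable portion of the earnings rises to $700.
print(taxable_529_earnings(6500, 3000, 3600, 1200, 2000))  # 700
```

Two lines of arithmetic reproduce both results, which underscores the point in the text: the difficulty for families is less the math itself than knowing that the interaction rule exists and applies to them.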
The following tables provide the lower and upper bounds of the 95 percent confidence intervals for all estimates presented in the tables in this testimony. For figures and text drawn from these data, we provide both point estimates and confidence intervals.
Federal assistance helps students and families pay for postsecondary education through several policy tools--grant and loan programs authorized by Title IV of the Higher Education Act of 1965 and more recently enacted tax preferences. This testimony summarizes our 2005 report and provides updates on (1) how Title IV assistance compares to that provided through the tax code, (2) the extent to which tax filers effectively use education tax preferences, (3) potential benefits and costs of simplifying federal student aid, and (4) what is known about the effectiveness of federal assistance. This hearing is an opportunity to consider whether changes should be made in the government's overall strategy for providing such assistance or to the individual programs and tax provisions that provide the assistance. This statement is based on updates to previously published GAO work and reviews of relevant literature. Title IV student aid and tax preferences provide assistance to a wide range of students and families in different ways. While both help students meet current expenses, tax preferences also assist students and families with saving for and repaying postsecondary costs. Both serve students and families with a range of incomes, but some forms of Title IV aid--grant aid, in particular--provide assistance to those whose incomes are lower, on average, than is the case with tax preferences. Tax preferences require more responsibility on the part of students and families than Title IV aid because taxpayers must identify applicable tax preferences, understand complex rules concerning their use, and correctly calculate and claim credits or deductions. While the tax preferences are a newer policy tool, the number of tax filers using them has grown quickly, surpassing the number of students aided under Title IV in 2002. Some tax filers do not appear to make optimal education-related tax decisions. 
For example, our analysis of a limited number of 2005 tax returns indicated that 41 percent of eligible tax filers did not claim either the tuition deduction or a tax credit. In so doing, these tax filers failed to reduce their tax liability by $219, on average, and 10 percent of these filers could have reduced their tax liability by over $500. One explanation for these taxpayers' choices may be the complexity of postsecondary tax provisions, which experts have commonly identified as difficult for tax filers to use. Simplifying the grants, loans, and tax preferences may reduce complexities in higher education financing, including reducing the number of eligible tax filers that do not claim tax preferences, but more research would be necessary to understand the full benefits and costs of any such changes. Little is known about the effectiveness of Title IV aid or tax preferences in promoting, for example, postsecondary attendance or school choice, in part because of research data and methodological challenges. As a result, policymakers do not have information that would allow them to make the most efficient use of limited federal resources to help students and families.
The total long-term funding for helping the Gulf Coast recover from the 2005 hurricanes hinges on numerous factors including policy choices made at all levels of government, knowledge of spending across the federal government, and the multiple decisions required to transform the region. To understand the long-term federal financial implications of Gulf Coast rebuilding it is helpful to view potential federal assistance within the context of overall estimates of the damages incurred by the region. Although there are no definitive or authoritative estimates of the amount of federal funds that could be invested to rebuild the Gulf Coast, various estimates of aspects of rebuilding offer a sense of the long-term financial implications. For example, early damage estimates from the Congressional Budget Office (CBO) put capital losses from Hurricanes Katrina and Rita at a range of $70 billion to $130 billion while another estimate put losses solely from Hurricane Katrina--including capital losses--at more than $150 billion. Further, the state of Louisiana has estimated that the economic effect on its state alone could reach $200 billion. The exact costs of damages from the Gulf Coast hurricanes may never be known, but will likely far surpass those from the three other costliest disasters in recent history--Hurricane Andrew in 1992, the 1994 Northridge earthquake, and the September 2001 terrorist attacks. These estimates raise important questions regarding how much additional assistance may be needed to continue to help the Gulf Coast rebuild, and who should be responsible for providing the related resources. To respond to the Gulf Coast devastation, the federal government has already committed a historically high level of resources--more than $116 billion--through an array of grants, loan subsidies, and tax relief and incentives. The bulk of this assistance was provided between September 2005 and May 2007 through five emergency supplemental appropriations. 
A substantial portion of this assistance was directed to emergency assistance and meeting short-term needs arising from the hurricanes, such as relocation assistance, emergency housing, immediate levee repair, and debris removal efforts. The Brookings Institution has estimated that approximately $35 billion of the federal resources provided supports longer-term rebuilding efforts. The federal funding I have mentioned presents an informative, but likely incomplete picture of the federal government's total financial investments to date. Tracking total funds provided for federal Gulf Coast rebuilding efforts requires knowledge of a host of programs administered by multiple federal agencies. We previously reported that the federal government does not have a governmentwide framework or mechanism in place to collect and consolidate information from the individual federal agencies that received appropriations in emergency supplementals for hurricane relief and recovery efforts or to report on this information. It is important to provide transparency by collecting and publishing this information so that hurricane victims, affected states, and American taxpayers know how these funds are being spent. Until such a system is in place across the federal government, a complete picture of federal funding streams and their integration across agencies will remain lacking. Demands for additional federal resources to rebuild the Gulf Coast are likely to continue, despite the substantial federal funding provided to date. The bulk of federal rebuilding assistance provided to the Gulf Coast states funds two key programs--FEMA's Public Assistance (PA) program and HUD's Community Development Block Grant (CDBG) program. These two programs follow different funding models. PA provides funding for restoration of the region's infrastructure on a project-by-project basis involving an assessment of specific proposals to determine eligibility. 
In contrast, CDBG affords broad discretion and flexibility to states and localities for restoration of the region's livable housing. In addition to funding PA and CDBG, the federal government's recovery and rebuilding assistance also includes payouts from the National Flood Insurance Program (NFIP) as well as funds for levee restoration and repair, coastal wetlands and barrier islands restoration, and benefits provided through Gulf Opportunity Zone (GO Zone) tax expenditures. The PA Grant program provides assistance to state and local governments and eligible nonprofit organizations on a project-by-project basis for emergency work (e.g., removal of debris and emergency protective measures) and permanent work (e.g., repairing roads, reconstructing buildings, and reestablishing utilities). After the President declares a disaster, a state becomes eligible for federal PA funds through FEMA's Disaster Relief Fund. Officials at the local, state, and federal level are involved in the PA process in a variety of ways. The grant applicant, such as a local government or nonprofit organization, works with state and FEMA officials to develop a scope of work and cost estimate for each project that is documented in individual project worksheets. In addition to documenting scope of work and cost considerations, each project worksheet is reviewed by FEMA and the state to determine whether the applicant and type of facility are eligible for funding. Once approved, funds are obligated, that is, made available, to the state. PA generally operates on a reimbursement basis. Reimbursement for small projects (up to $59,700) is made based on the project's estimated costs, while large projects (more than $59,700) are reimbursed based on actual eligible costs as they are incurred. 
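The small/large project distinction above reduces to a simple rule. The sketch below uses the $59,700 threshold cited in the text; the function name and return format are illustrative, and actual PA eligibility involves review steps the sketch omits.

```python
SMALL_PROJECT_THRESHOLD = 59_700  # FEMA PA threshold cited in the text

def reimbursement_basis(estimated_cost, actual_eligible_cost):
    """Small projects are reimbursed on the project's estimated cost;
    large projects on actual eligible costs as they are incurred."""
    if estimated_cost <= SMALL_PROJECT_THRESHOLD:
        return ("small", estimated_cost)
    return ("large", actual_eligible_cost)

# A $45,000 debris-removal project is paid on its estimate,
# even if actual costs come in somewhat higher or lower.
print(reimbursement_basis(45_000, 47_500))        # ('small', 45000)

# A $2 million school repair is paid on actual eligible costs.
print(reimbursement_basis(2_000_000, 2_150_000))  # ('large', 2150000)
```

The rule matters for the cost-growth discussion that follows: estimates lock in payments only for small projects, while large-project costs can keep rising until a disaster is closed out.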
As of the middle of July 2007, FEMA had approved a total of 67,253 project worksheets for emergency and permanent work, making available about $8.2 billion in PA grants to the states of Louisiana, Mississippi, Texas, and Alabama. A smaller portion of PA program funds is going toward longer-term rebuilding activities than toward emergency work. Of the approximately $8.2 billion made available to the Gulf Coast states overall, about $3.4 billion (41 percent) is for permanent work such as repairing and rebuilding schools and hospitals and reestablishing sewer and water systems, while about $4.6 billion (56 percent) is for emergency response work such as clearing roads for access and sandbagging low-lying areas. The remaining amount of PA funds, about $0.2 billion (3 percent), is for administrative costs. (See fig. 1.) Of the funds made available by FEMA to the states for permanent rebuilding, localities have received only a portion of these funds since many projects have not yet been completed. Specifically, in Louisiana and Mississippi, 26 and 22 percent of obligated funds, respectively, have been paid by the state to applicants for these projects. The total cost of PA funding for the Gulf Coast hurricanes will likely exceed the approximately $8.2 billion already made available to the states for two reasons: (1) the funds do not reflect all current and future projects, and (2) the cost of some of these projects will likely be higher than FEMA's original estimates. According to FEMA, as of the middle of July 2007, an additional 1,916 project worksheets were in process (these projects are in addition to the 67,253 approved project worksheets mentioned above). FEMA expects that another 2,730 project worksheets will be written. FEMA expects these worksheets to increase the total cost by about $2.1 billion, resulting in a total expected PA cost of about $10.3 billion. 
Some state and local officials have also expressed concerns about unrealistically low cost estimates contained in project worksheets, which could lead to even higher than anticipated costs to the federal government. A senior official within the Louisiana Governor's Office of Homeland Security and Emergency Preparedness recently testified that some of the projects were underestimated by a factor of 4 or 5 compared to the actual cost. For example, the lowest bids on 11 project worksheets for repairing or rebuilding state-owned facilities, such as universities and hospitals, totaled $5.5 million while FEMA approved $1.9 million for these projects. The extent to which the number of new project worksheets and actual costs that exceed estimated costs will result in demands for additional federal funds remains unknown. In addition, PA costs may increase until a disaster is closed, which can take many years in the case of a catastrophic disaster. For instance, PA costs from the Northridge earthquake that hit California in January 1994 have not been closed out more than 13 years after the event. Our ongoing work on the PA program will provide insights into efforts to complete infrastructure projects, the actual costs of completed projects, and the use of federal funds to complete PA projects. HUD's CDBG program provides funding for neighborhood revitalization and housing rehabilitation activities, affording states broad discretion and flexibility in deciding how to allocate these funds and for what purposes. Congress has provided even greater flexibility when allocating additional CDBG funds to affected communities and states to help them recover from presidentially declared disasters, such as the Gulf Coast hurricanes. To date, the affected Gulf Coast states have received $16.7 billion in CDBG funding from supplemental appropriations--so far, the largest source of long-term federal Gulf Coast rebuilding funding. 
As shown in figure 2, Louisiana and Mississippi were allocated the largest shares of the CDBG appropriations, with $10.4 billion allocated to Louisiana, and another $5.5 billion allocated to Mississippi. Florida, Alabama, and Texas received the remaining share of CDBG funds. To receive CDBG funds for Gulf Coast rebuilding, HUD required that each state submit an action plan describing how the funds would be used, including how the funds would address long-term "recovery and restoration of infrastructure." Accordingly, the states had substantial flexibility in establishing funding levels and designing programs to achieve their goals. As shown in figure 3, Mississippi set aside $3.8 billion to address housing priorities within the state while Louisiana dedicated $8 billion for its housing needs. Each state also directed the majority of its housing allocations to owner-occupied homes and designed a homeowner assistance program to address the particular conditions in its state. As discussed below, each state used different assumptions in designing its programs, which in turn affects the financial implications for each state. Using $8.0 billion in CDBG funding, the Louisiana Recovery Authority (LRA) developed a housing assistance program called the Road Home to restore the housing infrastructure in the state. As shown in figure 4, Louisiana set aside about $6.3 billion of these funds to develop the homeowner assistance component of the program and nearly $1.7 billion for rental, low-income housing, and other housing-related projects. Louisiana anticipated that FEMA would provide the homeowner assistance component with another $1.2 billion in grant assistance. Louisiana based these funding amounts on estimates of need within the state. Accordingly, Louisiana estimated that $7.5 billion would be needed to assist 114,532 homeowners with major or severe damage. Louisiana also estimated these funds would provide an average grant award of $60,109 per homeowner. 
The LRA launched the Road Home homeowner assistance program in August 2006. Under the program, homeowners who decide to stay in Louisiana and rebuild are eligible for the full amount of grant assistance--up to $150,000. Aside from the elderly, residents who choose to sell their homes and leave the state will have their grant awards reduced by 40 percent, while residents who did not have insurance at the time of the hurricanes will have their grant awards reduced by 30 percent. To receive compensation, homeowners must comply with applicable code and zoning requirements and FEMA advisory base flood elevations when rebuilding and agree to use their home as a primary residence at some point during a 3-year period following closing. Further, the amount of compensation that homeowners can receive depends on the value of their homes before the storms and the amount of flood or wind damage that was not covered by insurance or other forms of assistance. As of July 16, 2007, the Road Home program had received 158,489 applications and had held 36,655 closings with an average award amount of $74,216. With the number of applications exceeding initial estimates and average award amounts higher than expected, recent concerns have been raised about a potential funding shortfall and the Road Home program's ability to achieve its objective of compensating all eligible homeowners. Concerns over the potential shortfall have led to questions about the Road Home program's policy to pay for uninsured wind damage instead of limiting compensation to flood damage. In recent congressional hearings, the Executive Director of the LRA testified that the Road Home program will require additional funds to compensate all eligible homeowners, citing a higher than projected number of homeowners applying to the program, higher costs for homeowner repairs, and a smaller percentage of private insurance payouts than expected. 
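The award-reduction rules described above can be sketched as follows. The $150,000 cap and the 40 and 30 percent reductions are those cited in the text; the base-award input stands in for the program's actual calculation from pre-storm home value and uncovered damage, and the assumption that the two reductions combine multiplicatively is ours, not the program's stated method.

```python
MAX_GRANT = 150_000  # Road Home cap cited in the text

def road_home_grant(uncompensated_loss, stays_in_state, had_insurance,
                    is_elderly=False):
    """Sketch of the Road Home reduction rules.

    uncompensated_loss is a simplified stand-in for the program's
    calculation from pre-storm value and uncovered flood/wind damage.
    """
    grant = min(uncompensated_loss, MAX_GRANT)
    if not stays_in_state and not is_elderly:
        grant *= 0.60   # 40% reduction for selling and leaving the state
    if not had_insurance:
        grant *= 0.70   # 30% reduction for lacking insurance
    return round(grant)

# Insured homeowner with $100,000 in uncovered damage who stays and rebuilds:
print(road_home_grant(100_000, stays_in_state=True, had_insurance=True))    # 100000

# The same loss for an uninsured homeowner who sells and leaves the state:
print(road_home_grant(100_000, stays_in_state=False, had_insurance=False))  # 42000
```

The sketch makes visible why average awards are sensitive to program assumptions: the same dollar loss can yield very different grants depending on insurance status and the homeowner's decision to stay, which bears directly on the shortfall estimates discussed in the text.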
According to the Federal Coordinator for Gulf Coast Rebuilding, CDBG funds were allocated to Louisiana on the basis of a negotiation with the state conducted between January and February 2006. That negotiation considered the provision of federal funding for the state's need to conduct a homeowner assistance program covering homes that experienced major or severe damage from flooding. The state requested additional federal funding at that time to expand the program to assist homeowners who experienced only wind damage. That request was denied, as were similar requests from other Gulf Coast states, such as Texas. The Administration requested the negotiated amount from Congress on February 15, 2006. Congress approved that amount and it was signed into law by the President on June 15, 2006. Subsequently, Louisiana announced the expansion of the Road Home program to cover damage exclusively from wind, regardless of the stated intention of the federal allocation but fully within its statutory authority. In addition, the Executive Director of the LRA testified that Louisiana had not received $1.2 billion in funds from FEMA--assistance that had been part of the Road Home program's original funding design. Specifically, the state expected FEMA to provide grant assistance through its Hazard Mitigation Grant Program (HMGP)--a program that generally provides assistance to address long-term community safety needs. Louisiana had planned to use this funding to assist homeowners with meeting elevation standards and other storm protection measures as they rebuilt their homes. However, FEMA has asserted that it cannot release the money because the Road Home program discriminates against younger residents. 
Specifically, the program exempts elderly recipients from the 40 percent grant reduction if they choose to leave the state or do not agree to reside in their home as a primary residence at some point during a 3-year period. Although we have not assessed their assumptions, recent estimates from the Road Home program and Louisiana's state legislative auditor's office place the potential shortfall in the range of $2.9 billion to $5 billion. While these issues will not be immediately resolved, they raise a number of questions about the potential demands for additional federal funding for the states' rebuilding efforts. Our ongoing work on various aspects of the CDBG program--including a review of how the affected states developed their funding levels and priorities--will provide insights into these issues. In Mississippi, Katrina's storm surge destroyed tens of thousands of homes, many of which were located outside FEMA's designated flood plain and not covered by flood insurance. Using about $3 billion in CDBG funds, Mississippi developed a two-phase program to target homeowners who suffered losses due to the storm surge. Accordingly, Phase I of the program was designed to compensate homeowners whose properties were located outside the floodplain and who had maintained at least hazard insurance. Eligible for up to $150,000 in compensation, these homeowners were not subject to a requirement to rebuild. Phase II of the program is designed to award grants to those who received storm surge damage, regardless of whether they lived inside or outside the flood zone or had maintained insurance on their homes. Eligible applicants must have an income at or below 120 percent of the Area Median Income (AMI). Eligible for up to $100,000 in grant awards, these homeowners are not subject to a requirement to rebuild. 
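Mississippi's two-phase structure can be sketched in the same way. The caps and the 120 percent of AMI threshold are those cited in the text; the classification logic, the uninsured reduction applied to the cap, and the special-needs exemption handling are simplifications, and real eligibility involves documented storm-surge damage and other criteria the sketch omits.

```python
def mississippi_grant_cap(outside_floodplain, had_hazard_insurance,
                          income, area_median_income, special_needs=False):
    """Classify a homeowner under the two-phase Mississippi program
    and return (phase, maximum grant). Illustrative simplification only.
    """
    # Phase I: outside the floodplain and maintained hazard insurance.
    if outside_floodplain and had_hazard_insurance:
        return ("Phase I", 150_000)
    # Phase II: storm surge damage regardless of location or insurance,
    # but income must be at or below 120 percent of AMI.
    if income <= 1.2 * area_median_income:
        cap = 100_000
        if not had_hazard_insurance and not special_needs:
            cap = round(cap * 0.70)  # 30% reduction for lacking insurance
        return ("Phase II", cap)
    return (None, 0)

# Insured homeowner outside the floodplain qualifies for Phase I:
print(mississippi_grant_cap(True, True, 60_000, 50_000))    # ('Phase I', 150000)

# Uninsured homeowner at 90% of AMI falls under Phase II with the reduction:
print(mississippi_grant_cap(False, False, 45_000, 50_000))  # ('Phase II', 70000)
```

Comparing this sketch with the Road Home one above shows how differently the two states targeted the same CDBG flexibility: Mississippi screens on location, insurance, and income, while Louisiana adjusts a single award for behavior and insurance status.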
In addition, homeowners who do not have insurance will have their grant reduced by 30 percent, although this penalty does not apply to the "special needs" populations as defined by the state (i.e., elderly, disabled, and low-income). As of July 18, 2007, Mississippi had received 19,277 applications for Phase I of its program and awarded payments to 13,419 eligible homeowners with an average award amount of $72,062. In addition, Mississippi had received 7,424 applications for Phase II of its program and had moved an additional 4,130 applications that did not qualify for Phase I assistance to Phase II. The state had awarded 234 grants to eligible homeowners in Phase II with an average award amount of $69,448. The National Flood Insurance Program (NFIP) incurred unprecedented storm losses from the 2005 hurricane season. NFIP estimated that it had paid approximately $15.7 billion in flood insurance claims as of January 31, 2007, encompassing approximately 99 percent of all flood claims received. The intent of the NFIP is to pool risk, minimize costs, and distribute burdens equitably among those who will be protected and the general public. The NFIP, by design, is not actuarially sound. Nonetheless, until recent years, the program was largely successful in paying flood losses and operating expenses with policy premium revenues--the funds paid by policyholders for their annual flood insurance coverage. However, because the program's premium rates have been set to cover losses in an average year based on program experience that did not include any catastrophic losses, the program has been unable to build sufficient reserves to meet future expected flood losses. Historically, the NFIP has been able to repay funds borrowed from the Treasury to meet its claims obligations. 
However, the magnitude and severity of losses from Hurricane Katrina and other 2005 hurricanes required the NFIP to obtain borrowing authority of $20.8 billion from the Treasury, an amount NFIP is unlikely to be able to repay while paying future claims with its current premium income of about $2 billion annually. In addition to the federal funding challenge created by the payment of claims, key concerns raised from the response to the 2005 hurricane season include whether some property-casualty insurance claims for wind-related damages were improperly shifted to NFIP at the expense of taxpayers. For properties subjected to both high winds and flooding, determinations must be made to assess the damages caused by wind, which may be covered through a property-casualty homeowners policy, and the damages caused by flooding, which may be covered by NFIP. Disputes over coverage between policyholders and property-casualty insurers from the 2005 hurricane season highlight the challenges of determining the appropriateness of claims for multiple-peril events. NFIP may continue to face challenges in the future when servicing and validating flood claims from disasters such as hurricanes that may involve both flood and wind damages. Our ongoing work addresses insurance issues related to wind versus flood damages, including a review of how such determinations are made, who is making these determinations and how they are regulated, and the ability of FEMA to verify the accuracy of flood insurance claims payments based on the wind and flood damage determinations. Congress has appropriated more than $8 billion to the U.S. Army Corps of Engineers (Corps) for hurricane protection projects in the Gulf Coast. These funds cover repair, restoration, and construction of levees and floodwalls as well as other hurricane protection and flood control projects. These projects are expected to take years and require billions of dollars to complete. 
Estimated total costs for hurricane protection projects are unknown because the Corps is also conducting a study of flood control, coastal restoration, and hurricane protection measures for the southeastern Louisiana coastal region as required by the 2006 Energy and Water Development Appropriations Act and Department of Defense Appropriations Act. The Corps must propose design and technical requirements to protect the region from a Category 5 hurricane. According to the Corps, alternatives being considered include a structural design consisting of a contiguous line of earthen or concrete walls along southern coastal Louisiana, a nonstructural alternative involving only environmental or coastal restoration measures, or a combination of those alternatives. The Corps' final proposal is due in December 2007. Although the cost to provide a Category 5 level of protection for the southeastern Louisiana coastal region has not yet been determined, these costs would be in addition to the more than $8 billion already provided to the Corps. The Corps' December 2007 proposal will also influence future federal funding for coastal wetlands and barrier islands restoration. Since the 1930s, coastal Louisiana has lost more than 1.2 million acres of wetlands, at a rate of 25-35 square miles per year, leaving the Gulf Coast exposed to destructive storm surge. Various preliminary estimates ranging from $15 billion to $45 billion have been made of the ultimate cost to complete these restoration efforts. However, until the Corps develops its plans and the state and local jurisdictions agree on what needs to be done, no reliable estimate is available. 
We are conducting work to understand what coastal restoration alternatives have been identified and how these alternatives would integrate with other flood control and hurricane protection measures, the challenges and estimated costs to restore Louisiana's coastal wetlands, and the opinions of scientists and engineers on the practicality and achievability of large-scale, comprehensive plans and strategies to restore coastal wetlands to the scale necessary to protect coastal Louisiana. The Gulf Opportunity Zone Act of 2005 provides tax benefits to assist in the recovery from the Gulf Coast hurricanes. From a budgetary perspective, most tax expenditure programs, such as the GO Zones, are comparable to mandatory spending for entitlement programs, in that federal funds flow based on eligibility and formulas specified in authorizing legislation. The 5-year cost of the GO Zones is estimated at $8 billion and the 10-year cost is estimated to be $9 billion. Since Congress and the President must change substantive law to change the cost of these programs, they are relatively uncontrollable on an annual basis. The GO Zone tax benefits chiefly extend, with some modifications, existing tax provisions such as expensing capital expenditures, the Low Income Housing Tax Credit (LIHTC), tax-exempt bonds, and the New Markets Tax Credit (NMTC). The 2005 Act increased limitations in expensing provisions for qualified GO Zone properties. The Act also increased the state limitations in Alabama, Louisiana, and Mississippi on the amount of LIHTC that can be allocated for low-income housing properties in GO Zones. Further, the Act allowed these states to issue tax-exempt GO Zone bonds for qualifying residential and nonresidential properties. Finally, the NMTC limitations on the total amount of credits allocated yearly were also increased for qualifying low-income community investments in GO Zones. 
We have a congressional mandate to review the practices employed by the states and local governments in allocating and utilizing the tax incentives provided in the Gulf Opportunity Zone Act of 2005. We have also issued reports on the tax provisions, such as LIHTC and NMTC, now extended to the GO Zones by the 2005 Act. Rebuilding efforts in the Gulf Coast continue amidst questions regarding the total cost of federal assistance, the extent to which federal funds will address the rebuilding demands of the region, and the many decisions left to be made by multiple levels of government. As residents, local and state leaders and federal officials struggle to respond to these questions, their responses lay a foundation for the future of the Gulf Coast. As states and localities continue to rebuild, there are difficult policy decisions that will confront Congress about the federal government's continued contribution to the rebuilding effort and the role it might play over the long-term in an era of competing priorities. Congress will be faced with many questions as it continues to carry out its critical oversight function in reviewing funding for Gulf Coast rebuilding efforts. Our ongoing and preliminary work on Gulf Coast rebuilding suggests the following questions: How much could it ultimately cost to rebuild the Gulf Coast and how much of this cost should the federal government bear? How effective are current funding delivery mechanisms--such as PA and CDBG--and should they be modified or supplemented by other mechanisms? What options exist to effectively build in federal oversight to accompany the receipt of federal funds, particularly as federal funding has shifted from emergency response to rebuilding? How can the federal government further partner with state and local governments and the nonprofit and private sectors to leverage public investment in rebuilding? 
What are the "lessons learned" from the Gulf Coast hurricanes, and what changes need to be made to help ensure a more timely and effective rebuilding effort in the future? Mr. Chairman and Members of the committee, this concludes my statement. I would be happy to respond to any questions you may have at this time. For information about this testimony, please contact Stanley J. Czerwinski, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Kathleen Boggs, Peter Del Toro, Jeffrey Miller, Carol Patey, Brenda Rabinowitz, Michelle Sager, and Robert Yetvin. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The devastation caused by the Gulf Coast hurricanes presents the nation with unprecedented challenges as well as opportunities to reexamine shared responsibility among all levels of government. All levels of government, together with the private and nonprofit sectors, will need to play a critical role in the process of choosing what, where, and how to rebuild. Agreeing on what the costs are, what federal funds have been provided, and who will bear the costs will be key to the overall rebuilding effort. This testimony (1) places federal assistance provided to date in the context of damage estimates for the Gulf Coast, and (2) discusses key federal programs that provide rebuilding assistance to the Gulf Coast states. In doing so, GAO highlights aspects of rebuilding likely to place continued demands on federal resources. GAO visited the Gulf Coast region, reviewed state and local documents, and interviewed federal, state, and local officials. GAO's ongoing work on these issues focuses on the use of federal rebuilding funds and administration of federal programs in the Gulf Coast region. To respond to the Gulf Coast devastation, the federal government has already committed a historically high level of resources--more than $116 billion--through an array of grants, loan subsidies, and tax relief and incentives. A substantial portion of this assistance was directed to emergency assistance and meeting short-term needs arising from the hurricanes, leaving a smaller portion for longer-term rebuilding. To understand the long-term financial implications of Gulf Coast rebuilding, it is helpful to view potential federal assistance within the context of overall estimates of the damages incurred by the region. Some estimates put capital losses at a range of $70 billion to more than $150 billion, while the state of Louisiana estimated that the economic effect on its state alone could reach $200 billion. 
These estimates raise questions regarding how much additional assistance may be needed to help the Gulf Coast continue to rebuild, and who should be responsible for providing the related resources. Demands for additional federal resources to rebuild the Gulf Coast are likely to continue. The bulk of federal rebuilding assistance provided to the Gulf Coast states funds two key programs--the Federal Emergency Management Agency's Public Assistance (PA) program and the Department of Housing and Urban Development's Community Development Block Grant (CDBG) program. In addition to funding PA and CDBG, the federal government's recovery and rebuilding assistance also includes payouts from the National Flood Insurance Program as well as funds for levee restoration and repair, coastal wetlands and barrier islands restoration, and benefits provided through Gulf Opportunity Zone tax expenditures. As states and localities continue to rebuild, difficult policy decisions will confront Congress about the federal government's continued contribution to the rebuilding effort and the role it might play over the long term in an era of competing priorities. GAO's ongoing and preliminary work on Gulf Coast rebuilding suggests the following questions:
How much could it ultimately cost to rebuild the Gulf Coast, and how much of this cost should the federal government bear?
How effective are current funding delivery mechanisms--such as PA and CDBG--and should they be modified or supplemented by other mechanisms?
What options exist to effectively build in federal oversight to accompany the receipt of federal funds, particularly as federal funding has shifted from emergency response to rebuilding?
How can the federal government further partner with state and local governments and the nonprofit and private sectors to leverage public investment in rebuilding?
What are the "lessons learned" from the Gulf Coast hurricanes, and what changes need to be made to help ensure a more timely and effective rebuilding effort in the future?
DOE is the steward of a nationwide complex of facilities created during World War II to research, produce, and test nuclear weapons. Now that the United States is reducing its nuclear arsenal, DOE has shifted its focus toward cleaning up the enormous quantities of radioactive and hazardous waste resulting from weapons production. This waste totals almost 30 million cubic meters--enough to cover a football field 4 miles deep. DOE expects that environmental restoration will continue until 2070 before all of its problems have been addressed. Remediation activities at DOE's facilities are governed by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended, and the Resource Conservation and Recovery Act (RCRA) of 1976, as amended. These laws lay out requirements for identifying waste sites, studying the extent of their contamination and identifying possible remedies, and involving the public in making decisions about the sites. At each facility we visited, DOE has signed an interagency agreement with the Environmental Protection Agency (EPA) and state regulators laying out the facility's schedule for meeting the requirements of CERCLA and other environmental laws. CERCLA offers three methods for determining how a waste site will be remediated: the full CERCLA process, interim remedial measures, and removal actions. For each of these methods, table 1 shows the key documents and related activities required before remediation can begin. Generally, DOE's guidance recommends that EPA and state regulators be involved at each of these steps. In addition, other documents not requiring regulatory approval frequently may supplement the documents shown. For example, for the full CERCLA process, DOE often issues reports for each phase of the remedial investigation for a group of waste sites, and before embarking on a remedial design, DOE generally prepares a remedial design work plan.
Removal actions are the most abbreviated of the three planning processes. A removal action can be used to plan for remediating a waste site to the point that no further action is needed, or it can serve as a stopgap measure for a waste site that presents an urgent threat to the public or the environment. At some point after the remediation is concluded, a removal action, like an interim remedial measure, requires a record of decision to certify that the site is clean and no further action is required. Because removal actions generally require much less characterization and planning than other approaches except in emergency situations, they are most effective at sites where the contaminants and the probable remedy are relatively well known. Although removal actions in the private sector are limited to projects costing $2 million or less and taking 12 months or less, these limits do not apply at federal facilities. Available data indicate that removal actions save time and money compared with other planning approaches. Furthermore, removal actions have been used across a wide variety of environmental restoration projects, including the same kinds of projects that have been planned using the other approaches. Removal actions may also provide other benefits, such as reducing continued risks to the environment by moving projects more quickly to actual cleanups. Through January 1996, the five facilities we reviewed had a total of 39 removal actions either completed or under way. Three facilities (INEL, Hanford, and Rocky Flats) provided data allowing some comparisons of the relative time and cost involved in removal actions and other types of planning efforts. As figure 1 shows, at all three facilities the average time needed for planning was considerably shorter under removal actions than under the other approaches. 
At INEL, for example, planning for cleanups under removal actions averaged 4.4 months, compared with 15.2 months under interim remedial measures and 25.6 months under the full CERCLA process. Cost comparisons show the same pattern. As figure 2 shows, at INEL and Rocky Flats, under removal actions, the cost for characterization and studies before cleanup averaged $140,000, compared with almost $2 million under either interim remedial measures or the full CERCLA process. More limited data for Hanford support the same conclusion. The last five removal actions cost an average of about $790,000 for cleanup planning. These sites are now clean, or remediation is under way. In contrast, for the 18 areas along the Columbia River where Hanford plans to use interim remedial measures to manage the cleanup, the cost of preparation averaged $4.4 million per area between October 1991 and September 1995. Remediation has not begun at any of these areas. When examined on a project-by-project basis, planning for removal actions also appears to be cheaper and faster than planning at comparable sites for the other environmental restoration processes. Many of DOE's waste sites fit into one of three categories: burial grounds, contaminated soil, or contaminated water. Burial grounds may contain radioactive and/or hazardous solid and/or liquid waste. Buried in them are such things as barrels of chemicals and other material and equipment from DOE facilities. (See fig. 3.) Soil may have been contaminated by leaks or spills or by using liquid waste disposal facilities, such as trenches and waste ponds, to disperse contaminated liquids. (See fig. 4.) Surface water or groundwater may have been contaminated by radioactive or hazardous materials leaching through the soil from spills and leaks or through normal operations. (See fig. 5.) At Hanford and INEL, where more complete comparative information was available, we analyzed the removal actions that fell into these categories. 
We found four instances in which removal actions had been used at sites where conditions were reasonably comparable to those at sites that had been addressed under interim remedial measures (see table 2). In each case, planning for the remediation was accomplished much more quickly and at substantially less cost using a removal action. While the projects being compared are not identical, their similarities provide a reasonable basis for comparing the relative time and cost required to complete the planning that precedes remediation. The relative speed of removal actions can provide other advantages to DOE. Because removal actions progress to actual cleanup more quickly than other CERCLA processes, removal actions can provide information about waste sites that is useful in focusing other types of remediation. For example, one removal action at Hanford involved cleaning up a liquid disposal area near one of the shutdown reactors. The project manager for the removal action said important information obtained during the removal action on the extent and spread of contamination through the soil will be used to plan and conduct cleanups near other shutdown reactors, saving both time and money. Removal actions may also reduce the cumulative risk to human health and the environment. For example, Hanford's removal action in a trench near the Columbia River reduced the concentration of uranium in the groundwater from up to 28 times the drinking water standard to below the drinking water standard. Without the removal action, uranium would have continued to leach into the groundwater for at least 3 years before a planned water treatment facility was completed. At Oak Ridge, the EPA region 4 administrator praised a recent removal action that successfully reduced radioactive strontium releases by about 40 percent. 
He noted that the projects were completed in less time, at less cost, and with equal or greater effectiveness than the "typical" decision-making process would have allowed. He also attributed the results to teamwork and cooperation between DOE and the regulators. Finally, removal actions may allow DOE to "pull in its fences" by cleaning up isolated waste sites on the outskirts of a facility and thereby reduce the number of acres requiring DOE's control. For example, two removal actions addressing waste sites on remote portions of the Hanford reservation allowed DOE to complete the remediation of 27 percent, or 153 square miles, of Hanford's total land area. In February 1996, a record of decision was issued requiring no further cleanup for these areas. Although DOE's guidance calls for using removal actions where appropriate, the use of these actions varies widely by facility--from greater use at two locations, to increasing use at one location, to very limited use at the remaining two locations. While many contaminated waste sites are similar in type to those already remediated through removal actions, DOE officials have given several reasons for not using removal actions more often. They have noted, for example, that the interagency agreements and contracts governing DOE's environmental restoration do not encourage the use of removal actions, and they expressed a preference for using removal actions only in urgent situations. Not all waste sites may best be addressed through removal actions; however, there are still additional opportunities to accelerate the progress of DOE's environmental restoration through wider use of this approach. In August 1994, DOE and EPA adopted a policy encouraging the use of streamlined approaches to remediate waste sites. The policy encourages DOE managers to use removal actions, among other tools, when doing so "will achieve results comparable to a remedial action, but which may be completed in less time." 
The policy recommends that managers give strong consideration to using removal actions in nonemergency situations. DOE issued further guidance to its facilities in November 1995, reiterating that removal actions and other accelerated approaches should be based on consensus between DOE and its regulators. At the five facilities we reviewed, the response to DOE's policy has varied. Three facilities are adjusting their environmental restoration strategies to make greater use of removal actions, while the other two continue to plan only a limited role for the approach. Both Rocky Flats and INEL are planning to use removal actions to address significant portions of their waste sites. A Rocky Flats manager responsible for cleanup estimates that 27 waste sites will require remediation, and she plans to use removal actions for about half of them. She said using removal actions will be important to accomplishing remediation milestones because DOE officials at Rocky Flats proposed a new interagency agreement requiring several waste sites to be remediated each year. These specific remediation goals were also reflected in DOE's contract with the contractor responsible for the remediation at Rocky Flats. For example, in fiscal year 1996 the contractor is required to clean up three high-priority waste sites at the plant. The contractor's manager responsible for environmental restoration said that without using removal actions, these goals would be difficult or impossible to achieve. The state regulator for Rocky Flats added that removal actions will permit DOE to do more with fewer resources. DOE and regulatory officials said that the old interagency agreement focused almost exclusively on completing milestones required under the full CERCLA planning process. As a result, they said, the old agreement made it difficult to use removal actions. At INEL, DOE officials have the flexibility under their agreement to use removal actions where appropriate. 
Since 1993, INEL has reallocated funds and has conducted nine removal actions, including remediating contaminated soil at several sites. INEL has three other removal actions planned, including removing almost 300,000 cubic yards of contaminated soil, recovering ammunition and other ordnance scattered over several square miles, and removing 11 underground storage tanks of up to 50,000 gallons each. DOE's Director for Environmental Restoration at INEL said the facility uses removal actions to maximize the cleanup that can be achieved with available funds. However, she noted that at some point the results of the removal action still need to be evaluated under the CERCLA process to ensure that no further action is required. Managers from Idaho's Department of Health and Welfare who oversee environmental restoration at INEL said they consider removal actions to be effective and to save both time and money. They said that if DOE asked to use removal actions instead of other more extensive CERCLA planning processes, they would consider removal actions an acceptable alternative. While Oak Ridge has not relied extensively on removal actions in the past, officials at the facility now expect to use removal actions more frequently. Between fiscal years 1991 and 1995, Oak Ridge conducted seven removal actions. However, Oak Ridge has four removal actions planned for fiscal year 1996 and has compiled a list of 10 candidate removal actions to be carried out in the next 2 fiscal years. DOE officials believe that removal actions should be used when they can be done quickly and cost-effectively. Compared to the other three facilities, Hanford and Savannah River plan to rely less on the use of removal actions. At Hanford, officials previously pursued removal actions actively, but they are no longer doing so. 
In 1991, Hanford issued a cleanup strategy (called the Past Practice Strategy) proposing that all waste sites be considered as potential candidates for the removal action approach. Hanford had a contractor group dedicated to selecting, planning, and conducting removal actions. This group identified about 25 projects as candidates for removal actions. Seven actions were initiated before the group was dissolved in 1993 as part of a reorganization of responsibilities. Since then, although the Past Practice Strategy encouraging the use of removal actions has remained in effect, Hanford has initiated only one removal action. DOE, EPA, and state regulators have agreed to pursue interim remedial measures as the primary CERCLA planning process at the installation. Likewise, Savannah River has made only limited use of removal actions. Since fiscal year 1991, Savannah River has performed seven removal actions. None of these actions has been intended to serve as the final remediation for the waste site. Savannah River staff plan three additional removal actions for fiscal year 1996, but these projects, much like the removal actions carried out in the past, are stopgap measures, designed to control vegetation on three waste sites, and are not intended to be final actions. Of the more than 3,000 waste sites located at the five facilities, many are similar to those that have been addressed through removal actions. The 39 removal actions we studied addressed 4 burial grounds, 5 cases of groundwater or surface water contamination, and 21 instances of soil contamination. While many untreated sites may require no cleanup, hundreds will require further action. Many involve liquid waste disposal facilities, burial grounds, contaminated soil, and contaminated groundwater--conditions similar to those at waste sites that DOE has addressed through removal actions. 
For example, of the 498 identified waste sites along the Columbia River at Hanford, 54 are burial grounds and 108 are liquid waste disposal facilities. Our analysis and discussions with DOE and regulatory officials at the facilities we visited suggest that six factors limit the wider use of removal actions. Removal actions are not part of the agreements with regulators or DOE contractors. Generally, interagency agreements have not included removal actions. Instead, these agreements have often incorporated the steps included in lengthier CERCLA planning processes. The extensive planning and evaluation processes characteristic of the full CERCLA and interim remedial approaches, including the preparation of work plans and various reports, were specified in each of the agreements we reviewed. For example, at Savannah River, DOE and its regulators established milestones for fiscal year 1996 calling for the submission of almost 50 documents required under CERCLA, such as remedial investigation reports and proposed plans. Like the interagency agreements, DOE's contracts emphasize completing steps in the process rather than performing cleanup actions, and they provide few specific incentives for remediation. For example, at Savannah River the incentive goal is tied to meeting the interagency agreement milestones on time and doing the work at less cost. Similarly, at Hanford, over half of the incentive is tied to improving the contractor's operating processes, and less than 20 percent is tied solely to performing the actual remediation. In contrast, in order to accomplish remediation more quickly, DOE and the regulators at Rocky Flats are revising their agreement to establish remediation-based instead of process-based milestones. In the interim, they have agreed to remediate two trenches in fiscal year 1996. DOE is already implementing this change with its Rocky Flats contractor. 
In fiscal year 1996, the contractor will remediate the two trenches and one other waste site as directed by DOE. The contractor said this results-oriented strategy will force the greater use of removal actions because none of the other planning approaches can be used to complete the work on schedule. At Oak Ridge, officials attribute their more frequent use of removal actions to a change in their interagency agreement. The agreement now requires regulators to be involved in removal actions. Oak Ridge officials believe the change has increased the regulators' acceptance of removal actions. Perceptions about when removal actions should be used are incorrect. Some DOE and regulatory officials told us that they believe removal actions are intended for emergency situations or for planning relatively small, uncomplicated remediation projects, not for "mainstream" cleanups. For example, at Hanford, DOE conducted a time-critical action to remove buried barrels containing solvents because the barrels were leaking and threatened to contaminate the Columbia River. A deputy director of environmental restoration at Hanford said that he would consider using a removal action in the future if a waste site were continuing to release contamination that posed a significant threat to human health or the environment. However, he does not view removal actions as appropriate for Hanford's normal cleanup operations at sites where no urgent threat exists. The view that removal actions should be limited to urgent or small, uncomplicated remediation projects is not supported by DOE's and EPA's guidance or by experiences at the sites we visited. As discussed above, DOE and EPA jointly issued policy in 1994 encouraging the use of removal actions in nonemergency situations as long as CERCLA's regulations were followed. Furthermore, DOE has successfully used removal actions when an urgent threat has not existed or when large or complex problems have required attention. 
Preference is given to streamlining full CERCLA and interim remedial planning approaches. As a way to shorten the time before remediation can begin, officials at some sites are concentrating on shortening the steps of lengthier CERCLA planning processes. These officials estimate that the streamlining will reduce the time required in various planning steps. For example, DOE officials at Savannah River estimate that by streamlining the full CERCLA process they will be able to reduce the average time required to plan for a cleanup from 4 years to 3 years. However, planning and evaluation will still take significantly longer under streamlined CERCLA processes than under removal actions. At Oak Ridge, for example, the expedited CERCLA process laid out in the site's interagency agreement is expected to take 6 years. In some cases, Oak Ridge officials expect to further shorten the full CERCLA process to about 3.5 years. However, under Oak Ridge's interagency agreement, removal actions are scheduled to take only 14 months. At Savannah River, the streamlined planning process is expected to take 3 years, whereas removal actions are estimated to require only 6 to 12 months. At Hanford, DOE and its regulators have agreed to eliminate certain documents required by the interim remedial process, but they were unable to estimate how much time and money would be saved. Planning has progressed too far to benefit from the simpler removal action process. Several DOE officials at these facilities said that, for many waste sites, the investigative studies for the full CERCLA and interim remedial processes have progressed so far that there would be little benefit from switching to removal actions. For example, officials at Hanford pointed out that they expect most high-priority waste sites in the environmentally sensitive area next to the Columbia River to be ready for cleanup in 1 to 3 years, making removal actions unnecessary. 
We found instances, however, in which the use of removal actions has been effective even after planning for remediation under lengthier processes has been partially completed. Officials at Rocky Flats and INEL used information gathered under lengthier CERCLA processes as the basis for removal actions, thereby accomplishing these actions more quickly than they would otherwise have done. For example, INEL officials used the remedial investigation report from the full CERCLA process as the engineering evaluation for a removal action to remove radioactively contaminated soil from six waste sites. INEL officials estimate that changing to a removal action speeded the actual remediation by several years and saved $2.6 million. At Oak Ridge, the state regulator said that at some sites cleanups now under the full CERCLA process may be converted to removal actions. He said that Oak Ridge's focus is increasingly on getting into the field. Limited planning may increase the risk that an incorrect remedy will be chosen. Frequently, contamination at DOE waste sites is not well known. Of the 39 removal actions we reviewed, 1 incurred added or unnecessary costs because the actual conditions at the site were different from the expected ones. At Hanford, DOE conducted a removal action to excavate old drums thought to contain residues of a hazardous chemical. Upon excavation, DOE found no significant contamination in the pit. Fuller characterization before excavating the site might have helped to avoid the expense of excavation. However, a state regulator at Hanford said that full characterization of the burial ground would have cost more than the excavation. A removal action may not be the final solution. A final issue that was raised at several facilities was that, in contrast to the full CERCLA process, a removal action is an interim solution that must be documented through a record of decision after the action has been completed. 
EPA officials said that potential problems with final decisions could be significantly reduced by encouraging public participation and close cooperation between the regulators and DOE. DOE officials at INEL also stressed the importance of securing the regulators' agreement with the proposed removal actions, particularly at sites where little is known of the contamination and the effectiveness of the planned remedial technology is unclear. DOE officials also expressed concern that when the final decision is proposed to the public and the regulators, additional remediation could be required. Of the 39 removal actions we studied, 26 were intended to be the final solution. None of the 26 is expected to require additional remediation when the record of decision is completed, but only one record of decision covering 4 removal actions at Hanford has been completed. In addition, interim remedial measures, which are widely used by DOE, also require a record of decision after the measures have been implemented. More extensive use of removal actions would provide a means for speeding the planning process and devoting more environmental restoration dollars to actual remediation at sites. We recognize that not every waste site is appropriate for the abbreviated planning that takes place under removal actions; however, the successful use of removal actions at a variety of environmental restoration sites throughout the DOE complex indicates that additional opportunities exist to employ this cost- and time-saving approach. We recommend that the Secretary of Energy direct the managers of DOE's facilities, working with their regulators, to reevaluate their environmental restoration strategies to ensure the maximum possible use of removal actions. 
Where appropriate, this action may include systematically evaluating each waste site where actual cleanup has not yet begun, including those sites where a lengthier assessment process is under way, to identify the sites where using a removal action would be feasible and cost-effective; seeking agreement to eliminate requirements in existing interagency agreements that favor lengthier review and assessment processes in exchange for a commitment to achieving significant cleanup progress through removal actions; and identifying and implementing incentives for DOE's contractors that would increase the emphasis on, and the reward for, pursuing removal actions where appropriate. We provided a draft of this report to DOE and EPA for their review and comment. We discussed the report with officials from DOE's Office of Environmental Restoration, including the Director of the Office of Program Integration, and with officials from EPA's Federal Facilities Enforcement Office, including the Senior Enforcement Counsel. Overall, the officials agreed that the report was accurate. Both agencies provided some technical comments that we have incorporated in the report. DOE agreed with our conclusion that removal actions can be completed in less time and are less costly than other approaches. However, DOE said that the report implies that DOE has more discretion to initiate removal actions than the Department believes that it has. DOE said that the report did not give enough emphasis to the barriers, such as the requirements in interagency agreements, that the Department faces in using removal actions at more waste sites. DOE also noted that it is supporting revisions to CERCLA to increase its flexibility. We have modified our report to reflect DOE's concerns; however, we continue to believe that DOE can do more to overcome these barriers. 
EPA said that it generally supports the increased use of removal actions where it and/or state regulators have had the opportunity to coordinate with DOE. EPA suggested that removal actions could be enhanced by closer cooperation between regulators and DOE through the use of teams and early efforts to include the public in decisions about using removals. EPA also suggested that DOE document the savings in time and cost from using removal actions by collecting comparative data to improve the public's and regulators' acceptance of removal actions. We agree that these are steps that DOE should consider. We conducted our review at Hanford in Washington State, INEL in Idaho, Oak Ridge Reservation in Tennessee, Savannah River in South Carolina, and Rocky Flats in Colorado. We selected these facilities because DOE estimates that they will account for about 94 percent of the total cost of restoring the DOE complex. To determine whether removal actions have been successful in speeding cleanups, reducing costs, and providing other benefits, we attempted at each facility to gather data on the time spent and the costs incurred to plan waste sites' remediation using both removal actions and lengthier CERCLA processes. We reviewed projects' files, toured various sites restored through the removal action process, analyzed official records, and reviewed various reports. At Oak Ridge, Savannah River, and Hanford, cost data were not available on all projects. At those facilities, we obtained the cost data that were readily available. We also discussed the advantages and disadvantages of removal actions with DOE and contractor officials. To identify additional opportunities for DOE to use removal actions, we compared untreated waste sites to waste sites that had been successfully treated through removal actions. We also interviewed officials at each location and reviewed lists of potential removal actions that had been prepared at some sites. 
To identify potential barriers to the greater use of removal actions, at each location we reviewed agreements with regulators, as well as selected contracts and incentives provided to DOE contractors. We also reviewed relevant statutes and regulations, as well as EPA's and DOE's guidance, and discussed the Department's guidance with DOE's Office of Environmental Guidance. To obtain the Department's perspective on the role of removal actions, we discussed the approach with DOE's Office of Environmental Restoration. We also interviewed state and EPA regulators responsible for activities at the five facilities and EPA officials from the Federal Facilities Enforcement Office. We conducted our work from July 1995 to April 1996 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix I: Chris Abraham, Robert Lilly, James Noel, Delores Parrett, Angela Sanders, Bernice Steinhardt, Stanley Stenersen, and William Swick.
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) use of removal actions to reduce the cost and accelerate the pace of environmental restoration projects. GAO found that: (1) removal actions save money and time compared with other remediation planning approaches; (2) the use of removal actions may provide information that is useful for other types of remediation, reduce the cumulative risk to human health and the environment, and reduce the size of sites under DOE control; (3) the use of removal actions at DOE facilities varies; (4) the use of removal actions is limited because removal actions are not part of interagency agreements with regulators or DOE contractors; (5) some officials believe that removal actions are intended for emergency situations or for planning small remediation projects; (6) officials at some sites are concentrating on streamlining Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and interim remedial planning approaches, but planning and evaluation will still take significantly longer under simpler CERCLA processes; and (7) limited planning may increase the risk that an incorrect remedy will be chosen.
USPTO helps promote industrial and technological progress in the United States and strengthen the national economy by administering the laws relating to patents and trademarks. A critical part of its mission is examining patent applications and issuing patents. A patent is a property right granted by the U.S. government that gives an inventor, generally for 20 years from the date of initial application in the United States, the exclusive right to make, use, offer for sale, or sell the invention in exchange for disclosing it. The number of patent filings with USPTO continues to grow, and the agency projects that, by 2009, it will receive over 450,000 patent applications annually. Patent processing essentially involves three phases: pre-examination, examination, and post-examination. The process begins when an applicant files a patent application and pays a filing fee. During the pre-examination phase, patent office staff document receipt of the application and process the application fee, scan and convert the paper documents to electronic format, and conduct an initial review of the application and classify it by subject matter. During the subsequent examination phase, the application is assigned to a patent examiner with expertise in the subject area who searches existing U.S. and foreign patents, journals, and other literature and, as necessary, contacts the applicant to resolve questions and obtain additional information to determine whether the proposed invention can be patented. Examiners document their determinations on the applications in formal correspondence, referred to as office actions. Applicants may abandon their applications at any time during this process. If the examiner determines that a patent is warranted, a supervisor reviews and approves it and the applicant is informed of the outcome. The application then enters the post-examination phase and, upon payment of an "issue fee," a patent is granted and published.
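The three-phase flow just described can be sketched as a simple state machine. The phase names are taken from the report; the function and transition details below are illustrative assumptions for this sketch, not a description of USPTO's actual systems.

```python
from enum import Enum, auto

class Phase(Enum):
    """Phases of patent processing as described in the report."""
    PRE_EXAMINATION = auto()   # receipt, fee processing, scanning, classification
    EXAMINATION = auto()       # prior-art search, office actions, patentability determination
    POST_EXAMINATION = auto()  # issue fee paid, patent granted and published
    ABANDONED = auto()         # an applicant may abandon at any time

def advance(phase: Phase, abandoned: bool = False) -> Phase:
    """Move an application to its next phase (illustrative transition logic)."""
    if abandoned:
        return Phase.ABANDONED
    transitions = {
        Phase.PRE_EXAMINATION: Phase.EXAMINATION,
        Phase.EXAMINATION: Phase.POST_EXAMINATION,
    }
    # Terminal phases (POST_EXAMINATION, ABANDONED) stay where they are.
    return transitions.get(phase, phase)

# An application that proceeds through all phases:
p = Phase.PRE_EXAMINATION
p = advance(p)  # EXAMINATION
p = advance(p)  # POST_EXAMINATION
```

The point of the sketch is simply that abandonment can interrupt the pipeline at any step, while the two completed phases are terminal.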
Historically, the time from the date that a patent application is filed to the date that the patent is either granted or the application is abandoned has been called "patent pendency." Because of long-standing concerns about the increasing volume and complexity of patent applications, USPTO has been undertaking projects to automate its patent process for about the past two decades. In 1983, the agency began one of its most substantial projects--the Automated Patent System (APS)--with the intent of automating all aspects of the patent process. APS was to be deployed in 1990 and, when completed, consist of five integrated subsystems that would (1) fully automate incoming patent applications; (2) allow examiners to electronically search the text of granted U.S. patents and access selected abstracts of foreign patents; (3) scan and allow examiners to retrieve, display, and print images of U.S. patents; (4) help examiners classify patents; and (5) support on-demand printing of copies of patents. In reporting on APS more than 10 years following its inception, we noted that USPTO had deployed and was operating and maintaining certain parts of the system, supporting text search, limited document imaging, order-entry and patent printing, and classification activities. However, our report raised concerns about the agency's ability to adequately plan and manage this major project, pointing out that its processes for exercising effective management control over APS were weak. Ultimately, USPTO never fully developed and deployed APS to achieve the integrated, end-to-end patent processing system that it envisioned. The agency reported spending approximately $1 billion on this initiative from 1983 through 2002. In addition, in 1998, the agency implemented an Internet-based electronic filing system at a reported cost of $10 million, enabling applicants to submit their applications online.
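The pendency measure defined at the start of this section, the elapsed time from filing to grant or abandonment, is a straightforward date difference. The dates in the sketch below are invented for illustration only.

```python
from datetime import date

def pendency_days(filed: date, disposed: date) -> int:
    """Patent pendency: days from the filing date to the date the patent
    is granted or the application is abandoned (whichever occurs)."""
    return (disposed - filed).days

# Hypothetical application: filed January 2, 2003, granted March 15, 2005.
print(pendency_days(date(2003, 1, 2), date(2005, 3, 15)))
```

Agency-level pendency figures are typically reported as an average of such per-application values.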
Further, through 2002, the agency continued to enhance its capabilities that enabled examiners to search patent images and text, and upgraded its patent application classification and tracking systems. To help the agency address the challenges of reviewing an increased volume of more complex patent applications and of reducing the length of time it takes to process them, Congress passed a law requiring USPTO to improve patent quality, implement electronic government, and reduce pendency. In response to the law, in June 2002, the agency embarked on an aggressive 5-year modernization plan outlined in its 21st Century Strategic Plan, which was updated to include stakeholder input and re-released in February 2003. The strategic plan outlines 38 initiatives related to the patent organization that focus on three crosscutting strategic themes: capability, productivity, and agility. The capability theme focuses on efforts to enhance patent quality through workforce and process improvements; the productivity theme focuses on efforts to decrease the pendency of patent applications; and the agility theme focuses on initiatives to electronically process patent applications. To fully fund the initiatives in its strategic plan, the agency requested authority from Congress to increase the user fees it collects from applicants and to spend all of these fees on patent processing. Legislation enacted in December 2004 increased the fees available to USPTO; however, the increases are only effective for fiscal years 2005 and 2006. As was its intent with APS, USPTO has continued to pursue a paperless, end-to-end, automated patent process. In 2001, the agency initiated its Tools for Electronic Application Management (TEAM) automation project, aiming to deliver an end-to-end capability to process patent applications electronically by fiscal year 2006.
Under the TEAM concept, the agency had planned to integrate its existing electronic filing system and the classification and search capabilities from the earlier APS project with new document management and workflow capabilities, and with image- and text-based processing of patent applications to achieve a sophisticated means of handling documents and tracking patent applications throughout the examination process. By implementing image- and text-based capabilities, the agency had anticipated that patent examiners would be able to view and process applications online, as well as manipulate and annotate text within a patent application, thus eliminating manual functions and improving processing accuracy, reliability, and productivity, as well as the quality of the patents that are granted. With the issuance of its 21st Century Strategic Plan, however, USPTO altered its approach to accomplishing patent automation. The strategic plan, among other things, identified the agency's high-level information technology goals for fully automating the patent process as part of the 5-year modernization effort. It incorporated automation concepts from the TEAM project, but announced an accelerated goal of delivering an operational system to electronically process patent applications by October 1, 2004, earlier than had been scheduled under TEAM. In carrying out its patent automation plans, USPTO has delivered a number of important processing capabilities through the various information systems that it has implemented. For example, an automated search capability, available since 1986, has eliminated the need for patent examiners to manually search for prior art in paper files, and the classification and fee accounting capabilities have facilitated assigning applications to the correct subject areas and managing collections of applicable fees.
In addition, the electronic filing system that has existed since 1998 has enabled applicants to file their applications with the agency via the Internet. Using the Internet, patent applicants also can review the status of their applications online and the public can electronically access and search existing published patents. Further, an imaging system implemented in August 2004, called the Image File Wrapper, has given USPTO the capability to scan patent applications and related documents, which can then be stored in a database and retrieved and reviewed online. The agency's progress in implementing its automated patent functions is illustrated in figure 1. Nonetheless, even with the progress that has been made, collectively, these automated functions have not provided the fully integrated, electronic patent processing capability articulated in the agency's automation plans. Two of the key systems that it is relying on to further enhance its capabilities--the electronic filing system and the Image File Wrapper--have not yielded the processing improvements that the agency has deemed essential to successfully operate in a fully integrated, electronic environment. Specifically, in implementing its electronic filing system, USPTO had projected significant increases in processing efficiencies and quality by providing patent applicants the capability to file online, thus alleviating the need for them to send paper applications to the agency or for patent office staff to manually key application data into the various processing systems. However, even after enhancements in 2002 and 2004, the system did not produce the level of usage among patent filers that the agency had anticipated. 
For example, although USPTO's preliminary justification for acquiring the electronic filing system had projected an estimated usage rate of 30 percent in fiscal year 2004, patent officials reported that, as of April 2005, fewer than 2 percent of all patent applications were being submitted to the agency via this system. As a result, anticipated processing efficiencies and quality improvements through eliminating the manual re-keying of application data have not been realized. In September 2004, USPTO convened a forum of senior officials representing the largest U.S. corporate and patent law firm filers to identify causes of patent applicants' dissatisfaction with the electronic filing system and determine how to increase the number of patents being filed electronically. According to the report resulting from this forum, the majority of participants viewed the system as cumbersome, time-consuming, costly, inherently risky, and lacking a business case to justify its usage. Among the barriers to system usage that the participants specifically identified were (1) users' lack of a perceived benefit from filing applications electronically, (2) liability concerns associated with filers' unsuccessful use of the system or unsuccessful transmission of patent applications to USPTO, and (3) significant disruptions to filers' normal office/corporate processes and workflow caused by factors such as difficulty in using the automated tools and the inability to download necessary software through firewalls. Several concerns raised during the forum mirrored those that USPTO had earlier identified in a 1997 analysis of a prototype for electronic filing. However, at the time of our review, the agency had not completed plans to show how it would address the concerns regarding use of the electronic filing system. The agency's Image File Wrapper also had not resulted in critical patent processing improvements.
The system includes image technology for storage and maintenance of records associated with patent applications and provides the capability to scan each page of a submitted paper application and convert the pages into electronic images. Patent examiners in a majority of the focus groups that we conducted commented that the system had provided them with the ability to easily access patent applications and related information. In addition, patent officials stated that the system had enabled multiple users to simultaneously access patent applications. Nonetheless, patent officials acknowledged that the system had experienced performance and usability problems. Specifically, in speaking about the system's performance, the officials and agency documentation stated that, after its implementation, the Image File Wrapper had been unavailable for extended periods of time or had experienced slow response times, resulting in decreased productivity. To lessen the impact of this problem, patent officials said they had developed a backup tool to store images of an examiner's most recent applications, which can be accessed when the Image File Wrapper is not available. Further, in commenting on this matter, the USPTO director stated that the system's performance had begun to show improvement. Regarding the usability of the system, patent officials and focus group results indicated that the Image File Wrapper did not fully meet processing needs. For example, the officials stated that, as an image-based system, the Image File Wrapper did not fully enable patent examiners to electronically search, manipulate, or track and log changes to application text, which were key processing features emphasized in the agency's automation plans. 
The examiners also commented that a limited capability to convert images to text, which was intended to assist them in copying and reusing information contained in patent files, was error-prone, contributing to their need to download and print the applications for review. Further, because the office's legacy systems were not integrated with the Image File Wrapper, examiners were required to manually print correspondence from these systems, which then had to be scanned into the Image File Wrapper in order to be included as part of an applicant's electronic file. Patent and Office of Chief Information Officer (OCIO) officials largely attributed the system's performance and usability problems to the agency's use of software that it acquired from the European Patent Office. The officials explained that, to meet the accelerated date for delivering an operational system as outlined in its strategic plan, the agency had decided in 2002 to acquire and use a document-imaging system owned by the European Patent Office, called ePhoenix, rather than develop the integrated patent processing system that had been described in its automation plans. According to the officials, the director, at that time, had considered ePhoenix to be the most appropriate solution for further implementing USPTO's electronic patent processing capabilities given (1) pressures from Congress and from customers and stakeholders to implement an electronic patent processing system more quickly than originally planned and (2) the agency's impending move to its new facility in Alexandria, Virginia, which did not include provisions for transferring and storing paper patent applications. However, they indicated that the original design of the ePhoenix system had not been compatible with USPTO's technical platform for electronic patent processing. 
Specifically, they stated that the European Patent Office had designed the system to support only the printing of files for subsequent manual reviews, rather than for electronic review and processing. In addition, they stated that the system had not been designed for integration with other legacy systems or to incorporate additional capabilities, such as text processing, with the existing imaging capability. Further, an official of the European Patent Office noted that ePhoenix had supported their office's much smaller volume of patent applications. Thus, with USPTO's patent application workload being approximately twice as large as that of its European counterpart, the agency placed greater stress on the system than it was originally designed to accommodate. OCIO officials told us that, although they had tested certain aspects of the system's capability, many of the problems encountered in using the system were not revealed until after the system was deployed and operational. Patent and OCIO officials acknowledged that the agency had purchased ePhoenix although senior officials were aware that the original design of the system had not been compatible with USPTO's technological platform for electronic patent processing. They stated that, despite knowing about the problems and risks associated with using the software, the agency had nonetheless proceeded with this initiative because senior officials, including the former USPTO director, had stressed their preference for using ePhoenix in order to expedite the implementation of a system. Patent and OCIO officials acknowledged that management judgment, rather than a rigorous analysis of costs, benefits, and alternatives, had driven the agency's decision to use this system. 
To a significant extent, USPTO's difficulty in realizing intended improvements through its electronic filing system and Image File Wrapper can be attributed to the fact that the agency took an ad hoc approach to planning and managing its implementation of these systems, driven in part by its accelerated schedule for implementing an automated patent processing capability. The Clinger-Cohen Act of 1996, as well as information technology best practices and our prior reviews, emphasize the need for agencies to undertake information technology projects based on well-established business cases that articulate agreed-upon business and technical requirements; effectively analyze project alternatives, costs, and benefits; include measures for tracking projects through their life cycle against cost, schedule, benefit, and performance targets; and ultimately, provide the basis for credible and informed decision making and project management. Yet, patent officials did not rely on established business cases to guide their implementation of these key automation initiatives. The absence of sound project planning and management for these initiatives has left the agency without critical capabilities, such as text processing, and consequently, has impeded its successful transition to an integrated and paperless patent processing environment. The Under Secretary of Commerce for Intellectual Property, who serves as the director of USPTO, stated at the conclusion of our review that he recognized and intended to implement measures to address the weaknesses in the agency's planning and management of its automated patent systems. USPTO's ineffective planning for and management of its patent automation projects, in large measure, can be attributed to enterprise-level, systemic weaknesses in the agency's information technology investment management processes.
A key requirement of the Clinger-Cohen Act is that agencies have established processes, such as capital planning and investment control, to help ensure that information technology projects are implemented at acceptable costs and within reasonable and expected time frames, and contribute to tangible, observable improvements in mission performance. Such processes guide the selection, management, and evaluation of information technology investments by aiding management in considering whether to undertake a particular investment in information systems and providing a means to obtain necessary information regarding the progress of an investment in terms of cost, capability of the system to meet specified requirements, timeliness, and quality. Further, our Enterprise Architecture Framework emphasizes that information technology projects should show evidence of compliance with the organization's enterprise architecture, which serves as a blueprint for systematically and completely defining an organization's current (baseline) operational and technology environment and as a roadmap toward the desired (target) state. Effective implementation of an enterprise architecture can assist an agency by informing, guiding, and constraining its decisions, and subsequently decrease the risk of buying and building systems that are duplicative, incompatible, and unnecessarily costly to maintain and interface. At the time of our study, USPTO had begun instituting certain essential information technology investment management mechanisms, such as a framework for its enterprise architecture and components of a capital planning and investment control process. However, it had not yet established the necessary linkages between its enterprise architecture and its capital planning and investment control process to ensure that its automation projects would comply with the architecture or fully instituted enforcement mechanisms for investment management.
For example, USPTO drafted a capital planning and investment control guide in June 2004 and issued an agency administrative order on its integrated investment decision practices in February 2005. However, according to senior officials, many of the processes and procedures in the guide had not been completed and fully implemented and it was unclear how the agency administrative order was being applied to investments. In addition, while the agency had completed the framework for its enterprise architecture, it had not aligned its business processes and information technology in accordance with the architecture. According to OCIO officials, the architecture review board responsible for enforcing compliance with the architecture was not yet in place; thus, current architecture reviews were of an advisory nature and were not required for system implementation. Our analysis of architecture review documents that system officials provided for the electronic filing system and the Image File Wrapper confirmed that the agency had not rigorously assessed either of these systems' compliance with the enterprise architecture. Adding to these conditions, a study commissioned by the agency in 2004 found that its Office of Chief Information Officer was not organized to help the agency accomplish the goals in its automation strategy and that its investment management processes did not ensure appropriate reviews of automation initiatives. USPTO has an explicit responsibility to ensure that the automation initiatives that it is counting on to enhance its overall patent process are consistent with the agency's priorities and needs and are supported by the necessary planning and management to successfully accomplish this. At the conclusion of our review, the agency's director and its chief information officer acknowledged the need to strengthen the agency's investment management processes and practices and to effectively apply them to USPTO's patent automation initiatives. 
Since 2000, USPTO has also taken steps intended to help attract and retain a qualified patent examination workforce. The agency has enhanced its recruiting efforts and has used many human capital flexibilities to attract and retain qualified patent examiners. However, during the past 5 years, its recruiting efforts and use of benefits have not been consistently sustained, and officials and examiners at all levels in the agency told us that the economy has more of an impact on their ability to attract and retain examiners than any actions taken by the agency. Consequently, how USPTO's actions will affect its long-term ability to maintain a highly qualified workforce is unclear. While the agency has been able to meet its hiring goals, attrition has recently increased. USPTO's recent recruiting efforts have incorporated several measures that we and others identified as necessary to attract a qualified workforce. First, in 2003, to help select qualified applicants, the agency identified the knowledge, skills, and abilities that examiners need to effectively fulfill their responsibilities. Second, in 2004, its permanent recruiting team, composed of senior and line managers, participated in various recruiting events, such as job fairs, conferences sponsored by professional societies, and visits to the 10 schools that the agency targeted based on the diversity of their student population and the strength of their engineering and science programs. Finally, for 2005, USPTO developed a formal recruiting plan that, among other things, identified hiring goals for each technology center and described the agency's efforts to establish ongoing partnerships with the 10 target schools. In addition, the agency trained its recruiters in effective interviewing techniques to help them better describe the production system and incorporated references to the production-oriented work environment in its recruitment literature. 
USPTO has also used many of the human capital benefits available under federal personnel regulations to attract and retain qualified patent examiners. Among other benefits, it has offered recruitment bonuses ranging from $600 to over $10,000; a special pay rate for patent examiners that is 10 percent above federal salaries for comparable jobs; non-competitive promotion to the full performance level; and flexible working schedules, including the ability to schedule hours off during midday. According to many of the supervisors and examiners who participated in our focus groups, these benefits were a key reason they were attracted to the agency and are a reason they continue to stay. The benefits that examiners most frequently cited as important were the flexible working schedules and competitive salaries. However, it is too soon to determine the long-term effect of the agency's efforts, in part because neither its recruiting efforts nor the human capital benefits have been consistently sustained due to budgetary constraints. For example, in 2002 the agency suspended reimbursements to examiners for law school tuition because of funding limitations, although it resumed the reimbursements in 2004 when funding became available. Examiners in our focus groups expressed dissatisfaction with the inconsistent availability of these benefits, in some cases saying that the suspension of benefits, such as law school tuition reimbursement, provided them an incentive to leave the agency. More recently, in March 2005, USPTO proposed to eliminate or modify other benefits, such as the ability of examiners to earn credit hours and to set their own work schedules. Another, and possibly the most important, factor adding to the uncertainty of USPTO's recruiting efforts is the unknown potential impact of the economy, which, according to agency officials and examiners, has a greater effect on recruitment and retention than any actions the agency may take. 
Both agency officials and examiners told us that when the economy picks up, more examiners tend to leave the agency and fewer qualified candidates are attracted to it. On the other hand, when there is a downturn in the economy, the agency's ability to attract and retain qualified examiners increases because of perceived job security and competitive pay. When discussing their reasons for joining USPTO, many examiners in our focus groups cited job security and the lack of other employment opportunities, making comments such as, "I had been laid off from my prior job, and this was the only job offer I got at the time." This relationship between the economy and USPTO's hiring and retention success is part of the reason why the agency has met its hiring goals for the last several years. However, the agency has recently experienced a rise in attrition rates. In particular, a high level of attrition among younger, less experienced examiners could affect its efforts to maintain a highly qualified patent examination workforce. Attrition of examiners with 3 years or less experience is a significant loss for the agency because considerable time and dollar resources are invested to help new examiners become proficient during their first few years. While USPTO has undertaken a number of important and necessary actions to attract and retain qualified patent examiners, it continues to face three long-standing human capital challenges which, if not addressed, could also undermine its recent efforts. First, although organizations with effective human capital models have strategies to communicate with employees and involve them in decision making, the lack of good communication and collaboration has been a long-standing problem at USPTO. We found that the agency does not have a formal communication strategy and does not actively seek input from examiners on key management decisions. 
Most of the emphasis is on enhanced communication among managers but not between managers and other levels of the organization, such as patent examiners. Patent examiners and supervisory patent examiners in our focus groups frequently stated that communication with agency management was poor and that managers provided them with inadequate or no information, creating an atmosphere of distrust of management. The examiners also said that management was out of touch with them and their concerns and that communication with the managers tended to be one way and hierarchical, with little opportunity for feedback. Management officials told us that informal feedback can always be provided by anyone in the organization--for example, through an e-mail to anyone in management. The lack of communication between management and examiners is exacerbated by the contentious working relationship between management and union officials and by the complexity of the rules about what level of communication can occur between managers and examiners without involving the union. Some managers alluded to this contentious relationship as one of the reasons why they had limited communication with patent examiners, who are represented by the union even if they decide not to join it. Specifically, they believed they could not solicit the input of employees directly without engaging the union. Another official, however, told us that nothing prevents the agency from having "town hall" type meetings to discuss potential changes in policies and procedures, as long as the agency does not promise examiners a benefit that impacts their working conditions. Union officials agreed that USPTO can invite comments from examiners on a plan or proposal; however, if the proposal concerns a negotiating issue, the agency must consult the examiners' union, which is their exclusive representative with regard to working conditions. 
Second, human capital models suggest that agencies should periodically assess their monetary awards systems to ensure that they help attract and retain qualified staff. However, patent examiners' awards are based largely on the number of applications they process, and the assumptions on which application processing quotas are based have not been updated since 1976. Patent examiners and management have differing opinions on whether these assumptions need to be updated. Examiners in our focus groups told us that, in the last several decades, the tasks associated with and the complexity of processing applications have greatly increased while the time allowed has not. As a result, many of the examiners and supervisory patent examiners in our focus groups and respondents to previous agency surveys reported that examiners do not have enough time to conduct high- quality reviews of patent applications. The examiners noted that these inadequate time frames create a stressful work environment and are cited in the agency's exit surveys as a primary reason that examiners leave the agency. In contrast, USPTO managers had a different perspective on the production model and its impact on examiners. They stated that the time estimates used in establishing production quotas do not need to be adjusted because the efficiencies gained through actions such as the greater use of technology have offset the time needed to address the greater complexity of the applications and the increase in the number of claims. Moreover, they said that for an individual examiner, reviews of applications that take more time than the estimated average are generally offset by other reviews that take less time. Finally, counter to current workforce models, USPTO does not require ongoing technical education for patent examiners, which could negatively affect the quality of its patent examination workforce. 
Instead, the agency requires newly hired examiners to take extensive training only during their first year of employment; all subsequent required training is focused on developing legal expertise. Almost all patent examiners are required to take a range of ongoing training in legal matters, including patent law. In contrast, patent examiners are not required to undertake any ongoing training to maintain expertise in their area of technology, even though the agency acknowledges that such training is important, especially for electrical and electronic engineers. In 2001 the agency stated, "Engineers who fail to keep up with the rapid changes in technology, regardless of degree, risk technological obsolescence." However, agency officials told us that examiners automatically maintain currency with their technical fields by just doing their job. Patent examiners and supervisory patent examiners disagreed, stating that the literature they review in applications is outdated, particularly in rapidly evolving technologies. The agency does offer some voluntary in-house training, such as technology fairs and industry days at which scientists and others are invited to present lectures to patent examiners that will help keep them current on the technical aspects of their work. In addition, the agency offers voluntary external training and, for a small number of examiners, pays conference or workshop registration fees. Agency officials could provide no data on the extent to which examiners have taken advantage of such training opportunities. 
Our work found that, in carrying out its strategic plan to become a more productive and responsive organization, USPTO has made greater progress in implementing its initiatives to make the patent organization more capable by improving the quality of examiners' skills and work processes than it has in implementing its productivity and agility initiatives aimed at decreasing the length of time to process a patent application and improving electronic processing. Specifically, of the activities planned for completion by December 2004, the agency has fully or partially implemented all 23 of the initiatives related to its capability theme to improve the skills of employees, enhance quality assurance, and alter the patent process through legislative and rule changes. In contrast, it has partially implemented only 1 of the 4 initiatives related to the productivity theme to restructure fees and expand examination options for patent applicants and has fully or partially implemented 7 of the 11 initiatives related to the agility theme to increase electronic processing of patent applications and to reduce examiners' responsibilities for literature searches. Table 1 provides our assessment of each of the strategic plan initiatives. Agency officials primarily cited the need for additional funding as the main reason that some initiatives have not been implemented. With passage of the legislation in December 2004 to restructure and increase the fees available to USPTO, the agency is reevaluating the feasibility of many initiatives that it had deferred or suspended. In summary, through its attempts to implement an integrated, paperless patent process over the past two decades, USPTO has delivered a number of important automated capabilities. Nonetheless, after spending over a billion dollars on its efforts, the agency is not yet effectively positioned to process patent applications in a fully automated environment. 
Moreover, when and how it will actually achieve this capability is uncertain. Largely as a result of ineffective planning and management of its automated capabilities, system performance and usability problems have limited the effectiveness of key systems that the agency has implemented to support critical patent processes. Although USPTO's director and its chief information officer have recognized the need to improve the agency's planning and management of its automation initiatives, weaknesses in key information technology management processes needed to guide the agency's investments in patent automation, such as incomplete capital planning and investment controls, could preclude their ability to successfully accomplish this. Thus, the agency risks further implementing information technology that does not support its needs and that threatens its overall goal of achieving a fully electronic capability to process its growing patent application workload. Further, to improve its ability to attract and retain the highly educated and qualified patent examiners it needs, USPTO has taken steps recognized by experts as characteristic of highly effective organizations. However, without an effective communication strategy and a collaborative culture that includes all layers of the organization, the agency's efforts could be undermined. The absence of effective communication and collaboration has created distrust and a significant divide between management and examiners on important issues such as the appropriateness of the production model and the need for technical training. Unless the agency begins to develop an open, transparent, and collaborative work environment, its efforts to hire and retain examiners may be adversely affected in the long run. 
Overall, while USPTO has progressed in implementing strategic plan initiatives aimed at improving its organizational capability, the agency attributes its limited implementation of other initiatives intended to reduce pendency and improve electronic patent application processing primarily to the need for additional funding. Given the weaknesses in USPTO's information technology investment management processes, we recommended that the agency, before proceeding with any new patent automation initiatives, (1) reassess and, where necessary, revise its approach for implementing and achieving effective use of information systems supporting a fully automated patent process; (2) establish disciplined processes for planning and managing the development of patent systems based on well-established business cases; and (3) fully institute and enforce information technology investment management processes and practices to ensure that its automation initiatives support the agency's mission and are aligned with its enterprise architecture. Further, in light of its need for a more transparent and collaborative work environment, we recommended that the agency develop formal strategies to (1) improve communication between management and patent examiners and between management and union officials and (2) foster greater collaboration among all levels of the organization to resolve key issues, such as the assumptions underlying the quota system and the need for required technical training. USPTO generally agreed with our findings, conclusions, and recommendations regarding its patent automation initiatives and acknowledged the need for improvements in its management processes by, for example, developing architectural linkages to the planning process and implementing a capital planning and investment control guide. 
Nonetheless, the agency stated that it only partially agreed with several material aspects of our assessment, including our recommendation that it reassess its approach to automating its patent process. Further, the agency generally agreed with our findings, conclusions, and recommendations regarding its workforce collaboration and suggested that it would develop a communication plan and labor management strategy, and educate and inform employees about progress on initiatives, successes, and lessons learned. In addition, USPTO indicated that it would develop a more formalized technical program for patent examiners to ensure that their skills are fresh and ready to address state-of-the-art technology. Mr. Chairman, this concludes our statement. We would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information, please contact Anu K. Mittal at (202) 512-3841 or Linda D. Koontz at (202) 512-6240. They can also be reached by e-mail at [email protected] and [email protected], respectively. Other individuals making significant contributions to this testimony were Valerie C. Melvin, Assistant Director; Cheryl Williams, Assistant Director; Mary J. Dorsey, Vijay D'Souza, Nancy Glover, Vondalee R. Hunt, and Alison D. O'Neill. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States Patent and Trademark Office (USPTO) is responsible for issuing patents that protect new ideas and investments in innovation and creativity. However, the volume and complexity of patent applications to the agency have increased significantly in recent years, lengthening the time needed to process patents and raising concerns about the validity of the patents that are issued. Annual applications have grown from about 185,000 to over 350,000 in the last 10 years and are projected to exceed 450,000 by 2009. Coupled with this growth is a backlog of about 750,000 applications. Further complicating matters, the agency has faced difficulty in attracting and retaining qualified staff to process patent applications. USPTO has long recognized the need to automate its patent processing and, over the past two decades, has been engaged in various automation projects. More recently, in its strategic plan, the agency articulated its approach for accelerating the use of automation and improving workforce quality. In two reports issued in June 2005, GAO discussed progress and problems that the agency faces as it develops its electronic patent process, its actions to attain a highly qualified patent examination workforce, and the progress of the agency's strategic plan initiatives. At Congress's request, this testimony summarizes the results of these GAO reports. As part of its strategy to achieve an electronic patent process, USPTO had planned to deliver an operational patent system by October 2004. It has delivered important capabilities, for example, allowing patent applicants to electronically file and view the status of their applications and the public to search published patents. Nonetheless, after spending over $1 billion on its efforts from 1983 through 2004, the agency has not yet developed the fully integrated, electronic patent process articulated in its automation plans, and when and how it will achieve this process is uncertain. 
Key systems that the agency is relying on to help reach this goal--an electronic application filing system and a document imaging system--have not provided capabilities that are essential to operating in a fully electronic environment. Contributing to this situation is the agency's ineffective planning for and management of its patent automation initiatives, due in large measure to enterprise-level, systemic weaknesses in its information technology investment management processes. Although the agency has begun instituting essential investment management mechanisms, such as its enterprise architecture framework, it has not yet finalized its capital planning and investment control process, or established necessary linkages between the process and its architecture to guide the development and implementation of its information technology. The Under Secretary of Commerce for Intellectual Property and the agency's chief information officer have acknowledged the need for improvement. USPTO has taken steps to attract and retain a highly qualified patent examination workforce by, for example, enhancing its recruiting efforts and using many of the human capital benefits available under federal personnel regulations. However, it is too soon to determine the long-term success of the agency's efforts because they have been in place only a short time and have not been consistently sustained because of budgetary constraints. Long-term uncertainty about the agency's hiring and retention success is also due to the unknown impact of the economy. In the past, the agency had more difficulty recruiting and retaining staff when the economy was doing well. Further, USPTO faces three long-standing challenges that could undermine its efforts: the lack of an effective strategy to communicate and collaborate with examiners, outdated assumptions in production quotas that it uses to reward examiners, and the lack of required ongoing technical training for examiners. 
Patent examiners said the lack of a collaborative work environment has lowered morale and created an atmosphere of distrust between management and patent examiners. Overall, USPTO has made more progress in implementing its strategic plan initiatives aimed at increasing its patent processing capability through workforce and process improvements than in its initiatives to decrease patent pendency and improve electronic processing. It has fully or partially implemented all 23 capability initiatives, but only 8 of 15 initiatives to reduce patent pendency and improve electronic processing. The agency cited a lack of funding as the primary reason for not implementing all initiatives.
As agreed with your office, we limited our examination of targeting opportunities to our published work. To answer your questions about whether targeting can help the federal government downsize and to provide illustrative examples of a targeting strategy for deficit reduction, we updated information from our March 1995 report on the budgetary implications of our work. At this date, the Congress is considering several of the options described in this report. Notwithstanding pending congressional actions, we included the options because they illustrate how targeting can fit in an effective deficit reduction strategy. That report presents a deficit reduction framework consisting of three broad themes. The first focuses on reassessing the objectives of federal programs and services. Our premise is that periodically reconsidering a program's original purpose, the conditions under which it continues to operate, and its cost-effectiveness is appropriate. The second focuses on improved targeting of federal programs and services to beneficiaries. This theme concerns how efficiently federal programs and services reach their intended recipients. The third focuses on improving the efficiency of program and service delivery. This theme suggests that focusing on the approach or delivery method can significantly reduce spending or increase collections. This letter expands on the second theme--improved targeting--as a strategy that allows for reducing the deficit while improving the design of federal government activities. We did this work in Washington, D.C., from August 1995 through October 1995. The following examples from our work illustrate potential opportunities to better target federal programs and services. Examples are detailed under one of four strategies for better targeting the intended beneficiaries: revise grant formulas, change eligibility rules, target fees and charges, and narrow tax preferences. 
At a time when federal domestic discretionary resources are constrained, better targeting of grant formulas offers a strategy to concentrate lower federal spending levels on states or localities with greater needs and lower capacity to absorb grant reductions. Through this process, federal funding reductions would fall more heavily on those communities with lesser relative needs and with greatest fiscal capacity to finance services from their own revenue base. We have issued many reports over the past decade showing that the allocation of federal grants to state and local governments is not well targeted. This work has been confirmed by many economic analyses from other sources. As a result, program recipients in areas with relatively lower needs and greater wealth received a higher level of services than those available in harder pressed areas, or wealthier areas were able to provide the same level of services at lower tax rates. Reductions in federal grants to states could be targeted by adjusting the allocation formulas to concentrate funding on those states with relatively lesser fiscal capacities and greater needs. Similarly, reductions in federal grants to local governments could be targeted by either concentrating cuts on areas with the strongest tax bases or by changing program eligibility to restrict grant funding in places with high fiscal capacity and/or few programmatic needs. For example, in 1992 we reported that Maternal and Child Health (MCH) Services block grants could be allocated more equitably. This program was designed to secure basic health care for low-income and moderate-income expectant mothers, their infants, and children with special health care needs. However, our report concluded that the allocation method for distributing MCH grants to states ran counter to the equity standards we developed. 
We found that while the number of children at risk, the costs of providing maternal and child health services, and the states' ability to pay for these services varied from state to state, the current MCH allocation method did not consider these factors. As a result, Louisiana--with the second highest proportion of children at risk and average service costs--ranked 14th in per capita grant funding. Similarly, at the time of our analysis, Kansas and Illinois received nearly equal per capita grants, even though Illinois had about 28 percent higher health care costs. In practical terms, this meant that Illinois consumers had to spend more money than Kansans to buy the same MCH services. We concluded that a new MCH allocation method that strikes a balance between each state's (1) need adjusted for costs and (2) ability to pay could substantially improve the overall equity of the MCH program. Federal spending for the MCH program reached a reported $687 million in fiscal year 1994. If overall funding for this program were reduced, such a new allocation method could help target the remaining MCH program funds more equitably. In another example, we found that the Medicaid program formula does not target most federal funds to states with weak tax bases and high concentrations of poor people. In 1990, we reported that while the program covered 75 percent of those below the poverty line nationwide, the coverage varied from 37 percent in Idaho to 111 percent in Michigan. We suggested that a formula using better indicators of states' financing capacities and poverty rates coupled with a reduced minimum federal share would more equitably distribute the burden state taxpayers face in financing Medicaid benefits for low-income residents in their respective states. Federal spending for Medicaid in fiscal year 1994 reached a reported $82 billion, and the Office of Management and Budget (OMB) projects spending to reach $136.5 billion by fiscal year 2000. 
Should the Congress act to reduce federal Medicaid spending, a revised grant allocation system could help target the reduced funding more equitably. Along these lines, a block grant that the Congressional Budget Office (CBO) estimated would reduce federal Medicaid spending by $163 billion over the next 7 years was included in the recently passed Balanced Budget Act of 1995. Under this proposal future Medicaid costs would be reduced and equity in the distribution of the remaining funding would be improved because the allocation formula uses new factors that more precisely measure differences in states' fiscal capacity and poverty levels. In another example, Title I grants to local educational agencies (LEAs), which fund supplementary education services for low achievers in poverty areas, could be modified to improve targeting among counties. Under these grants, formerly known as Chapter 1 grants, school districts have broad discretionary powers to determine how resources are distributed to schools, specifying the grades served and the type and extent of services, and defining which students are low achievers. In 1992, we reported that these factors resulted in considerable variation among students who receive Title I LEA services. For example, in some school districts Title I LEA funds served only children scoring below the 20th percentile on standardized tests. In other districts, program funds served some children scoring above the national average (the 50th percentile). We found that the legislatively mandated formula for Title I LEA grants did not (1) accurately reflect the distribution of poverty-related low achievers, (2) provide extra assistance to areas with relatively less ability to fund remedial education services, or (3) adequately reflect differences in local costs of providing education services. 
We concluded that modifications to the Title I LEA allocation method could target more funds to counties with the largest numbers of poverty-related low achievers and those least able to finance remedial instruction. Federal funding for Title I grants to local educational agencies reached a reported $6.3 billion in fiscal year 1994. If the Congress decides to reduce funding for these grants, a revised formula could better target Title I LEA grants to those counties with the greatest overall need. The formula could be revised to rely on a more precise method of estimating the number of poverty-related low achievers, use an income adjustment factor to grant additional assistance to areas least capable of financing remedial instruction, and employ a uniform measure of educational services costs that recognizes differences within and between states. Changing eligibility rules to better target the intended beneficiaries of federal programs offers another strategy that can allow for deficit reduction by concentrating reductions on beneficiaries with little demonstrable need for government assistance. We have issued many reports in recent years showing that programs could be better targeted to more cost-effectively address those beneficiaries most in need. For example, we found that the Vaccines for Children (VFC) Program is not well targeted. This program was created to improve immunization rates for measles, mumps, rubella, and other childhood diseases by lowering the cost of immunization for all children. However, we found that most children had already been immunized because cost was not a significant barrier and that a disproportionate number of children in underserved areas were not immunized. We suggested that the Congress consider targeting the program. Services could be improved by directing VFC funds to children in those particular geographic areas where underimmunization has been a persistent problem. 
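The targeting principle running through these formula examples--weighting a jurisdiction's cost-adjusted need and discounting by its fiscal capacity--can be sketched as a simple proportional allocation. The function below, the 0.5 capacity weight, and the state figures are illustrative assumptions only, not the actual MCH, Medicaid, or Title I allocation methods:

```python
# Illustrative sketch of a need- and capacity-balanced grant formula.
# All figures and the 0.5 capacity weight are hypothetical assumptions,
# not any program's actual allocation method.

def allocate(total_funds, states, capacity_weight=0.5):
    """Split total_funds in proportion to each state's cost-adjusted need
    (children at risk x service-cost index), discounted by its fiscal
    capacity (per capita income relative to the group average)."""
    avg_income = sum(s["per_capita_income"] for s in states.values()) / len(states)
    scores = {
        name: s["children_at_risk"] * s["cost_index"]
        * (avg_income / s["per_capita_income"]) ** capacity_weight
        for name, s in states.items()
    }
    total = sum(scores.values())
    return {name: total_funds * score / total for name, score in scores.items()}

# Two states with identical need: the one with lower per capita income
# (weaker ability to pay) receives the larger share of the grant.
states = {
    "A": {"children_at_risk": 100_000, "cost_index": 1.00, "per_capita_income": 24_000},
    "B": {"children_at_risk": 100_000, "cost_index": 1.00, "per_capita_income": 18_000},
}
grants = allocate(687_000_000, states)  # $687 million: reported FY 1994 MCH funding
```

Under such a formula, a funding reduction lowers total_funds but leaves the relative shares intact, so cuts fall proportionally more heavily on states with lesser need and greater fiscal capacity, consistent with the equity standards described above.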
Fiscal year 1995 costs for the childhood vaccine program were estimated at about $450 million. Based on our examinations of the Market Promotion Program (MPP), we believe that the program's eligibility rules could be tightened to provide support to small, generic, new-to-export companies, but not to large companies with substantial corporate advertising budgets. The MPP uses federal funds to subsidize efforts to expand export markets for U.S. agricultural products by financing such activities as advertising and consumer promotions. From 1986 through 1994, about one-third of MPP funds and those of its predecessor program (the Targeted Export Assistance (TEA) program) supported private for-profit companies' brand-name promotions. These companies included many large for-profit businesses with substantial corporate advertising budgets, such as Sunkist Growers and E.J. Gallo Winery. In fiscal year 1995, MPP funding was reduced to $84.5 million from the budgeted level of $110 million. Eligibility rules could be revised to ensure that MPP funds are supporting additional promotional activities rather than simply replacing company or industry funds. While large firms receive MPP funds to increase exports of U.S. agricultural products, the resources otherwise available to such firms may indicate that they have no demonstrable need for government assistance. Our reviews of U.S. Department of Agriculture crop price supports show that the program's eligibility rules allow producers to avoid payment limits and reduced program payments. These income support payments, known as deficiency payments, are the principal payments made to producers who participate in farm programs for wheat, feed grains, cotton, and rice. The payments are designed to protect producers' incomes when crop prices fall below a legally established target price. The Food Security Act of 1985 limited the payments for those commodities to $50,000 per person annually. 
For the act's purposes, a person is broadly defined as an individual, an entity (such as a corporation, limited partnership, association, trust, or estate), or a member of a joint operation (such as a general partnership or joint venture). Despite reforms made by the Congress in 1987, producers have avoided the payment limit by reorganizing their farming operations to include additional persons. According to OMB, deficiency payments amounted to $6.4 billion in fiscal year 1994. One option to further tighten payment limits as a means to reduce program costs would be to change eligibility rules to limit payments to $50,000 per individual and only provide benefits to individuals actively engaged in farming. In another example, narrowing eligibility rules for veterans disability compensation could generate savings without affecting veterans who suffered disabilities as a result of military service. In 1994, CBO reported that about 250,000 veterans were receiving about $1.5 billion annually in Department of Veterans Affairs (VA) compensation for diseases neither caused nor aggravated by military service. Our study of five other countries' veterans programs shows that they do not compensate veterans under these circumstances. Dollar savings could be achieved by targeting disability benefits more narrowly, as is done by other countries. Adjusting fees and charges to the beneficiaries of some business-type federal programs and services offers a third targeting strategy to reduce the deficit. Fees exist for many services provided by the federal government, including customs and other inspections, use of recreation and other facilities, and mail delivery. However, in many cases, the direct beneficiaries of these kinds of governmental activities contribute little to support the program or administrative costs of the activity. 
As a result, the programs and services are often overused and/or under-provided, and money must be found elsewhere in the budget to make up the difference between administrative costs and beneficiary charges. For example, although many beneficiaries of the Child Support Enforcement Program have higher incomes than the population originally envisioned to be served by this program, they pay relatively little to support the program's administrative costs. The Congress created the Child Support Enforcement Program in 1975 to strengthen state and local efforts to obtain child support for both families eligible for Aid to Families with Dependent Children (AFDC) and non-AFDC families. Child support enforcement services were made available to non-AFDC individuals because it was believed that many families might not have to apply for welfare if they had adequate assistance in obtaining the support due from the noncustodial parent. In 1994, the program collected a reported $7.3 billion for 8.2 million non-AFDC clients. Bureau of the Census data for 1991 showed that about 65 percent of the individuals requesting non-AFDC child support enforcement services in that year had family incomes, excluding any child support received, exceeding 150 percent of the federal poverty level. Because states have exercised their discretion to charge only minimal application and service fees, they are doing little to recover the federal government's 66-percent share of program costs. In fiscal year 1994, state fee practices returned $33 million of the reported $1.1 billion spent to provide non-AFDC services. Rising non-AFDC caseloads and new program requirements could lead to administrative costs exceeding $1.6 billion by fiscal year 2000, with very little offset from those benefiting from the services. We have reported and testified on opportunities to defray some of the costs of child support programs. 
Based on this work, we believe that mandatory application fees should be dropped and that states should charge a minimum percentage service fee on successful collections for non-AFDC families. Under this proposal, non-AFDC beneficiaries would pay an increased share of the costs of administering this program. As a second example, veterans' long-term care costs could be reduced and comparability among retirees increased if veterans' copayments for these services were increased. All veterans with a medical need for nursing home care are eligible to receive such care in VA and community facilities to the extent that space and resources are available. VA is required to collect a fee, commonly known as a copayment, from certain veterans with nonservice-connected problems and incomes above a designated level. Nursing home care is free for other veterans who receive care in VA or contract community nursing homes. By contrast, we found that state veterans' homes recovered as much as 50 percent of the costs of operating their facilities through charges to veterans receiving services. Similarly, through estate recoveries during the 12 months ending June 30, 1992, Oregon recovered about 13 percent of the costs of nursing home care provided under its Medicaid program. However, in fiscal year 1990, the VA offset less than one-tenth of 1 percent of its costs through beneficiary copayments. OMB reported that in fiscal year 1994, VA's operating expenses were about $1.7 billion to provide nursing home and domiciliary care to veterans. The Congress may wish to consider increasing cost sharing for VA nursing home care by adopting cost-sharing requirements similar to those imposed by most state veterans' homes and by implementing an estate recovery program similar to those operated by many states under their Medicaid programs. The potential for recoveries appears to be greater within the VA system than under Medicaid. 
Home ownership is significantly higher among VA hospital users than among Medicaid nursing home recipients, and veterans living in VA nursing homes generally contribute less toward the cost of their care than do Medicaid recipients, allowing veterans to build larger estates. In another example, we found that the current ski fee system does not ensure that the Forest Service receives fair market value for the use of its land. In 1991, privately owned ski areas operating on Forest Service land--such as those in Vail, Colorado; Jackson Hole, Wyoming; and Taos, New Mexico--generated $737 million in gross sales. After making adjustments reflecting the revenues generated from federal land, these areas paid about $13.5 million, or about 2.2 percent of the total revenues generated, in fees to the government. When the Forest Service ski fee system was developed in 1965, the rates were to be adjusted periodically to reflect changes in economic conditions for these business-type operations. However, the rates by which fees are calculated have not been updated since the fee system was developed. Changing eligibility rules for tax preferences offers a fourth targeting strategy to reduce the federal budget deficit. While tax expenditures can be a valid means for achieving certain federal objectives, studies by GAO and others have raised concerns about the effectiveness, efficiency, and equity of some tax expenditures. As with poorly targeted fees, poorly targeted tax preferences often lead to overutilization by beneficiaries and reduced revenues that either add to the deficit or must be made up elsewhere in the budget. For example, tax-exempt industrial development bonds (IDBs) are poorly targeted. IDBs are issued by state and local governments to finance the creation or expansion of manufacturing facilities to create new jobs or to promote start-up companies or companies in economically distressed areas. 
However, in a review of IDB-funded projects, we found that only about one-fourth of the projects were located in economically distressed areas. We also found that the job creation benefits attributed to IDBs would likely have occurred anyway. In addition, most developers contacted said that they would have proceeded with their projects without IDBs. Moreover, few companies obtaining tax-subsidized financing were start-up companies. OMB estimated that revenue loss due to the tax-exempt status of small issue IDBs reached $690 million in fiscal year 1994. Similarly, we found that achievement of public benefits from qualified mortgage bonds (QMBs) is questionable. We found that QMBs did little to increase home ownership, were usually provided to home buyers who did not need them to obtain a conventional (unassisted) mortgage loan, and were not cost-effective. OMB estimated that revenue loss due to the tax-exempt status of QMBs amounted to $1.76 billion in fiscal year 1994. Both IDBs and QMBs could be better targeted. For example, IDBs could be focused on economically distressed areas or start-up companies, and QMBs could be directed toward home buyers who could not reasonably qualify for unassisted conventional loans. In another example, the current tax treatment of health insurance gives few incentives to workers to economize on purchasing health insurance. Some analysts believe that the tax-preferred status of these benefits has contributed to the overuse of health care services and large increases in our nation's health care costs. Improved targeting for this subsidy could play a role in reducing the associated revenue losses and improving the efficiency of the nation's health care system. Targeting is a viable approach because higher income employees are more likely to have health care coverage and, because they pay higher marginal tax rates than low-income earners, the tax benefits from employer-provided health benefits are greater for high-wage earners. 
The Department of the Treasury estimated that revenue loss due to the tax-exempt status of employer-provided health insurance amounted to $33.5 billion in fiscal year 1992. An option to better target this tax preference would be to place a cap on the dollar amount of health insurance premiums that could be excluded from income. Including in a worker's income the dollar amount over the cap could improve the efficiency of the health care system and, to a lesser extent, tax equity. Alternatively, including health insurance premiums in income but allowing a tax credit for some percentage of the premium would improve equity since tax savings per dollar of premium would be the same for all taxpayers, irrespective of the tax brackets. As the examples from our published work show, more effective targeting is one of several available approaches that can allow for reducing spending while improving federal programs and services. Programs and services, such as grants to states to provide health care for low- and moderate-income individuals or export promotion support for emerging firms, are created due to some perception of eligibility and/or need. In these instances, individuals, organizations, or jurisdictions outside the original targeted population--that is, populations with a greater capacity to provide the program or service from their own resources or having fewer needs--have received program funds, services, or tax subsidies. This poor targeting may have occurred because grant formulas or eligibility rules were constructed too broadly or fees did not fully reflect beneficiaries' capacity to offset program costs. In other instances, the circumstances creating a need for the program or service may have changed. The end result of poor targeting is that the federal government spends more money than needed to reach the intended beneficiaries and achieve its program or service goals. 
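The cap and credit options described above differ mainly in how the tax saving scales with a worker's marginal rate. A minimal sketch of that arithmetic follows; every premium amount, cap level, and tax rate here is invented purely for illustration and does not appear in the report:

```python
# Hypothetical illustration of the two targeting options for the
# employer-provided health insurance exclusion discussed above.
# All dollar amounts and rates are invented for illustration only.

def extra_tax_under_cap(premium, cap, marginal_rate):
    """Tax owed when only the premium amount above the cap is included in income."""
    return max(premium - cap, 0) * marginal_rate

def net_tax_under_credit(premium, marginal_rate, credit_rate):
    """Tax owed when the full premium is taxed but a flat-rate credit is allowed."""
    return premium * marginal_rate - premium * credit_rate

premium = 6000  # hypothetical annual employer-paid premium
cap = 5000      # hypothetical exclusion cap

# Under a cap, the excluded portion is still worth more per dollar to the
# worker in the higher bracket, so equity improves only "to a lesser extent."
high_bracket = extra_tax_under_cap(premium, cap, 0.31)  # ~ $310 extra tax
low_bracket = extra_tax_under_cap(premium, cap, 0.15)   # ~ $150 extra tax

# Under a flat-rate credit, the tax saving per premium dollar is identical
# for every taxpayer regardless of bracket, which is the equity gain noted.
high_credit = net_tax_under_credit(premium, 0.31, 0.20)  # ~ $660 net tax
low_credit = net_tax_under_credit(premium, 0.15, 0.20)   # ~ -$300 (net saving)
```

The sketch shows why the report treats the credit as the stronger equity improvement: the credit's subsidy rate is the same for all taxpayers, while the cap still lets the subsidy grow with the marginal rate.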
Moreover, in a climate of continuing large budget deficits, the inefficiencies resulting from poorly targeted programs and services have sometimes called into question the legitimacy of continuing these activities or maintaining them at their current levels. In many instances, broad support remains for the objectives of poorly targeted programs and services. In these areas, better targeting can increase the efficiency and effectiveness of the program or service while allowing for program reductions. In other cases, poor targeting raises fundamental questions about the program's or service's merit and/or feasibility. In these circumstances, decisionmakers may want to consider whether the program or service should be eliminated. We are sending copies of this report to the Ranking Minority Member of the House Committee on Government Reform and Oversight. Copies will be available to others upon request. Major contributors to this report were Margaret T. Wrightson, Assistant Director, and Timothy L. Minelli, Senior Evaluator. Please contact me at (202) 512-9573 if you or your staff have any questions concerning the report. Paul L. Posner, Director, Budget Issues
Pursuant to a congressional request, GAO summarized its previous work suggesting better targeting of federal programs and services as a strategy for downsizing government and reducing the deficit. GAO found that: (1) better targeting can reduce spending and improve federal programs and services, but poor targeting can result in overspending; (2) Congress must decide how to reduce funding for certain programs and alter the allocation of resources to meet its deficit reduction goals; and (3) options for better targeting include revising formula grants to states and localities to reflect differences in fiscal capacity, altering eligibility rules for federal benefit programs to restrict certain benefits, instituting fees for those that consume certain government-provided, business-type services, and limiting or eliminating tax preferences given to state and local governments that issue industrial development bonds.
FHWA is the DOT agency responsible for federal highway programs-- including distributing billions of dollars in federal highway funds to the states--and developing federal policy regarding the nation's highways. The agency provides technical assistance to improve the quality of the transportation network, conducts transportation research, and disseminates research results throughout the country. FHWA's program offices conduct these activities through its Research and Technology Program, which includes "research" (conducting research activities), "development" (developing practical applications or prototypes of research findings), and "technology" (communicating research and development knowledge and products to users). FHWA maintains a highway research facility in McLean, Virginia. This facility, known as the Turner-Fairbank Highway Research Center, has over 24 indoor and outdoor laboratories and support facilities. Approximately 300 federal employees, on-site contract employees, and students are currently engaged in transportation research at the center. FHWA's research and technology program is based on the research and technology needs of each of its program offices such as the Offices of Infrastructure, Safety, or Policy. Each of the program offices is responsible for identifying research needs, formulating strategies to address transportation problems, and setting goals for research and technology activities that support the agency's strategic goals. (See Appendix I for examples of research that these offices undertake.) One program office that is located at FHWA's research facility provides support for administering the overall program and conducts some of the research. The agency's leadership team, consisting of the associate administrators of the program offices and other FHWA offices, provides periodic oversight of the overall program. 
In 2002 FHWA appointed the Director of its Office of Research, Development, and Technology as the focal point for achieving the agency's national performance objective of increasing the effectiveness of all FHWA program offices, as well as its partners and stakeholders, in determining research priorities and deploying technologies and innovation. In addition to the research activities within FHWA, the agency collaborates with other DOT agencies to conduct research and technology activities. For example, FHWA works with DOT's Research and Special Programs Administration to coordinate efforts to support key research identified in the department's strategic plan. Other nonfederal research and technology organizations also conduct research funded by FHWA related to highways and bridges. Among these are state research and technology programs that address technical questions associated with the planning, design, construction, rehabilitation, and maintenance of highways. In addition, the National Cooperative Highway Research Program conducts research on acute problems related to highway planning, design, construction, operation, and maintenance that are common to most states. Private organizations, including companies that design and construct highways and supply highway-related products, national associations of industry components, and engineering associations active in construction and highway transportation, also conduct or sponsor individual programs. Universities receive funding for research on surface transportation from FHWA, the states, and the private sector. Leading organizations that conduct scientific and engineering research, other federal agencies with research programs, and experts in research and technology have identified and use best practices for developing research agendas and evaluating research outcomes. 
Although the uncertain nature of research outcomes over time makes it difficult to set specific, measurable program goals and evaluate results, the best practices we identified are designed to ensure that the research objectives are related to the areas of greatest interest and concern to research users and that research is evaluated according to these objectives. These practices include (1) developing research agendas through the involvement of external stakeholders and (2) evaluation of research using techniques such as expert review of the quality of research outcomes. External stakeholder involvement is particularly important for FHWA because its research is expected to improve the construction, safety, and operation of transportation systems that are primarily managed by others, such as state departments of transportation. According to the Transportation Research Board's Research and Technology Coordinating Committee, research has to be closely connected to its stakeholders to help ensure relevance and program support, and stakeholders are more likely to promote the use of research results if they are involved in the research process from the start. The committee also identified merit review of research proposals by independent technical experts based on technical criteria as being necessary to help ensure the most effective use of federal research funds. In 1999, we reported that other federal science agencies--such as the Environmental Protection Agency and the National Science Foundation--used such reviews to varying degrees to assess the merits of competitive and noncompetitive research proposals. In April 2002, the Office of Management and Budget issued investment criteria for federal research and technology program budgets that urge these agencies to put into place processes to assure the relevance, quality and performance of their programs. 
For example, the guidance requires these programs to have agendas that are assessed prospectively and retrospectively through external review to ensure that funds are being expended on quality research efforts. The Committee on Science, Engineering, and Public Policy reported in 1999 that federal agencies that support research in science and engineering have been challenged to find the most useful and effective ways to evaluate the performance and results of the research programs they support. Nevertheless, the committee found that research programs, no matter what their character and goals, can be evaluated meaningfully on a regular basis and in accordance with the Government Performance and Results Act. Similarly, in April 2002 the Office of Management and Budget issued investment criteria for federal research and technology program budgets that require these programs to define appropriate outcome measures and milestones that can be used to track progress toward goals and assess whether funding should be enhanced or redirected. In addition, program quality should be assessed periodically in relation to these criteria through retrospective expert review. The Committee on Science, Engineering, and Public Policy also emphasized that the evaluation methods must match the type of research and its objectives, and it concluded that expert or peer review is a particularly effective means to evaluate federally funded research. Peer review is a process that includes an independent assessment of the technical and scientific merit or quality of research by peers with essential subject area expertise and perspective equal to that of the researchers. Peer review does not require that the final impact of the research be known. 
In 1999, we reported that federal agencies, such as the Department of Agriculture, the National Institutes of Health, and the Department of Energy, use peer review to help them (1) determine whether to continue or renew research projects, (2) evaluate the results of research prior to publication of those results, and (3) evaluate the performance of programs and scientists. In its 1999 report, the Committee on Science, Engineering, and Public Policy also stated that expert review is widely used to evaluate: (1) the quality of current research as compared with other work being conducted in the field, (2) the relevance of research to the agency's goals and mission, and (3) whether the research is at the "cutting edge." Although FHWA engages external stakeholders in elements of its research and technology program, the agency currently does not follow the best practice of engaging external stakeholders on a consistent and transparent basis in setting its research agendas. The agency expects each program office to determine how or whether to involve external stakeholders in the agenda setting process. As we reported in May 2002, FHWA acknowledges that its approach to preparing research agendas is inconsistent and that the associate administrators of FHWA's program offices primarily use input from the agency's program offices, resource centers, and division offices. Although agency officials told us that resource center and division office staff provide the associate administrators with input based on their interactions with external stakeholders, to the extent that external stakeholder input into developing research agendas occurs, it is usually ad hoc and provided through technical committees and professional societies. 
For example, the agency's agenda for environmental research was developed with input from both internal sources (including DOT's and FHWA's strategic plans and staff) and external sources (including the Transportation Research Board's reports on environmental research needs and clean air, environmental justice leaders, planners, civil rights advocates, and legal experts). In our May 2002 report we recommended that FHWA develop a systematic approach for obtaining input from external stakeholders in determining its research and technology program's agendas. FHWA concurred with our recommendation and has taken steps to develop such an approach. FHWA formed a planning group consisting of internal stakeholders as well as representatives from the Research and Special Programs Administration and the Pennsylvania Department of Transportation to determine how to implement our recommendation. This planning group prepared a report analyzing the approaches that four other federal agencies are taking to involve external stakeholders in setting their research and technology program agendas. Using the lessons learned from reviewing these other agencies' activities, FHWA has drafted a Corporate Master Plan for Research and Deployment of Technology & Innovation. Under the draft plan, the agency would be required to establish specific steps for including external stakeholders in the agenda setting process for all areas of research throughout the agency's research and technology program by fiscal year 2004. In drafting this plan, FHWA officials obtained input from internal stakeholders as well as external stakeholders, including state departments of transportation, academia, consultants, and members of the Transportation Research Board. It appears that FHWA has committed to taking the necessary steps to adopt the best practice of developing a systematic process for involving external stakeholders in the agenda setting process. 
The draft plan invites external stakeholders to assist FHWA with such activities as providing focus and direction to the research and technology program and setting the program's agendas and priorities. However, because FHWA's plan has not been finalized, we cannot comment on its potential effectiveness in involving external stakeholders. As we reported last year, FHWA does not have an agency-wide systematic process, using techniques such as peer review, to evaluate whether its research projects are achieving intended results. Although the agency's program offices may use methods such as obtaining feedback from customers and evaluating outputs or outcomes versus milestones, they all use success stories as the primary method to evaluate and communicate research outcomes. According to agency officials, success stories are examples of research results adopted or implemented by such stakeholders as state departments of transportation. These officials told us that success stories can document the financial returns on investment and nonmonetary benefits of research and technology efforts. However, we raised concerns that success stories are selective and do not cover the breadth of FHWA's research and technology program. In 2001, the Transportation Research Board's Research and Technology Coordinating Committee concluded that peer or expert review is an appropriate way to evaluate FHWA's surface transportation research and technology program. Therefore, the committee recommended a variety of actions, including a systematic evaluation of outcomes by panels of external stakeholders and technical experts to help ensure the maximum return on investment in research. Agency officials told us that increased stakeholder involvement and peer review will require significant additional expenditures for the program. 
However, a Transportation Research Board official told us that the cost of obtaining expert assistance could be relatively low because the time needed to provide input would be minimal and could be provided by such inexpensive methods as electronic mail. In our May 2002 report, we recommended that FHWA develop a systematic process for evaluating significant ongoing and completed research that incorporates peer review or other best practices in use at federal agencies that conduct research. While FHWA has concurred that the agency must measure the performance of its research and technology program, it has not developed, defined or adopted a framework for measuring performance. FHWA's report on efforts of other federal agencies that conduct research, discussed above, analyzed the approaches that four other federal agencies are taking to evaluate their research and technology programs using these best practices. According to FHWA's assistant director for Research, Technology, and Innovation Deployment, the agency is using the results of this report to develop its own systematic approach for evaluating its research and technology program. However, this official noted that FHWA has been challenged to find the most useful and effective ways to evaluate the performance and results of the agency's research and technology program. According to FHWA's draft Corporate Master Plan for Research and Deployment of Technology & Innovation, FHWA is committed to developing a systematic method of evaluating its research and technology program that includes the use of a merit review panel. This panel would conduct evaluations and reviews in collaboration with representatives from FHWA staff, technical experts, peers, special interest groups, senior management, and contracting officers. 
According to the draft plan, these merit reviews would be conducted on a periodic basis for program-level and agency-level evaluations, while merit reviews at the project level would depend on the project's size and complexity. FHWA is still in the process of developing, defining, and adopting a framework for measuring performance. Therefore, we cannot yet comment on how well FHWA's efforts to evaluate research outcomes will follow established best practices. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or Members of the Committee may have. For further information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Deena Richart made key contributions to this testimony. FHWA's research and technology program is based on the research and technology needs of each of its program offices such as the Offices of Infrastructure, Safety, and Policy. Each of the program offices is responsible for identifying research needs, formulating strategies to address transportation problems, and setting goals for research and technology activities that support the agency's strategic goals. (See table 1.)
Improvement and innovation based on highway research have long been important to the highway system. The Federal Highway Administration (FHWA) is the primary federal agency involved in highway research. Throughout the past decade, FHWA received hundreds of millions of dollars for its surface transportation research program, including nearly half of the Department of Transportation's approximate $1 billion budget for research in fiscal year 2002. Given the expectations of highway research and the level of resources dedicated to it, it is important to know that FHWA is conducting high quality research that is relevant and useful. In May 2002, GAO issued a report on these issues and made recommendations to FHWA, which the agency agreed with, aimed at improving its processes for setting research agendas and evaluating its research efforts. GAO was asked to testify on (1) best practices for developing research agendas and evaluating research outcomes for federal research programs; (2) how FHWA's processes for developing research agendas align with these best practices; and (3) how FHWA's processes for evaluating research outcomes align with these best practices. Leading organizations, federal agencies, and experts that conduct scientific and engineering research use best practices designed to ensure that research objectives are related to the areas of greatest interest to research users and that research is evaluated according to these objectives. Of the specific best practices recommended by experts--such as the Committee on Science, Engineering, and Public Policy and the National Science Foundation--GAO identified the following practices as particularly relevant for FHWA: (1) developing research agendas in consultation with external stakeholders to identify high-value research and (2) using a systematic approach to evaluate research through such techniques as peer review. 
FHWA's processes for developing its research agendas do not consistently include stakeholder involvement. External stakeholder involvement is important for FHWA because its research is to be used by others that manage and construct transportation systems. FHWA acknowledges that its approach for developing research agendas lacks a systematic process to ensure that external stakeholders are involved. In response to GAO's recommendation, FHWA has drafted plans that take the necessary steps toward developing a systematic process for involving external stakeholders. While the plans appear responsive to GAO's recommendation, GAO cannot evaluate their effectiveness until they are implemented. FHWA does not have a systematic process that incorporates techniques such as peer review for evaluating research outcomes. Instead, the agency primarily uses a "success story" approach to communicate about those research projects that have positive impacts. As a result, the extent to which research projects have achieved their objectives is unclear. FHWA acknowledges that it must do more to measure the performance of its research program; however, it is still in the process of developing a framework for this purpose. While FHWA's initial plans appear responsive to GAO's recommendation, GAO cannot evaluate their effectiveness until they are implemented.
For decades, Cubans have fled Cuba, often by raft, seeking freedom in the United States. For example, during the first 6 months of 1993, the U.S. Coast Guard picked up about 1,300 rafters and brought them to the United States. This number increased to about 4,700 during the same period in 1994. At that time, Cuba was maintaining its strict policy of forbidding its citizens from illegally exiting the country. In June 1994, violence by both the Cuban authorities and would-be asylum seekers escalated when, for example, Cuban authorities shot and killed a Cuban who was attempting to escape the island. From July 13 through August 8, 1994, at least 37 asylum seekers and 2 Cuban officials were killed in a series of boat hijackings. In addition, a riot erupted in Havana on August 5 when police attempted to disperse a crowd that had gathered when a false rumor circulated that a flotilla of boats was on its way to pick up people seeking to leave. On August 13, Fidel Castro gave a televised speech blaming the United States for the riots and violence and threatened to remove restrictions on Cubans exiting the country if the United States did not take steps to deter boat departures and return those hijackers who had reached the United States. Not receiving the response he anticipated from the United States, Castro indicated he would not prevent Cubans from leaving. Over the next week, Cubans flocked to the beaches, where they constructed makeshift vessels and set out to sea. As the flow of rafters increased, President Clinton announced on August 19, 1994, that the Coast Guard would no longer bring interdicted Cubans to the United States but would hold them at Guantanamo Bay. The President and the Attorney General indicated at that time that those Cubans taken to Guantanamo Bay would have no opportunity for eventual entry into the United States. This announcement reversed a 3-decade policy of welcoming Cubans seeking refuge into the United States. 
Many Cubans did not believe that the United States would actually enforce the new policy and consequently continued to leave Cuba. About 33,000 Cubans were picked up at sea and taken to Guantanamo Bay. Concerned about the continuing exodus, on September 9, 1994, the United States and Cuba signed an accord under which the United States agreed to admit at least 20,000 Cubans per year directly from Cuba through legal channels. The U.S. Interests Section in Havana estimated that this number would comprise approximately 7,000 refugees and family members, 8,000 immigrant visa recipients and their families, and 5,000 paroled through the Special Cuban Migration Program--a special lottery. The Cuban government agreed "to prevent unsafe departures using mainly persuasive methods." Within days the Cuban police again were patrolling the roads leading to the beaches, under orders to arrest persons carrying rafts or the materials to build them, and Cubans stopped departing by raft. The United States later began granting parole to certain categories of Cubans in the safe haven camps at Guantanamo Bay. On October 14, 1994, President Clinton announced that parole would be granted to those over age 70, unaccompanied minors, or those with serious medical conditions and their caregivers. On December 2, 1994, the Attorney General announced that parole would be considered on a case-by-case basis for children and their immediate families who would be adversely affected by long-term presence in safe havens. These four categories became known as the "four protocols." On May 2, 1995, the White House Press Secretary announced that Cubans interdicted at sea would no longer be taken to safe haven at Guantanamo Bay but would be returned to Cuba where they could apply for entry into the United States through legal channels at the U.S. Interests Section. 
In discussing this announcement, the Attorney General stated that measures would be taken to ensure that persons who claimed a genuine need for protection, which they believed could not be satisfied by applying at the U.S. Interests Section, would be examined before their return to Cuba. She also announced at that time that remaining Cubans at Guantanamo Bay--about 18,500 as of June 7, 1995--would be considered for parole into the United States, excluding those found to be ineligible for parole due to criminal activity in Cuba, in the United States, or while in safe haven and those with certain serious medical conditions. Within the executive branch, an interagency working group is responsible for developing strategies for implementing the Cuban migration policy. The working group is chaired by the National Security Council and includes representatives from the State Department's Bureaus for Inter-American Affairs and Population, Refugees, and Migration and the Legal Advisor's Office; the Department of Justice's INS and CRS; the Defense Department's Offices of the Secretary of Defense (Humanitarian and Refugee Affairs) and Joint Chiefs of Staff; and the Coast Guard. The U.S. Interests Section in Havana is responsible for processing the more than 20,000 expected Cuban applicants for U.S. entry, annually. As of August 1995, the Interests Section had increased its processing staff to 6 full-time consular officers and about 3 temporary-duty consular officers, 4 INS officers, about 40 local nationals, and 4 U.S. and third country contract hires. Consular officers at the Interests Section process immigrant visa applications and prescreen parole applicants; the Refugee Coordinator prescreens refugee applicants. INS adjudicates refugee and parole applications in Havana and parole applications at Guantanamo Bay. The Defense Department is responsible for carrying out the safe haven program at Guantanamo Bay. 
The Office of the Secretary of Defense and the Joint Chiefs of Staff oversee safe haven operations, and the U.S. Atlantic Command has operational responsibility. Joint Task Force (JTF)-160 executes the safe haven mission at Guantanamo--caring for the inhabitants, providing for their security and protection, and preparing them for travel to the United States. JTF-160 is also charged with the safety and security of U.S. personnel at Guantanamo Bay and the security of the station itself. The U.S. Coast Guard interdicts rafters at sea and, until May 2, 1995, it took them to safe haven at Guantanamo Bay. Since May 2, 1995, most Cubans interdicted at sea have been returned by the Coast Guard to Cuba. Civilian agencies implement various components of the safe haven program. The Department of State's Bureau for Population, Refugees, and Migration provides assistance to the safe haven population at Guantanamo Bay through a grant to the World Relief Corporation. At Guantanamo Bay, CRS assists in parole processing and provides human resource services, such as family reunification, conciliation and mediation assistance and training, and recreation and education. CRS also provides resettlement assistance to parolees when they arrive in the United States. The State Department also maintains an officer in Guantanamo Bay as a liaison with the military and civilian agencies. Other organizations are also involved in Cuban migration operations at Guantanamo Bay. The World Relief Corporation, a nongovernmental organization, provides public health and social services, vocational training, mail services, and coordination of private donations. The International Organization for Migration (IOM), an intergovernmental organization based in Geneva, Switzerland, arranges resettlement for Cubans wishing to migrate to countries other than the United States. 
Pursuant to an agreement with the Cuban government to allow some voluntary repatriation over land rather than flying to Havana, IOM also arranges voluntary repatriation through the station's Northeast Gate. IOM was also working with the remaining Haitians in camps at Guantanamo Bay. Considerable military and civilian personnel resources are at Guantanamo Bay to support the safe haven operation. As shown in table 1, more than 5,000 personnel were providing security and services to Cubans in the safe haven camp at the time of our visit in June 1995. INS planned to increase its personnel to at least 18 by the end of summer to augment parole eligibility determination. IOM, on the other hand, expects to decrease its presence to six as the remaining Haitians are repatriated or allowed entry into the United States. We estimate that the total cost of the U.S. response to the Cuban exodus from August 1994 through fiscal year 1995 will exceed $497 million (see table 2). This represents incremental costs, which are costs that agencies would not have incurred had there been no Cuban migration crisis. Defense costs include procuring construction materials, food, medical supplies, and miscellaneous items for camps at Guantanamo Bay and in Panama; shipping food and supplies; transporting military personnel to the camps and about 500 to 550 parolees to Homestead Air Force Base, Florida, each week; and moving 8,763 Cubans from Guantanamo Bay to Panama in September 1994 and 7,291 back again in February 1995. Defense does not budget for such migrant operations, and it requested a $370-million supplemental appropriation for fiscal year 1995 to minimize the impact of these activities on military operations. Coast Guard expenses cover the costs of patrolling the waters between Cuba and Florida and bringing people to Guantanamo Bay and Cuba. CRS' costs primarily cover resettlement assistance to parolees arriving in the United States (about $31.3 million). 
State Department costs include expanding consular processing in Havana and providing a liaison officer at Guantanamo Bay and a grant to the World Relief Corporation to provide services at the safe haven camps. Our review of the processing workload at the Interests Section indicates that it will process 20,000 applicants for U.S. entry and the additional 6,700 applicants on the waiting list by September 8, 1995--the end of the first year under the agreement. As of June 9, 1995, the Interests Section had approved 16,305 for entry into the United States. This number included 7,693 refugees, 40 paroled family members of refugees, 3,601 immigrant visas, 3,073 paroled family members of immigrant visa recipients, and 1,898 parolees selected through a lottery. An additional 4,451 applicants for immigrant visas who were on the noncurrent preference lists had been approved for parole and 1,269 of their immediate relatives had been issued immigrant visas, pursuant to the September 1994 agreement. From 1996 through 1998, the workload will be somewhat reduced because the 20,000-person requirement will be offset each year for 3 years by up to 5,000 as a result of the May 2 announcement that all eligible Guantanamo Bay camp applicants would be paroled into the United States. Resettlement processing continues, as about 500 to 550 Cubans enter the United States from Guantanamo Bay each week. As of June 27, 1995, 14,746 had been paroled into the United States under the four humanitarian protocols, including 1,270 paroled from the temporary Howard Air Force Base safe haven in Panama from October 1994 through February 1995. Another 622 had returned to Cuba through diplomatic channels, 139 had resettled in third countries, and 1,000 had returned to Cuba on their own, either over land or by water. Sixty Cuban rafters had been interdicted and repatriated to Cuba as of that date, pursuant to the May 2 announcement that such individuals would be returned to Cuba. 
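The approval breakdown above is internally consistent, as a quick check shows (the category counts come from the report; the dictionary and variable names are illustrative):

```python
# Approvals for U.S. entry as of June 9, 1995, by category (figures from the report)
approvals = {
    "refugees": 7_693,
    "paroled family members of refugees": 40,
    "immigrant visas": 3_601,
    "paroled family members of visa recipients": 3_073,
    "lottery parolees": 1_898,
}

total = sum(approvals.values())
print(total)  # 16305, matching the report's overall approval figure
```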
At the time of our visit to Guantanamo Bay, 18,802 Cubans remained in the camps. Of these, 5,856 qualified for parole under the four protocols, and 12,946 were eligible to apply for parole consideration under the May 2 announcement. The INS officer-in-charge noted that JTF-160 had compiled about 4,500 camp incident reports involving camp infractions that INS staff will review for impact on individual parole eligibility. However, INS estimates that only a small number of those involved will be ineligible for parole. While detention in safe haven camps is undoubtedly difficult, our review at the Guantanamo Bay camps indicated that living conditions were adequate. While we found no internationally accepted criteria for minimal refugee living standards, we noted that the U.S. Atlantic Command had developed standards for safe haven conditions based on inspection guidelines of the United Nations High Commissioner for Refugees (UNHCR) and standard military regulations and manuals' requirements. The Command developed a camp construction model for migrant operations based on a population of 10,000 that could be adapted for population changes and issued corresponding operational guidelines, including camp organization, services, construction, and logistics. We found that conditions generally met or exceeded Atlantic Command standards and UNHCR inspection guidelines. For example, minimal UNHCR inspection guidelines include 3.5 square meters of living space per migrant. Using this as guidance, the Command recommended using medium-sized tents to house up to 15 Cubans. We found no indication that these tents housed more than 15 persons. Camp conditions have improved since the influx of Cubans in the summer of 1994, due to decreasing population density and a Defense Department "Quality of Life" facilities upgrade. In late August 1994, thousands of people were arriving daily at the Guantanamo Bay camps. 
Together with about 12,000 Haitians, the camps' population totaled about 45,000 in September 1994. At that time, living conditions were marginal, according to Atlantic Command officials, as JTF-160 was erecting tents and installing portable toilets as quickly as people arrived. Crowded conditions began easing as most Haitians were repatriated to Haiti following President Aristide's return in October 1994, and more than 8,000 Cubans were relocated to safe haven camps in Panama for 6 months. Also, Cubans began leaving via parole following the October and December protocol announcements. The Defense Department had intended to spend almost $35 million to upgrade facilities to accommodate a longer term camp operation. However, the May 2 announcement that most camp inhabitants would be eligible for parole lessened the urgency to improve conditions. As a result, the Defense Department spent about $25.3 million for its upgrade program. Not all camps were upgraded; some camps were scheduled to be disassembled as populations decreased. Upgrades included elevated hardback tents, plumbing, tension fabric structures as multipurpose buildings, and electricity. (See figs. 1 through 3.) In general, those who are expected to be paroled in late 1995 and early 1996 are located in the newer camps. Those eligible for parole under the first four protocols are scheduled to leave by the end of summer 1995 and, for the most part, are located in the camps that have not been upgraded (see fig. 4). In addition to adequate shelter, camp residents receive breakfast, a hot dinner prepared by Cuban cooks, and Meals Ready to Eat (MRE) for lunch. Cubans with whom we spoke said that the food was better than when they first arrived, when they mostly received rice. They also receive medical treatment at camp clinics and in military medical and surgical units as necessary. Recreational activities include baseball, basketball, pool, ping-pong, movies, music, arts and crafts, and libraries. 
In addition, adults can attend English and vocational classes coordinated by World Relief. Most children have left the camps, but the few remaining receive basic schooling organized by CRS. Many of these services are provided by camp residents with special skills. Security is professional but not overtly oppressive. Camp residents are relatively free to move around within camp areas. When they first arrived, the Cubans were restricted to smaller areas behind razor concertina wire. According to military personnel, tensions have eased since the May 2 announcement that the Cubans would not be detained indefinitely but could apply for parole. Although by September 1995 the Interests Section will likely have processed for U.S. entry the 20,000 Cubans called for in the September 9, 1994, agreement as well as the 6,700 on the noncurrent immigrant visa preference list, it is unlikely that this number will travel to the United States by that date. Of the 7,693 refugees approved for travel, only 1,494 had left as of June 9, 1995. While this partly reflects the normal lag in obtaining sponsorship for approved refugees, the relatively small number who have left also reflects the adverse impact of steep Cuban government-imposed air fare increases and fees for migration-related services. In February 1995, the Cuban government raised the one-way fare from Havana to Miami from $150 to $990. When the rate was increased, the Interests Section refused to pay the higher amount and negotiated rates with commercial airlines for regularly scheduled flights to Miami through Mexico and Costa Rica. The number of such seats was limited, resulting in 5,267 refugees waiting to travel at the time of our visit. The remaining 932 refugees had been adjudicated but had not yet obtained all documents required for travel. Unlike refugees, immigrant visa holders and parolees must arrange and pay for their own transportation to the United States. 
Because they arrange their own travel, the Interests Section does not track the number that has departed from Cuba. Although INS will report in 1997 on numbers of Cubans coming through U.S. ports of entry in 1995, these numbers will reflect country of nationality, not country of departure. The U.S. government repeatedly voiced its concern to the Cuban government about the exorbitant airfare. Cuba agreed to lower the fare; however, it also imposed additional fees in June 1995, including $400 for the medical examination required for all people seeking U.S. entry ($250 for children), $250 for an exit permit and related documents, and $50 for a passport. U.S. officials told us that they believe that some fees for these previously free services may be reasonable, but the fees imposed (even with reduced air charters) will pose serious obstacles for Cubans seeking to emigrate. At the time of our visit to Guantanamo Bay, the backlog of those approved for travel from there was estimated by INS at about 1,200. Parolees leave Guantanamo Bay on three charter flights each week, and depending on the size of the aircraft, 500 to 550 parolees depart each week. At this rate, the camps should be empty by March 15, 1996. However, the availability of transportation is not the limiting factor in more rapidly reducing the camps' population. Despite the backlog and the continuing cost to operate the camps, the weekly departure rates are not expected to increase. According to Defense, State, and Justice officials, state of Florida officials maintain that the state can accommodate no more than 550 parolees per week. Defense, State, and Justice officials said that senior Clinton administration officials have agreed not to exceed that figure. According to Defense Department officials, if departures could be accelerated to 690 per week, they could empty the camps by December 15, 1995, and save about $22.2 million in operating expenses. 
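The departure timetable above follows from straightforward rate arithmetic. A minimal sketch, assuming a mid-June 1995 starting point (the camp population, weekly rates, and target dates come from the report; the exact start date and the helper function are illustrative):

```python
from datetime import date, timedelta

def weeks_to_empty(population, weekly_rate):
    """Whole weeks needed to move `population` out at `weekly_rate` per week."""
    return -(-population // weekly_rate)  # ceiling division

CAMP_POPULATION = 18_802      # Cubans remaining at the time of GAO's June 1995 visit
VISIT = date(1995, 6, 15)     # approximate visit date (assumption)

# At the agreed ceiling of roughly 500-550 parolees per week:
weeks_at_ceiling = weeks_to_empty(CAMP_POPULATION, 500)
print(VISIT + timedelta(weeks=weeks_at_ceiling))  # early March 1996, near the report's March 15 estimate

# At the accelerated 690-per-week rate Defense officials cited:
weeks_accelerated = weeks_to_empty(CAMP_POPULATION, 690)
print(VISIT + timedelta(weeks=weeks_accelerated))  # late December 1995, near the December 15 date cited
```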
The Departments of Defense, Justice, and State provided oral comments on this report. Their technical comments have been incorporated where appropriate. State Department officials suggested that it would have been useful to have compared costs incurred with those that might have been incurred by both the federal and state of Florida governments had no action been taken to stem the flow of Cubans to the United States. Such an analysis may be interesting, but it was not within the scope of work we were requested to perform. Furthermore, such an analysis would be highly subjective because the cost would depend on many unknown factors, such as the number of Cubans who would have fled to the United States had no action been taken to stem the flow and what benefits and services would have been provided. Also, we found no evidence that the decision to reverse a 30-year policy of welcoming fleeing Cubans to the United States was based on cost considerations. We identified U.S. policies toward Cubans seeking U.S. entry through discussions with State, INS, and CRS officials and a review of documentation such as agreements with the Cuban government, joint communiques, administration announcements of parole and safe haven positions, and pertinent legislation. To determine the processing capabilities of the Interests Section, we interviewed INS officials in Washington, D.C., and visited the Interests Section in Havana. In Havana, we discussed with consular, INS, and senior post officials the various screening and adjudication processes for refugees, immigrants, and parolees; reviewed sample case files; and observed ongoing screenings. To determine living conditions at Guantanamo Bay, we visited the U.S. Atlantic Command in Norfolk, Virginia, to discuss its oversight of migrant operations and how it developed criteria for living standards.
We also visited Guantanamo Bay, where we observed camp conditions, examined the parole processing procedures, and monitored the weekly meeting with the JTF Commander and the Cuban representatives from each camp. In addition, we met with JTF-160 operations and logistics officers and officials from CRS, INS, State, World Relief Corporation, IOM, and UNHCR to discuss their activities at Guantanamo Bay. To determine program costs, we obtained estimated actual and projected Cuban migrant program incremental cost data for fiscal years 1994 and 1995 from the Departments of Defense and State, INS, CRS, and the Coast Guard. We did not verify the accuracy of the agencies' estimates. We conducted our review between April and August 1995 in accordance with generally accepted government auditing standards. Unless you announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to the Departments of State, Defense, and Justice and to interested congressional committees, and to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-4128. Major contributors to this report were David R. Martin, Assistant Director, and Audrey E. Solis, Senior Evaluator. Harold J. Johnson, Director, International Affairs Issues
Pursuant to a congressional request, GAO reviewed the U.S. government's actions to address the 1994 Cuban migration crisis, focusing on: (1) how U.S. policy toward those seeking to leave Cuba has changed since that time; (2) the agencies involved with, and costs to the government associated with, the exodus of Cubans; (3) the capabilities of the U.S. Interests Section in Havana to process applicants seeking legal entry into the United States; and (4) the adequacy of living conditions at the Cuban safe haven camps at the U.S. Naval Station in Guantanamo Bay. GAO found that: (1) for over 30 years, fleeing Cubans had been welcomed to the United States; however, the U.S. government reversed this policy on August 19, 1994, when President Clinton announced that Cuban rafters interdicted at sea would no longer be brought to the United States and would be taken to safe haven camps at the U.S. Naval Station, Guantanamo Bay, Cuba, with no opportunity for eventual entry into the United States other than by returning to Havana to apply for entry through legal channels at the U.S. Interests Section; (2) on September 9, 1994, the U.S. and Cuban governments agreed that the United States would allow at least 20,000 Cubans to enter annually in exchange for Cuba's pledge to prevent further unlawful departures by rafters; (3) on May 2, 1995, a White House announcement was released stating that: Cubans interdicted at sea would not be taken to a safe haven but would be returned to Cuba where they could apply for entry into the United States at the Interests Section in Havana, eligible Cubans in the safe haven camps would be paroled into the United States, and those found to be ineligible for parole would be returned to Cuba; (4) several U.S. agencies have been involved in implementing the U.S.
policy regarding Cubans wishing to leave their country, including the: (a) Department of Defense, which will spend about $434 million from August 1994 through September 1995 operating the safe haven camps, (b) U.S. Coast Guard, which spent about $7.8 million interdicting Cubans at sea from August 1994 to the present, (c) Department of Justice's Immigration and Naturalization Service and Community Relations Service, which together will spend about $48.6 million for the Cuban migration crisis from August 1994 through September 1995, and (d) Department of State, which will spend an estimated $7.1 million during this same period; (5) the U.S. Interests Section in Havana has been able to meet the workload of processing applicants seeking legal entry into the United States; (6) as of June 9, 1995, it had approved 16,305 Cubans for U.S. entry; however, not all those approved will leave Cuba by September 1995, the anniversary of the September 1994 agreement; (7) the Cubans' living conditions at the Guantanamo Bay safe haven camps are difficult but adequate, based on GAO's observations at the camps; and (8) conditions in all camps generally exceeded U.N. inspection guidelines for minimal shelter, food, and water, but GAO found no internationally accepted standards for what living conditions at refugee camps should be.
EPA does not actively seek out sites for the Superfund program but relies on states or interested parties to report them. Once reported, sites are added to EPA's inventory for evaluation. As of March 1994, EPA's inventory had 36,785 nonfederal sites, of which 1,192 had been placed on the National Priorities List. Evaluation of potentially hazardous sites occurs in several stages. At the completion of each stage, EPA may determine that no federal action is needed or it may proceed to the next stage. First, EPA requires that a site receive a preliminary assessment within a year of its entry into the inventory. The preliminary assessment involves a review of available documents and possible site reconnaissance. If the preliminary assessment indicates a potential problem, the site moves to the next stage of evaluation--the site inspection--which involves collecting and analyzing soil and water samples as appropriate. If warranted by the results of the site inspection, sites enter the final decision process. This process involves other evaluations, including an extended site inspection if needed, scoring under EPA's hazard ranking system, and a judgment by EPA officials on the appropriateness of listing the site on the priorities list. An extended site inspection requires more samples and could involve installing wells to monitor groundwater or other nonroutine data collection activities. The hazard ranking system is a method of quantifying the severity of site contamination to determine if a site should be placed on the list. The system assigns a numerical score based on the likelihood that a site has released or has the potential to release contaminants into the environment, the characteristics of the contaminants, and the people or environments affected by the release. A site must score at least 28.5 on the hazard ranking scale in order to be placed on the list.
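The 28.5 cutoff can be illustrated with a small sketch. Under the revised hazard ranking system, scores for the four exposure pathways (each 0-100) are combined by a root-mean-square rule; the pathway values below are invented for illustration, and this sketch deliberately omits how each pathway score is itself built up from release likelihood, waste characteristics, and affected targets:

```python
import math

def hrs_site_score(groundwater, surface_water, soil, air):
    """Root-mean-square of the four pathway scores (each assumed to be 0-100)."""
    pathways = [groundwater, surface_water, soil, air]
    return math.sqrt(sum(p * p for p in pathways) / len(pathways))

LISTING_CUTOFF = 28.5  # minimum score for placement on the National Priorities List

# A single badly contaminated pathway can be enough to exceed the cutoff:
score = hrs_site_score(groundwater=60, surface_water=0, soil=10, air=5)
print(round(score, 1), score >= LISTING_CUTOFF)  # 30.5 True
```

The root-mean-square combination means one severe pathway dominates the result, so a site with serious groundwater contamination alone can qualify for the list.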
Sites can be dropped from further consideration following the extended site inspection or the scoring process. Sites also can be dropped from further consideration if, in the judgment of EPA regional officials, the sites do not pose risks great enough to warrant a Superfund cleanup. In addition to the sites following the process described above, the EPA inventory includes a large group of sites that have already been inspected but are awaiting reevaluation because of a change in the evaluation process. The Superfund Amendments and Reauthorization Act of 1986 required EPA to revise its evaluation system to make it more comprehensive and accurate in its assessment of threats to human health and the environment. According to EPA site assessment officials, the revision will change the mix of sites, but not necessarily the number of sites, that will end up on the priorities list. The revision was effective in March 1991. During the transition to the revised system, sites were evaluated through the site inspection stage using the original evaluation system. However, EPA decided to use the new system to make final decisions about placing these sites on the priorities list. In October 1991, EPA began to reevaluate these 6,467 sites, which it referred to as its evaluation backlog. Reevaluation could include collecting additional site information as well as limited sampling. As of the close of fiscal year 1993, EPA had completed this process for about 1,600 of the 6,467 sites. Fewer sites are being reported to EPA for evaluation, but site inspection results indicate that new sites reaching the site inspection stage are as likely to have contamination requiring a Superfund cleanup as those inspected in the past. The number of sites reported annually has been declining since fiscal year 1985. (See fig. 1.) In fiscal year 1993, 1,159 sites were added to the inventory--29 percent less than the prior year and 68 percent less than in fiscal year 1985. 
EPA attributed the decline since 1985 to the fact that many states now have their own Superfund programs. According to EPA site assessment officials, states are reluctant to report new sites, preferring instead to manage the cleanup themselves. EPA Region I site assessment officials suggested that states generally report sites that present challenging enforcement or cleanup problems. The percentage of sites that EPA believes warrant further consideration after completing the site inspection has been fairly steady for the last 10 years. (See fig. 2.) From program inception through fiscal year 1993, 43 percent of the 17,556 sites inspected were considered hazardous enough to need further consideration for the priorities list. In fiscal year 1993, 43 percent of the 725 sites inspected were also considered for further action. (App. II provides statistics on the number and percent of nonfederal sites accepted and rejected for further consideration after site inspection.) EPA officials do not expect to find in the future very large, heavily contaminated sites equivalent to Love Canal, which entered the Superfund program early in its history. However, the officials believe that contamination at newly discovered sites is generally not less severe than at previously reported sites--just less obvious. Earlier site discoveries more often included sites where the hazards were visible, such as barrels of hazardous waste above ground. Sites that are being discovered and reported now, according to EPA officials, are those with less obvious--but equally serious--problems, such as groundwater or drinking water contamination. Recent estimates of the future size of the Superfund workload have differed. In congressional testimony in February 1994, EPA forecast the smallest increase--1,700 new sites. In a report dated January 1994, CBO predicted 3,300 new sites through 2027, although it said that a wide range of additions was possible. 
EPA's Inspector General in a January 1994 report estimated that 3,000 of the 6,467 sites in the agency's evaluation backlog could be added to the Superfund. In February 1994 congressional testimony, EPA's Administrator testified that the Superfund National Priorities List could grow to about 3,000 federal and nonfederal sites, or roughly 1,700 more sites than are currently on the list. According to EPA officials, this estimate was based on an internal agency analysis prepared by the Office of Emergency and Remedial Response. The Office prepared low, medium, and high estimates, and EPA based its testimony on the medium estimate. (See app. III for a detailed breakdown of EPA's estimates.) EPA's estimates treated current and future inventory sites differently. In EPA's medium estimate, 6.5 percent of the currently reported sites were estimated to become Superfund sites compared with 3.5 percent of the sites that will be reported in the future. The inventory of reported sites was estimated to grow by 20,500 sites by the year 2020, or 54 percent more than at present. The estimate projected that the number of sites added to the inventory each year would decline from 1,500 sites in fiscal years 1994 through 1999 to 500 sites in fiscal years 2010 through 2019. EPA officials said that they based the decline on less state reporting, not on the existence of fewer sites that could be reported. CBO's estimate of potential future Superfund additions was developed in two parts. (See app. V.) First, CBO estimated the number of sites that would be reported to EPA's inventory of potential hazardous waste sites by developing trend lines based on the number of sites reported from 1981 to 1992. Because of the data's variability, CBO developed a base case, or most probable scenario, and low- and high-case scenarios. In the base case, CBO estimated that 25,394 sites would be added to the inventory by the year 2027. This estimate was about 5,000 sites higher than EPA's medium estimate. 
In the low and high cases, CBO estimated that 15,151 and 50,000 sites, respectively, would be added. Second, to determine the percentage of reported sites that would ultimately be placed on the priorities list, CBO relied on EPA staff's opinion since, according to CBO's report, usable site evaluation data were not available. When asked by CBO, EPA staff estimated that between 5 and 10 percent of all future inventory sites would be placed on the priorities list. CBO chose 8 percent for its base-case estimate and applied this rate to current and future inventory sites. For its own medium forecast, EPA estimated that 6.5 percent of the current inventory and 3.5 percent of the sites added to the inventory in the future will be placed on the priorities list. CBO's base-case estimate, after adjustment to eliminate federal sites, resulted in adding 3,300 more sites to the priorities list. The range of additional sites for the low- and high-case scenarios was between 1,100 and 6,600 sites. EPA's Inspector General estimated that 3,136 sites in the evaluation backlog could move to the priorities list. This estimate was made as part of a study of EPA's processing of these backlogged sites. At the time of the Inspector General's review, EPA had evaluated only 942 of the 6,467 sites. To estimate the number of potential sites for the priorities list, the Inspector General determined the proportion of sites evaluated in each region that were found to warrant consideration for the priorities list. The Inspector General then applied these proportions to the total number of backlogged sites in each region and added the regional numbers. The Inspector General reduced the total to account for an estimated proportion of sites that drop out in the final decision process. More recent data suggest that the Inspector General's estimate may be somewhat high. 
According to EPA's site evaluation staff, the Inspector General's estimate of 3,136 additional sites is high since it assumed that in the future, 52 percent of the sites in the backlog could move beyond the site inspection stage, the rate prevailing when the Office of Inspector General did its study. However, data for fiscal year 1993, available after the Inspector General completed the study, showed that the percentage of the backlogged sites warranting priorities list consideration had dropped to 28 percent. The number of future Superfund sites cannot be predicted with certainty. However, data from an EPA study of potential U.S. hazardous waste sites and our own analysis indicate that, assuming no major restructuring of the program, EPA's estimate of 1,700 additional future Superfund sites is conservative. The CBO estimate, especially the upper bounds of that estimate, may be a better predictor of potential program growth. Given the limited pace of site cleanup by the Superfund program to date, any of the increases in Superfund's size discussed in this report may be difficult for the program to manage. A September 1991 EPA analysis estimated that 58,000 sites could be added to the inventory in the future. When EPA made this estimate, it already had 34,618 sites in its inventory, for a combined total of 92,618 sites. This total is almost 6,000 sites more than CBO's high-case scenario estimate for the number of sites that would be in the inventory by 2032 and 1-1/2 times as high as the upper-bound estimate by EPA for the size of the inventory by 2020. Both CBO and EPA based their estimates on the number of sites expected to be reported under current EPA and state policies, not on the number that could be reported. The 58,000-site estimate, on the other hand, is for sites that could be reported. The estimated 58,000 sites consisted of sites that were assessed as having a high or moderate hazard potential. 
The estimate was developed from estimates for 12 individual industries provided by EPA divisions familiar with them. Each industry estimate was based on an analysis of data and judgment by EPA officials. Most of the sites were in one of the following categories: Resource Conservation and Recovery Act industrial process waste facilities, municipal solid waste landfills, off-site oil and gas waste management facilities, and large-quantity hazardous waste generators. EPA officials familiar with seven of the major categories, accounting for 93 percent of the 58,000 sites, told us that the results are still valid. The officials said that the study's figures represent the best estimates of the potential number of sites that could be added to the inventory in the future, although one official believed that the number of treatment, storage, and disposal facilities was overstated by 2,000 sites. The officials said that in no case did an actual inventory of potential sites exist. Our analysis indicates that between 10 and 11 percent of the currently reported nonfederal sites could become Superfund sites. This percentage is greater than the 6.5 percent indicated in EPA's medium estimate and is closer to CBO's 10 percent high-case estimate. As of September 30, 1993, EPA had completed evaluation for 26,026 of the 35,782 nonfederal sites in its inventory. The remaining 9,756 sites were in various stages of evaluation: 930 sites were awaiting final listing decisions, 4,892 backlog sites were awaiting final evaluation, 2,373 sites were awaiting site inspection, and 1,561 sites were awaiting preliminary assessment. If 1993 screening rates for these categories, as described in appendix IV, were to continue into the future, 2,497 to 2,799 of the 9,756 sites could become Superfund sites. Adding this range to the 1,177 sites already on the priorities list would result in a total estimate of 3,674 to 3,976 Superfund sites, or 10 to 11 percent, of the 35,782 inventoried sites. 
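The arithmetic behind the 10-to-11-percent figure can be reproduced directly from the totals above. This sketch takes the 2,497-to-2,799 range (the result of applying the 1993 screening rates described in appendix IV) as given rather than re-deriving it.

```python
# Reproducing the projection from the report's own totals.
inventory = 35_782        # nonfederal sites in EPA's inventory, 9/30/1993
already_listed = 1_177    # nonfederal sites already on the priorities list
low_add, high_add = 2_497, 2_799   # projected additions from the 9,756 sites still in evaluation

low_total = already_listed + low_add      # 3,674
high_total = already_listed + high_add    # 3,976
low_pct = 100 * low_total / inventory     # about 10 percent
high_pct = 100 * high_total / inventory   # about 11 percent
```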
The Acting Deputy Director for EPA's Hazardous Site Evaluation Division believed that the 1993 evaluation rates were a reasonable basis for forecasting future Superfund additions from the current inventory. We also recognize, however, that certain factors make estimates of the number of future Superfund sites subject to substantial uncertainties. First, the rate at which sites move through the assessment process onto the priorities list may change in the future, making projections based on past rates inaccurate. Also, proposed legislation to reauthorize Superfund, which has been considered by the Congress, contains provisions to encourage parties responsible for hazardous waste sites to clean them up outside of the regular Superfund program and to authorize states, in cooperation with EPA, to assume certain cleanup responsibilities. These changes could reduce the number of sites that EPA would have to manage in the Superfund program. Any of the estimates discussed in this report suggest that EPA will be challenged by its future Superfund workload. In the 14-year history of the program through July 1994, Superfund has completed the construction of remedies (such as the installation of groundwater pumps and filters) at 234 of the 1,300 federal and nonfederal Superfund sites. Two years ago, EPA estimated that 650 sites would reach the construction-completed stage by the year 2000. At these completion rates, it could take many decades for Superfund to clean up its current inventory and future additions to the inventory. Although EPA has recently developed new procedures to speed up the cleanup process, it is too early to tell what impact they will have on the overall pace of the program. As agreed with your offices, we did not obtain written agency comments on a draft of this report. However, we discussed the contents of this report with program officials from EPA's Office of Emergency and Remedial Response (Superfund). 
EPA's Acting Site Assessment Branch Chief said that the facts presented in this report were balanced, fair, and accurate. He also said that program changes under consideration by the Congress and EPA, such as proposals to increase the states' cleanup role, could significantly reduce the number of sites to be added to the Superfund program. We conducted our work at EPA headquarters in Washington, D.C., and at its regional offices in Boston (Region I), Chicago (Region V), and Denver (Region VIII). We selected these regions because they presented a cross-section of Superfund activity and were geographically diverse. We obtained and reviewed recent reports and studies on the future size of the Superfund workload. We obtained and analyzed site inventory statistics on preliminary assessment and site inspection processing since program inception through the first quarter of fiscal year 1994. We interviewed EPA headquarters officials and program management officials in three EPA regional offices, as well as environmental protection officials in two states, about Superfund site discovery and evaluation. We reviewed the relevance and appropriateness of studies conducted by CBO, EPA, and EPA's Office of Inspector General and interviewed EPA program officials on the status of major site categories that could affect the Superfund site inventory. We performed our work in accordance with generally accepted government auditing standards between August 1993 and July 1994. As arranged with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to other appropriate congressional committees; the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. 
Please contact me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.

[Appendix table: for each category of inventory sites (sites evaluated and not placed on the priorities list, sites evaluated and placed on the priorities list, sites awaiting final listing decision, backlogged sites awaiting final evaluation, and a subtotal of sites still to be evaluated), the number of sites in inventory (column A), the percentage of sites that could be listed (column B), and the range of sites that could be listed, with the overall percentage of sites that could be placed on the priorities list. Table notes on next page.]

[Appendix table: number of federal and nonfederal inventory sites, placement rate (percent), and the estimated priorities list size before and after rounding.]

Bruce Skud, Senior Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015, or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) Superfund Program, focusing on: (1) trends in the number of reported hazardous waste sites; (2) EPA evaluation of potential contamination at these sites; and (3) recent estimates of the program's future growth. GAO found that: (1) the number of sites reported each year has steadily declined since 1985, primarily because the states believe that they can handle cleanups more efficiently and prefer to do the cleanups themselves; (2) states generally report sites that present challenging enforcement or cleanup problems; (3) the percentage of seriously contaminated sites among those reported has remained constant at 43 percent over the past 10 years; (4) EPA officials believe that contamination at newly discovered sites is not less severe, just less obvious; (5) EPA believes 1,700 new federal and nonfederal sites could be added to the National Priorities List through the year 2020; (6) the Congressional Budget Office believes that 3,300 new nonfederal sites could be added to the list through the year 2027; (7) the future Superfund workload could be higher than EPA estimated; and (8) any additions to the Superfund program will be difficult for EPA to manage.
FAA conducted a series of analyses to identify the most cost-effective way to use the radar data from Grand Junction. On the basis of the results of a 1992 study, FAA decided that building a TRACON facility at Grand Junction was less costly than remoting the radar signal from Grand Junction to Denver. However, in May 1994 FAA conducted another cost analysis that factored in the use of a new technology for remoting radar signals known as video compression. The results of this analysis showed that it would be less costly to remote the radar signal from Grand Junction to Denver, and in August 1994, FAA announced its choice of the less costly option. FAA's decision to remote the radar signal also means that the tower at Grand Junction will be operated by a contractor. FAA's decision to provide approach guidance to aircraft through the Denver TRACON dictates that the Grand Junction tower be classified as a level-1 tower that operates using visual flight rules (VFR). In 1993, the House and Senate Appropriations Committees directed FAA to contract out all level-1 VFR towers to the private sector. In March 1995, Grand Junction community leaders and local air traffic controllers met with FAA to outline their concerns about FAA's analyses and conclusions. The major concerns of the controllers and the city's representatives were (1) the accuracy and completeness of the cost comparisons between the two options and (2) the considerations about safety and efficiency associated with remoting radar signals and contracting out a tower's operations. FAA agreed to conduct a new study that would consider two options--(1) a local option that would establish either a TRACON or a TRACAB at Grand Junction or (2) a long-distance option that would remote the radar signal to Denver--and found once again that remoting the radar signal to Denver was the most cost-effective option and that it would not compromise the system's safety and efficiency. 
FAA's 1995 analysis of the costs of establishing a new TRACAB facility at Grand Junction or remoting the radar data to Denver was based on a comparison of the costs for facilities and equipment, telecommunications, staffing, and relocating staff over the 20-year life cycle of the project. FAA estimated that the cost of remoting the signal to the Denver TRACON would be about $9.4 million, while the cost of establishing a TRACAB in Grand Junction would be about $12.8 million, a difference of about $3.4 million. FAA also estimated that an additional $2.5 million would be saved over the same 20-year period by contracting out the tower at Grand Junction. According to FAA's estimates, these two actions would save about $5.9 million. To verify whether FAA chose the most cost-effective option for providing radar approach control to the Grand Junction airport, we performed an independent cost analysis of FAA's 1995 study. While we agree that FAA's analysis identified the most cost-effective option, FAA did not take into account three factors that, in our opinion, are valid in evaluating the options studied. When these factors are considered, FAA's total projected savings attributable to remoting and contracting out the tower operation at Grand Junction are reduced by about $500,000, from $5.9 million to $5.4 million. The principal findings from our analysis are summarized below. (See app. I for a detailed presentation of our analysis.) FAA did not include a cost for establishing telephone lines between Grand Junction and Denver under the remoting option. The overlooked annual cost of telephone lines was $107,500, or $853,000 in 1995 dollars when discounted over the 20-year life cycle of the project. We revised FAA's estimated total telecommunications cost under the remote option upward by $853,000, from $618,000 to $1,470,000.
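FAA's savings arithmetic, and the kind of present-value discounting applied to the annual telephone cost, can be sketched as follows. The 7 percent discount rate is an assumption; the report does not state the rate FAA used, so the computed present value illustrates the method rather than reproducing FAA's $853,000 figure exactly.

```python
# Sketch of the 20-year life-cycle comparison. The discount rate is an
# assumption, not taken from the report.
def present_value(annual_cost, years=20, rate=0.07):
    """Discount a constant annual cost stream to year-0 dollars."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

remote_cost = 9.4e6        # FAA estimate: remote the signal to the Denver TRACON
tracab_cost = 12.8e6       # FAA estimate: build a TRACAB at Grand Junction
contract_savings = 2.5e6   # additional savings from contracting out the tower

total_savings = (tracab_cost - remote_cost) + contract_savings
print(total_savings / 1e6)   # 5.9 (million dollars)

# The overlooked annual telephone-line cost, discounted over the life cycle:
phone_pv = present_value(107_500)
```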
FAA overestimated the cost of staffing under each of the options studied because the agency used authorized staffing levels--even though the positions were often unfilled. Using staffing levels that more closely approximate actual levels in the Northwest Mountain Region, we estimate that the annual staffing cost would be lower by $147,600 (about $1.82 million over 20 years) for the TRACAB option and by $168,900 (about $2.091 million over 20 years) for the remote option. The net effect of these changes increases the savings attributable to remoting by about $271,000 over 20 years. Moreover, when using staffing levels that more closely approximate actual levels in the field, we estimate that the TRACAB option's staff relocation and training costs would be lower and further reduce the savings attributable to the remote option by $174,000. FAA underestimated the savings associated with contracting out the air traffic control functions at Grand Junction. We estimate that contracting out saves about $2.7 million--or about $218,000 more than FAA estimates--over 20 years after factoring in FAA's previous experience with contractor-operated towers and the additional costs of relocating the Grand Junction controllers who choose not to work for the contractor. The representatives of the city of Grand Junction expressed concern that by remoting the radar signal to Denver and by contracting out a tower's operation, FAA jeopardizes the safety and the efficiency of the air traffic control system at the Grand Junction airport. Specifically, the representatives questioned the implications for safety and efficiency of transmitting radar data over 250 miles and having Denver controllers provide Grand Junction's radar approach control. The city's representatives also questioned the safety and efficiency implications of contracting out Grand Junction's tower. 
We discussed remoting and considerations about the safety and efficiency of a contractor-operated tower with officials at FAA headquarters and at FAA's Northwest Mountain Region, who have jurisdiction over the Grand Junction and Denver areas. We also discussed these issues with officials from major aviation-related associations. According to the air traffic officials in FAA's Northwest Mountain Region, the agency has successfully transmitted radar data hundreds of miles to its enroute centers for the past 30 years without compromising or affecting the system's safety. Because FAA's ability to transmit radar data over 250 miles of mountainous terrain was a concern to the Grand Junction representatives, we reviewed FAA's information on the reliability and availability of radar data transmissions. The information showed that the reliability and availability of the transmissions averaged 99.98 percent nationally over the past 5 years and that they were unaffected by mountainous terrain. According to FAA and aviation association officials, a controller's physical location is not a safety issue, and controllers routinely control air traffic safely without having visual contact with other air traffic controllers. The critical issue is that information be exchanged in a timely manner, not that two individuals be in visual proximity. Moreover, FAA officials told us that when normal modes of communication are disrupted, the agency adjusts its operating procedures--such as transferring the control of air space to an enroute center or using nonradar approaches--to ensure the timely flow of information. The city's representatives believed that remoting caused traffic delays at the Grand Junction airport because Denver controllers were not trained to manage the airport's air traffic. 
According to FAA Air Traffic officials in the Northwest Mountain Region, Grand Junction incurred initial start-up problems similar to those that other facilities incurred when FAA began to remote radar data. To eliminate these problems, FAA provided refresher briefings to the Denver controllers on managing Grand Junction's air traffic. Grand Junction air traffic controllers told us that the Denver controllers are now efficiently managing this air traffic and delays are no longer a problem. According to the aviation association officials, their members had not raised any concerns about efficiency associated with FAA's remoting of radar data. In connection with private-sector controllers under contract to FAA, the manager of FAA's contract tower program told us that contract controllers are as well trained as FAA controllers. He provided documentation showing that contract controllers average 18 years of experience. The program manager also told us that contract controllers are certified by FAA and operate under the same regulations as FAA controllers. Additionally, officials representing various aviation associations told us that their members were provided with safe and efficient services by both FAA-operated and contractor-operated towers. As a result, these officials told us that they had no reason to question the safety and efficiency of FAA's contract tower program. The concerns raised by representatives of the city of Grand Junction have also been raised by citizens' groups in other communities where FAA has proposed to consolidate facilities and contract out a facility's operation. That other communities had similar concerns leads us to believe that FAA can do a better job of communicating the reasons for its future decisions on consolidating facilities. 
The issues and concerns raised by the city's representatives--the reliability of cost data and the safety and efficiency of the airport--were similar to those raised in 1994 by a Yakima, Washington, citizens' group that also questioned an FAA remoting decision. In both the Grand Junction and the Yakima projects, FAA took a relatively ad hoc approach in deciding whether to remote radar data. In both cases, our review showed that while FAA chose the most cost-effective option, it did not include all relevant cost factors in its savings computation and did little to communicate the rationale for its decision to the affected communities, thereby contributing to subsequent misperceptions by community representatives. We did not find any standard FAA guidance for officials to follow or analytical model for them to use when deciding what costs to include, how to compute those costs, and what documentation to maintain when analyzing candidate facilities for consolidation. In June 1996, FAA issued a report that identifies the types of information to be considered in deciding whether to establish or consolidate TRACON facilities; however, the report does not specify how the various factors will be computed in the decision-making process. In the absence of standard guidance or an analytical model, FAA patterned its Grand Junction studies after earlier remoting efforts. Officials in FAA's Air Traffic Plans and Requirements Program said that the agency uses this approach because each potential consolidation and remoting situation is unique. However, this approach has led to the agency's omitting certain telecommunications costs and not reflecting the more realistic scenarios for staffing facilities and has raised concerns in the affected communities. These types of process problems can have the effect of undermining the agency's credibility, discouraging the community from accepting FAA's decision, and delaying implementation plans and the realization of projected cost savings. 
While FAA chose the most cost-effective way to handle radar data for Grand Junction and Yakima, in both instances it overlooked relevant cost factors. Furthermore, in both cases FAA's decisions were challenged by the affected communities, thereby contributing to delays in implementing the decisions. A more structured decision-making process, based on formal guidance and an analytical model, could ensure that FAA considers all relevant factors when making a remoting decision. A more structured decision-making process could also help FAA defend its decisions to communities that protest the closure of an FAA-staffed facility. As FAA continues to remote radar data and consolidate facilities, it is to FAA's advantage to develop and implement a more structured decision-making process in conjunction with key stakeholders. We recommend that the Secretary of Transportation direct the Administrator, Federal Aviation Administration, to develop formal guidance and an analytical model for making its remoting decisions. The guidance should outline what costs to include, how those costs should be computed, and what documentation is required to support the analysis. It should also provide for early and continuous involvement of the major stakeholders, especially the affected communities. We provided a draft of this report to the Department of Transportation for review and comment. We met with officials of the Department, including FAA's Program Director for Air Traffic Plans and Requirements Program, who agreed with the draft report's conclusions and recommendation. The Program Director said that FAA does not normally conduct the level of analysis we recommended because of the wide difference in costs between remoting radar data and establishing a local terminal radar approach control facility. Nevertheless, FAA recognized that improvements can be made in its decision-making process. 
In our view, FAA's June 1996 report that identifies the types of information to be considered when deciding whether to establish or consolidate TRACON facilities is a step in the right direction for improving its decision-making process. However, the report does not specify how the various factors will be computed in the decision-making process. We interviewed FAA officials in Washington, D.C., and the Northwest Mountain Region and obtained specific documentation on the cost of each option and the associated safety information. To verify the figures FAA used in its most recent cost analysis, we conducted an independent cost analysis. We also met with representatives of the city of Grand Junction and officials from major aviation associations to discuss their concerns and obtain their opinions on the potential operational and safety impacts associated with remoting and contracting out the Grand Junction tower. We discussed our findings with FAA officials, including the Program Director, Air Traffic Plans and Requirements Program. We performed this review from October 1995 through October 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Transportation; the Administrator, Federal Aviation Administration; and representatives of the city of Grand Junction. We will also make copies available to others on request. Please call me at (202) 512-4803 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix II.

[Appendix I table: line-item comparison of 20-year costs for the TRACAB and remote options, including telecommunications, staffing (with the number of positions for each line item), relocation, and contract-tower savings, and the resulting cost savings for the remote option. Table notes on next page.] The costs for telecommunication, salary, and savings from the contract tower program were discounted over 20 years.
We believe $50,000 per move is reasonable because FAA now projects $56,200 as the average cost per move for its Northwest Mountain Region. Because we eliminated one technician under the TRACAB option, we reduced the cost of training by $23,900. FAA training academy officials told us that this is the cost for training one technician. Linda S. Garcia, Dana E. Greenberg, Robert E. Levin, Peter G. Maristch
Pursuant to a congressional request, GAO reviewed: (1) whether the Federal Aviation Administration (FAA) chose the most cost-effective option for handling radar-based air traffic control activities at the Grand Junction, Colorado, airport; (2) whether the safety and efficiency of the air traffic control system would be compromised by remoting radar data and contracting out tower operations at Grand Junction; and (3) what can be done to improve the FAA process for determining when and where to remote radar data. GAO found that: (1) it agreed with the FAA determination that remoting the Grand Junction radar signal to a terminal radar approach control (TRACON) facility in Denver is the most cost-effective option for handling radar data from the site; (2) the FAA 20-year projected savings attributable to the remote option should be reduced by about $500,000, from $5.9 million to $5.4 million, since FAA overlooked certain telecommunications costs and did not utilize more realistic staffing scenarios; (3) GAO analysis of the available data disclosed no valid concerns about the safety and efficiency of remoting radar data or contracting out a tower's operation; (4) the FAA process for deciding when and where to remote radar signals was generally sound, but relatively ad hoc; and (5) a formal methodology for making such decisions would have helped FAA to ensure that all relevant factors were properly considered and communicate to the affected communities how its decision was made.
The United States is the largest consumer of crude oil and petroleum products. In 2007, the U.S. share of world oil consumption was approximately 24 percent. While DOE projects that U.S. demand for oil will continue to grow, domestic production has generally been in decline for decades, leading to greater reliance on imported oil. U.S. imports of oil have increased from 32 percent of domestic demand in 1985 to 58 percent in 2007. In managing the SPR, the Secretary of Energy is authorized by the Energy Policy and Conservation Act, as amended, to place in storage, transport, or exchange (1) crude oil produced from federal lands; (2) crude oil which the United States is entitled to receive in kind as royalties from production on federal lands; and (3) petroleum products acquired by purchase, exchange, or otherwise. The act also states that the Secretary shall, to the greatest extent practicable, acquire petroleum products for the SPR in a manner that minimizes the cost of the SPR and the nation's vulnerability to a severe energy supply interruption, among other things. Until it was repealed in 2000, a provision of the act gave the Secretary discretionary authority to require importers and refiners of petroleum products to store and maintain readily available inventories, and it directed the Secretary to establish and maintain regional petroleum reserves under certain circumstances (42 U.S.C. § 6240(b)). The SPR has sold or exchanged oil on several other occasions, including providing small quantities of oil to refiners to help them through short-term localized oil shortages. Oil markets have changed substantially in the 34 years since the establishment of the SPR. At the time of the Arab oil embargo, price controls in the United States prevented the prices of oil and petroleum products from increasing as much as they otherwise might have, contributing to a physical oil shortage that caused long lines at gasoline stations throughout the United States.
Now that the oil market is global, the price of oil is determined in the world market primarily on the basis of supply and demand. In the absence of price controls, scarcity is generally expressed in the form of higher prices, as purchasers are free to bid as high as they want to secure oil supply. In a global market, an oil supply disruption anywhere in the world raises prices everywhere. Releasing oil reserves during a disruption provides a global benefit by reducing oil prices in the world market. In response to various congressional directives, DOE has studied the issue of including refined petroleum products at various times since 1975. After the initial SPR plan was developed, the issue was reviewed again in whole, or in part, in 1977, 1982, 1989, and 1998. Except for the 1998 report, DOE concluded that including refined petroleum products as part of the SPR was unnecessary and too expensive. The 1998 study dealt with establishing a home heating oil reserve; while it did not take a position on whether such a reserve should be established, it did find that constructing one would have net negative benefits. The 2000 amendments to the Energy Policy and Conservation Act authorized the Secretary to establish a Northeast Home Heating Oil Reserve, which was created and filled that same year. Although this reserve is considered separate from the SPR, it is authorized to contain 2 million barrels of heating oil and currently holds nearly that amount. The Reserve is an emergency source of heating oil to address a severe energy supply interruption in the Northeast. According to DOE, the intent was to create a reserve large enough to allow commercial companies to compensate for interruptions in supply of heating oil during severe winter weather, but not so large as to dissuade suppliers from responding to increasing prices as a sign that more supply is needed.
To date, the Northeast Home Heating Oil Reserve has not been used to address an emergency winter shortage situation. Some of the arguments for including refined petroleum products in the SPR are: (1) the United States' increased reliance on foreign imports and resulting exposure to supply disruptions or unexpected increases in demand elsewhere in the world, (2) possible reduced refinery capacity during weather related supply disruptions, (3) time needed for petroleum product imports to reach all regions of the United States in case of an emergency, and (4) port capacity bottlenecks in the United States which limit the amount of petroleum products that can be imported quickly during emergencies. Some of the arguments against including refined petroleum products in the SPR are: (1) the surplus of gasoline in Europe, (2) the high storage costs of refined products, (3) the use of 'boutique' fuels in the United States, and (4) policy alternatives may diminish U.S. reliance on oil. First, in our December 2007 report, we found that while the United States was largely self-sufficient in gasoline in 1970, in fiscal year 2007, we imported over 10 percent of our annual consumption of gasoline and smaller percentages of jet fuel and some other products. We also found that along with an increased reliance on imports the United States is exposed to supply disruptions or unexpected increases in demand anywhere else in the world. Because the SPR contains only crude oil, if an unexpected supply disruption occurs in a supply center for the United States, the government's emergency strategy would rely on sufficient volumes of the SPR and a refinery sector able to turn out products at a pace necessary to meet consumer demands in a crisis. Any growth in demand in the United States would put increasing pressure on this policy, and for much of the past 25 years, demand for refined petroleum products in the United States and internationally has outpaced growth in refining capacity. 
Second, in our August 2006 report, we found that the ability of the SPR to reduce economic damage may be impaired if refineries are not able to operate at capacity or transport of oil to refineries is delayed. For example, petroleum product prices still increased dramatically following Hurricanes Katrina and Rita, in part because many refineries are located in the Gulf Coast region and power outages shut down pipelines that refineries depend upon to supply their crude oil and to transport their refined petroleum products to consumers. DOE reported that 21 refineries in affected states were either shut down or operating at reduced capacity in the aftermath of the hurricanes. In total, nearly 30 percent of the refining capacity in the United States was shut down, disrupting supplies of gasoline and other products. Two pipelines that send petroleum products from the Gulf Coast to the East Coast and the Midwest were also shut down as a result of Hurricane Katrina. For example, Colonial Pipeline, which transports petroleum products to the Southeast and much of the East Coast, was not fully operational for a week after Hurricane Katrina. Consequently, average retail gasoline prices increased 45 cents per gallon between August 29 and September 5, short-term gasoline shortages occurred in some places, and the media reported gasoline prices greater than $5 per gallon in Georgia. The hurricane came on the heels of a period of high crude oil prices and a tight balance worldwide between petroleum demand and supply, and illustrated the volatility of gasoline prices given the vulnerability of the gasoline infrastructure to natural or other disruptions. Third, because some foreign suppliers are farther from the U.S. demand centers they serve than the relevant domestic supply center, the time it takes to get additional product to a demand center experiencing a supply shortfall may be longer than it would be if the United States had its own product reserves. 
For example, imports of gasoline to the West Coast may come from as far away as Asia or the Middle East, and the transport time, and therefore cost, is greater. To the extent that imported gasoline or other petroleum products come from far away, the lengthening of the supply chain has implications for the ability to respond rapidly to domestic supply shortfalls. Specifically, if supplies to relieve a domestic regional supply shortfall must come from farther away, the price increases associated with such shortfalls may be greater and/or last longer. In this sense, the West Coast and the middle of the country are more vulnerable to price increases or volatility than is the Northeast, which can receive shipments of gasoline from Europe, often on voyages of less than a week. Fourth, the receipt of petroleum products may be delayed because port facilities are operating at or near capacity. For example, one-fourth of the ports in a U.S. Maritime Administration (MARAD) survey described their infrastructure impediments as "severe." Officials from the interagency U.S. Committee on the Maritime Transportation System, which includes MARAD, the National Oceanic and Atmospheric Administration, and the U.S. Army Corps of Engineers, told us that U.S. ports and waterways are constrained in capacity and utilization, and anticipate that marine supply infrastructure will become more constrained in the future. Officials at the Ports of Los Angeles, Long Beach, Oakland, Houston, Savannah, and Charleston reported congestion and emphasized in a 2005 report that they are experiencing higher than projected growth levels. In fact, one European product transporter we spoke with said that the European response to Hurricanes Rita and Katrina was hindered because East Coast ports in the United States could not handle the number of oil tankers carrying petroleum products from Europe, with some tankers waiting for as long as 2 weeks at port. 
First, a key impetus for global trade in petroleum products has been a structural surplus in production of gasoline and a deficit in production of diesel in Europe. This surplus of gasoline is largely the result of a systematic switch in European countries toward automobiles with diesel-powered engines, which are more fuel-efficient than gasoline-powered engines. European regulators promoted diesel fuel use in Europe by taxing diesel at a lower rate, and European demand for diesel vehicles rose. The European refining and marketing sector responded to this change in demand by importing increasing amounts of diesel, and exporting a growing surplus of gasoline to the United States and elsewhere. The United States has purchased increasing amounts of gasoline, including gasoline blendstocks, from Europe in recent years. These imports have generally had a strong seasonal component, with higher levels of imports during the peak summer driving months and lower imports during the fall and winter. The major exception to this seasonality came in the months of October 2005 through January 2006, when imports surged in response to U.S. shortfalls resulting from Hurricanes Katrina and Rita in August and September 2005, respectively. Experts and company representatives told us they believe this structural imbalance within the European Union will continue for the foreseeable future, and perhaps widen, resulting in more exports of European gasoline and blending components to the United States. Second, in its prior reports on the subject, DOE found that refined petroleum product reserves are more costly than crude oil to store and must be periodically used and replaced to avoid deterioration of the products. Although DOE officials said some refined products can be stored in salt caverns just as the SPR crude oil is currently stored, these caverns are predominantly found on the Gulf Coast. 
In order to store refined product in other parts of the United States, storage tanks may need to be built, which is costlier than centralized salt cavern storage. According to DOE, stockpiling oil in salt caverns costs about $3.50 per barrel in capital costs. Storing oil in above-ground tanks, by comparison, can cost $15 to $18 per barrel. One of the maintenance costs of refined petroleum products that is not associated with crude oil storage is turnover, or replacement costs, because refined products deteriorate more quickly. Turnover of the product is required to ensure quality. For example, DOE found that when gasoline is stored in above-ground tanks, the turnover time is 18 to 24 months. Conversely, DOE found that crude oil could be stored for prolonged periods without losing quality. The more frequent the turnover, the higher the throughput and administrative costs. Third, while the language in the Energy Policy and Conservation Act addresses refined petroleum products as well as crude oil, DOE conducted a study in 1977 that found that geographically dispersed, small reserves of a variety of petroleum products would be more costly than a centralized crude oil reserve. For example, many states have adopted the use of special gasoline blends--or 'boutique' fuels, which could pose a challenge in incorporating refined products in the SPR. Unless requirements to use these fuels were waived during emergencies, as they were in the aftermath of Hurricanes Katrina and Rita, boutique fuels could need to be strategically stored at multiple regional, state, or local locations due to reduced product fungibility. Conversely, crude oil provides flexibility in responding to fluctuations in refined product market needs as regional fuel specifications and environmental requirements change over time. 
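The capital-cost gap between the two storage methods can be illustrated with a rough calculation. The per-barrel figures are those DOE cites; the 1-million-barrel storage volume below is a hypothetical example, not a figure from the report.

```python
# Rough illustration of DOE's cited capital costs for oil storage.
# The 1-million-barrel volume is a hypothetical example for scale.
barrels = 1_000_000

salt_cavern_cost = barrels * 3.50   # about $3.50 per barrel in salt caverns
tank_cost_low = barrels * 15.00     # $15 to $18 per barrel in above-ground tanks
tank_cost_high = barrels * 18.00

# Above-ground tanks cost roughly 4 to 5 times more per barrel of capacity.
ratio_low = tank_cost_low / salt_cavern_cost
ratio_high = tank_cost_high / salt_cavern_cost
```

And this comparison understates the gap for refined products, since tank-stored gasoline also incurs turnover costs every 18 to 24 months that cavern-stored crude oil does not.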
Furthermore, both the switching of seasonal blends to meet environmental requirements and product degradation would require inventory turnover, whereas crude oil storage does not require the same level of turnover. Fourth, there are several policy choices that might diminish the growth in U.S. demand for oil. First, research and investment in alternative fuels might reduce the growth of U.S. oil demand. Vehicles that use alternative fuels, including ethanol, biodiesel, liquefied coal, and fuels made from natural gas, are now generally more expensive or less convenient to own than conventional vehicles, because of higher vehicle and fuel costs and a lack of refueling infrastructure. Alternative-fuel vehicles could become more viable in the marketplace if their costs and fuel delivery infrastructure become more comparable to vehicles fueled by petroleum products. Second, greater use of advanced fuel-efficient vehicles, such as hybrid electric and advanced diesel cars and trucks, could reduce U.S. oil demand. The Energy Policy Act of 2005, as amended, directs the Secretary of Energy to establish a program that includes grants to automobile manufacturers to encourage domestic production of these vehicles. Third, improving the Corporate Average Fuel Economy (CAFE) standards could curb demand for petroleum fuels. After these standards were established in 1975, the average fuel economy of new light-duty vehicles improved from 13.1 miles per gallon in 1975 to a peak of 22.1 miles per gallon in 1987. More recently, the fuel economy of new vehicles in the United States has stagnated at approximately 21 miles per gallon. However, CAFE standards have recently been raised to require auto manufacturers to achieve a combined fuel economy average of 35 miles per gallon for both passenger and non-passenger vehicles beginning in model year 2020. Any future increases could further decrease U.S. oil demand. 
The following three lessons learned from the management of the existing crude oil SPR highlight some of the issues that may need to be considered in acquiring refined petroleum products. Select a cost-effective mix of products. To fill the SPR in a more cost-effective manner, we recommended in August 2006 that DOE include in the SPR at least 10 percent heavy crude oils, which are generally cheaper to acquire than the lighter oils that comprise the SPR's volume. Including heavier oil in the SPR could significantly reduce fill costs because heavier oil is generally less expensive than lighter grades. For example, if DOE included 10 percent heavy oil in the SPR as it expands to 1 billion barrels, that would require DOE to add 100 million barrels of heavy oil, or about one-third of the total new fill. From 2003 through 2007, Maya--a common heavy crude oil--traded for about $12 less per barrel on average than West Texas Intermediate--a common light crude oil. If this price difference were to persist over the duration of the new fill period, DOE would save about $1.2 billion in nominal terms by filling the SPR with 100 million barrels of heavy oil. Similarly, refined petroleum products included as part of the SPR may comprise a number of different types of products (e.g., gasoline, diesel, and jet fuel) and possibly different blends of products (e.g., different grades and mixtures of gasoline); DOE will need to determine the most cost-effective mix of products in light of existing legal and regulatory requirements to use specific blends of fuels. Consider using a dollar-cost-averaging acquisition approach. Also in our August 2006 report, we recommended that DOE consider filling the SPR by acquiring a steady dollar value of oil over time, rather than a steady volume as has occurred in recent years. 
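The $1.2 billion savings estimate follows directly from the two figures cited in the text, as a quick check of the arithmetic shows:

```python
# Arithmetic behind the $1.2 billion savings estimate, using only
# figures cited in the text.
heavy_oil_barrels = 100_000_000    # 10 percent of a 1-billion-barrel SPR
discount_per_barrel = 12.0         # avg. Maya vs. West Texas Intermediate, 2003-2007
savings = heavy_oil_barrels * discount_per_barrel
assert savings == 1.2e9            # $1.2 billion in nominal terms
```

The estimate is sensitive to the discount persisting over the fill period; a narrower heavy-light spread would shrink the savings proportionally.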
This "dollar-cost-averaging" approach would allow DOE to take advantage of fluctuations in oil prices and ensure that more oil would be acquired when prices are low and less when prices are high. In August 2006, we reported that if DOE had used this approach from October 2001 through August 2005, it could have saved approximately $590 million in fill costs. We also ran simulations to estimate potential future cost savings from using a dollar-cost-averaging approach over 5 years and found that DOE could save money regardless of the price of oil as long as there is price volatility, and that the savings would be generally greater if oil prices were more volatile. We would expect a dollar-cost-averaging acquisition method to also provide positive benefits when acquiring refined petroleum products. Maximize cost-effective storage options. According to DOE, salt formations offer the lowest cost, most environmentally secure way to store crude oil for long periods of time. Stockpiling oil in artificially created caverns, deep within rock-hard salt, has historically cost about $3.50 per barrel in capital costs. In comparison, storing oil in above-ground tanks can cost $15 to $18 per barrel. Similarly, for those refined petroleum products that can be stored below ground, salt formations may offer a cost-effective storage option. However, possible storage options would need to be evaluated hand-in-hand with the need to (1) turn over the refined stocks periodically because their stability deteriorates over time, and (2) transport the refined petroleum products quickly to major population centers where the products will be used. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact Frank Rusco at (202) 512-3841 or [email protected]. 
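The mechanics of the comparison can be sketched with hypothetical prices. This is an illustrative sketch, not GAO's simulation; the price series and purchase amounts below are assumed values chosen only to show the effect.

```python
# Illustrative sketch (not GAO's model): average acquisition cost per
# barrel under fixed-dollar vs. fixed-volume purchasing over n periods.

def avg_cost_fixed_dollars(prices, dollars_per_period):
    # Same spend each period, so more barrels are bought when prices are low.
    barrels = sum(dollars_per_period / p for p in prices)
    return dollars_per_period * len(prices) / barrels

def avg_cost_fixed_volume(prices, barrels_per_period):
    # Same volume each period, so spending rises when prices are high.
    total_spend = sum(barrels_per_period * p for p in prices)
    return total_spend / (barrels_per_period * len(prices))

prices = [40.0, 60.0, 50.0, 80.0, 30.0]   # hypothetical $/barrel series
dca = avg_cost_fixed_dollars(prices, 1_000_000)
fixed = avg_cost_fixed_volume(prices, 20_000)

# Fixed-dollar buying pays the harmonic mean of prices, which is never
# above the arithmetic mean paid by fixed-volume buying; the gap widens
# as price volatility increases, consistent with the simulation results.
assert dca < fixed
```

With constant prices the two approaches cost the same, which is why the savings depend on volatility rather than on the level of oil prices.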
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jeffery D. Malcolm, Assistant Director, and Holly Sasso. Also contributing to this testimony were Josey Ballenger, Philip Farah, Quindi Franco, Michelle Munn, Benjamin Shouse, Karla Springer, and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The possibility of storing refined petroleum products as part of the Strategic Petroleum Reserve (SPR) has been contemplated since the SPR was created in 1975. The SPR, which currently holds about 700 million barrels of crude oil, was created to help insulate the U.S. economy from oil supply disruptions. However, the SPR does not contain refined products such as gasoline, diesel fuel, or jet fuel. The Energy Policy Act of 2005 directed the Department of Energy (DOE) to increase the SPR's capacity from 727 million barrels to 1 billion barrels, which it plans to do by 2018. With the possibility of including refined products as part of the expansion of the SPR, this testimony discusses (1) some of the arguments for and against including refined products in the SPR and (2) lessons learned from the management of the existing crude oil SPR that may be applicable to refined products. To address these issues, GAO relied on its 2006 report on the SPR (GAO-06-872), 2007 report on the globalization of petroleum products (GAO-08-14), and two 2008 testimonies on the cost-effectiveness of filling the SPR (GAO-08-512T and GAO-08-726T). GAO also reviewed prior DOE and International Energy Agency studies on refined product reserves. Since the SPR, the largest emergency crude oil reserve in the world, was created in 1975, a number of arguments have been made for and against including refined petroleum products. Some of the arguments for including refined products in the SPR are: (1) the United States' increased reliance on imports and resulting exposure to supply disruptions or unexpected increases in demand elsewhere in the world, (2) possible reduced refinery capacity during weather-related supply disruptions, (3) time needed for petroleum product imports to reach all regions of the United States in case of an emergency, and (4) port capacity bottlenecks in the United States, which limit the amount of petroleum products that can be imported quickly during emergencies. 
For example, the damage caused by Hurricane Katrina demonstrated that the concentration of refineries on the Gulf Coast and resulting damage to pipelines left the United States to rely on imports of refined product from Europe. Consequently, regions experienced a shortage of gasoline and prices rose. Conversely, some of the arguments against including refined products in the SPR are: (1) the surplus of refined products in Europe, (2) the high storage costs of refined products, (3) the use of a variety of different types of blends of refined products--"boutique" fuels--in the United States, and (4) policy alternatives that may diminish reliance on oil. For example, Europe has a surplus of gasoline products because of a shift to diesel engines, which experts say will continue for the foreseeable future. Europe's surplus of gasoline is available to the United States in emergencies and provided deliveries following Hurricanes Katrina and Rita in 2005. The following three lessons learned from the management of the existing SPR may have some applicability in dealing with refined products. (1) Select a cost-effective mix of products. In 2006, GAO recommended that DOE include at least 10 percent heavy crude oil in the SPR. If DOE bought 100 million barrels of heavy crude oil during its expansion of the SPR, it could save over $1 billion in nominal terms, assuming a price differential of $12 between the price of light and heavy crude, the average differential from 2003 through 2007. Similarly, if directed to include refined products as part of the SPR, DOE will need to determine the most cost-effective mix of products. (2) Consider using a dollar-cost-averaging acquisition approach. Also in 2006, GAO recommended that DOE consider acquiring a steady dollar value--rather than a steady volume--of oil over time when filling the SPR. This would allow DOE to acquire more oil when prices are low and less when prices are high. 
GAO expects that a dollar-cost-averaging acquisition method would also provide benefits when acquiring refined products. (3) Maximize cost-effective storage options. According to DOE, below-ground salt formations offer the lowest-cost approach for storing crude oil for long periods of time--$3.50 per barrel in capital cost versus $15 to $18 per barrel for above-ground storage tanks. Similarly, DOE will need to explore the most cost-effective storage options for refined products.
Poorly defined requirements and processes for extending injured and ill reserve component soldiers on active duty have caused soldiers to be inappropriately dropped from their active duty orders. For some, this has led to significant gaps in pay and health insurance, which has created financial hardships for these soldiers and their families. Based on our analysis of Army Manpower data during the period from February 1, 2004, through April 7, 2004, almost 34 percent of the 867 soldiers who applied to be extended on active duty orders fell off their orders before their extension requests were granted. This placed them at risk of being removed from active duty status in the automated systems that control pay and access to benefits, including medical care and access to the Commissary and Post Exchange--which allows soldiers and their families to purchase groceries and other goods at a discount. While the Army Manpower Office began tracking the number of soldiers who have applied for Active Duty Medical Extension (ADME) orders and fell off their active duty orders during that process, the Army does not keep track of the number of soldiers who have lost pay or other benefits as a result. Although, logically, a soldier who is not on active duty orders would also not be paid, as discussed later, many of the Army installations we visited had developed ad hoc procedures to keep these soldiers in pay status even though they were not on official, approved orders. However, many of the ad hoc procedures used to keep soldiers in pay status circumvented key internal controls in the Army payroll system--exposing the Army to the risk of significant overpayments--did not provide medical and other benefits for the soldiers' dependents, and sometimes caused additional financial problems for the soldiers. 
Because the Army did not maintain any centralized data on the number, location, and disposition of mobilized reserve component soldiers who had requested ADME orders but had not yet received them, we were unable to use statistical sampling techniques that would allow us to estimate the number of soldiers affected. However, through our case study work, we have documented the experiences of 10 soldiers who were mobilized to active duty for military operations in Afghanistan and Iraq. Figure 1 provides an overview of the pay problems experienced by the 10 case study soldiers we interviewed and the resulting impact the disruptions in pay and benefits had on the soldiers and their families. According to the soldiers we interviewed, many were living from paycheck to paycheck; therefore, missing pay for even one pay period created a financial hardship for these soldiers and their families. While the Army ultimately addressed these soldiers' problems, absent our efforts and consistent pressure from the requesters of the report, it would likely have taken longer for the Army to do so. Further details on these case studies are included in our related report. The Army has not provided (1) clear and comprehensive guidance needed to develop effective processes to manage and treat injured and ill reserve component soldiers, (2) an effective means of tracking the location and disposition of injured and ill soldiers, and (3) adequate training and education programs for Army officials and injured and ill soldiers trying to navigate their way through the ADME process. The Army's implementing guidance related to the extension of active duty orders is sometimes unclear or contradictory--creating confusion and contributing to delays in processing ADME orders. 
For example, the guidance states that the Army Manpower Office is responsible for approving extensions beyond 179 days but does not say what organization is responsible for approving extensions that are less than 179 days. In practice, we found that all applications were submitted to Army Manpower for approval regardless of the number of days requested. At times, this created a significant backlog at the Army Manpower Office and resulted in processing delays. In addition, the Army's implementing guidance does not clearly define organizational responsibilities, how soldiers will be identified as needing an extension, how ADME orders are to be distributed, and to whom they are to be distributed. Finally, according to the guidance, the personnel costs associated with soldiers on ADME orders should be tracked as a base operating cost. However, we believe the cost of treating injured and ill soldiers--including their pay and benefits--who fought in operations supporting the Global War on Terrorism should be accounted for as part of the contingency operation for which the soldier was originally mobilized. This would more accurately allocate the total cost of these wartime operations. As we have reported in the past, the Army's visibility over mobilized reserve component soldiers is jeopardized by stovepiped systems serving active and reserve component personnel. Therefore, the Army has had difficulty determining which soldiers are mobilized and/or deployed, where they are physically located, and when their active duty orders expire. In the absence of an integrated personnel system that provides visibility when a soldier is transferred from one location to another, the Army has general personnel regulations that are intended to provide some limited visibility over the movement of soldiers. 
However, when a soldier is on ADME orders, the Army does not follow these or any other written procedures to document the transfer of soldiers from one location to another--thereby losing even the limited visibility that might otherwise be achievable. Further, although the Army has a medical tracking system, the Medical Operational Data System (MODS), that could be used to track the whereabouts and status of injured and ill reserve component soldiers, we found that, for the most part, the installations we visited did not use or update that system. Instead, each of the installations we visited had developed its own stovepiped tracking system and databases. Although MODS, if used and updated appropriately, could provide some visibility over injured and ill active and reserve component soldiers--including soldiers who are on ADME orders--8 of the 10 installations we visited did not routinely use MODS. MODS is an Army Medical Department (AMEDD) system that consolidates data from over 15 different major Army and DOD databases. The information contained in MODS is accessible at all Army Military Treatment Facilities (MTFs) and is intended to help Army medical personnel administer patient care. For example, as soldiers are approved for ADME orders, the Army Manpower Office enters data indicating where the soldier is to receive treatment, to which unit he or she will be attached, and when the soldier's ADME orders will expire. However, as discussed previously, the Army has not established written standard operating procedures on the transfer and tracking of soldiers on ADME orders. Therefore, the installations we visited were not routinely looking to MODS to determine which soldiers were attached to them through ADME orders. When officials at one installation did access MODS, the data in MODS indicated that the installation had at least 105 soldiers on ADME orders. However, installation officials were only aware of 55 soldiers who were on ADME orders. 
According to installation officials, the missing soldiers never reported for duty and the installation had no idea that they were responsible for these soldiers. The Army has not adequately trained or educated Army staff or reserve component soldiers about ADME. The Army personnel responsible for preparing and processing ADME applications at the 10 installations we visited received no formal training on the ADME process. Instead, these officials were expected to understand their responsibilities through on-the-job training. However, the high turnover caused by the rotational nature of military personnel, and especially reserve component personnel, who make up much of the garrison support units that are responsible for processing ADME applications, limits the effectiveness of on-the-job training. Once these soldiers have learned the intricacies of the ADME process, their mobilization is over and their replacements must go through the same on-the-job learning process. For example, 9 of the 10 medical hold units at the locations we visited were staffed with reserve component soldiers. In the absence of education programs based on sound policy and clear guidance, soldiers have established their own informal methods--using Internet chat rooms and word-of-mouth--to educate one another on the ADME process. Unfortunately, the information they receive from one another is often inaccurate and instead of being helpful, further complicates the process. For example, one soldier was told by his unit commander that he did not need to report to his new medical hold unit after receiving his ADME order. While this may have been welcome news at the time, the soldier could have been considered absent without leave. Instead, the soldier decided to follow his ADME order and reported to his assigned case manager at the installation. 
The Army lacks customer-friendly processes for injured and ill soldiers who are trying to extend their active duty orders so that they can continue to receive medical care. Specifically, the Army lacks clear criteria for approving ADME orders, which may require applicants to resubmit paperwork multiple times before their application is approved. This, combined with inadequate infrastructure for efficiently addressing the soldiers' needs, has resulted in significant processing delays. Finally, while most of the installations we reviewed took extraordinary steps to keep soldiers in pay status, these steps often involved overriding required internal controls in one or more systems. In some cases, the stopgap measures ultimately caused additional financial hardships for soldiers or put the Army at risk of significantly overpaying soldiers in the long run. Although the Army Manpower Office issued procedural guidance in July of 2000 for ADME and the Army Office of the Surgeon General issued a field operating guide in early 2003, neither provides adequate criteria for what constitutes a complete ADME application package. The procedural guidance lists the documents that must be submitted before an ADME application package is approved; however, the criteria for what information is to be included in each document are not specified. In the absence of clear criteria, officials at both Army Manpower and the installations we visited blamed each other for the breakdowns and delays in the process. For example, according to installation officials, the Army Manpower Office will not accept ADME requests that contain documentation older than 30 days. However, because it often took Army Manpower more than 30 days to process ADME applications, the documentation for some applications expired before approving officials had the opportunity to review it. Consequently, applications were rejected and soldiers had to start the process all over again. 
Although officials at the Army Manpower Office denied these assertions, the office did not have policies or procedures in place to ensure that installations were notified regarding the status of soldiers' applications or clear criteria on the sufficiency of medical documentation. For example, one soldier we interviewed at Fort Lewis had to resubmit his ADME applications three times over a 3-month period--each time not knowing whether the package was received and contained the appropriate information. According to the soldier, weeks would go by before someone from Fort Lewis was able to reach the Army Manpower Office to determine the status of his application. He was told each time that he needed more current or more detailed medical information. Consequently, it took over 3 months to process his orders, during which time he fell off his active duty orders and missed three pay periods totaling nearly $4,000. The Army has not consistently provided the infrastructure needed--including convenient support services--to accommodate the needs of soldiers trying to navigate their way through the ADME process. This, combined with the lack of clear guidance discussed previously and the high turnover of the personnel who are responsible for helping injured and ill soldiers through the ADME process, has resulted in injured and ill soldiers carrying a disproportionate share of the burden for ensuring that they do not fall off their active duty orders. This has left many soldiers disgruntled and feeling like they have had to fend for themselves. For example, one injured soldier we interviewed whose original mobilization orders expired in January 2003 recalls making over 40 trips to various sites at Fort Bragg during the month of January to complete his ADME application. Over time, the Army has begun to make some progress in addressing its infrastructure issues. 
At the time of our visits, we found that some installations had added new living space or upgraded existing space to house returning soldiers. For example, Walter Reed Army Hospital has contracted for additional quarters off base for ambulatory soldiers to alleviate the overcrowding pressure, and Fort Lewis had upgraded its barracks to include, among other things, wheelchair accessible quarters. Also, installations have been adding additional case managers to handle their workload. Case managers are responsible for both active and reserve component soldiers, including injured and ill active duty soldiers, reserve component soldiers still on mobilization orders, reserve component soldiers on ADME orders, and reserve component soldiers who have inappropriately fallen off active duty orders. As of June 2004, according to the Army, it had 105 case managers, and maintained a soldier-to-case-manager ratio of about 50 to 1 at 8 of the 10 locations we visited while conducting fieldwork. Finally, to the extent possible, several of the sites we visited co-located administrative functions that soldiers would need--including command and control functions, case management, ADME application packet preparation, and medical treatment. They also made sure that Army administrative staff, familiar with the paperwork requirements, filled out all the required paperwork for the soldier. Centralizing document preparation reduces the risk of miscommunication between the soldier and unit officials, case managers, and medical staff. It also seemed to reduce the frustration that soldiers would feel when trying to prepare unfamiliar documents in an unfamiliar environment. The financial hardships discussed previously that were experienced by some soldiers would have been more widespread had individuals within the Army not taken it upon themselves to develop ad hoc procedures to keep these soldiers in pay status. 
In fact, 7 of the 10 Army installations we visited had created their own ad hoc procedures or workarounds to (1) keep soldiers in pay status and (2) provide soldiers with access to medical care when soldiers fell off active duty orders. In many cases, the installations we visited made adjustments to a soldier's pay records without valid orders. While effectively keeping a soldier in pay status, this workaround circumvented key internal controls--putting the Army at risk of making improper and potentially fraudulent payments. In addition, because these soldiers are not on official active duty orders, they are not eligible to receive other benefits to which they are entitled, including health coverage for their families. One installation we visited issued official orders locally to keep soldiers in pay status. However, in doing so, it created a series of accounting problems that resulted in additional pay problems for soldiers when the Army attempted to straighten out its accounting. Further details on these ad hoc procedures are included in our related report. Manual processes and nonintegrated order-writing, pay, personnel, and medical eligibility systems also contribute to processing delays, which affect the Army's ability to update these systems and ensure that soldiers on ADME orders are paid in an accurate and timely manner. Overall, we found that the current stovepiped, nonintegrated systems were labor-intensive and required extensive, error-prone manual data entry and reentry. Therefore, once Army Manpower approves a soldier's ADME application and the ADME order is issued, the ADME order does not automatically update the systems that control a soldier's access to pay and medical benefits.
In addition, as discussed previously, the Army's ADME guidance does not address the distribution of ADME orders or clearly define who is responsible for ensuring that the appropriate pay, personnel, and medical eligibility systems are updated so that soldiers and their families receive the pay and medical benefits to which they are entitled. As a result, ADME orders were sent to multiple individuals at multiple locations before finally reaching individuals who have the access and authority to update the pay and benefits systems, which further delays processing. As shown in figure 2, once Army Manpower officials approve a soldier's ADME application, they e-mail a memorandum to HRC-St. Louis authorizing the ADME order. The Army Personnel Center Orders and Resource System (AORS), which is used to write the order, does not directly interface with, nor automatically update, the personnel, pay, or medical eligibility systems. Instead, once HRC-St. Louis cuts the ADME order, it e-mails a copy of the order to nine different individuals--four at the Army Manpower Office, four at the National Guard Bureau (NGB) headquarters, and one at HRC in Alexandria, Virginia--none of whom is responsible for updating the pay, personnel, or medical eligibility systems. As shown in figure 2, Army Manpower, upon receipt of ADME orders, e-mails copies to the soldier, the medical hold unit to which the soldier is attached, and the RMC. Again, none of these organizations has access to the pay, personnel, or medical eligibility systems. Finally, NGB officials e-mail copies of National Guard ADME orders to one of 54 state-level Army National Guard personnel offices, and HRC-Alexandria e-mails copies of Reserve ADME orders to the Army Reserve's regional personnel offices. HRC-Alexandria also sends all Reserve orders to the medical hold unit at Walter Reed.
When asked, the representative at HRC-Alexandria who forwards the orders did not know why orders were sent to Walter Reed when many of the soldiers on ADME orders were not attached or going to be attached to Walter Reed. The medical hold unit at Walter Reed that received the orders did not know why it was receiving them and told us that it filed them. At this point in the process, of the seven organizations that receive copies of ADME orders, only two--the Army National Guard personnel office and the Army Reserve personnel office--use the information to initiate a pay or benefit-related transaction. Specifically, the Guard and Reserve personnel offices initiate a transaction that should ultimately update the Army's medical eligibility system, the Defense Enrollment Eligibility Reporting System (DEERS). To do this, the Army National Guard personnel office manually inputs a new active duty order end date into the Army National Guard personnel system, the Standard Installation Division Personnel Reporting System (SIDPERS). In turn, the data from SIDPERS are batch processed into the Total Army Personnel Database-Guard (TAPDB-G) and then batch processed to the Reserve Components Common Personnel Data System (RCCPDS). The data from RCCPDS are then batch processed into DEERS--updating the soldier's active duty status and active duty order end date. Once the new date is posted to DEERS, soldiers and family members can get a new ID card at any DOD ID card issuance facility. The Army Reserve personnel office initiates a similar transaction by entering a new active duty order end date into the Regional Level Application System (RLAS), which updates the Total Army Personnel Database-Reserve (TAPDB-R), RCCPDS, and DEERS through the same batch process used by the Guard. As discussed previously, the Army does not have an integrated pay and personnel system.
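The multistep batch propagation described above can be illustrated with a toy simulation. The system names below come from the report, but the record layout, soldier identifier, and the modeling of each feed as a single copy step are hypothetical simplifications, not a description of the actual systems:

```python
# Illustrative sketch (not Army code): the report describes a chain of
# batch feeds -- SIDPERS -> TAPDB-G -> RCCPDS -> DEERS -- in which a new
# active duty order end date becomes visible in DEERS only after every
# upstream batch cycle has run. The record layout here is hypothetical.
from datetime import date

# Model each "system" as a dict keyed by a (hypothetical) soldier ID.
sidpers, tapdb_g, rccpds, deers = {}, {}, {}, {}

def batch_feed(source, target):
    """One batch cycle: copy every record from source into target."""
    target.update(source)

# The Guard personnel office manually keys the new order end date into SIDPERS.
sidpers["soldier-123"] = {"active_duty_end": date(2004, 10, 27)}

# DEERS sees nothing until all three downstream batch cycles complete.
assert "soldier-123" not in deers
batch_feed(sidpers, tapdb_g)   # cycle 1: SIDPERS -> TAPDB-G
batch_feed(tapdb_g, rccpds)    # cycle 2: TAPDB-G -> RCCPDS
batch_feed(rccpds, deers)      # cycle 3: RCCPDS -> DEERS
assert deers["soldier-123"]["active_duty_end"] == date(2004, 10, 27)
```

The sketch makes the report's point concrete: because each hop is a separate batch cycle with its own schedule, a manually keyed date change can take several cycles to reach DEERS, and a data-entry error at the first hop propagates unchecked through every downstream system.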
Therefore, information entered into the personnel system (TAPDB) is not automatically updated in the Army's pay system, the Defense Joint Military Pay System-Reserve Component (DJMS-RC). Instead, as shown in figure 2, after receiving a copy of the ADME orders from Army Manpower, the medical hold unit and/or the soldier provide a hard copy of the orders to their local finance office. Using the Active Army pay input system, the Defense Military Pay Office system (DMO), installation finance office personnel update DJMS-RC. Not only is this process vulnerable to input errors, but it is time consuming and further delays the pay and benefits to which the soldier is entitled. The Army's new MRP program, which went into effect May 1, 2004, and takes the place of ADME for soldiers returning from operations in support of the Global War on Terrorism, has resolved many of the front-end processing delays experienced by soldiers applying for ADME by simplifying the application process. In addition, unlike ADME, the personnel costs associated with soldiers on MRP orders are appropriately linked to the contingency operation for which they served and, therefore, will more accurately capture the costs related to the Global War on Terrorism. While the front-end approval process appears to be operating more efficiently than the ADME approval process, the first wave of 179-day MRP orders did not expire until October 27, 2004--after we completed our work--so we were unable to assess how effectively the Army identified soldiers who required an additional 179 days of MRP and whether those soldiers experienced pay problems or difficulty obtaining new MRP orders. In addition, the Army has no way of knowing whether all soldiers who should be on MRP orders are actually applying and getting into the system.
Further, MRP has not resolved the underlying management control problems that plagued ADME and, in some respects, has worsened problems associated with the Army's lack of visibility over injured soldiers. Finally, because the MRP program is designed such that soldiers may be treated and released from active duty before their MRP orders expire, weaknesses in the Army's processes for updating its pay system to reflect an early release date have resulted in overpayments to soldiers. According to Army officials at each of the 10 installations we visited, unlike ADME, they have not experienced problems or delays in obtaining MRP orders for soldiers in their units. In fact, some installation officials said that the process now takes 1 or 2 days instead of 1 or 2 months. Because there is no mechanism in place to track application processing times, we have no way of substantiating these assertions. However, we are not aware of any soldier complaints regarding the process, which were commonplace with ADME. The MRP application and approval process, which rests with HRC-Alexandria instead of the Army Manpower Office, is a simplified version of the ADME process. As with ADME orders, the soldier must request that this process be initiated and voluntarily request an extension of active duty orders. Both the MRP and ADME request packets include the soldier's request form, a physician's statement, and a copy of the soldier's original mobilization orders. However, with MRP, the physician's statement need only state that the soldier needs to be treated for a service-connected injury or illness and does not require detailed information about the diagnosis, prognosis, and medical treatment plan as it does with ADME. As discussed previously, assembling this documentation was one of the primary reasons ADME orders were not processed in a timely manner.
In addition, because all MRP orders are issued for 179 days, MRP has alleviated some of the workload on officials who were processing ADME orders and helping soldiers prepare application packets by eliminating the need for a soldier to reapply every 30, 60, or 90 days, as was the case with ADME. While MRP has expedited the application process, MRP guidance, like that of ADME, does not address how soldiers who require MRP will be identified in a timely manner, how soldiers requiring an additional 179 days of MRP will be identified in a timely manner, or how soldiers and Army staff will be trained and educated about the new process. Further, because the Army does not maintain reliable data on the current status and disposition of injured soldiers, we could not test or determine whether all soldiers who should be on MRP orders are actually applying and getting into the system. In addition, because MRP authorizes 179 days of pay and benefits regardless of the severity of the injury, the Army faces a new challenge--ensuring that soldiers are promptly released from active duty or placed in a medical evaluation board process upon completion of medical care or treatment in order to avoid needlessly retaining and paying these soldiers for the full 179 days. However, MRP guidance does not address how the Army will provide reasonable assurance that, upon completion of medical care or treatment, soldiers are promptly released from active duty or placed in a medical evaluation board process. MRP has also contributed to the Army's difficulty maintaining visibility over injured reserve component soldiers. Although the Army's MRP implementation guidance requires that installations provide a weekly report to HRC-Alexandria that includes the name, rank, and component of each soldier currently on MRP orders, according to HRC officials, they are not consistently receiving these reports.
Consequently, the Army cannot say with certainty how many soldiers are currently on MRP orders, how many have been returned to active duty, or how many have been released from active duty before their 179-day MRP orders expired. As discussed previously, if the Army used and appropriately updated its medical tracking system (MODS), the system could provide some visibility over injured and ill active and reserve component soldiers--including soldiers on ADME or MRP orders. However, the Army's MRP implementation guidance is silent on the use of MODS and does not define responsibilities for updating the system. According to officials at HRC-Alexandria, they do not update MODS or any other database when they issue MRP orders. They also acknowledged that the 1,800 soldiers reflected as being on MRP orders in MODS, as of September 2004, was probably understated given that, between May 2004 and September 2004, HRC-Alexandria processed approximately 3,300 MRP orders. Further, as was the case with ADME, 8 of the 10 installations we visited did not routinely use or update MODS but instead maintained their own local tracking systems to monitor soldiers on MRP orders. Not surprisingly, the Army does not know how many soldiers have been released from active duty before their 179-day MRP orders had expired. This is important because our previous work has shown that weaknesses in the Army's process for releasing soldiers from active duty and stopping the related pay before their orders have expired--in this case, before their 179 days are up--often resulted in overpayments to soldiers. According to HRC-Alexandria officials, as of October 2004, a total of 51 soldiers had been released from active duty before their 179-day MRP orders expired.
At the same time, Fort Knox, one of the few installations that tracked these data, reported that it had released 81 soldiers from active duty who were previously on MRP orders--none of whom were included in the list of 51 soldiers provided by HRC-Alexandria. Concerned that some of these soldiers may have inappropriately continued to receive pay after they were released from active duty, we verified each soldier's pay status in DJMS-RC and found that 15 soldiers were improperly paid past their release date--totaling approximately $62,000. A complete and lasting solution to the pay problems and overall poor treatment of injured soldiers that we identified will require that the Army address the underlying problems associated with its overall control environment for managing and treating reserve component soldiers with service-connected injuries or illnesses, as well as deficiencies related to its automated systems. Accordingly, in our related report (GAO-05-125), we made 20 recommendations to the Secretary of the Army for immediate action to address the weaknesses we identified, including (1) establishing comprehensive policies and procedures, (2) providing adequate infrastructure and resources, and (3) making process improvements to compensate for inadequate, stovepiped systems. We also made 2 recommendations, as part of longer term system improvement initiatives, to integrate the Army's order-writing, pay, personnel, and medical eligibility systems. In its written response to our recommendations, DOD briefly described its completed, ongoing, and planned actions for each of our 22 recommendations. The recent mobilization and deployment of Army National Guard and Reserve soldiers in connection with the Global War on Terrorism is the largest activation of reserve component troops since World War II. As such, in recent years, the Army's ability to take care of these soldiers when they are injured or ill has not been tested to the degree that it is being tested now.
Unfortunately, the Army was not prepared for this challenge and the brave soldiers fighting to defend our nation have paid the price. The personal toll this has had on these soldiers and their families cannot be readily measured. But clearly, the hardships they have endured are unacceptable given the substantial sacrifices they have made and the injuries they have sustained. While the Army's new streamlined medical retention application process has improved the front-end approval process, it also has many of the same limitations as ADME. To its credit, in response to the recommendations included in our companion report, DOD has outlined some actions already taken, others that are underway, and further planned actions to address the weaknesses we identified. For further information about this testimony please contact Gregory D. Kutz at (202) 512-9095 or [email protected]. Individuals making key contributions to this testimony were Gary Bianchi, Francine DelVecchio, Carmen Harris, Diane Handley, Jamie Haynes, Kristen Plungas, John Ryan, Maria Storts, and Truc Vo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In light of the recent mobilizations associated with the Global War on Terrorism, GAO was asked to determine if the Army's overall environment and controls provided reasonable assurance that soldiers who were injured or became ill in the line of duty were receiving the pay and other benefits to which they were entitled in an accurate and timely manner. This testimony outlines pay deficiencies in the key areas of (1) overall environment and management controls, (2) processes, and (3) systems. It also focuses on whether recent actions the Army has taken to address these problems will offer effective and lasting solutions. Injured and ill reserve component soldiers--who are entitled to extend their active duty service to receive medical treatment--have been inappropriately removed from active duty status in the automated systems that control pay and access to medical care. The Army acknowledges the problem but does not know how many injured soldiers have been affected by it. GAO identified 38 reserve component soldiers who said they had experienced problems with the active duty medical extension order process and subsequently fell off their active duty orders. Of those, 24 experienced gaps in their pay and benefits due to delays in processing extended active duty orders. Many of the case study soldiers incurred severe, permanent injuries fighting for their country including loss of limb, hearing loss, and back injuries. Nonetheless, these soldiers had to navigate the convoluted and poorly defined process for extending active duty service. The Army's process for extending active duty orders for injured soldiers lacks an adequate control environment and management controls--including (1) clear and comprehensive guidance, (2) a system to provide visibility over injured soldiers, and (3) adequate training and education programs. The Army has also not established user-friendly processes--including clear approval criteria and adequate infrastructure and support services. 
Many Army locations have used ad hoc procedures to keep soldiers in pay status; however, these procedures often circumvent key internal controls and put the Army at risk of making improper and potentially fraudulent payments. Finally, the Army's nonintegrated systems, which require extensive error-prone manual data entry, further delay access to pay and benefits. The Army recently implemented the Medical Retention Processing (MRP) program, which takes the place of the previously existing process in most cases. MRP, which authorizes an automatic 179 days of pay and benefits, may resolve the timeliness of the front-end approval process. However, MRP has some of the same issues and may also result in overpayments to soldiers who are released early from their MRP orders. Out of 132 soldiers the Army identified as being released from active duty, 15 improperly received pay past their release date--totaling approximately $62,000.
Congress appropriates operations and maintenance funds for DOD, in part, for the purchase of spare and repair parts. DOD distributes operations and maintenance funding to major commands and military units. The latter use operations and maintenance funding to buy spare parts from the Department's central supply system. By the end of fiscal year 2001, DOD reported in its supply system inventory report that it had an inventory of spare parts valued at about $63.3 billion. Prior GAO reports have identified major risks associated with DOD's ability to manage spare parts inventories and prompted a need for reporting on spare parts spending and the impact of spare parts shortages on military weapon systems' readiness. In recent years, Congress has provided increased funding for DOD's spare parts budget to enable military units to purchase spare parts from the supply system as needed. In addition, beginning with fiscal year 1999, Congress provided supplemental funding totaling $1.5 billion, in part, to address spare parts shortages that were adversely affecting readiness. However, in making supplemental appropriations for fiscal year 2001, the Senate Committee on Appropriations voiced concerns about the Department's inability to articulate funding levels for spare parts needed to support the training and deployment requirements of the armed services and provide any meaningful history of funds spent for spare parts. In June 2001, we reported that DOD lacked the detailed information needed to document how much the military units were spending to purchase new and repaired spare parts from the central supply system. To increase accountability and visibility over spare parts funding, we recommended that DOD provide Congress with detailed reports on its past and planned spending for spare parts. 
In making the recommendation, we anticipated that such information, when developed through reliable and consistent data collection methods, would help Congress oversee DOD's progress in addressing spare parts shortages. In response to our recommendation, in June 2001 and February 2002, DOD provided Congress with Exhibit OP-31 reports as an integral part of the fiscal year 2002 and 2003 budget requests for operations and maintenance funding. These reports, which the services had previously submitted to DOD for internal use only, were to summarize the amounts each military service and reserve component planned to spend on spare parts in the future and the actual amount spent the previous fiscal year. Figure 1 shows the Exhibit OP-31 template as it appears in DOD's Financial Management Regulation. The regulation requires the military services to report the quantity and dollar values of actual and programmed spending for spare parts in total and by specific commodity groups, such as ships, aircraft engines, and combat vehicles, and to explain any changes from year to year as well as between actual and programmed amounts. (See apps. I through VI for each service's June 2001 and February 2002 exhibits.) DOD's June 2001 and February 2002 reports did not provide Congress with an actual and complete picture of spare parts spending. The actual amounts reported as spent by the Army in total on spare parts and by all services for most of the commodities were estimates. The services' budget offices had computed these estimates using various methods because they do not have a reliable system to account for and track such information. In addition, the services did not include in their totals the supplemental operations and maintenance funding they received, report the quantities of parts purchased, or explain deviations between planned and actual spending as required on the template.
These deficiencies limit the potential value of DOD's reports to Congress and other decision makers. Some of DOD's purported actual spending data were estimates. All of the Army's spending amounts and most of the other services' commodity amounts for prior years were estimates derived from various service methods--not actual obligations to purchase spare parts. The services' headquarters budget offices provided these estimates because they did not have a process for tracking and accumulating information on actual spending by commodity in their accounting and logistics data systems. The services' budget offices were to develop the Exhibit OP-31 data using the guidance shown on the template as published in DOD's Financial Management Regulation. The Department did not provide the services with any other guidance on how to develop information required for Exhibit OP-31 reports. The guidance directed the services to prepare reports showing planned and actual funding and quantities of repairable and consumable spare parts purchases by commodity for multiple fiscal years. Each service employed its own methodology to estimate the amount of money spent for spare parts as described below: The Army used estimates to report its total spending for spare parts and the breakout of spare parts spending for all commodity groups. The Army based its estimates on computer-generated forecasts of the spare parts needed to support the current and planned operations. Information from cost data files, logistics files, and the Operating and Support Management Information System was used to develop a consumption rate for spare parts on the basis of anticipated usage, considering such factors as miles driven and hours flown. The consumption factor was entered in the Army's Training Resources Model, which contains force structure, planned training events, and the projected operating tempo. 
The model used the consumption factor to estimate the total cost and quantities of spare parts that would be consumed. The model also provided the estimated spending for each of the commodities cited in the exhibit. The Navy Department used unaudited actual obligation data from the major commands as its basis for reporting total spending for spare parts and for some commodity groups. However, the breakout of actual spending data for the aircraft engine and airframe commodities was estimated. The Navy Department's headquarters budget office developed its reports on the basis of information contained in price and program change reports submitted by the major commands. The Navy Department's accounting system tracked obligations and developed pricing information for spare parts purchased under numerous subactivity groupings, some of which were tied to the categories listed on the OP-31 Exhibit. For example, codes have been established to track obligations for consumable and repairable spare parts purchased to support ship operations. The budget office prepared summary schedules accumulating these obligations from each command and transferred this information to the appropriate line of the OP-31 Exhibit. While the system provided accounting codes to summarize spare parts spending to support air operations and air training exercises, separate codes had not been established to distinguish spare parts purchased for aircraft engines and airframes--two separate and distinct commodity groupings on the exhibit. Lacking a separate breakout for aircraft engines and airframes, the budget office estimated the amounts for each commodity from historical trends. The Air Force used unaudited actual obligation data from its accounting system to identify and report its total spending, but its breakout of spending for the commodity groupings used estimates.
The Air Force calculated estimates for each commodity by applying budget factors to the total actual obligation data shown in its accounting system. The accounting system provided these data by expense code, which designated depot-level repairables and consumables by "fly" and "non-fly" obligations. The Air Force allocated all "fly" obligations to airframes and left the engine commodity blank, even though some of the obligations were for engines. The Air Force selected this approach because spare parts for airframes and engines are budgeted together. To estimate the amount spent on the missiles, communications equipment, and other miscellaneous commodities, the Air Force allocated the total "non-fly" obligations on the basis of ratios derived from the amounts previously budgeted for these categories. While DOD had no reliable system to account for and track all of the needed information on actual spending, some of the services' major commands have data that can be compiled for this purpose. Our visits to selected major operating commands for each military service revealed that they maintain automated accounting and logistics support data systems that could be used to provide unaudited data on spare parts funding allocations and actual obligations to purchase repairable and consumable spare parts in significant detail. For example, at the Army's Training and Doctrine Command, we found that the Integrated Logistics Analysis Program provided information to monitor and track obligation authority by individual stock number and federal supply class. Personnel at that location used these data to develop a sample report documenting spending in the format requested by Exhibit OP-31. The Air Force's Air Combat Command and the Navy's Commander in Chief Atlantic Fleet each had systems that also could be used to provide information on spending. 
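The ratio-based breakout the report attributes to the Air Force amounts to a simple pro-rata allocation: total "non-fly" obligations are split across commodities in proportion to previously budgeted amounts. A minimal sketch of that arithmetic follows; all dollar figures and commodity labels are invented for illustration and do not come from the exhibits:

```python
# Illustrative sketch of the pro-rata estimating method described above:
# split a known total across commodities in proportion to budgeted amounts.
# All figures are hypothetical (in millions of dollars).
def allocate_pro_rata(total_obligations, budgeted_by_commodity):
    budget_total = sum(budgeted_by_commodity.values())
    return {
        commodity: total_obligations * budgeted / budget_total
        for commodity, budgeted in budgeted_by_commodity.items()
    }

budgeted = {"missiles": 300.0, "communications": 200.0, "other": 500.0}
estimates = allocate_pro_rata(1200.0, budgeted)  # actual "non-fly" total
assert estimates == {"missiles": 360.0, "communications": 240.0, "other": 600.0}
# The estimates sum exactly to the actual total, but the commodity split
# merely mirrors the budget ratios -- it carries no information about what
# was actually spent on each commodity, which is the report's criticism.
assert sum(estimates.values()) == 1200.0
```

The sketch shows why such figures are not "actual" data: any shift in spending between commodities is invisible, because the breakout is determined entirely by the budget ratios rather than by obligations recorded against each commodity.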
We discussed these reporting deficiencies with Office of the Secretary of Defense comptroller officials, who concurred that some figures on the services' Exhibit OP-31 reports were estimates and that DOD did not have a comprehensive financial management system that would routinely provide actual spending information. They said that estimates are all they have access to, given the absence of a comprehensive financial management system that reports accurate cost-accounting information. Furthermore, they stated that even though detailed information on such spending is available at the major commands, developing better estimates would entail an expensive and potentially difficult reporting requirement that should be considered in deciding whether the current information is acceptable. DOD's exhibits were also incomplete in that they did not show all of the key information required by the template. DOD's guidance directed the services to report total operations and maintenance spare parts funding, the spare parts quantities bought, and the reasons for deviations between actual and programmed funding. However, two of the services did not provide information on the quantities of spare parts they had purchased, and none of the services explained variances between actual and initially programmed funding. Service officials commented that these reporting omissions were generally due to DOD's vague data collection guidance on the template and uncertainties about how to comply. The Army was the only service that reported spare parts quantity purchases each fiscal year. However, the Army's quantities were estimates based on applying historical usage rates to such factors as miles driven and hours flown, even when actual quantities were required. The Navy and Air Force did not report quantities because, according to service officials, such information was not readily available to them.
Furthermore, they said that DOD's data collection guidance did not adequately explain how this information was to be developed. None of the services explained changes between actual and programmed spending in the exhibits as required. In comparing the June 2001 and February 2002 exhibits, we noted that each service's fiscal year 2001 actual spending deviated from the amount programmed and that some differences were significant. For example, in the February 2002 exhibits, the Navy showed an increase for fiscal year 2001 of approximately $400 million, and the Air Force showed a decrease of approximately $93 million, in the actual amounts spent for spare parts versus the amounts programmed in the June 2001 exhibits. Neither service provided a reason for the change. While DOD guidance requires the services to report total programmed and actual spending amounts, the services do not identify and report pending supplemental funding requests in their programmed spending totals until after the supplemental funds are received. For example, the Navy's June 2001 exhibit did not include supplemental funding of about $299 million in its reported fiscal year 2001 programmed funding estimate, which totaled approximately $3.5 billion. However, the Navy's February 2002 exhibit included this additional funding in its fiscal year 2001 actual spending totals. Similarly, the Army's June 2001 exhibit, which reported programmed funding of approximately $2.1 billion for fiscal year 2002, did not include $250 million in supplemental funding for the purchase of additional spare parts to improve readiness. The supplemental funding was later included in the spending estimates reflected on the February 2002 exhibit. Service officials commented that these reporting omissions were generally due to uncertainties about requirements for reporting changes to spare parts spending estimates that result from supplemental funding.
Weaknesses in DOD's accounting and reporting practices hinder the usefulness of the data to decision makers. Providing actual data on spare parts spending is important to Congress and decision makers because, when linked to factors such as spare parts shortages and readiness, it can help serve as a baseline for evaluating the impact of funding decisions. Because the reports have not cited actual spending and have not been complete, they do not provide Congress with reasonable assurance about the amount of funds being spent on spares. As a result, they have less value to Congress and other decision makers in the Department during their annual deliberations about (1) how best to allocate future operations and maintenance resources to reduce spare parts shortages and improve military readiness and (2) when to make future resource allocation decisions about modernizing the force. Given the importance of spare parts to maintaining force readiness, and as justification for future budget requests, actual and complete information would be important to DOD as well as Congress. Therefore, we recommend that the Secretary of Defense (1) issue additional guidance on how the services are to identify, compile, and report actual and complete spare parts spending information, including supplemental funding, in total and by commodity, as specified by Exhibit OP-31, and (2) direct the Secretaries of the military departments to comply with Exhibit OP-31 reporting guidance to ensure that complete information is provided to Congress on the quantities of spare parts purchased and explanations of deviations between programmed and actual spending. In written comments on a draft of this report, DOD partially concurred with both recommendations. DOD's written comments are reprinted in their entirety in appendix VII. 
DOD expressed concern that the first recommendation focused only on improving the reporting of operations and maintenance appropriations spending for spare parts but did not address other appropriations used for these purposes or working capital fund purchases. DOD stated that in order to have a comprehensive picture of spare parts spending, information on spare parts purchased with working capital funds and other investment accounts needs to be reported. The Department offered to work with Congress to facilitate this kind of analysis. As our report makes clear, we focused our analysis on the information the Department reported--operations and maintenance funding--and our recommendation was directed at improving the accuracy of the information. We continue to believe it is important that the Congress receive accurate actual spending data for these appropriations. Furthermore, as we point out in the report, operations and maintenance funding is the principal source of funds used by the military services to purchase new or repaired spare parts from the working capital funds, and as such, is a key indicator of the priority being placed on spares needs. Lastly, our report recognizes that there are other sources of funds for spare parts purchases, and we support DOD's statement that it will work with Congress to provide more comprehensive reporting on actual and programmed spending from all sources. In partially concurring with the second recommendation, the Department agreed that the services need to explain deviations between programmed and actual spending but believed that reporting spare parts quantities purchased as required by the financial management regulation does not add significant value to the information being provided to Congress because of the wide range in the unit costs for parts. 
While we recognize that the costs of parts vary significantly, continuing to include such information by commodity provides some basis for identifying parts procurement trends over time and provides valuable information about why shortages may exist for certain parts. Therefore, we continue to believe that our recommendation is appropriate. To determine the accuracy, completeness, and consistency of the oversight reports to Congress on spare parts spending for the active forces under the operations and maintenance appropriation, we obtained copies of and analyzed data reflected on OP-31 exhibits submitted by the Departments of the Army, Navy, and Air Force for the June 2001 and February 2002 budget submissions. We compared data and narrative explanations on the reports with reporting guidelines and templates contained in the DOD Financial Management Regulation. We analyzed and documented the data collection and reporting processes followed by each of the military departments through interviews with officials and reviews of available documentation at DOD's Office of the Comptroller and budget offices within the Departments of the Army, Navy, and the Air Force. To determine the availability of alternative systems for tracking and documenting information on actual obligations for spare parts purchases, we visited selected major commands in each of the military departments. These major commands included the Army's Training and Doctrine Command; the Navy's Commander in Chief, Atlantic Fleet; and the Air Force's Air Combat Command. However, we did not attempt to validate the commands' detailed funding data. We also reviewed our prior reports outlining expectations for enhanced oversight reporting on the use of spare parts funds and high-risk operations within the Department of Defense. We performed our review from February through August 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to John P. 
Murtha, the Ranking Minority Member of the Subcommittee on Defense, House Committee on Appropriations; other interested congressional committees; the Secretary of Defense; the Secretaries of the Army, Air Force, and Navy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Staff acknowledgments are listed in appendix VIII. Key contributors to this report were Richard Payne, Glenn Knoepfle, Alfonso Garcia, George Morse, Gina Ruidera, Connie Sawyer, George Surosky, Kenneth Patton, and Nancy Benco.
GAO was asked to identify ways to improve the availability of high-quality spare parts for the Department of Defense's (DOD) aircraft, ships, vehicles, and weapons systems. DOD's recent reports do not provide an accurate and complete picture of spare parts funding as required by the DOD Financial Management Regulation. As a result, the reports do not provide Congress with reasonable assurance about the amount of funds being spent on spare parts. Furthermore, the reports are of limited use to Congress as it makes decisions on how best to spend resources to reduce spare parts shortages and improve military readiness.
Unlike states that opt to cover CHIP-eligible children in their Medicaid programs and therefore must extend Medicaid covered services to CHIP-eligible individuals, states with separate CHIP programs have flexibility in program design and are at liberty to modify certain aspects of their programs, such as coverage and cost-sharing requirements. However, federal laws and regulations require states' separate CHIP programs to include coverage for routine check-ups, immunizations, emergency services, and dental services defined as "necessary to prevent disease and promote oral health, restore oral structures to health and function, and treat emergency conditions." States typically cover a broad array of additional services in their separate CHIP programs and, in some states, adopt the Medicaid requirement to cover Early and Periodic Screening, Diagnostic and Treatment (EPSDT) services. Separate CHIP programs must also comply with mental health parity requirements--meaning they must apply any financial requirements or limits on mental health or substance abuse benefits in the same manner as applied to medical and surgical benefits. With respect to costs to consumers, CHIP premiums and cost-sharing--irrespective of program design--may not exceed amounts as defined by law. States may vary separate CHIP premiums and cost-sharing based on income and family size, as long as cost-sharing for higher-income children is not lower than for lower-income children. Federal laws and regulations also impose additional limits on premiums and cost-sharing for children in families with incomes at or below 150 percent of the federal poverty level (FPL). In all cases, no cost-sharing can be required for preventive services--defined as well-baby and well-child care, including age-appropriate immunizations and pregnancy-related services. 
In addition, states may not impose premiums and cost-sharing that, in the aggregate, exceed 5 percent of a family's total income for the length of the child's eligibility period in CHIP. PPACA includes provisions that seek to standardize coverage and costs of private health plans in the individual and small group markets. QHPs offered both on and off the exchanges are required to comply with applicable private insurance market reforms, including relevant premium rating requirements, the elimination of lifetime and annual dollar limits on EHBs, prohibition of cost-sharing for preventive services, mental health parity requirements, and the offering of comprehensive coverage. PPACA allows exchanges in each state to offer coverage of pediatric dental services as an integrated benefit in a QHP or through an SADP, which consumers can purchase separately. In exchanges with at least one participating SADP, QHPs are not required to include the pediatric dental benefit. Some states require children obtaining coverage in their state-based exchanges to enroll in an SADP if their QHP does not include the pediatric dental benefit; consumers purchasing coverage in the federally facilitated exchange are not required to do so. With respect to costs to consumers, QHPs must offer coverage that meets one of four metal tier levels, which correspond to actuarial value (AV) percentages that range from 60 to 90 percent: bronze (AV of 60 percent), silver (AV of 70 percent), gold (AV of 80 percent), or platinum (AV of 90 percent). AV represents the percentage of covered health care costs that a health plan will pay, on average--the higher the AV, the lower the cost-sharing expected to be paid by consumers. Cost-sharing subsidies are available to individuals with incomes between 100 and 250 percent of the FPL to offset the costs they incur through copayments, coinsurance, and deductibles in a silver-level QHP. 
The cost-sharing subsidies are not provided directly to consumers; instead, QHP issuers are required to offer three variations of each silver plan they market through an exchange in the individual market. These plans are to reflect the cost-sharing subsidies through lower out-of-pocket maximum costs and, if necessary, through lower deductibles, copayments, or coinsurance. Once the adjustments from the subsidies are made, the AV of the silver plan available to eligible consumers will effectively increase from 70 percent to 73, 87, or 94 percent, depending on income. SADPs have different AV requirements than QHPs. SADPs are categorized as "high" and "low" level plans, with 85 and 70 percent AV, respectively. Cost-sharing subsidies are not available for pediatric dental costs incurred by a consumer enrolled in an SADP. Deductibles, co-pays, coinsurance amounts, and out-of-pocket maximum costs can vary within these plans, as long as the overall cost-sharing structure meets the required AV levels. Plans are allowed a de minimis variation of +/- 2 percent. Premium costs are not included in the AV computation. PPACA also limits, based on income, the amount eligible consumers must contribute toward premiums; in 2014, these premium contributions ranged from $471 to $8,949 for a family of four. The premium tax credit is available to eligible consumers regardless of which metal tier they choose; however, the credit is calculated based on the second-lowest cost silver plan in the rating area in which the consumer resides. Unlike cost-sharing subsidies, which generally do not apply to costs incurred for services by a consumer enrolled in an SADP, the maximum contribution amount on premiums includes premiums for both QHPs and SADPs, if relevant. Finally, PPACA established out-of-pocket maximum costs that apply to EHBs included in QHPs and SADPs. In 2014, these maximum costs for QHPs ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families for households with incomes between 100 and 400 percent of the FPL. 
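The income-based adjustment to a silver plan's AV described above can be sketched as a simple lookup. The income bands used below (as percentages of the FPL) are an assumption for illustration; this report states only that the effective AV becomes 73, 87, or 94 percent "depending on income."

```python
# A minimal sketch of the cost-sharing subsidy adjustment: eligible
# enrollees in a silver QHP (base AV of 70 percent) receive plan
# variations with higher effective AVs. The FPL bands below are an
# assumed mapping for illustration, not figures from this report.

def effective_silver_av(income_pct_fpl: float) -> int:
    """Effective actuarial value of a silver QHP after cost-sharing
    subsidies, given income as a percentage of the FPL."""
    if 100 <= income_pct_fpl <= 150:
        return 94
    if 150 < income_pct_fpl <= 200:
        return 87
    if 200 < income_pct_fpl <= 250:
        return 73
    return 70  # no cost-sharing subsidy; base silver AV

print(effective_silver_av(140))  # 94
print(effective_silver_av(300))  # 70
```

A higher effective AV means the plan pays a larger share of covered costs on average, which is why the subsidized silver variations narrow, but do not eliminate, the cost-sharing gap between QHPs and CHIP discussed later in this report.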
Out-of-pocket maximum costs for SADPs are in addition to the out-of-pocket maximum costs for QHPs and were established by each exchange in 2014. CHIP-eligible children may enroll in QHPs instead of enrolling in CHIP--either through a child-only plan or through a plan with other family members--but they are ineligible for premium tax credits and cost-sharing subsidies because of their eligibility for CHIP. However, if a state experiences a CHIP funding shortfall in the future and is therefore unable to enroll all CHIP-eligible children into a CHIP plan, such children may qualify for premium tax credits and cost-sharing subsidies to offset the cost of QHP coverage. In states not experiencing a funding shortfall, enrolling CHIP-eligible children in QHPs would generally increase costs for families. Under CMS regulations, if an individual who is ineligible for cost-sharing subsidies enrolls in the same policy as another family member who is eligible for cost-sharing subsidies, nobody covered under the policy will qualify for cost-sharing subsidies. As a result, enrolling CHIP-eligible children in QHPs could result in a loss of cost-sharing subsidies for family members who are eligible for these subsidies. To maintain cost-sharing subsidies for eligible family members, the CHIP-eligible child would need to be enrolled in a child-only health plan, for which premium tax credits would be unavailable because of the child's eligibility for CHIP. We determined that coverage in the selected CHIP plans and QHPs in our five states was generally comparable in that it included some level of coverage for nearly all the services we reviewed. Notable exceptions were certain enabling services and pediatric dental services, which were more frequently covered by the selected CHIP plans. (See app. I for a detailed list of selected services covered by the plans we reviewed.) 
With respect to certain enabling services, which may be particularly important for low-income children, care coordination or case management was offered by all selected CHIP plans, but by only one selected QHP. Similarly, routine transportation to and from medical appointments was covered by two CHIP plans but by none of the selected QHPs. With respect to pediatric dental services, the QHP in New York was the only selected QHP that covered them; the selected QHPs in the other four states did not integrate pediatric dental services within the medical coverage they offered. To obtain coverage for pediatric dental services, consumers who purchased the selected QHP in these states would also need to purchase an SADP. For consumers who purchased the selected QHP in New York or the selected SADP in the other four states, we determined that pediatric dental coverage available was generally comparable to what was available in their state's selected CHIP plan, with the exception of Utah, where the selected CHIP plan was more generous than the selected SADP. However, the extent to which consumers obtained coverage that included pediatric dental services is not clear. Available federal data with information on QHP enrollment suggest that many children in the United States with exchange coverage in 2014 may have been without comprehensive dental coverage. According to our analysis of enrollment data for 2014 provided by ASPE, 16 percent of children younger than 18 years of age in the 36 states with federally facilitated exchanges were enrolled in a QHP that included comprehensive dental services covering check-ups, basic, and major dental services. The remaining 84 percent of children were enrolled in QHPs that either had less than comprehensive or no dental coverage. Some of these families are likely to have purchased an SADP for their children, however. 
According to an ASPE report issued in May 2014, 18 percent of children younger than 18 years of age in the 36 states with federally facilitated exchanges who enrolled in a QHP also enrolled in an SADP, and these were likely among the families that had no comprehensive dental coverage included in their QHP. According to our analysis of enrollment data for 2014 provided by ASPE, virtually no children younger than 18 years of age in the 36 states with federally facilitated exchanges were enrolled in both a QHP that included comprehensive dental services and an SADP. According to CMS, a QHP must offer check-ups, basic, and major dental services to be considered a QHP with embedded dental coverage. According to our analysis of enrollment data for 2014 provided by ASPE, less than half of the QHPs in a given state offered any type of dental coverage--checkups, basic, or major dental services--in two-thirds of states with federally facilitated exchanges. With respect to coverage limits, selected CHIP plans and QHPs did impose limits on outpatient therapies, pediatric vision, and pediatric hearing. One notable difference between these selected CHIP plans and QHPs was the frequency with which they limited home- and community-based health care. While the selected QHP in four states imposed day or visit limits on these services, only one state's selected CHIP plan did so. In contrast, no QHPs imposed limits on durable medical equipment, while one CHIP plan imposed a $2,000 annual limit. For services where coverage limits were sometimes imposed on QHPs and CHIP plans, our review found that the limits on CHIP plans were at times less restrictive. For example, the selected QHP in Utah limited home- and community-based health care services to 60 visits per year while the selected CHIP plan in the state did not impose any limits on these services. Comparability between service limits in states' selected CHIP plans and QHPs was less clear for outpatient therapy services. 
For example, the selected CHIP plan in New York limited outpatient physical and occupational therapies to 6 weeks per year, with no limits on outpatient speech therapy, while the selected QHP in the state limited outpatient therapies to a combined 60 visits per condition per lifetime. (See app. II for a detailed list of coverage limits for services we reviewed in the selected plans.) In addition, for pediatric dental services, coverage limits in the selected QHP and SADPs were generally similar to those in the selected CHIP plan; however, when there were differences, CHIP was generally more generous. For example, the selected CHIP plan in Kansas allowed one sealant per tooth per year; in contrast, the selected high and low SADPs in the state allowed one sealant per tooth every three years. Similarly, the selected CHIP plan in Utah did not have any coverage limits on x-rays while the selected high and low SADPs in the state did. (See app. III for a detailed list of selected dental limits we reviewed in selected plans.) We determined that costs to consumers were almost always less in the selected CHIP plans than in the selected QHPs. Even considering PPACA provisions aimed at reducing cost-sharing amounts for certain low-income consumers who purchased QHPs, the differences remained, though they were smaller. For example, the selected CHIP plans in four of the five states did not include any deductible, which means that enrollees in those states did not need to pay a specified amount before the plan began paying for services. In contrast, QHPs we reviewed typically imposed annual deductibles, which were as high as $500 for an individual and $1,500 for a family in the plan variation that offered the lowest available deductibles for QHP enrollees. In addition, consumers who purchase selected SADPs may face separate deductible costs. 
For example, whereas dental services were subject to the plan deductible in the New York QHP, SADPs in Colorado, Illinois, and Kansas had separate dental deductibles that ranged from $25 to $50 for individuals enrolled in selected high plans and $45 to $50 for individuals enrolled in selected low plans. (See app. III for a detailed list of selected dental cost-sharing we reviewed in the selected plans.) For services we reviewed where the plans imposed copayments or coinsurance, the amount was typically less in a state's selected CHIP plan compared to its selected QHP, even considering PPACA provisions aimed at reducing cost-sharing amounts for certain low-income consumers who purchased QHPs. For example, the selected CHIP plan in two of our five states--Kansas and New York--did not impose copayments or coinsurance on any of the services we reviewed. In two of the remaining three states, the selected CHIP plan imposed copayments or coinsurance on less than half of the services we reviewed, and the amounts were usually minimal and on a sliding income scale. For example, for each brand-name prescription drug, the Illinois CHIP plan imposed a $3.90 copayment on enrollees with incomes greater than 142 and up to 157 percent of the FPL, which increased to $7 for enrollees with incomes greater than 209 and up to 313 percent of the FPL. In contrast, selected QHPs in all five states imposed copayments or coinsurance on most covered services we reviewed, and the amounts were consistently higher than those in the CHIP plan in the same state. For example, depending on income, the copayment for primary care and specialist physician visits in Colorado ranged from $2 to $10 per visit for enrollees in the selected CHIP plan, but was $25 and $35 per visit, respectively, for all enrollees in the selected QHP. Cost-sharing for dental services was also higher in a state's selected SADP than in its selected CHIP plan a majority of the time. 
In addition, in states where the selected QHP charged coinsurance and the selected CHIP plan required a copayment, a direct comparison of cost differences could not be made, although data suggest CHIP costs would generally be lower. For example, for an inpatient hospital admission, higher-income enrollees in the selected CHIP plan in Colorado paid $50, while all enrollees in the selected QHP in the state were responsible for 20 percent coinsurance after the deductible was met, an amount that was likely to be higher given that 20 percent of the average price for an inpatient facility stay in 2011 was over $3,000. (See app. IV for a detailed list of cost-sharing for services we reviewed in selected plans.) Our review of premiums for selected CHIP plans and QHPs also suggests that premiums were always less in the CHIP plans than in the QHPs we reviewed, even with the application of the premium tax credit to defray the cost of QHP premiums. For example, according to CHIP officials, annual CHIP premiums in 2014 for an individual varied by income level and ranged from $0 for the lowest income CHIP enrollees in Colorado, Illinois, Kansas, and New York, to $720 for enrollees between 351 and 400 percent of the FPL in New York, with most enrollees across the five selected states paying less than $200 per year. In contrast, annual premiums for a single child enrolled in selected QHPs ranged from $1,111 to $1,776 in our five states before the application of the premium tax credit. With the premium tax credit, the annual premium amount for selected QHPs was often significantly lower, but was still higher than the selected CHIP plan in all five states. For example, in Illinois, the premium for the selected CHIP plan for an individual with an income at 150 percent of the FPL was $0, while the premium for the selected QHP was $1,254, which was reduced to $944 after the premium tax credit was applied. 
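The premium comparison above reduces to simple arithmetic. The sketch below applies a given credit to a plan premium using the Illinois figures cited in this report (a $1,254 QHP premium reduced to $944, implying a $310 annual credit); in practice the credit is computed from the second-lowest cost silver plan and the consumer's income-based contribution, and the helper name here is ours, not from the report.

```python
# A minimal sketch of net premium after the premium tax credit,
# using the Illinois example from this report. Simplified: the
# actual credit calculation depends on the benchmark silver plan.

def net_annual_premium(plan_premium: float, tax_credit: float) -> float:
    """Premium the consumer pays after the tax credit, floored at zero."""
    return max(0.0, plan_premium - tax_credit)

illinois_qhp = net_annual_premium(1254.0, 310.0)
print(illinois_qhp)  # 944.0 -- still above the $0 CHIP premium in Illinois
```

Even after the credit, the net QHP premium in this example remains well above the corresponding CHIP premium, which is the pattern observed across all five selected states.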
However, the additional premium cost to families enrolling previously eligible CHIP children into their QHPs--a possibility if CHIP funding is not reauthorized--may be minimal or nothing. Because PPACA limits the amount lower-income families pay in premiums, families with incomes at 250 percent or less of the FPL--at least 75 percent of the separate CHIP enrollees in the states we reviewed--would generally pay no additional premium to add a child to their QHP. For example, in Kansas, the 2014 annual premium for the lowest cost silver-level QHP was $4,875 for a couple age 40 and an additional $1,211 to add a child. However, if the couple's income was 200 percent of the FPL, their maximum annual premium would be $2,494, and they would incur no additional costs by adding a child to their plan. Finally, all selected CHIP plans and QHPs limited the total potential costs to consumers by imposing out-of-pocket maximum costs, and these maximum costs were typically less in the CHIP plans we reviewed. For example, all five states applied the limit a family could pay in CHIP plans as established under federal law--including deductibles, copayments, coinsurance, and premiums--at 5 percent of a family's income during the child's (or children's) eligibility for CHIP. This 5 percent cap resulted in limits that varied based on a family's income level; for incomes between 100 and 400 percent of the FPL in 2014, the amount ranged from $584 to $2,334 for individuals, and $1,193 to $4,770 for a family of four. PPACA also established out-of-pocket maximum costs that apply to QHPs and may vary by income. These maximum costs do not include premiums, which may be separately reduced through the application of premium tax credits. QHPs may set out-of-pocket maximum costs that are lower than those established by PPACA, which was the case for three of the five selected QHPs. For example, the selected QHP in Colorado had individual out-of-pocket maximum costs ranging from $750 to $6,300 for individuals between 100 and 400 percent FPL. 
This amount was less than out-of-pocket maximum costs established under federal law, which ranged from $2,250 to $6,350 for individuals between 100 and 400 percent FPL in 2014. PPACA out-of-pocket maximum costs on EHB for households with incomes between 100 and 400 percent of the FPL in 2014 ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families. In 2015, out-of-pocket maximum costs on EHB for households with incomes between 100 and 400 percent of the FPL ranged from $2,250 to $6,600 for individuals and from $4,500 to $13,200 for families. Out-of-pocket maximum costs for SADPs are in addition to the out-of-pocket maximum costs for QHPs and may increase potential costs for families who purchase them. In 2014, each exchange established maximum out-of-pocket costs for SADPs, which do not include premiums. Annual out-of-pocket maximum costs for three of the four selected SADPs were $700 for one child and $1,400 for two or more children. We provided a draft of this report for comment to HHS. HHS officials provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Katherine Iritani at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
The Patient Protection and Affordable Care Act (PPACA) allows exchanges in each state to make available coverage of pediatric dental services as an embedded benefit in a QHP or through a stand-alone dental plan (SADP), which consumers may purchase separately. In exchanges with at least one participating SADP, QHPs were not required to include the pediatric dental benefit. Consumers in these five states were not required to purchase SADPs in 2014, even if their QHP did not include the pediatric dental benefit. Rehabilitation is provided to help a person regain, maintain, or prevent deterioration of a skill that has been acquired but then lost or impaired due to illness, injury, or disabling condition. While PPACA and its implementing regulations do not define habilitative services, habilitation has been defined by several advocacy groups as a service that is provided in order for a person to attain, maintain, or prevent deterioration of a skill or function never learned or acquired due to a disabling condition. The plan covers bone anchored hearing aids and cochlear implants only. Bone anchored hearing aids are used when traditional hearing aids are not efficient because of complications such as chronic infections or blockage. Cochlear implants are for patients with severe hearing loss where traditional amplification is no longer beneficial. The plan covers care coordination and case management for children with special health care needs only. Utah CHIP defines children with special health care needs as enrollees who have or are at increased risk for chronic physical, developmental, behavioral, or emotional conditions and who also require health and related services of a type or amount beyond that required by adults and children generally. Routine transportation includes transportation to and from medical appointments. 
Routine transportation is covered only for CHIP children with family incomes greater than 142 and up to 209 percent of the federal poverty level. Tables 1 and 2 provide information on coverage limits for selected services in State Children's Health Insurance Program (CHIP) plans and qualified health plans (QHP) in each of the five states we reviewed: Colorado, Illinois, Kansas, New York, and Utah. For coverage limits on pediatric dental services, see Appendix III. Tables 3 through 12 provide information on coverage, coverage limits, and cost-sharing--deductibles, copayments, and coinsurance--for selected dental services in State Children's Health Insurance Program (CHIP) plans we reviewed in five states: Colorado, Illinois, Kansas, New York, and Utah; a qualified health plan (QHP) in New York; and stand-alone dental plans (SADP) in Colorado, Illinois, Kansas, and Utah. For selected CHIP plans and the QHP in New York, we note differences in cost-sharing amounts by income level. For selected SADPs, we note the cost-sharing amounts for the "high" and "low" level options, which have actuarial values of 85 and 70 percent, respectively. For all five states, cost-sharing amounts were subject to out-of-pocket maximum costs. For CHIP enrollees in each state, cost-sharing and premium amounts were subject to a federally established out-of-pocket maximum cost equal to 5 percent of a family's income. For QHP enrollees, issuers established an out-of-pocket maximum cost for each plan that was equal to or less than the out-of-pocket maximum cost established under the Patient Protection and Affordable Care Act (PPACA). PPACA out-of-pocket maximum costs for households with incomes between 100 and 400 percent of the federal poverty level (FPL) in 2014 ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families. In 2014, each exchange established out-of-pocket maximum costs for SADPs. 
Annual out-of-pocket maximum costs for the selected SADPs in Colorado, Illinois, and Kansas were $700 for one child and $1,400 for two or more children. The selected SADP in Utah imposed an out-of-pocket maximum cost of $40 for the low plan and $20 for the high plan. In contrast to CHIP, the out-of-pocket maximum costs for QHPs and SADPs do not include premiums. Tables 13 through 17 provide information on cost-sharing--deductibles, copayments, and coinsurance--for selected services in State Children's Health Insurance Program (CHIP) plans and qualified health plans (QHP) we reviewed in five states: Colorado, Illinois, Kansas, New York, and Utah. For selected CHIP plans and QHPs, we note differences in cost-sharing amounts by income level. For selected QHPs, these variations reflect the cost-sharing subsidies that are available to certain enrollees. For all five states, cost-sharing amounts were subject to out-of-pocket maximum costs. For CHIP enrollees in each state, cost-sharing and premium amounts were subject to a federally established out-of-pocket maximum cost equal to 5 percent of a family's income. For QHP enrollees, issuers established an out-of-pocket maximum cost for each plan that was equal to or less than out-of-pocket maximum costs established under the Patient Protection and Affordable Care Act (PPACA). PPACA out-of-pocket maximum costs for households with incomes between 100 and 400 percent of the federal poverty level (FPL) in 2014 ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families. These out-of-pocket maximum costs do not include costs associated with services provided through an SADP and, in contrast to CHIP, these out-of-pocket maximum costs do not include premiums. In addition to the contact named above, Susan T. Anthony, Assistant Director; Sandra George; John Lalomio; Laurie Pachter; and Teresa Tam made key contributions to this report. 
Children's Health Insurance: Cost, Coverage, and Access Considerations for Extending Federal Funding. GAO-15-268T. Washington, D.C.: December 3, 2014.

Children's Health Insurance: Information on Coverage of Services, Costs to Consumers, and Access to Care in CHIP and Other Sources of Insurance. GAO-14-40. Washington, D.C.: November 21, 2013.

Children's Health Insurance: Opportunities Exist for Improved Access to Affordable Insurance. GAO-12-648. Washington, D.C.: June 22, 2012.

Medicaid and CHIP: Most Physicians Serve Covered Children but Have Difficulty Referring Them for Specialty Care. GAO-11-624. Washington, D.C.: June 30, 2011.

Medicaid and CHIP: Given the Association between Parent and Child Insurance Status, New Expansions May Benefit Families. GAO-11-264. Washington, D.C.: February 4, 2011.

Oral Health: Efforts Under Way to Improve Children's Access to Dental Services, but Sustained Attention Needed to Address Ongoing Concerns. GAO-11-96. Washington, D.C.: November 30, 2010.

Medicaid: State and Federal Actions Have Been Taken to Improve Children's Access to Dental Services, but Gaps Remain. GAO-09-723. Washington, D.C.: September 30, 2009.
Federal funds appropriated to states for CHIP--the jointly financed health insurance program for certain low-income children--are expected to be exhausted soon after the end of fiscal year 2015 unless Congress acts to appropriate new funds. Beginning in October 2015, any state with insufficient CHIP funding must establish procedures to ensure that children who are not covered by CHIP are screened for Medicaid eligibility. If ineligible, children may be enrolled in a private qualified health plan--or QHP--that has been certified by the Secretary of Health and Human Services (HHS) as comparable to CHIP, if such a QHP is available.

GAO was asked to examine coverage and costs to consumers in selected CHIP plans and private QHPs in selected states. GAO reviewed (1) coverage and (2) costs to consumers for one CHIP plan, one QHP, and, where applicable, one stand-alone dental plan (SADP) in each of five states--Colorado, Illinois, Kansas, New York, and Utah. State selection was based on variation in location, program size, and design; CHIP plan selection was based on high enrollment; and QHP selection was based on low plan premiums. GAO obtained CHIP and QHP premium data from state officials and federal and state websites. GAO also obtained documents from and spoke to federal officials, including from HHS's Assistant Secretary for Planning and Evaluation; state officials, including from CHIP and insurance departments; and issuers of QHPs. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.

In five selected states, GAO determined that coverage of services in the selected State Children's Health Insurance Program (CHIP) plans was generally comparable to that of the selected private qualified health plans (QHP), with some differences.
In particular, the plans were generally comparable in that most covered the services GAO reviewed, with the notable exceptions of pediatric dental and certain enabling services, such as translation and transportation services, which were covered more frequently by the CHIP plans. For example, only the selected QHP in New York covered pediatric dental services; the QHPs in the other four states did not include pediatric dental services, although some officials indicated this would change for 2015 offerings. In those four states, stand-alone dental plans (SADP) could be purchased separately. Selected CHIP plans and QHPs were also similar in terms of the services on which they imposed day, visit, or dollar limits, although the five selected CHIP plans generally imposed fewer limits than the selected QHPs. For services on which both QHPs and CHIP plans sometimes imposed coverage limits, GAO's review found that the limits in CHIP plans were at times less restrictive. For example, the selected QHP in Utah limited home- and community-based health care services to 60 visits per year, while the selected CHIP plan did not impose any limits. In addition, for pediatric dental services, coverage limits in the selected SADPs were generally similar to those in the selected CHIP plan; however, when there were differences, CHIP was generally more generous.

Consumers' costs for these services--defined as deductibles, copayments, coinsurance, and premiums--were almost always less in the five states' selected CHIP plans when compared to their respective QHPs, despite the application of subsidies authorized under the Patient Protection and Affordable Care Act (PPACA) that reduce these costs in the QHPs. Specifically, when cost-sharing applied, the amount was typically less for CHIP plans, even considering PPACA provisions aimed at reducing cost-sharing amounts for certain low-income consumers who purchased QHPs.
For example, an office visit to a specialist in Colorado would cost a CHIP enrollee a $2 to $10 copayment per visit, depending on income, compared to the lowest available copayment of $25 per visit in the selected Colorado QHP. GAO's review of premium data further suggests that selected CHIP premiums were always lower than selected QHP premiums, even when considering the application of PPACA subsidies that help to defray the cost to certain consumers. For example, the 2014 annual premium for the selected Illinois CHIP plan for an individual at 150 percent of the federal poverty level (FPL) was $0. By comparison, the 2014 annual premium for the selected Illinois QHP was $1,254, which was reduced to $944 for an individual at 150 percent of the FPL, after considering federal subsidies to offset the cost of coverage. Finally, all selected CHIP plans and QHPs GAO reviewed capped out-of-pocket costs, and these maximum costs were typically lower in the CHIP plans.
Medicaid is jointly financed by the federal government and the states, with the federal government matching most state Medicaid expenditures using a statutory formula that determines a federal matching rate for each state. Medicaid is a significant component of federal and state budgets, with estimated total outlays of $576 billion in fiscal year 2016, of which $363 billion is expected to be financed by the federal government and $213 billion by the states. Medicaid served about 72 million individuals, on average, during fiscal year 2016. As a federal-state partnership, both the federal government and the states play important roles in ensuring that Medicaid is fiscally sustainable over time and effective in meeting the needs of the populations it serves. States administer their Medicaid programs within broad federal rules and according to individual state plans approved by CMS, the federal agency that oversees Medicaid.

Federal matching funds are available to states for different types of payments that states make, including payments made directly to providers for services rendered under a fee-for-service model and payments made to managed care organizations:

Under a fee-for-service delivery model, states make payments directly to providers; providers render services to beneficiaries and then submit claims to the state to receive payment. States review and process fee-for-service claims and pay providers based on state-established payment rates for the services provided.

Under a managed care delivery model, states pay managed care organizations a set amount per beneficiary; providers render services to beneficiaries and then submit claims to the managed care organization to receive payment. Managed care plans are required to report to the states information on services utilized by Medicaid beneficiaries enrolled in their plans--information typically referred to as encounter data.
Most states use both fee-for-service and managed care delivery models, although the number of beneficiaries served through managed care has grown in recent years. Federal law requires each state, under both fee-for-service and managed care delivery models, to operate a claims processing system to record information about the services provided and report this information to CMS:

Provider claims and managed care encounter data are required to include information about the service provided, including the general type of service; a procedure code that identifies the specific service provided; the location of the service; the date the service was provided; and information about the provider who rendered the service (e.g., provider identification number). Fee-for-service claims records must include the payment amount. Federal law requires states to collect managed care encounter data, but actual payment amounts to individual providers are not required.

Long-term services and supports financed by Medicaid are generally provided in two settings: institutional facilities, such as nursing homes and intermediate-care facilities for individuals with intellectual disabilities; and home and community settings, such as individuals' homes or assisted living facilities. Under Medicaid requirements governing the provision of services, states generally must provide institutional care to Medicaid beneficiaries, while coverage of home- and community-based services (HCBS) is generally optional. Medicaid spending on long-term services and supports provided in home and community settings has increased dramatically over time--to about $80 billion in federal and state expenditures in 2014--while the share of spending for care in institutions has declined, and HCBS spending now exceeds long-term care spending for individuals in institutions (see fig. 1). All 50 states and the District of Columbia provide long-term care services to some Medicaid beneficiaries in home and community settings.
Personal care services, a key type of HCBS, are typically nonmedical services provided by personal care attendants--home-care workers who may or may not have specialized training. The demand for personal care services is expected to increase in coming years, as is the number of attendants providing these services. The number of Medicaid beneficiaries receiving personal care services at this time is not known, but is likely in the millions. In calendar year 2012, the most recent year for which complete data were available, an estimated 1.5 million beneficiaries in the 35 states reporting at the time received personal care services at least once. Total Medicaid spending for personal care services is also not known, as spending in managed care delivery systems is not reported by service. Total Medicaid spending for personal care services in fee-for-service delivery systems was about $15 billion in fiscal year 2015.

With approval from CMS, states can choose to provide personal care services under one or more types of authorities (referred to in this statement as programs) put in place over the past 41 years under different sections of the Social Security Act. The various types of programs provide states with options for permitting participant direction and choices about how to limit services, among other things (see table 1). CMS has implemented the different statutory requirements associated with these various program types by issuing regulations, as well as guidance to help states implement their Medicaid programs in accordance with applicable statutory and regulatory requirements. Guidance can include letters to state Medicaid directors, program manuals, and templates to help states apply for CMS approval to provide certain services like personal care. Together with federal statutes, the regulations and guidance issued by CMS establish a broad federal framework for the provision of personal care services.
States are responsible for establishing and administering specific policies and programs within the federal parameters laid out in this framework. In our 2016 report examining the federal program requirements for the multiple programs under which personal care services are provided, we found significant differences in federal requirements related to beneficiary safety and ensuring that billed services are provided. These differences may translate to differences in beneficiary protections across program types. Program requirements can include general safeguards for ensuring beneficiary health and welfare, quality assurance measures, critical incident monitoring, and attendant screening. For example, states implementing an HCBS Waiver program or a State Plan HCBS program must:

Describe to CMS how the state Medicaid agency will determine that it is assuring the health and welfare of beneficiaries. To do so, states must describe the activities or processes related to assessing or evaluating the program; which entity will conduct the activities; the entity responsible for reviewing the results of critical incident investigations; and the frequency at which activities are conducted.

Demonstrate to CMS, by providing specific details, that an incident management system is in place, including incident reporting requirements that establish the type of incidents that must be reported, who must report incidents, and the timeframe for reporting.

In contrast, states implementing a State Plan personal care services program or a Community First Choice program have fewer requirements for beneficiary safeguards. For example, for these programs, states are not required to do the following:

Provide CMS with detailed information describing the activities they are taking to assure the health and welfare of beneficiaries.
Demonstrate to CMS specific details about their critical incident management process and incident reporting system; instead, they are required to describe more generally their "process for the mandatory reporting, investigating and resolution of allegations of neglect, abuse, or exploitation."

Table 2 below illustrates more broadly the differences in federal program requirements that establish beneficiary safeguards and protections that we identified in our 2016 report. Differences in federal program requirements may also result in significant differences in the level of assurance that billed services are actually provided to beneficiaries. States implementing HCBS Waiver programs and State Plan HCBS programs, for example, are required by CMS to provide evidence that the state is only paying claims when services are actually rendered, while states implementing State Plan personal care services and Community First Choice programs are not required to do so. Table 3 below highlights the federal Medicaid personal care services program requirements that we identified in our 2016 report to ensure that billed services are provided for each of the different types of HCBS programs states may administer.

The four selected states we examined as part of our 2016 report used different methods to ensure attendants provided billed services to beneficiaries, according to state officials. For example, for at least some personal care services programs, two states required beneficiaries to sign timesheets, and two states used electronic visit verification timekeeping systems. All four states performed quality assurance reviews for some personal care services programs to ensure billed services are received. The differing federal program requirements can create complexities for states and others in understanding federal requirements governing different types of HCBS programs, including personal care services.
These different requirements may also result in significant differences in beneficiary safeguards and fiscal oversight, as illustrated in the following examples:

Beneficiaries may experience different health and welfare safeguards depending on the program in which they are enrolled. For example, in one state we reviewed in 2016, the state required quarterly or biannual monitoring of beneficiaries for most of its personal care services programs. In contrast, for another program, the state required only annual monitoring contacts, in part, officials told us, due to the differing program requirements.

Depending on the program type, CMS may have fewer assurances that beneficiaries with similar levels of need are in programs with similar protections. For example, three of the four states we reviewed--Maryland, Oregon, and Texas--have in recent years transitioned coverage of personal care services for beneficiaries who need an institutional level of care from personal care services programs with relatively more stringent federal beneficiary safety requirements to programs with relatively less stringent requirements. Although they were not required to do so, state officials in the three states reported that the states chose to continue using the same quality assurance measures in the new programs as the best way to ensure safety for beneficiaries. Without more harmonized requirements, we concluded that CMS has no assurance that states that transition personal care services from HCBS Waivers to Community First Choice in the future will make the same decisions.

States can use different processes for each personal care services program to ensure that billed services are actually provided, and some programs may not be subject to explicit federal personal care services requirements in this regard.
For example, in one state we reviewed in 2016, steps taken to ensure billed services are provided under some types of personal care services programs were not required in another of the state's programs.

A report we issued in 2012 reviewing states' implementation of different HCBS programs also suggested that states could benefit from more harmonization of program requirements. Officials in selected states we reviewed in 2012 noted the complexity of operating multiple programs. For example, officials from one state reported that the complexity resulted in a siloed approach, with different enrollment, oversight, and reporting requirements for each program. The administration and understanding of the programs available to beneficiaries was difficult for state staff and beneficiaries, according to officials in another state. The officials indicated that they would prefer CMS issue guidance on how states could operate different HCBS program types together, rather than issuing guidance on each program separately.

In our 2016 report, we acknowledged certain efforts CMS had taken to harmonize requirements and improve oversight of personal care services programs. However, despite these efforts, we found that significant differences in program requirements existed. We recommended that CMS take additional steps to better harmonize and achieve a more consistent application of program requirements, as appropriate, across the different personal care services programs in a way that accounts for common risks faced by beneficiaries and to better ensure billed services were provided. CMS agreed with these recommendations, and has sought input by publishing a request for information on numerous topics related to Medicaid home- and community-based services, including input on how to ensure beneficiary health and safety and program integrity across different types of personal care services programs.
In our 2017 report examining the data CMS uses to monitor the provision of personal care services, we found that claims and encounter data collected by CMS were not timely. Data are typically not available for analysis and reporting by CMS or others for several years after services are provided. We found that this happens for two reasons. First, although states have 6 weeks following the completion of a quarter to report their claims data, their reporting could be delayed as a result of providers and managed care plans not submitting data in a timely manner, according to the CMS contractor responsible for compiling data files of Medicaid claims and encounters. For example, providers may submit claims for fee-for-service payments to the state late, and providers may need to resubmit claims to make adjustments or corrections before they can be paid by the state. Second, once complete Medicaid Statistical Information System (MSIS) data are submitted by the states, the data must be compiled into annual person-level claims files that are in an accessible format, checked to identify and correct data errors, and consolidated for any claims with multiple records. This process, for one year of data, can take several years; as a result, by the time information from claims and encounters becomes available for use by CMS for purposes of program management and oversight, it could be several years old.

We also found that the Medicaid personal care services claims and encounter data that CMS collects were incomplete in two ways. First, specific data on beneficiaries' personal care services were not included in the calendar year 2012 MSIS data for 16 states, as of 2016, when we conducted our analysis. Nevertheless, these 16 states received federal matching funds for the $4.2 billion in total fee-for-service payments for personal care services that year--about 33 percent of total expenditures for personal care services reported by all states (see figure 2).
Second, even for the 35 states for which 2012 MSIS claims and encounter data were available, certain data elements collected by CMS were incomplete. For example, for the records we analyzed, 20 percent included no payment information, 15 percent included no provider identification number to identify the provider of service, and 34 percent did not identify the quantity of services provided (see figure 3). Incomplete data limit CMS's ability to track spending changes and corroborate spending with reported expenditures because the agency lacked important information on a significant amount of Medicaid payments for personal care services. For example, among the 2012 claims we reviewed for personal care services under a fee-for-service delivery model, claims without a provider identification number accounted for about $4.9 billion in total payments. Similarly, payments for fee-for-service claims with missing information on the quantity of personal care services provided totaled about $5.1 billion. These data gaps represented a significant share of total personal care services spending, which totaled about $15 billion in fee-for-service expenditures in 2015.

Even when states' claims and encounter data collected by CMS were complete, we found that they were often inconsistent, which limits the effectiveness of the data in identifying questionable claims and encounters. For purposes of oversight, a complete record (claim or encounter) should include data for each visit with a provider or caregiver, with the dates when services were provided, the amount of services provided using a clearly specified unit of service (e.g., 15 minutes), and the type of services provided using a standard definition. Such a complete record would allow CMS and states to analyze claims to identify potential fraud and abuse.
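The kind of field-level completeness check described above can be sketched as follows. This is a simplified illustration of ours, not GAO's or CMS's actual methodology; the field names and records are hypothetical, not the real MSIS layout.

```python
# Simplified, hypothetical illustration of a record-completeness check:
# for each oversight-relevant field, compute the share of records in
# which that field is missing.

def completeness_summary(records):
    """Return the share of records missing each key oversight field."""
    fields = ("payment_amount", "provider_id", "service_quantity")
    total = len(records)
    return {
        f: sum(1 for r in records if r.get(f) is None) / total
        for f in fields
    }

# Hypothetical claims records for demonstration only.
claims = [
    {"payment_amount": 120.0, "provider_id": "P01", "service_quantity": 4},
    {"payment_amount": None,  "provider_id": "P02", "service_quantity": 8},
    {"payment_amount": 75.0,  "provider_id": None,  "service_quantity": None},
    {"payment_amount": 60.0,  "provider_id": "P03", "service_quantity": 2},
]

for field, share in completeness_summary(claims).items():
    print(f"{field}: {share:.0%} missing")
```

In a real analysis the shares would be weighted by payment amounts as well, since, as noted above, the dollar value of incomplete records is what limits spending oversight.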
The following examples illustrate inconsistencies in data regarding when services were provided and the types of services that were provided from the 35 states whose data we reviewed:

When services were provided. State-reported dates of service were overly broad. In the 35 states, some claims for personal care services had dates of service (i.e., start and end dates) that spanned multiple days, weeks, and in some cases months. For 12 of the 35 states, 95 percent of their claims were billed for a single day of service. However, in other states, a number of claims were billed over longer time periods. For example, for 10 of the states, 5 percent of claims covered a period of at least 1 month, and 9 states submitted claims that covered 100 or more days. When states report dates of service that are imprecise, it is difficult to determine the specific dates on which services were provided and to identify whether services were claimed during a period when the beneficiary was not eligible to receive personal care services--for example, when hospitalized for acute care services.

Type of services provided. States used hundreds of different procedure codes for personal care services. Procedure codes on submitted claims and encounters were inconsistent in three ways: the number of codes used by states; the use of both national and state-specific codes; and the varying definitions of different codes across states. More than 400 unique procedure codes were used by the 35 states. CMS does not require that states use standard procedure codes for personal care services; instead, states have the discretion to use state-based procedure codes of their own choosing or national procedure codes.
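Consistency checks of the kind described in these two examples can be sketched as follows. This is our illustration, not GAO's actual analysis; the records, codes, and the 30-day threshold are hypothetical assumptions.

```python
# Hypothetical sketch of two consistency checks described above:
# (1) flag claims whose billed service period is overly broad, and
# (2) count the distinct procedure codes in use for the same service.

from datetime import date

def span_days(claim) -> int:
    """Length of the billed service period, inclusive of both dates."""
    return (claim["end_date"] - claim["start_date"]).days + 1

def flag_broad_spans(claims, threshold_days=30):
    """Return claims billed over a period of at least threshold_days."""
    return [c for c in claims if span_days(c) >= threshold_days]

# Hypothetical claims; the codes are illustrative placeholders.
claims = [
    {"id": "A", "state": "X", "code": "T1019",
     "start_date": date(2012, 3, 5), "end_date": date(2012, 3, 5)},
    {"id": "B", "state": "X", "code": "PC01",
     "start_date": date(2012, 1, 1), "end_date": date(2012, 4, 30)},
    {"id": "C", "state": "Y", "code": "T1019",
     "start_date": date(2012, 6, 1), "end_date": date(2012, 6, 14)},
]

flagged = flag_broad_spans(claims)
print([c["id"] for c in flagged])  # claim B, billed over a 121-day span

# Distinct procedure codes in use -- a large count across states would
# suggest that the same underlying service is coded inconsistently.
unique_codes = {c["code"] for c in claims}
print(len(unique_codes))
```

Flagged claims could then be cross-checked against eligibility periods, such as hospital stays, during which personal care services should not have been billed.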
As a result, the procedure codes used for similar services differed from state to state, which limits CMS's ability to compare and track changes in the use of specific personal care services provided to beneficiaries, because CMS cannot easily identify similar services across states by their procedure codes.

In our 2017 report we found that Medicaid personal care services expenditure data were not always accurate or complete, according to our analysis of expenditure data collected by CMS from states for calendar years 2012 through 2015. When submitting expenditure data, CMS requires states to report expenditures for personal care services on specific reporting lines. These reporting lines correspond with the specific types of programs under which states have received authority to cover personal care services, and can affect the federal matching payment amounts states receive when seeking federal reimbursement. For example, a 6 percentage point increase in the federal matching rate is available for services provided through the Community First Choice program. For three other types of HCBS programs, CMS also requires states to report their expenditures for personal care services separately from other types of services provided under each program on what CMS refers to as feeder forms--that is, individual expenditure lines for different types of services that feed into the total HCBS spending amount for each program.

We found that not all states were reporting their personal care services expenditures accurately, and, as a result, personal care services expenditures may have been underreported or reported in an incorrect category. We compared personal care services expenditures for all states for calendar years 2012 through 2015 with each state's approved programs during this time period and found that about 17 percent of personal care services expenditure lines were not reported correctly.
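The comparison just described, matching reported expenditure lines against each state's approved programs, can be sketched as follows. The program names, states, and data are hypothetical, and this is our illustration of the general approach, not CMS's or GAO's actual method.

```python
# Hypothetical sketch of a reporting-line cross-check: compare the
# program lines on which a state reported personal care expenditures
# against the programs approved for that state. All data illustrative.

APPROVED = {
    "State A": {"HCBS Waiver", "State Plan PCS"},
    "State B": {"Community First Choice"},
}

REPORTED = {
    "State A": {"HCBS Waiver"},                            # omits a required line
    "State B": {"Community First Choice", "HCBS Waiver"},  # unapproved line
}

def reporting_errors(approved, reported):
    """For each state, list approved lines not reported separately and
    reported lines that have no corresponding program approval."""
    errors = {}
    for state, ok in approved.items():
        got = reported.get(state, set())
        errors[state] = {
            "missing_lines": sorted(ok - got),
            "unapproved_lines": sorted(got - ok),
        }
    return errors

for state, e in reporting_errors(APPROVED, REPORTED).items():
    print(state, e)
```

Both error types in the sketch correspond to the categories discussed in the report: lines not separately reported as required, and expenditures reported against programs for which the state had no approval.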
As illustrated in figure 4, nearly two-thirds of the reporting errors were a result of states not separately identifying and reporting personal care services expenditures using the correct reporting lines, as required by CMS. Without separate reporting of personal care expenditures as required, CMS is unable to ensure appropriate federal payment, monitor how spending changes over time across the different program types, or develop an accurate estimate of the magnitude of potential improper payments for personal care services. The other types of errors involved states erroneously reporting expenditures that did not correspond with approved programs. As a result, CMS is not able to efficiently and effectively identify and prevent states from receiving federal matching funds inappropriately, in part because it does not have accurate fee-for-service claims data that track payments by personal care program type and that are linked with expenditures reported for purposes of federal reimbursement.

These errors demonstrated that CMS was not effectively ensuring that its reporting requirements for personal care expenditures were met. We concluded that by not ensuring that states accurately report expenditures for personal care services, CMS is unable to accurately identify total expenditures for personal care services, expenditures by program, and changes over time. According to CMS, expenditures that states reported through the Medicaid Budget and Expenditure System (MBES) are subject to a variance analysis, which identifies significant changes in reported expenditures from year to year. However, CMS's variance analysis did not identify any of the reporting errors that we found. CMS officials told us that they would continue to review states' quarterly expenditure reports for significant variances and follow up on such variances. In our 2017 report, we acknowledged certain efforts CMS had taken to improve the data it collects.
However, these efforts had not addressed the data issues we identified that limited the usefulness of the data for oversight. We recommended that CMS take steps to improve the collection of complete and consistent personal care services data and better monitor the states' provision of and spending on Medicaid personal care services. Specifically, CMS agreed with recommendations to better ensure states comply with data reporting requirements and to develop plans for analyzing and using the data. The agency neither agreed nor disagreed with recommendations to issue guidance to ensure key claims and encounter data are complete and consistent, or with a recommendation to ensure claims data can be accurately linked with aggregate expenditure data. In light of our findings of inconsistent and incomplete reporting of claims and encounters, errors in reporting expenditures, and the high risk of improper payments, we believe action in response to these recommendations is needed.

In conclusion, Medicaid personal care services are an important benefit for a significant number of Medicaid beneficiaries and amount to billions of dollars in spending by the federal government and states. The demand and spending for personal care services continue to grow. However, the services are not without risk. Personal care services are at high risk for improper payments, and beneficiaries may be vulnerable and at risk of unintentional harm and potential neglect and exploitation. Over the years, federal laws have given states a number of different options to provide home- and community-based services. Having various options for providing personal care services provides flexibilities for states in how they administer their programs and provide services to different groups of beneficiaries.
At the same time, our work has also found a patchwork of federal requirements, resulting in varying levels of beneficiary safeguards and requirements to ensure that billed services are actually provided. As a result, beneficiaries with similar needs could be receiving services in programs with significantly different safeguards in place, depending on the program. Similarly, the level of assurance that billed services are actually provided could vary based on the type of program. Further, our work showed that the data CMS collects for oversight of these programs are not always timely, complete, accurate, and consistent. Without better data, CMS is hindered in effectively performing key management functions related to personal care services, such as ensuring that state claims for enhanced federal matching funds are accurate. CMS has taken steps to improve the data it collects from states and to establish more consistent administration of policies and procedures across the programs under which personal care services are provided. However, we found that additional steps are warranted.

Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you might have at this time.

If you or your staffs have any questions about this testimony, please contact Katherine M. Iritani at (202) 512-7114. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tim Bushfield, Assistant Director; Anna Bonelli; Christine Davis; Barbara Hansen; Laurie Pachter; Perry Parsons; and Jennifer Whitworth.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Medicaid, a joint federal-state health care program, provides long-term services and supports for disabled and aged individuals, increasingly in home and community settings. Federal and state Medicaid spending on home- and community-based services was about $80 billion in 2014. Personal care services are a key component of this care. States can offer personal care services through many different types of programs, and each may be subject to different federal requirements established by statute, regulations, and guidance. The provision of personal care in beneficiaries' homes can pose safety risks, and these services have a high and growing rate of improper payments, including cases where services for which the state was billed were not provided. In recent years, Congress has directed HHS to improve coordination of these programs, which could harmonize requirements--that is, implement a more consistent administration of policies and procedures--and enhance oversight.

This statement highlights key issues regarding (1) the federal program requirements to protect beneficiaries' safety and ensure that billed services are provided, and (2) the usefulness of data collected by CMS for oversight. This testimony is based on reports GAO issued in 2016 and 2017. For these reports, GAO assessed CMS data on personal care services provided to beneficiaries and state spending. GAO also reviewed federal statutes, regulations, and guidance, and interviewed CMS officials.

In its November 2016 report, GAO found a patchwork of federal requirements related to how states must protect the safety of beneficiaries in their personal care services programs and how states ensure that billed services are actually provided. Personal care services help beneficiaries with basic activities of daily living, such as bathing and dressing, in a home- or community-based setting. 
For two types of programs under which personal care services can be offered, states must describe to the Centers for Medicare & Medicaid Services (CMS) how they will ensure the health and welfare of beneficiaries. Similar requirements were not in place for several other programs GAO examined. In addition, for some but not all personal care services programs that GAO reviewed, states must provide evidence to CMS that the state is paying claims for services that have actually been provided. These differing federal program requirements result in uneven beneficiary safeguards and levels of assurance regarding states' beneficiary protections and oversight of billed services. GAO recommended that CMS take steps to harmonize and achieve a more consistent application of federal requirements across programs. CMS agreed with GAO's recommendation and sought input on how to do so by publishing a request for information.

In its January 2017 report, GAO found limitations in the data that CMS collects to monitor the provision of personal care services and state spending on those services. For example:

Data on personal care services provided were often not timely, complete, or consistent. The most recent data available during GAO's review (2016) were for 2012 and included data for only 35 states. Further, 15 percent of claims lacked provider identification numbers, and 34 percent lacked information on the quantity of services provided. Data were also inconsistent, as more than 400 different procedure codes were used by states to identify personal care services. Without timely, complete, and consistent data, CMS is unable to effectively oversee state programs and verify who is providing personal care services or the type, amount, and dates of services provided.

Data on states' spending in CMS's expenditure reports, the basis for states' receipt of federal matching funds, were not always accurate or complete. From 2012 through 2015, states did not correctly report 17 percent of expenditure lines, according to GAO's analysis. Nearly two-thirds of these errors were due to states not separately identifying personal care services expenditures, as required by CMS, from other types of expenditures. Inaccurate and incomplete data limit CMS's ability to, among other oversight functions, ensure that federal matching funds are appropriate.

GAO made several recommendations to improve the data CMS collects to monitor the provision of and expenditures on personal care services. CMS agreed with some but not all of these recommendations.