AGENCY:
Children's Bureau (CB), Administration for Children and Families (ACF), Administration on Children, Youth and Families (ACYF), Department of Health and Human Services (HHS).
ACTION:
Final notice of statewide data indicators and national standards for Child and Family Services Reviews.
SUMMARY:
On April 23, 2014, the Administration for Children and Families (ACF) published a document in the Federal Register (79 FR 22604). The document provided the Children's Bureau's plan to replace the statewide data indicators used to determine a state's substantial conformity with titles IV-B and IV-E of the Social Security Act through the Child and Family Services Reviews (CFSRs). After consideration of the public comments and additional Children's Bureau analysis, the Children's Bureau is now publishing its final plan. Where relevant, this document addresses key comments from the field in response to the April 23, 2014 Federal Register document.
DATES:
Effective October 10, 2014.
FOR FURTHER INFORMATION CONTACT:
Miranda Lynch Thomas, Children's Bureau, 1250 Maryland Ave. SW., 8th Floor, Washington, DC 20024, (202) 205-8138.
SUPPLEMENTARY INFORMATION:
Background
The Children's Bureau (CB) implemented the CFSRs in 2001 in response to a mandate in the Social Security Amendments of 1994. The legislation required the Department of Health and Human Services to issue regulations for the review of state child and family services programs under titles IV-B and IV-E of the Social Security Act (see section 1123A of the Social Security Act). The reviews are required for CB to determine whether such programs are in substantial conformity with title IV-B and IV-E plan requirements. The review process, as regulated at 45 CFR 1355.31-37, grew out of extensive consultation with interested groups, individuals, and experts in the field of child welfare and related areas.
The CFSRs enable CB to: (1) Ensure conformity with federal child welfare requirements; (2) determine what is actually happening to children and families as they are engaged in child welfare services; and (3) assist states to enhance their capacity to help children and families achieve positive outcomes. CB conducts the reviews in partnership with state child welfare agency staff and other partners and stakeholders involved in the provision of child welfare services. We have structured the reviews to help states identify strengths as well as areas needing improvement within their agencies and programs.
We use the CFSR to assess state performance on seven outcomes and seven systemic factors. The seven outcomes focus on key items measuring safety, permanency, and well-being. The seven systemic factors focus on key state plan requirements of titles IV-B and IV-E that provide a foundation for child outcomes. If we determine that a state has not achieved substantial conformity in one or more of the areas assessed in the review, the state is required to develop and implement a program improvement plan within two years addressing the areas of nonconformity. CB supports the states with technical assistance and monitors implementation of their program improvement plans. We withhold a portion of the state's federal title IV-B and IV-E funds if the state is unable to complete its program improvement plan successfully.
Most relevant to this document are the national standards for state performance on statewide data indicators CB uses to determine whether a state is in substantial conformity with certain child outcomes. We are authorized by the regulations at 45 CFR 1355.34(b)(4) and (5) to add, amend, or suspend any of the statewide data indicators and to adjust the national standards when appropriate. Statewide data indicators are aggregate measures and we calculate them using administrative data available from a state's submissions to the Adoption and Foster Care Analysis and Reporting System (AFCARS),[1] the National Child Abuse and Neglect Data System (NCANDS),[2] or a CB-approved alternate source for safety-related data. If a state is proposing to use alternative source data for NCANDS, such data must be child-level data and contain all of the data elements necessary for CB to calculate performance for an indicator. If we determine that a state is not in substantial conformity with a related outcome due to its performance on an indicator, the state will include that indicator in its program improvement plan. The improvement a state must achieve is relative to the state's baseline performance at the beginning of the program improvement plan period.
In an April 23, 2014 Federal Register document (79 FR 22604) we provided a detailed review of the consultation with the field and information considered in developing the third round of the CFSRs. We also proposed a plan for using statewide data indicators and national standards that differs from those used in prior rounds, including the method of calculating such indicators and standards, and provided our rationale. During the 30-day public comment period following the Federal Register document, we received 52 unique responses from state and local child welfare agencies, national and local advocacy and human services organizations, researchers, and other interested persons. CB reviewed all public comments and questions before making final decisions regarding the statewide data indicators and the methodology. This public notice includes a summary of our response. The public comments and questions that were submitted are available in their original form on www.regulations.gov.
Summary of Final Statewide Data Indicators and Methods
We have changed two indicators in response to the public comments. CB will measure the recurrence of maltreatment instead of repeat reports of maltreatment as we proposed in the April Federal Register document. We will also add a new indicator to measure permanency in 12 months for children in foster care for 12 months to 23 months.
Therefore, our final plan is to use two statewide data indicators to measure maltreatment in foster care and recurrence of maltreatment in evaluating Safety Outcome 1: Children are, first and foremost, protected from abuse and neglect. We will use statewide data indicators to measure achievement of permanency in 12 months for children entering foster care, permanency in 12 months for children in foster care for 12 months to 23 months, permanency in 12 months for children in foster care for 24 months or more, re-entry to foster care in 12 months, and placement stability. These five permanency indicators will be used in evaluating Permanency Outcome 1: Children have permanency and stability in their living situations.
A description of each of the seven statewide data indicators, how we will calculate them, a summary of relevant public comments, and our rationale for the final indicators and response to the public comments follows. This document includes our approach to measuring a state's program improvement on the indicators should the state not meet a national standard. We also provide information on how we will share data and information related to state performance as well as data quality issues that may impact the indicators and methods.
Attachment A provides a summary of each final statewide data indicator including the numerators, denominators, adjustments and data periods used to calculate the national standards. Attachment B provides a comparison of the data measures used during CFSR Round 2 with the statewide data indicators we will use during Round 3. Attachment C provides information on the AFCARS and NCANDS data elements that are used to calculate the indicators and national standards. Attachment D provides information on the data quality thresholds applied in determining whether to include state data for calculating the indicators.
Finally, concurrent with this document, we are issuing CFSR Technical Bulletin #8, which expands on this document with additional technical information and discussion relevant to the statewide data indicators, national standards, and states' performance on them. The technical bulletin will be available on CB's Web site www.acf.hhs.gov/programs/cb.
Statewide Data Indicators for CFSR Safety Outcome 1: Children Are, First and Foremost, Protected From Abuse and Neglect
Safety Performance Area 1: Maltreatment in Foster Care
Indicator Description: Of all children in foster care during a 12-month period, what is the rate of victimization per day of foster care?
Calculation: The denominator is, of children in foster care during a 12-month period, the total number of days these children were in foster care as of the end of the 12-month period. The denominator is drawn from AFCARS. The numerator is, of children in the denominator, the total number of substantiated or indicated reports of maltreatment (by any perpetrator) during a foster care episode within the 12-month period. Rates are calculated per day of foster care. However, we will multiply the rate by 100,000 to produce larger and more readily understood numbers. This indicator is calculated using data that match children across AFCARS and NCANDS using the AFCARS record number.
Some states provide incident dates in their NCANDS data submissions. If a state provides incident dates that are associated with the maltreatment report, those records with an incident date occurring outside of the removal episode will be excluded, even if the report dates fall within the episode. We will also exclude the following: Complete foster care episodes lasting less than 8 days, any report of maltreatment that occurs within the first 7 days of removal, victims who are age 18 or older, and youth in foster care at age 18 or older. For those youth who at the beginning of an included report period are 17 years of age and turn age 18, any time spent in foster care beyond the young person's 18th birthday is not counted in the denominator.
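For illustration only, the following minimal Python sketch shows the arithmetic described above. The DataFrames and column names are hypothetical (not AFCARS or NCANDS element names), the AFCARS/NCANDS match and the exclusions above are assumed to have been applied already, and the sketch is not the official computation.

```python
import pandas as pd

def maltreatment_in_care_rate(cohort: pd.DataFrame, victimizations: pd.DataFrame) -> float:
    """Victimizations per 100,000 days of foster care.

    cohort: one row per child in foster care during the 12-month period,
        with 'child_id' and 'care_days' (days in care as of the period end).
    victimizations: one row per substantiated or indicated report during a
        foster care episode within the period, with 'child_id'.
    """
    # Denominator: total days of foster care for all children in the cohort.
    total_care_days = cohort["care_days"].sum()

    # Numerator: reports counted only for children in the denominator.
    in_cohort = victimizations["child_id"].isin(cohort["child_id"])
    total_reports = int(in_cohort.sum())

    # Rate per day of care, scaled to per 100,000 days for readability.
    return total_reports / total_care_days * 100_000
```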
Justification for Inclusion: This indicator provides a measure of whether the state child welfare agency is able to ensure that children do not experience abuse or neglect while in the state's foster care system. The indicator holds states accountable for keeping children safe from harm while under the responsibility of the state, no matter who perpetrates the maltreatment while the child is in foster care.
Public Comments and CB Response: Many commenters supported the statewide data indicator for maltreatment in foster care that we proposed originally. Such commenters endorsed how the rate will be calculated, the inclusion of all maltreatment types by any perpetrator (including parents), the exclusion of children in foster care less than eight days, and the use of incident dates.
Regarding incident dates, some of the comments noted concern that not all states were consistently reporting incident dates and some states have difficulty identifying those dates. CB acknowledges that there is variation in states' capacity to report and actual reporting of incident dates. We are committed to continuing technical assistance to states so that they can improve their ability to report incident dates. Since the report of an actual incident date can clarify whether an occurrence of maltreatment is actually separate from another or whether there were multiple reports that refer to the same incident in the data, we are compelled to use this information where it exists. Additionally, to prevent potential over-counting of reports that are made when a child first enters foster care that reflect what may have occurred prior to the child's foster care entry, we will exclude all reports of maltreatment that occur within the first 7 days of a child's removal from home. We will apply this exclusion consistently for all states.
Some commenters also expressed concern about the variation in how states decide to accept a report for investigation and define substantiated or indicated maltreatment to classify incidents of abuse or neglect. One commenter suggested that CB should have a consistent definition of substantiation or indication. We acknowledge that there is variation in how states screen in reports of maltreatment, define maltreatment, and substantiate maltreatment. This variation reflects the discretion that states have to define abuse and neglect and build a responsive child protective services system. CB does not have authority to mandate a singular definition or process. Further, doing so would result in skewing our understanding of how state child protective systems respond to alleged maltreatment. It may be helpful to think about this indicator as capturing how well the state is able to prevent child maltreatment, as it defines it, once the state has made a determination that a child needs the protection of the state's foster care system. How well the state is able to prevent child maltreatment in this circumstance is relative to a national standard based on how all states perform in preventing maltreatment in foster care as each state has defined maltreatment.
A couple of commenters were concerned that this indicator did not seem to capture how the agency protects children from maltreatment if such children do not enter foster care. It is accurate that this indicator is focused on protection from subsequent maltreatment for children who are already in the state agency's custody. We have another indicator that looks at victims of abuse and neglect more broadly to address the recurrence of maltreatment. We believe it is important to emphasize, however, that the set of indicators used for CFSR purposes is limited. We encourage states to have a more comprehensive set of indicators in their own CQI systems as measures of their performance for improvement and/or public accountability purposes. CB, through joint planning with states and the provision of technical assistance, can assist states as they consider appropriate indicators and measures to be included in their Child and Family Services Plan.
Two commenters questioned how trial home visits would impact the indicator. One commenter advocated for the inclusion of trial home visits in the denominator, while the other suggested that they should be excluded since the public may consider children on trial home visits to be at home. Since this indicator is intentionally capturing the maltreatment of a child while in the placement and care responsibility of the state agency, including when the child is visited by his or her parent or on a trial home visit, we have factored in the entire length of the trial home visit (until discharge) in the indicator. As such, we will not apply a trial home visit adjustment to this indicator.
One commenter expressed concern that this indicator will make it more difficult for children in foster care to achieve normalcy in their lives. The concern was that a national measure of maltreatment in foster care may influence child welfare agencies to require all adults who a child comes into contact with to have criminal and child abuse background checks. CB is supportive of ensuring that children in foster care are afforded normalcy to the extent practicable. We would like to work with states that may have higher rates of maltreatment in foster care to analyze which populations appear at risk of such harm and the circumstances in which maltreatment is occurring. That way we can help states strategize how to address these issues programmatically while balancing the well-being and other needs of the children the state serves.
Finally, a few commenters were concerned that the difficulty some states experience in using a common identifier in the AFCARS and NCANDS files could impact the accuracy of this measure. We have set data quality thresholds (see attachment D) to ensure that states' data quality issues do not affect the integrity of the standard. We have required states to use consistent identifiers of children in the reporting of AFCARS data since it began (1993), and we have requested the AFCARS record number in the NCANDS child files since FY 2003. In the last round of CFSRs, we provided states with data profiles that indicated the percentage of records with AFCARS record numbers reported in the NCANDS child file. This was a means of improving state reporting and providing context to the data that was provided to states on maltreatment by parents in foster care. As such, we proposed this indicator noting that states had improved their reporting of AFCARS record numbers, which made an indicator relying on this link viable in this round of reviews. We have identified the states for which using a consistent identifier is an issue and will be engaging in discussions with them on how they can improve their reporting of AFCARS record numbers.
Safety Performance Area 2: Recurrence of Maltreatment
Indicator Description: Of all children who were victims of a substantiated or indicated report of maltreatment during a 12-month reporting period, what percent were victims of another substantiated or indicated maltreatment allegation within 12 months of their initial report?
Calculation: The denominator is the number of children with at least one substantiated or indicated report of maltreatment in a 12-month period. The numerator is the number of children in the denominator who had another substantiated or indicated report of maltreatment within 12 months of their initial report. This indicator is calculated using data from NCANDS.
We will use report dates as the primary data element to determine when the maltreatment occurred, and include only reports occurring in the 12-month period. Substantiated or indicated maltreatment reports with report dates in the 12-month period and disposition dates after the 12-month period are included as well. If there is a subsequent report of maltreatment within 14 days of the earlier report, we will not count it as recurrent maltreatment. If the state provides the incident date and it indicates that multiple reports refer to the same incident, we will also not count it as recurrent maltreatment. Youth who are age 18 or older are excluded from the calculation of the indicator.
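As a simplified sketch of this calculation, the following Python example uses hypothetical column names, approximates 12 months as 365 days, and applies the 14-day rule relative to each child's initial report only; it is not the official computation.

```python
import pandas as pd

def recurrence_rate(reports: pd.DataFrame, period_start: str, period_end: str) -> float:
    """Percent of children with a substantiated or indicated report in the
    12-month period who had another such report within 12 months of the
    initial report, ignoring reports within 14 days of that report.

    reports: one row per substantiated or indicated report, with 'child_id'
        and 'report_date' (youth age 18 or older assumed already excluded).
    """
    reports = reports.copy()
    reports["report_date"] = pd.to_datetime(reports["report_date"])

    # Denominator: each child's earliest report inside the 12-month period.
    in_period = reports[(reports["report_date"] >= period_start)
                        & (reports["report_date"] <= period_end)]
    first = (in_period.groupby("child_id")["report_date"].min()
             .rename("first_report").reset_index())

    # Numerator: a later report more than 14 days but no more than 12 months
    # (365 days here, for illustration) after the initial report.
    merged = reports.merge(first, on="child_id")
    gap = merged["report_date"] - merged["first_report"]
    recurred = merged[(gap > pd.Timedelta(days=14))
                      & (gap <= pd.Timedelta(days=365))]["child_id"].unique()

    return len(recurred) / len(first) * 100
```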
Justification for Inclusion: This indicator provides an assessment of whether the agency was successful in preventing subsequent maltreatment for a child if the child is the subject of a substantiated or indicated report of maltreatment.
Summary of Public Comments: We originally proposed an indicator of the percent of children with a screened-in report of alleged maltreatment that occurs within 12 months of an initial screened-in report. We proposed that indicator to replace the recurrence of maltreatment indicator used in prior CFSRs because we thought it could better assess the scope of the child welfare agency's protection response to incoming reports of maltreatment. We also believed the proposed indicator would address potential measurement problems of a substantiation-based indicator should a state change to a differential response approach during the course of a CFSR program improvement period.
A couple of commenters supported the re-report of maltreatment indicator as we proposed it originally. However, the majority of commenters, particularly state child welfare agencies, expressed their concerns with the proposed indicator. Many commenters were concerned about several unintended consequences or challenges in messaging what the results of this indicator mean.
One concern expressed by commenters was the potential for any state changes in the policy or program criteria for screening in reports to impact a state's performance on the indicator, either negatively or positively. Another concern was that the indicator was perceived as contrary to state and federal laws that encourage and support reporting of potential child maltreatment. Similarly, some commenters believed that the indicator, if constructed as a measure of safety, could be interpreted to mean that agencies that had high rates of screened-in reports of maltreatment were not ensuring child safety and that there were higher rates of actual recurrence of substantiated maltreatment. These commenters noted that some states screen in reports for children who are at little to no risk of maltreatment, such as for community or public service referrals. They noted that such referrals should not be thought of in the same way as actual allegations of maltreatment.
Secondary concerns raised by commenters were around the variation in state responses to screened-in reports as a matter of practice that could make interpretation of the indicator challenging. For example, commenters identified challenges associated with the variation in state screening decisions and unsubstantiated report expunction requirements. Several commenters provided suggestions for retaining the re-report of maltreatment indicator including: Requiring a substantiated report to follow the initial screened-in report to qualify as a re-report of maltreatment; risk adjusting based on the state's screen-in rate; and allowing for a defined period of time between a report and subsequent report.
We believe that there is good reason for a revision to our approach. We are mindful that an indicator must be readily explainable to the field and the public in terms of what it tells us about a child welfare system's response to vulnerable children and families. We also were concerned about the potential for unintended consequences with the proposed measure. We considered some of the commenters' suggestions for improving a re-report indicator, but each proposed solution raised some level of concern. Still, CB believes that this indicator does hold potential to shed light on how well states are providing services to the larger population of children at risk. As such, we will include the re-report indicator as originally proposed as a context measure in the state's data profile.
CB will return to an indicator of recurrence of maltreatment, similar to that used in the prior two rounds. One modification relative to the indicator used in prior rounds is an expanded timeframe, looking at substantiated or indicated reports in an initial 12-month period and whether there is a subsequent one within 12 months. We are also using adjustments similar to those used in the prior recurrence of maltreatment indicator. We will use incident dates where available, exclude reports made within 14 days of an earlier report, and exclude youth age 18 and older. With this indicator, however, we are not able to address one of our concerns about the potential impact of a state implementing differential or alternative response on the measure. Where states implement differential response during program improvement, we will consider on a case-by-case basis the situation and its implications for accurate depictions of compliance and/or meeting improvement goals.
CFSR Permanency Outcome 1: Children Have Permanency and Stability in Their Living Situations
Permanency Performance Area 1: Permanency in 12 Months for Children Entering Foster Care
Indicator Description: Of all children who enter foster care in a 12-month period, what percent discharged to permanency within 12 months of entering foster care?
Calculation: The denominator is the number of children who enter foster care in a 12-month period. The numerator is the number of children in the denominator who discharged to permanency within 12 months of entering foster care and before turning age 18. This indicator is calculated using data from AFCARS. For the purposes of this indicator, discharged to permanency includes the AFCARS foster care discharge reasons of: Reunification with parents or primary caretakers, living with other relative(s), adoption and guardianship. This indicator excludes youth who enter foster care at or after age 18 and children who have a complete foster care episode lasting less than 8 days. For children with multiple foster care episodes in the 12-month period, this indicator will use the first episode reported.
We apply a trial home visit adjustment to this indicator. This means that if a child discharges from foster care during the 12-month period to reunification with parents or other caretakers after a placement setting of a trial home visit, any time in that trial home visit that exceeds 30 days is discounted from the length of stay in foster care. A similar trial home visit adjustment has been applied to permanency indicators in prior rounds of CFSRs. The adjustment is made to address variations in state policy regarding returning children to their families for a period of time before the state makes a formal discharge from foster care ending the agency's placement and care responsibility.
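For illustration only, a minimal Python sketch of the entry-cohort calculation and the trial home visit adjustment described above, assuming hypothetical column names (children with no trial home visit are assumed to have 0 in that column) and approximating 12 months as 365 days; it is not the official computation.

```python
import pandas as pd

PERMANENCY_EXITS = {"reunification", "live with relative", "adoption", "guardianship"}

def entry_cohort_permanency(entries: pd.DataFrame) -> float:
    """Percent of children entering foster care in a 12-month period who
    discharged to permanency within 12 months of entry and before age 18.

    entries: one row per child's first reported episode in the period, with
        'entry_date', 'discharge_date', 'discharge_reason', 'age_at_discharge',
        and 'trial_home_visit_days'.  Episodes under 8 days and entries at
        age 18 or older are assumed to be excluded already.
    """
    entries = entries.copy()
    entries["entry_date"] = pd.to_datetime(entries["entry_date"])
    entries["discharge_date"] = pd.to_datetime(entries["discharge_date"])

    # Length of stay, discounting trial home visit time beyond 30 days
    # (the trial home visit adjustment described above).
    stay_days = (entries["discharge_date"] - entries["entry_date"]).dt.days
    thv_discount = (entries["trial_home_visit_days"] - 30).clip(lower=0)
    adjusted_stay = stay_days - thv_discount

    achieved = (
        entries["discharge_reason"].isin(PERMANENCY_EXITS)
        & (adjusted_stay <= 365)              # within 12 months (365 days here)
        & (entries["age_at_discharge"] < 18)  # before turning 18
    )
    return achieved.mean() * 100
```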
Justification for Inclusion: This indicator provides a focus on the child welfare agency's responsibility to reunify or place children in safe and permanent homes as soon as possible after removal.
Public Comments and CB Response: Many commenters expressed support for one or more aspects of the permanency in 12 months for children entering foster care indicator. In particular, commenters supported the inclusion of guardianship and adoption within the concept of permanency and the use of an entry cohort to assess the state's achievement of permanency for children. A few commenters requested clarification on whether we would apply the trial home visit adjustment to this indicator, which we have confirmed above.
A significant number of commenters believed that this indicator, in combination with the permanency in 12 months indicator for children who have been in foster care for 24 months or more, left a significant gap in understanding the experiences of children who have been in foster care for 12 to 23 months. We are addressing these comments by adding an indicator. We provide details on the new indicator in the next section.
Two commenters pointed out issues with our original description of the indicator as evaluating the first episode within the period for children who have multiple episodes during the same 12-month period. One commenter noted that we indicated in an attachment that we would rely on the "date of most recent removal" data element and questioned whether the description of capturing episodes was accurate. Another commenter pointed out that multiple episodes within a six-month period may be masked, since a child cannot be duplicated within a report period. Both commenters are accurate about the limits of the AFCARS data. Each six-month report period from AFCARS includes detail on the most recent foster care episode as of the end of the six-month period. We do not have information in AFCARS about any intervening foster care episodes. These "masked" episodes represent a very small percentage of all episodes reported to AFCARS. When we refer to using the first episode within the period, we mean we will use the episode provided in the first six-month report period of the year. We are using the earliest one available to us, given the structure of AFCARS. In the past, when we merged six-month submissions together, we kept only the most recent reported episode for the 12-month period, so this represents a change from that practice.
Permanency Performance Area 2: Permanency in 12 Months for Children in Foster Care 12 to 23 Months
Indicator Description: Of all children in foster care on the first day of a 12-month period who had been in foster care (in that episode) between 12 and 23 months, what percent discharged from foster care to permanency within 12 months of the first day of the 12-month period?
Calculation: The denominator is the number of children in foster care on the first day of a 12-month period who had been in foster care (in that episode) between 12 and 23 months. The numerator is the number of children in the denominator who discharged from foster care to permanency within 12 months of the first day of the 12-month period and before turning 18. This indicator is calculated using data from AFCARS. For the purposes of this indicator, discharged to permanency includes AFCARS foster care discharge reasons of: Reunification with parents or primary caretakers, living with other relative(s), adoption and guardianship. Youth who are aged 18 years or more on the first day of the 12-month period are excluded from the calculation. We apply the trial home visit adjustment, as defined earlier, to this indicator.
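For illustration only, a minimal Python sketch of a first-day-of-period cohort calculation with hypothetical column names; the same function describes the 24-months-or-more cohort below by changing the months-in-care bounds. It assumes the trial home visit adjustment has been applied upstream and is not the official computation.

```python
import pandas as pd

def first_day_cohort_permanency(children: pd.DataFrame,
                                min_months: float,
                                max_months: float = float("inf")) -> float:
    """Percent of children in care on day one of the 12-month period (already
    in care between min_months and max_months) who discharged to permanency
    within the period and before turning 18.

    children: one row per child in care on day one of the period, with
        'months_in_care_on_day_one', 'age_on_day_one', and
        'discharged_to_permanency' (bool, trial home visit adjustment and
        the under-18 condition applied upstream).  Column names are illustrative.
    """
    cohort = children[
        (children["months_in_care_on_day_one"] >= min_months)
        & (children["months_in_care_on_day_one"] < max_months)
        & (children["age_on_day_one"] < 18)   # youth 18 or older are excluded
    ]
    return cohort["discharged_to_permanency"].mean() * 100

# The 12-to-23-month cohort; the 24-months-or-more cohort uses min_months=24.
# rate = first_day_cohort_permanency(df, min_months=12, max_months=24)
```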
Justification for Inclusion: This indicator provides a focus on the child welfare agency's responsibility to reunify or place children in safe and permanent homes in a timely manner when permanency is not achieved in the first 12 months of foster care.
Public Comments and CB Response: As noted above, a number of commenters were concerned about the potential for a significant gap in the understanding and measurement of performance for children who may achieve permanency between 12 and 23 months. Some of the concerns expressed noted that a significant portion of children who remain in care beyond a year achieve permanency within the next year and that this could not be captured with the two originally proposed indicators. Some made a programmatic argument about the requirements in title IV-B and IV-E of the Social Security Act (primarily due to amendments made by the Adoption and Safe Families Act) that focus on procedural safeguards for children who remain in care beyond 12 months. These include requirements for permanency hearings every 12 months that focus on moving a child to permanency and requirements to file petitions for termination of parental rights once a child has been in foster care for 15 out of the most recent 22 months, unless exceptions apply. Similarly, some commenters noted that guardianships and adoptions often take more than 12 months due to procedural and legal requirements, but could still be considered timely if they occurred within 18 to 24 months. These commenters advocated for adding an indicator that incorporates the performance of the state in achieving permanency for children between their first and third year of foster care. We found these arguments to be compelling and have added this second indicator to be responsive to these points.
Before adding this indicator, we considered whether to extend either the permanency achievement indicator for the entry cohort to include children who enter foster care in a 24-month period, or to expand the cohort of children in care 24 months or more to include children in care 12 months or more. With the former option, we believed that the longer cohort would weaken the focus on the large group of children who are likely to exit to permanency quickly. We also noted that by changing the cohort we could no longer pair it with a companion measure of re-entry to foster care within 12 months (discussed later). With the latter option, we were similarly concerned that we would no longer be able to focus attention on the children who have been in care for long periods of time and are most likely to grow up in foster care. Thus, we chose to add a new cohort rather than expand one of the originally proposed indicators.
Permanency Performance Area 3: Permanency in 12 Months for Children in Foster Care 24 Months or More
Indicator Description: Of all children in foster care on the first day of a 12-month period, who had been in foster care (in that episode) for 24 months or more, what percent discharged to permanency within 12 months of the first day of the 12-month period?
Calculation: The denominator is the number of children in foster care on the first day of a 12-month period who had been in foster care (in that episode) for 24 months or more. The numerator is the number of children in the denominator who are discharged from foster care to permanency within 12 months of the first day of the 12-month period and before turning 18. This indicator is calculated using data from AFCARS. For the purposes of this indicator discharged to permanency includes AFCARS foster care discharge reasons of: Reunification with parents or primary caretakers, living with other relative(s), adoption, and guardianship. Young people who are aged 18 years or more on the first day of the 12-month period are excluded from the calculation. The trial home visit adjustment, as defined earlier, is applied to this indicator.
Justification for Inclusion: This indicator monitors the effectiveness of the state child welfare agency in continuing to ensure permanency for children who have been in foster care for longer periods of time.
Public Comments and CB Response: Several commenters expressed support for this indicator as a useful measure because it uses a single concept of permanency that includes permanent placement with a relative, reunification, adoption, and guardianship. Commenters agreed with using this measure in parallel with the permanency in 12 months for children entering foster care indicator. Commenters also appreciated the indicator's potential to maintain a focus on those children who experience long lengths of stay in foster care. The field expressed concerns similar to those for the permanency in 12 months for children entering foster care indicator. Some commenters also expressed a need to adjust for trial home visits. A few commenters raised concerns about how the experiences of children age 17 and older could impact the measure and whether including this group of children in the statewide data indicator would disadvantage states that extend foster care beyond age 18.
CB has specified that this indicator will include the trial home visit adjustment as do the other two permanency achievement indicators. We have also addressed the concern regarding the gap in cohorts by adding another indicator as explained previously. Although we have excluded from the calculation of this indicator young people age 18 or older on the first day of the 12-month period, we will not exclude from the denominator young people who turn age 18 during the 12-month period. Regardless of federal and state provisions that provide young people avenues to remain in foster care beyond 18 for care and services while they transition to adulthood, when young people do not achieve permanency by 18 they cannot be considered to have achieved permanency. While we can agree that providing such extended care can mean better well-being outcomes for youth based on existing research, extending care does not address the young person's need for permanency, which is the focus of this indicator.
Permanency Performance Area 4: Re-entry to Foster Care in 12 Months
Indicator Description: Of all children who enter foster care in a 12-month period who discharged within 12 months to reunification, living with a relative(s), or guardianship, what percent re-enter foster care within 12 months of their discharge?
Calculation: The denominator is the number of children who entered foster care in a 12-month period and discharged within 12 months to reunification, living with a relative(s), or guardianship. The numerator is the number of children in the denominator who re-entered foster care within 12 months of their discharge from foster care. We exclude children in foster care for less than 8 days from this indicator and children who enter or exit foster care at age 18 or more. If a child re-enters foster care multiple times within 12 months of their discharge, only the first reported re-entry into foster care is selected. This indicator is calculated using data from AFCARS.
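For illustration only, a minimal Python sketch of the re-entry calculation with hypothetical column names (not AFCARS element names), approximating 12 months as 365 days and assuming the exclusions above have already been applied; it is not the official computation.

```python
import pandas as pd

def reentry_rate(denominator: pd.DataFrame, entries: pd.DataFrame) -> float:
    """Percent of children discharged to reunification, relatives, or
    guardianship within 12 months of entry who re-enter care within
    12 months of that discharge.

    denominator: one row per qualifying child, with 'child_id' and
        'discharge_date' (exclusions already applied).
    entries: all foster care entries, with 'child_id' and 'entry_date'.
    """
    denominator = denominator.copy()
    denominator["discharge_date"] = pd.to_datetime(denominator["discharge_date"])
    entries = entries.copy()
    entries["entry_date"] = pd.to_datetime(entries["entry_date"])

    # Pair each later entry with the child's qualifying discharge.
    merged = entries.merge(denominator, on="child_id")
    gap = (merged["entry_date"] - merged["discharge_date"]).dt.days

    # Count each child once, using the first re-entry within 12 months.
    reentered = merged[(gap > 0) & (gap <= 365)]["child_id"].unique()
    return len(reentered) / len(denominator) * 100
```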
Justification for Inclusion: This indicator enables CB to monitor the effectiveness of programs and practice that support reunification and other permanency goals so that children do not return to foster care.
Public Comments and CB Response: Some commenters expressed support for the re-entry to foster care statewide data indicator as its own measure and as a companion measure to permanency performance area 1 as we proposed. Companion measures are discussed in the program improvement plan section of this document. Several commenters shared concerns about the possibility that the indicator overlooks re-entry to foster care for children who did not achieve permanency quickly. Comments in this area point out that, because the indicator focuses on children who achieve permanency within one year, children who leave foster care after a year are not considered. They argued that this creates a truncated view of re-entry to foster care. Some of these commenters noted that the indicator used in the prior round of reviews included this more expanded cohort of children and provided the state with a better perspective on the children who returned to foster care. A number of alternative approaches to measuring re-entry to foster care were suggested, including revising the cohort of focus or adding cohorts or indicators that looked at re-entries into foster care more comprehensively.
During CFSR Round 2, this performance area was evaluated using a similar measure as a part of a composite. For that measure, we calculated the percent of all children discharged from foster care to reunification or living with a relative in a 12-month period who re-entered foster care in less than 12 months from the date of discharge. The CFSR round 3 indicator differs from the measure used previously, in part, by limiting the children included in the indicator to the 12-month entry cohort. We intentionally limited the indicator to focus on children who enter foster care within a 12-month period to better align it with the other cohorts. We also note again that since most children return to their homes or achieve permanency within the first year of entry into foster care, this indicator will capture the majority of the population that may re-enter foster care.
Permanency Performance Area 5: Placement Stability
Indicator Description: Of all children who enter foster care in a 12-month period, what is the rate of placement moves per day of foster care?
Calculation: The denominator is, of children who enter foster care in a 12-month period, the total number of days these children were in foster care as of the end of the 12-month period. The numerator is, of children in the denominator, the total number of placement moves during the 12-month period. The days in care and moves during the placement episodes are cumulative across episodes reported in the same year. Rates are calculated per day of foster care. However, we will multiply the rate by 1,000 to produce larger numbers that are easier to understand. Only those placement settings that are required to be counted in the AFCARS file are used for this indicator. If the child is moved to a living arrangement or setting that would not result in the state increasing the number of placement settings reported in AFCARS, such moves are not included in this indicator. Children in foster care for less than 8 days are excluded from the calculation. Youth who turn 18 during the 12-month period will not have time in care beyond their 18th birthday or moves after their 18th birthday counted.
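For illustration only, a minimal Python sketch of the rate calculation with hypothetical column names; the counting of days and countable moves is assumed to have been done upstream per the rules above, and the sketch is not the official computation.

```python
import pandas as pd

def placement_stability_rate(cohort: pd.DataFrame) -> float:
    """Placement moves per 1,000 days of foster care for an entry cohort.

    cohort: one row per child entering care in the 12-month period, with
        'care_days' (days in care within the period, across episodes, not
        counting time past the 18th birthday) and 'moves' (countable
        placement moves within the period, initial placement excluded).
        Stays under 8 days are assumed to be excluded already.
    """
    total_moves = cohort["moves"].sum()
    total_days = cohort["care_days"].sum()
    # Rate per day of care, scaled to per 1,000 days for readability.
    return total_moves / total_days * 1_000
```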
Justification for Inclusion: This indicator emphasizes states' responsibility to ensure that children whom the state removes from their homes experience stability while they are in foster care.
Public Comment and CB Response: Several commenters expressed support for the placement stability data indicator, citing it as an improvement over the previous measure and as empirically based. Some commenters agreed with the use of entry cohorts and the move to a rate of placements controlling for the length of stay. A few commenters asked for clarity on which moves in foster care are included in the indicator. In response, we have added to the description above. In general, there are placement settings that are reported in AFCARS but which are not counted in terms of a move. These include trial home visit episodes, runaway episodes, respite care, and changes in a single foster family home's status, for example to reflect a licensing change from a foster care home to a home dually licensed for adoption. Additional information on AFCARS placement setting changes can be found in CB's Child Welfare Policy Manual.[3]
A few commenters voiced concerns about using only entry cohorts for placement stability, which overlooks children who have been in foster care for longer periods of time. Other commenters pointed out that states could track additional cohorts of children without it being a federal indicator for CFSR purposes. During CFSR Round 2, we evaluated placement stability through three individual measures that made up a composite. All three of the measures, differentiated by length of stay in foster care, looked at the percent of children with two or fewer placement settings. The new indicator controls for the length of time children spend in foster care, so only one indicator is needed. Further, it looks at moves per day of foster care, rather than children as the unit of analysis, as was employed during CFSR Round 2. The measure used for CFSR Round 2 was unable to differentiate children who moved twice from children who moved more often. The new indicator does not count initial placements, but counts each subsequent move to capture accurately the rate of placement moves given the amount of time children were at risk of moving, rather than the number of children affected.
CB believes that placement stability is important to the permanency and well-being of children in foster care regardless of how long they have been in foster care. Even so, our analysis of AFCARS data indicates that most placement moves occur within a child's first 12 months of foster care, which is why we focused this indicator on that time period. With this refined focus, CB and states can monitor the period during which placement moves are most likely to occur and the state's most recent performance. Since the CFSR Round 2 measures will still be included as context measures in the data profile, states can use such information to analyze their trends and practice and target areas for improvement.
Some commenters questioned how to calculate the measure and whether the data were available to do so accurately. One concern was whether all placement days could be counted across all episodes in a year. Although the structure of AFCARS obscures some short-term episodes from view, we are using all available information to sum placement days and moves across episodes, to the extent practicable. The number of placement settings is always relevant to the reported episode, so this does not bias the results. Further, it is the same for all states, so we treat states equally methodologically.
Another commenter asked for clarification on whether the indicator would track children for 12 months from entry date, or simply count placement days during the 12-month period for children entering during that period. The calculation is the latter; we will count only the care days used within the 12-month period. Even if the child entered late in the 12-month period, we will count only those days and moves within the 12-month period. This measure allows for this because it controls for time in care.
Some commenters were apprehensive about how the placement stability indicator might impact beneficial placement moves in foster care. Several commenters pointed out that there are circumstances when placement changes might produce better outcomes for children or best address their well-being needs, such as when children may be moved to be with siblings or to meet the Indian Child Welfare Act's placement preferences. These commenters noted that the data generated by the placement stability indicator might not adequately explain these situations or might create disincentives to move a child when such moves are appropriate.
As we have noted in response to similar comments on the indicators of placement stability used in prior rounds of review, AFCARS does not have information about whether a placement change reflects a positive move that is made in the best interests of the child and/or towards the achievement of the child's permanency and well-being needs. The current administrative data collection does not capture all of the contextual information necessary for us to understand the dynamic needs of the child or the conditions of the child's placement. We have always used the onsite case review component of the CFSR to provide more evaluative information about a child's moves in foster care and continue to do so in this round of reviews.[4] In so doing, we consider whether a move legitimately supports the child's best interests rather than responding to an agency's resource limitations or other concerns. States' past performance during the onsite case review in this area indicates that children experience many moves that are not for the purposes of meeting their needs.[5]
Finally, a couple of commenters noted that state administrators might have difficulty in explaining this indicator to stakeholders or thinking through how it relates to practice since it is expressed as a rate, as opposed to the prior placement stability measure. We understand that the new indicators, particularly those that are expressed as a rate, will require states to develop new strategies to communicate with the field about how we measure performance. We will work with states to do so in the data profiles and in the ongoing assistance we provide to states and their stakeholders around practice implications.
Additional Comments on Cross-Cutting Issues or Multiple Indicators
Some commenters opined on cross-cutting issues or requested that CB address other issues in connection with the indicators that are relevant as general concerns or to multiple indicators. There were several additional comments that were outside the scope of this Federal Register document and relate to comments or perspectives on child welfare policy that are inappropriate for us to address in this document.
Use of individual indicators and fewer indicators. Many commenters expressed strong support for our proposal to replace the composites used for permanency in round 2 with individual indicators of permanency in this round. Many appreciated our responsiveness to feedback from the field on their challenges with translating composite measures and noted that individual indicators had more promise for engaging their workers and partners in understanding performance and working together towards improvement. Similarly there were several commenters who supported using fewer indicators as part of the CFSR. Some noted that a limited number of indicators would also reduce challenges in the interpretation of multiple measures, which may sometimes appear to offer conflicting perspectives on performance.
Greater reliance on entry cohorts. Commenters generally supported CB's intention to rely more on entry cohorts as a method for measuring performance and gauging state improvement. However, a commenter suggested that CB be more precise in its terminology, noting that the term "entry cohort" was overbroad to describe the cohorts of interest the indicators include. While we agree that this term is broad, we included the term to reflect our general change in approach to measurement in some areas. As we have described each indicator's cohort specifically in terms of which children and circumstances are included in the numerator and the denominator, we do not believe it is necessary to go into greater detail in naming the type of cohorts used.
Federal data elements and consistency of state practice. A few commenters requested that CB define terms that are referenced in the indicators or require states to have consistency in what is captured in AFCARS. One agency asked for CB to evaluate how to define “foster care placement” to ensure that the states report consistently who is in foster care across the country. In particular, the commenter noted that a child's placement outside of his or her own home and with a relative is not always included in the reporting population depending on the circumstances. Another requested that we provide more clarity regarding the discharge reason of `living with relatives' within AFCARS.
CB is not defining those terms further in this document. However, we will consider how to provide additional technical assistance and guidance to states on how to report AFCARS data accurately and consistent with existing policy, and also consider whether additional policy is necessary. We note that in defining AFCARS data elements and guidance, CB has intentionally considered the range of states' child welfare practices, authorities, and responsibilities. For example, the issue of whether a child placed with a relative is reported as in foster care to AFCARS depends in part on whether the state child welfare agency has placement and care responsibility for the child and not whether the child is residing in his or her own home. We want all states to understand and apply AFCARS reporting populations, data element definitions, and other related guidance consistently. However, the application of that guidance will reflect the unique aspects of a state's foster care program and population.
Well-being indicators. One organization recommended that CB improve well-being metrics used in the CFSR. Particular suggestions included tracking states' implementation of provisions of the Fostering Connections to Success and Increasing Adoptions Act of 2008 (Public Law 110-351) related to health, including that children in foster care receive health screenings, have up-to-date health information and records, and that states have processes for health oversight plans, including monitoring children's use of psychotropic medications. Another suggestion was for CB to work with the Centers for Medicare and Medicaid Services (CMS) and the National Collaborative for Innovation in Quality to develop effective well-being measures.
CB focuses on how states are providing for children's well-being needs in the CFSR even though we do not have data elements in AFCARS or NCANDS that support the development of meaningful statewide data indicators relevant to child well-being at this time. Through the onsite review component of the CFSR, CB examines whether the state has appropriately assessed a child's health (including dental) and mental health needs, and if applicable, whether the state also identified and managed any health and mental health issues by facilitating the provision of the necessary services for all children in foster care and applicable children receiving services in their own homes. In the evaluation, we consider whether the state conducted initial and periodic health/mental health screenings for the child, the presence or absence of up-to-date health information, and oversight of medications, if applicable. More information on the particular assessment questions in the onsite review can be found in the CFSR Onsite Review Instrument.[6] CB has described some of our efforts to focus on child well-being issues in an issuance in 2012.[7] CB will continue to work in collaboration with CMS and other appropriate partners to strengthen our ability to support states in measuring and ensuring positive outcomes in these areas.
Framing indicators in a positive direction. There were several comments along the theme of reframing some of the indicators so that they were stated positively. For example, one commenter suggested that the re-entry indicator be renamed "permanency maintained" and that its calculation be positively framed so that the denominator includes children exiting care to permanency and the numerator includes those who do not re-enter. Regarding placement stability, two commenters noted that although the indicator nomenclature is positively stated as placement stability, the description clarifies that the indicator itself is calculated negatively as placement instability. These commenters suggested switching the numerator and denominator so that the indicator could be expressed in a positive fashion. CB chose not to revise the indicators or their descriptions in this way. First, communicating these indicators can be challenging, and reversing the direction of an indicator makes it less intuitive and more complicated to measure and communicate. Second, maintaining these indicators as described allows us to remain consistent with the concepts as measured during prior CFSR rounds, promoting greater ease of use. In other cases, the measures simply cannot be reversed. As such, we are keeping the indicators framed as described.
Applicability to particular populations. We received comments of concern about how the data indicators were perceived to apply to specific groups of children. One organization sought additional consultation with Indian tribes on the data indicators and revisions to round 3 overall to inform our thinking on applicability to Indian children. CB conducted in-person consultation with Indian tribes in 2011 regarding improvements in the CFSR in the areas of data use and performance monitoring overall. We used this feedback, in conjunction with feedback from states and other stakeholders, in revising round 3 and the data indicators. However, we understand the need to further engage Indian tribes in meeting the needs of Indian children, particularly those in state custody. In addition to reinforcing with states the importance of engaging and collaborating with Indian tribes throughout the CFSR process, CB will work directly with Indian tribes and organizations that advocate on behalf of Indian children to ensure that Indian tribes are informed about the CFSRs and the opportunities to participate in them.
We also received comments of concern about how data indicators can miss how states are performing with regard to Native American children, LGBTQ populations, and older youth. We also heard concerns that state results on such indicators could be used as justification for the state to focus its attention on other groups of children or avoid work in accordance with best practices for such populations. We understand that the data indicators are limited and provide generalized information about a state's performance. CB is committed to consulting with states to understand what their statewide performance is or is not revealing about their programs, practice, and results for the particular populations of children served by the state. Although the assessment of the state's performance on national indicators is part of our monitoring efforts, it must be paired with a state analysis of cases reviewed during the onsite review and other data or information that the state has at its disposal to better understand the experience of children involved in the child welfare system.
National Standards and State Performance
We have set the national standard at the national observed performance for each of the seven indicators.
For indicators in which the outcome for a child either occurred or did not occur, the standard is calculated as the number of children in the nation experiencing the outcome divided by the number of children in the nation eligible for, and therefore at risk of, the outcome. This is the case for the indicators that measure permanency (for all cohorts) in 12 months, re-entry to foster care in 12 months, and recurrence of maltreatment. The result of the calculation is a proportion. However, we present the standard as a percentage by multiplying the proportion by 100.
For indicators in which the outcome for a child is a count per day in care, the standard is calculated as the sum of counts for all children in the nation divided by the sum of days these children were in care. This is the case for the indicators for placement stability (moves per day in care) and maltreatment in foster care (number of victimizations per day in care). The result of the calculation is a rate. We are multiplying the rates to yield more understandable numbers: for placement stability, by 1,000 to yield a rate of moves per 1,000 days; and for maltreatment in foster care, by 100,000 to give a rate of victimizations per 100,000 days in care.
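For illustration only, the two calculations described above can be sketched in Python as follows; the function and parameter names are illustrative, and the national counts themselves are not shown here.

```python
def national_standard_percentage(children_with_outcome: int, children_at_risk: int) -> float:
    """National standard for the percentage-based indicators (permanency for
    all cohorts, re-entry, recurrence of maltreatment): the national
    proportion expressed as a percent."""
    return children_with_outcome / children_at_risk * 100

def national_standard_rate(event_count: int, national_care_days: int, scale: int) -> float:
    """National standard for the per-day indicators: events per `scale` days
    in care (scale = 1_000 for placement stability, 100_000 for
    maltreatment in foster care)."""
    return event_count / national_care_days * scale
```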
The following table shows the national standards for each indicator.
Table 1—National Standards for CFSR Round 3 Statewide Data Indicators
Statewide Data Indicators for Safety Outcome 1 (National Standard):
Maltreatment in Foster Care: 8.04 victimizations per 100,000 days in care.
Recurrence of Maltreatment: 9.0%.

Statewide Data Indicators for Permanency Outcome 1 (National Standard):
Permanency in 12 Months for Children Entering Foster Care: 40.4%.
Permanency in 12 Months for Children in Foster Care 12 to 23 Months: 43.7%.
Permanency in 12 Months for Children in Foster Care 24 Months or More: 30.3%.
Re-Entry to Foster Care in 12 Months: 8.3%.
Placement Stability: 4.12 moves per 1,000 days in foster care.

Public Comment and CB Response: Some commenters stated that using the national observed performance as the national standard for state performance was an improvement over CFSR round 2. A few others argued that states should be held to higher standards, believing that was consistent with legislative intent in requiring "substantial conformity" with federally mandated state plan requirements.
As we considered how to set national standards, we attempted to balance the need for standards that were ambitious yet feasible. We also were mindful of the states' collective historical performance and our historical expectations of substantial conformity. As we noted in the prior document, we believe that the national observed performance is a reasonable benchmark and would appropriately challenge states to improve their performance.
Some commenters urged us to allow states to be measured against their own performance rather than using a national comparison due to the disparate ways states across the country conduct child welfare activities. Although we acknowledge that there are disparities in child welfare activities in the states, we believe it is appropriate for CB to set consistent expectations for states' performance in their title IV-B and IV-E programs. We also note that the regulation that governs CFSRs requires that we determine substantial conformity based in part on national standards rather than state-specific benchmarks (45 CFR 1355.31(a) and (b)). CB has, however, set improvement goals based on how each state has performed historically.
Multi-level modeling approach. State performance on each statewide data indicator will be assessed using a multi-level (i.e., hierarchical) model appropriate for that indicator. A multi-level logistic regression model will be used for indicators in which the outcome for a child either occurred or did not occur. A multi-level Poisson regression model will be used for indicators in which the outcome is a count per unit of time. We chose multi-level modeling because it is a widely accepted statistical method that enables fair evaluation of relative performance among states with different case mixes. The multi-level model that we employ when assessing each state's performance takes into account: (1) The variation across states in the age distribution of children served for all indicators, and the state's entry rate for select indicators (risk adjustment); (2) the variation across states in the number of children they serve; and, (3) the variation in child outcomes between states. The result of this modeling is a performance value that is a more accurate and fair representation of each state's performance than can be obtained by simply using the state's observed performance.
Public Comments and CB Response: No specific comments were received on using a multi-level approach.
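Although the precise model specifications appear in CFSR Technical Bulletin #8, the deliberately simplified sketch below (in Python, with simulated data and without the state-level random effects of the actual multi-level models) illustrates the two model families named above: a logistic regression for outcomes that either occurred or did not occur, and a Poisson regression with days in care as the exposure for count-per-day outcomes. All variable names and data are hypothetical.

```python
# Simplified, single-level sketch of the two model families used for the
# indicators. The actual CFSR models are hierarchical, adding a state-level
# random effect so that states of different sizes and case mixes are compared
# fairly; see CFSR Technical Bulletin #8 for the specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(0, 18, n),               # risk adjuster
    "permanent_in_12mo": rng.integers(0, 2, n),  # yes/no outcome
    "moves": rng.poisson(1.5, n),                # count outcome
    "days_in_care": rng.integers(8, 365, n),     # exposure (days of care)
})

X = sm.add_constant(df[["age"]])

# Logistic regression for outcomes that either occurred or did not occur.
logit_fit = sm.GLM(df["permanent_in_12mo"], X, family=sm.families.Binomial()).fit()

# Poisson regression with days in care as the exposure for count-per-day outcomes.
pois_fit = sm.GLM(df["moves"], X, family=sm.families.Poisson(),
                  exposure=df["days_in_care"]).fit()

print(logit_fit.params)
print(pois_fit.params)
```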
Risk Adjustment. We will risk adjust on the child's age for each indicator (depending on the indicator, this is the child's age at entry, at exit, or on the first day of the period). See Attachment A for details on risk adjusters. We will also risk adjust on the state's foster care entry rate for two indicators: Permanency in 12 months for children entering foster care and re-entry to foster care in 12 months. Adjusting on age allows us to control statistically for the fact that children of different ages have different likelihoods of experiencing the outcome, regardless of the quality of care a state provides. Adjusting on foster care entry rate allows us to control for the impact of a state's case mix on the overall risk that children in that state have of experiencing the outcome. We use entry rate to account for the fact that states with lower entry rates tend to have children at greater risk for poor outcomes.
We use a separate “dummy” variable for each age when calculating the risk adjustment for age. Use of dummy variables is a common strategy in regression models to measure the impact of a characteristic on an outcome. A dummy variable has a value of 1 or 0 to indicate the presence or absence of the characteristic. For example, a child who entered care at age 2 will have a “1” for the “age 2” variable and a “0” for all others. For all but the first-day permanency indicators, 19 age dummy variables are used to represent the ages from birth to 3 months, 4 to 11 months, and each year from age 1 through 17. The first-day permanency indicator for children in foster care 12 to 23 months uses 17 age dummy variables (ages 1 through 17), and the first-day permanency indicator for children in foster care 24 months or more uses 16 age dummy variables (ages 2 through 17). The method requires specifying a base or reference age group, and for that we use the median age.
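The following sketch (in Python, with hypothetical ages and an arbitrarily chosen reference group for illustration; CB uses the median age of the cohort) shows the dummy coding described above for the 19 age groups used with most indicators.

```python
# Sketch of the age dummy coding: bin each child's age into 19 groups
# (0-3 months, 4-11 months, then years 1 through 17), create 0/1 indicator
# columns, and drop the reference group. Data and the chosen reference
# group ("age9") are hypothetical.
import pandas as pd

ages_in_months = pd.Series([2, 7, 30, 65, 140, 210])   # hypothetical children

labels = ["0-3mo", "4-11mo"] + [f"age{y}" for y in range(1, 18)]
edges = [-0.5, 3.5, 11.5] + [12 * y + 11.5 for y in range(1, 18)]

age_group = pd.cut(ages_in_months, bins=edges, labels=labels)

# One 0/1 column per age group; drop the reference group.
dummies = pd.get_dummies(age_group).drop(columns="age9")
print(dummies.head())
```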
We calculate the entry rate as the number of children entering foster care during the 12-month period divided by the number of children in the state's child population, multiplied by 1,000. We obtain the child population data from the population division of the U.S. Census Bureau.[8] These Census data reflect population estimates as of July 1st of each year, whereas the 12-month periods CB uses to define children entering care run either from October to September or from April to March. Therefore, we use the Census year closest to the 12-month period in which the child entered foster care as the denominator. For example, if the indicator follows children who entered care between April 1, 2011 and March 31, 2012 (an “11B/12A” file in AFCARS file conventions), we use child population estimates from July 2011. If the 12-month period spanned October 1, 2012 through September 30, 2013, we would use population estimates as of July 1, 2013.
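As an illustration, the sketch below (in Python, with hypothetical figures) computes an entry rate per 1,000 children and selects the Census year whose July 1 estimate falls within the 12-month period, consistent with the two examples above.

```python
# Sketch of the entry-rate calculation and Census year selection described above.
from datetime import date

def entry_rate(entries_in_period: int, child_population: int) -> float:
    """Children entering care per 1,000 children in the state's child population."""
    return entries_in_period / child_population * 1_000

def census_year(period_start: date, period_end: date) -> int:
    """Pick the July 1 Census estimate that falls within the 12-month period."""
    candidate = date(period_start.year, 7, 1)
    if not (period_start <= candidate <= period_end):
        candidate = date(period_start.year + 1, 7, 1)
    return candidate.year

# Hypothetical state: 6,500 entries and 1.3 million children.
print(entry_rate(6_500, 1_300_000))                        # 5.0 per 1,000
print(census_year(date(2011, 4, 1), date(2012, 3, 31)))    # 2011
print(census_year(date(2012, 10, 1), date(2013, 9, 30)))   # 2013
```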
After we perform all the calculations in the model, the result will be the state's risk standardized performance. The risk standardized performance is the ratio of the number of predicted outcomes to the number of expected outcomes, multiplied by the national observed performance. For details on how the predicted and expected outcomes are calculated, please consult CFSR Technical Bulletin #8.
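The minimal sketch below (in Python, with hypothetical predicted and expected counts) shows the calculation just described; the predicted and expected outcomes themselves come from the multi-level model documented in CFSR Technical Bulletin #8.

```python
# Risk standardized performance = (predicted / expected) x national observed performance.

def risk_standardized_performance(predicted: float, expected: float,
                                  national_observed: float) -> float:
    return predicted / expected * national_observed

# A hypothetical state predicted to achieve 1,050 permanency exits where 1,000
# were expected, against the 40.4% national observed performance from Table 1.
print(risk_standardized_performance(1_050, 1_000, 40.4))   # 42.42
```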
Public Comments and CB Response: Public comments expressed general support for risk adjustment, but many commenters requested more information, explanation, and transparency in order to understand and comment on the concept. To address the issue of transparency, we have provided more detail in this document and precise explanations of the methodology in CFSR Technical Bulletin #8. Additionally, we understand that risk adjustment adds complexity to understanding state performance, and so we decided as a matter of policy to employ it judiciously in this round of reviews and to use only those variables that had wide support from the field and were statistically significant.
Commenters offered numerous suggestions for possible risk adjustment variables, with the most frequently mentioned being child's age, foster care entry rate, and whether states included juvenile justice youth in their child welfare systems. Other variables the field proposed include: The length of time from the date of a report to the date of disposition, the state's screen-in rate, how child maltreatment is defined statutorily, the degree to which states serve mental health populations and adolescents with behavior problems, poverty, parent factors and children's individual risk factors such as sibling group or severe disabilities.
CB considered and tested age as a risk adjuster for all indicators and found it to be statistically significant, so we are including it as a variable for all indicators. We considered and tested whether the state's foster care entry rate should be used for permanency in 12 months for children entering foster care, re-entry to foster care in 12 months, and placement stability. We found that the foster care entry rate was statistically significant for permanency in 12 months for children entering foster care and re-entry to foster care in 12 months, and we are using it for those two indicators. We found that foster care entry rates were not statistically significant for placement stability. We did not consider using the foster care entry rate as an adjuster for the two permanency indicators for children in foster care on the first day. This is because children in foster care on the first day of the period will include children who entered in various years, and therefore an entry rate using data from a single year may not adequately reflect the experience of every child followed in the indicator. For a similar reason, entry rate was not considered for the maltreatment in foster care indicator. This indicator is based on children in foster care during a 12-month period. Although this indicator includes children who entered during the 12-month period, it also includes children who were in foster care on the first day of the period whose entry could have occurred at any point in the past.
For the recurrence of maltreatment indicator, we considered as a risk adjuster the state's screen-in rate, defined as the number of referrals the state screens in per 1,000 children in the child population. However, we decided against using this adjustment because its impact on the outcome is unclear and it may have unintended consequences. States' child protective services policies are still in considerable flux, especially with the varied implementation of differential response and structured decision-making. These and other policies that states are implementing may affect screen-in rates in unclear ways, so it would be challenging to explain what the adjustment is doing. We believe more research on the impact of adjusting on screen-in rates is needed before incorporating such an adjustment into the CFSRs.
Despite the call by some commenters to risk adjust for demographic variables, a few commenters argued that doing so could unintentionally relieve providers of their responsibility to work diligently to reunify vulnerable populations. Further, these commenters noted that child welfare agencies have a moderate degree of influence over the nature and adequacy of the services being provided to these populations and that adjusting for demographic variables could mask the disparate negative experiences of higher-risk populations. CB believes the limited use of risk adjustment at this time mitigates some of the concerns expressed in these comments. CB would also like to note that states are still encouraged to examine observed performance for children by age, sex, race, and other demographic variables. This level of analysis will help uncover disparities in outcomes for certain populations based on their demographics.
Many of the suggested risk adjustment variables related to the programmatic aspects of the state's child welfare program, such as whether the state child welfare agency serves youth who are involved in the juvenile justice system. Some commenters offered alternative approaches to risk adjustment including focusing on systemic and environmental variables at the state level. We note that state program features are not readily identifiable in the administrative data that states submit to CB at this time. However, risk adjusting on additional state-level variables is an important area of research, and CB encourages researchers to continue to explore the challenges and advantages of implementing such risk adjustment in child welfare.
Some commenters offered alternative approaches to risk adjustment that involved dividing some of the data indicators by sub-populations. CB considered this approach, as stratifying performance by sub-populations is a useful strategy for seeing how outcomes vary for children from different backgrounds and experiences. However, in the context of the CFSR, we chose not to pursue it because of the unmanageable set of indicators it would produce. For example, if we grouped child age into five groups, as is commonly done, and had separate indicators for each age group, the result would be 35 indicators (7 indicators by 5 age groups) based on age alone, and presumably 35 separate national standards, and so forth. Instead, we chose to implement a risk adjustment strategy that is widely practiced and can incorporate multiple risk adjustment variables into a single outcome.
Some commenters questioned whether CB would provide risk adjusted information to local jurisdictions that would likely need to be responsible for implementing changes based on the state's performance on the indicators. We note that these same models could be implemented at the state level, using the county (instead of the state, as CB is doing) as the focus of analysis. Details about technical assistance available for states interested in performing similar analyses are forthcoming, as are further details on the information that will be available to states in data profiles as we finalize them.
A commenter requested clarity on the consequences for program improvement if a state's observed score meets the national standard, but the state's risk adjusted performance does not. In this situation CB will still require the state to enter into program improvement. This is because the state's observed performance is not the most precise measure of the state's performance after considering its case mix and size in the context of the performance of other states with similar case mixes.
Categorizing State Performance relative to the National Standards: A state's risk standardized performance can be compared directly to the national observed performance to determine if the state's risk standardized performance is statistically higher or lower than the national observed performance. To make this assessment, CB calculates approximate 95% interval estimates around each state's risk standardized performance. For details on how these interval estimates are calculated, see Technical Bulletin #8. CB will compare each state's interval estimate to the national observed performance, and assign each state to one of three groups:
- “No different than national performance” if the 95% interval estimate surrounding the state's risk standardized performance includes the national observed performance.
- “Higher than national performance” if the entire 95% interval estimate surrounding the state's risk standardized performance is higher than the national observed performance.
- “Lower than national performance” if the entire 95% interval estimate surrounding the state's risk standardized performance is lower than the national observed performance.
Whether it is desirable for a state to be higher or lower than the national performance depends on the indicator. For the indicators assessing permanency in 12 months for the three cohorts, a higher value is desirable. For these indicators if the state's risk standardized performance is “lower than national performance” we will consider the state not to have met the national standard and will require program improvement. For the remaining indicators, a lower value is desirable. If a state's risk standardized performance is “higher than the national performance” for these indicators, we will consider the state not to have met the national standard and will require program improvement. For all indicators, we will consider states that are “no different than national performance” to have met the national standard and no program improvement will be required.
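The sketch below (in Python, with a hypothetical interval estimate) illustrates the three-way categorization and its directionality as described above.

```python
# Categorize a state's 95% interval estimate around its risk standardized
# performance against the national observed performance, then decide whether
# the national standard is met given the indicator's directionality.

def categorize(interval_low: float, interval_high: float, national: float) -> str:
    if interval_low > national:
        return "Higher than national performance"
    if interval_high < national:
        return "Lower than national performance"
    return "No different than national performance"

def meets_standard(category: str, higher_is_better: bool) -> bool:
    if category == "No different than national performance":
        return True
    return (category == "Higher than national performance") == higher_is_better

# Permanency in 12 months for children entering care (higher is better):
# hypothetical interval of 36.1%-39.5% against a national value of 40.4%.
cat = categorize(36.1, 39.5, 40.4)
print(cat)                                          # Lower than national performance
print(meets_standard(cat, higher_is_better=True))   # False -> program improvement required
```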
Public Comments and CB Response: A commenter requested clarification on whether the national standards will remain fixed over the course of the round. The national standard will remain fixed over round 3 of the CFSRs. However, there are situations in which a state's more recent data will be used to evaluate its performance relative to the standard. Due to the staggered schedule of CFSRs, some states will begin their onsite review one to three years after the establishment of the national standards and any initial assessment we provide of where states fall relative to the standards. Or a state may resubmit data for an earlier reporting period prior to its review. In preparation for these states' statewide assessments, CB will rerun the national model using the state's most current applicable data, but using the fixed data from the original reference population (i.e., the fixed data for all other states). This allows us to assess whether the state, given its most recent performance, would now meet the fixed national standard.
Sources and Data Periods: The datasets used for the national standard calculations depend on the indicator. Some indicators require more data periods than others. For example, the re-entry indicator requires six report periods of AFCARS data. This is because the cohort includes all children who enter foster care over a 12-month period; they are then followed for another 12 months to establish whether they have exited to permanency; and those who exit are followed for a subsequent 12 months after their exit to see if they re-enter foster care. Attachment A specifies the data periods that will be used for calculating the national standard for each indicator.
Monitoring Statewide Data Indicators in Program Improvement Plans
CB will require states that do not meet the national standard for an indicator to include improvement on that indicator in its program improvement plan. If we are unable to determine a state's performance on an indicator due to data quality issues, we will also require the state to include that indicator in its program improvement plan. Data quality levels that prevent CB from identifying a state's performance are described in the next section and are specified in Attachment C. For two of the statewide data indicators, permanency in 12 months for children entering foster care and re-entry to foster care, CB will determine performance for program improvement purposes on one indicator in concert with the other as a companion measure. The key components for setting improvement goals and monitoring a state's progress over the course of a program improvement plan involve calculating baselines, setting improvement goals, and when companion measures are included in an improvement plan, also establishing thresholds. CB will set improvement goals and thresholds in part relative to each state's past performance.
A state can complete its program improvement plan successfully with regard to the indicators by meeting its improvement goal and staying above the threshold for its companion measure, if applicable. The determination that the state has been successful can be made during the program improvement period or the non-overlapping data period. The non-overlapping data period follows the end of the program improvement plan and is the period in which CB is evaluating the state's resulting performance as evidenced in the data. Alternatively, CB can relieve a state of any further obligation to improve for CFSR purposes if the state meets the national standard for an indicator prior to or during the course of program improvement monitoring.
Companion Measures: If a state has a program improvement plan that includes improving on the indicator permanency in 12 months for children entering foster care, CB's determination of whether the state has improved successfully will take into consideration its performance on the re-entry to foster care indicator as a companion measure. Specifically, the state must not allow performance on the companion measure to worsen beyond a certain level relative to its baseline performance. Thresholds are established as the inverse of performance goals, to provide the bounds beyond which states must not worsen. For example, a state must stay below the threshold for the companion re-entry to foster care indicator as well as achieve its goal on the permanency in 12 months for children entering foster care indicator to successfully complete the program improvement plan. The reverse is also true: if a state must improve on the re-entry to foster care indicator in its program improvement plan, it must not fall below the threshold established for permanency in 12 months for children entering foster care. For details about threshold calculations, please see the section below and CFSR Technical Bulletin #8.
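The following sketch (in Python, with hypothetical values and function names) illustrates the companion-measure logic described above; the actual goal and threshold calculations are specified in CFSR Technical Bulletin #8.

```python
# A state succeeds on the primary indicator only if it reaches its improvement
# goal AND does not cross the threshold set on the companion measure.

def pip_successful(primary_value: float, primary_goal: float,
                   companion_value: float, companion_threshold: float,
                   primary_higher_is_better: bool) -> bool:
    if primary_higher_is_better:
        # e.g., permanency in 12 months as primary (goal is a floor), with
        # re-entry to foster care as the companion (threshold is a ceiling).
        return primary_value >= primary_goal and companion_value <= companion_threshold
    # e.g., re-entry to foster care as primary (goal is a ceiling), with
    # permanency in 12 months as the companion (threshold is a floor).
    return primary_value <= primary_goal and companion_value >= companion_threshold

# Hypothetical: permanency goal of 42.3% is met, but re-entry rose above its
# 9.1% threshold, so the plan is not successfully completed on this indicator.
print(pip_successful(43.0, 42.3, 9.4, 9.1, primary_higher_is_better=True))  # False
```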
Public Comments and CB Response: Several commenters expressed strong support for the use of companion measures, but requested technical assistance to support states' work in translating these concepts and the calculations for thresholds. CB will work to provide states with clear explanations and visuals, within their data profiles and technical materials, of how the companion measures are calculated and can be interpreted. On the other hand, a commenter requested that we acknowledge that there could be no evidence or justification that one indicator contributed to the result of the other. CB was careful to select the companion measures because of the close connection between the practices underlying one and the other. CB has no plans to demonstrate for program improvement purposes that when a state increases its exits to permanency within 12 months and there is a subsequent increase in re-entry, there is a causal relationship between the two (or that decreased re-entries were caused by decreased exits to permanency). However, the goal is not to show causality; the concept is that if a state is unable to keep from getting markedly worse on the companion measure, it cannot be considered to have successfully improved on the primary indicator, as this indicates that something in the state's practices was problematic for the related area of permanency. It will always be incumbent on the state, working in concert with CB, to drill down into the data and assess its practice to understand whether, where, and how practices can be aligned to ensure that children's needs are met and that permanency is achieved timely and is long lasting.
State Baselines: CB will set the baseline for each statewide data indicator included in a program improvement plan at the state's observed performance on that indicator for the most recent year of available data at the beginning of the program improvement plan. However, just as there are multiple data periods used for the development of the national standards, multiple time periods are needed to evaluate the state's baseline performance at the time of the program improvement plan and then subsequently throughout the program improvement period. Since the CFSR review schedule is staggered, the applicable year or data periods used in establishing the baseline will vary. For example, a state that has an onsite review in April 2015 (FY 2015) and enters into a program improvement plan in September 2015 that includes the recurrence of maltreatment indicator would have its baseline calculated based on its performance in FY 2014. Since recurrence of maltreatment requires two years of NCANDS data, the applicable data periods would be FY 2013 and FY 2014.
Public Comments and CB Response: No comments were received on the proposal in this area and no changes were made.
State Improvement Goals and Thresholds: We will establish improvement factors for program improvement goals and thresholds (if applicable) for the data indicators based on the variability in a state's observed performance in the three most recent years of data. The improvement factor is applied to the state's observed performance, in the most recent year available at the start of the improvement plan, for each statewide data indicator needing improvement. Thresholds are calculated for companion measures and reflect levels of performance decline that the state must not cross if we are to consider the state to have successfully improved on the primary statewide indicator. Thresholds are simply the inverse of the improvement goals.
The resulting improvement goal or threshold may be limited or increased for a state based on minimum and maximum levels of improvement that we have set for each indicator. We will set the minimum and maximum improvement levels so that no state is required to improve by more than the amount of improvement at the 50th percentile, and all states engaged in a program improvement plan are to improve by at least the amount of improvement at the 20th percentile (or 80th percentile, depending on whether higher or lower performance is preferable on the indicator). We will then use these values to replace the improvement goal/threshold that would otherwise result. The technical details of the several steps we will take for these calculations are presented in CFSR Technical Bulletin #8, along with a full discussion of the methods chosen and our rationales for choosing them.
Table 2 provides the range of improvement factors for each statewide data indicator. If the state is required to improve on an indicator, the state will use its most recent year of observed performance as its baseline in determining the applicable improvement factor. For example, for the permanency in 12 months for children entering foster care indicator, improvement factors will be no lower than 1.035 and no higher than 1.057. If the state's own prior performance generates a value within that range, the state would use that value. For example, if the baseline was 40% and the state has to show the maximum improvement, it would simply multiply the baseline by 1.057 and obtain a goal of 42.28%.
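The sketch below (in Python) reproduces the worked example above, clamping a hypothetical state-specific factor to the minimum and maximum improvement factors shown in Table 2 for this indicator.

```python
# Apply an improvement factor to a state's baseline, bounded by the indicator's
# minimum and maximum factors. The raw, state-specific factor would be derived
# from the variability in the state's three most recent years of data (see
# CFSR Technical Bulletin #8); the 1.08 value below is hypothetical.

def improvement_goal(baseline: float, raw_factor: float,
                     min_factor: float, max_factor: float) -> float:
    lo, hi = sorted((min_factor, max_factor))     # handles factors above or below 1
    factor = min(max(raw_factor, lo), hi)         # clamp to the allowed range
    return baseline * factor

# Permanency in 12 months for children entering foster care, baseline of 40%:
# a raw factor above the range is capped at 1.057, yielding the 42.28% goal.
print(improvement_goal(40.0, 1.08, 1.035, 1.057))   # 42.28
```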
Table 2—Minimum and Maximum Improvement on the Statewide Data Indicators
Statewide data indicators for safety outcome 1 | Minimum | Maximum
Maltreatment in Foster Care | 0.922 | 0.849
Recurrence of Maltreatment | 0.953 | 0.910

Statewide data indicators for permanency outcome 1 | Minimum | Maximum
Permanency in 12 Months for Children Entering Foster Care | 1.035 | 1.057
Permanency in 12 Months for Children in Foster Care 12 to 23 Months | 1.040 | 1.074
Permanency in 12 Months for Children in Foster Care 24 Months or More | 1.034 | 1.080
Re-Entry to Foster Care in 12 Months | 0.912 | 0.867
Placement Stability | 0.953 | 0.912

Public Comment and CB Response: Some commenters expressed support for the program improvement methodology related to statewide data indicators as an overall concept. Such comments included support for the use of companion measures and thresholds as well as the use of historical performance as the basis for performance improvement targets. However, others commented that they were confused about the methods we proposed and that they would have difficulty explaining them to stakeholders. Commenters requested more explicit descriptions of how we will establish goals and thresholds and of the consequences for states whose performance drops below a threshold during program improvement.
Further, a number of commenters stated that there was not enough information in the original document to inform further comments, and they challenged a number of our chosen methods as technically inaccurate. These commenters noted concerns with establishing states' performance improvement goals based on only three data points; using four standard deviations as the distance required for improvement; employing Chebyshev's theorem; and how the application of these techniques could lead to states failing to meet the minimal level of improvement. As alternatives, commenters suggested the use of two standard deviations; relying upon available data, such as historical AFCARS and NCANDS data; applying the Empirical Rule rather than Chebyshev's theorem; and allowing performance goals to be mutually negotiated between states and ACF.
We made several changes in response to these comments. First, we have provided a more thorough explanation of our methods and the rationales for those methods in CFSR Technical Bulletin #8, as we believe it is important for states to see the full detail of our methods. We also took another look at the application of four standard deviations in developing the improvement factors, given the concerns about setting goals that were too large. After we conducted additional analysis of the resulting improvement factors, we agree with commenters that in some circumstances employing four standard deviations would result in more aggressive improvement factors than round 2, even when also setting minimum and maximum improvement expectations at the 80th and 20th percentiles. In response, we have adjusted the approach to use two standard deviations and to set the maximum improvement expectation for all states at the 50th percentile of all states' original improvement factors, when those factors are calculated for every state and ordered from highest to lowest.
Another commenter requested additional information on whether improvement goals and thresholds for the statewide data indicators can be negotiated. As was the case in the prior round, we have standardized the approach by establishing improvement factors that are applied to the state's baseline, and we are not negotiating the amount of improvement on the indicators. However, we will negotiate with a state how to design its program improvement approaches to attain the improvement goals. We will also still allow a state the opportunity during a program improvement plan to provide data that can be verified, reproduced, and otherwise approved by ACF as evidence that the state has attained the required improvement.
A commenter requested clarification on whether the same multi-level modeling and risk adjustment will be utilized in assessing a state's performance over time, to account for fluctuations in the state's population. When assessing a state's performance over time to determine whether the state meets its program improvement plan goals, we will not be using the same multi-level modeling and risk adjustment approach. We will be using the state's own observed performance on the indicators, regardless of changes in the state's population, to make these determinations.
Successful completion of program improvement relative to the indicators: Although not specifically outlined in our original proposal, we wanted to clarify that a state can complete its program improvement plan successfully with regard to the indicators in a couple of ways. One is by meeting its improvement goal and not exceeding the threshold for its companion measure, if applicable, at some point before the end of the program improvement monitoring. Alternatively, CB can relieve a state of any further obligation to improve for CFSR purposes if the state meets the national standard for an indicator prior to the approval of a program improvement plan or during the course of program improvement monitoring. This latter provision also means that a state need not meet a program improvement goal (by application of the improvement factor or the minimum or maximum improvement level) for an indicator if the state first meets the national standard for that indicator.
Data
Data Profiles: We will provide data profiles of state performance to each state before the state's CFSR on all seven of the statewide data indicators and other contextual data available from AFCARS and NCANDS. This data profile will assist the state to develop its statewide assessment and begin planning for program improvement, if appropriate. In addition, we will provide data profiles semi-annually to assist states in measuring progress toward the goals identified in the program improvement plan.
Public Comment and CB Response: Several commenters appreciated our commitment to providing data semi-annually, recognizing the importance of these data in preparing for CFSRs and improving practice on a more general basis. Several commenters requested specific categories of information that would be beneficial for continuous quality improvement activities. Requested information included disaggregated data for the statewide data indicators, a rate of placement that is not tied to federal performance standards, and indicators of juvenile justice case type and a child's Indian Child Welfare Act (ICWA) eligibility and status.
In CFSR Technical Bulletin #8 we have outlined the content of the data profiles that we will send to states so that they can evaluate their performance in completing the statewide assessment. We have also outlined our plans for data profile content that will be sent to states during program improvement, if necessary. We welcome continued input from states on the content of program improvement profiles that will support their analysis in developing strategies for improvement. However, we also encourage states to conduct analysis on any data available to the state, including data that is not submitted to CB such as juvenile justice case type and ICWA status, to inform their understanding of their performance and measure progress.
Data Quality: Excluding States From National Standards or State Performance
Setting national standards and measuring state performance on statewide data indicators for CFSR purposes rely upon states submitting high-quality data to AFCARS and NCANDS. Therefore, we will exclude states whose data quality issues exceed the established data quality limits from the model we use to calculate the national standard (i.e., the national observed performance) and to estimate states' risk adjusted performance.
Because errors in the data can misrepresent state performance, we made the decision to remove a state from the analysis entirely if it exceeds certain limits on the data quality checks. We reviewed state-by-state performance on each data quality item before establishing these limits. Because we do not want to be too strict and exclude a great number of states, we were conservative and set the limits high for common issues (e.g., 10% for dropped cases). However, some checks are critical to the calculations (such as a count of placements for the placement stability measure), and we set those limits a bit lower (5%) in order not to misrepresent state performance.
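As a simplified illustration (in Python, covering only a few of the Attachment D checks, all treated here as upper limits, and using hypothetical state percentages), the sketch below shows the state-level exclusion rule just described.

```python
# Exclude a state from the national standard calculation and from performance
# reporting on an indicator if it exceeds any applicable data quality limit.
# Limits shown are examples drawn from Attachment D; state rates are hypothetical.

LIMITS = {
    "dropped_cases": 0.10,            # >10% -> exclude
    "missing_date_of_birth": 0.05,    # >5%  -> exclude
    "missing_placement_count": 0.05,  # >5%  -> exclude (placement stability)
}

def exclude_state(state_rates: dict) -> bool:
    return any(state_rates.get(item, 0.0) > limit for item, limit in LIMITS.items())

print(exclude_state({"dropped_cases": 0.12, "missing_date_of_birth": 0.01}))  # True
print(exclude_state({"dropped_cases": 0.03, "missing_date_of_birth": 0.02}))  # False
```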
Data Quality: Case-Level Exclusions
For those states that do not exceed the data quality thresholds but still have identified data quality problems, we will include the state in the national standards calculations and measure state performance, but we will exclude child-level records with missing or invalid data on elements needed to determine the child's outcome and perform the risk adjustment. For example, if the risk adjustment for an indicator includes age at entry, a child whose age at entry cannot be determined (due to a missing date of birth) will not be included in the analysis. For each indicator, we will provide each state with a list of records that were excluded from the analyses.
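The following sketch (in Python, with hypothetical records and column names) illustrates the case-level exclusion just described.

```python
# Drop records missing a field needed to compute the outcome or the risk
# adjustment (here, date of birth, which determines age at entry), and keep
# a list of the excluded records to report back to the state.
import pandas as pd

records = pd.DataFrame({
    "record_number": [101, 102, 103],
    "date_of_birth": ["2009-04-02", None, "2013-11-20"],
    "date_of_latest_removal": ["2013-01-15", "2013-02-01", "2013-03-10"],
})

excluded = records[records["date_of_birth"].isna()]
included = records.dropna(subset=["date_of_birth"])

print(included["record_number"].tolist())   # records used in the analysis
print(excluded["record_number"].tolist())   # list provided back to the state
```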
Public Comments and CB Response: Two commenters expressed support for our approach to addressing data quality issues in estimating national standards and a state's risk adjusted performance. One commenter urged us to hold states responsible for producing “high-quality, consistent, and complete data,” pointing out that, through ACF's AFCARS Assessment Reviews, we have not found any state in the past 13 years to be in full compliance with the AFCARS standards. The other commenter commended us for recognizing that quality data is critical to assessing performance. Another commenter was concerned that the thresholds meant that the standards could not be considered national, while another wanted the thresholds raised to allow more states to either participate in the national standard calculations or have their state performance evaluated.
We concur with those commenters who believe that data quality standards are necessary to ensure the integrity of our performance assessment. We believe we have maintained an appropriate balance in setting data quality thresholds so as not to exclude states unreasonably. In terms of the national standards, relatively few states were excluded. For the permanency in 12 months indicators for the two first-day cohorts (children in foster care 12 to 23 months and 24 months or more), one state was excluded from the national standard calculation. For the permanency in 12 months entry cohort indicator, three states were excluded. For the re-entry to foster care, recurrence of maltreatment, and maltreatment in foster care indicators, four states were excluded. Six states were excluded from the calculation of the national standard for the placement stability indicator. We will continue to work with states that have their data excluded from the national standards or from the evaluation of state performance and advise them on how they can address the data quality issues in their systems.
Start SignatureMark Greenberg,
Acting Commissioner, Administration on Children, Youth and Families.
Attachment A: Statewide Data Indicators
Safety — Maltreatment in Foster Care
Measure description: Of all children in foster care during a 12-month period, what is the rate of victimization per day of foster care? For national standard calculation, uses AFCARS periods 2013A and 2013B and NCANDS FY2013 Child File.
Denominator: Of children in foster care during a 12-month period, the total number of days these children were in foster care as of the end of the 12-month period.(a)
Numerator: Of children in the denominator, the total number of substantiated or indicated reports of maltreatment (by any perpetrator) during a foster care episode within the 12-month period.(b)
Exclusions & notes:
—If a state provides incident dates, records with an incident date occurring outside of the removal episode will be excluded, even if report dates fall within the episode.
—Complete foster care episodes lasting <8 days are excluded.
—Any report that occurs within the first 7 days of removal is excluded.
—Victims age 18 or more are excluded, as well as youth in foster care at 18 or more. For youth who start out as 17 years of age and turn 18 during the period, any time in foster care beyond his/her 18th birthday is not counted in the denominator.
—Cases are matched across AFCARS and NCANDS using AFCARS ID.
Risk adjustment: Age at entry (for children entering) or age on first day of the 12-month period (for children already in care).

Safety — Recurrence of Maltreatment
Measure description: Of all children who were victims of a substantiated or indicated report of maltreatment during a 12-month period, what percent were victims of another substantiated or indicated report of maltreatment within 12 months of their initial report? For national standard calculation, uses NCANDS FY 2012 and FY 2013 Child Files.
Denominator: Number of children with at least one substantiated or indicated report of maltreatment in a 12-month period.
Numerator: Number of children in the denominator that had another substantiated or indicated report of maltreatment within 12 months of their initial report.
Exclusions & notes:
—Relies primarily on the report date to determine whether the maltreatment occurred in the first 12-month period; therefore, if a case does not reach disposition until the following 12-month period but has a report date in the first, we include it.
—If subsequent report is within 14 days, we do not count it.
—If incident date indicates that two reports refer to the same incident, we do not count it.
—If report date is prior to the first 12 months, we exclude it.
—Youth age 18 or more are excluded from the measure.
Risk adjustment: Age at initial victimization.

Permanency — Permanency in 12 Months for Children Entering Foster Care
Measure description: Of all children who enter foster care in a 12-month period, what percent discharged to permanency within 12 months of entering foster care?(c) For national standard calculation, uses AFCARS periods 2011B through 2013A.
Denominator: Number of children who enter foster care in a 12-month period.
Numerator: Number of children in the denominator who discharged to permanency within 12 months of entering foster care and before turning 18.
Exclusions & notes:
—Children in foster care <8 days are excluded.
—Children who enter foster care at age 18 or more are excluded.
—Trial home visit adjustment is applied.
Risk adjustment: Age at entry; state's foster care entry rate.

Permanency — Permanency in 12 Months for Children in Foster Care 12-23 Months
Measure description: Of all children in foster care on the first day of a 12-month period who had been in foster care (in that episode) between 12 and 23 months, what percent discharged from foster care to permanency within 12 months of the first day of the 12-month period? For national standard calculation, uses AFCARS periods 2013B and 2014A.
Denominator: Number of children in foster care on the first day of a 12-month period, who had been in foster care (in that episode) between 12 and 23 months.
Numerator: Number of children in the denominator who discharged from foster care to permanency within 12 months of the first day of the 12-month period and before turning 18.
Exclusions & notes:
—Children age 18 or more on the first day of the 12-month period are excluded.
—Trial home visit adjustment is applied.
Risk adjustment: Age on first day.

Permanency — Permanency in 12 Months for Children in Foster Care 24 Months or More
Measure description: Of all children in foster care on the first day of a 12-month period, who had been in foster care (in that episode) for 24 months or more, what percent discharged to permanency within 12 months of the first day of the 12-month period? For national standard calculation, uses AFCARS periods 2013B and 2014A.
Denominator: Number of children in foster care on the first day of a 12-month period, who had been in foster care (in that episode) for 24 months or more.
Numerator: Number of children in the denominator who discharged from foster care to permanency within 12 months of the first day of the 12-month period and before turning 18.
Exclusions & notes:
—Children age 18 or more on the first day of the 12-month period are excluded.
—Trial home visit adjustment is applied.
Risk adjustment: Age on first day.

Permanency — Re-Entry to Foster Care in 12 Months
Measure description: Of all children who enter foster care in a 12-month period, who discharged within 12 months to reunification, live with relative, or guardianship, what percent re-enter foster care within 12 months of their discharge?(a) For national standard calculation, uses AFCARS periods 2011B through 2014A.
Denominator: Number of children who enter foster care in a 12-month period and discharged within 12 months to reunification, live with relative(s), or guardianship.
Numerator: Number of children in the denominator who re-enter foster care within 12 months of their discharge.
Exclusions & notes:
—Children in foster care <8 days are excluded.
—Children who enter or exit foster care at age 18 or more are excluded.
—If a child has multiple re-entries within 12 months of their discharge, only the first re-entry is selected.
Risk adjustment: Age at exit; state's foster care entry rate.

Permanency — Placement Stability
Measure description: Of all children who enter foster care in a 12-month period, what is the rate of placement moves per day of foster care? For national standard calculation, uses AFCARS periods 2013B and 2014A.
Denominator: Of children who enter foster care in a 12-month period, the total number of days these children were in foster care as of the end of the 12-month period.(d)
Numerator: Of children in the denominator, the total number of placement moves during the 12-month period.(e)
Exclusions & notes:
—Children in foster care <8 days are excluded.
—Children who enter foster care at age 18 or more are excluded. For youth who enter at 17 years of age and turn 18 during the period, any time in foster care beyond his/her 18th birthday or placement changes after that date are not counted.
—The initial removal from home (and into care) is not counted as a placement move.
Risk adjustment: Age at entry.

Notes: The letters `A' and `B' are shorthand for the six-month AFCARS reporting periods. The `A' period spans October 1st-March 31st, and the `B' period spans April 1st-September 30th of any given year. The year always refers to the year in which the six-month period ends. For example, 2014A refers to the six-month period of 10/1/2013 through 3/31/2014.
(a) For example, if during the 12-month period there were two children in foster care, one child for 10 days (1st episode), the same child for 40 days (2nd episode), and the other child for 100 days (his only episode), the denominator would = 150 days (10+40+100).
(b) For example, if during the 12-month period there were two children in foster care, and one child had 3 substantiated or indicated reports and the other had 1 such report, the numerator would = 4 reports (3+1).
(c) If a child has multiple entries during the 12-month period, only the first entry in the 12-month period is selected.
(d) For example, if during the 12-month period two children entered care, one child for 10 days and the other child for 100 days, the denominator would be 110 days (10+100).
(e) For example, if during the 12-month period two children entered care, and one child had 3 moves and the other had 1 move, the numerator would = 4 moves (3+1).
Attachment B: Comparison of Data Measures—CFSR Round 2 and Round 3
Safety — Maltreatment in foster care
CFSR round 3 indicator: Of all children in foster care during a 12-month period, what is the rate of victimization per day(a) of foster care?
Comparable CFSR round 2 measure: Of all children in foster care during the reporting period, what percent were not victims of substantiated or indicated maltreatment by a foster parent or facility staff member?
How and why it's changed: In the CFSR 2 measure, counts of children not maltreated in foster care are derived by subtracting the NCANDS count of children maltreated by foster care providers from the total count of all children placed in foster care, as reported in AFCARS. Because of improved reporting by states, we now link AFCARS and NCANDS data using the child ID and determine if maltreatment occurred during a foster care episode, improving accuracy on the indicator. This also allows us to expand the measure to include all types of perpetrators (including, for example, parents) under the assumption that states should be held accountable for keeping children safe from harm while in the care of the state, no matter who the perpetrator is.

Safety — Recurrence of maltreatment
CFSR round 3 indicator: Of all children who were victims of a substantiated or indicated maltreatment allegation during a 12-month period, what percent were victims of another substantiated or indicated maltreatment allegation within the next 12 months?
Comparable CFSR round 2 measure: Of all children who were victims of a substantiated or indicated maltreatment allegation during the first 6 months of the reporting period, what percent were not victims of another substantiated or indicated maltreatment allegation within a 6-month period?
How and why it's changed: We will use a full 12-month period rather than only 6 months to capture the denominator, to create more stable estimates. We will also track children for another full 12 months to see if there is recurring maltreatment. The indicator also includes these changes: If the subsequent report is within 14 days, we will not count it. While the measure relies on the report date, we will also make use of the incident date, when available. If the incident date indicates that two reports refer to the same incident, we will not count it. Finally, youth age 18 or more are excluded from the measure.

Permanency — Permanency in 12 months for children entering foster care
CFSR round 3 indicator: Of all children who enter foster care in a 12-month period, what percent discharged to permanency within 12 months of entering foster care?
Comparable CFSR round 2 measure: Composite 1.3: Of all children entering foster care for the first time in a 6-month period, what percent discharged to reunification (or live with relative) within 12 months of entering foster care or by the time they reached 18?
How and why it's changed: We now count all types of permanency (reunification, live with relative, adoption, or guardianship) as having `met' the indicator. We also expanded the measure to include all children who entered foster care that year, not just those on their first removal episode. We also expanded the window of time for the entry cohort to a full year instead of 6 months; this will yield more stable estimates.

Permanency — Permanency in 12 months for children in foster care between 12 and 23 months
CFSR round 3 indicator: Of all children in foster care on the first day of a 12-month period who had been in foster care (in that episode) between 12 and 23 months, what percent discharged to permanency within 12 months of the first day?
Comparable CFSR round 2 measure: In CFSR Round 2, we looked at reunifications within 12 months as part of a measure within Composite 1, and we looked at adoptions in 24 months as part of Composite 2.
How and why it's changed: We add this cohort to allow children and youth who have already been in foster care between 1 and 2 years to be a focus for permanency as well. We expect this population to have a higher percentage of exits to adoption or guardianship than those entering care during the year.

Permanency — Permanency in 12 months for children in foster care for 24 months or longer
CFSR round 3 indicator: Of all children in foster care on the first day of a 12-month period who had been in foster care (in that episode) for 24 months or longer, what percent discharged to permanency within 12 months of the first day?
Comparable CFSR round 2 measure: Composite 3.1: Of all children in foster care on the first day of a 12-month period who had been in foster care (in that episode) for 2 or more years, what percent discharged to permanency within 12 months of the first day or by the time they reached 18?
How and why it's changed: Same measure; no change. The difference is that it is now evaluated on its own, rather than as just one part of a composite measure. We believe it is important to hold states accountable for getting those children and youth who have been in foster care for long periods of time to permanent homes.

Permanency — Re-entry in 12 months
CFSR round 3 indicator: Of all children who enter foster care in a 12-month period and discharged within 12 months to reunification, live with relative, or guardianship, what percent re-entered foster care within 12 months of their date of discharge?
Comparable CFSR round 2 measure: Composite 1.4: Of all children discharged from foster care to reunification or live with a relative in a 12-month period, what percent re-entered foster care in less than 12 months from the date of discharge?
How and why it's changed: The new indicator is limited to those children who entered foster care during the year, whereas the CFSR Round 2 measure counted all children who discharged to reunification or live with relative, regardless of when they entered foster care. The purpose of this focus is in keeping with the rationale that new interventions may best be monitored in an entry cohort. This indicator will also be used as a companion measure with permanency in 12 months, to ensure that states working to improve permanency rates in their entry cohort do not see worsening performance on rates of re-entry to foster care. We also expanded the denominator to allow discharges to guardianship, in an effort to capture more discharges to permanency. Exits to adoption are not included because they cannot be tracked reliably, as some states issue new child identifiers if a child who was previously adopted enters foster care.

Permanency — Placement stability
CFSR round 3 indicator: Of all children who enter foster care in a 12-month period, what is the rate of placement moves per day(b) of foster care?
Comparable CFSR round 2 measure: Composite 4.1: Of all children served in foster care during the 12-month period, what percent had two or fewer placement settings?
How and why it's changed: The proposed indicator controls for length of time in foster care, so we are looking at moves per day of foster care, rather than children as the unit of analysis. The rationale for using an entry cohort rather than all children served is that our analysis shows children entering foster care tend to move much more than those children/youth in foster care for longer periods of time, whose placements may have stabilized. In the CFSR Round 2 measure, moves that took place prior to the monitoring period were counted. Now we only count those moves that occur during the monitoring period. The initial placement is not counted. The CFSR Round 2 measure treated children who moved 2 times in an episode the same as children who moved 15 times; both were a failure to meet the measure. The new indicator counts each move, so it continues to hold states accountable for those children/youth who have already moved several times.

(a) The rate may be expressed per 100,000 days because it is such a rare event. Using this metric gives us larger numbers that are easier to communicate.
(b) The rate is expressed per 1,000 days to convert the rate to a metric that gives us larger numbers.
Attachment C: Data Elements Used for Statewide Data Indicators
For information regarding AFCARS data elements, refer to http://www.acf.hhs.gov/programs/cb/resource/afcars-tb1.
For information regarding NCANDS data elements, refer to http://www.ndacan.cornell.edu/datasets/pdfs_user_guides/178-NCANDS-child2012v1-User-Guide-and-Codebook.pdf.
Primary data elements required for calculation | Permanency in 12 months (all 3 indicators) | Re-entry to foster care in 12 months | Placement stability | Recurrence of maltreatment | Maltreatment in foster care
AFCARS FC Element #1: Title IV-E Agency (1) | X | X | X | NA | X
AFCARS FC Element #4: Record Number | X | X | X | NA | X
AFCARS FC Element #21: Date of Latest Removal | X | X | X | NA | X
AFCARS FC Element #23: Date of Placement in Current Foster Care Setting | NA | NA | X | NA | NA
AFCARS FC Element #24: Number of Placement Settings during this Removal Episode | NA | NA | X | NA | NA
AFCARS FC Element #56: Date of Discharge from FC | X | X | X | NA | X
AFCARS FC Element #58: Reason for Discharge | X | X | NA | NA | NA
NCANDS CF Element #4: Child ID | NA | NA | NA | X | NA
NCANDS CF Element #6: Report Date | NA | NA | NA | X | X
NCANDS CF Element #27: Child Maltreatment 1—Disposition Level (2) | NA | NA | NA | X | X
NCANDS CF Element #29: Child Maltreatment 2—Disposition Level | NA | NA | NA | X | X
NCANDS CF Element #31: Child Maltreatment 3—Disposition Level | NA | NA | NA | X | X
NCANDS CF Element #33: Child Maltreatment 4—Disposition Level | NA | NA | NA | X | X
NCANDS CF Element #34: Maltreatment death | NA | NA | NA | X | X
NCANDS CF Element #145: AFCARS ID | NA | NA | NA | NA | X
Optional Data Elements:
AFCARS FC Element #41: Current Placement Setting | X | NA | NA | NA | NA
NCANDS CF Element #146: Incident Date | NA | NA | NA | X | X
Additional Data Elements Required for Risk-Adjusted Analysis:
AFCARS FC Element #6: Child's Date of Birth | X | X | X | NA | X
NCANDS CF Element #14: Child Age | NA | NA | NA | X | NA
U.S. Census Bureau: Child Population, by State (Used to derive state foster care entry rates) | X (3) | X | NA | X | X
(1) The elements are numbered by their position in the flat ASCII files submitted by states to these reporting systems. These numbering schema are specific to the files utilized by ACYF. Files obtained through the National Data Archive on Child Abuse and Neglect (NDACAN) may have a slightly different order.
(2) Definition of ‘victim’ includes all children with a disposition level (for any of up to four maltreatments per child) of: a) Substantiated, or b) Indicated. These do not propose including differential response victims. Victims also include children who died as a result of maltreatment.
(3) Relevant to Permanency by 12 months for the entry cohort only.
Attachment D: Data Quality Items, Limits, and Applicable Measures
End Supplemental Information
Data quality item | Data quality limit | Maltreatment in foster care | Recurrence of maltreatment | Permanency in 12 months (all 3 indicators) & re-entry to foster care in 12 months | Placement stability
AFCARS—Cross File Checks:
Dropped cases | >10% | X | n/a | X | X
AFCARS IDs don't match from one period to next | >40% | X | n/a | X | X
AFCARS—Within-file checks:
Missing date of birth | >5% | X | n/a | X | X
Missing date of latest removal | >5% | X | n/a | X | X
Missing # of placement settings | >5% | n/a | n/a | n/a | X
Date of birth after date of entry | >5% | X | n/a | X | X
Date of birth after date of exit | >5% | X | n/a | X | X
Age at entry greater than 21 | >5% | X | n/a | X | X
Age at discharge greater than 21 | >5% | X | n/a | X | X
In foster care more than 21 years | >5% | X | n/a | X | X
Enters and exits care the same day | >5% | X | n/a | X | X
Exit date is prior to removal date | >5% | X | n/a | X | X
Missing discharge reason (exit date exists) | >5% | n/a | n/a | X | n/a
Percent of children on 1st removal | <95% | X | n/a | X | X
NCANDS Data—Cross File Checks:
Child IDs don't match across years | <1% | n/a | X | n/a | n/a
Child IDs match across years, but dates of birth and sex do not match | >5% | X | X | n/a | n/a
Some victims with AFCARS IDs should match IDs in AFCARS files | Y/N | X | n/a | n/a | n/a
Some victims have AFCARS IDs | <1% | X | n/a | n/a | n/a
NCANDS—Within-file checks:
Missing age | >5% | X | X | n/a | n/a
Note. If a state exceeds these specified limits, we will not calculate performance for the state on the indicator.
Footnotes
1. AFCARS collects case-level information from state and Tribal title IV-E agencies on all children in foster care and those who have been adopted with title IV-E agency involvement. Title IV-E agencies must submit AFCARS data to the Children's Bureau twice a year.
2. NCANDS collects child-level information on every child who receives a response from a child protective services agency due to an allegation of abuse or neglect. States report this data to the Children's Bureau voluntarily. In FFY 2013, all 50 states, the District of Columbia, and Puerto Rico submitted NCANDS data.
3. In particular, see the Child Welfare Policy Manual Section 1.2B.7, AFCARS, Data Elements and Definitions, Foster Care Specific Elements, Placements found at http://www.acf.hhs.gov/cwpm/programs/cb/laws_policies/laws/cwpm/index.jsp.
4. See the CFSR Onsite Review Instrument, Stability of Foster Care Placement (item 4) at https://training.cfsrportal.org/resources/3044.
5. U.S. Department of Health and Human Services, Child and Family Services Reviews Aggregate Report, Findings for Round 2 Fiscal Years 2007-2010. December 16, 2011. Located online at http://www.acf.hhs.gov/sites/default/files/cb/fcfsr_report.pdf.
6. See the CFSR Onsite Review Instrument, Physical Health of the Child and Mental/Behavioral Health of the Child (items 17 and 18). Available online at https://training.cfsrportal.org/resources/3044.
7. See for example ACYF-CB-IM-12-04, Promoting Social and Emotional Well-Being for Children and Youth Receiving Child Welfare Services. April 17, 2012. Available at http://www.acf.hhs.gov/sites/default/files/cb/im1204.pdf.
8. Population estimates can be downloaded from the U.S. Census Bureau's Web site at https://www.census.gov/popest/index.html.
[FR Doc. 2014-24204 Filed 10-9-14; 8:45 am]
BILLING CODE 4184-25-P