Guidelines and Tools

3A. Create the Foundation

Establish the Infrastructure for Organization and Governance

A collaborative entity, representative of all the agencies involved in data collection and sharing, as well as youth and their families, should govern the development of data sharing efforts.  The group’s membership should possess the authority to direct the development and approval of data sharing policy, products, and activities.  This collaborative governance entity should be identified or developed at a state or local level, or it could be a state-local hybrid, depending on what jurisdiction the data sharing development encompasses.  It may be that an existing collaborative entity can effectively serve this purpose.

The governance group sets the goals of the effort, identifies the questions that need to be answered, reviews existing databases, identifies needed data elements and additional databases, and conducts the data analyses.  The governance group must put into place a decision-making structure for data projects for program evaluation and for performance measurement.  Data sharing involving more than one agency requires that all parties contributing data are involved in deciding what performance measures to track and the contours of an evaluation effort, at a minimum to ensure that each delivers accurate data.  Dialogue about the important questions to be answered by the evaluation and performance measurement efforts may surface other issues that will improve collaboration and the effective use of data.  As Kumar advises, in developing integrated data systems that span agencies (see discussion at 3C below), the participating agencies must establish “charters” or agreements that address several issues, including the following:

  • Data sharing objectives;
  • Permitted and prohibited uses of the data;
  • Data interpretation guidelines;
  • Cross agency training;
  • The responsible stakeholders and methods for ensuring compliance with all applicable laws governing confidentiality; and
  • The responsible stakeholders and methods for monitoring, measuring and controlling the accuracy, timeliness, consistency and reliability of data. [1]

It is also important to determine which entity, agency or person on the governance group will be responsible for leading each distinct data sharing development effort.  For example, if a particular product will be developed – such as a guide, a data sharing agreement, or a data collection system – one of the participating entities should be designated to bring the group together, keep them on task to meet deadlines for the development of the product, and be responsible for product dissemination and its periodic update.

Identify the Goals for Data Collection and Sharing for Program Evaluation and/or Performance Measurement

It is critical to carefully identify data sharing goals to direct the development of specific policies as well as specific products.  Agencies within jurisdictions should first come to agreement as to why they are working on information sharing rather than moving too quickly to develop a product.

An example of a goal: “To develop aggregated data to measure the effectiveness of programs and practices designed to improve outcomes for juveniles.”

Review Existing Federal and State Laws

Prior to undertaking a data sharing project, a jurisdiction should set out the basic statutory framework of federal and state laws and policies on confidentiality and data sharing.  Policy and program personnel from each of the participating agencies and their legal counsel should create this framework to clearly explain the opportunities for and prohibitions on data sharing, and share it with the effort’s governance committee.  This process will help identify the gaps in a state’s existing statutes and policies for data sharing and what legislation, policies, and protocols are needed.  The governance committee can then draft proposed legislation, policies, and protocols to address those needs.

The Federal Law Overview section above provides an excellent starting point for understanding the framework of statutes and regulations affecting information and data sharing.  Additional time must be spent reviewing relevant state law.  A place to start in identifying relevant laws is with the titles of the state code, noting any that may relate to child welfare, juvenile justice, health care (physical & mental), substance abuse treatment, and privacy.  Once these titles are identified, a keyword search for terms such as “records,” “data,” “information,” “confidentiality,” “research,” and “security” will yield relevant provisions.  There should be an effort to identify statutes that may specifically mandate the development and parameters of any data sharing systems within the state.  This can help jurisdictions identify possible sources of data as well as requirements and protections detailed by law for relevant data sharing systems.
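
For teams that want to make the keyword review more systematic, a short script can scan downloaded state-code text for the terms listed above.  The sketch below is purely illustrative and assumes the statute text has been saved as plain-text files in a local folder; the folder name, file layout, and keyword list are assumptions to be adapted to your own state materials.

```python
# Illustrative sketch (not a required tool): scan downloaded state-code text
# files for the keywords suggested above and list where they appear.
# The folder name, file layout, and keyword list are assumptions.
import re
from pathlib import Path

KEYWORDS = ["records", "data", "information", "confidentiality", "research", "security"]

def find_relevant_provisions(code_dir: str) -> dict:
    """Return {keyword: [file:line: text]} for every .txt file in code_dir."""
    hits = {kw: [] for kw in KEYWORDS}
    for path in Path(code_dir).glob("*.txt"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for kw in KEYWORDS:
                if re.search(rf"\b{kw}\b", line, re.IGNORECASE):
                    hits[kw].append(f"{path.name}:{line_no}: {line.strip()}")
    return hits

if __name__ == "__main__":
    for keyword, lines in find_relevant_provisions("state_code_titles").items():
        print(f"\n== {keyword} ({len(lines)} hits) ==")
        for entry in lines[:5]:  # show only the first few hits per keyword
            print(" ", entry)
```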

Several jurisdictions have developed and populated a grid or matrix to organize and better understand the laws identified as relevant to information and data sharing.  For example, in Jefferson Parish, Louisiana, the information sharing project team developed an extensive matrix that provided an opportunity for better understanding the legal framework as well as a reference tool for personnel grappling with information and data sharing questions. 

Assess the Readiness of Information Technology Systems to Support the Project

To support data sharing development efforts, sites should identify who in each participating agency will be responsible for the access to, development of, and security of electronic data for data sharing. These individuals should either sit on the governance committee (or on the team dedicated to developing particular data sharing efforts) or establish a means of communication to keep each other apprised of these efforts.

The jurisdiction should conduct an inventory to identify where information about children and the decisions made about them are housed. This information is likely to be housed in multiple locations in public and private agencies and their corresponding information systems. The inventory should be organized by categories of information (i.e., demographic, assessment, treatment, case planning) noting where the information is kept and in what electronic format.
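
One lightweight way to record the inventory is a simple structured file listing each category of information, the agency and system that hold it, and the electronic format.  The sketch below is a minimal illustration under that assumption; the example rows, field names, and output file name are hypothetical.

```python
# Minimal sketch of an information inventory organized by category of
# information, as described above.  The example rows are hypothetical.
import csv
from dataclasses import dataclass, asdict

@dataclass
class InventoryEntry:
    category: str      # e.g., demographic, assessment, treatment, case planning
    agency: str        # agency or provider that holds the information
    system: str        # name of the information system or database
    file_format: str   # electronic format (SQL database, spreadsheet, documents, ...)

entries = [
    InventoryEntry("demographic", "Juvenile Court", "Case Management System", "SQL database"),
    InventoryEntry("assessment", "Behavioral Health Provider", "Screening Tool Exports", "spreadsheet"),
    InventoryEntry("case planning", "Probation Department", "Supervision Records", "electronic documents"),
]

# Write the inventory to a CSV file the governance group can review and extend.
with open("data_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0])))
    writer.writeheader()
    writer.writerows(asdict(e) for e in entries)
```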

Jurisdictions should determine the capacity of the participating agencies’ information systems to produce relevant data and to interact with each other, and develop a plan for the effective sharing of electronic data between agencies.  This may be accomplished by “assessing participating agencies’ data systems and systems infrastructure,” which may “include inventorying and identifying databases, operating systems, networks, software modeling tools and enterprise architecture tools used by participating JIS agencies.”[2]  Such a review may help to surface tools that are in use or available to support data sharing across agencies.

Finally, sites should conduct a technical business requirements assessment.  This requirements assessment involves reviewing and modernizing processes related to the information that the participating agencies are attempting to incorporate.  It is important to solicit input from the identified users on how this system will function, what events will trigger data sharing, and what mandates govern the protection and privacy of data.

Security requirements tied to data security and privacy laws must be included in the technical business requirements assessment. After assessing the threats to privacy and security, the collaborative can determine appropriate and effective safeguards to address those risks.  Administrative safeguards may include security clearance and pass codes, prohibiting attachment of unauthorized hardware to the system, and audits.[3]

The National Juvenile Information Sharing Initiative (NJISI) developed a Juvenile Information Sharing (JIS) Readiness Assessment.  The NJISI project and research team based the readiness assessment on the Governance Guidelines for Juvenile Information Sharing, an updated version of the guidelines originally published in 2006.  The JIS Readiness Assessment enables collaborations to conduct an objective assessment of the current state of their collaboration, infrastructure, resources, and context.  The JIS Readiness Assessment should be reviewed and then answered as a team exercise.  The final scores are based on specific areas addressed in the Governance Guidelines, including: Collaborative, Organization and Governance; Strategy and Planning; Technology and Tools; Evidence-Based Approaches; Law and Policy; and Disclosure and Consent.  The assessment contains self-explanatory instructions on how to complete it and scores itself automatically.  The collaborative should periodically conduct a reassessment to document progress.
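
Purely as an illustration of how a domain-scored readiness self-assessment can be tallied -- not the NJISI instrument’s actual scoring logic, which is built into the assessment itself -- the sketch below averages hypothetical team ratings within each Governance Guidelines area.  The 1-5 rating scale, item counts, and the flagging threshold are all assumptions.

```python
# Illustration only: a generic way to tally a domain-based readiness
# self-assessment.  This is NOT the NJISI instrument's actual scoring;
# the 1-5 scale, the ratings, and the averaging rule are assumptions.
from statistics import mean

# Each domain maps to the team's item ratings (1 = not in place, 5 = fully in place).
ratings = {
    "Collaborative, Organization and Governance": [3, 4, 2],
    "Strategy and Planning": [2, 3],
    "Technology and Tools": [4, 4, 3],
    "Evidence-Based Approaches": [2, 2],
    "Law and Policy": [3, 5],
    "Disclosure and Consent": [4, 3],
}

for domain, items in ratings.items():
    score = mean(items)
    flag = "needs attention" if score < 3 else "on track"
    print(f"{domain}: {score:.1f} ({flag})")
```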

Provide the Training

To ensure the success of a data sharing effort, the jurisdiction must provide training on law, policy, and protocol to all personnel involved. This training should include purpose, benefits, expected outcomes, policies, and protocols.  It should be provided both in the individual participating agencies and in cross agency training sessions, as appropriate. Training curricula should be developed in formats that are easily accessible for new personnel and can be readily updated as required.

3B. Develop a culture of accountability through the establishment of strong program evaluation and performance measurement systems for each of the participating agencies and their collaborative efforts.

As noted in the Introduction, program evaluation is designed to answer key questions about an intervention, such as:

  • What is the intervention designed to do?
  • How can we quantify whether or not it is doing it?
  • If we cannot quantify, how can we improve the intervention?

In addition to agency managers, the people who are impacted by the intervention (families and young people), as well as those who deliver the intervention (probation officers, case managers), need to be involved at some level in the selection of program evaluation questions.  These key stakeholders will not buy in to the results of evaluation or assessment of outcomes unless these processes reflect issues that are of concern to them.

A performance measurement system requires the stakeholders to establish the:

  • desired outcomes, both system and youth outcomes;
  • baseline measurements; and
  • systems to monitor the quality of practices and other factors that affect the achievement of outcomes, and to track the outcomes themselves.

In deciding what outcomes to track, think about these questions:

  • Will the answers to the questions guide us to solving a pressing problem?
  • Will the answers to the questions help us set priorities for action?
  • Will the answers speak to the concerns of youth involved in the juvenile justice system and their families?
  • If we had the answer, would it make a difference?  Would we do something differently? 

Another approach is to use the ABCDE Method of Setting Outcomes:

A – Audience: determine the exact population or target audience for whom the desired outcome is intended.

B – Behavior: what is to happen? Clearly and specifically state the expected behavior change, e.g. changes in knowledge, attitudes, skills, etc.

C – Condition: by when? What is the time frame for implementation and measurement of these outcomes?

D – Degree: how much change is expected? 

E – Evidence: how will the change be measured?  You should have at least one measure for each outcome and use the shortest measure possible.

3C. Refine existing databases, and develop any additional databases or other methodologies needed, to support program evaluation and performance measurement.

Identifying the data that already exists is an important exercise when undertaking performance measurement and program evaluation.  It is not always necessary to create new databases, particularly when conducting performance measurement.  The needed data may already be collected, in which case energies can be concentrated on the development of a performance measurement system (i.e., what are the target outcomes that we want to track, what and how much service activity is being delivered, what and how many youth complete those activities, and what are the youth outcomes?).
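
If the needed data is already on hand, the core performance measures are often simple counts and rates.  The sketch below illustrates the kinds of measures just described (service activity delivered, completions, and youth outcomes); the record structure, field names, and figures are hypothetical.

```python
# Sketch of basic performance measures computed from data an agency may already
# collect.  The record structure, field names, and values are hypothetical.
records = [
    # one record per youth referred to the intervention
    {"youth_id": 1, "sessions_delivered": 12, "completed": True,  "rearrested_12mo": False},
    {"youth_id": 2, "sessions_delivered": 4,  "completed": False, "rearrested_12mo": True},
    {"youth_id": 3, "sessions_delivered": 10, "completed": True,  "rearrested_12mo": False},
]

referred = len(records)
completers = [r for r in records if r["completed"]]
total_sessions = sum(r["sessions_delivered"] for r in records)
positive_outcomes = sum(not r["rearrested_12mo"] for r in completers)

print(f"Youth referred:              {referred}")
print(f"Service activity delivered:  {total_sessions} sessions")
print(f"Completion rate:             {len(completers) / referred:.0%}")
print(f"Completers not rearrested:   {positive_outcomes / len(completers):.0%}")
```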

Existing databases may provide the needed data for program evaluation.  But more often what will be needed are specialized databases or the inclusion of business intelligence software designed to extract and collect data for program evaluation or other analytics, i.e., to establish baseline measures, to measure the effectiveness of particular programs and treatment modalities, and to measure the corresponding short and long-term outcomes for youth.

The team should identify and inventory all existing databases – local and national -- that contain evaluation data on programs and services provided to youth in their jurisdiction, and outcome data for the specific youth who participate in those programs.   In conducting the inventory, ask the following questions:

  • What data exists on programs and services?
    • How accessible is it? How well maintained?
  • Are there databases that:
    • describe characteristics of targeted youth?
    • describe performance of work processes?
    • track delivery of services and programs?
    • track outcomes of youth receiving particular programs or services?

After conducting the inventory, the next question to ask is whether and how to create an integrated data system that links data about youth and families from different agencies.  An integrated data system provides data on the use of various services and interventions by youth and families and with what outcomes, and thus provides a more complete picture to decision-makers than any one agency database can paint.[4] 

Kumar identifies three main data integration approaches that are used in linking data across human services agencies:

  • Need-Based Data Integration.  In this approach, data matching – and the accompanying data preparation tasks – is undertaken to respond to a specific need or question.  Thus, for example, the providers of a certain intervention may want to assess whether youth who completed the intervention are now attending school.  This would require matching the provider’s data set to that maintained by the school system (a simplified sketch follows this list).  This approach may be best suited for a group of agencies that has never shared data before, in order to test capacities, assess the quality of the data, and build relationships prior to undertaking a more ambitious data sharing project.  The disadvantage to doing “one-off” projects is that they each require legal review and approval.
  • Periodic Data Integration.  In this approach, agencies implement a process of collecting, cleansing, and matching data at regular intervals as per a data sharing agreement developed by the participants and approved by the agencies’ legal counsel. This method provides regular matched data sets for analysis.   One drawback of this approach is that it may not provide real time data for case management with clients.
  • Continuous Data Integration.  Participating agencies maintain a shared data source that is continuously updated.  This contrasts with Periodic Data Integration, which only matches data at predetermined intervals.  The key advantage of this approach is that it makes matched data available for tracking services and outcomes on a macro level as well as for case management for clients.  But using such a database for work directly with clients (Category One information sharing) will require a layer of additional work – including consents, court orders, and legal reviews – that would not be required if the data was used simply for law, policy and program development (Category Two) and/or program evaluation and performance measurement (Category Three).  This approach also involves higher complexity and costs, and a data breach to a single integrated database can have greater repercussions as compared to agencies maintaining separate databases.
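
To make the need-based example above concrete, the sketch below matches a hypothetical provider data set against a hypothetical school enrollment data set on a shared identifier.  Real record linkage typically requires probabilistic matching on names and dates of birth, data preparation, and the legal review noted above; the field names and exact-match rule here are assumptions.

```python
# Simplified illustration of the need-based integration example above:
# matching an intervention provider's completion records against school
# enrollment records on a shared identifier.  Field names and the
# exact-match rule are assumptions; real projects usually need probabilistic
# matching and legal review.
provider_records = [
    {"youth_id": "A101", "completed_intervention": True},
    {"youth_id": "A102", "completed_intervention": True},
    {"youth_id": "A103", "completed_intervention": False},
]
school_records = [
    {"youth_id": "A101", "currently_enrolled": True},
    {"youth_id": "A102", "currently_enrolled": False},
]

enrollment_by_id = {r["youth_id"]: r["currently_enrolled"] for r in school_records}

completers = [r for r in provider_records if r["completed_intervention"]]
matched = [r for r in completers if r["youth_id"] in enrollment_by_id]
enrolled = [r for r in matched if enrollment_by_id[r["youth_id"]]]

print(f"Completers: {len(completers)}, matched to school data: {len(matched)}, "
      f"currently enrolled: {len(enrolled)}")
```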

Kumar also advises that jurisdictions need to address “a number of design and implementation issues…such as where integrated data would be stored, how data would be moved from the administrative data sources to the destination database, what methods would be used to match and link data about clients and service providers across data systems, what technologies would be used to deliver data to the decision makers, etc.”[5]  One key decision is what “data architecture” to utilize.  The following are three to consider:

  • Data Warehouse.  “A data warehouse collects data from multiple administrative systems, links them together, and stores them in a centralized repository.” [6]  This architecture is particularly useful for law, policy and program development (Category Two) and/or program evaluation and performance measurement (Category Three).
  • Federated Data.  “Federated data architecture allows data stored in [individual agencies’ data systems] to be dynamically extracted, linked, and presented to the user. This approach obviates the need to store all of the data in a shared database. Instead, federated data systems use specialized software to “expose” data in the transactional data systems to the authorized users or computer programs directly. When a user submits a data request, the federated data system decomposes the user’s query into a set of queries and dispatches them to the transactional data systems.” [7]  One advantage of this architecture is that it may be easier for individual agencies to ensure that they are complying with the legal requirements pertaining to client information, as the information stays in each agency’s database (a simplified illustration of this pattern follows this list).
  • Hybrid Approach.  Hybrid architectures combine elements of the data warehouse and federated data models.  For example, an architecture can use a data warehouse for Category Two and Three data sharing and a federated model for Category One information sharing. [8]
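
The sketch below shows, in very simplified form, the federated pattern quoted above: a single request is decomposed into per-agency queries and the results are merged for an authorized user, with no shared central store.  The agency names, query functions, and fields are assumptions, and the stand-in lookups return fixed values only for illustration.

```python
# Very simplified illustration of the federated pattern described above:
# a request is decomposed into per-agency queries and the results are merged,
# with no central repository.  Agency names, interfaces, and fields are
# assumptions; the lookups below return fixed stand-in values.

def query_probation(youth_id: str) -> dict:
    return {"supervision_level": "moderate"}            # stand-in for a real lookup

def query_school(youth_id: str) -> dict:
    return {"enrolled": True, "attendance_rate": 0.91}  # stand-in for a real lookup

AGENCY_QUERIES = {
    "probation": query_probation,
    "school": query_school,
}

def federated_lookup(youth_id: str, authorized_agencies: list) -> dict:
    """Dispatch the request to each authorized agency and merge the responses."""
    result = {"youth_id": youth_id}
    for agency in authorized_agencies:
        result[agency] = AGENCY_QUERIES[agency](youth_id)
    return result

print(federated_lookup("A101", ["probation", "school"]))
```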

3D. Establish processes for undertaking evaluation research on target populations, programs, and client outcomes in compliance with federal statutes and regulations including Institutional Review Board (IRB) approvals.

Whenever possible, design the evaluation and performance measurement framework at the same time as the intervention itself is developed; that way, the team can identify and collect the data that correspond to the effectiveness measures set from the start.  The team always wants to “mainstream” evaluation and performance measurement as part of the planning and management of the program, so that evaluation is not a separate, outside process but an integral part of the program’s operations.

Research that involves human subjects raises several ethical and legal issues.  For that reason, research with human subjects requires Institutional Review Board (IRB) approvals (see below), and such approvals are contingent on obtaining informed consent from the participants.

One important note: often researchers conduct program evaluation and/or performance measurement using data previously collected by agencies in the course of providing services to clients; the data was not originally collected for research purposes, and thus individuals were not asked to sign authorizations for such use.  As Stiles and Boothroyd point out, in such a situation it is critical to determine whether, under applicable laws, it is sufficient for the agency that holds the data to consent to its release to researchers, or whether the agency must obtain consent from the individuals whose data the agency possesses before it can be released for research purposes.[9]  (But note: as described in the Federal Law section of this Tool Kit, some federal laws specifically allow for the disclosure of certain information for research purposes without a signed authorization from the individuals to whom the information pertains.)  In either situation, researchers still need to obtain IRB approvals (see below) to work with the data, even though they are not having direct contact with the subjects of the data as part of the research.

Applicable Laws

The team must ensure that all data sharing is done in compliance with all applicable federal and state laws. The federal laws that must be considered include the Family Educational Rights and Privacy Act (FERPA) for school data, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) for sharing healthcare information, and the Public Health Service Act, which provides the basis for the federal drug and alcohol confidentiality regulations (42 CFR Part 2).

All research with human subjects requires Institutional Review Board (IRB) approvals.  Sites should also reference the Code of Federal Regulations (CFR) Title 45, Public Welfare, Part 46, Protection of Human Subjects, for guidance on compliance.  (See the sections “Obtaining IRB approvals” and “Obtaining consents for human subject research” below.)

Partnering with Universities

Jurisdictions can form alliances with academic and research institutions that can assist them in carrying out research to demonstrate the effectiveness of their programs and practices.  Such an alliance can serve the research interests of the academic institution and simultaneously provide resources to a particular jurisdiction to produce the desired evaluation research. The ideal arrangement for program evaluation or performance measurement is to have an entity external to the provider of the intervention conduct the evaluation or assessment.   

Obtaining IRB approvals

An Institutional Review Board, or IRB, is a committee that performs ethical review of proposed research.  Research universities and hospitals have IRBs to evaluate studies conducted by their faculty and students.

If the team is partnering with a university, it should be able to access the university’s IRB. The university researcher must obtain IRB approval before participating. There are three main types of IRB review:

  • Exempt.  Title 45 CFR Part 46 identifies different categories of research that involve no more than minimal risk to subjects as being exempt from the federal regulations for the Protection of Human Research Subjects. 
  • Expedited.  The federal regulations also identify categories of research presenting no more than minimal risk to human subjects that can be reviewed by a member of the IRB rather than the full board.
  • Full Board.  Research that involves more than minimal risk to human subjects must be reviewed by the full IRB. 

IRB training is often required for those working on research projects both with respect to the project’s design as well as actual implementation.   As part of the IRB process, the team must submit a protocol that includes:

  • A sampling plan.  This describes the methods by which individuals – the research subjects -- will be selected to participate in the research.
  • A measurement plan.  This describes the outcomes that the researchers will monitor and how they will track them. 
  • A list of all instruments to be administered to the subjects.  
  • A description of how the researchers will obtain informed consent from the subjects to participate in the research, including copies of the consent forms.
  • An assessment of risks and benefits to the subjects of participating in the research. 

Changes to any of these pieces must be submitted as an amendment for IRB review.  Approval typically lasts one year, and the project must seek annual review and approval. 

Obtaining consents for human subject research

A key IRB requirement is obtaining consent from human subjects to participate in the research project.  A variety of parental consent and youth assent forms are provided for your review. It is essential that these forms be regarded as samples and that they be modified to comply with statutory provisions in the particular jurisdiction in which they are to be used.  The team must also be aware of what is required to release information in general and what is required to release information for purposes of evaluation.

Users of the Tool Kit may consult and adapt the following parental consent forms, youth assent forms, and scripts to obtain informed consent/assent for research: 

  • Parental Consent Form, University of New Orleans, for the study “Factors Predicting Therapeutic Alliance in Detained Adolescents”
  • Louisiana State University Health Sciences Center, Youth Assent Form, for the study “Developing a Violence Risk Screening Tool”
  • The University of South Florida Institutional Review Board posts on its website the following templates for social and behavioral research:
    • Assent to Participate in Research:  Information for Persons under the Age of 18 Who Are Being Asked To Take Part in Research
    • Parental Permission to Participate in Social & Behavioral Research
    • Parental Permission to Participate in Research Involving Minimal Risk

3E. Establish each agency’s responsibility for data collection, including specifying the parameters of interagency data collection and sharing for the participating agencies and allocating costs.

This process is particularly important when the effectiveness of a program or practice is dependent on the combined efforts of multiple agencies. Participants will need to carefully plot what data will be needed from each agency to measure effectiveness, secure each data collection system, and set up a monitoring function to be sure that the data is reliably collected.  It is also important, early on, to allocate costs for data collection and sharing among the participating agencies or to procure dedicated funding for these purposes.

The Virginia Restricted Use Agreement from the Virginia Department of Education is an example of a data agreement to assist with the undertaking of research or data analysis projects that involve the participation of multiple agencies. This document helps to detail requirements for participation by setting out data definitions, purposes, uses of data, access to restricted data, information subject to the agreement, and disclosure, security, and retention of data.

Two memoranda from the MST Institute provide additional examples for organizations that are working collaboratively to share data for program evaluation and performance measurement. The first, Using MSTI Enhanced Website Data for Research Protocol/Policy, outlines standards and policies for using website data for research. The second, Policy and Procedures for Obtaining Access to MSTI Data on Teams, describes responsibilities for responding to requests for data from outside organizations and contains a sample permission form to share documentation and data.

3F. Establish quality control and accountability for data collection and sharing.

To achieve the collection and sharing purpose(s), the data that is accessed and used by the initiative must be accurate and complete. The data quality and data integrity policies and procedures developed by the initiative must be practiced and continually monitored for accuracy by all participating agencies that share data sets. These policies and procedures should address training, data validation, data updates, accuracy/timeliness, and quality assurance of data inputs and outputs. It is the responsibility of the agencies and their agency data stewards to ensure the quality and accuracy of the data within their systems. [10]
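
Automated validation checks are one way to operationalize this monitoring.  The sketch below flags missing or stale values in a hypothetical extract before it is shared; the field names, thresholds, and example records are assumptions.

```python
# Sketch of automated data quality checks a data steward might run on an
# extract before sharing it.  Field names, thresholds, and records are
# assumptions.
from datetime import date, timedelta

records = [
    {"youth_id": "A101", "dob": date(2008, 5, 2), "last_updated": date(2024, 1, 10)},
    {"youth_id": "A102", "dob": None,             "last_updated": date(2022, 3, 1)},
]

STALE_AFTER = timedelta(days=365)

def validate(record: dict, as_of: date) -> list:
    """Return a list of data quality problems found in one record."""
    problems = []
    if record["dob"] is None:
        problems.append("missing date of birth")
    if as_of - record["last_updated"] > STALE_AFTER:
        problems.append("record not updated in over a year")
    return problems

for r in records:
    issues = validate(r, as_of=date(2024, 6, 1))
    if issues:
        print(f"{r['youth_id']}: {'; '.join(issues)}")
```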

3G. Establish privacy and security safeguards against the potential for undesirable publication of individual case data in the data collection and sharing process.

It is critical to remember that in the collection of aggregated data there is also the potential to publish individual case information and thereby violate confidentiality. Care must be taken to transmit the individual case information for aggregation in a manner that protects the identity of individual youths and their circumstances. Therefore, documenting the legal requirements that pertain to the data contained in the data system should be performed early in the JIS process.  This information should then be used to inform the data security classification assigned to all data elements for access management planning and provisioning.  The initiative should also review its state’s data security classification policies for use in this process.[11]
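
Two common technical safeguards, shown in simplified form below, are to replace direct identifiers with keyed one-way hashes before transmission and to suppress small cell counts before any aggregate figures are published.  This is only an illustration of the idea; the field names, suppression threshold, and key handling are assumptions, and none of it substitutes for the legal review and security classification work described above.

```python
# Simplified illustration of two common safeguards when transmitting case-level
# data for aggregation: replacing direct identifiers with keyed one-way hashes
# and suppressing small cell counts before publication.  Field names, the
# threshold, and key handling are assumptions.
import hashlib
import hmac
from collections import Counter

SECRET_KEY = b"replace-with-a-securely-managed-key"  # never hard-code keys in practice
SMALL_CELL_THRESHOLD = 10   # cells smaller than this are suppressed in reports

def pseudonymize(youth_id: str) -> str:
    """Replace a direct identifier with a keyed one-way hash."""
    return hmac.new(SECRET_KEY, youth_id.encode(), hashlib.sha256).hexdigest()[:16]

cases = [
    {"youth_id": "A101", "program": "Mentoring", "outcome": "completed"},
    {"youth_id": "A102", "program": "Mentoring", "outcome": "withdrew"},
]

# Strip names and direct identifiers before the data leaves the agency.
transmitted = [{"pid": pseudonymize(c["youth_id"]),
                "program": c["program"],
                "outcome": c["outcome"]} for c in cases]

# Aggregate and suppress small cells so individual youths cannot be re-identified.
counts = Counter((c["program"], c["outcome"]) for c in transmitted)
for (program, outcome), n in counts.items():
    shown = n if n >= SMALL_CELL_THRESHOLD else f"<{SMALL_CELL_THRESHOLD}"
    print(f"{program} / {outcome}: {shown}")
```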

Stiles and Petrila advise that to secure data, agencies and researchers must implement protocols that include training; policies and processes for data procurement and use (including the appropriate use of encryption), data security and access,  security incident and disaster recovery procedures, and recording and monitoring of system activity; and current technology that meets industry standards.  In addition, researchers should be required to sign confidentiality agreements.[12]

3H. Develop a plan for the reporting, publication, and use of aggregated data to promote evidence-based practices.

The goal in collecting and sharing data for program evaluation and performance measurement is to focus on some level of intervention so that the team can, first, monitor its effectiveness and, second, take action to improve it if necessary.

With regard to the first, it is crucial that the research team “assess that the data received are valid and useful for research (i.e., to answer the research questions), and that the research team has adequate understanding of the data and the context within which they were collected to appropriately interpret findings.”[13]  A feedback loop that provides results regularly -- to both the front lines where services are delivered and the policy makers who can establish the funding and broad policy direction – is necessary for the data to actually prompt needed action. 

Sites have both an important opportunity and a responsibility to publish -- both among their organizations and to the public -- their evaluation research and performance measurement results to document the effectiveness of their efforts. This is critical to advancing practice that is increasingly based on evidence and informed by the outcomes achieved by the youths, families, and the public they serve.

The first step in building this last element of the feedback loop system is to consider your audiences. You may actually need to report data differently for different groups of stakeholders. The way data is presented will probably differ between reporting on a specific program evaluation and reporting on performance measurement, but there are common approaches. For example, in either case a straightforward and simple presentation that highlights the basics (numbers of clients served; goals/outcomes of the intervention; and numbers achieving desired outcomes, especially as compared to a benchmark or baseline of clients not receiving the service) may be adequate for most audiences.
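
Such a basic report can be produced directly from the matched data.  The sketch below computes the headline numbers described above -- clients served, the share achieving the desired outcome, and a comparison-group benchmark; all field names and figures are hypothetical.

```python
# Sketch of a basic stakeholder report: clients served, share achieving the
# desired outcome, and a comparison-group benchmark.  All field names and
# figures are hypothetical.
served = [{"youth_id": i, "achieved_outcome": i % 3 != 0} for i in range(60)]
comparison_group = [{"youth_id": i, "achieved_outcome": i % 2 == 0} for i in range(60)]

def outcome_rate(group):
    """Share of a group achieving the desired outcome."""
    return sum(r["achieved_outcome"] for r in group) / len(group)

print(f"Clients served:                {len(served)}")
print(f"Achieved desired outcome:      {outcome_rate(served):.0%}")
print(f"Comparison group (benchmark):  {outcome_rate(comparison_group):.0%}")
```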

Periodic, more detailed reports may be required for funding sources, especially if the data suggests that the program ought to be expanded or replicated in other areas.  And in some instances, the data will suggest the need for additional, targeted and sophisticated investigation of trends or findings; this might rise to the level of services research and would probably require the involvement of academic experts.

In its monograph on program evaluation, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) offered some very good advice.[14]  Sometimes, the results of a program evaluation are not what we expected, and could be viewed as “bad news.”  Since the purpose of conducting the evaluation is to continuously improve services, this is not necessarily a problem, especially if the information is communicated effectively.  OJJDP made the following recommendations on communicating less than favorable outcomes:

  • Present the data as soon as you can, but only when you are confident that your data and sources are correct; don’t present hunches or preliminary indications as confirmation of program problems. You may find after further inquiry that your initial supposition was incorrect.
  • Present the data to an audience limited to program managers or administrators (and perhaps to project funders, depending on how closely they are involved in your work).  At the start of an evaluation project, the appropriate individuals who will receive preliminary data and briefings should be identified, and they should be the only persons to receive potential bad news. Then they can make the decisions regarding how to use the data you provide.
  • Present the potential bad news in as positive a manner as possible. Program managers should be glad to receive data that helps them solve problems, or that directs their decisions regarding program modification. If your potential bad news comes early enough, they may be in a position to make corrections, lessen the impact, and improve the program. You will often find that your data will be well received, and perhaps that the news is not perceived by the recipients to be as bad as you thought it might be.

 

 

[1] Prashant Kumar, An Overview of Architectures and Techniques for Integrated Data Systems (IDS) Implementation. 

[2] S. Rondenell, C. Duclos & J. McDonald (2011).  Governance Guidelines for Juvenile Information Sharing.  http://www.acg-online.net/assets/guidelines_2011.swf

[3] S. Rondenell, C. Duclos & J. McDonald (2011).  Governance Guidelines for Juvenile Information Sharing.  http://www.acg-online.net/assets/guidelines_2011.swf

 

[4] Prashant Kumar, An Overview of Architectures and Techniques for Integrated Data Systems (IDS) Implementation.

 

[5] Prashant Kumar, An Overview of Architectures and Techniques for Integrated Data Systems (IDS) Implementation.

[6] Prashant Kumar, An Overview of Architectures and Techniques for Integrated Data Systems (IDS) Implementation.

[7] Prashant Kumar, An Overview of Architectures and Techniques for Integrated Data Systems (IDS) Implementation.

[8] Prashant Kumar, An Overview of Architectures and Techniques for Integrated Data Systems (IDS) Implementation.

 

[9] Paul G. Stiles, Ph.D., J.D., & Roger A. Boothroyd, Ph.D., Ethical Use of Administrative Data for Research Purposes.

[10] S. Rondenell, C. Duclos & J. McDonald (2011).  Governance Guidelines for Juvenile Information Sharing.  http://www.acg-online.net/assets/guidelines_2011.swf

[11] S. Rondenell, C. Duclos & J. McDonald (2011).  Governance Guidelines for Juvenile Information Sharing.  http://www.acg-online.net/assets/guidelines_2011.swf

[12] Stiles, P.G. & Petrila, J. (2011). Research and confidentiality: Legal issues and risk management strategies. Psychology, Public Policy & Law, 17, 333-356.

[13] Paul G. Stiles, Ph.D., J.D., & Roger A. Boothroyd, Ph.D., Ethical Use of Administrative Data for Research Purposes.

[14] James R. Coldren, Jr., Timothy Bynum, & Joe Thome (August 1989). Evaluating Juvenile Justice Programs: A Design Monograph for State Planners. Washington, D.C.: Office of Juvenile Justice and Delinquency Prevention.