Bill DeBaun, Program Analyst
Welcome to the third edition of the Data Resource Roundup! In this series we periodically share resources, including blogs, courses, white papers, and other tools, that cover various aspects of data. Whether it's managing and tracking data more effectively or helping your organization become more data-driven, it will all be here in the Roundup. Have your own resources that should be featured here? Be sure to let me know about them at email@example.com or by putting them in the comments. Want to see the first two volumes? See Vol. 1 here and here and Vol. 2 here.
Let’s start off by being blunt: everyone runs into bad data from time to time. If you work with enough data sets, you’re going to encounter missing data, duplicated data, and a whole host of other problems. That’s okay. Don’t panic. The Quartz Guide to Bad Data wants to help you navigate these troubled seas. The guide is broken down by “issues that your source should solve,” “issues that you should solve,” “issues a third-party expert should help you solve,” and “issues a programmer should help you solve,” and there are detailed descriptions of a number of problems under each heading. Quite a handy (and approachable!) reference.
Everyone likes a good data visualization. It’s irrefutable. New America brings us just such an example here with their “Mapping College Ready Policies 2015-16.” The underlying data examines whether and how each of the 50 states defines college- and career-readiness. Click on any state, and you’ll find the standards, assessments, and definitions it considers to be sufficient for students to graduate from high school career- and/or college-ready.
Speaking of data visualization, Dr. Stephanie Evergreen’s blog is a place to bookmark for sure. Dr. Evergreen advises people and programs on how to better present their data so that it’s digestible and clear. Take this post on “the problem with dashboards,” to start. She provides a good example of the problem (dashboards are overcrowded) and suggests an approachable solution (“loosen up to a dashboard report”).
The Association for Institutional Research (not to be confused with the American Institutes for Research) is a respected voice for the data scientists and analysts on college campuses who work with institutional data. In this mailbag, Craig This, Interim Director of Institutional Research, Wright State University-Main Campus, answers the question “Dear Craig, what are some effective approaches I can use when presenting data to faculty?” His answer is great, and it applies to more groups than just faculty. His advice? Understand that not all faculty are alike, do the math for attendees, spell out your data definitions, and drill down to specific groups rather than looking at an aggregate group.
This is a little involved, but it’s a step-by-step guide to creating a gauge chart in Excel (think a speedometer…but for program data).
Last comment about dataviz, and this one is an oldie (all the way back to November 2015) but a goodie. The name says it all, and the video is quite illustrative. “Shut up about the y-axis. It shouldn’t always start at zero.”
It can be tough to encounter people and organizations who are skeptical about the transformational power of using data effectively. This report from the Future of Privacy Forum provides 19 snapshots of times in the K-16 system when analyzing data helped to change things up. “This report highlights the ways that newly available technology, data, and analytical techniques can create better educational outcomes. It presents concrete examples from Pre-K through higher education of how education data can be used to benefit students, the education system, and society-at-large.”
This is the most “researchy” piece in this edition of the Round-Up, but it’s also from Vox, so hopefully it’s adequately simplified for our non-technical readers. Anyone who has taken statistics is probably familiar (though maybe not deeply) with the concept of statistical significance. A “p-value” estimates how likely it is that a result at least as extreme as the one observed would have turned up by chance alone if there were no real effect. As Vox explains, academia’s longtime obsession with p-values is causing a lot of problems in science.
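For readers who like to see the arithmetic, here’s a minimal sketch of the p-value idea in Python (standard library only; the coin-flip scenario is my own illustration, not an example from the Vox piece). Suppose a supposedly fair coin comes up heads 60 times in 100 flips: the p-value is the probability of a result at least that lopsided if the coin really were fair.

```python
from math import comb

def two_sided_binomial_p(successes, trials, p_null=0.5):
    """Probability, under the null hypothesis, of an outcome at least
    as far from the expected count as the one observed (two-sided;
    exact for a symmetric null like a fair coin)."""
    expected = trials * p_null
    distance = abs(successes - expected)
    # Sum the probabilities of every outcome at least that far out.
    total = 0.0
    for k in range(trials + 1):
        if abs(k - expected) >= distance:
            total += comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
    return total

# 60 heads in 100 flips of a supposedly fair coin
p_value = two_sided_binomial_p(60, 100)
print(round(p_value, 3))  # about 0.057: unusual, but hardly impossible
```

Note that this lands just above the conventional 0.05 cutoff, which is exactly the kind of hair-splitting the Vox article argues has warped scientific practice: the difference between 0.057 and 0.049 is trivial, yet it can decide whether a finding gets published.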
Stay tuned until next time for even more resources on data and evaluation!