Personal Data Security and Privacy in Education Technologies
Horsham (UK), November 2017 - Jen Persson is the Director of defenddigitalme, a civil liberties group founded in 2015 to campaign for safe, fair, and transparent children’s data in education. She is an independent member of England’s Department for Education National Pupil Database Steering Group. Jen will share her knowledge in Session DAT 23 on 07 December, from 14.15 to 15.45. The highlighted question is, "Are you ready for the General Data Protection Regulation (GDPR) in May 2018?"
Europe’s General Data Protection Regulation (GDPR) comes into force in May 2018. It also applies to providers from non-EU countries. What fundamental changes do you think it will engender?
Jen Persson: What is exciting about the potential for change is that through collective cooperation and collaboration, we could make our data much more understandable and usable for individuals. With the advances in technology that we expect in edtech and other areas of life, better digital understanding is an essential foundation for the future.
Our personal data across our lifetime doesn’t often stay in silos, but can move around between organisations and cross borders, or be copied without our knowledge. There is very often an imbalance of power in education between providers and learners that is exacerbated when the learner is a child. And as data usage, including large-scale analytics, has grown in recent years, there has not been a concomitant growth in digital understanding on the part of the people from whom the data comes and is about: the data subjects - you and me.
Personal data is valuable, but all too often this value is kept by organisations who use it in ways we do not expect and that don’t benefit us. GDPR makes clear that there are shared responsibilities between processors and controllers that exist in a relationship of trust with the data subject. We should all know with whom we are involved and how knowledge about our lives is being used. We should all understand how our internet interactions on one page of a website affect the prices or ads presented to us on another. This understanding would create greater fairness and help promote a better balance of power between us, the sources of the information, and those whose knowledge about our likes, preferences, and behaviours can be used as a means of control and manipulation.
GDPR will apply to the personal data of any individual located in the EU, so providers worldwide will need to consider how they interact with and market to people in Europe, and how they securely store, use, and destroy this personal data. How the regulations will be applied in practice remains to be seen.
To what extent will it affect students and other learners?
Jen Persson: How much the changes will actually affect students and learners will depend on how well the people who manage this data meet the new responsibilities, and how aware students of all ages become of their rights. The GDPR emphasises that the data collectors and users are accountable to the source individuals in regard to how the latter’s personal data is used.
On the negative side, given that both the Council of Europe 2016-21 Strategy on the Rights of the Child and the UNCRC acknowledge that capability is not determined by a specific age, it is disappointing that GDPR does not allow for this, and that children’s right to be heard and to participate in decisions affecting them, both as individuals and as a group, played no part in shaping GDPR.
Much of how children’s data is used by third parties is not transparent to them today, and this has allowed bad practice to develop in a shadow world of data brokerage and data reselling that most of the public don’t expect and don’t support. This must change. Children and adults alike have been kept largely ignorant of what cookies and beacons actually do, and of how mobile phone location settings inform some high street shops, enabling them to tailor marketing based upon this and related information.
In education, learners are often unable to see how their own progress has been measured or risk-scores predicted, and they certainly don’t understand how this may be affecting the way they are taught in the classroom. Decisions about us should involve us, and this involvement should expand under GDPR.
"Collect once, use many times" has become the mantra in many areas, at least in the UK’s public services, including education. In fact, GDPR reiterates the expectations of Principle 2 of the existing DPA law: The purposes of collection must be explicit and limited at the time of collection.
This protection is particularly important for children and young people, in whose lifetime the uses of data will change drastically, as technologies and policy develop over time. Their personal choices, behaviours, and beliefs will also change from those of their parents — parents who have taken “consent” choices on their behalf — in ways that differ equally drastically.
When children already have a health-data-generated digital footprint while still in the womb, what might their data lifecycle look like, cradle to grave? The effects of a discrepancy between parents’ and children’s choices as children grow up may be significant. Genomic data, once distributed, is shared beyond the grave and may affect the insurance policies of generations to come, yet parents may put their children’s DNA into the wild through commercial ancestry tests or public-interest health research projects.
Who knows how this data may be used in future? So-called "sharenting" is starting to come under growing criticism. More than half of UK parents – 56% – say they do not post photos or videos of their children on social media, with 87% saying the main reason is that they want their children’s lives to remain private, according to a 2017 Ofcom Communications Market Report.
Today’s data solutions often fail school children’s privacy rights and fail to meet the spirit and letter of good data-protection practice when administrative data is used by an organisation without suitable privacy-preserving choices on offer to the individual. Gathering data 24/7, 365 days a year from classroom computer-monitoring systems and from biometric building-access, library, and canteen systems is common in UK schools. So is scraping parents’ social media data for use in inspections, and the UK’s education regulator, Ofsted, suggested last year that this practice overreaches into the private sphere. Often no explanation is given as to how long data will be retained, and no confirmation is given on leaving education that students’ fingerprints or scans have been destroyed. This should change under GDPR.
In practical terms, students and all learners will be able to make subject access requests more easily and find out what their data is used for and by whom. If done well, this could improve understanding of the benefits that public-interest research brings. Depending on the legal basis for its collection, learners will be able to request that their data be withdrawn, erased, or corrected, and made no longer available to others. This could also help improve data accuracy and quality.
Greater attention than is paid today will need to be given to what data is necessary and proportionate in any admissions process, in using classroom apps, in assigning personalised IDs to school laptops, or in imposing safeguarding software on Bring-Your-Own devices.
Consent, one of the six listed legal bases for processing, cannot be implied. It must be freely given, signalled by affirmative action, and assessed against the balance of power between the data controller and the data subject. This has implications for educational employment data, too, and for the use of social media in universities as communications channels or as groups for sharing coursework and so on.
How will eLearning providers be affected? What changes will they have to make?
Jen Persson: All providers will need to be clear on the legal basis for any data processing they do. Marketing and data management will need to take account not only of GDPR but of the ePrivacy Regulation as well. The definition of personal data is changing, and "processing" is defined broadly, covering collecting, storing, and deleting personal data. It will almost certainly include most situations in which the data consists of MAC addresses and IP addresses, even dynamic IP addresses. As the case of Breyer v Germany demonstrated, an IP address in the logs of a website is personal data. Going beyond GDPR and looking at the ePrivacy Regulation, data controllers and processors also need to take care over the processing of communications data that is identifying, such as headers, content, and location.
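To make the logging point concrete, here is a minimal sketch of one way a provider might pseudonymise IP addresses in web-server logs before analysis. The keyed-hash approach, key handling, and all names are illustrative assumptions rather than anything the regulation prescribes; note too that a keyed hash is pseudonymisation, not anonymisation, since whoever holds the key can re-identify the address.

```python
import hmac
import hashlib

# Illustrative secret; in practice this would live in a key store and be rotated.
PSEUDONYMISATION_KEY = b"replace-with-a-managed-secret"

def pseudonymise_ip(ip_address):
    """Replace an IP address with a keyed hash so log lines can still be
    correlated (same input gives same output) without storing the raw address."""
    digest = hmac.new(PSEUDONYMISATION_KEY, ip_address.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # truncated for readability

log_line = "203.0.113.42 GET /course/intro"
ip, rest = log_line.split(" ", 1)
print(pseudonymise_ip(ip), rest)
```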
The new accountability principle will mean designing processes from the outset with data protection by design and by default in mind. This includes where data is stored and the use of cloud services.
Data analytics about learners and their use of a platform will all be considered personal data, and providers will need to think about how any mining or secondary uses of this data are permitted and communicated.
A clear understanding of individuals' information lifecycles will help data processors explain their data processing to people; for children in particular, the explanation must be in language that they can clearly understand.
Large-scale monitoring or use of any sensitive personal data requires an assigned responsible person or Data Protection Officer. Organisations will need to consider where this role sits and should use guidance from the Article 29 Working Party on who can and who cannot perform this role without a conflict of interest.
Data portability requirements - under which people should be able to withdraw their data and transfer it elsewhere - mean that personal data must be returned to the subject in a manageable format, for example as comma-separated values. The format needs to be consistent and interoperable if it is to work, be of benefit to individuals, and create minimal friction for data processors. That is going to need cooperation and collaboration.
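As a sketch of what that could look like in practice - assuming a hypothetical learning platform and invented field names - the export itself can be very simple:

```python
import csv
import io

def export_learner_data(records):
    """Serialise one learner's records as CSV: a plain, widely readable
    format the subject can take to another provider."""
    if not records:
        return ""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

# Hypothetical activity records pulled from a platform's datastore.
records = [
    {"date": "2017-11-02", "activity": "quiz", "score": 78},
    {"date": "2017-11-09", "activity": "essay", "score": 64},
]
print(export_learner_data(records))
```

The hard part, as noted above, is not serialisation but agreeing field names and meanings across providers so the file is actually usable elsewhere.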
Other technical requirements need thought in order to achieve data minimisation; consider anonymisation, for example.
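One common technique is generalisation: coarsening identifying fields so that any row matches many individuals. The sketch below, with invented fields and thresholds, shows the idea; on its own it does not guarantee anonymity, and a real release would still need checks such as minimum group sizes.

```python
def generalise_record(record):
    """Coarsen identifying fields: an exact age becomes a five-year band,
    a full postcode is cut back to its outward code."""
    low = (record["age"] // 5) * 5
    return {
        "age_band": f"{low}-{low + 4}",
        "postcode_area": record["postcode"].split(" ")[0],
        "score": record["score"],
    }

print(generalise_record({"age": 13, "postcode": "RH12 1AB", "score": 71}))
# -> {'age_band': '10-14', 'postcode_area': 'RH12', 'score': 71}
```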
This shift towards rights and risk-based responsibilities may also mean a new approach to data breaches. A breach posing a significant risk to the rights and freedoms of individuals must be reported to the regulator within 72 hours of its discovery, and where that risk is high, the people affected must be notified directly.
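The 72-hour clock is simple, but worth building into incident tooling so it is never missed. A trivial sketch, assuming the discovery time is recorded when an incident is opened:

```python
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)

def regulator_deadline(discovered_at):
    """The clock runs from becoming aware of the breach,
    not from when the breach itself occurred."""
    return discovered_at + REPORTING_WINDOW

discovered = datetime(2018, 5, 25, 9, 30)  # hypothetical discovery time
print("Report to the regulator by:", regulator_deadline(discovered))
```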
Organisations need to start by identifying their data-protection regulator and data-protection officer. They must also audit their basic data processes and their legal basis: what they are collecting, storing, and processing - and why. Data holders must understand any data lifecycles and consider the guiding principles in regard to data protection by default and design, as well as security and accountability. The practicalities of applied management, privacy notices, breach notifications, consent management, and subject access all need attention.
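Such an audit usually takes the form of a record of processing activities. The fields in this sketch are an illustrative minimum, not a compliance checklist:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """One row in a record of processing activities."""
    data_items: str   # e.g. "pupil name, date of birth, attendance"
    purpose: str      # why it is collected
    legal_basis: str  # e.g. "public task", "consent"
    retention: str    # how long it is kept, and why
    processors: str   # who else handles it

register = [
    ProcessingActivity(
        data_items="pupil name, attendance",
        purpose="attendance monitoring",
        legal_basis="public task",
        retention="current academic year + 1",
        processors="hosted MIS provider",
    ),
]
for activity in register:
    print(activity)
```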
If consent is the basis for any processing, then it must be possible to withdraw that consent at any time, and this requirement, which comes into force in May 2018, may prove to be a challenge for some of today’s systems.
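One way to meet that requirement is to store consent as a first-class record that can be switched off, rather than as a one-off checkbox. A minimal sketch, with invented names:

```python
from datetime import datetime

class ConsentRecord:
    """Track a single, purpose-specific consent so withdrawing it
    is as easy as giving it."""

    def __init__(self, subject_id, purpose):
        self.subject_id = subject_id
        self.purpose = purpose
        self.given_at = datetime.utcnow()
        self.withdrawn_at = None

    def withdraw(self):
        self.withdrawn_at = datetime.utcnow()

    @property
    def active(self):
        return self.withdrawn_at is None

consent = ConsentRecord("learner-123", "progress photos on the class blog")
consent.withdraw()
print(consent.active)  # False: processing on this basis must now stop
```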
Do you think the GDPR is the right way to create improved data security?
Jen Persson: Will GDPR prevent universities from hooking smart coffee machines up to their network and exposing themselves to malicious hacking? No, but it should mean that human and any automated security processes are structured and events logged. You can make your systems, tools, processes, and people all reflect the importance of keeping security-sensitive information secure and reduce risk.
We need to remember that GDPR is about protecting people: our rights and freedoms and protection from discrimination and harm, as much as the data per se.
What creates the risk of our data becoming insecure is its very existence and use.
GDPR does make some requirements for data security from infosec perspectives, but they’re not entirely new. Rather, they build on the basic principle that data must be secure.
Andrew Cormack, Chief Regulatory Adviser at JISC, which runs the UK academic JANET network, gives good advice in the UK, saying that GDPR requires universities, for example, to have procedures in place to detect, investigate, and respond to a personal data breach when one occurs. He recommends that organisations start by identifying the types of data held and note the ones that, if jeopardised, would necessitate contacting the data regulator. Where breaches are of high risk to the people to whom the data relates, it means contacting the affected individuals. I’d recommend that people follow Andrew Cormack’s blogs; they’re very practical.