Sagesse: 2020 Edition Volume V

Sagesse 2020, Volume V, Issue I

1- Introduction  [web link] [pdf]

2- Achieving Perfect PITCH  [web link] [pdf]

3- Our Phygital World and Information Management [web link] [pdf]

4- Preserving the Legacy of Residential Schooling Through a Rawlsian Framework [web link] [pdf]

5- Préserver l’héritage des pensionnats indiens en s’appuyant sur les principes de Rawls [web link] [pdf]

6- The Transformational Impacts of Big Data and Analytics [web link] [pdf]

7- Les effets transformateurs des mégadonnées et de l’analytique [web link] [pdf]

8- Tribute to Leonora Casey [web link] [pdf]

9- Smart Wearables and Canadian Privacy: Consumer Concerns and Participation in the Ecosystem of the Internet of Things (IoT) [web link] [pdf]

10- In memory of Ivan Saunders… [web link] [pdf]

 

White Papers 2020

Whitepaper – Integrate Digital Preservation into Your Information Governance Program [web link] [pdf]

Whitepaper – Case Study: Migrating a Billion-Dollar Government Agency to a New Records System [web link] [pdf]

Introduction

 

Estimated reading time: 6 minutes. Contains 1200 words

 

Welcome to the fifth issue of Sagesse: Journal of Canadian Records and Information Management, an ARMA Canada publication.

Sagesse’s Editorial Team

Sagesse’s Editorial Board welcomes Barbara Bellamy, CRM, as ARMA Canada’s Director of Canadian Content, effective July 1, 2020. Barbara has a wealth of ARMA experience, from her volunteer work with the ARMA Calgary Chapter to her contributions to ARMA International’s Education Foundation (AIEF). She co-authored an article that appeared in Sagesse’s 2019 edition (see “Email Policy and Adoption”) and is a regular speaker at ARMA Canada conferences. Barbara has already taken a leading role in moving Sagesse forward, as you will see in this edition. She is a most valuable addition to the team. Welcome, Barbara.

We also welcome Pat Burns, CRM, to the Sagesse team. Pat has an extensive resume with ARMA Canada, ARMA International and the ARMA New Brunswick Chapter. She was Chapter President for many years, served as ARMA Canada’s Region Director for four years, and then spent valuable time with ARMA International and the Institute of Certified Records Managers. We are thrilled to work with Pat and to have her expertise on our team!

Congratulations are extended to Sagesse editorial board member Stuart Rennie, who received ARMA International’s Company of Fellows distinction (Fellow of ARMA International, or FAI) at ARMA International’s 2019 conference in Nashville. Stuart is the sixth FAI in Canada.

University Essay Contest

In 2019, ARMA Canada held its second essay contest for students enrolled in graduate information management programs at Canadian universities. We are pleased to announce that the team of Alissa Droog, Dayna Yankovich, and Laura Sedgwick, from Western University, received the $1,000 award for their article “Preserving the Legacy of Residential Schooling Through a Rawlsian Framework.” Their article focuses on what some have called a controversial decision by the Supreme Court of Canada, which ruled in 2017 that records from the Independent Assessment Process (IAP) of the government’s truth and reconciliation initiative, documenting how survivors of the residential schooling system were treated, should be destroyed.

The recipient of the $600 award was Emily Speight, from Dalhousie University, for her article “Smart Wearables and Canadian Privacy: Consumer Concerns and Participation in the Ecosystem of the Internet of Things (IoT).” Her article examines the plethora of smart technologies, from smart cars to smart clothing, and the potential of the Internet of Things (IoT) to revolutionize how we live, despite apparent consumer resistance to these technologies. How should that resistance be addressed to improve consumer engagement?

Congratulations to our students! 

Sagesse’s New Feature – White Papers

Be sure to check out Sagesse’s newest feature, white papers, published in addition to the Sagesse articles. You’ll find the white papers on ARMA Canada’s website at https://armacanada.org/, under About Sagesse, https://armacanada.org/portfolio/sagesse/.

In 2019 two white papers were published:

  • “Choosing a Software – Getting to the Request for Proposal (RFP),” by Brenda Prowse, CRM, and
  • “Bring Order to Chaos: Regaining Control of Unstructured Data,” by Jacques Sauve.

 

The 2020 edition of Sagesse features the following white papers:

  • “Integrate Digital Preservation into your Information Governance Program,” by Lori J. Ashley, and
  • “Case Study: Migrating a Billion-dollar Government Agency to a New Records System,” by Jas Shukla.

 

2020 Sagesse Articles

  • Giles Crouch examines “Our Phygital World and Information Management” and the rapid advances made by information technologies, calling the phenomenon a “phygital” world, a blend of physical and digital. He notes that our devices (smartphones, laptops, watches, etc.) blur the line between personal and work information, and argues that new approaches to thinking about information and communication technologies are required.
  • In keeping with changes in the business and academic landscape, Christy Walters and Chrystal Walters discuss “The Transformational Impacts of Big Data and Analytics.” Their article traces an evolving, increasingly positive view of big data and examines the innovative ways data analytics has developed and how it is transforming today’s business landscape.
  • Have you ever had to present the value of a retention schedule, briefly and without any visuals, to a group of professionals outside records and information management? Sue Rock, CRM, did, and she shares that experience in her article “Achieving Perfect ‘PITCH’.” She also offers excellent suggestions on how to prepare an impeccable 30-second sound bite on the role of records in a business setting.

Sagesse’s editorial board would also like to take this opportunity to acknowledge two outstanding members of ARMA Canada’s team who recently passed. 

  • Ivan Saunders, see article “In memory of Ivan Saunders,” and

  • Leonora Casey, see article “Tribute to Leonora Casey.”

Both articles appear in this issue of Sagesse. Ivan was for many years ARMA Canada’s Conference Director and a member of ARMA Saskatchewan Chapter. Leonora was ARMA Canada’s webmaster and a member of ARMA Calgary and ARMA Vancouver Island Chapters. We miss them both. 

Please note the disclaimer at the end of this Introduction, which states that the opinions expressed by the authors in this publication are not the opinions of ARMA Canada or the editorial committee. We are interested in hearing whether or not you agree with this content, or whether you have other thoughts or recommendations about the publication. Please share your comments by forwarding them to: sagesse@armacanada.org

If you are interested in providing an article for Sagesse, or wish to obtain more information on writing for Sagesse, contact us at sagesse@armacanada.org.

Enjoy! 


 

ARMA Canada’s Sagesse’s Editorial Review Committee: 

Christine Ardern, CRM, FAI, IGP; Barbara Bellamy, CRM, incoming ARMA Canada Director of Canadian Content; Alexandra (Sandie) Bradley, CRM, FAI; Pat Burns, CRM; Sandra Dunkin, MLIS, CRM, IGP; Stuart Rennie, JD, MLIS, BA (Hons.), FAI; Uta Fox, CRM, FAI, outgoing Director of Canadian Content.


 

Disclaimer

The contents of material published on the ARMA Canada website are for general information purposes only and are not intended to provide legal advice or opinion of any kind. The contents of this publication should not be relied upon. The contents of this publication should not be seen as a substitute for obtaining competent legal counsel or advice or other professional advice. If legal advice or counsel or other professional advice is required, the services of a competent professional person should be sought. 

While ARMA Canada has made reasonable efforts to ensure that the contents of this publication are accurate, ARMA Canada does not warrant or guarantee the accuracy, currency or completeness of the contents of this publication. Opinions of authors of material published on the ARMA Canada website are not an endorsement by ARMA Canada or ARMA International and do not necessarily reflect the opinion or policy of ARMA Canada or ARMA International.

ARMA Canada expressly disclaims all representations, warranties, conditions and endorsements. In no event shall ARMA Canada, its directors, agents, consultants or employees be liable for any loss, damages or costs whatsoever, including (without limiting the generality of the foregoing) any direct, indirect, punitive, special, exemplary or consequential damages arising from, or in connection to, any use of any of the contents of this publication.

Material published on the ARMA Canada website may contain links to other websites. These links are not under the control of ARMA Canada and are provided solely for the convenience of users. ARMA Canada assumes no responsibility or guarantee for the accuracy or legality of material published on these other websites. ARMA Canada does not endorse these other websites or the material published there.

Achieving Perfect “PITCH”
By Sue Rock, CRM

 

Estimated reading time: 11 minutes. Contains 2042 words

 

Introduction

Imagine you are sitting with a panel of records experts, in front of a group of keen Data Architects, and you have just 10 minutes to present the value of a retention schedule.  And, you can’t use any visuals. How would you strive to achieve the perfect pitch?

This article presents a case study of how a long-established fundamental of records management, the records retention schedule, was pitched to a group of Data Architects.

The Opportunity

An opportunity was extended to present, among a panel of records managers, the topic ‘Retention Policies and Legal Considerations’ to the Alberta Data Architecture Association (ADA) in “10 minutes or less”.  

ADA indicated in its outreach that it expected an informal, information-sharing session, followed by a social. The invitation specified expectations as illustrated in Figure 1. Stringent as the time allotments were, it was encouraging that the group’s leadership had developed an ambitious plan to receive an overview of records management. Three records managers responded enthusiastically to the invitation.

The Agenda

Figure 1 ADA meeting agenda, developed by ADA leadership.

 

The Challenge

The first step to prepare for a verbal presentation is to define the receiving audience.  Which nuggets of retention scheduling would best meet ADA’s expectations? The following definition outlines the general scope of Data Architecture practices.

Data architecture defines the collection, storage and movement of data across an organization … Information architecture refers to the development of programs designed to input, store and analyze meaningful information whereas data architecture is the development of programs that interpret and store data.

Secondly, which records management ‘go-to’ resources (Figure 1, Agenda item 7) could be shared verbally?

An immediate and useful ARMA resource is the free, five-page document called The ARMA Information Governance Implementation Model (IGIM), which, as of November 2019, remained in the beta publishing stage. Among the seven models that attempt to connect the various stakeholders of Information Governance, the ‘Architecture’ references were displayed within the SUPPORTS function.

The visual bullet points in Figure 2 helped describe the ADA receiving audience – specifically, the terms Taxonomy and Metadata draw records management closer to the realm of Data Architecture.  The path to achieving a perfect pitch of retention schedule value became clearer.

Figure 2 The ARMA Information Governance Implementation Model (IGIM) – excerpt – Structures

 

Having located ‘Architecture’ among the IGIM diagrams, further examination led to the role of records retention, located under Capabilities within the IGIM model, shown in Figure 3.

Figures 2 and 3 are examples of the types of resources immediately accessible to records managers to share with information management disciplines interested in understanding alignment opportunities.

Figure 3 The ARMA Information Governance Implementation Model (IGIM) – excerpt – Capabilities

 

The Presentation

Although an agenda had been distributed, the presentation itself was quite ‘fluid’. The audience numbers were small; those present were actively engaged. The ADA Chair took on item 1 of the agenda, “Records management background,” in the allocated “10 minutes or less”, ending on a high note: “RM is not only about conservation but also about value. Records are expensive to gather and the full value within is often left un-harvested and/or inert.”

Three ARMA Calgary Chapter members represented Records Management in Calgary. Within the allotted 5 minutes, the Chapter President deftly addressed the content of records management, assisted by a gigantic wall chart that very visibly illustrated the record lifecycle. There had been no explicit exclusion of wall charts as visuals within the agenda!

Another RIM practitioner whose banking company fully and aggressively supports Enterprise Content Management entertained the audience with ‘cool stories’ (Figure 1, Item 4) – how their RIM forays and triumphs were achieved using creative technology solutions.  His presentation garnered more than 5 minutes, most likely because everyone uses banking services. The audience asked many technology-focused questions, and this bank in particular continues to lead in its implementation of user-oriented technologies which are truly ‘cool’. 

The segue from cool technology to records retention was a challenge.  Why would a retention schedule be of interest to Data Architects?  

A method of setting an audience at ease is to emphasize alignment, in this setting between Data Architecture and RIM practices. This portion of the presentation began with a quick review of Data Architecture’s governance scope, followed by the governance scope of a records retention schedule, to illustrate the parallel.

The presentation began with a question: does Data Architecture require governance to track elements such as the following?

  • Ownership

  • Security/permissions

  • Legislative requirements such as privacy

  • Legal requirements such as:

    • Trade secret contractual obligations
    • Copyright permissions/restrictions; 
    • Project management contractual obligations
  • Approved data collection sources both internal and external

  • Data transactions within the system

  • System integration with other systems

  • User agreement statuses:  current; expired

  • Data distribution agreement statuses:  current; expired.

It continued energetically with a summary of how a retention schedule is a governance tool for all business information.  It addresses key issues such as:

  • Who owns the records (by business role; department; etc.)?

  • Where are the records located (links to legislative requirements)?

  • What are the legislative requirements, including privacy permissions/restrictions?

  • Are there business legal requirements such as product licensing restrictions, copyright?

  • Who is permitted to use records; who is not permitted to use records?

Having established a point of intersection and alignment, measured by receptive nods and ahs, the presentation proceeded to explain how aspects of a retention schedule can help Data Architects achieve fundamental objectives.

The ADA agenda timekeeper was on the job! It was urgent to pitch how exactly a records retention schedule could positively support the requirements of Data Architecture.

A well-documented, management-approved retention schedule paves a path forward for Data Architects to exercise their governance requirements, which include:

  • Govern and manage data as a strategic asset

  • Protect and secure data

  • Promote efficient use of data assets

  • Build a culture that values data as an asset

  • Honour stakeholder input, and

  • Leverage partners

The audience was clearly salivating for more rapid-fire value statements.  The presentation pressed forward in response!

Here’s where a retention schedule can deliver quick hits for Data Architecture:

Authorize redundant data destruction:

√ VALUE: Eliminate the damage voluminous outdated data can cause to a system’s performance.  

Increase system performance:

√ VALUE:  Capture business decisions to permit off-load of outdated data to less costly storage solutions.

√ VALUE:  Retain frequently accessed data on-line or near-line.

Separate the wheat from the chaff:

√ VALUE:  Not all data is created equal; some of it requires stringent business rules and resources.

√ VALUE:  Understand when data has been assembled into documents of evidence.

√ VALUE:  Documents can be relocated into an RM-DM system to live out their retention requirements.

After these hard-hitting facts were delivered, the presentation slowed to a conversational conclusion, explaining that:

  • These illustrations of how a retention schedule can inform Data Architecture practices rely on populating the retention schedule with salient data about the data. The retention schedule is organic. As descriptive information is added, it becomes a dynamic operating tool that positions a company to leverage its data assets for future opportunities.
  • The act of aggregating data governance rules into a retention schedule will position a company to understand the value of enterprise-wide data governance. Even elementary actions such as defining data into buckets, for example program data, statistical data and mission support data, will help a company assign appropriate value and resources to its data.

The presentation concluded with a rousing statement:

“In short, a retention schedule is a dynamic operating tool that positions a company to leverage its data assets for future opportunities.”

Tip – Be Prepared for a Perfect Pitch 

Records managers have often been advised to prepare an ‘elevator speech’, a 30-second sound bite that describes the role of records in a business setting. One would employ this speech should an opportunity arise, such as standing shoulder to shoulder with someone in an elevator.

Philip Crosby, author of The Art of Getting Your Own Sweet Way (1972) and Quality Is Still Free (1996), suggested that individuals should have a pre-prepared speech that delivers information about themselves, or a quality they can provide, within a short period of time, namely the length of an elevator ride, in case they find themselves in an elevator with a prominent figure. Essentially, an elevator pitch is meant to allow an individual to pitch themselves or an idea, with very limited time, to a person who is high up in a company.

There are many pitches one can prepare, such as a value statement for one’s current position. For the purpose of interacting with Data Architects, or any IT discipline, one method of preparation is to imagine a RIM role within that IT discipline, that is, to use business language common to both.

Some IT practitioners use the terms ‘Records Management’ and ‘Document Management’ interchangeably. The following excerpt from a recruitment notice uses only ‘document management’. Since it refers to a ‘company-wide … framework’, records managers would astutely point out that the correct term would be ‘records management’; however, does the distinction matter at this level of conversation?

 

Remain focused on the big goal – continuous dialogue with other information management professionals to ensure records management principles are applied as “a consistently applied process for managing and controlling documentation”.

The following position description contains exciting language to memorize for an elevator speech.  It has passion, potential and agile descriptions of how records managers contribute to the overall management of information resources.  Here is a list of the position’s responsibilities:

  • Work with business stakeholders, IT management team, Architecture and Portfolio Managers to further the objectives of overall information management practice;
  • Assist the business, IT management team and Architecture team in developing, communicating and promoting key information management principles, standards and practices that govern the current state, and define the future state of IT;
  • Work with Business and IT stakeholders to build a holistic view of the corporation’s information assets, and document this using multiple views that show how the current and future needs of the organization will be met in an efficient, sustainable, agile, and adaptable manner;
  • Help IT teams gather information management needs and demand in a structured manner and translate that demand into IT roadmaps;
  • Promote and participate in sessions with business stakeholders and IT team to ensure policies are supported by the information management framework and standards while identifying the need for establishing new standards and practices.

In Conclusion…

What started quite informally, an invitation to present ‘what is a retention schedule’ to an audience of Data Architects, led to a reflection on being prepared to deliver a perfect pitch. Constraints such as a verbal-only format with no supporting visuals, no ‘take away’ posting of presentation materials for the audience, and limited time are common to many situations – for example, an elevator!

To present verbally on a topic reasonably unfamiliar to an audience requires not only solid knowledge of subject matter, but also a colloquial delivery style which establishes rapport and confidence.  To distill the components of a retention schedule into auditory consumable “bytes” is a challenge. Finding common ground among mutual information management-based disciplines seemed a positive path forward.

Are records managers prepared, on the spot, to convey principles in a sound-bite manner? As business language evolves, phrases such as “must have” have outlived their audience’s tolerance. Measures are good; however, speaking about saving terabytes of space, or applying retention destruction to gazillions of documents, is equally unmemorable.

A recent ARMA Calgary workshop focused on policy writing. What could be new? The examination found the language used in current policies to be ‘in the past’. Newer language states facts in simple, assertive statements such as “We protect records.”

Through research, presentations, and writing, records managers assert their future by embracing continuous education, change, and challenge.

Our Phygital World and Information Management: A Human-Centric Approach, September 2019
Giles Crouch, design anthropologist, E K S P A N S I V, a design anthropology firm ©2019 Giles Crouch

 

Estimated reading time: 30 minutes, 5 seconds. Contains 6018 words

 

Introduction

To say that managing information in today’s increasingly complex world is a challenge would be an understatement. Information technologies have advanced so rapidly and become so deeply embedded in our daily personal and work lives that we can truly say we live in a “phygital” world: a blending of physical and digital. On the devices we use, from smartphones, laptops, tablets and watches to voice-controlled devices, the line between personal and work information is increasingly blurred and intermingled. For those in information and records management, this is not new. But new approaches and ways of thinking about information and communications technologies are needed.

These advances in technology have had, and are having, a profound impact on workplace cultures and processes, as well as on society at an organisational and global scale. Before the internet and our always-on, hyper-connected world, the question of how humans interacted with Information and Communications Technologies (ICT) in regards to managing information within the organisation, workplace culture and rituals wasn’t deeply considered, for the most part, beyond the surface. Now, these factors will play an increasingly vital role in the management of the organisation. Successful management of ICT and information within the organisation requires a more human-centric approach. More than ever, software and hardware companies are taking a much more human-centric approach in the very design of the tools they make. An example is smartphone manufacturers such as Google and Apple, who have designed their devices to capture people’s attention so that we use them more. Not many of us would use a smartphone today that had a black and white user interface.

In just the past decade we have seen the rise of the UX (User Experience) and UI (User Interface) designer taking a front seat in software development. The old software development approach of consult, build, release, and revise in long cycles is over, replaced by agile methodologies and constantly released iterations. This was especially so for software tools delivered in the browser, known as Software-as-a-Service (SaaS), but even Microsoft is now constantly delivering updates to desktop software rather than releasing entirely new versions every year or two. 

My work as a design anthropologist seeks to understand the intersection between technology, society, business and culture to help solve complex problems from a human-centred perspective. This paper takes a brief look at why a more human-centric approach to understanding and designing information use and ICT tools in the organisation has become critical.

Context. From the bronze axe to the smartphone.

For many of us in the technology world, we simply use the tools we have or are presented with at home or work. Two generations have grown up using computers and software, especially Microsoft Office and the Windows operating system. The current generation is growing up with touch interfaces on smartphones and tablets, and a new one will grow up with voice-activated devices. We often look at the problems and challenges of information management from a place of “now” and where we are. Understanding the context of ICTs in society and the workplace helps frame why it is critical to understand the role that culture, ritual and inclusivity play when maintaining, designing and implementing ICTs and information management policies and procedures.

The Bronze Axe and Cave Drawings

Two key facets of humans are that we communicate to survive and that we use tools that enable us to employ the strategies we communicate to each other to survive. For thousands of years a worker had to know how to socialize and communicate with co-workers and use tools to stay employed, thus earning a salary and putting a roof over their head and food on the table. How we communicate has evolved significantly as have the tools we use to survive.

As humans evolved, we developed language to communicate and we learned how to manipulate information. Along the way, we started to draw images on cave walls; the cave walls were dry so the images lasted, and a cave was a relatively safe place where a lion was less likely to make dinner out of you and your tribe. We went on to create multiple complex languages and writing. Symbols became an important part of how we communicate. Symbols are evolving still, such as emojis; love them or hate them.

At some point we also developed tools – first spears and later knives, swords and other tools. The bronze axe played a very large role in human societies that could make them. It was the first time in human history we could build complicated things. The bronze axe is akin to today’s smartphone. The axe, like the smartphone, was a social signal. If you had a bronze axe as opposed to Axe Version 1.0 which was a cheap stone that chipped, it meant you were either wealthy or strong/clever enough to take it from someone else. Either way, you had status and thus influence. It also meant you could make shelter faster and easier, catch your dinner and do other things that most likely kept you alive somewhat longer than the chap with Axe 1.0.

The rise of digital tools

A wee bit later we invented the printing press, then the radio and television. Now we have our smartphones with TV show access and podcasts. Some suggest that with the vastness of today’s information, we have become overloaded with information. Arguably, we achieved information overload when we printed more books than a human could read in a lifetime. Humans invented libraries and archives to store and manage information. Before that, we used stone and wood. In a sense, a library was humanity’s first database. Just in analog format. Over time, the ability to read and write and communicate with broader audiences became both a status symbol and necessary for societal and species survival.

ICTs are tools that integrate our ability to communicate and organise more deeply than ever before. A key aspect of ICTs is that they reduce friction (friction arises when more steps are needed to communicate information; there is less friction in sending an email than in mailing a letter) and enhance our ability to organise. Software and hardware are simply the bronze axe and papyrus reed of long ago, slightly updated.

For a long time, communications methods were largely broadcast in nature: TV, radio, print, with little to no immediate feedback ability. Even the internet, for a time, was a place we went to. To access the internet you needed to use a very defined device, a computer. You also needed a fair degree of skill to operate a computer, which at one time was very functionally specific. They weren’t connected. Once computers became connected and the Personal Computer (PC) was invented, things started to change. But for a very long time, the internet remained a separate place. We often called it Cyberspace. It was a space, separated from the space that the real world occupied. Our physical and digital worlds were distinct in nature.

While early mainframes were (and are) expensive, so were early desktop PCs, and they took up a fair bit of table or desktop real estate, had to be plugged in and connected to other computers by cables. A certain degree of education was also required to interact with these earlier devices. Even laptops, until recently, required a physical connection to the internet and to other computers or peripherals such as printers and a mouse. Most information technologies were, essentially, fixed artefacts.

In homes, special locations were set aside for computers; a spot in the kitchen or a special desk in a study or living room. Rituals evolved around when we would spend time on the internet, the software we would use and how the computer was treated, especially in family settings. For organisations, the initial PC investments were expensive, but IT had control over what software was available and when, so ICTs were more easily managed.

We understand today, looking back, that the internet as a tool, has had a profound impact on human society and how we communicate with each other, both good and bad. All technologies, or tools, have, throughout history, been good and bad. A bronze axe helped you make shelter and dinner, but it could also be used as a weapon to be nasty to other humans. While we see great benefits in Artificial Intelligence (AI), we also recognize that it can be weaponized by rogue states or generally bad people who want to do bad things.

Now, our world is hyper-connected, always on; phygital. Today, there are more devices connected to the internet than there are people in the world. Increasingly, technologies are becoming more assistive and invisible yet ever more pervasive. In most everything we do today, some form of information technology plays a role, sometimes without us realizing it is happening. As NYU professor and author of the book, Here comes everybody, Clay Shirky has stated, “When technology becomes invisible is when it gets interesting.” 

No longer can we separate ICT’s from our work and personal worlds or our daily lives in general. Connectivity is everywhere and always on. This is why, as information management professionals, it is increasingly important to understand ICT’s as tools and see them from a more human-centric angle, not just as productivity tools. For so long, ICT’s were viewed as tools to manage information within the systems that they were designed to support. External connections were rare and they were not often designed to play with other tools. 

The very language we have used in developing these tools is abstract and dissociative from humans. The ICT tool (software and hardware) was at the forefront, with the human as a “user” who conforms to the intent of the tool and the business system it is designed to work in. This, in essence, is placing function before form. The problem with all tools throughout human history, however, is humans. When something goes wrong with a software tool we are using, we blame the human. We can also blame the human who figured out a bronze axe could be used to thump some poor chap over the head. Someone always tends to do something with a tool that the creator never intended, such as forgetting to plug it in. Additionally, all technologies have a duality. They can be used for good and bad. Add in the human propensity to do the unexpected and we introduce the law of unintended consequences. When social media tools hit the world, the expectation was for democracy to sweep the world and life to be wonderful. Instead, democracies have shrunk and we have a new meaning for trolls and new words such as cyberbully and hacker. On the upside, movements like #MeToo were enabled. We’re a quirky bunch.

An excellent example of this in an information management context is Dropbox. A brilliant concept as a tool to enable people to easily share files. Humans love to share information as we know. It’s how we survive. That was Dropbox’s intent and it worked very well. There were also a lot of knowledge workers who had better computers at home than at work (this is still very often the case) so they’d upload the file to their Dropbox account to work on at home or their personal laptop at a coffee shop. Before that, it was USB sticks. But services such as Dropbox, Google Drive and Box have led to a bit of a nightmare for information management.

Policies were put in place; sometimes they worked. Some organisations simply adopted Dropbox as their primary document management system. Others adopted a hybrid of on-premises and Cloud solutions.

Humans will always find ways to work around organisational policies and rules as well. One rather significant example is the U.S. Marines in Iraq and the battle of Fallujah during the Iraq War. The command officers came up with a battle plan, giving the orders on what weapons and tactics to use, all of which was communicated over the military’s secure networks. The line soldiers who would be fighting the actual battle, however, had other ideas. They established their own secure network on top of the command one. They also decided to use slightly different tactics, which were communicated on their own network. They won the battle, and likely would not have had they abided by the command network.

All of this is to say, the domain of creating, managing and operating ICT tools today is no longer exclusive to the IT department and those responsible for information management. In many organisations, we see departments using SaaS-based tools they access via the internet without ever telling the IT department. In one recent technology audit of a firm that thought it was spending about $40,000 a year on various SaaS and related services, we found it was actually spending $320,000. It is a well-managed firm with a strong IT department and firm policies and procedures in place. Yet this still happened, and it happens often.

Because ICT tools are increasingly easier to use and technology is so pervasive in both our personal and work lives, those in IT and information management will have to take a more human-centric approach to governance, management and deployment in the coming years. Next we explore the cultural, ritual and inclusivity aspects of our phygital world.

Understanding culture with information technologies

Technology, information and culture

As computers and PCs began to enter organisations in a meaningful way in the late 1980s, they were mostly for very defined activities: word processing and accounting. Finding a senior executive who had a PC monitor on their desk was a rare sight indeed. At that time, computers were simply seen as a tool used by line workers, not those with seniority. They held little social currency and were not seen as a signal of power. Into the 1990s, as computers became networked, easier to use and cheaper to purchase, they crept into more and more roles within the organisation.

Companies like Microsoft and Apple updated their operating systems from command lines to Graphical User Interfaces (GUIs). Rather than having to know code, one had to learn how to interpret symbols. In lock-step with PCs came connected photocopiers that copied and printed from computers. Monitors became colour as well. The cost of networking fell, but networks required significant infrastructure investments such as ethernet cabling and networking equipment. Then, with networking advances, came the internet. Once the cost of computers started to come down and the internet evolved to connect consumers, things really started to shift. PCs began to gain social status both at home and at work. Laptop PCs started to become cheaper as well. The cost to create information became much lower, as did the friction.

Where previously, from the late 70s to the late 90s, it was secretaries and other administrative staff who had to know how to use a computer, by the late 90s salespeople and even middle management also had to know how to use a PC. For salespeople, the laptop became a primary working tool. The black leather or nylon laptop bag became ubiquitous at airports and on subways. This was also a boon for physiotherapists, thanks to all those workers with shoulder and neck problems from lugging heavy laptops around. Now, senior executives wanted a computer on their desk. The PC had become a status symbol at work. This is also, arguably, when we lost the work/life balance.

What type of device you had often indicated your seniority within an organisation, for it signalled how much information you had access to. Seeing a large monitor on an executive’s desk implied they knew how to use technology and that they had access to, and control over, important information. If you used a laptop, it conveyed that you were someone who was mobile and worked more outside the organisation’s walls. It suggested importance. The computer had now begun to establish itself in the organisational culture.

It was also around this time, the early 00s, that cell phones became more ubiquitous: smaller, lower cost to operate and a key business tool. But until the launch of the BlackBerry and then the iPhone in 2007, cell phones were more of a business tool than a consumer device. Having a BlackBerry prior to the iPhone sent an even bigger signal of social importance within the organisation. When the iPhone launched, it put the device into the hands of the consumer. PCs became far cheaper as processing power, memory and displays dropped significantly in cost. Then, of course, came tablets and the Cloud, and Software-as-a-Service (SaaS) companies exploded.

This is also when email became very popular; a tool that still hasn’t been replaced. It is also another example of us quirky humans. While it was intended as a work productivity tool, humans found email to be an excellent way to share jokes, leading to many embarrassing incidents of people hitting “reply all” by accident and of office romances being unhappily discovered. To this day, email etiquette remains a hot topic in the workplace.

The combination of lower-cost PCs, ubiquitous high-speed internet (and Wi-Fi), smartphones, tablets, the Cloud and social media has had a profound effect on corporate cultures. The latest devices to impact the workplace are the smartwatch and fitness trackers. Again, owning an Apple Watch sends a signal of status versus a simple Fitbit fitness tracker. While few organisations see the value in a smartwatch for employees, these devices still connect to corporate smartphones and networks and send a signal of a person’s income level.

The advances of ICT tools have led to increased collaboration between departments and across different organisations. This has also resulted in friction between younger and older workers in an organisation. Many of those in their mid-40s and above are used to an information environment that had more silos and departments that rarely shared amongst each other. Information is power.

The power dynamics of information

The power dynamic of information within the organization is important to understand. Senior executives are seen to have access to any and all information. Access wanes as a role’s perceived importance declines. This also plays into inclusivity, which we discuss later. For those in information management, understanding the dynamics of power in regards to ICTs and information is a key part of forming policies and developing or employing new tools in the organisation.

ICT operations have developed from a largely administrative role with minimal social and cultural status within an organisation to playing a key role culturally. Along with devices, information access and the tools you use to create, manipulate and manage information have also come to hold cultural significance. For instance, consider the communications tool Slack. It is supposed to be a productivity tool that enables teams to collaborate far more easily, and to some degree it does. Slack’s creators seemed to think it would eliminate email, but not so far. While it is an excellent tool, Slack has added yet another layer to the complexity of information management today, most specifically in the area of document management. Slack enables communication outside of the organisation, thus creating an added area of cybersecurity worries for IT departments and those trying to manage information. Slack has also evolved its own culture within organisations, as have other tools.

Another often overlooked or missed power dynamic in the workplace is the “Fixer”, as we will call them. This is the one person (and there may be one or two in various functional units) who knows a particular app or database extremely well. They’re the resident expert and they fix people’s problems. This provides them with a degree of power within organisational culture. Changes to such tools, or the introduction of new ones, change this person’s power dynamic. They may be highly resistant to new tools because of this. Often Fixers feel that because they know these tools so well, they are more valuable to the organisation and have a greater degree of job safety. Fixers can be major roadblocks or they can be turned into champions. When they are recognised and made a key part of a transition, selection or policy development team, Fixers may be a significant benefit to a project’s success.

As mentioned, many organisations are often unaware of how many tools like Slack are being used. When an employee in one department is invited to collaborate with another department, they may end up using a tool that the other department uses. This may lead to a feeling of being included in a special way, which has an impact on organisational culture and on inclusivity, which we address later.

Understanding the cultural impacts of ICT within an organisation may lead to deeper insights regarding how people create, manage and share information. Such insights assist in creating better policies and procedures. Often, policies are ignored or worked around because they don’t align with organisational culture. Employees may feel alienated by some policies or tools, while others may feel that the policies hinder workflow and reduce efficiencies. Culture can have a significant impact when considering major projects such as digital transformation or selecting and implementing new database or software tools.

Understanding the role of ritual with information technologies

When we hear the word “ritual” we tend to think of religious ceremonies, and while a ceremony is one form of ritual, it isn’t the definition of ritual. Think of your own morning routine when preparing for work; it is a form of ritual. Perhaps first you put on the kettle for tea or brew a coffee, feed your pets and begin getting ready for work during the work week. On the weekend, you may have a different routine. These are rituals.

People have similar rituals with their devices, from smartphones and tablets to their PC. From where we set up the device in our workspace and how we begin our day, to how we arrange apps on our smartphones and tablets. Perhaps checking email, which we often do on a smartphone before we even get to work. With the different software apps we use, we often have a ritual in the way that we use them as well as other productivity tools. Chances are you have a desktop wallpaper on all your screens and you’ve customized the apps as much as you can to your preferences. Changing these ritualistic behaviours can make us angry and frustrated. Part of the reason we don’t like changes in apps is because it interferes with our rituals. To a large degree these rituals play a part in our workflows.

Rituals also take place due to the volume of information created and stored. A new app, or perhaps a change in the SharePoint structure or an SAP tool, means a change in how an individual finds and manages information; this means a change in ritual. The bigger the change, the more resistance it may provoke, as we’ve discussed with Fixers. Ritual is important to the individual worker and helps in understanding issues of inclusivity and culture, but plays a lesser overall role. It is still valuable to acknowledge and understand.

Inclusivity in a Phygital Organisation

Tied to organisational culture is inclusivity. Information technologies have no opinions and are agnostic. How they are deployed within an organisation, however, can impact employees’ sense of inclusivity. For example, management handing down their laptops to lower-level roles could lead to people not feeling included as part of the organisation, or feeling less important. Even colours used in applications can impact some cultures. A heavy use of yellow, for example, may make some people of Indian culture think of funerals, as yellow is a colour representing death in India, as blue is in Chinese culture. Expectations regarding the use of the organization’s information technologies may also impact how included employees feel in the organisation. Consider, for example, expectations around the use of an organization’s technology on weekends: are employees expected to answer emails on a Saturday or Sunday when some, for religious reasons, believe they should not be using these devices?

One example of inclusivity that may seem trivial, yet had a huge impact on how employees saw themselves within an organisation, was in regard to email addresses. In this example, senior management had an email address that represented the parent brand, while employees and lower management had email addresses for different brands. The employees felt undervalued and excluded from the main brand they felt more empowered by. This caused ongoing friction between employees and management. Management resisted making any change for a long time. Eventually, with the intervention of the CIO and HR executive, all employees were given the same email address. The impact on morale was profoundly positive. Small things can lead to big things.

In any organisation where information technologies are a key part of an employee’s job performance, issues of culture, ritual and inclusivity will play a key role in how people perceive, accept and work with the tools they are provided.

For decades, the development and deployment of information technologies has been approached mostly from the perspective of the developer of the tools and the requirements of the buyer. People have been seen simply as “users” rather than humans. As mentioned earlier, the word “user” is an abstract term that creates a sense of disassociation between the human designing the technology and the human who has to use it every day at work. The user has simply been viewed as a functional part of the system, yet it is the humans who cause the most problems with information technologies. And as we’ve touched on, humans are quirky and will do unexpected things with technology.

A more human-centred approach to information management changes how software and hardware are designed, developed and deployed within an organisation. This is, in part, the role of the UX (User Experience) designer and the UI (User Interface) designer. To better understand the impact of human behaviours on software or an overall ICT system, more organisations are using the anthropological process of ethnography before and during the development of solutions, whether they be hardware or software. A certain tension always exists between how an information management person sees the best solution and how the humans on the other end use the tools.

Emerging trends in software and hardware

Earlier, we mentioned that when technologies become invisible they become more interesting. This is starting to be the case with software. Hardware has become increasingly invisible (many people over the past 20 years have grown up using a computer and, over the last decade, a smartphone) and has somewhat plateaued in terms of capabilities. For decades hardware evolved significantly every year. This is known as Moore’s Law, where computing power doubles about every 18 months. Today, we are reaching the end of this evolution. We see minimal advances in storage, processing power and screen quality. This is very good for software and the humans who work with software.

For decades, there was a constant tension between software and the hardware it worked with. It led to high development costs and significant impacts on organisations’ ICT budgets and how they invested. With the plateauing of most PCs, and even smartphones and tablets, and the rise of Cloud computing, software now has an opportunity to become much, much better. And we see this with an ever greater emphasis on a more human-centric approach to software development. Using what are called agile methodologies, software updates are more iterative and constantly improving, whereas before it was all about strict version control and what were known as waterfall methodologies. In some cases, waterfall methods still apply, but they are not as prevalent as they once were.

Increasingly, software tools are being developed from the outset to be more connected to other tools. This is done through Application Programming Interfaces (APIs), to the point where some pundits refer to the “API Economy”, where entire business models for software products, and their very survival, are based on connecting to a major platform. Tools like Trello, Monday (visual project management), Dropbox and others work best when they connect to platforms such as Slack, Google’s G Suite or Microsoft Enterprise and Microsoft Azure.

Many years ago, platform companies like Apple and Microsoft made a point of not enabling their software to work on each other’s platforms. This is no longer the case. While in certain instances Apple, Google or Microsoft may not enable certain features to play well with each other, they increasingly interconnect with each other. Microsoft Office and OneDrive, for example, work across Apple operating systems and Android.

Many software and hardware companies are also leading development today from a human-centric design perspective. This is especially so with regard to IoT (Internet-of-Things) device makers, who study human behaviours more than ever before. Software is about to get a lot better, but with so many variables and layers it is also going to be more complicated for organisations to implement and manage. Sorry about that; humans are quirky that way.

A quick look ahead

While predicting the future is impossible, we can see to some degree, where information technologies are going. Some will have a significant impact on information and records management.

Artificial Intelligence (AI): The use of AI may be very helpful to information management, or it may not. AI could, for example, be used to monitor how information is managed within an organisation and recommend, or make, changes. The risk is in attempting to fully automate this process, which could result in more problems elsewhere within the structure of the organisation. AI will play a vital role in managing ever more complex systems that go beyond the human ability to comprehend.

Blockchain: This is a very promising technology, especially in records management. It will enable the stamping and confirmation of records so that they cannot be altered or tampered with, thus better guaranteeing contracts and document security. While blockchain offers key advantages, it is not a perfect system. For example, there is the 51% exploit, whereby a hostile entity gains control of 51% or more of the power in the blockchain and can force the other participant systems to agree to a change, such as transferring funds to a fraudulent account.

Internet-of-Things (IoT): These are devices with sensors that connect to the internet. They can range from the common light bulb all the way up to complex manufacturing devices. The most well-known consumer IoT device is the Nest thermostat for the home. Some companies are developing IoT devices that fit in the toilet and will be able to monitor sugar levels and bacterial infections among other human maladies. A significant value of IoT devices isn’t the function they perform, rather it is the data they collect. IoT connected home devices can, for example, help power companies understand and manage energy loads on the grid.

Augmented and Virtual Reality: Still fairly expensive and a new technology, augmented reality (AR) is already being used in industrial settings with smart glasses that can overlay information, and we are seeing it deployed in some smartphones and vehicles. Virtual Reality (VR) remains expensive in terms of hardware and content development. Both AR and VR reside mostly in commercial applications, mostly industrial and military ones.

Autonomous vehicles, augmented and virtual reality, drones and biotechnology are all other tools that are emerging. They will add new layers of complexity for information management professionals. They will require new workplace and organisational policies and procedures and governance approaches.

Concluding

Our world is more complex and it’s not going to become any less complex anytime soon. It is increasingly hard to draw the blurring line between information within and without the organisation, especially in terms of managing it. No doubt, better tools to accomplish this Herculean set of challenges will come along. Those that take a human-centric approach, however, will have a greater chance of success.

For those in the field of information and records management, taking a more human-centred approach to understanding how and why information technologies are used within the organisation can be extremely helpful. By understanding power dynamics, one can see how information is viewed within the organisational structure, which can help in suggesting, recommending and defining new technologies, policies, procedures and governance. Understanding culture makes digital transformations easier and invites new ways of introducing not just new technologies but also policies that employees will accept. Looking at ritual helps with designing and deploying new policies, processes and tools so that challenges and acceptance can be anticipated. And taking inclusivity into account helps with messaging and policy planning, as well as workplace morale and culture.

For decades, with the development of information technologies, the human was seen as being in the loop of the system, but as a functionary who would perform in predictable ways, and the tools themselves were designed for organisational systems that worked for the organisation internally. And as discussed previously, both software and hardware suffered from paying too little attention to the humans using the tools, focusing largely on the “user.” In the late 1970s we saw the rise of the Human Computer Interface (HCI) practice, but computers were larger and interfaces restricted (GUIs were a dream at Xerox PARC).

Such a dissociative approach to developing information technologies is changing. Now we have Agile development methodologies, rapid deployments and iterative processes. Data moves across organisational functions, resides within and without the organisation, and is increasingly difficult to manage, secure and control. How people use tools and think about information has evolved significantly over just the past decade. As the technologies have blended, culture, ritual and inclusion play a more significant role in how humans perceive and use information.

Information technologies are evolving rapidly, and as we live in a phygital world where devices proliferate and information creation and sharing accelerate, a human-centred approach to developing policies and new tools is key to their successful development, implementation and management.

For this paper, I used my own laptop, keeping the document in the cloud as I worked on it. Some of it I wrote in cafes, some at my desk, parts on my smartphone and some on the couch in the quiet of the evening. I also accessed documents in other organisations' systems where I had permission to do so. In other words, though this is a single document, the information collected to bring it together resided in many places under various rules, and it was written in various locations across multiple devices.

Information technology and management is less about "users" today and more about humans, as we evolve our understanding of how people interact with technology at work and at home. It is increasingly hard to separate the two, especially with smartphones that may hold both a corporate and a personal credit card and a mix of personal and work documents. Such a device may stream music to the employee's vehicle and home network, where we increasingly see internet-connected thermostats and other household appliances. It may also connect to IoT devices in the office and move various types of personal and work information.

All of this requires new approaches to how we consider and manage information within the organisational context. A design-oriented, human-centric approach helps build contextual awareness in the development, planning and implementation of information management and ICT tools within the organisation. As more devices enter the organisation and as collaboration features become more prominent in almost every ICT tool and software application, new pressures will be placed on existing systems and new challenges will emerge.

Employees can work almost anywhere. Information lives in multiple locations. Devices are becoming ever easier to use, just as software is becoming easier to use and ever more interconnected. Traditional approaches to information management and ICT tools are having to evolve. The complexity of systems and tools will increase. Artificial intelligence is creeping into ever more organisations even as analytics tools struggle to deliver value. And as we know, humans are quirky in how they see tools and use them in their everyday lives.

For thousands of years, information was largely static. Today, it has become fluid and ever shifting.

Preserving the Legacy of Residential Schooling Through a Rawlsian Framework
Alissa Droog, Dayna Yankovich, Laura Sedgwick – MLIS Graduates, Western University

 

Estimated reading time: 34 minutes, 48 seconds. Contains 6963 words

 

Introduction

Indigenous Peoples in Canada have been mistreated. A notable example of this mistreatment is the residential schooling system, in which countless Indigenous students were unwillingly taken from their homes and forced to attend boarding schools where they were, in many cases, abused physically, sexually, or emotionally. In trying to make amends for what was done, the Canadian Government developed initiatives to help move towards truth and reconciliation. In moving forward, issues emerged regarding the handling of important records, specifically the Independent Assessment Process (IAP) records, which document how survivors of the residential schooling system were treated. In 2017, the Supreme Court of Canada ruled that these records should be destroyed.

This paper evaluates this controversial decision through the application of John Rawls' Principles of Justice. Rawls is a 20th century American political philosopher whose Principles of Justice, namely the Principle of Equal Liberty and the Difference Principle, provide a framework for understanding justice and fairness within society. More specifically, these principles assert that everyone ought to have equal rights and that, where disparities do exist, social structures ought to benefit the least advantaged groups (Garrett). As Indigenous Peoples have been disempowered by Canadian society, Rawls' Principles of Justice have been chosen as a lens through which to view this case, as the focus on power imbalances effectively shines a light on themes including justice and fairness in a society of diverse groups and values. The exploration of Rawls in relation to this court case suggests that the decision to destroy these IAP records does not support the survivors or efforts toward reconciliation. Furthermore, this is not how human rights cases ought to be handled, as it does not benefit the least advantaged groups.

Research Question

How can the 2017 Supreme Court of Canada’s decision (Canada (Attorney General) v. Fontaine, [2017] 2 SCR 205, 2017 SCC 47 (CanLII), <http://canlii.ca/t/h6jgp>) regarding the destruction of Independent Assessment Process records be evaluated considering Rawls’ Principles of Justice?

Background 

The residential schooling system lasted for over 150 years in Canada and was attended by over 150,000 First Nations, Inuit and Métis children (Truth and Reconciliation Commission of Canada 3). In many cases, children were forcibly removed from their parents to attend these boarding schools, which were funded by the Canadian government and run by religious organizations. Many perished or were abused at the hands of their caretakers. Beginning in the 1990s and 2000s, survivors of abuse in residential schools began to take their cases to court, and in 2007, the Canadian government established the Indian Residential Schools Settlement Agreement (IRSSA). This agreement provided compensation to survivors of the residential schooling system, supported healing measures and commemorative activities, and established the Truth and Reconciliation Commission (TRC) ("Indian Residential Schools").

Compensation was provided to survivors of the residential schooling system in two ways. All attendees of the residential schooling system were entitled to receive compensation based on the number of years they attended through the Common Experience Payment (CEP), or to negotiate a different sum via the Independent Assessment Process (IAP). Over 79,000 survivors came forward to receive the standard $10,000 for proof of having attended the schools, plus an additional $3,000 for every additional year they attended (Logan 93). The IAP allowed survivors to receive compensation in addition to the CEP if they suffered lasting psychological harm due to physical, verbal or mental abuse experienced in the schools. IAP applications were received between 2008 and 2012, and 38,257 claims have been received through the IAP process to date ("Indian Residential Schools").

In 2008, the IRSSA also established the TRC, which had two goals: to "reveal to Canadians the complex truth about the history and the ongoing legacy of the church-run residential schools…" and to "guide and inspire a process of truth and healing, leading toward reconciliation…" (Truth and Reconciliation Commission of Canada 23). Between 2008 and 2015, the TRC travelled across Canada collecting documents and 6,750 statements about residential schools (Truth and Reconciliation Commission of Canada 29). The TRC encountered several obstacles in gathering documents about residential schools, including resistance from the Ontario Provincial Police (OPP) and Library and Archives Canada (LAC), and difficulty obtaining records from the IAP held by the Indian Residential Schools Adjudication Secretariat (IRSAS). The TRC won court cases compelling LAC and the OPP to turn over records relating to residential schools, but the debate over who should house the IAP records has been fraught with ongoing tension (Truth and Reconciliation Commission of Canada 27-28).

The tension between privacy and sharing records of trauma shown in the IAP court case reveals a complex story about reconciliation, privacy and record-keeping in Canada. In 2010, the TRC and IRSSA created a consent form allowing anyone who shared their testimony through the IAP to have their records and testimony archived with the TRC; however, this consent form did not exist when the IAP started, and survivors often failed to understand the difference between the TRC and the IAP (Logan 94). To complicate matters further, the IRSSA "required an undertaking of strict confidentiality of all parties to the IAP hearings, including the Survivors themselves" (Truth and Reconciliation Commission of Canada 28). In 2014, the Chief Adjudicator of the IAP supported a decision that all records from the IAP process be destroyed immediately (Truth and Reconciliation Commission of Canada 28). In the court case which followed, the TRC sought to archive the IAP documents with LAC "as an irreplaceable historical record of the Indian Residential School experience" (Fontaine v. Canada (Attorney General), 2014 ONSC 4585 (CanLII), <http://canlii.ca/t/g8hd3>).

The court ruled that unless a claimant came forward and chose to share their documents with the TRC, IAP records would be destroyed after a 15-year retention period. This decision was appealed to the Court of Appeal for Ontario in 2016, and then to the Supreme Court of Canada in 2017. In both cases, the decision of Justice Perell was upheld and, unless future action takes place, these records will be destroyed by 2027 (Indian Residential Schools Adjudication Secretariat).

Literature Review

A review of the literature reveals a body of scholarship dedicated to shedding light on the importance of record retention after an atrocity takes place. Record collection and preservation after human rights abuses are important steps in the healing and memory-keeping process. However, archives can become a contentious space when various groups lobby for their own best interests and seek to obscure the collective memory to their own advantage. The work of reconciliation or restorative justice relies on accurate assessments of the past, and the creation of archives plays an important role in either advancing or detracting from this work (Logan 92).

Truth-telling as a path to healing remains a widely held consensus among post-conflict scholars; however, Mendeloff challenges these claims as based more on faith than on empirical evidence of peacebuilding (355). The notion that truth-telling fosters reconciliation, promotes healing, deters future recurrence, and prevents historical distortion appears true anecdotally, even if scientifically unverifiable (Mendeloff 356). To this end, TRCs have become a common mechanism of post-conflict restorative justice work.

These commissions aim to reconstruct and repair the fractured social fabric within a conflicted society. Over the past 25 years, TRCs have been commonly employed post-conflict, typically following a similar pattern of providing all sides an opportunity to testify to their own experiences. The accounts are then collated into a unifying history of the human rights abuse, with a goal of recovery as truth is unearthed and societal memory is established. During that period, TRCs have taken place in South Africa, Sierra Leone, Peru, Timor-Leste, Morocco, Liberia, Canada, and Australia, with many more planned for the future (Androff 1964). TRCs typically have little connection with the court system, and in instances where they do, it is in the interest of prosecuting offenders. The South African TRC referred perpetrators to the court system, while Sierra Leone determined beforehand that serious crimes would proceed through the United Nations (UN) and lesser offenses through the TRC (Androff 1969). When the courts are involved, it is to work on behalf of the victims. Regardless of court involvement, TRCs are designed "to produce a coherent, complex, historical narrative about the trauma of the violence and provide victims with the opportunity to participate in the process of post-conflict reconstruction" (Androff 1975).


Wood et al. considered the field of archival studies and provided a critique of current practices in support of human rights work (398). They argue for a more nuanced look at established archival description techniques and ask what it would look like to invest less power in the institution housing the records and more in the people involved. The archival concept of provenance, that is, the ownership or custody of the record, becomes problematic when considering human rights records which hold community value. Citing an example from records of Indigenous Australians, Wood et al. discuss the use of parallel provenance and a participant-driven model that honours the individuals and communities involved in their creation (403). These authors go so far as to describe an iterative recordkeeping process, where records are not static but include the voices of those who preserve, teach, add to, or in any other way become a part of the life of the record (Wood et al. 403). This honours the record as alive and relevant to the life of the community.

When the Supreme Court reached its decision regarding the destruction of the IAP records in 2017, the case received significant national news coverage. Headlines from the Canadian Broadcasting Corporation (CBC) included “Court order to destroy residential school accounts ‘a win for abusers’: [National Centre for Truth and Reconciliation] NCTR director” (Morin), and “Indigenous residential school records can be destroyed, Supreme Court rules” (Harris). Then again in 2019, the case received attention with the development of the My Records, My Choice website, created by IRSAS, which provided the option for record preservation. CBC reported, “New website helps residential school survivors preserve or destroy records” and Aboriginal Peoples Television Network (APTN) National News featured the article “Former TRC chair encourages residential school survivors to save records” (Martens). 

The IAP itself has been studied, and Moran provides a thorough legal review of the IRSSA, which includes a multiple-page summary of the IAP (Moran 531-564). This legal primer sheds light on the nature of the IAP, specifically its design as a high-volume litigative process, with some features curtailed to keep the proceedings claimant-centred. For instance, "perpetrators are not parties and [that] they have 'no right of confrontation' during the hearing. The limited rights of alleged perpetrators were expressly designed to protect that safety and security of the claimant during the stressful hearing process" (Moran 561).

Similarly, Morrissette and Goodwill provide a detailed overview of the IAP process, but from a health and human services perspective (542). Coming from this lens, they speak to the therapeutic relationship between survivors and therapists, and the unique healing requirements that may arise as a result of this process. These authors consider the rationale that led people to submit themselves to what may result in further traumatization: "For some survivors, formal hearings provide an opportunity to finally reveal the truth, describe their experiences, and assist in the prevention of future similar human tragedies and cultural trauma" (Morrissette and Goodwill 555). When a person opens up and expresses their experiences, this can prevent what these authors call a conspiracy of silence surrounding trauma, because "silence is profoundly destructive and can prevent a constructive response from victims, their families, society, and a nation" (Morrissette and Goodwill 555). Reimer has written a lengthy qualitative report on the experiences of those who participated in the CEP process. Although written before the start of the IAP, she asked participants about their thoughts on both the CEP and the proposed IAP. Many reported instances of re-traumatization as a result of participation in the CEP, and some stated that the large amount of paperwork and involvement with lawyers was a deterrent to participation in the IAP (Reimer xiii-xvi). This report was also written before the official court decision regarding the destruction of the IAP records. Further research is needed to investigate how that decision will impact the healing and reconciliation process: will it undermine the important work of traumatic disclosure and thus re-victimize participants? A gap exists in the literature, as the aforementioned works neither sufficiently address these concerns nor anticipate the future destruction of the records. It is therefore important to revisit this issue from legal, ethical, archival, and health and human services perspectives.

Methods

To answer the research question, a literature review was conducted to determine existing gaps surrounding the NCTR, IRSSA and the IAP. Relevant documentation and literature were identified, reviewed, and analyzed. Much of the important information required for this research came from the IRSSA website. This website provides a range of information pertinent to this inquiry ranging from the IAP application guide to My Records, My Choice options to court decisions and legal documents and more. Rawls’ Principles of Justice framework was then applied to assess and analyze the documentation and final court decision. A series of secondary questions were developed and considered to further motivate the process and answer the research question. 

What happened? 

It is important to understand the background that led to the court decision regarding the destruction of IAP records before trying to evaluate the ethical dimensions of this case against Rawls’ Principles of Justice. Background knowledge of the situation can help to provide context for how people were treated and the decisions that were made to move the case forward. Furthermore, this additional insight helps to provide information from diverse perspectives held by various stakeholders. Such insights contribute to building a full image of what happened and who was affected. This is necessary to enable assessment and evaluation against Rawls’ Principles of Justice.  

What arguments led to the court decision? 

Court documentation reveals a variety of conflicting arguments and considerations that led to the final decision to have the IAP records destroyed in 2027. Arguments in favour of destroying the IAP records include promoting the autonomy of survivors (as they have a choice whether to reclaim and preserve their own records or let them be destroyed) and maintaining confidentiality (as survivors were told their records would be destroyed, according to one judge's perspective). Arguments opposed to destroying the records include the importance of preserving this information for future generations as a record of the atrocities that Indigenous people experienced throughout the 19th and 20th centuries, as well as contradictory evidence suggesting that IAP records could be archived. A point of contention in the case was whether or not the language shared with IAP claimants indicated that the records would be archived or destroyed after use; various judges had differing perspectives on this matter. Further, the courts took into account whether the records ought to be considered government records or court records, as government records are subject to "federal privacy, access to information, and archiving legislation" (Canada (Attorney General) v. Fontaine, [2017] 2 SCR 205, 2017 SCC 47 (CanLII), <http://canlii.ca/t/h6jgp>). This consideration is more pertinent to the legal rather than ethical dimensions of the case.

What biases and values need to be considered? 

Biases and values that ought to be considered when addressing this research question include those related to which groups are in power, the diversity of values and perspectives, as well as personal biases.

This analysis focuses on the 2017 court case and how to move forward with the IAP records rather than the initial intention behind this process. Limitations of this research include lack of access to diverse perspectives and the inability to consult the various stakeholders to learn their views. Were we able to survey the survivors, we would have a better sense of their values and desires moving forward and could better consider the relevance of diverse perspectives to Rawls' Principles of Justice. In part due to these limitations, this is an exploration of Rawls in relation to the court proceedings and not a definitive decision on how the case ought to proceed.

Theoretical Framework

Rawls' Principles of Justice are concerned with the ways in which rational persons would structure a society if they were behind a so-called "veil of ignorance" that conceals any knowledge about who they are in the world.

Rawls' first principle, the Principle of Equal Liberty, states that "Each person has an equal right to the most extensive liberties compatible with similar liberties for all" (Garrett). Everyone should have equal opportunity and access when it comes to basic rights and freedoms. Rawls' second principle, the Difference Principle, holds that "Social and economic inequalities should be arranged so that they are both (a) to the greatest benefit of the least advantaged persons, and (b) attached to offices and positions open to all under conditions of equality of opportunity" (Garrett). This principle suggests that a world ought to be structured such that disadvantaged groups or individuals are privileged when it comes to social and economic inequalities (i.e., disadvantaged peoples are given the most benefits when relevant differences exist). To this point, a rational person would not design a world that favours one group over another socially, economically, or otherwise, as this risks the designer being unable to benefit from the social structure.

This framework can be used as a lens through which to view the case at hand and to frame this research question. To put it differently: would a rational person behind a veil of ignorance support the decision regarding the destruction of IAP records?

Findings and Discussion

Considering Stakeholders and Values

In 2017, as part of an appeal, the Supreme Court of Canada ruled that the previous court's decision to destroy IAP records would be upheld. This decision ensures that all IAP records will be destroyed in September 2027, except those preserved through the IRSAS's My Records, My Choice initiative. This initiative permits individuals who made a claim through the IAP to choose among obtaining a copy of their IAP record to keep for themselves or share with others; preserving their record for history, public education, and research at the NCTR with open or restricted access; or doing both. Open access allows the NCTR to share the documents and personal information publicly for reconciliation purposes, whereas restricted access means this information can be shared for purposes such as publication but only if personal information is removed.

Notably, there is no option for claimants to have their IAP documents archived under a retention policy in which the records would be stored privately for a specified number of years after the death of the claimant. The existing options do not provide enough choice for someone who may want their records preserved while maintaining their privacy during their lifetime. If an individual does nothing, their record will be destroyed.

The court decision to destroy these records, except those that are preserved by individual stakeholders, impacts various groups and stakeholders that are sure to have diverse values and perspectives on the matter. Various groups that are impacted by the court’s decision include individuals who went through the IAP; relatives of those who have an IAP record (whether alive or deceased); the church or diocese and state; the NCTR; and future generations.

It is not possible to know the exact views and values of individuals within these groups or even fully understand or appreciate the views and values of these groups as a whole without direct consultation. For the most part, a general sense of the various perspectives can be inferred through review of court documentation, news sources, and the other literature referred to in this paper. 

Generally, the perspective of the church or diocese is that the records should be destroyed, whereas the values and position of the NCTR are that the records ought to be preserved. The reasons for this seem clear. It appears that the churches and dioceses referred to in the court documentation do not want a detailed record of the atrocities for which they are in large part responsible. The NCTR values the importance of acknowledging and preserving records of what occurred in hopes of moving towards reconciliation and a coherent representation of what happened.

From the court documents and literature, it is difficult to determine to what extent individuals with IAP records appreciate that they can make their own decision as to whether their record is preserved or destroyed. On the one hand, these individuals may feel empowered by the fact that they are given control of their own record. On the other hand, individuals in this group may feel that these options are rather limiting and do not address the bigger issue. That is, although the My Records, My Choice initiative appears on the surface to have the aim of empowering individuals to make their own choices, it ultimately fails insofar as it presents a small selection of options. Further, it does not consider the values of various other stakeholders, including future generations, who ought to be informed of the atrocities inflicted upon those who came before.

In accordance with Rawls' Principles of Justice, specifically the Difference Principle, society must more attentively consider the most disadvantaged groups and make decisions that most closely align with benefiting those groups. In this case, identifying the most disadvantaged groups is a challenge in and of itself. It is safe to say that the church and the state are not the most disadvantaged. The most disadvantaged groups are either the individuals with IAP records, who suffered greatly at the hands of the church and state, or future generations, who will lack important insights about the past should the records be destroyed. In accordance with Rawls' Principles of Justice, the decision to destroy IAP records appears harmful to both groups, as will be explored in more detail below.

Evaluating the Court Decision

Rawls' Equal Liberty Principle states that "Each person has an equal right to the most extensive liberties compatible with similar liberties for all" (Garrett). Three rights can be considered in the 2017 court case concerning the IAP records: the right to privacy, the right to know and the right to autonomy. The right to privacy refers to the individual rights of IAP claimants to have their records remain private. The 2017 court case held that the IAP records will be destroyed after 15 years unless an IAP claimant chooses to save their personal records, because "The IRSSA's express terms provided that the IAP Documents would be treated as highly confidential, subject to the very limited prospect of disclosure during a retention period, and then be destroyed" (Canada (Attorney General) v. Fontaine, [2017] 2 SCR 205, 2017 SCC 47 (CanLII), <http://canlii.ca/t/h6jgp>). The case also involves the right to know what happened in the residential schools in order to work towards reconciliation. Although this is a large part of the reason the case made it to the Supreme Court, the 2017 decision remains silent on this particular right, choosing instead to determine whether the supervising judge had the authority to make a decision about the destruction of the records in the first place. Lastly, the case deals with the right to autonomy, or the right of individuals and groups to control their own narrative. This could be interpreted as the right of claimants to determine what happens to their IAP record, or of groups such as the NCTR to use IAP documents to preserve the memory of residential schooling in Canada. Overall, the court case reveals a tension between the individual rights of IAP claimants to autonomy and privacy (including the right to be forgotten) and the collective rights to know what happened and to control the narrative of residential schooling in Canada.

The 2017 court case ruled that IAP documents would be subject to a 15-year retention period in which claimants can opt to keep or share their records with the TRC before all records are destroyed in 2027. This decision attempts to balance the individual right to privacy and the collective right to know by granting the individual right to autonomy to IAP claimants. However, by granting individuals the right to autonomy over their IAP records, this decision determines that the right to privacy and autonomy for individual claimants overrides the collective right to know. Thus, the court case aligns with Rawls' Principle of Equal Liberty insofar as it protects the individual right to privacy and autonomy to control what happens with IAP records. However, the court case protects this right above the collective right to know, which does not align with the Principle of Equal Liberty, as it does not grant the right to know in similar ways to all people in Canada. By destroying the records, the privacy of individuals is upheld, but the long-term effects of the decision do not aid the reconciliation efforts.

Rawls' Difference Principle asserts that society should be organized so that everyone has similar rights, but with the least advantaged persons receiving the greatest benefit from structures within that society, in order to aim for equality. In this case, the least advantaged persons in society can be understood as the IAP claimants and the Indigenous Peoples in Canada who have experienced the intergenerational trauma left by the residential schooling system. If applied, the Difference Principle would seek to ensure that the greatest benefit from this court case goes to IAP claimants and/or Indigenous Peoples in Canada.

Based on these two principles, Rawls' framework assumes that ethical judgements should be made behind a veil of ignorance so that a rational person can reason about what is best for society, including its most disadvantaged groups. The result of the 2017 court case does not align with Rawls' framework because it structurally benefits the advantaged parties in this case. The state and religious organizations benefit from the destruction of the IAP records, which shed light on the abusive acts that took place in the residential schooling system. Moreover, reconciliation efforts by Indigenous Peoples are harmed by the loss of a significant portion of the story of residential schooling in Canada. The IAP records contain over 36,000 testimonies of abuse in residential schools, whereas the TRC collected roughly 6,750 statements about residential schools from survivors, families, church leaders, and others. The loss of these 36,000 records could harm future generations in Canada by contributing to an incomplete picture of the residential schooling system.

Another factor to consider is the limited set of options given to IAP claimants as a result of the 2017 court case, which established the My Records, My Choice initiative tasked with contacting surviving IAP claimants about sharing their records with the NCTR (Indian Residential Schools Adjudication Secretariat). The My Records, My Choice initiative offers IAP claimants only four options:

  • Allow their IAP records to be destroyed by 2027
  • Obtain a copy of their IAP records for themselves 
  • File a copy of their IAP records with the NCTR via open access 
  • File a copy of their IAP records with the NCTR via restricted access 

For a claimant who understands the long-term benefits of archiving their records with the NCTR and who does not wish to publicize their history of abuse, restricted access is the best option available. However, under the restricted access option, "the NCTR may use and share your records with others for purposes such as public education, but only if the NCTR removes your personal information" (My Records, My Choice). Unfortunately, even this option fails to accommodate an IAP claimant who wants their records archived in a way that disallows their use until after their passing.

The options available to IAP claimants as a result of the 2017 case do not benefit the least advantaged people, as many IAP claimants have passed away and will not have a choice about what happens to their records. Nor does the voluntary choice to archive or destroy records ensure that any of them are preserved for reconciliation efforts. When we consider that the IAP records represent possibly the largest collection of transcripts detailing the most serious cases of abuse in the residential schooling system in Canada, the list of options does little to ensure that any of these are kept for the purpose of better understanding residential schooling or used for reconciliation.

What would a rational person behind a veil of ignorance decide to do with the IAP records? It is possible that a person might choose to destroy the records out of respect for the traumatic nature of the information they contain, especially considering that publicizing these records "would have destructive effects on the fabric of existing communities and there would be generations' worth of repercussions for all parties involved" (Logan 94). However, it is important that the person making an ethical judgement in Rawls' framework is rational. By this, we mean that the person is able to make judgements considering other evidence and a variety of viewpoints. A rational person would look at this situation and compare it to other truth and reconciliation efforts worldwide, as well as the long-term effects of destroying or keeping the records.

With that in mind, a rational person behind a veil of ignorance would conclude that the destruction of the IAP records does not serve the long-term interests of reconciliation or collective memory for Indigenous Peoples in Canada. They also would not choose to publicize the records in any way, as this fails to protect the right to privacy of IAP claimants. Thus, they would probably seek a way to balance these rights by keeping all the records, but in a way that keeps confidential information private to avoid repercussions for survivors and their families. This could resemble archiving the records with the NCTR under something like the "restricted access" option available through My Records, My Choice, but might also include a retention policy in which the records can only be used a specific number of years after the claimant has passed away, or only studied with personal information removed.

A rational person would also find that the fate of the records should not rest with a court system run by the state, but with Indigenous Peoples, who should decide how they want to deal with the records. One of the largest problems in this case was that it focused on who had the authority to make decisions about the fate of the IAP records instead of on the effects of keeping or destroying the records in the first place. For example, the 2017 Supreme Court case was started by the Attorney General of Canada, who appealed the 2016 decision to destroy the IAP records, arguing that the supervising judge had no jurisdiction to make that decision: "The Attorney General of Canada appeals to this Court, arguing that the IAP Documents are "under the control of a government institution" within the meaning of the Access to Information Act, the Privacy Act and the Library and Archives of Canada Act, and that the supervising judge had no jurisdiction to order their destruction" (Canada (Attorney General) v. Fontaine, 2017 SCC 47, p. 209-210). The court case spends considerable time defending the right of the IRSAS to make decisions about the fate of the IAP records.

Overall, this is a complex issue and no solution will balance the rights of all stakeholders equally; however, keeping the records and archiving them in a way that protects the privacy of IAP claimants would benefit Indigenous Peoples, as the most disadvantaged group, in the long term. It does not benefit individual IAP claimants in the short term, as their right to autonomy over how their records are used is overridden by the right to know; however, a strong privacy and retention policy could serve their right to privacy better than the current forms of restricted and open access archiving available through My Records, My Choice. This conclusion is also limited by the fact that it treats Indigenous Peoples as a homogeneous group, since keeping the records is viewed as more useful for reconciliation efforts in the future.

Archives and Indigenous Knowledge

This work seeks to synthesize known research rather than prescribe a framework for Indigenous archival practices. The particular strategy employed may differ according to geography, local practices, teachings, and community priorities. It is important to acknowledge that alternative approaches to archival techniques exist which take into account Indigenous methodologies and epistemologies.

Little has been written specifically about Indigenous archiving in Canada. An Australian Research Council funded project, Trust and Technology: Building Archival Systems for Indigenous Oral Memory, did important work with the Koorie people of Australia and their archives. Its stated goal was to encourage the Australian archival profession to "understand the priorities of Indigenous communities and embrace Indigenous frameworks of knowledge, memory and evidence, including knowledge that is stored and transmitted orally" (McKemmish et al. 212). The researchers aimed to build trust between Indigenous communities and archival services. Indigenous Australians felt wary of archival institutions, as these had in the past been places where information about them was kept and held against them. Two competing themes emerged: the people desired control of and access to their own records as owners, while the government resisted on the grounds that it felt the records belonged to it (McKemmish et al. 219). This approach is appropriate for consideration within a Canadian context as well. Out of this work, the researchers devised seven statements of principle, which Canada would be wise to consider when coordinating archival materials with Indigenous communities. Of particular relevance to this case are Principle 2 and Principle 4, which stress the recognition of rights in records and the adoption of holistic approaches. These principles assert that Indigenous Peoples should have a voice in what happens with records created about them and their experiences. Additionally, community-controlled archival systems should provide a means for bringing together residential school records in a manner in keeping with individual and community wishes (McKemmish et al. 231). The remaining principles are also worth noting and would require further research to establish their relevance to the Canadian context. These principles are as follows:

  • Principle 1: Recognition of all archival sources of Indigenous Knowledge
  • Principle 2: Recognition of rights in records
  • Principle 3: Recognition of rights in legal and archival frameworks
  • Principle 4: Adoption of holistic, community-based approaches to Indigenous archiving
  • Principle 5: Recognition of need for Indigenous people to challenge ‘‘official’’ records
  • Principle 6: Recognition of need for inclusive education and training for recordkeeping professional practice
  • Principle 7: Reconciling research, rethinking the relationship between academia and Indigenous communities (McKemmish et al. 230-231).

Much of the literature surrounding residential schools has focused on transitional justice and lacks consideration of other approaches. Augustine Park, a Canadian restorative justice researcher, seeks to fill this gap by looking at community-based restorative work in response to residential school trauma (425). Although there is no set definition or framework for a one-size-fits-all approach to restorative justice work and residential schools, Park's definition of restorative justice is helpful: "(1) justice practices that originate and take place within or between communities, (2) involving the participation of stakeholders and (3) which work to validate victims/survivors, encourage wrongdoer responsibility and transform relationships" (Park 427). This approach works to decentre the state as the sole arbiter of justice and healing after residential schools (Park 427). One of the goals of restorative justice work in Indigenous communities is "striving to teach decolonizing truths" (Park 440). When the IAP case is considered through a restorative justice framework, the destruction of IAP records negates the valuable opportunity to use them within community justice and healing work. In contrast, transitional justice typically involves state-run and judicial approaches to healing. Thus, the IRSSA is an example of transitional justice because it looks to superficially or broadly address large-scale historical wrongs rather than taking a more nuanced, community-based approach. Of importance to the topic at hand is the contention "that community-based restorative justice (CBRJ) presents a locally meaningful alternative to official (state-sponsored) transitional justice responding to mass violence. In Canada, the failings of the official transitional justice apparatus point to the need for community-based alternatives" (Park 425). Destroying the IAP records may prove antithetical to the goal of Indigenous community-based restorative justice work.

Recommendations 

Moving forward, it is recommended that this case be used as an example of what ought not to be done in human rights cases: presenting disadvantaged groups with limited options and seemingly disregarding significant values such as preserving history and sharing truth and knowledge with future generations. Additionally, the groups in power, in this case the church and state, ought not to be in full control of the proceedings and processes. Especially in a case like this, alternative methodologies and ways of doing things (such as preserving knowledge and archiving) ought to be considered. Based on the research, the following considerations are offered for discussion:

  • Learn from this situation and let it inform future human rights archival cases, especially when sensitive records are involved
  • Recognize that records hold value to people and communities
  • Provide additional options in the present moment; refrain from presenting survivors with a false dichotomy 
  • Explore Indigenous archival mechanisms as an alternative for the IAP records
  • Incorporate IAP records into existing or developing community-based restorative justice projects where possible

Conclusion

The Supreme Court's decision to destroy IAP records, aside from those preserved through My Records, My Choice, does not sufficiently benefit IAP claimants, residential schooling survivors, or future generations. This is because the destruction of these records, in some sense, benefits the perpetrators rather than bringing to light the truth about the atrocities that occurred in Canada in the 19th and 20th centuries. In accordance with Rawls' Principles of Justice, it is important that these groups, being the most disadvantaged, are privileged when it comes to the Supreme Court's ruling. Numerous ways in which the court proceedings align (or do not align) with Rawls' Principles of Justice have been identified and explored. This exploration of the Supreme Court's ruling in conjunction with Rawls' Principles of Justice is limited insofar as the values and perspectives of actual stakeholders could not be directly obtained.

Glossary of Terms

APTN- Aboriginal Peoples Television Network

CBC – Canadian Broadcasting Corporation 

CEP- Common Experience Payment

IAP – Independent Assessment Process

IRSAS – Indian Residential Schools Adjudication Secretariat

IRSSA – Indian Residential Schools Settlement Agreement  

LAC – Library and Archives Canada 

NCTR – National Centre for Truth and Reconciliation 

OPP – Ontario Provincial Police 

TRC – Truth and Reconciliation Commission of Canada

UN – United Nations 

Works Cited 

Androff, David K. “Truth and Reconciliation Commissions (TRCs): An International Human Rights Intervention and its Connection to Social Work.” The British Journal of Social Work, vol. 40, no. 6, 2010, pp. 1960-1977. doi:10.1093/bjsw/bcp139

Canada (Attorney General) v. Fontaine, [2017] 2 SCR 205, 2017 SCC 47 (CanLII), <http://canlii.ca/t/h6jgp>

CBC News. “New Website Helps Residential School Survivors Preserve or Destroy Records.” CBC News, 14 January 2019. https://www.cbc.ca/news/canada/saskatchewan/new-residential-school-website-survivors-records-decision-1.4977258. Accessed Apr. 2019.

Fontaine v. Canada (Attorney General), 2014 ONSC 4585 (CanLII), <http://canlii.ca/t/g8hd3>

Garrett, Jan. John Rawls on Justice. Western Kentucky University, 3 Sept. 2002, https://people.wku.edu/jan.garrett/ethics/johnrawl.htm#prin. Accessed 14 Apr. 2019. 

Harris, Kathleen. “Indigenous Residential School Records Can Be Destroyed, Supreme Court Rules.” CBC News, 6 Oct. 2017, https://www.cbc.ca/news/politics/indian-residential-schools-records-supreme-court-1.4343259. Accessed 10 Apr. 2019.

Indian Residential Schools Adjudication Secretariat (IRSAS), http://iap-pei.ca/home-eng.php. Accessed Apr. 2019.

“Indian Residential Schools.” Indigenous and Northern Affairs Canada, 2019, https://www.aadnc-aandc.gc.ca/eng/1100100015576/1100100015577#sect1. Accessed Apr. 2019.

Kovach, Margaret. Indigenous Methodologies: Characteristics, Conversations and Contexts. University of Toronto Press, 2009.

Logan, Tricia. “Questions of Privacy and Confidentiality after Atrocity: Collecting and Retaining Records of the Residential School System in Canada.” Genocide Studies International, vol. 12 no. 1, 2018, pp. 92-102. 

Martens, Kathleen. “Former TRC Chair Encourages Residential School Survivors to Save Records.” APTN National News, 6 January 2019, https://aptnnews.ca/2019/01/16/former-trc-chair-encourages-residential-school-survivors-to-save-records/. Accessed Apr. 2019.

McKemmish, Sue, Shannon Faulkhead, and Lynette Russell. “Distrust in the Archive: Reconciling Records.” Archival Science, vol. 11, no. 3-4, 2011, pp. 211-239. doi:10.1007/s10502-011-9153-2

Mendeloff, David. “Truth-Seeking, Truth-Telling, and Postconflict Peacebuilding: Curb the Enthusiasm?” International Studies Review, vol. 6, no. 3, 2004, pp. 355-380. doi:10.1111/j.1521-9488.2004.00421.x. 

Moran, Mayo. “The Role of Reparative Justice in Responding to The Legacy of Indian Residential Schools.” The University of Toronto Law Journal, vol. 64, no. 4, 2014, pp. 529–565. doi:10.3138/utlj.2505. 

Morin, Brandi. “Court Order to Destroy Residential School Accounts ‘A Win for Abusers’: NCTR Director.” CBC News, 6 October 2017.
https://www.cbc.ca/news/indigenous/court-order-destroy-residential-school-accounts-1-.4344918. Accessed Apr. 2019.

Morrissette, Patrick J., and Alanaise Goodwill. “The Psychological Cost of Restitution: Supportive Intervention with Canadian Indian Residential School Survivors.” Journal of Aggression, Maltreatment & Trauma, vol. 22, no. 5, May 2013, pp. 541-558. doi: 10.1080/10926771.2013.785459.

My Records, My Choice. Indian Residential Schools Adjudication Secretariat, May 2019, http://www.iap-pei.ca/records-eng.php. Accessed May 2019.

Park, Augustine S. J. “Remembering the Children: Decolonizing Community-Based Restorative Justice for Indian Residential Schools.” Contemporary Justice Review, vol. 19, no. 4, Dec. 2016, pp. 424-444. doi:10.1080/10282580.2016.1226818. 

Reimer, Gwen. The Indian Residential Schools Settlement Agreement’s Common Experience Payment and Healing: A Qualitative Study Exploring Impacts on Recipients. Aboriginal Healing Foundation, 2010.

Smith, Linda T. Decolonizing Methodologies: Research and Indigenous Peoples. Zed Books, 2012.

Truth and Reconciliation Commission of Canada. Honouring the Truth, Reconciling for the Future: Summary of the Final Report of the Truth and Reconciliation Commission of Canada. Truth and Reconciliation Commission of Canada, 2015.

Wood, Stacy, et al. “Mobilizing Records: Re-Framing Archival Description to Support Human Rights.” Archival Science, vol. 14, no. 3, 2014, pp. 397-419. doi:10.1007/s10502-014-9233-1. 

Préserver l’héritage des pensionnats indiens en s’appuyant sur les principes de Rawls
Alissa Droog, Dayna Yankovich et Laura Sedgwick – Titulaires d’une maîtrise en bibliothéconomie et en sciences de l’information, Université Western

 

Estimated reading time: 38 minutes, 34 seconds. Contains 7714 words

Introduction

Les peuples autochtones du Canada ont été victimes de sévices. Le système de pensionnats indiens en est un exemple notable; un nombre incalculable d’enfants autochtones ont été séparés de leur famille et forcés de fréquenter ces établissements où, dans de nombreux cas, ils ont été victimes de mauvais traitements d’ordre physique, sexuel ou affectif. Afin de faire amende honorable, le gouvernement canadien a mis en œuvre des initiatives pour favoriser la vérité et la réconciliation. Les mesures prises pour jeter la lumière sur ce chapitre de l’histoire ont soulevé des questions quant à la manière de traiter les documents, en particulier les témoignages recueillis dans le cadre du Processus d’évaluation indépendant (PEI) faisant état des sévices subis dans les pensionnats indiens. En 2017, la Cour suprême du Canada a statué que ces documents devaient être détruits. 

Le présent essai se penche sur cette décision controversée en appliquant les principes de justice de John Rawls. Rawls est un philosophe politique américain du XXe siècle qui, dans le but d’élaborer un cadre théorique permettant de comprendre les notions de justice et d’équité au sein de la société, a introduit deux principes de base : le principe de liberté et d’égalité, et le principe de différence. Selon ces principes, toutes les personnes doivent posséder des droits égaux et, lorsque des disparités existent, les structures sociales doivent avantager les groupes les moins favorisés (Garrett). Nous examinerons donc la marginalisation des peuples autochtones par la société canadienne sous l’angle des principes de justice de Rawls, puisque les contextes d’inégalité de pouvoir permettent de faire ressortir efficacement des thèmes comme la justice et l’équité dans une société aux groupes et aux valeurs diversifiés. L’application des principes de Rawls à cette cause suggère que la décision du tribunal d’autoriser la destruction des dossiers du PEI ne sert ni les victimes ni les efforts de réconciliation. De plus, ce n’est pas ainsi qu’il faut traiter les affaires touchant les droits de la personne, car une telle résolution ne profite pas aux groupes les moins favorisés.

Question de recherche

Comment la décision de la Cour suprême du Canada de 2017 (Canada [Procureur général] c. Fontaine, 2017 CSC 47 [CanLII], [2017] 2 RCS 205 <http://canlii.ca/t/h6jgq>) donnant lieu à la destruction des dossiers du Processus d’évaluation indépendant peut-elle être évaluée en tenant compte des principes de justice de Rawls?

Contexte 

Plus de 150 000 enfants des Premières Nations, des Inuits et des Métis ont fréquenté les pensionnats indiens, un régime en place pendant plus de 150 ans (Commission de vérité et réconciliation du Canada [CVR], p. 3). Dans de nombreux cas, ces enfants ont été arrachés à leurs parents afin d’être placés dans ces établissements financés par le gouvernement canadien et gérés par des organisations religieuses. Beaucoup d’entre eux ont péri ou subi des sévices. C’est à partir des années 1990 et 2000 que les survivants de sévices ont commencé à porter leur cause devant les tribunaux et, en 2007, le gouvernement canadien a adopté la Convention de règlement relative aux pensionnats indiens (CRRPI). Cette entente a entraîné le versement d’indemnisation aux anciens élèves des pensionnats, la mise en œuvre de mesures de soutien, la tenue d’activités de commémoration et la mise sur pied de la Commission de vérité et réconciliation (CVR) (« Résolution des pensionnats indiens »). 

Les survivants de sévices ont eu droit à deux formes d’indemnisation : le Paiement d’expérience commune (PEC), une compensation financière selon le nombre d’années de fréquentation dans un pensionnat versé à tous les anciens élèves, ainsi qu’une indemnité supplémentaire déterminée par le Processus d’évaluation indépendant (PEI). Plus de 79 000 survivants ont réclamé le montant de base de 10 000 $ accordé pour la première année scolaire passée dans un pensionnat, ainsi que le montant supplémentaire de 3 000 $ pour chaque année supplémentaire de fréquentation (Logan, p. 93). De plus, le PEI a permis aux anciens élèves dont la violence physique, verbale ou psychologique subie a entraîné de graves séquelles de recevoir une indemnité supplémentaire au PEC. Parmi les demandes au titre du PEI reçues entre 2008 et 2012, 38 257 réclamations ont été traitées à ce jour (« Résolution des pensionnats indiens »). 

En 2008, la CRRPI a également établi la CVR, qui s’est dotée de deux objectifs : « révéler aux Canadiens la vérité complexe sur l’histoire et les séquelles durables des pensionnats dirigés par des Églises […] » et « orienter et inspirer un processus de témoignage et de guérison, qui devrait aboutir à la réconciliation […] » (Commission de vérité et réconciliation du Canada, p. 27). Entre 2008 et 2015, la CVR a parcouru le Canada pour recueillir des documents et 6 750 témoignages de survivants des pensionnats indiens (Commission de vérité et réconciliation du Canada, p. 30). Il lui a fallu surmonter plusieurs obstacles, dont les réticences de la Police provinciale de l’Ontario (PPO) et de Bibliothèque et Archives Canada (BAC), ainsi que la difficulté d’accessibilité des dossiers du PEI détenus par le Secrétariat d’adjudication des pensionnats indiens (SAPI). La CVR a eu gain de cause pour que BAC et la PPO fournissent leurs dossiers relatifs aux pensionnats, mais la question de la tenue des dossiers du PEI, à savoir où ils seraient consignés, a fait l’objet de nombreux débats (Commission de vérité et réconciliation du Canada, p. 31-32). 

La zone de tension entre la confidentialité et l’accessibilité aux documents traitant de traumatismes que fait ressortir la cause du PEI révèle une histoire complexe sur la réconciliation, la protection de la vie privée et la tenue d’archives au Canada. En 2010, la CVR et la CRRPI ont créé un formulaire de consentement permettant à toute personne ayant témoigné dans le cadre du PEI d’autoriser la CVR à consigner ses documents et son témoignage. Toutefois, ce formulaire n’existait pas au moment de la mise en œuvre du PEI, et bon nombre de survivants n’avaient alors pas bien compris la différence entre la CVR et le PEI (Logan, p. 94). Pour compliquer davantage les choses, la CRRPI a « exigé que toutes les parties à une audience du PEI, y compris les survivants, soient soumises à un engagement de confidentialité stricte » (Commission de vérité et réconciliation du Canada, p. 32). En 2014, l’adjudicateur en chef du PEI a annoncé qu’il était pour la destruction immédiate des dossiers du PEI (Commission de vérité et réconciliation du Canada, p. 33). Dans le procès qui a suivi, la CVR demandait que BAC conserve les dossiers du PEI « en tant que documents historiques irremplaçables témoignant de l’expérience vécue dans les pensionnats indiens [traduction libre] » (Fontaine c. Canada [Procureur général], 2014 ONSC 4585 [CanLII], <http://canlii.ca/t/g8hd3>).

Le tribunal a statué qu’à moins que le requérant se manifeste et consente à l’archivage de son dossier par la CVR, tous les documents du PEI seront détruits après une période de conservation de 15 ans. Cette décision a été portée en appel devant la Cour d’appel de l’Ontario en 2016, puis de nouveau devant la Cour suprême du Canada en 2017. Dans les deux instances, la décision du juge Perell a été maintenue et, à moins que d’autres mesures ne soient prises, tous les documents seront détruits d’ici 2027 (Secrétariat d’adjudication des pensionnats indiens). 

Examen de la documentation

 Une analyse documentaire révèle l’existence d’un grand nombre de recherches visant à démontrer l’importance de la conservation des dossiers après que des atrocités ont été commises. La collecte et la conservation des documents dans un contexte de violation des droits de la personne constituent des étapes importantes du processus de guérison et de préservation de la mémoire. Toutefois, la question des archives suscite la controverse lorsque plusieurs groupes font pression pour faire primer leurs intérêts et tentent de manipuler la mémoire collective à leur avantage. Le travail de réconciliation ou de justice réparatrice repose sur des évaluations exactes du passé, et la création d’archives joue un rôle important dans la réussite ou l’échec de cette mission (Logan, p. 92). 

Les experts qui analysent les contextes d’après-conflit s’entendent généralement sur le fait que la divulgation de la vérité mène à la guérison. Cependant, Mendeloff conteste ce consensus en affirmant qu’il repose davantage sur la foi que sur des preuves empiriques de consolidation de la paix (p. 355). La notion selon laquelle la divulgation des faits favorise la réconciliation et la guérison, dissuade la récidive et prévient la déformation historique semble plausible, mais elle n’a pas été démontrée empiriquement (Mendeloff, p. 356). Les commissions de vérité et réconciliation n’en sont pas moins devenues des mécanismes courants dans un contexte de justice réparatrice après conflit. 

Ces commissions visent à reconstruire et à réparer le tissu social au sein d’une société divisée. Au cours des 25 dernières années, les gouvernements ont souvent eu recours à ce type de commission dans des contextes post-conflictuels afin de permettre à toutes les parties de témoigner de leur expérience. Les témoignages recueillis forment un tout relatant une histoire de violation des droits de l’homme qui appelle au redressement à mesure que la vérité est révélée et que la mémoire sociétale se constitue. De plus, au cours du dernier quart de siècle, des commissions de vérité et réconciliation ont vu le jour en Afrique du Sud, en Sierra Leone, au Pérou, au Timor-Leste, au Maroc, au Libéria, au Canada et en Australie, et beaucoup d’autres sont à venir (Androff, p. 1964). Celles-ci ont généralement peu à voir avec le système judiciaire, sauf dans les cas où la situation nécessite que les contrevenants fassent l’objet de poursuites. Par exemple, la Commission de vérité et réconciliation de l’Afrique du Sud a demandé que les auteurs de crime soient traduits en justice, et la Sierra Leone a déterminé à l’avance que les crimes graves relèveraient de la compétence des Nations Unies (ONU) et les infractions de gravité moindre de celle de la Commission (Androff, p. 1969). Si les tribunaux sont interpellés, c’est dans le but de soutenir les victimes. Quelle que soit la participation des tribunaux, les commissions de vérité et réconciliation sont conçues « pour produire un récit cohérent, complexe et historique des traumatismes découlant de la violence et pour permettre aux victimes de prendre part au processus de reconstruction post-conflit [traduction libre] » (Androff, p. 1975).


Wood et coll. se sont penchés sur le domaine de l’archivistique et ont formulé une critique des pratiques actuelles en matière de défense des droits de la personne (p. 398). Ils remettent en question les techniques d’archivage établies et se demandent si une diminution du pouvoir des établissements chargés de la consignation des documents au profit des personnes concernées serait bénéfique. Le concept archivistique de provenance, c’est-à-dire la propriété ou la consignation des archives, devient problématique lorsqu’il est question de documents relatifs aux droits de la personne qui ont une valeur communautaire. Citant en exemple des archives d’Autochtones de l’Australie, Wood et coll. abordent le recours au principe de la provenance parallèle et un modèle axé sur les participants pour rendre hommage aux personnes et aux collectivités responsables de la création de ces documents (p. 403). Ils vont jusqu’à décrire un processus itératif de tenue d’archives, où les documents ne sont pas statiques et comprennent les voix de ceux qui y contribuent aux fins de préservation, d’enrichissement ou d’enseignement ou qui, d’une tout autre façon, viennent à en faire partie (Wood et coll., p. 403). Cela souligne le caractère vivant et la pertinence des archives pour la communauté. 

Lorsque la Cour suprême a tranché en faveur de la destruction des dossiers du PEI en 2017, l’affaire a fait l’objet d’une importante couverture médiatique nationale. La Société Radio-Canada (SRC) a publié sur son réseau anglais des articles intitulés « Court order to destroy residential school accounts ‘a win for abusers’: NCTR director » (Morin) et « Indigenous residential school records can be destroyed, Supreme Court rules » (Harris). L’affaire a de nouveau fait les manchettes en 2019, lorsque le SAPI a lancé le site Web Mes dossiers, Mon choix afin de permettre aux anciens élèves des pensionnats de décider du sort de leurs dossiers. Dans l’actualité anglaise, mentionnons l’article de la SRC intitulé « New website helps residential school survivors preserve or destroy records » ainsi que celui du Réseau de télévision des peuples autochtones (APTN) : « Former TRC chair encourages residential school survivors to save records » (Martens). 

 Le PEI a lui-même fait l’objet d’études, et Moran le soumet à un examen juridique approfondi comportant un résumé du processus de plusieurs pages (p. 531-564). Cette analyse d’un point de vue juridique met en lumière la nature du PEI, en particulier sa conception en tant que processus judiciaire à volume élevé dont la portée est limitée pour que le litige demeure centré sur les plaignants. Par exemple, « les contrevenants ne sont pas des parties et n’ont ‘‘aucun droit de confrontation’’ pendant l’audience. Les droits des présumés contrevenants sont limités pour assurer la sûreté et la sécurité des plaignants pendant la dure épreuve que constitue le processus d’audience [traduction libre] » (Moran, p. 561). 

Dans le même ordre d’idées, Morrissette et Goodwill donnent un aperçu détaillé du PEI, mais du point de vue de la santé publique et des services sociaux (p. 542). Ils se penchent sur la relation thérapeutique entre les survivants et les thérapeutes ainsi que sur les besoins uniques en matière de guérison qui peuvent découler de ce processus. Ils se questionnent aussi sur les raisons qui amènent les gens à se soumettre à un processus susceptible d’exacerber leur traumatisme : « Pour certains survivants, une audience officielle est l’occasion de révéler la vérité, de décrire leur expérience et de contribuer à la prévention de futures tragédies humaines et traumatismes culturels similaires [traduction libre] » (Morrissette et Goodwill, p. 555). Le fait de s’ouvrir et de raconter son vécu peut empêcher ce que Morrissette et Goodwill appellent une conspiration du silence autour d’un traumatisme, puisque « le silence est profondément destructeur et peut empêcher une réaction constructive de la part des victimes, de leur famille, de la société et d’une nation » [traduction libre] (p. 555).

Reimer a rédigé un long rapport qualitatif sur les expériences des personnes qui ont participé au processus de PEC. Bien qu’elle ait rédigé ce rapport avant la mise en œuvre du PEI, elle a demandé aux participants ce qu’ils pensaient du PEC et du PEI proposés. Plusieurs ont mentionné que l’expérience avait ravivé leur traumatisme. Certains ont indiqué que la montagne de paperasse à remplir et les interactions avec les avocats dissuadaient les gens de participer au PEI (Reimer, p. xv-xviii). Ce rapport a toutefois été rédigé avant la décision officielle du tribunal concernant la destruction des documents du PEI. D’autres recherches sont nécessaires pour évaluer l’incidence de cette décision sur le processus de guérison et de réconciliation : la décision de la Cour suprême aura-t-elle pour effet de miner l’important travail de divulgation des traumatismes et de victimiser à nouveau les survivants? La documentation présente des lacunes : les travaux précités qui décrivent le PEI ne se penchent pas suffisamment sur l’incidence qu’aura la destruction des documents. Il sera donc important de poursuivre la réflexion en tenant compte des aspects juridiques, éthiques et archivistiques, de même que des domaines de la santé publique et des services sociaux. 

Méthodologie

Pour répondre à la question de recherche, nous avons procédé à un examen de la documentation afin de déceler les lacunes qui subsistent au sein du Centre national pour la vérité et la réconciliation (CNVR), de la CRRPI et du PEI, et nous avons recueilli, examiné et analysé tous les documents pertinents. Une grande partie des renseignements nécessaires à notre recherche proviennent du site Web de la CRRPI, qui nous a donné accès à une foule de ressources utiles, dont le guide pour les demandes au titre du PEI; le site Web Mes dossiers, Mon choix; les décisions des tribunaux et les documents juridiques. Aux fins d’analyse, nous avons ensuite appliqué les principes de justice de Rawls à la documentation et à la décision définitive du tribunal. De plus, nous avons soulevé et examiné une série de questions secondaires pour appuyer davantage notre argument et mieux répondre à la question de recherche. 

Que s’est-il passé? 

Il est important de comprendre le contexte qui a mené à la décision du tribunal en faveur de la destruction des dossiers du PEI avant d’évaluer la question d’un point de vue éthique en s’appuyant sur les principes de justice de Rawls. Une connaissance des événements qui ont mené à ce jugement aide à mettre en contexte la manière dont les gens ont été traités et les décisions prises pour faire avancer le dossier. De plus, une exploration des circonstances permet de faire ressortir les points de vue variés des différentes parties prenantes. Une telle lucidité contribue à dresser un portrait détaillé des événements et des personnes touchées – un processus nécessaire pour permettre l’analyse de la situation au regard des principes rawlsiens. 

Quels arguments ont mené à la décision du tribunal? 

Les documents judiciaires révèlent une variété d’arguments et de considérations contradictoires ayant mené à la décision ultime de détruire les dossiers du PEI d’ici 2027. Les arguments en faveur de la destruction des dossiers étaient les suivants : la promotion de l’autonomie des survivants (puisqu’ils ont le choix de récupérer ou de consigner leur dossier, ou d’en autoriser la destruction) et le maintien de la confidentialité (puisque les survivants avaient été informés que leurs documents seraient détruits, selon les dires d’un juge). Les arguments s’opposant à la destruction des documents comprenaient l’importance de conserver ces informations pour les générations futures comme preuve des atrocités subies par les Autochtones aux XIXe et XXe siècles, et la présence de preuves contradictoires concernant la consignation des dossiers. Un point litigieux dans cette affaire consistait à déterminer si les requérants du PEI avaient été clairement informés de l’archivage ou de la destruction des documents après usage. Les différents juges ne s’entendaient pas sur cette question. En outre, les tribunaux se sont interrogés afin d’établir si les documents doivent être considérés comme des documents gouvernementaux ou des documents judiciaires, car les documents gouvernementaux sont « soumis aux lois fédérales en matière de protection des renseignements personnels, d’accès à l’information et d’archivage » (Canada [Procureur général] c. Fontaine, 2017 CSC 47 [CanLII], [2017] 2 RCS 205 <http://canlii.ca/t/h6jgq>). La pertinence de ce questionnement devient évidente si l’on examine l’affaire du point de vue du droit plutôt que de celui de l’éthique.

 

Quels biais et valeurs doivent être pris en compte? 

Pour répondre à cette question de recherche, il faut entre autres tenir compte des biais et valeurs associés à ceux qui détiennent le pouvoir, de la diversité des intérêts et des points de vue, ainsi que des préjugés personnels.

La présente analyse porte sur l’affaire judiciaire de 2017 et l’approche à adopter en ce qui concerne les dossiers du PEI, plutôt que sur l’intention initiale qui sous-tend ce processus. Il convient de souligner certaines limites auxquelles se heurte la présente recherche, dont le manque d’accès aux divers points de vue et l’impossibilité de consulter les parties prenantes afin de connaître leur opinion. S’il était possible de sonder les survivants, nous aurions une meilleure idée de leurs valeurs et de l’approche qu’ils aimeraient voir adopter. En outre, nous pourrions mieux examiner la pertinence des différents points de vue par rapport aux principes de justice de Rawls. En partie en raison de ces limites, la présente analyse est une exploration des procédures judiciaires à la lumière des principes de Rawls et non pas une recommandation sur la manière de procéder dans cette affaire.

Cadre théorique

Les principes de justice de Rawls s’intéressent à la manière dont des personnes rationnelles structureraient une société si elles se trouvaient derrière un « voile d’ignorance », c’est-à-dire si elles étaient dépourvues de toute connaissance de qui elles sont dans le monde. 

Le premier principe de Rawls, le principe de liberté et d’égalité, veut que « chaque personne ait un droit égal aux libertés les plus étendues compatibles avec la liberté des autres [traduction libre] » (Garrett). Chacun devrait donc bénéficier des mêmes chances et d’un accès égal lorsqu’il est question des droits et libertés fondamentaux. Le deuxième principe, le principe de différence, pose que « les inégalités sociales et économiques doivent être agencées de manière à ce qu’elles soient (a) au plus grand bénéfice des moins favorisés et (b) rattachées à des possibilités ouvertes à tous dans des conditions d’égalité des chances [traduction libre] » (Garrett). Selon les principes de Rawls, un monde devrait être structuré de manière à améliorer le sort des individus et des groupes défavorisés en vue de remédier aux inégalités sociales et économiques (c.-à-d. que les plus démunis doivent être les plus avantagés lorsqu’il existe des différences importantes). De ce point de vue, une personne rationnelle ne concevrait pas un monde qui favorise un groupe par rapport à un autre sur les plans social, économique ou autre, puisqu’elle risquerait de ne pas pouvoir tirer avantage de la structure sociale. 

Nous pouvons examiner le litige dont il est question sous l’angle de la théorie de Rawls et nous en servir comme cadre théorique pour notre thèse. Autrement dit, une personne rationnelle derrière un voile d’ignorance appuierait-elle la décision des tribunaux en faveur de la destruction des documents du PEI? 

Discussion et constat

Prise en compte des parties prenantes et de leurs valeurs

En 2017, dans le cadre d’un appel, la Cour suprême du Canada a maintenu la décision de première instance en faveur de la destruction des documents du PEI. Cette décision autorise la destruction des documents d’ici septembre 2027, sauf si les survivants consentent à leur préservation dans le cadre de l’initiative Mes dossiers, Mon choix du SAPI. Cette initiative permet aux requérants du PEI d’obtenir une copie de leurs documents en vue de les conserver ou de les partager, d’autoriser le CNVR à archiver leurs documents à des fins historiques ou de recherche et d’éducation publique, ou d’opter pour ces deux choix. Les requérants peuvent consentir à un accès libre (le CNVR pourra rendre publics les documents et les renseignements personnels aux fins de réconciliation) ou à un accès restreint (l’utilisation des renseignements est permise à certaines fins comme la publication, mais seulement si les renseignements personnels sont supprimés). 

Il convient de noter qu’un requérant ne peut pas autoriser l’archivage de ses documents en vertu d’une politique de préservation qui permettrait seulement la divulgation des informations au bout d’un certain nombre d’années après son décès. Les options proposées ne constituent pas un choix suffisant pour une personne qui choisirait de conserver ses documents tout en protégeant sa vie privée. De plus, les personnes qui ne soumettent pas de demande verront leurs documents détruits.

L’ordonnance de la cour visant la destruction des documents (sauf si le plaignant consent à leur préservation) a des répercussions sur plusieurs groupes et parties prenantes dont les valeurs et les points de vue sont assurément divergents. Les divers groupes touchés par la décision du tribunal comprennent les requérants du PEI, leurs proches, l’État et les églises et diocèses concernés, le CNVR et les générations futures.

Sans consulter directement les individus appartenant à ces groupes, il n’est pas possible de connaître tous leurs points de vue et valeurs ni même de les comprendre ou de les juger dans leur ensemble. Il est tout de même possible de se faire une idée générale des différents points de vue lorsqu’on examine les documents judiciaires, les articles de presse et les autres ouvrages mentionnés dans le présent essai. 

De façon générale, les églises et diocèses sont en faveur de la destruction des documents, alors que le CNVR, compte tenu de sa position et de ses valeurs, s’y oppose. Les raisons qui expliquent ce constat semblent évidentes. Les églises et diocèses mentionnés dans les documents judiciaires veulent éviter la préservation de documents détaillant les atrocités dont ils sont en grande partie responsables. Le CNVR, dans l’espoir de progresser vers la réconciliation et une représentation cohérente des événements passés, considère qu’il est important de reconnaître les témoignages entendus et de les préserver.

L’examen de la documentation et des documents judiciaires permet difficilement de déterminer dans quelle mesure les requérants du PEI sont reconnaissants de pouvoir décider du sort de leurs documents. D’une part, ces personnes peuvent se sentir habilitées par le fait qu’elles ont le contrôle de leur propre dossier. D’autre part, elles peuvent avoir l’impression que les choix proposés sont restrictifs et n’apportent pas de solution au vrai problème. Cela dit, bien que l’initiative Mes dossiers, Mon choix semble à première vue vouloir donner le pouvoir aux survivants, elle faillit à sa mission en n’offrant pas assez d’options. De plus, elle ne tient pas compte des valeurs d’autres parties prenantes, y compris les générations futures, qui se doivent d’être informées des atrocités infligées à ceux et celles qui les ont précédées. 

Selon les principes de justice de Rawls, en particulier celui de la différence, la société doit privilégier les groupes les plus défavorisés et adopter les décisions qui leur seront le plus profitables. Dans cette affaire, déterminer les groupes les plus désavantagés est un défi en soi. On ne risque pas de se tromper en affirmant qu’il ne s’agit ni des organisations religieuses ni de l’État. Par conséquent, il s’agit soit des requérants du PEI, qui ont beaucoup souffert aux mains de l’Église et de l’État, soit des générations futures, qui seront privées d’un savoir important advenant la destruction des documents. Conformément aux principes de justice de Rawls, il semblerait que la destruction des documents soit dommageable pour les deux groupes, comme nous le verrons plus en détail ultérieurement. 

Évaluation de la décision du tribunal

Le principe de liberté et d’égalité de Rawls veut que « chaque personne ait un droit égal aux libertés les plus étendues compatibles avec la liberté des autres [traduction libre] » (Garrett). Trois droits doivent être pris en considération dans l’affaire de 2017 au sujet des documents du PEI : le droit à la vie privée, le droit de savoir et le droit à l’autonomie. Le droit à la vie privée fait référence au droit à la confidentialité de chaque requérant du PEI. En 2017, le tribunal a statué que les documents du PEI pourront être détruits après 15 ans, à moins que les plaignants souhaitent conserver leur dossier, étant donné que « selon les termes exprès de la CRRPI, les documents du PÉI seraient traités comme des documents hautement confidentiels, sous réserve d’une possibilité très limitée de communication au cours d’une période de conservation, après quoi ceux-ci seraient détruits » (Canada [Procureur général] c. Fontaine, 2017 CSC 47 [CanLII], [2017] 2 RCS 205 <http://canlii.ca/t/h6jgq>). Le différend portait également sur le droit de connaître la vérité sur les pensionnats indiens dans un but de réconciliation. Cela a motivé en grande partie la décision de porter l’affaire devant la Cour suprême en 2017, quoique le tribunal soit demeuré muet sur ce droit précis, s’efforçant plutôt de déterminer si le juge de première instance avait le droit d’ordonner la destruction des documents. Enfin, l’affaire traite aussi du droit à l’autonomie, soit le droit, en tant qu’individu ou groupe, de contrôler son propre récit. Cela pourrait être interprété comme le droit pour les requérants du PEI de déterminer ce qu’il adviendra de leur dossier ou pour des groupes comme le CNVR d’utiliser les documents aux fins de préservation du souvenir des pensionnats au Canada. Dans l’ensemble, l’affaire met en évidence le fossé entre, d’une part, les droits individuels des requérants du PEI à l’autonomie et à la vie privée (y compris le droit à l’oubli) et, d’autre part, le droit collectif de connaître le passé et de contrôler le récit des pensionnats au Canada. 

Dans l’affaire de 2017, le tribunal a statué que les documents du PEI seraient conservés pendant 15 ans, une période au cours de laquelle les requérants peuvent choisir de récupérer leurs dossiers ou de les confier au CNVR aux fins d’archives, avant leur destruction en 2027. Cette décision vise à trouver un juste milieu entre le droit individuel à la vie privée et le droit collectif de savoir en concédant le droit individuel à l’autonomie aux requérants du PEI. Toutefois, en permettant aux survivants de décider du sort de leurs documents, la décision du tribunal établit que le droit à la vie privée et à l’autonomie l’emporte sur le droit collectif de savoir. Ainsi, la décision respecte le principe d’équité de Rawls dans la mesure où elle protège le droit individuel à la vie privée et à l’autonomie. Par contre, elle fait valoir ce droit aux dépens du droit collectif de savoir, ce qui ne cadre pas avec le principe d’équité, car le droit de savoir n’est pas accordé de la même façon à tous les Canadiens. La destruction des dossiers préserve la vie privée des individus, mais ses effets à long terme n’appuient pas les efforts de réconciliation. 

Le principe de différence de Rawls affirme que la société doit être organisée de sorte que chacun détienne des droits similaires, mais aussi de manière à ce que les plus défavorisés tirent le plus grand avantage des structures établies afin de favoriser l’équité. Dans cette affaire, on peut considérer que les personnes les plus défavorisées de la société sont les requérants du PEI et les peuples autochtones du Canada, à qui le système des pensionnats indiens a légué un héritage traumatique intergénérationnel. S’il était appliqué, le principe de différence ferait en sorte que les requérants du PEI et les peuples autochtones du pays tirent le plus grand bénéfice de la décision du tribunal. 

Sur la base de ces deux principes, le cadre théorique de Rawls suppose que les jugements éthiques devraient être posés derrière un voile d’ignorance afin qu’une personne rationnelle puisse décider ce qui est le mieux pour les groupes les plus défavorisés de la société. Le résultat du litige de 2017 ne concorde pas avec le cadre théorique de Rawls, car la décision profite structurellement aux plus favorisés dans cette affaire. L’État et les organisations religieuses tirent profit de la destruction des documents du PEI qui font la lumière sur les actes de violence perpétrés dans les pensionnats. De plus, la perte d’une partie importante de l’histoire des pensionnats au Canada compromet les efforts de réconciliation avec les peuples autochtones. Les documents du PEI renferment plus de 36 000 témoignages de mauvais traitements subis dans des pensionnats, alors que la CVR a recueilli quelque 6 750 dépositions de survivants, de familles, de dirigeants religieux, etc. La perte de ces 36 000 témoignages pourrait nuire aux générations futures du Canada en contribuant à dresser un portrait incomplet du système des pensionnats indiens. 

Un autre facteur à considérer est le caractère restreint des options offertes aux requérants du PEI dans le cadre de l’initiative Mes dossiers, Mon choix, créée par le SAPI à la suite de la décision du tribunal de 2017 et ayant pour but de communiquer avec les survivants afin de savoir s’ils souhaitent confier leurs documents au CNVR (Secrétariat d’adjudication des pensionnats indiens). Quatre choix seulement sont offerts aux requérants du PEI : 

  • Autoriser la destruction de leurs documents d’ici 2027.
  • Obtenir une copie de leurs documents pour eux-mêmes. 
  • Confier une copie de leurs documents au CNVR avec accès libre. 
  • Confier une copie de leurs documents au CNVR avec accès restreint. 

Pour un requérant qui comprend les avantages à long terme de l’archivage de ses documents auprès du CNVR, mais qui ne veut pas rendre publics les sévices qu’il a subis, l’accès restreint est la meilleure option. Si le requérant choisit cette option, « le CNVR peut utiliser et partager [ses] documents à des fins comme l’éducation du public, mais seulement si le CNVR supprime [les] renseignements personnels » (Mes dossiers, Mon choix). Malheureusement, cette option ne permet pas à un requérant de consentir à l’archivage de ses documents en n’autorisant leur utilisation qu’après son décès. 

Ainsi, les options offertes à l’issue de l’affaire judiciaire de 2017 ne profitent pas aux moins favorisés, car de nombreux requérants du PEI sont décédés et ne peuvent pas se prononcer sur le sort de leurs documents. Le choix volontaire d’archiver ou de détruire les documents ne garantit pas non plus que des documents soient conservés aux fins de réconciliation. Lorsque nous considérons que les documents du PEI renferment peut-être la plus grande collection de témoignages détaillant les cas les plus graves de sévices perpétrés dans les pensionnats indiens au Canada, la liste des options offertes aux requérants ne contribue guère à faire en sorte que leurs dossiers soient conservés dans le but de mieux comprendre le contexte des pensionnats ou aux fins de réconciliation. 

Quel sort une personne rationnelle derrière un voile d’ignorance réserverait-elle aux documents du PEI? Il est possible qu’elle choisisse de détruire les documents par respect pour la nature traumatisante de l’information, surtout si elle considère que leur publication « aurait des effets destructeurs sur le tissu des communautés existantes et des répercussions pour toutes les parties concernées pendant de nombreuses générations [traduction libre] » (Logan, p. 94). Selon la théorie de Rawls, il est toutefois important que la personne qui exerce un jugement éthique soit rationnelle. Cela signifie qu’elle doit être en mesure d’émettre des jugements en tenant compte de preuves diverses et de points de vue divergents. Une personne rationnelle examinerait la situation, la comparerait à des efforts de vérité et de réconciliation déployés ailleurs dans le monde, et considérerait les effets à long terme de la destruction et de la conservation des documents. 

Dans cette optique, une personne rationnelle derrière un voile d’ignorance conclurait que la destruction des dossiers du PEI ne sert pas les intérêts à long terme de la réconciliation ou de la mémoire collective des peuples autochtones au Canada. Elle n’appuierait pas non plus la diffusion des documents de quelque façon que ce soit, puisqu’elle brimerait le droit à la vie privée des requérants du PEI. Ainsi, elle trouverait probablement un moyen d’équilibrer ces droits en conservant tous les dossiers et en préservant la confidentialité des renseignements afin d’éviter toute conséquence néfaste sur les survivants et leur famille. La solution pourrait ressembler à l’option d’archivage des documents auprès du CNVR avec « accès restreint » offerte dans le cadre de l’initiative Mes dossiers, Mon choix, mais elle inclurait aussi une politique de conservation permettant uniquement l’utilisation des documents après un certain nombre d’années suivant le décès du requérant ou à des fins d’études sans accès aux renseignements personnels. 

Une personne rationnelle conclurait également que le sort des documents ne devrait pas être remis entre les mains d’un système judiciaire géré par l’État, mais entre celles des peuples autochtones. L’un des principaux problèmes dans cette affaire est qu’on se concentre sur l’autorité qui devrait décider du sort des documents du PEI plutôt que sur les effets de la conservation ou de la destruction des documents. En effet, la décision du tribunal ordonnant la destruction des dossiers en 2016 a été portée en appel devant la Cour suprême par le Procureur général du Canada en 2017 au motif que le juge de l’instance précédente n’avait pas le droit de rendre un tel jugement : « Le Procureur général du Canada se pourvoit devant la Cour, faisant valoir que les documents du PÉI ‘‘relèvent d’une institution fédérale’’ au sens de la Loi sur l’accès à l’information, de la Loi sur la protection des renseignements personnels et de la Loi sur la Bibliothèque et les Archives du Canada, et que le juge surveillant n’avait pas compétence pour ordonner leur destruction. » (Canada [Procureur général] c. Fontaine, 2017 CSC 47 [CanLII], [2017] 2 RCS 205 <http://canlii.ca/t/h6jgq>). Le tribunal a consacré beaucoup de temps à défendre le droit du SAPI de prendre des décisions sur le sort des documents du PEI. 

Dans l’ensemble, il s’agit d’une question complexe, et aucune solution ne peut concilier les droits de toutes les parties concernées. Toutefois, le fait d’archiver les documents tout en veillant à la protection de la vie privée des requérants du PEI profiterait, à long terme, au groupe le plus désavantagé, c’est-à-dire les peuples autochtones. À court terme, cette solution n’est pas avantageuse pour les requérants, car le droit de savoir outrepasse leur droit à l’autonomie quant à l’utilisation des documents. Cependant, le recours à une politique stricte en matière de confidentialité et de conservation pourrait mieux faire valoir leur droit à la vie privée que les options d’archivage à accès restreint et libre proposées dans le cadre de l’initiative Mes dossiers, Mon choix. Cette solution demeure toutefois limitée en ce sens qu’elle traite les peuples autochtones comme un groupe homogène, en supposant que la conservation des dossiers profiterait uniformément aux efforts de réconciliation futurs. 

Archives et savoir autochtone

Le présent essai cherche à synthétiser les recherches les plus connues plutôt que d’orienter les pratiques en matière d’archives autochtones. La stratégie privilégiée peut varier en fonction de la géographie, des pratiques locales, des enseignements et des priorités de la communauté. Il est important de noter l’existence d’autres approches archivistiques qui tiennent compte des méthodologies et épistémologies autochtones.

L’archivage autochtone au Canada a fait l’objet de peu d’études. Un projet financé par l’Australian Research Council, Trust and Technology: Building Archival Systems for Indigenous Oral Memory, a permis de franchir des étapes importantes sur le plan des archives autochtones et des échanges avec le peuple aborigène des Koories. L’objectif était d’encourager les archivistes australiens à « comprendre les priorités des communautés autochtones et à mettre en œuvre leurs principes ayant trait au savoir, à la mémoire et aux témoignages, y compris les connaissances stockées et transmises oralement [traduction libre] » (McKemmish et coll., p. 212). Le projet visait à établir des liens de confiance entre les peuples autochtones et les services d’archives. Les Australiens autochtones se montraient prudents à l’égard des institutions d’archives, car les documents qui y étaient conservés avaient autrefois été utilisés à leur détriment. Deux thèmes concurrents sont ressortis : les gens souhaitaient exercer un contrôle sur leurs propres documents et y avoir accès à titre de propriétaires, tandis que le gouvernement se montrait réticent à cet égard puisqu’il considérait que ces documents relevaient de sa compétence (McKemmish et coll., p. 219).

Le contexte canadien se prête à l’approche adoptée par l’Australian Research Council. Dans le cadre de leurs travaux, les chercheurs mandatés ont relevé sept principes dont le Canada serait avisé de tenir compte lorsqu’il est question de coordonner des documents d’archives avec les communautés autochtones. Les principes 2 et 4, qui soulignent l’importance de la reconnaissance des droits sur les documents et l’adoption d’approches holistiques, sont particulièrement pertinents pour l’affaire qui nous préoccupe. Ces principes soutiennent que les peuples autochtones devraient pouvoir se prononcer sur le sort des documents qui les concernent et qui relatent leurs expériences. En outre, les systèmes d’archivage contrôlés par la collectivité devraient permettre de regrouper les documents des pensionnats d’une manière conforme aux souhaits des individus et des communautés (McKemmish et coll., p. 231). Il convient de mentionner aussi les autres principes qui, pour établir leur pertinence dans le contexte canadien, nécessiteraient de plus amples recherches. Les sept principes relevés sont les suivants :

  • Principe 1 : Reconnaissance de toutes les sources archivistiques du savoir autochtone
  • Principe 2 : Reconnaissance des droits sur les documents
  • Principe 3 : Reconnaissance des droits dans les contextes juridiques et archivistiques
  • Principe 4 : Adoption d’approches holistiques en matière d’archivage autochtone axées sur la communauté
  • Principe 5 : Reconnaissance de la nécessité pour les Autochtones de remettre en question les documents « officiels »
  • Principe 6 : Reconnaissance de la nécessité d’adopter une approche inclusive en matière d’enseignement et de formation à l’intention des professionnels de l’archivistique
  • Principe 7 : Conciliation entre le milieu de la recherche universitaire et les communautés autochtones (McKemmish et coll., p. 230-231).

Une grande partie des ouvrages qui portent sur les pensionnats indiens concentrent leur attention sur la justice transitionnelle et ne tiennent pas compte d’autres approches. Augustine Park, une chercheuse canadienne du domaine de la justice réparatrice, cherche à combler cette lacune en examinant les initiatives de justice réparatrice axées sur la communauté qui ont été mises en œuvre en réponse aux traumatismes subis dans les pensionnats (p. 425). Bien qu’il n’y ait pas de définition arrêtée ou de cadre défini pour une approche universelle en matière de justice réparatrice et de pensionnats, la définition de la justice réparatrice de Park est pertinente : « (1) des pratiques de justice qui émanent des communautés, qui y sont mises en œuvre ou qui sont transmises d’une communauté à l’autre; (2) auxquelles les parties prenantes participent et (3) qui s’efforcent de reconnaître les victimes et les survivants, de souligner la responsabilité des auteurs d’actes répréhensibles et de transformer les relations [traduction libre] » (p. 427). Cette approche soustrait à l’État son rôle de seule autorité en matière de justice et de guérison des séquelles des pensionnats (Park, p. 427). L’un des objectifs des initiatives de justice réparatrice mises en œuvre dans les communautés autochtones est de « s’efforcer d’enseigner des vérités décolonisantes [traduction libre] » (Park, p. 440).

Par conséquent, si l’on procède à un examen de la question des documents du PEI au regard du modèle de la justice réparatrice, la destruction des dossiers élimine la possibilité avantageuse de les utiliser dans le cadre d’initiatives communautaires de justice et de guérison. En revanche, la justice transitionnelle fait habituellement appel à des approches étatiques et judiciaires de la guérison. Ainsi, la CRRPI est un exemple de justice transitionnelle parce qu’elle cherche à remédier de façon superficielle ou générale aux torts historiques à grande échelle plutôt qu’à adopter une approche communautaire nuancée. L’affirmation selon laquelle « la justice réparatrice communautaire constitue une solution de rechange valable à la justice transitionnelle officielle (parrainée par l’État) pour contrer la violence de masse » [traduction libre] (Park, p. 425) revêt une grande importance pour le sujet dont il est question ici. Au Canada, les lacunes de l’appareil judiciaire transitionnel officiel soulignent la nécessité de solutions de rechange communautaires. La destruction des documents du PEI peut donc s’avérer contraire à l’objectif des travaux de justice réparatrice communautaires autochtones. 

Recommandations 

À l’avenir, il est recommandé d’utiliser cette affaire comme un exemple à ne pas suivre dans les causes relatives aux droits de la personne, c’est-à-dire un exemple où l’on offre des options limitées aux groupes défavorisés et où l’on fait fi de valeurs importantes comme la préservation de l’histoire et le partage de la vérité et des connaissances avec les générations futures. De plus, les groupes au pouvoir, c’est-à-dire l’Église et l’État dans cette affaire, ne devraient pas avoir le plein contrôle des procédures et des processus. Surtout, dans un cas comme celui-ci, d’autres méthodes et façons de faire (comme préserver les connaissances et archiver les documents) doivent être envisagées. D’après notre recherche, les recommandations suivantes peuvent faire l’objet de discussions :

  • Tirer des leçons de cette situation et s’en servir pour éclairer les affaires relatives aux droits de la personne et aux archives, surtout lorsqu’il est question de documents de nature délicate.
  • Reconnaître que les documents ont de la valeur pour les gens et les communautés.
  • Offrir dès maintenant des options supplémentaires et éviter de présenter aux survivants une fausse dichotomie. 
  • Explorer différents mécanismes d’archivage autochtone afin de trouver une solution de rechange pour les documents du PEI.
  • Intégrer, dans la mesure du possible, les documents du PEI à des projets communautaires de justice réparatrice existants ou en cours d’élaboration. 

Conclusion

La décision de la Cour suprême de détruire les dossiers du PEI, à l’exception de ceux conservés grâce à l’initiative Mes dossiers, Mon choix, ne profite pas suffisamment aux requérants du PEI, aux survivants des pensionnats indiens et aux générations futures. En effet, plutôt que de révéler la vérité sur les atrocités survenues dans les pensionnats canadiens aux XIXe et XXe siècles, la destruction des documents sert en quelque sorte les intérêts des auteurs des sévices. Conformément aux principes de justice de Rawls, il est important que la décision de la Cour suprême privilégie les groupes susmentionnés, soit les requérants du PEI, les survivants des pensionnats et les générations futures, puisqu’ils sont les plus défavorisés dans cette affaire. Nous avons exploré dans quelle mesure les procédures judiciaires ont respecté (ou non) les principes de justice de Rawls. Cet examen de la décision de la Cour suprême à la lumière de la théorie rawlsienne comporte toutefois ses limites, car les parties prenantes n’ont pu être interrogées en vue de connaître leurs valeurs et points de vue. 

Glossaire des acronymes

APTN – Réseau de télévision des peuples autochtones

BAC – Bibliothèque et Archives Canada

CNVR – Centre national pour la vérité et la réconciliation 

CRRPI – Convention de règlement relative aux pensionnats indiens 

CVR – Commission de vérité et réconciliation du Canada

ONU – Nations Unies 

PEC – Paiement d’expérience commune

PEI – Processus d’évaluation indépendant

PPO – Police provinciale de l’Ontario 

SAPI – Secrétariat d’adjudication des pensionnats indiens

SRC – Société Radio-Canada 

Bibliographie

ANDROFF, David K. « Truth and Reconciliation Commissions (TRCs): An International Human Rights Intervention and its Connection to Social Work », The British Journal of Social Work, vol. 40, no 6, 2010, p. 1960-1977. DOI : 10.1093/bjsw/bcp139.

Canada (Procureur général) c. Fontaine, 2017 CSC 47 (CanLII), [2017] 2 RCS 205 <http://canlii.ca/t/h6jgq>

CBC News. « New Website Helps Residential School Survivors Preserve or Destroy Records », CBC News, 14 janvier 2019 (consulté en avril 2019). <https://www.cbc.ca/news/canada/saskatchewan/new-residential-school-website-survivors-records-decision-1.4977258>

Commission de vérité et réconciliation du Canada. Honorer la vérité, réconcilier pour l’avenir : Sommaire du rapport final de la Commission de vérité et réconciliation du Canada, Commission de vérité et réconciliation du Canada, 2015.

Fontaine c. Canada (Procureur général), 2014 ONSC 4585 (CanLII). <http://canlii.ca/t/g8hd3>

GARRETT, Jan. John Rawls sur Justice, Western Kentucky University, 3 septembre 2002 (consulté le 14 avril 2019). <https://people.wku.edu/jan.garrett/ethics/johnrawl.htm#prin>

HARRIS, Kathleen. « Indigenous Residential School Records Can Be Destroyed, Supreme Court Rules », CBC News, 6 octobre 2017 (consulté le 10 avril 2019). <https://www.cbc.ca/news/politics/indian-residential-schools-records-Supreme-court-1.4343259>

KOVACH, Margaret. Indigenous Methodologies: Characteristics, Conversations and Contexts, University of Toronto Press, 2009.

LOGAN, Tricia. « Questions of Privacy and Confidentiality after Atrocity: Collecting and Retaining Records of the Residential School System in Canada », Genocide Studies International, vol. 12 no 1, 2018, p. 92-102. 

MARTENS, Kathleen. « Former TRC Chair Encourages Residential School Survivors to Save Records », APTN National News, 6 janvier 2019 (consulté en avril 2019). <https://aptnnews.ca/2019/01/16/former-trc-chair-encourages-residential-school-survivors-to-save-records/>

MCKEMMISH, Sue, Shannon FAULKHEAD et Lynette RUSSELL. « Distrust in the Archive: Reconciling Records », Archival Science, vol. 11, no 3-4, 2011, p. 211-239. DOI: http://dx.doi.org.login.ezproxy.library.ualberta.ca/10.1007/s10502-011-9153-2

MENDELOFF, David. « Truth-Seeking, Truth-Telling, and Postconflict Peacebuilding: Curb the Enthusiasm? », International Studies Review, vol. 6, no 3, 2004, p. 355-380. DOI : https://doi-org.login.ezproxy.library.ualberta.ca/10.1111/j.1521-9488.2004.00421.x. 

Mes dossiers, mon choix, Secrétariat d’adjudication des pensionnats indiens, mai 2019 (consulté en mai 2019). <http://www.iap-pei.ca/records-eng.php>

MORAN, Mayo. « The Role of Reparative Justice in Responding to The Legacy of Indian Residential Schools », The University of Toronto Law Journal, vol. 64, no 4, 2014, p. 529-565. DOI : 10.3138/utlj.2505. 

MORIN, Brandi. « Court Order to Destroy Residential School Accounts ‘A Win for Abusers’: NCTR Director », CBC News, 6 octobre 2017 (consulté en avril 2019).
<https://www.cbc.ca/news/indigenous/court-order-destroy-residential-school-accounts-1-.4344918>

MORRISSETTE, Patrick J. et Alanaise GOODWILL. « The Psychological Cost of Restitution: Supportive Intervention with Canadian Indian Residential School Survivors », Journal of Aggression, Maltreatment & Trauma, vol. 22, no 5, Mai 2013, p. 541-558. DOI : 10.1080/10926771.2013.785459.

PARK, Augustine S. J. « Remembering the Children: Decolonizing Community-Based Restorative Justice for Indian Residential Schools », Contemporary Justice Review, vol. 19, no 4, décembre 2016, p. 424-444. DOI : 10.1080/10282580.2016.1226818. 

REIMER, Gwen. Paiement d’expérience commune, composante de l’Accord de règlement relatif aux pensionnats indiens, et guérison : une étude qualitative exploratoire des incidences sur les bénéficiaires, Fondation autochtone de guérison, 2010.

« Résolution des pensionnats indiens », Affaires autochtones et du Nord Canada, 2019 (consulté en avril 2019). <https://www.aadnc-aandc.gc.ca/fra/1100100015576/1100100015577#sect1>

Secrétariat d’adjudication des pensionnats indiens (consulté en avril 2019). <http://iap-pei.ca/home-eng.php>

SMITH, Linda T. Méthodologies décolonisantes : Recherche et peuples autochtones, Zed Books, 2012.

WOOD, Stacy et coll. « Mobilizing Records: Re-Framing Archival Description to Support Human Rights. » Archival Science, vol. 14, no 3, 2014, p. 397-419. DOI : http://dx.doi.org.login.ezproxy.library.ualberta.ca/10.1007/s10502-014-9233-1. 

The Transformational Impacts of Big Data and Analytics
Christy Walters & Crystal Walters

 

Estimated reading time: 20 minutes. Contains 3975 words

 

Introduction

The age of big data and analytics is changing the business and academic landscape as today’s organizations are using innovative techniques to derive valuable insights from unstructured and structured data. There is an evolving positive view of big data as its transformational impacts on business have come to the forefront in recent years. In this article, we will evaluate the current literature regarding big data and analytics and its transformative power. We will discuss the innovative ways in which data analytics has evolved and how it is transforming today’s business landscape. Finally, we will form a conclusion on the impacts of big data and analytics on present-day information management practice and address the skills that are required for information management practitioners to create value for organizations in this big data age. The positive outcomes of big data and analytics do not come without emerging challenges, which will also be considered in this review. 

The ‘Big Data’ Evolution 

In his 2012 article, “On the Origins and Development of ‘Big Data’,” Francis Diebold concludes that the phrase ‘big data’ likely arose out of lunch-table conversations involving Chief Scientist John Mashey at Silicon Graphics Inc. (SGI) in the mid-1990s. In a later interview, Mashey stated that his first uses of the term stemmed from his desire to convey in the shortest way that “the boundaries of computing keep advancing.” 

The concept of big data was first defined in terms of the 3Vs by Doug Laney in his 2001 research note “3D Data Management: Controlling Data Volume, Velocity, and Variety.” Two decades later, the 3Vs of volume, velocity and variety are the generally accepted defining dimensions of big data, and they have evolved into the 4Vs with the addition of veracity. The 4Vs have provided a framework for the design of the innovative software required to handle the needs of the big data explosion. In the current information age, big data analytical techniques are used to process and glean insights from data sets that are so large (from terabytes to zettabytes) and complex that they require advanced and unique data storage, management, analysis, and visualization technologies. In “Transformational Issues of Big Data in Networked Business,” Baesens et al. argue for the addition of a fifth V as a complement to the 4Vs from a business perspective. They suggest that adding the V of ‘value’ will help drive consideration of what to do inside the perimeter set by the 4V definition, namely, how to innovatively investigate and analyze big data and how to anticipate and leverage its transformational impacts.

It would not be hyperbolic to claim that big data is possibly the most significant ‘tech’ disruption in business and academic ecosystems since the meteoric rise of the Internet and the digital economy. With most of today’s data coming from beyond corporate boundaries, generated in unstructured formats by networks of people and devices, more powerful algorithms and better knowledge representation schemes are needed than ever before to process and make sense of this heterogeneous and fragmented information.  

The definition of information management is constantly evolving as technology, ideas, and business needs change. Data has a lifecycle that is based on its level of usefulness to an organization. Data has to be interpreted to render information, and information has to be understood to emerge as knowledge, which leads to effective decision-making. Big data adds a layer of complexity to the management of information, yet at the same time adds opportunities to derive actionable insights never before possible. Today’s information management practitioners can prosper by taking advantage of the new opportunities made possible by this wealth of data, particularly in leveraging insights from data to help organizations make better decisions and create more value for both the customer and for the organization itself. 

As the fuel that drives the Internet of Things (IoT) and artificial intelligence, big data and analytics can predict problems before they happen. Together with IoT, big data predictive analytics can help organizations be proactive instead of reactive. This is incredibly valuable and can change the way decisions are made, which has a cascading effect on all areas of an organization. Given the nature of the profession, information management practitioners are perfectly positioned to help their organizations tackle the challenges of the big data era. With this era comes greater risk, particularly in the area of cybersecurity. Since applying analytical techniques to big data creates even more value from information than ever before, information will become one of the greatest commodities an organization has. Information management practitioners need to expand their security and privacy knowledge, improve their computer literacy and deepen their knowledge of risk management in order to better serve their organizations and help protect their organization’s data and information. 
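
To make the proactive-versus-reactive point concrete, here is a minimal sketch (not drawn from the article) of how streaming IoT readings might be screened for early warnings: it flags a hypothetical machine whose vibration readings are trending toward a failure threshold so maintenance can be scheduled before a breakdown. The sensor name, window size and alert level are illustrative assumptions.

```python
# A minimal, illustrative sketch of proactive monitoring on IoT sensor data.
# The readings, rolling window and alert level are invented for the example.
import pandas as pd

# Hypothetical hourly vibration readings from one machine.
readings = pd.Series(
    [0.31, 0.30, 0.33, 0.35, 0.38, 0.41, 0.45, 0.52, 0.60, 0.71],
    index=pd.date_range("2020-01-01", periods=10, freq="H"),
    name="vibration_g",
)

ALERT_LEVEL = 0.50            # assumed level at which failures tend to occur
rolling_mean = readings.rolling(window=3).mean()
trend = rolling_mean.diff()   # how fast the rolling average is climbing

# Proactive rule: warn while the reading is still below the alert level
# but the short-term trend suggests it will cross it soon.
projected_next = rolling_mean + trend
early_warning = (rolling_mean < ALERT_LEVEL) & (projected_next >= ALERT_LEVEL)

print(readings[early_warning])  # timestamps worth scheduling maintenance for
```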

Organizational and Societal Impact of Big Data

Making analytical inferences from data is as old as the field of statistics, which dates back to the 18th century, but today’s inferences are unique: economic and social transactions are quickly moving online, allowing for the digital capture of big data and for insights to be drawn from large pools of data on human discourse. From the perspective of scientific inquiry, the entire globally connected networked economy can now be envisaged as a large-scale real-world laboratory where researchers can design and conduct experiments and collect the data needed to answer a variety of questions.

With the incorporation of big data analytical techniques, more organizational decisions will be automated, which further changes decision processes and responsibilities throughout organizations. Enhanced decision-making is a positive outcome of big data for business: big data allows us to leverage both prediction and causal analysis, drawing on a variety of techniques from machine learning, classical statistics, and econometrics, through to the design of experiments that test existing theories and hypotheses. By testing these theories, analytics applications have already started to make a measurable impact on organizations.
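
As a concrete illustration of the design-of-experiments thread, the sketch below works through a simple A/B test of the kind such experimentation typically relies on. The conversion counts are invented for illustration, and the calculation is a standard two-proportion z-test under a normal approximation rather than any specific method from the literature cited above.

```python
# An illustrative A/B test: did a change to an online service move conversions?
# All counts are made up; the test is a standard two-proportion z-test.
from math import sqrt, erf

# Hypothetical results: conversions / visitors for control (A) and variant (B).
conv_a, n_a = 410, 10_000
conv_b, n_b = 480, 10_000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under H0
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"lift: {p_b - p_a:.3%}, z = {z:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would support rolling out the variant;
# otherwise the observed difference may just be noise.
```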

Beyond enhanced decision-making in commercial settings, there is great societal benefit from such experimentation, since we now have unprecedented digitization of social processes. Several recent editorials have expanded on the idea of the societal impact of big data and analytics. In their article “Big Data & Analytics for Societal Impact: Recent Research and Trends,” Gupta et al. highlight the ability of innovative uses of big data to have a measurable societal impact, citing in particular the applications of online-to-offline commerce, proactive customer care, and Internet of Things-enabled cars.

While the effects are often positive, there are challenges inherent in the manipulation of data for experimentation, which have raised controversies, particularly in the social media sphere, as in 2014, when Facebook and OKCupid were questioned for experimenting on unaware users by adjusting the content presented to them. The key question coming out of these controversies is whether such social media experimentation is beneficial to society at large and what moral scrutiny is appropriate. Current literature indicates that there is a strong case in favour of such experimentation, for businesses in particular, not just to avoid costly bad decisions but also to pursue a better understanding of what drives human social interactions. For example, manipulating social media data for experimentation can enhance a company’s decision-making and allows for testing social science theories that would otherwise be too costly to test by other means. While it may benefit the business, the ethical and privacy implications of such experimentation are of concern, particularly when it comes to personal privacy, which will be discussed later in this article. 

Decision-Making and Trust

Big data is increasingly becoming a trending practice that organizations use to improve and enhance decision-making, thus increasing operational efficiencies and gaining competitive advantage over rivals. The key advantage of using big data and analytics in an organizational context is making better data-driven decisions by linking a set of explanatory variables to a business response or outcome, thus facilitating predictive or causal inference-based decision-making. Trust is recognized as one of the most important factors, in addition to information (data), which has traditionally garnered the most attention in the literature. The concept of trust in big data and analytics is a key factor in implementing decision-linked inferences from data. Even if the data itself is of the highest quality, an organization will fail to enhance its decision-making if the firm’s leaders do not trust the data and choose not to rely upon either the data or the relevant analytic techniques. A challenge related to big data and analytics is generating trust in an analytical model, since the performance of a model is summarized using statistical measures that may be difficult for end-users or non-experts to understand. Given that the use of analytical models plays a key role in the decision-making of the firm, it is important that the models are understandable to decision-makers in the organization.
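
The following hedged sketch illustrates what “linking a set of explanatory variables to a business response” can look like in practice: a logistic regression predicting customer churn from a few invented variables. The feature names and data are assumptions made for this example only; the point is that the fitted coefficients give decision-makers something they can inspect, which speaks to the interpretability and trust concern raised above.

```python
# Illustrative only: synthetic churn data and a simple, inspectable model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical explanatory variables.
tenure_months   = rng.integers(1, 60, n)
support_tickets = rng.poisson(2, n)
monthly_spend   = rng.normal(80, 20, n)

# Synthetic outcome: churn is more likely with short tenure and many tickets.
logit = -1.5 - 0.05 * tenure_months + 0.6 * support_tickets - 0.01 * monthly_spend
churned = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([tenure_months, support_tickets, monthly_spend])
model = LogisticRegression(max_iter=1000).fit(X, churned)

# Coefficients can be read as the direction and strength of each variable's
# association with churn -- a summary non-experts can reason about.
for name, coef in zip(["tenure_months", "support_tickets", "monthly_spend"],
                      model.coef_[0]):
    print(f"{name:>16}: {coef:+.3f}")
```

A transparent model such as this one trades some predictive power for explainability, which is often the price of the trust discussed in the following paragraphs.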

This challenge has been further explored in the literature and in business. In 2016, for example, KPMG International commissioned Forrester Consulting to examine the power of trust in data and analytics and released the report “Building Trust in Analytics: Breaking the Cycle of Mistrust in D&A.” The report reveals that just 38 percent of organizations have a high level of confidence and trust in their analytics. As the report summarizes, given the power that analytics holds, trust in data analytics should be a non-negotiable business priority. The report also outlines the ways that KPMG believes organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers to deliver sustainable business results. Beyond generating trust, new challenges compounded by the data revolution require a shift in organizational culture to adapt to the new world of data, since it is now possible to experiment cheaply and accurately and base decisions on that data.

Though it often arrives quietly, technological change has a disruptive long-term impact on business. The shift from in-house database management systems to cloud-based options means that corporate boundaries are stretching. The big data era allows organizations to leverage the power of prediction and causal analysis in order to develop the corporate body of knowledge. With the many collaboration tools and sharing methods that exist today, businesses have more opportunities to share knowledge and insights with others. In order to be successful, business intelligence and analytics projects depend on significant business or domain knowledge as well as effective communication skills. Long-lasting benefits from such projects are realized when the meaningful and actionable knowledge gained from turning data into information (using analytics) is effectively communicated to the business and domain experts in the organization. This sharing of knowledge is critical to informing decisions within organizations. Knowledge sharing both inside and outside the corporate boundaries is an effective means of gaining competitive advantage in today’s digitalized world. 

Disruptive Impacts of Big Data

Historically, the Information Systems (IS) field has developed research at the intersection of computing technology and data in business and society, with IS researchers focusing on problems and outcomes from the broad perspective of the enterprise as a whole. Drawing on fields from sociology and political science to economics and psychology, the IS discipline has spent five decades researching questions at the intersection of technology, data, business and society. This cross-disciplinary nature of IS research is a strength in today’s connected world and positions researchers well to exploit the big data opportunity. With access to richer data sets and larger volumes of data than ever before, there is more potential than ever for IS thought leadership to become foundational in education, business and policy.

New disruptive technologies such as data analytics come with many opportunities. Dramatic shifts are happening across many professions as some job categories become obsolete in favour of new and emerging jobs created by big data innovations. Data analytics offers a competitive advantage by analyzing an organization’s data quickly and effectively for future insights. Because of this shift, employees need to be retrained for the big data age, since there is a gap between what they now need to know and the training they have received to date. Big data has disrupted old business methods and creates the need for new employee skillsets, including knowledge of cybersecurity, privacy, data protection and risk management.

Even before the development of the cloud and of blockchain technology, which have had pervasive impacts in more recent years, IT architectural innovation had a radical impact on development processes and their outcomes, and systems developers had to navigate the difficulties of integrating formerly standalone systems into the world of the Internet. As organizations navigate the world of big data, machine-learning skills are becoming increasingly necessary for IT professionals who build automated decision systems. Information management professionals need to be aware of the basic concepts of data analytics and machine learning, particularly as they pertain to the use of information insights that come from big data innovations. Information professionals can create value for their organizations by assisting in the organization, storage and preservation of the outputs of machine learning for effective decision-making and use in organizations. Communication and knowledge sharing are essential to productivity in this era of radical innovation.

Data analytics has had a dramatic impact on the field of social sciences as collaboration between researchers, social scientists and data scientists makes innovative projects possible. 

Analysis of micro-level big data offers an opportunity for social scientists to develop more complex psychological models with useful implications for behaviour. Jennifer Golbeck’s work is a good example of the integration of social sciences and data science. In “Predicting Personality from Social Media Text,” Golbeck uses social media text to predict personality through psycholinguistic text analysis. Replicating the text-based Big Five personality score predictions generated by the Receptiviti API, she applies a prediction algorithm to social media data sets containing personality scores for about 9,000 users to determine the accuracy of the API’s Big Five predictions. The outcome of this research was that remarkably similar results were obtained from the four data sets that Golbeck tested using the prediction algorithm.
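
As a rough illustration of the general technique (not Golbeck’s actual pipeline and not the Receptiviti API), the following hypothetical Python sketch predicts a single Big Five trait score from social media text using bag-of-words features and ridge regression; the tiny corpus, the scores and the scale are invented.

```python
# Illustrative sketch only: a generic text-to-trait regression, not Golbeck's
# pipeline and not the Receptiviti API. Posts and scores are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical social media posts paired with an extraversion score (1-5 scale)
posts = [
    "had a great night out with friends, cannot wait for the weekend",
    "quiet evening at home with a book and some tea",
    "love meeting new people at conferences and parties",
    "prefer working alone, big crowds drain my energy",
]
extraversion = [4.5, 2.0, 4.8, 1.5]

# Bag-of-words features feeding a simple regularized linear model
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(posts, extraversion)

print(model.predict(["excited to host a big dinner party with friends tonight"]))
```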

The potential of big data innovations using data from social websites can take researchers where they could only dream of going in the past, since the vast amount of micro-level big data about human interactions offers opportunities never available in the physical world, where comparable data collection would be too costly or infeasible. Social media big data and analytics have also been examined in more recent research on trends in human resource management and recruitment. The use of machine learning and predictive analytics, combined with social media big data, is changing the way recruitment is carried out in the organizational context. In the field of recruitment, big data offers a potential means of disintermediating talent demand and availability, enabling a better understanding of candidates’ behaviours and expected fit within an organization. Using social media platforms to link various data streams through appropriately defined unique identifiers is key to applying big data to determine candidate behaviours and expected fit in an organization.
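
A minimal sketch of the linkage step described above, assuming pandas and entirely hypothetical identifiers and column names: two data streams are joined on a shared candidate identifier so that behavioural signals can be examined alongside profile information.

```python
# Illustrative sketch only: hypothetical identifiers and columns, joined with pandas.
import pandas as pd

profiles = pd.DataFrame({
    "candidate_id": [101, 102, 103],
    "years_experience": [3, 7, 5],
})
social_activity = pd.DataFrame({
    "candidate_id": [101, 102, 104],
    "posts_per_week": [12, 2, 8],
})

# Link the two data streams on the shared unique identifier;
# an inner join keeps only candidates present in both sources.
linked = profiles.merge(social_activity, on="candidate_id", how="inner")
print(linked)
```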

It is important to note that informational usefulness is directly proportional to the quality of the data. Far too often, companies and universities consider investments in data quality too expensive or too difficult. The importance of having accurate data to maximize the impact of big data techniques cannot be overstated. When it comes to business priorities, data quality is too often relegated to a lower priority in an organization. In the age of big data, this can have catastrophic consequences. Management should therefore be on board to help enforce data quality processes and procedures throughout the organization. With the wealth of industry-specific knowledge that can be extracted from high quality data, combined with the use of relevant analytic techniques, an investment in data quality can be expected to bring returns well into the future.
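
As one small, hypothetical example of what enforceable data quality checks can look like in practice (assuming pandas; the dataset and the rules are invented), routine validations can be automated and reported rather than left to chance:

```python
# Illustrative sketch only: a few routine data quality checks on a hypothetical table.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [100.0, None, 250.0, -30.0],
})

# Count records that violate simple, organization-defined quality rules
issues = {
    "missing_amount": int(orders["amount"].isna().sum()),
    "duplicate_order_id": int(orders["order_id"].duplicated().sum()),
    "negative_amount": int((orders["amount"] < 0).sum()),
}
print(issues)  # e.g. {'missing_amount': 1, 'duplicate_order_id': 1, 'negative_amount': 1}
```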

Big Data Security and Privacy Challenges

More and more organizations need to be aware of the privacy implications and security threats of managing systems in the era of big data. Many of the security applications designed for smaller volumes of data cannot manage the larger volumes that are characteristic of the big data age. Coupled with this, the increasing use of cloud-based storage solutions has facilitated more data mining and data collection than in the past. With the prevalence of the Internet and of cyberattacks in recent years, cybersecurity is an increasingly important area of practice, since most people are not aware of how important it is to protect their information on the Internet. Particularly in the organizational context, cybersecurity awareness training is an extremely important tool for ensuring the security of an organization’s information assets. When it comes to cybersecurity awareness, not all threats are created equal. Phishing is a particularly common form of cyberattack that happens through email, and employees are often given training to help them identify and report suspicious emails. A key to combatting phishing attacks is educating individuals about the signs of a malicious email through cybersecurity awareness training that emphasizes this particular form of attack.
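
The warning signs emphasized in phishing awareness training can also be expressed as simple rules. The following is an illustrative Python sketch only: the corporate domain, the phrase list and the sample message are invented, and such a checker is no substitute for real email filtering and security controls.

```python
# Illustrative sketch only: flagging common phishing warning signs covered in
# awareness training. The corporate domain and phrase list are invented, and
# this is not a substitute for real email security controls.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
    "password will expire",
]

def phishing_warning_signs(sender, subject, body):
    signs = []
    if not sender.lower().endswith("@example.com"):  # hypothetical corporate domain
        signs.append("external or unfamiliar sender domain")
    text = (subject + " " + body).lower()
    signs.extend(f"suspicious phrase: '{p}'" for p in SUSPICIOUS_PHRASES if p in text)
    return signs

print(phishing_warning_signs(
    sender="it-support@secure-example.net",
    subject="Urgent action required",
    body="Your password will expire today. Click here immediately to verify your account.",
))
```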

In the case of big data privacy, the raw material or data used to glean interesting insights is increasingly heterogeneous and unstructured. With the proliferation of smart technology, an individual’s claim to personal privacy is virtually nonexistent: people largely supply the data used for big data and analytics through their everyday actions. The challenge in big data and privacy is how to protect the privacy of individuals without a significant loss in informational quality. In their paper, “Privacy and Big Data: Scalable Approaches to Sanitize Large Transactional Databases for Sharing,” Menon and Sarkar present a scalable approach to sanitizing transactional data. They show that this heuristic approach allowed sensitive item sets to be removed while maintaining recommendation accuracy similar to that observed with the original data. Such approaches have the potential to be expanded and to transform the culture of hesitation around data sharing caused by privacy implications such as the unintended exposure of sensitive information.
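
To illustrate only the general idea of sanitizing transactional data before sharing, the sketch below simply strips a hypothetical set of sensitive items from each transaction. This is not Menon and Sarkar’s algorithm, which is a more sophisticated scalable heuristic that weighs which records to alter against the loss of analytic accuracy.

```python
# Illustrative sketch only: removing a hypothetical set of sensitive items from
# transactions before sharing. This is NOT Menon and Sarkar's heuristic, which
# chooses what to suppress while minimizing the loss of analytic accuracy.
SENSITIVE_ITEMS = {"prescription_drug", "pregnancy_test"}  # hypothetical

transactions = [
    {"bread", "milk", "prescription_drug"},
    {"coffee", "pregnancy_test", "chocolate"},
    {"bread", "coffee"},
]

# Set difference drops the sensitive items but keeps the remaining
# co-purchase patterns available for analysis
sanitized = [basket - SENSITIVE_ITEMS for basket in transactions]
print(sanitized)
```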

Transformational Impacts and Information Management Opportunities

This research article focuses on the transformative power of big data and analytics in today’s business environment and highlights the challenges the area is bringing to light. It identifies technical and managerial issues of business transformation and disruption that require further exploration in this new age of intelligent information. Further, by closely analyzing the more recent big data and analytics research, we examined the organizational and societal impacts of big data, the importance of decision-making and trust, the disruptive impacts of big data, and big data security and privacy challenges. Based on our research, we conclude that the transformative power of big data and analytics will only continue to grow as the field transforms business and society in ways that are yet to be realized.

For the information management profession, big data and analytics mean that practitioners need to consider adding new skills to their current skillsets in order to assist organizations with the new security, privacy, risk management and data management issues that are characteristic of the big data age. In this new business landscape, the protection of information and data becomes crucial to an organization. The information management strategy therefore needs to be redefined to address the emerging data dynamic that impacts the storage and use of information, with greater focus on protecting information from unauthorized access and use. Given larger volumes of data, information management professionals need to familiarize themselves with options for cloud storage as well as the privacy and security issues relevant to it. Further, cybersecurity becomes of utmost importance as cyberattacks are increasing and will only continue to increase in the big data age, when an organization’s data has more value than ever before. For this reason, information management practitioners will need to work more closely with the privacy, IT, security and legal functions in their organizations.

Information is power, and this statement has never been more accurate than in the current big data age, with data flowing from audio, video, sensors, social media, machines and other sources. All of this data needs to be categorized, stored and protected according to its value, and that is a key area where the information management profession is perfectly positioned to help drive big data towards actionable intelligence and, in turn, provide tremendous value to organizations.

Bibliography

Agarwal, R., & Dhar, V. (2014). Editorial – Big Data, Data Science, and Analytics: The Opportunity and Challenge for IS Research. Information Systems Research, 25(3), pp. 443-448.

Baesens, B., Bapna, R., Marsden, J.R., Vanthienen, J., & Zhao, J. L. (2016). Transformational issues of big data and analytics in networked business. MIS Quarterly, 40(4), pp. 807-818.

Chen, H., Chiang, R., & Storey, V. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), pp. 1165-1188.

Diebold, F. (2012). On the Origin(s) and Development of the Term ‘Big Data’. PIER Working Paper No. 12-037. (http://dx.doi.org/10.2139/ssrn.2152421)

Dhar, V. (2013). Data Science and Prediction. Communications of the ACM, 56(12), 64-73. 

Dutta, D. (2018). Social Media and Technology Trends in the HRM: Cases in Recruitment and Talent Management. DOI: 10.5772/intechopen.79342.

Evans, G. (2017). Disruptive technology and the board: The tip of the iceberg.  Economics and Business Review, 3(1), 205-223.

Golbeck, J. (2016). Predicting Personality from Social Media Text. AIS Transactions on Replication Research, Vol. 2, Article 2.

Golbeck, J. (2018). Predicting Alcoholism Recovery from Twitter. In Thomson, R., Dancy, C., Hyder, A., & Bisgin, H. (Eds.), Social, Cultural, and Behavioral Modeling (SBP-BRiMS 2018), Lecture Notes in Computer Science, Vol. 10899.

Gupta, A., Deokar, A., Iyer, L., Sharda, R., & Schrader, D. (2018). Big data & analytics for societal impact: Recent research and trends. Information Systems Frontiers, 20(2), pp. 185-194.

Huang, H., Gartner, G., Krisp, M., & Van de Weghe, N. (2018). Location based services: ongoing evolution and research agenda. Journal of Location Based Services, 12(2), pp. 63-93.

Huerta, E., & Jensen, S. (2017). An Accounting Information Systems Perspective on Data Analytics and Big Data. Journal of Information Systems, 31(3), pp. 101-114.

KPMG International. (2015, January). Building Trust in Analytics. Retrieved from https://home.kpmg/content/dam/kpmg/xx/pdf/2016/10/building-trust-in-analytics.pdf.

Lohr, S. (2013, February 1). The Origins of ‘Big Data’: An Etymological Detective Story. New York Times, Business, Education, Technology, Society Section. (https://bits.blogs.nytimes.com/2013/02/01/the-origins-of-big-data-an-etymological-detective-story/)

Menon, S., & Sarkar, S. (2016). Privacy and Big Data: Scalable Approaches to Sanitize Large Transactional Databases for Sharing. MIS Quarterly, 40(4), pp. 963-981.

Press, G. (2013, May 9). A very short history of big data. Forbes, Technology Section. (https://www.forbes.com/sites/gilpress/2013/05/09/a-very-short-history-of-big-data/#73fb86b065a1)

Zhang, K., Bhattacharyya, S., & Ram, S. (2016). Large-Scale Network Analysis for Online Social Brand Advertising. MIS Quarterly, 40(4), pp. 849-868.

Les effets transformateurs des mégadonnées et de l’analytique
Christy Walters et Crystal Walters

 

Introduction 

L’avènement des mégadonnées et de l’analytique transforme le monde de l’entreprise et le milieu universitaire dans la mesure où les entreprises et les établissements d’enseignement utilisent aujourd’hui des techniques novatrices pour obtenir de précieux renseignements à partir de données structurées et non structurées. Ces dernières années, la mise de l’avant des effets transformateurs des mégadonnées sur les entreprises est à l’origine d’une évolution positive de la manière dont elles sont perçues. Dans le présent article, nous analysons la littérature publiée récemment sur les mégadonnées, l’analytique et leur pouvoir transformateur. Nous examinons les directions novatrices qu’a prises l’analytique des données et étudions leur effet transformateur sur le paysage entrepreneurial d’aujourd’hui. Enfin, nous concluons en parlant des effets des mégadonnées et de l’analytique sur les pratiques de gestion de l’information qui ont cours et nous abordons la question des compétences que doivent acquérir les spécialistes de ce domaine pour apporter une plus-value aux entreprises à l’ère des mégadonnées. Toutefois, les effets positifs des mégadonnées et de l’analytique s’accompagnent de l’émergence de nouvelles difficultés que nous aborderons également ici. 

Évolution des mégadonnées 

Dans l’article intitulé « On the Origins and Development of ‘Big Data’ » qu’il a publié en 2012, Francis Diebold affirme que le terme « mégadonnées » a probablement vu le jour au milieu des années 1990 lors d’un dîner auquel a participé John Mashey, le scientifique en chef de la société Silicon Graphics Inc. (SGI). Dans une entrevue qu’il a donnée par la suite, John Mashey a affirmé avoir commencé à utiliser ce terme pour faire comprendre que « les limites de l’informatique continuent d’être repoussées » (NDT : le terme mégadonnées [Big Data en anglais] a été créé en référence au terme Big Bang, la théorie selon laquelle l’univers serait en expansion)

La première définition de la notion de mégadonnées a été donnée en 2001 par Doug Laney à l’aide des 3V dans sa note de recherche intitulée « 3D Data Management: Controlling Data Volume, Velocity, and Variety ». Deux décennies plus tard, il est généralement admis qu’au concept initial des 3V (volume, vélocité et variété) renvoyant aux dimensions des mégadonnées vient s’ajouter un quatrième V (pour véracité). Le concept des 4V a fourni un cadre favorable à la mise au point de logiciels novateurs qui ont permis de répondre aux besoins créés par la multiplication fulgurante des mégadonnées. À l’ère de l’informatique, les techniques d’analyse par ordinateur utilisées pour traiter un volume de mégadonnées aussi phénoménal (mesuré en téraoctets et même en zettaoctets) obligent à recourir à des technologies avant-gardistes de stockage, de gestion, d’analyse et de visualisation des données. Dans leur article intitulé « Transformational Issues of Big Data in Networked Business », Bart Baesens et coll. préconisent l’ajout d’un cinquième V (pour valeur) au concept des 4V. Ils affirment que cet ajout permettra de déterminer ce qu’il faut faire à l’intérieur du périmètre délimité par le concept des 4V, à savoir comment étudier et analyser de façon novatrice les mégadonnées, anticiper leurs effets transformateurs et en tirer parti.

Il ne serait pas exagéré d’affirmer que les mégadonnées pourraient être le plus important bouleversement technologique au sein des entreprises et dans le milieu universitaire depuis l’essor fulgurant d’Internet et de l’économie numérique. De nos jours, comme la plupart des données sont produites hors des frontières de l’entreprise et sont générées dans un format non structuré par des réseaux de personnes et d’appareils, des algorithmes plus puissants que jamais et des schémas de représentation des connaissances d’une qualité inégalée sont mis au point pour rendre intelligible cette masse d’information hétérogène et fragmentée. Étant donné que les mégadonnées produites aujourd’hui proviennent principalement de sources non structurées et de réseaux de personnes et d’appareils, il faut des algorithmes encore plus puissants pour traiter cette information et la rendre intelligible. 

La définition de la notion de gestion de l’information évolue à mesure que la technologie, les idées et les besoins opérationnels évoluent. Les données ont un cycle de vie fondé sur leur degré d’utilité pour une entreprise. Elles doivent être interprétées de manière à fournir des renseignements et ceux-ci doivent être compréhensibles pour devenir des connaissances qui permettront de prendre des décisions éclairées. Si les mégadonnées viennent complexifier la gestion de l’information, elles nous donnent également des possibilités d’obtenir des données exploitables comme jamais auparavant. De nos jours, les spécialistes de la gestion de l’information peuvent tirer parti des nouvelles possibilités qu’offre cette profusion de données pour aider notamment les entreprises à améliorer leurs processus décisionnels et à dégager une plus-value pour leurs clients comme pour elles-mêmes. 

Servant de carburant à l’Internet des objets et à l’intelligence artificielle, les mégadonnées et l’analytique peuvent nous aider à prévoir les problèmes avant qu’ils ne se produisent. Utilisée conjointement à l’Internet des objets, l’analyse prévisionnelle des mégadonnées peut aider les entreprises à anticiper les phénomènes plutôt qu’à y réagir. Il s’agit d’une perspective extrêmement intéressante pour les entreprises, puisque cela pourrait faire changer la manière de prendre des décisions et avoir des répercussions sur l’ensemble des activités d’une entreprise. Compte tenu de la nature de leur profession, les spécialistes de la gestion de l’information sont extrêmement bien placés pour aider leur entreprise à relever les défis auxquels elles doivent faire face à l’ère des mégadonnées. Toutefois, si l’utilisation des mégadonnées comporte des avantages, elle s’accompagne également d’une augmentation des risques, notamment en ce qui a trait à la cybersécurité. En effet, comme l’utilisation des techniques d’analyse des mégadonnées donne à l’information une valeur inégalée à ce jour, celle-ci devient l’une des plus précieuses matières premières que possède une entreprise. Les spécialistes de la gestion de l’information doivent approfondir leurs connaissances des domaines de la sécurité, de la confidentialité, de l’informatique et de la gestion des risques pour être en mesure d’assurer la protection des données et des renseignements de leur entreprise. 

Incidence des mégadonnées sur la structure organisationnelle et sociétale

Si l’idée de tirer des conclusions à partir d’une analyse des données est aussi vieille que l’invention des statistiques au XVIIIe siècle, le processus est très différent de nos jours, puisque les transactions économiques et sociales se font désormais de plus en plus en ligne, ce qui facilite le traitement par des moyens numériques d’un ensemble de fichiers sur les activités humaines pour en tirer des mégadonnées exploitables. Sur le plan de la réalisation d’enquêtes scientifiques, l’économie mondiale interconnectée peut désormais être perçue comme un laboratoire d’envergure planétaire grâce auquel les chercheurs peuvent concevoir et mener des expériences, en plus de recueillir les données nécessaires pour répondre à diverses questions

Grâce aux techniques d’analyse des mégadonnées, les entreprises pourront prendre un plus grand nombre de décisions automatiquement, ce qui aura une incidence accrue sur les processus décisionnels et les responsabilités connexes à l’échelle de l’entreprise. L’amélioration des processus décisionnels est, pour les entreprises, un des effets bénéfiques des mégadonnées, car celles-ci leur permettent de bénéficier de possibilités d’analyse causale et prévisionnelle. De plus, elles s’inspirent de diverses techniques allant de l’apprentissage automatique à la conception d’expériences fondées sur les théories et les hypothèses existantes, en passant par les statistiques classiques et l’économétrie. En permettant d’expérimenter ces théories, les mises en application de l’analytique ont déjà commencé à exercer une influence tangible sur les entreprises.

Outre l’amélioration des processus décisionnels des entreprises, une telle expérimentation comporte des avantages importants pour la société, puisque nous disposons désormais d’un nombre sans précédent de processus sociaux numérisés. Plusieurs articles récents ont étudié l’incidence des mégadonnées et de l’analytique sur la société. Dans un article intitulé « Big Data & Analytics for Societal Impact: Recent Research and Trends », Ashish Gupta et coll. ont démontré que les utilisations novatrices des mégadonnées pourraient avoir une incidence tangible sur la société, notamment par l’effet du commerce en ligne sur le commerce classique, le service à la clientèle par anticipation et les véhicules compatibles avec l’Internet des objets

Bien que les effets soient souvent positifs, il existe des difficultés inhérentes à l’utilisation des données à des fins d’expérimentation, comme en attestent les controverses suscitées par le sujet, notamment en 2014 dans la sphère des médias sociaux, lorsque Facebook et OKCupid se sont fait reprocher de mener des expériences à l’insu des utilisateurs en adaptant le contenu qui leur était présenté. La question fondamentale à l’origine de ces controverses est de savoir si une telle expérimentation profite à l’ensemble de la société, et de déterminer quel comportement éthique il convient d’adopter dans une telle situation. La littérature publiée récemment à ce sujet indique qu’il existe de solides arguments en faveur de la réalisation de ce type d’expérimentation, notamment pour les entreprises, non seulement pour éviter la prise de mauvaises décisions susceptibles de s’avérer coûteuses, mais aussi pour mieux comprendre ce qui motive les interactions humaines. Par exemple, l’utilisation des données des médias sociaux à des fins d’expérimentation pourrait permettre d’améliorer les processus décisionnels d’une entreprise et de mettre à l’essai certaines théories des sciences sociales, alors qu’il serait trop coûteux de le faire par d’autres moyens. Bien qu’une telle expérimentation soit susceptible de profiter aux entreprises, ses conséquences sur les plans de l’éthique et de la confidentialité sont préoccupantes, notamment en ce qui a trait à la protection des renseignements privés. Cette question sera abordée plus tard dans le présent article. 

Importance de la confiance dans les processus décisionnels

Les entreprises ont tendance à utiliser de plus en plus les mégadonnées pour faciliter et améliorer leurs processus décisionnels. Elles peuvent ainsi accroître l’efficacité de leurs activités et acquérir un avantage concurrentiel. Le principal avantage de l’utilisation des mégadonnées et de l’analytique dans un contexte d’entreprise est qu’elle facilite la mise en œuvre d’un processus décisionnel axé sur les données en associant diverses variables explicatives à une réponse ou à un résultat commercial. Il devient donc possible de prendre des décisions en s’appuyant sur l’analyse causale ou prévisionnelle. Dans la littérature, la confiance est généralement considérée, à l’instar de l’information (les données), comme l’un des facteurs fondamentaux. La confiance qu’inspirent les mégadonnées et l’analytique joue un rôle déterminant dans la mise sur pied de processus décisionnels axés sur les données. En effet, même si elle dispose de données de très bonne qualité, une entreprise ne pourra pas améliorer ses processus décisionnels si les cadres de l’entreprise n’ont pas confiance dans les données et refusent de se fier aux données ou aux techniques d’analyse connexes. L’une des difficultés liées aux mégadonnées et à l’analytique est de faire en sorte qu’un modèle analytique inspire confiance, puisque l’efficacité d’un modèle repose sur l’utilisation de mesures statistiques qui peuvent sembler nébuleuses aux utilisateurs finaux ou aux non-spécialistes. Comme l’utilisation des modèles analytiques joue un rôle capital dans les processus décisionnels d’une entreprise, il est essentiel que les cadres puissent comprendre ces modèles

Cette difficulté a été analysée en profondeur dans la littérature et par les entreprises. Par exemple, en 2016, KPMG International a chargé Forrester Consulting de rédiger le rapport intitulé Building Trust in Analytics: Breaking the Cycle of Mistrust in D&A, qui se penche sur l’importance de la confiance qu’inspirent les données et l’analytique. Ce rapport montre que seulement 38 % des entreprises ont une grande confiance dans l’analytique. En somme, le rapport indique que les entreprises devraient élever au rang de priorité absolue la confiance en l’analytique des données étant donné le potentiel de cette discipline. De plus, le rapport se penche sur la perception de KPMG quant à la nécessité pour les entreprises d’exploiter l’analytique de confiance comme un moyen stratégique de réconcilier les points de vue des décideurs, des scientifiques des données et des clients afin de générer des résultats durables. En plus d’obliger les entreprises à renforcer la confiance dans les données, la révolution des données exige un changement de culture d’entreprise afin de s’adapter à la nouvelle réalité, puisqu’il est désormais possible d’utiliser à moindre coût les données pour prendre des décisions fiables

Bien que graduels, les changements technologiques ont, à long terme, une incidence relativement déstabilisatrice sur les activités. Le passage des systèmes de gestion de bases de données propres aux entreprises aux systèmes en nuage a pour effet d’élargir les limites de l’entreprise. L’avènement des mégadonnées leur permet désormais de tirer parti des outils d’analyse prévisionnelle et causale pour enrichir l’ensemble des connaissances de l’entreprise. Grâce aux nombreux outils de collaboration et méthodes de partage actuels, les entreprises sont en mesure de mettre plus facilement en commun leurs connaissances et leurs perspectives. Pour que les projets de veille stratégique et d’analytique soient fructueux, ils doivent s’appuyer sur un volume considérable de connaissances en lien avec une activité ou une discipline, ainsi que sur des communications efficaces. En effet, il n’est possible de tirer profit des avantages à long terme de tels projets que lorsque les données ordinaires peuvent être transformées (par l’analytique) en données utiles et exploitables, puis communiquées efficacement aux spécialistes de l’activité ou de la discipline qui travaillent dans l’entreprise. La mise en commun des connaissances est indispensable au fonctionnement des processus décisionnels des entreprises. Dans le monde numérique d’aujourd’hui, une telle mise en commun des connaissances à l’intérieur comme à l’extérieur de l’entreprise est un moyen efficace de se démarquer de la concurrence. 

Effets déstabilisateurs des mégadonnées

Historiquement, les recherches menées par les chercheurs de la discipline des systèmes d’information étaient à la croisée de l’informatique et des données produites par les entreprises et la société en général. Ces chercheurs se sont donc concentrés sur l’étude des problèmes et des résultats à l’échelle de l’entreprise. De la sociologie à la science politique en passant par l’économie et la psychologie, la discipline des systèmes d’information étudie depuis cinq décennies les questions qui se situent au carrefour de la technologie, des données, des entreprises et de la société. Le caractère interdisciplinaire de la discipline des systèmes d’information constitue une force dans le monde interconnecté dans lequel nous vivons. Les chercheurs de cette discipline se retrouvent donc dans une position favorable pour tirer parti de l’utilisation des mégadonnées. Le fait de pouvoir disposer de fichiers et de données plus intéressants sur les plans qualitatif et quantitatif donne la possibilité aux penseurs de cette discipline de jouer un rôle fondamental dans l’enseignement, les entreprises et la politique.  

Si les nouvelles technologies, comme l’analytique des données, ont des effets déstabilisateurs, elles comportent aussi de nombreux avantages. Des transformations radicales se produisent dans bon nombre de professions, si bien que certaines d’entre elles deviendront désuètes et seront remplacées par de nouveaux emplois créés par les innovations dans le domaine des mégadonnées. L’analytique des données procure un avantage concurrentiel à une entreprise, car elle lui permet d’analyser rapidement et efficacement ses données pour obtenir des renseignements qui lui seront utiles. En raison des transformations causées par l’avènement des mégadonnées, il est nécessaire de faciliter le recyclage des employés, car la formation de ces derniers n’est plus adaptée. Les mégadonnées ont transformé les anciennes façons de faire, si bien que les employés doivent acquérir de nouvelles compétences, notamment en matière de cybersécurité, de confidentialité, de protection des données et de gestion des risques.

Même avant l’invention de l’informatique en nuage et du concept des chaînes de blocs, deux techniques qui ont eu une incidence considérable ces dernières années, les innovations en matière d’architecture des TI avaient déjà influé radicalement sur les processus de développement et leurs effets, tant et si bien que les développeurs de systèmes avaient déjà dû surmonter les difficultés liées à l’intégration des anciens systèmes autonomes dans le cyberespace. Maintenant que les entreprises s’apprêtent à explorer le monde des mégadonnées, les informaticiens ont de plus en plus besoin d’acquérir des compétences dans le domaine de l’apprentissage automatique pour être en mesure de créer des systèmes de prise de décisions automatisés. Les spécialistes de la gestion de l’information doivent connaître les rudiments de l’analytique des données et de l’apprentissage automatique, notamment en ce qui a trait à l’utilisation des renseignements exploitables découlant des innovations en lien avec les mégadonnées. Ces spécialistes peuvent générer une plus-value en facilitant la mise en forme, le stockage et la conservation des connaissances acquises par apprentissage automatique pour aider les entreprises à utiliser ces connaissances afin de prendre des décisions éclairées. La communication et la mise en commun des connaissances jouent un rôle de premier plan dans l’augmentation de la productivité à une époque d’innovations extrêmes. 

L’analytique des données a eu une incidence considérable sur le domaine des sciences sociales, car la collaboration entre les chercheurs, les spécialistes des sciences sociales et les scientifiques des données rend possible la réalisation de projets novateurs. 

L’analyse des mégadonnées à l’échelle micro donne aux spécialistes des sciences sociales la possibilité de créer des modèles psychologiques d’une complexité accrue pouvant s’avérer utiles pour les études sur le comportement. Le travail de Jennifer Golbeck est un bon exemple de l’intégration de la science des données aux sciences sociales. Dans son article intitulé « Predicting Personality from Social Media Text », la chercheuse utilise les textes publiés dans les médias sociaux pour cerner la personnalité de leurs auteurs par l’entremise d’une analyse psycholinguistique des textes. Reproduisant les prévisions générées à partir de textes par l’API de Receptiviti sur les notes obtenues aux cinq grands traits de personnalité, elle a appliqué un algorithme à des fichiers de données provenant des médias sociaux contenant les notes sur les traits de personnalité d’environ 9 000 utilisateurs pour déterminer l’exactitude des prévisions établies par Receptiviti. Cette étude a mis en lumière la ressemblance frappante entre les résultats du traitement des quatre fichiers par l’algorithme de prédiction de Jennifer Golbeck. 

Les innovations rendues possibles par l’utilisation des mégadonnées provenant de médias sociaux donnent aux chercheurs la possibilité d’explorer des territoires jusque-là inconnus, puisque l’énorme volume des mégadonnées au niveau local sur les interactions humaines leur ouvre des horizons jamais accessibles dans le monde physique, étant donné le coût ou l’impossibilité d’une telle collecte de données. La recherche sur les tendances dans le domaine du recrutement et de la gestion des ressources humaines s’intéresse depuis peu aux mégadonnées et à l’analytique des médias sociaux. L’utilisation de l’apprentissage automatique et de l’analyse prévisionnelle, combinée aux mégadonnées des médias sociaux, transforme la notion de recrutement en entreprise. Les mégadonnées pourraient mener à l’élimination des intermédiaires en matière de besoins en talents et disponibilités de ces derniers. Cette solution donnerait une meilleure idée du comportement d’un candidat et permettrait de déterminer plus facilement s’il correspond au profil recherché par l’entreprise. L’utilisation des médias sociaux pour lier divers flux de données à l’aide d’identificateurs uniques définis judicieusement joue un rôle essentiel dans l’exploitation des mégadonnées pour évaluer le comportement d’un candidat et déterminer s’il correspond au profil recherché par l’entreprise. 

Il est important de noter que l’utilité de l’information est directement proportionnelle à la qualité des données. Bien trop souvent, les entreprises et les universités considèrent que le coût et la complexité de l’investissement dans l’amélioration de la qualité sont trop élevés. On ne saurait trop insister sur l’importance de disposer de données fiables pour utiliser de manière optimale les techniques d’analyse des mégadonnées. La qualité des données est trop souvent perçue comme une priorité de moindre importance pour une entreprise. Or, à l’ère des mégadonnées, une telle façon de penser peut avoir des conséquences catastrophiques. Par conséquent, les cadres d’une entreprise devraient contribuer aux processus et aux procédures visant à garantir la qualité des données à l’échelle de l’entreprise. Vu l’ampleur des connaissances sur un secteur d’activité pouvant être extraites de données de grande qualité grâce à l’utilisation de techniques d’analyse pertinentes, la rentabilité d’un investissement dans la qualité des données est pratiquement garantie. 

Difficultés en lien avec la protection des mégadonnées et de la confidentialité

À l’ère des mégadonnées, de plus en plus d’entreprises doivent être conscientes des risques que comporte la gestion des systèmes en ce qui a trait à la protection des données et de leur confidentialité. En effet, nombre des applications conçues pour assurer la sécurité des données ne sont plus efficaces dans le cas des mégadonnées. De plus, l’utilisation croissante des solutions de stockage en nuage rend l’exploration et la collecte de données plus faciles que par le passé. Compte tenu de l’omniprésence d’Internet et du nombre des cyberattaques qui ont marqué ces dernières années, la cybersécurité est une discipline qui prend de l’ampleur, car la plupart des gens ne savent pas combien il est crucial de protéger leurs renseignements sur Internet. En particulier dans le contexte de l’entreprise, il est absolument fondamental de sensibiliser le personnel à la cybersécurité, car la sécurité de l’information, c’est-à-dire des biens de l’entreprise, en dépend. À propos de la sensibilisation à la cybersécurité, il faut savoir qu’il existe différents types de menaces. L’hameçonnage, par exemple, est une forme de cyberattaque particulièrement courante qui se produit par l’intermédiaire d’une messagerie électronique. Afin de les aider à repérer et à signaler les courriels suspects, les employés reçoivent souvent une formation sur le sujet. Pour lutter contre l’hameçonnage, il est essentiel d’expliquer aux employés quels sont les signes d’un courriel malveillant en leur faisant suivre une formation de sensibilisation à la cybersécurité qui met l’accent sur cette forme d’attaque particulière.

En ce qui a trait à la protection de la confidentialité des mégadonnées, la matière première, c’est-à-dire les données brutes, qui sert à l’obtention de renseignements intéressants est de plus en plus hétérogène et de moins en moins structurée. À mesure que les technologies intelligentes se répandent, il devient presque impossible d’assurer la protection des renseignements personnels, puisqu’une grande partie des données qui composent les mégadonnées traitées par l’analytique sont produites par les gens eux-mêmes dans le cadre de leurs activités quotidiennes. Le défi que posent l’utilisation des mégadonnées et la protection de leur confidentialité est celui de trouver des moyens d’assurer la sécurité des renseignements personnels sans entraîner une baisse considérable de la qualité des données. Dans leur article intitulé Privacy and Big Data: Scalable Approaches to Sanitize Large Transactional Databases for Sharing, Syam Menon et Sumit Sarkar proposent une approche évolutive pour aseptiser les données transactionnelles. Ils démontrent que cette approche heuristique permet d’éliminer les éléments de nature confidentielle sans nuire à l’exactitude des données originelles. L’utilisation de ce type d’approche pourrait être élargie, ce qui ferait disparaître les réticences à échanger les données qui sont suscitées par les conséquences d’une infraction à la confidentialité, comme la divulgation involontaire de renseignements confidentiels

Effets transformateurs et possibilités de gestion de l’information

Le présent article insiste sur le pouvoir transformateur des mégadonnées et de l’analytique dans le contexte entrepreneurial d’aujourd’hui et il met en lumière les difficultés que fait naître cette discipline. Il examine les questions techniques et administratives soulevées par le processus transformateur et déstabilisateur auquel sont soumises les entreprises, un processus qui demande à être exploré en profondeur alors que l’ère de l’information intelligente n’en est qu’à ses débuts. De plus, l’analyse approfondie des dernières recherches menées sur les mégadonnées et l’analytique nous a permis d’étudier l’incidence des mégadonnées sur la structure organisationnelle et sociétale, l’importance de la confiance dans les processus décisionnels, les effets déstabilisateurs des mégadonnées, ainsi que les difficultés en lien avec la protection des mégadonnées et de la confidentialité. Les recherches que nous avons menées nous permettent de conclure que le pouvoir transformateur des mégadonnées et de l’analytique va continuer de croître à mesure que cette discipline transformera les entreprises et la société d’une manière à ce jour imprévisible. 

Pour ce qui est de la profession de gestionnaire de l’information, il est évident que l’utilisation des mégadonnées et de l’analytique obligera les spécialistes de ce domaine à acquérir des compétences pour être en mesure d’aider les entreprises à régler les nouveaux problèmes de sécurité, de confidentialité, de gestion des risques et de gestion des données qui sont propres à l’ère des mégadonnées. Dans ce nouveau contexte entrepreneurial, la protection de l’information et des données acquiert une importance de premier plan pour les entreprises. Par conséquent, la stratégie relative à la gestion de l’information doit être revue pour tenir compte de la nouvelle dynamique à laquelle les données sont soumises, qui a une incidence sur le stockage et l’utilisation de l’information. Il faudra accorder une plus grande attention à la protection de l’information contre les tentatives d’accès et d’utilisation non autorisées. Compte tenu de l’augmentation du volume des données, les spécialistes de la gestion de l’information doivent se familiariser avec les options de stockage en nuage ainsi qu’avec les questions de confidentialité et de sécurité liées à cette technologie. De plus, la cybersécurité revêt désormais une importance capitale dans la mesure où le nombre des cyberattaques croît et qu’il ne cessera d’augmenter à une époque où les données d’une entreprise ont plus de valeur que jamais. C’est la raison pour laquelle les spécialistes de la gestion de l’information seront amenés à collaborer plus étroitement avec les services de leur entreprise chargés de la protection de la confidentialité, des technologies de l’information, de la sécurité et des questions juridiques.

L’expression selon laquelle l’information est synonyme de pouvoir n’a jamais été aussi vraie que depuis l’avènement des mégadonnées produites par les sources audio et vidéo, les capteurs, les médias sociaux, les machines et bien d’autres. Toutes ces données doivent être classées, stockées et protégées en fonction de leur valeur. Il s’agit d’une activité fondamentale qui place les spécialistes de la gestion de l’information dans une position idéale pour participer au processus visant à rendre les mégadonnées exploitables et donc à conférer à leur entreprise une plus-value considérable. 

Bibliographie

AGARWAL, R. et V. DHAR. « Editorial – Big Data, Data Science, and Analytics: The Opportunity and Challenge for IS Research », Information Systems Research, vol. 25, no 3, 2014, p. 443-448.

BAESENS, B., R. BAPNA, J. R. MARSDEN, J. VANTHIENEN et J. L. ZHAO. « Transformational issues of big data and analytics in networked business », MIS Quarterly, vol. 40, no 4, 2016, p. 807-818.

CHEN, H., R. CHIANG et V. STOREY. « Business intelligence and analytics: From big data to big impact », MIS Quarterly, vol. 36, no 4, 2012, p. 1165-1188.

DIEBOLD, F. On the Origin(s) and Development of the Term ‘Big Data’, PIER Working Paper, no 12-037, 2012. Sur Internet : <URL : http://dx.doi.org/10.2139/ssrn.2152421>.

DHAR, V. « Data Science and Prediction », Communications of the ACM, vol. 56, no 12, 2013, p. 64-73. 

DUTTA, D. Social Media and Technology Trends in the HRM: Cases in Recruitment and Talent Management, 2018, DOI : 10.5772/Intechopen.79342.

EVANS, G. « Disruptive technology and the board: The tip of the iceberg », Economics and Business Review, vol. 3, no 1, 2017, p. 205-223.

GOLBECK, J. Predicting Personality from Social Media Text, AIS Transactions on Replication Research, vol. 2, article 2, 2016.

GOLBECK, J. « Predicting Alcoholism Recovery from Twitter » dans THOMSON, R., C. DANCY, A. HYDER et H. BISGIN (éd.), Social, Cultural, and Behavioral Modeling, SBP-BRiMS 2018, Lecture Notes in Computer Science, vol. 10899, 2018.

GUPTA, A., A. DEOKAR, L. IYER, R. SHARDA et D. SCHRADER. « Big data & analytics for societal impact: Recent research and trends », Information Systems Frontiers, vol. 20, no 2, 2018, p. 185-194.

HUANG, H., G. GARTNER, M. KRISP et N. VAN DE WEGHE. « Location based services: ongoing evolution and research agenda », Journal of Location Based Services, vol. 12, no 2, 2018, p. 63-93.

HUERTA, E. et S. JENSEN. « An Accounting Information Systems Perspective on Data Analytics and Big Data », Journal of Information Systems, vol. 31, no 3, 2017, p. 101-114.

KPMG International. Building Trust in Analytics, janvier 2015. Sur Internet : <URL : https://home.kpmg/content/dam/kpmg/xx/pdf/2016/10/building-trust-in-analytics.pdf>.

LOHR, S. The Origins of ‘Big Data’: An Etymological Detective Story, New York Times, cahier Business, Education, Technology, Society, 1er février 2013. Sur Internet :<URL : https://bits.blogs.nytimes.com/2013/02/01/the-origins-of-big-data-an-etymological-detective-story/>.

MENON, S. et S. SARKAR. « Privacy and Big Data: Scalable Approaches to Sanitize Large Transactional Databases for Sharing », MIS Quarterly, vol. 40, no 4, 2016, p. 963-981.

PRESS, G. A very short history of big data, Forbes, section Technology, 9 mai 2013. Sur Internet : <URL : https://www.forbes.com/sites/gilpress/2013/05/09/a-very-short-history-of-big-data/#73fb86b065a1>.

ZHANG, K., S. BHATTACHARYYA et S. RAM. « Large-Scale Network Analysis for Online Social Brand Advertising », MIS Quarterly, vol. 40, no 4, 2016, p. 849-868.

Tribute to Leonora Casey

 

By Gita Werapitiya, Patricia Hirsche, Alexandra (Sandie) Bradley, CRM, FAI, and Pat Burns, CRM

 

Estimated reading time: 5 minutes, 30 seconds. Contains 1103 words

 

On December 17, 2019, Leonora Kathleen Casey, a colleague and a dear friend of ARMA Calgary chapter and ARMA Canada completed her ‘adventure’ (as Leonora called it) after battling with pancreatic cancer.  Leonora was surrounded by love, warmth, family and prayers.

Born in Brandon, MB, Leonora was the younger sister to two older brothers. She was predeceased by both her parents and one brother.

Leonora spent 25 years of her working life in Calgary. It was there that she became a very active member of the Calgary chapter of ARMA International. Leonora served in many capacities on its Board of Directors, including chapter President. She went on to serve on the ARMA Canada team and was its first webmaster.  She was also a contributor and supporter of the ARMA International Education Foundation (AIEF).

Leonora took pride in chronicling achievements and events of ARMA Canada and the chapters across Canada and posting them on the ARMA Canada web site. Her efforts as webmaster earned her the first “Webmaster of the Year” award from ARMA International.

Through her association with ARMA International, we got to know Leonora on a professional and personal level.

 

“Leonora and I served on the board of the ARMA Calgary chapter in the early 1990s. As a board member, Leonora demonstrated her leadership skills, was conscientious and meticulous when it came to work ethics and was a strong supporter of the initiatives undertaken by the chapter.  Outside of ARMA, Leonora enjoyed fun and adventure. The year-end board socials that she so graciously hosted at her home were fun events. The hours of chatter and laughter over good food and drinks were evenings that I will fondly remember, although it scared the heck out of her feline children. 

Early last year, Leonora and her partner, Stuart, undertook one of the ‘greatest of adventures’ – a three-month drive across Canada in their motorhome, visiting with friends and family along the way. Reconnecting with Leonora during her stop in Calgary in August 2019 was an absolute joy. It was just like the good old days! She was eager to show us her e-bike, gave us a tour of Fred, her motorhome, and shared with us the many adventures during their trip across Canada.”

(Gita Werapitiya)

 

“Leonora was a kind soul. In addition to serving together on the ARMA Calgary Board, I had a relationship with Leonora that was very meaningful to me, but more importantly, helped me take a giant step into an enjoyable time of ARMA service. When I was a newbie to the Board of ARMA Calgary, Leonora recognized a real greenhorn when she started working with me on ARMA matters. Here is how she very kindly and diplomatically helped me advance from greenhorn to useful ARMA volunteer. She would ask me to come early to a meeting or event on a pretense, usually going over numbers or finances from the membership portfolio that I was working on, and then she would prod me to ask questions about how things worked in ARMA Calgary. Leonora shared her experiences as a new volunteer and advice about how to work with the Board as a whole and with individuals on the Board whose style and manner were often downright puzzling! She was generous with advice and anecdotes and we had many laughs together as she helped me adjust to the workings of the Calgary Chapter.”

(Patricia Hirsche)

 

“I first met Leonora when I was a member of the ARMA Canada region team. I was on an ARMA official visit to the Calgary chapter, and participated in a chapter workshop as well as a meeting with the Board. I remember that Leonora had lots of ideas and contributed well to discussions in the meetings. Most memorable of all, however, was her departure when she left our supper meeting, beautifully dressed in business attire (including high-heeled shoes), zipping away on her Vespa scooter. Following on that first impression of her, I remember that she always made a fashion statement and took notice of design and colour in her surroundings. Spending time with her at a conference was always an education about the city we were in, its history, and other relevant information. As I got to know her more, she revealed that her past had included time in the Canadian Armed Forces, as well as diverse experience in records and information management.

She was quick to pick up on technology and provided good support for the first ARMA Canada website. Her move to Vancouver Island, to the Victoria region, was Calgary’s loss and the Vancouver Island chapter’s gain. As a member of the Local Conference committee for the ARMA Canada conference in Nanaimo in 2012, she discovered such local delights as the “Nanaimo Bar Trail” and made sure that every delegate received a Nanaimo bar soap in their delegates’ bag. Following her retirement from the Government of British Columbia’s Information Management branch, she was happy to enjoy the boating and recreational opportunities that the island provided for her and Stuart. She lived well, if too short a life. I wish now that I had spent more time with her here on the coast; she seemed so young and with so much life ahead. Cancer is not fair….”

(Alexandra (Sandie) Bradley)

 

I first met Leonora at a Canadian Conference in 1995 in Calgary where she was part of the organizing committee. We were there to “watch and learn” as we were given the task of organizing the next one in my hometown. Leonora was someone you could connect with if you had any questions and would be the first to offer anything from templates and content to advice to help us on our way. We stayed connected throughout our professional relationship, though thousands of kilometers apart.

When I became Region Manager of ARMA Canada, I had to select and appoint people to fill many roles, and Leonora not only took on the webmaster role but also looked after our financial business for a time until we could appoint a person responsible. The Canadian Team loved her diligence and commitment to ARMA Canada. Even after retirement we kept in touch, and Bernita Cogswell and I got to spend a day with Leonora and Stuart (or Salty, as she referred to him) on their Canada-wide trek this past June. It was a marvelous reunion, and to this day I am so thankful for the time we spent together.

(Pat Burns)

 

“Please do not stand there weeping, remember with joy, remember the humour and remember the music.” These were Leonora’s parting wishes to her friends.

Let’s keep Leonora in our thoughts and prayers as she continues on her next ‘adventure’!

Smart wearables and Canadian Privacy: Consumer concerns and participation in the ecosystem of the Internet of Things (IoT)
Emily Speight

 

Estimated reading time: 26 minutes, 15 seconds. Contains 5250 words

 

From smart cars to smart clothing, the Internet of Things (IoT) has the potential to revolutionize the way we live. Smart City IoT initiatives are facilitating more efficient traffic flows for commuters, and enabling bike sharing programs to reduce emissions and improve quality of life (Vinke). IoT solutions are redefining the aging experience by allowing individuals to remain independent in their homes longer. Voice activated Smart Home technologies provide solutions that remove barriers for individuals with mobility impairment; tasks such as shopping, manipulating window blinds and adjusting a thermostat can be achieved by issuing a voice command (AgingInPlace.org). Smart Wearables technologies provide means for healthier living; bras and shirts made from smart textiles can monitor blood pressure and heart functioning (electrocardiogram monitoring), providing insight for self-quantification (Awazade). Despite the potential of IoT technology, Canadians have been hesitant to adopt it, and those who do often abandon it. This paper will examine resistance to the category of IoT technology defined by Perera et al. as "smart wearable" and discuss measures to improve consumer engagement.

While abstract concepts of IoT are translated into technologies that impact daily life, defining IoT remains a challenge. The Internet of Things has been defined in numerous ways. According to Perera et al., the Internet of Things is "a network of networks where, typically, a massive number of objects/things/sensors/devices are connected through communications and information infrastructure to provide value-added services" (585). Tzafestas describes the Internet of Things as "things/objects in our environment being connected so as to provide homogeneous communication and contextual services" (98). A central theme regularly underlying the various descriptions and definitions is the connection of everyday objects to the internet using sensors. IoT technology is pervasive and facilitates the discreet, passive collection of massive amounts of data, which may be used to improve the lives of humans (Perera et al. 585). Smart wearables, simply put, are electronic, sensing technological devices worn on – or implanted into – the human body. Some examples of smart wearables available today include rings that track physical activity and provide the wearer with customized alerts for phone notifications; socks that measure pressure distribution on feet; watches that allow the wearer to track physical activity and perform functions normally associated with smartphones, such as sending and receiving SMS text messages; and armbands that track the wearer’s heart rate.

The Ecosystem of IoT

The IoT is expanding rapidly. Gartner predicts that by the year 2020 up to 30 billion devices will be connected as part of a 1.9 trillion-dollar industry (Gartner). Amidst the hype, it is interesting to note that adoption rates of IoT technology are low and attrition rates among users of IoT technologies are high (Garg 1). Garg describes an "ecosystem of IoT" that is composed of connected devices, the data generated by these devices, and stakeholders. The stakeholders involved in this ecosystem of IoT include "people/users, organizations and regulators" (2). Garg argues that only when the needs of stakeholders are met can the ecosystem of IoT function at its highest capacity; failure to meet these needs results in disengaged stakeholders and erodes the IoT ecosystem (3). The high rate of user abandonment of IoT technologies suggests that in the current environment consumer needs are not being fully met. Analysis of user requirements is challenging given the broad range of applications that function within the IoT ecosystem.

Consumer Concerns Regarding IoT

Canadian consumers have a number of concerns regarding smart wearables. User design is important, as consumers need to understand the technology and be comfortable operating the appropriate hardware and software, such as smartphones and mobile apps (Puri v). The technology must be convenient for the consumer, with low impact on the consumer’s day-to-day life. Wearables that are unfashionable or cannot be worn discreetly are less likely to be adopted, as are wearables that require significant effort to maintain due to short battery life or other design weaknesses (Emrich). Cost and value both play a role in adoption and use: IoT technology must be affordable at entry and must offer long-term value to maintain consumer use (Emrich).

Value provided to the consumer is affected in a variety of ways. In some instances, IoT technology may cause more harm than benefit. The Owlet Baby Care "smart sock," a technology that monitors infant vital signs, has been the subject of criticism. Doctors report that frequent false alarms by these devices have resulted in increased stress levels for parents, additional strain on the medical system, and even unnecessary testing being performed on infants (Thompson). Similar concerns regarding data quality and the impact of false positives have been reported for other smart wearable devices such as the Apple Watch (McGrath). Concerns about the safety and adverse health effects of smart wearables also raise questions regarding the level of value provided versus the risk of smart wearables (Physicians for Safe Technology).

The most commonly cited deterrent to smart wearables is privacy. According to Kerr et al., privacy and security are the foremost concerns of consumers regarding the use of smart wearables (1068). Research by Epstein et al. on consumer abandonment of smart wearable devices revealed privacy considerations as the most prevalent reason for abandonment; in fact, Epstein et al. found that privacy concerns were cited 45.2% of the time as driving consumer decisions to abandon smart wearables (1111). The concerns were multi-faceted: consumers were uncomfortable with location tracking that revealed their movements to others and objected to their information being sold to third parties for advertising purposes (1110). 

Consumer apprehension regarding the collection of data by smart wearables is not without merit. The data captured by smart wearables can be very personal. Consider the example of smart underwear made from smart fabric that tracks and measures levels of urinary leakage. The smart underwear currently can be used in the treatment of incontinence and is expected to have future applications in monitoring fertility and diabetes (Brusco). The Office of the Privacy Commissioner of Canada (OPC) recognizes the human body as "the vessel of our most intimate personal information" (OPC, "The Strategic Privacy Priorities"). In recent years, advancements in smart wearables have allowed the integration of biotechnology to collect consumer health data (Wissinger 779). Without adequate assurances and practices in place to protect such highly personal data and information, consumers will remain hesitant to participate in the ecosystem of IoT. Robust privacy legislation is a necessary ingredient for an effective ecosystem of IoT.  

Privacy Challenges

Many nations are currently grappling with how to balance privacy against other competing interests, such as the need for national security, the need to foster innovation, and the need to support research. The European Union (EU) recently updated its privacy legislation, replacing the EU Data Protection Directive (Directive 95/46/EC) with the General Data Protection Regulation (GDPR), which took effect in May 2018. Canada has chosen to employ an omnibus approach to privacy: privacy laws are enacted by the federal government, and the provinces are given the choice to comply with the federal legislation or enact substantially similar provincial legislation. This approach ensures that all Canadians enjoy a certain standard of privacy protection. While Canada has legislation to address privacy in both the public and private sector, this paper will focus solely on the private sector. The Personal Information Protection and Electronic Documents Act (PIPEDA) is Canada's federal legislation that governs how private-sector organizations are expected to manage personal information. Some provinces have elected to enact provincial privacy legislation governing the private sector; these laws provide protections that meet or exceed the protections mandated by PIPEDA. An examination of PIPEDA is informative about the general private-sector privacy environment, as the legislation serves as a minimum standard of privacy protection in Canada. Canada appoints a Privacy Commissioner, who heads the Office of the Privacy Commissioner of Canada (OPC); the OPC is independent of government, and the Commissioner reports to Parliament. The role of the OPC is to oversee compliance with privacy legislation and advocate for privacy rights. 

As an advocate for privacy rights, the Office of the Privacy Commissioner of Canada has consistently called for increased regulatory powers for the OPC and significant reforms to existing privacy legislation. PIPEDA was passed in 2000 in a digital and political environment distinctly different from the one we face today; it was enacted before the widespread emergence in everyday life of technologies such as Web 2.0, biometric facial recognition software, artificial intelligence, big data, IoT, and cloud computing. The legislation is dated and does not effectively address the challenges that technological advances have created for society. A recently enacted amendment to PIPEDA has legislated mandatory data breach reporting for companies that collect the personal information of Canadians in a commercial capacity. While this amendment brings a much-needed reform to the Act, Canada's privacy legislation is still in dire need of an update. Whether PIPEDA, in its current form, meets the "adequacy" requirements of the recently enacted GDPR remains the subject of much debate. 

PIPEDA broadly defines both personal information and personal health information. Personal information is defined as "information about an identifiable individual" (PIPEDA 4) and personal health information is defined as: 

(a) information concerning the physical or mental health of the individual; 

(b) information concerning any health services provided to the individual; 

(c) information concerning the donation by the individual of any body part or bodily substance of the individual or information derived from the testing or examination of a body part or bodily substance of the individual; information that is collected incidentally to the provision of health services to the individual (PIPEDA 3-4). 

The broad definition employed by PIPEDA extends privacy protections to Canadians who choose to employ IoT technology. Further, the OPC has been very clear in its position that data collected by IoT technology, and specifically by smart wearables, fits the definition of personal information (OPC "Wearable Computing" 1; OPC "The Strategic Privacy Priorities" 2-4) and is protected under PIPEDA.

PIPEDA’s Fair Information Principles 

Canada's PIPEDA legislation is based on ten fair information principles that organizations subject to the legislation must follow, namely: (a) accountability; (b) identifying purposes; (c) consent; (d) limiting collection; (e) limiting use, disclosure, and retention; (f) accuracy; (g) safeguards; (h) openness; (i) individual access; and (j) challenging compliance (OPC "Fair Information Principles").

a) Accountability

The principle of accountability places responsibility on businesses to comply with PIPEDA and requires that the organization, as well as any third parties it works with, act in accordance with the legislation (OPC, "Fair Information Principles"). 

b) Identifying Purposes

The principle of identifying purposes asserts that consumers must be informed of the reason for the collection of their personal information either prior to or at the time of collection (OPC, "Fair Information Principles"). 

c) Consent

The principle of consent requires that organizations obtain meaningful consent prior to the collection, use or disclosure of personal information (OPC “Fair Information Principles”). 

d) Limiting Collection

The principle of limiting collection requires that businesses limit the collection of personal data to only what is required to fulfil the purposes identified to consumers (OPC “Fair Information Principles”).

e) Limiting Use, Disclosure, and Retention

The principle of limiting use, disclosure, and retention obligates businesses to restrict the use and disclosure of personal data to only those purposes for which consent has been given. Personal data should not be retained after it is no longer required for legal reasons or identified purposes and should be destroyed in a secure manner (OPC “Fair Information Principles”). Personal information that is kept longer than required must be anonymized (OPC “The Internet of Things” 16).

f) Accuracy

The principle of accuracy requires businesses to take measures to ensure personal information is accurate and to establish procedures that allow consumers to have inaccurate information corrected (OPC "Fair Information Principles").  

g) Safeguards

The principle of safeguards obligates businesses to take measures to protect personal information against loss and theft (OPC “Fair Information Principles”).  

h) Openness

The principle of openness requires businesses to make information about their policies and practices for managing personal information readily available to consumers (OPC "Fair Information Principles"). 

i) Individual Access

The principle of individual access provides consumers with the right of access to their personal information. Consumers have a right to verify their personal information and to have incorrect or incomplete information about them corrected (OPC "Fair Information Principles").  

j) Challenging Compliance 

The principle of challenging compliance allows consumers to challenge businesses regarding compliance with PIPEDA and the fair information principles (OPC "Fair Information Principles").

Identifying Purpose and Consent

A close relationship exists between the principles of identifying purposes and consent because meaningful consent requires that users be properly informed. Written notices are often employed to inform consumers why their personal information is being collected and how their data will be used. Typically, written collection notices are vague, written in complex language, and overly lengthy, which renders them of little value to consumers. Often consumers do not read collection notices because the time investment required is significant: Cranor and McDonald estimated that reading all the privacy policies an American Internet user encounters each year would require an annual time investment of 201 hours (565). Reading collection notices is clearly excessively burdensome for consumers and, given the opacity of most notices, arguably provides little benefit. Consumers are left confused about what data are being collected, how data are being used and shared, and the impact on their personal privacy. It is unsettling to consider that a decision to participate in the ecosystem of IoT requires consumers to consent to risks and practices they do not understand. 

Challenges regarding consent are further compounded by the egregious misappropriations of the principle of consent that sometimes occur. Research by Wissinger revealed attitudes of blatant disregard for consumer privacy protections, with user consent cited as justification for negligence toward security and privacy obligations (781). Such apathy is incongruent with both the letter and the spirit of PIPEDA; the OPC is clear in its position that consent does not remove PIPEDA-imposed obligations to protect personal privacy and provide adequate safeguards. Clearly, the current approach to informing consumers and obtaining meaningful, informed consent is inadequate. Requirements for clear, succinct, plain-language notices would provide some benefit in improving existing models for identifying purpose and consent; however, these changes would be only one facet of a comprehensive solution to current shortcomings. 

While businesses may have been able to use opacity to conceal how consumer data were used in the past, potential secondary uses of personal data are gaining attention. Recent reports of Fitbit data being used in Canadian courts (Waggot and McCutchan) and of American courts issuing subpoenas for Amazon Echo recordings (CBS Interactive Inc.) are shedding light on the privacy issues associated with smart devices of all types, including wearables. The OPC acknowledges that notice (identifying purpose) and consent are areas of challenge in today's digital environment and has taken action to address these shortcomings (OPC "PIPEDA Fair Information Principle 3"). The OPC ("Wearable Computing") also notes that the current binary model, in which users must either opt in completely or opt out completely and consent is collected only once, is insufficient for smart wearable technology. The OPC has made several recommendations intended to address challenges associated with notice and consent, such as personalized privacy options that can be controlled by the consumer (10). 

The most significant notice-and-consent challenges in the IoT ecosystem arise from smart wearables, such as lifelogging technologies, and from voice-activated technologies that are constantly "listening"; these devices do not limit data collection to the individual wearing or operating them but instead collect data from the surrounding environment. Current methods, which were designed to provide notice, obtain consent, and address the other privacy principles with an individual at a single point in time, are insufficient to protect the personal privacy of individuals who happen to be in the environment of a consumer wearing these types of smart devices. Certainly, many individuals may wish to opt out of having their image captured by a stranger's lifelogging technology, or of intimate data from a conversation with friends being captured by one party's voice-activated device. Questions clearly emerge regarding how to notify individuals that their personal information may be collected by another consumer's smart wearable. If a method is in place to notify the individual that their information may be collected, how does that individual provide consent or opt out? Who is responsible for ensuring compliance with the fair information principles and protecting the privacy rights of individuals whose personal information is being collected by a consumer's wearable device?

Limiting Use

The value of data in the ecosystem of IoT and today's environment of "big data" cannot be overstated. Data have been referred to as "the new oil" (Pringle) and "the new gold" (Farkas 5). The potential for personal data to be used as a resource for revenue generation, and their significance in "improving" customer service through personalization, position data as a highly valuable asset. Given the value of data, businesses have significant motivation to justify increased data collection rather than critically evaluating what data are truly needed. Voice-activated technologies that are always "listening" demonstrate how expansive data collection can become. The OPC has already voiced concerns regarding the handling of data from voice-activated devices and has stated that limitations must be put in place to ensure that private conversations are not being recorded and sent to company servers (OPC "The Internet of Things" 20). Decisions regarding acceptable collection should not be left to the sole discretion of private enterprise; frameworks and rules need to be in place to provide guidance regarding acceptable collection. 

While PIPEDA also creates limitations on the retention of information, this is another area where businesses and consumers often have conflicting interests. Although anonymization of data has been offered as an acceptable option that allows organizations to retain data longer than necessary, there are challenges surrounding the effectiveness of anonymization processes. In many instances, given the amount of data collected, it is possible to trace the data back to the individual it pertains to (OPC "The Internet of Things" 16). For anonymization to be an acceptable alternative, processes must be in place that ensure data cannot be re-identified to individuals.
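
The re-identification risk can be illustrated with a small, entirely hypothetical example. Even after names and device identifiers are stripped, a combination of quasi-identifiers (such as a postal code prefix, birth year, and sex) may be unique within a dataset, allowing a record to be linked back to a named individual through an outside source. The sketch below is illustrative only; the field names and records are invented, and the check shown is far simpler than formal approaches such as k-anonymity assessments.

```python
# Illustrative sketch only: a toy "anonymized" dataset with direct identifiers
# removed. The field names and records are hypothetical, not drawn from any
# real smart-wearable service.
from collections import Counter

records = [
    {"postal_prefix": "K1A", "birth_year": 1987, "sex": "F", "avg_heart_rate": 72},
    {"postal_prefix": "K1A", "birth_year": 1987, "sex": "M", "avg_heart_rate": 65},
    {"postal_prefix": "T5K", "birth_year": 1990, "sex": "F", "avg_heart_rate": 80},
    {"postal_prefix": "T5K", "birth_year": 1990, "sex": "F", "avg_heart_rate": 77},
]

# Count how many records share each combination of quasi-identifiers.
quasi_ids = [(r["postal_prefix"], r["birth_year"], r["sex"]) for r in records]
group_sizes = Counter(quasi_ids)

# Any record whose quasi-identifier combination is unique (group size of 1)
# could potentially be linked back to a named individual using an outside
# source, such as a voters list, that contains the same attributes.
for r in records:
    key = (r["postal_prefix"], r["birth_year"], r["sex"])
    if group_sizes[key] == 1:
        print("Potentially re-identifiable record:", r)
```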

Accuracy

Given the value of IoT-collected data to both consumers and businesses, it could be easy to mistakenly assume that a discussion of data accuracy in the ecosystem of IoT would be superfluous. Accuracy is indeed a challenge for IoT devices whose sensors collect vast amounts of data. Efforts to control the costs of smart devices may necessitate the use of cheaper, less accurate sensors, and users have reported inaccurate data as a common reason for abandoning IoT wearables (Epstein et al. 1110). The lack of mechanisms to correct inaccurately recorded sensor data creates a significant challenge to ensuring accurate data. The accuracy of data, or the lack thereof, is especially concerning in the ecosystem of IoT, where the personal data generated are analysed and used for various purposes that directly affect individuals. Smart wearables are often used for diagnosis and decision-making in a medical context (OPC "Wearable Computing" 11), where accuracy is extremely important. The OPC also cites concerns regarding the impact of inadequate notification on accuracy: if consumers are unaware of what data are being collected, they are not equipped to request the data in order to verify their accuracy (OPC "The Internet of Things" 20). 
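
One possible correction mechanism is a plausibility check applied before sensor readings feed downstream summaries or decisions. The sketch below is a minimal illustration under stated assumptions, not a description of how any particular device works; the heart-rate thresholds and sample values are invented solely to show the idea.

```python
# Illustrative sketch: flag heart-rate readings that fall outside a plausible
# range or jump implausibly from the last accepted reading. Thresholds and
# sample data are assumptions, not taken from any particular device.
readings = [68, 70, 71, 240, 72, 69, 0, 70]  # beats per minute, one per minute

PLAUSIBLE_RANGE = (30, 220)   # assumed resting-to-maximal range for an adult
MAX_JUMP = 60                 # assumed maximum plausible change between samples

flagged = []
last_good = None
for i, bpm in enumerate(readings):
    out_of_range = not (PLAUSIBLE_RANGE[0] <= bpm <= PLAUSIBLE_RANGE[1])
    sudden_jump = last_good is not None and abs(bpm - last_good) > MAX_JUMP
    if out_of_range or sudden_jump:
        flagged.append((i, bpm))
    else:
        last_good = bpm

# Flagged samples would be excluded from summaries or queued for user review
# rather than silently feeding analysis or medical decision-making.
print(flagged)
```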

Security, Safeguards, Accountability, and Openness

Security is a leading concern of consumers in the ecosystem of IoT. The importance of the safeguards principle cannot be overstated, as failure to comply can have devastating consequences. Many smart wearables, such as pacemakers, implantable cardioverter-defibrillators, and insulin pumps, play a role in maintaining health; if hacked, they could cause serious harm or even death. Unfortunately, the rush to get smart wearable devices to market quickly has come at the expense of security. A study conducted by HP (as cited in OPC "The Internet of Things") revealed that approximately 70% of IoT devices had security vulnerabilities, 70% of devices failed to use encryption for communications, and 60% did not use encryption for software updates (21). Security and safeguards must be implemented to ensure consumer confidence and continued participation. 

On November 1, 2018, PIPEDA's mandatory breach reporting rules came into force, providing consumers with increased protections and potentially motivating businesses to adopt a more rigorous approach to security. The principle of accountability complements the principle of safeguards and affords protections to consumers when data storage or processing is outsourced, which is especially relevant in today's global economy. Furthermore, openness is essential for building consumer confidence, as it provides an avenue for consumers to voice complaints. The OPC has called for an end to self-regulation in the private sector regarding privacy and continues to request powers to proactively ensure compliance ("Annual Report 2018" 2). "Trust but verify" was a central theme in the OPC's most recent annual report, which argues that the OPC should be granted the ability to inspect the privacy practices of private companies in a regulatory capacity, without the need for a formal complaint ("Annual Report 2018" 2). Equipping the OPC with greater powers would be beneficial in building consumer confidence, ensuring compliance, and demonstrating in a meaningful way that Canada is committed to privacy protection.  

Daniel Therrien, the current Privacy Commissioner of Canada, has indicated that PIPEDA is too permissive in an era of IoT and that the legislation allows companies excessive leeway to exploit personal information for company gain (OPC "Annual Report 2018" 2). Ann Cavoukian (as cited in Jones), former Information and Privacy Commissioner of Ontario, has argued that the business model that disregards privacy is becoming obsolete and that privacy considerations are integral when designing processes, products, and technologies. While businesses may balk at the additional time and expense required to build privacy and security protection into IoT solutions and business models, Cavoukian (as cited in Jones) has argued that privacy protection is good for business and can help businesses differentiate themselves. As consumers become increasingly aware of commercial abuses of personal data, they will likely re-assess the value they gain from their smart wearables against the increased risk to their privacy, and businesses that have built in privacy protection will have a competitive advantage. Cavoukian (as cited in Privacy Analytics) argues that "data privacy is the minimal cost of doing good business," and the costs of ignoring security (data breaches) or of retrofitting privacy after the design phase can be staggering (as cited in Jones).

Data Ownership

Discussion around control of personal information and personal data brings forth questions of ownership. Canadians, as a group, have a low level of personal data awareness; they have a limited understanding of how their personal data may be used by third parties (International Institute of Communications 14). Confusion around data in the IoT ecosystem is not limited to Canadians. A survey of 465 American adults who used smart wearables to collect health data found that approximately half of the respondents believed they "owned" the personal health data collected, and 30% believed that ownership was shared between them and the company that collected their data ("Survey Reveals Consumer Views" 13). Uncertainty regarding data ownership may in part be attributed to the absence of Canadian legislation that provides a property right in data (Scassa 16). 

Canadian legislators have been deliberate in taking an approach that relies on existing laws, such as copyright, confidential information, and personal information protection laws, to protect interests in data, rather than passing data ownership laws. Canada is not unique in this approach: the United States and the EU have both taken similar positions, preferring to rely on existing property laws that intentionally exclude an ownership right in data (Farkas 15; Determann 55). Scassa and Farkas are both hesitant to recommend the creation of an ownership right in data and take a cautious approach, recommending that existing property laws could be amended to address current shortcomings (Scassa 17; Farkas 15). Determann, on the other hand, is hostile to the notion that data ownership would provide any benefit and lists several harms that such a law would precipitate, including "suffocat[ion] of free speech, information freedom, science, and technological progress" (55). 

Determann’s concerns about an ownership right in data are not without merit. Providing a property right in data would clearly disadvantage one or more groups in the IoT ecosystem, which would serve as a deterrent and result in decreased stakeholder participation. Given that the majority of calls for a property right in data come from business and government, rather than consumers, it is unlikely that consumers would be advantaged, and smart wearable attrition rates would further increase. Data ownership rights would serve as a catalyst for decreased efficiency in the IoT ecosystem.  

Data ownership rights should not be pursued as an avenue to improve consumer adoption and retention rates for smart wearable devices. Robust privacy legislation can address concerns around security, collection, consent, use and disclosure that are frequently raised around smart wearables. Canada’s existing PIPEDA legislation is currently inadequate to protect consumers in the IoT environment; however, amendments could be made to bring the legislation into alignment with the needs of today’s digital economy.

Conclusion

The IoT offers endless possibilities to improve daily life in society. Smart wearables provide solutions that can facilitate healthier living and independent lifestyles. Despite the advantages, consumer acquisition and retention of IoT products and services remain a challenge. Consumer demands for assurances of privacy and security must be met if the IoT ecosystem is to function effectively. Comprehensive privacy legislation can address consumer concerns and serve as a more effective solution than data ownership laws in creating an environment that fosters trust and consumer participation in today’s digital society.

Canada’s PIPEDA legislation currently governs consumer privacy protection in the private sector; however, the legislation has become outdated and in its current form is inadequate to provide protections in today’s digital society. Significant changes to the legislation are necessary to address the considerable advances in the capabilities of technology. As it is currently written, PIPEDA is quite permissive, and departures from the spirit of the Act by commercial companies further erode consumer confidence in the IoT ecosystem. Updating PIPEDA and granting authority to the OPC to take proactive measures to ensure compliance with the Act are essential to build and maintain trust with consumers. The EU has recently updated its privacy legislation to address the new digital environment and ensure continued consumer privacy protections, positioning it as a leader in data privacy. Canadian legislators must act swiftly to ensure Canadian competitiveness in the digital economy. While businesses may be concerned about the consequences of tighter privacy restrictions, they will ultimately benefit from increased user trust and the ability to use privacy as a differentiator. Robust privacy legislation will lay the necessary groundwork for gaining consumer confidence, increasing participation in the IoT ecosystem, and maximizing benefit to society.

Works Cited

AgingInPlace.org. “IoT and Seniors.” Aging In Place, April 2019. https://www.aginginplace.org/iot-and-seniors/ Accessed 16 April 2019.

Awazade, Shubham. “IoT in Intelligent Mobile Health Monitoring System By Smart Textile.”  Textile Mates: Your All Time Partner. 11 March 2017. https://www.textilemates.com/iot-intelligent-mobile-health-monitoring-system-smart-textile/. Accessed 22 October 2018. 

Brusco, Sam. “A Rather Attractive Solution to Urinary Incontinence.” ECN, 22 October 2015. https://www.ecnmag.com/blog/2015/10/rather-attractive-solution-urinary-incontinence 

CBS Interactive Inc. “Judge Orders Amazon to Produce Echo Recordings in Double Murder Case.” CBS News. 12 November 2018. https://www.cbsnews.com/news/amazon-echo-judge-orders-company-produce-alexa-recordings-double-murder-case-2018-11-12/ 

Cranor, Lorrie Faith, and Aleecia M. McDonald. “The Cost of Reading Privacy Policies” I/S: A Journal of Law and Policy for the Information Society, vol. 4, no. 3, 2008, pp. 543-568.  https://kb.osu.edu/bitstream/handle/1811/72839/ISJLP_V4N3_543.pdf?sequence=1 

Determann, Lothar. “No One Owns Data.” UC Hastings research paper, no. 265, 14 February 2018, pp. 1-44. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3123957. Accessed 9 November 2018. 

Emrich, Tom. “Wearable Market in Canada Expected to Explode, IDC Canada Says.” Betakit. 3 June 2014. https://betakit.com/wearable-market-in-canada-expected-to-explode-idc-canada-says/ Accessed November 18, 2018.

Epstein, Daniel A., et al. “Beyond Abandonment to Next Steps: Understanding and Designing for Life After Personal Informatics Tool Use.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2016 May, pp. 1109-1113. https://doi.org/10.1145/2858036.2858045.  

Farkas, Thomas J. “Data Created by the Internet of Things: The New Gold Without Ownership?” Revista la Propiedad Inmaterial, no. 23, 1 August 2017, pp. 5-17. http://dx.doi.org/10.18601/16571959.n23.01 Accessed 9 November 2018.

Garg, Radhika. “Open Data Privacy and Security Policy and its Influence on Embracing the Internet of Things.” First Monday. vol. 23, no. 5-7, 2018.  http://firstmonday.org/ojs/index.php/fm/article/view/8166/7211 https://dx.doi.org/10.5210/fm.v23i5.8166 

Gartner. “Gartner Says it’s the Beginning of a New Era: The Digital Industrial Economy.” Gartner Press Release, 7 October 2013. https://www.gartner.com/newsroom/id/2602817 . Accessed 24 October 2018.

International Institute of Communications. “Personal Data Management: The User’s Perspective.” International Institute of Communications. 2012, pp. 1-46. www.iicom.org/images/iic/themes/Qual_Report_pdm_final.pdf 

Jones, Hessie. “Dr. Ann Cavoukian: Why Big Business Should Proactively Build for Privacy.” Forbes. 17 August 2018. https://www.forbes.com/sites/cognitiveworld/2018/08/17/ann-cavoukian-why-big-business-should-proactively-build-for-privacy/#7b302fae2e3d Accessed 21 November 2018.

Kerr, Don, et al. “Security, Privacy, and Ownership Issues with the use of Wearable Health Technologies.” Wearable Technologies: Concepts, Methodologies, Tools, and Applications, edited by M. Khosrow-Pour et al., Information Resources Management Association (IRMA), 2018, pp. 1068-1083. https://doi.org/10.4018/978-1-5225-5484-4.ch048 

McGrath, Jenny. “Lack of Regulation Means Wearables Aren’t Held Accountable for Health Claims.” Digital Trends. January 19, 2019. https://www.digitaltrends.com/wearables/wearable-devices-leading-to-over-diagnosis/ Accessed February 10, 2019.

Physicians For Safe Technology. “Wearable Wireless Devices.” Ca. 2018. Accessed November 10, 2018. https://mdsafetech.org/wearable-devices/ 

Office of the Privacy Commissioner of Canada (OPC). “Annual Report to Parliament on the Personal Information Protection and Electronic Documents Act and the Privacy Act.” 2018. https://www.priv.gc.ca/media/4831/ar_201718_eng.pdf

Office of the Privacy Commissioner of Canada (OPC). “Fair Information Principles.” 9 January 2018. https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/p_principle/ Accessed October 29, 2018.

Office of the Privacy Commissioner of Canada (OPC). “PIPEDA Fair Information Principle 3 – Consent.” 8 January 2018. https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/p_principle/principles/p_consent/  Accessed October 24, 2018.

Office of the Privacy Commissioner of Canada (OPC). “The Internet of Things: An Introduction to Privacy Issues with a Focus on the Retail and Home Environment.” February 2016 https://www.priv.gc.ca/media/1808/iot_201602_e.pdf

Office of the Privacy Commissioner of Canada (OPC). “The Strategic Privacy Priorities.” 9 September 2016. https://priv.gc.ca/en/about-the-opc/opc-strategic-privacy-priorities/the-strategic-privacy-priorities/  Accessed November 2, 2018.

Office of the Privacy Commissioner of Canada (OPC). “Wearable Computing: Challenges and Opportunities for Privacy Protection.” 2014, pp. 1-21. https://www.priv.gc.ca/media/1799/wc_201401_e.pdf

Perera, Charith et al. “The Emerging Internet of Things Marketplace From an Industrial Perspective: A Survey.” IEEE Transactions on Emerging Topics in Computing, vol. 3 no. 4, 2015, pp. 585-598. https://doi.org/10.1109/TETC.2015.2390034 

Personal Information Protection and Electronic Documents Act (PIPEDA), S.C. 2000, c. 5, s. 2(1), pp. 3-4. https://laws-lois.justice.gc.ca/PDF/P-8.6.pdf Accessed 24 October 2018.

Pringle, Ramona. “‘Data is the New Oil’: Your Personal Information is Now the World’s Most Valuable Commodity.” CBC News, Technology & Science, 25 August 2017. https://www.cbc.ca/news/technology/data-is-the-new-oil-1.4259677. Accessed 20 October 2018.

Privacy Analytics. Embed Data Privacy Proactively to Win Big Time. 29 August 2018.  https://privacy-analytics.com/de-id-university/embed-data-privacy-proactively-to-win-big-time/. Accessed 21 November 2018.

Puri, Arjun. “Acceptance and Usage of Smart Wearable Devices in Canadian Older Adults.” 2017. University of Waterloo. https://uwspace.uwaterloo.ca/bitstream/handle/10012/11861/Puri_Arjun.pdf?sequence=5 Accessed April 2, 2019. 

Scassa, Teresa. “Data Ownership.” CIGI Papers, no. 187, 2018. https://www.mdpi.com/2624-6511/1/1/6/htm Accessed 16 October 2018. 

“Survey Reveals Consumer Views on Data Ownership.” Journal of AHIMA, vol. 85, no. 5, 2014, p. 13.  

Thompson, Dennis. “Pediatricians Say No to Wearable Smartphone Baby Monitors.” HealthDay News. (24 January 2017). https://www.upi.com/Health_News/2017/01/24/Pediatricians-say-no-to-wearable-smartphone-baby-monitors/4181485293107/ Accessed November 10, 2018.

Tzafestas, Spyros G. “Ethics and Law in the Internet of Things World.” Smart Cities, vol. 1 no. 1, 2018, pp. 98-120. https://doi.org/10.3390/smartcities1010006 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3251542

Vinke, Nikkie. “Smart Cities, Smart Transit: Bike Shares as Urban Transport Solution.” Finch & Beak. 6 Feb 2015. https://www.finchandbeak.com/1108/smart-cities-smart-transit-bike-shares.htm Accessed 14 January 2019.

Waggot, George and Wilson McCutchan. “Canada: Fitbit Evidence: Coming to a Court Near You.” Mondaq. 21 November 2014. http://www.mondaq.com/canada/x/355492/employment+litigation+tribunals/Fitbit+Evidence+Coming+Soon+To+A+Court+Near+You Accessed 30 October 2018.

Wissinger, Elizabeth. “Blood, Sweat, and Tears: Navigating Creepy Versus Cool in Wearable Biotech.” Information, Communication & Society, vol. 21, no. 5, 2018, pp. 779-785. https://doi.org/10.1080/1369118X.2018.1428657  

 

List of Abbreviations

EU European Union
GDPR General Data Protection Regulation
IoT Internet of Things
OPC Office of the Privacy Commissioner of Canada
PIPEDA Personal Information Protection and Electronic Documents Act
In memory of Ivan Saunders… (January 9, 1944 to January 29, 2019)
Barbara Bellamy, CRM, Bernita Cogswell and Jolynne Jackson

 

Estimated reading time: 3 minutes, 49 seconds. Contains 765 words

 

Ivan Saunders

 

We all have different memories of Ivan, but one thing we all agree on is that Ivan was a wonderful mentor, colleague and friend. We are so privileged to have worked with such an amazing, strong individual. He had a courageous strength and tenacity that every single person who ever came into contact with him could feel in his presence. He will be missed by his ARMA community and friends. 

Ivan volunteered at the Chapter level and for the Canada Region. As the ARMA Canada conference director, Ivan would visit conference sites in his unassuming, quiet manner. He viewed the site and then started asking tough questions of staff from hotels and conference centres. The conference team often heard that he was a tough negotiator, and these comments were conveyed with warmth and respect. He acted as the liaison between the host chapters and the venue, always listening and taking their input and insight into consideration.  

He used to say that he did his due diligence, which he certainly did. As conference director, Ivan brought a keen eye, attention to detail and sharp negotiating skills that were an asset to ARMA Canada. And Ivan loved to negotiate; he was made for the role.

Ivan also put in countless hours at his home Saskatchewan chapter. He always generously mentored chapter members, forming strong relationships with like-minded professionals. 

Ivan was my unofficial mentor. We were 40 years apart, but that didn't stop us from building a friendship. Ivan and I spent almost a decade planning local and national ARMA events, but we only saw each other in person once or twice a year. He was someone I always looked forward to catching up with – hearing stories about his children and grandchildren, his camp in New Brunswick, talking business or learning his perspective on issues over an impeccable glass of wine. He was humble and modest…until you learned of his wine collection! He had quite the palate and selection of wines.

Jolynne Jackson

Ivan's work life included a position with Parks Canada documenting Canada's military and architectural history. After leaving Parks Canada, Ivan worked as an historian on the restoration of Government House in Regina. From the 1980s until his retirement in 2013, Ivan's work focused on capturing and preserving the history of Saskatchewan: as the Archivist for the City of Regina, in his work for the Saskatchewan Archives Board and finally for the Provincial Capital Commission.

I got to know Ivan more on a personal level after he moved to New Brunswick. He and I would go for lunch sometimes to catch up. It was nice, as I enjoyed his company. He was a very smart man who was not only a historian and a records and information management professional but a very well-rounded individual with varied interests. Ivan invited my husband Gary and me to his camp at Woodlands for dinner one Sunday. He cooked an amazing meal for us. We asked if we could help; he said he was good and suggested we go out and look around the property. We went through the gardens and could not believe all the vegetables, herbs and fruits he was growing.

Bernita Cogswell

We are so privileged to have worked with such an amazing, strong person, who cared so deeply about his family, his colleagues and the success of ARMA Canada and the Saskatchewan chapter. He taught us the value of volunteering, the art of the deal and how to persevere even in tough situations.

We will always remember Ivan. He was a leader, a mentor, and a strong negotiator. 

Let me close with a funny story about Ivan. For those of you who knew him, this will give you a chuckle. 

He carried a cell phone that he never turned on. One time a colleague and I were to meet him at a certain place in a large conference hall at a certain time. He didn't show up, so we called him, but his phone went straight to voicemail. We took turns looking for him around the hall, and after an hour he emerged from behind a curtained area and said he had been waiting for us – a bit of a miscommunication about the exact spot where we were to meet. When we asked about his cell phone he said, "Ya, I have it with me but I never turn it on." We just smiled and laughed.  

Jolynne Jackson

Integrate Digital Preservation into Your Information Governance Program: Advance your understanding and advocacy for long-term electronic records preservation

Lori J. Ashley

 

Estimated reading time: 16 minutes, 15 seconds. Contains 3253 words

 

Introduction

Public, private, and voluntary sector organizations are required to keep information and records for as long as necessary to meet a range of legal, financial, operational, regulatory compliance, and cultural memory purposes, depending on their unique mission, history and objectives.  An exponentially increasing number of valued information assets are born and will spend their entire “lives” in the digital realm. For those assets that must be retained long-term, there are risks and serious threats from technology obsolescence that should be systematically considered and proactively addressed. 

Since the 1970s, archivists around the world have recognized that the obsolescence of digital media and storage devices was a risk to accessing authentic, readable, and usable electronic records. Dependence on computer software applications, many of which use proprietary file formats to create, save, store, manage, and retrieve information assets, poses an additional threat. The emergence of technology-neutral, open-standard formats (e.g., JPG, PDF/A, SVG), now commonly used by records producers while the assets are still in their custody, has partially addressed the risk, but these formats are often inconsistently applied and are not a one-time solution for indefinite long-term or permanent records. The emergence of Information Governance as a coordinating accountability framework for enterprise lifecycle management is a positive development that can be leveraged to address the vulnerability of long-term digital information.

Given how heavily all sectors rely on computer technologies to deliver goods and services, and the economic impact of those technologies, digital preservation strategies and capabilities should become part of every organization's information governance program. Records and archives management professionals are uniquely positioned, as part of their established advisory, appraisal, scheduling, and custodial roles, to help their organizations assess the maturity of hosted and on-premise information systems to handle the demands of long-term and permanent electronic information assets.

How Long is Long-Term? 

Long-term is defined in the digital preservation community's de facto standard as a period of time that is “long enough to be concerned with the impacts of changing technologies, including support for new media and data formats, or with a changing user community. Long Term may extend indefinitely.” Technology refresh cycles, however, are relatively short, in the three- to five-year range. Like hardware and storage media, file formats can become outdated, obsolete or unsupported, often without users realizing it. Think of once-popular formats like Lotus 1-2-3 and WordPerfect, which have come and gone. When obsolescence happens, accessibility becomes a challenge. 

The benchmark for long-term retention commonly applied by preservation practitioners is 10 years due to known risks associated with bit corruption, broken links, abandoned or decommissioned applications, and media degradation. Traditional data archiving approaches and common electronic records fixity methods (e.g., saving to a uniform file format like PDF) are proving insufficient to address exponential growth and diversity in the types of digital information that need to be protected from rapid technological change.  

The fragility of digital content has been identified as a significant potential risk to businesses by leading analyst firms that include Forrester, Info-Tech Research Group, and Gartner. In 2015, Gartner noted: “As formats change, software is retired and hardware becomes obsolete, the data that organizations might want to keep can be lost forever.” These warnings about risks associated with hardware and software obsolescence, link rot, storage degradation, and vendor abandonment are being directed at CIOs, CTOs, and enterprise architects to raise awareness of how these issues add to the organization's technical debt.

Defining Digital Preservation 

Digital preservation is a formal set of processes and activities that maintain information stored in digital formats for the indefinite long term, or permanently, in order to ensure continued access. The preservation of digital information is widely considered to require more proactive and continuous attention than the preservation of other media. 

Digital preservation actions must be taken over the lifetime of electronically stored information due to changes in software and hardware environments, the deterioration of storage media such as CDs, DVDs, and computer hard drives, and the need to keep pace with evolving business, legal and regulatory requirements for access and re-use. Digital objects in a preservation system are actively migrated over time to newer formats using policies and automated workflows. Preservation actions are captured in the metadata associated with each digital object to demonstrate authenticity and chain of custody. Digital objects are organized into collections and shared in accordance with the organization's unique permissions and rights.
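
As a rough illustration of how preservation actions can be captured against each object, the sketch below appends event records (event type, timestamp, detail, agent, outcome) to an object's metadata, loosely in the spirit of PREMIS-style preservation metadata. The object, field names, and events are hypothetical and greatly simplified compared with what a production repository records.

```python
# A minimal sketch of recording preservation actions against a digital object,
# loosely modelled on PREMIS-style event metadata. Field names are simplified
# assumptions for illustration only.
import hashlib
from datetime import datetime, timezone

digital_object = {
    "identifier": "obj-0001",            # hypothetical object identifier
    "current_file": "annual_report_1998.doc",
    "events": [],
}

def record_event(obj, event_type, detail, agent, outcome="success"):
    """Append an audit-trail entry so authenticity and chain of custody
    can be demonstrated later."""
    obj["events"].append({
        "event_type": event_type,        # e.g. "ingestion", "fixity check", "migration"
        "event_datetime": datetime.now(timezone.utc).isoformat(),
        "event_detail": detail,
        "agent": agent,
        "outcome": outcome,
    })

# Example: log a fixity check and a format migration against the object.
checksum = hashlib.sha256(b"...file bytes would be read here...").hexdigest()
record_event(digital_object, "fixity check", f"sha256={checksum}", "preservation-service")
record_event(digital_object, "migration",
             "annual_report_1998.doc -> annual_report_1998.pdf", "migration-workflow")

for event in digital_object["events"]:
    print(event["event_datetime"], event["event_type"], event["outcome"])
```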

Many commercial and open source digital preservation tools are based on the Open Archival Information System (OAIS) reference model. OAIS is a conceptual framework for functions and actions that a digital repository must execute to ingest, store, preserve, and provide access to digital objects for a community of users. 

First released in 2003 as ISO 14721 and updated in 2012, the standard has helped to raise awareness and understanding of concepts relevant for archiving digital objects, clarify terminology for comparing data models and archival architecture, expand consensus on the infrastructure and processes needed for digital information preservation and access, as well as guide the development of supporting standards.  

Common activities for digital repositories conforming to the OAIS reference model include file ingest and characterization, integrity validation and protection, collection management, system and data monitoring, migration of assets from obsolete file formats, replication to multiple geographic locations, robust metadata management to facilitate search and retrieval, and secure access. 

A companion standard, Audit and Certification Criteria for Trustworthy Digital Repositories (ISO 16363:2012), includes the OAIS technical functions in addition to identifying organizational and security management capabilities and metrics. Dedicated resources (funding, skilled staff, tools, storage, and organizational commitment) are required for a digital repository to persistently monitor risks and adapt to changing conditions. 

Two preservation repositories in the world have been certified to ISO 16363 thus far: the Audio/Visual Repository at the Indira Gandhi National Centre for the Arts and the United States Government Publishing Office (GPO). A self-assessment checklist is available from PTAB (Primary Trustworthy Digital Repository Authorisation Body Ltd) that practitioners can study to learn about the metrics and documentation required to demonstrate that a repository is trustworthy and able to protect and preserve its digital holdings over time.  

Active Digital Preservation

The primary role of a digital preservation system is to ensure objects and their metadata remain accessible, useable, and readable over the long-term by providing a proactive way to migrate file formats as they become obsolete or are no longer supported by a vendor or by the organization. Core functions of an integrated digital preservation solution that conforms to the ISO 14721 standard provide for:

Content Ingest

The Ingest function allows users to upload information into the repository. Methods include simple drag and drop via a web browser, transfer through a secure holding area, APIs, and an optional bulk upload service. Ideally the content hierarchy can be imported from external systems, re-arranged before upload, or modified after the objects are ingested.

Integrity Checks and Metadata Extraction

System workflows during the Ingest process perform quality assurance actions that include virus checking, checksum verification, file format checking against a format registry, and metadata extraction and validation. Some organizations choose to migrate (normalize) file formats on ingest using migration pathways. The system also detects duplicate files.
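
The sketch below gives a rough sense of these ingest-time checks, under stated assumptions: checksums are SHA-256, the "format registry" is a toy lookup keyed on file extension (production systems consult registries such as PRONOM and use signature-based identification tools), and duplicate detection compares checksums against previously ingested files. It does not reflect any particular product's implementation.

```python
# A minimal sketch of ingest-time quality assurance checks: checksum
# calculation, format identification against a toy registry, and duplicate
# detection by checksum. All names and values are illustrative assumptions.
import hashlib
from pathlib import Path

# Hypothetical registry of accepted formats; real registries are far richer.
FORMAT_REGISTRY = {".pdf": "PDF", ".tif": "TIFF", ".wav": "WAVE audio", ".csv": "CSV"}

seen_checksums = set()  # checksums of files already in the repository

def ingest_check(path: Path) -> dict:
    data = path.read_bytes()
    checksum = hashlib.sha256(data).hexdigest()

    issues = []
    if path.suffix.lower() not in FORMAT_REGISTRY:
        issues.append("format not in registry")
    if checksum in seen_checksums:
        issues.append("duplicate of an existing object")
    seen_checksums.add(checksum)

    # Extracted technical metadata travels with the object from this point on.
    return {"file": path.name, "sha256": checksum, "size_bytes": len(data),
            "format": FORMAT_REGISTRY.get(path.suffix.lower(), "unknown"),
            "issues": issues}

# Example usage over a hypothetical submission folder:
# for f in Path("submission").iterdir():
#     print(ingest_check(f))
```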

Archival Storage

It is common archival practice to preserve a master file and to create access copies in order to protect the authenticity and integrity of the original object and its metadata. Archival storage means that files are saved to multiple servers in multiple data centers within a given region. All objects held in storage are integrity checked, and checksums are calculated on access or at regular time intervals. Archival storage depends on other preservation services, including media renewal, security protections, and the availability and enforcement of preservation metadata standards. 
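
A minimal sketch of what a scheduled integrity check across replicated copies might look like follows. The replica paths are hypothetical, the reference checksum is assumed to have been recorded at ingest, and real systems operate across geographically separate data centers rather than local directories.

```python
# Illustrative sketch of a periodic fixity audit across replicated copies.
# Paths and object names are assumptions; a mismatch or missing copy would
# trigger repair from a healthy replica and a preservation-event log entry.
import hashlib
from pathlib import Path

REPLICAS = [Path("/archive/replica_a"), Path("/archive/replica_b")]  # hypothetical

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(object_name: str, reference_checksum: str) -> dict:
    """Compare every replica of an object against the checksum recorded at ingest."""
    results = {}
    for replica in REPLICAS:
        copy = replica / object_name
        if not copy.exists():
            results[str(replica)] = "missing copy"
        elif sha256_of(copy) != reference_checksum:
            results[str(replica)] = "checksum mismatch (possible bit rot)"
        else:
            results[str(replica)] = "ok"
    return results

# Example: audit("obj-0001.tif", "<checksum recorded at ingest>")
```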

Data and Collection Management

A range of capabilities in a digital preservation system allows repository administrators to manage the metadata that describes the content in their collections. Options include providing a custom metadata schema, selecting or editing one or more standard metadata schemas, adding custom search indexes, viewing audit trails, creating and modifying content hierarchies, and synchronizing metadata and collection structures from catalogs. It may also be possible to classify personal data to meet GDPR and other privacy requirements.

Access

Repository administrators can facilitate browsing, searching, and viewing of content as well as display technical and descriptive metadata through a user interface. Search tools support full-text search of permitted file formats and metadata as well as limited search within a selected folder. Administrators can launch access workflows to transform files, send notifications, and create content packages for download or delivery. The solution may include a web-based portal that can be customized for internal access or controlled public access to collections, including fast playback of audio-visual content. Organizations can also choose to control and provide access to collections through their existing web portals.

Preservation Planning and Action

The purpose of a digital preservation repository is to ensure the longevity and viability of content and metadata over time. Repository administrators have options to identify file formats, select preservation and presentation actions, and apply migration pathways to individual assets or collections. Standard tools include the ability to create new "Digital Master" preservation copies, create presentation copies for sharing, and implement and monitor all preservation actions via a standard workflow engine.
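
To make the idea of a migration pathway concrete, the sketch below maps at-risk source formats to preferred target formats and decides, object by object, whether migration is needed. The pathways shown are illustrative assumptions, not a statement of any product's or institution's policy.

```python
# Illustrative sketch of policy-driven migration pathways: a lookup of obsolete
# or at-risk source formats to preferred preservation formats. The pathways and
# collection contents are hypothetical.
MIGRATION_PATHWAYS = {
    "WordPerfect (.wpd)": "PDF/A",
    "Lotus 1-2-3 (.wk4)": "CSV",
    "TIFF (uncompressed)": "JPEG 2000",
    "WAV": "WAV",  # already an acceptable preservation format; no action needed
}

collection = [
    {"id": "obj-001", "format": "WordPerfect (.wpd)"},
    {"id": "obj-002", "format": "WAV"},
    {"id": "obj-003", "format": "Lotus 1-2-3 (.wk4)"},
]

for obj in collection:
    target = MIGRATION_PATHWAYS.get(obj["format"])
    if target is None:
        print(f"{obj['id']}: format {obj['format']} not covered by policy, flag for review")
    elif target == obj["format"]:
        print(f"{obj['id']}: no migration required")
    else:
        # The original is kept as the digital master; the migrated copy becomes
        # the new preservation or presentation copy, and the action is logged.
        print(f"{obj['id']}: migrate {obj['format']} -> {target}")
```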

Administration

Through standard and customized reports, administrators can monitor all functions of the repository, including storage and file types. User permissions, roles, and options for two-factor authentication are also part of system administration. Tags can be assigned to assets and folders to set user access as well as to define which actions can be performed by each role.

Who Uses Digital Preservation Technology?

Curators, librarians, and archivists have long recognized that digital preservation capabilities and resources would be needed to protect their digitized and born-digital assets of historical and cultural value. This preservation community includes national, state and provincial libraries, archives and special collection repositories, as well as museums, private and corporate archives, and academic institutions around the world.  

Many of these organizations are dealing with aging digitized collections of documents, photographs and audio/visual assets, as well as experiencing exponential increases in the volume and diversity of born-digital assets that are being transferred or donated for permanent preservation. The Internet and social media channels enable repositories to make their collections more widely and easily accessible, providing additional incentives to organize and tag digital assets for search and retrieval. 

Public Sector Archives

Library and Archives Canada (LAC), one of the world's largest public sector library and archival institutions, released its strategy for a digital preservation program in 2017, describing a multi-phase initiative to preserve its existing seven-petabyte historical collection and prepare for millions of born-digital government records to be accessioned in the future. Numerous agencies will transfer permanent records from the enterprise GCDOCS (OpenText) system, so LAC is working to implement scalable, automated file ingest from provincial and federal agencies to meet the demand.

Academic Archives and Libraries

Founded in 1846, the State University of New York at Buffalo (UB) is the largest campus in the 64-campus public university system. The University Archives preserves and provides public access to varied collections dating back to the 1800s that include student and campus publications, papers of prominent people and area organizations, as well as the private documents of faculty members. The Archives had identified a large amount of content on storage media that was sometimes difficult to identify and classify, and which was at risk of file degradation. A digital preservation system was implemented to speed discovery times and ensure that historical resources remain accessible for the long term. UB integrated the digital preservation system with its existing ArchivesSpace catalog to increase the amount of digital material available to the university community and the public. Faculty members use the digital archive as part of their curriculum, and students reference the collections to complete their capstone projects.

Preservation of electronic information and records is also vitally important for commercial organizations whose global communications, governance, intellectual property protection, brand and asset management, knowledge management, and critical operational processes now rely heavily on born-digital content. 

Corporate Archives

Established in 1846, the Associated Press (AP) is an independent news cooperative headquartered in New York City, with photojournalists working in over 100 countries to tell the world's stories. In 2015 the Corporate Archives implemented a cloud-based digital preservation system to systematically acquire, organize, preserve and make accessible the organization's corporate governance records, including Board of Director minutes and monthly CEO memorandums, changes to charters and bylaws, oral histories, documentaries, and financial reports. The historical news collection includes the original wire copy from notable events, such as the assassination of U.S. President John F. Kennedy, as well as award-winning photographs. AP's digital corporate archive has become a trusted source of invaluable resources for departments across the organization, including marketing, legal services, strategic planning, and public relations.

Digital Preservation Moving to the Mainstream

All modern organizations rely heavily on records and information managed exclusively in digital format. In any large enterprise there are likely to be records managed in hundreds of different file types and formats: images, audio and video files, websites, social media, electronic messages, case files, maps, health care documents, as well as artefacts and assets of historical interest. Many of these assets must be retained over successive generations of hardware and software technologies to remain authentic and usable.

Retention periods for records and information vary widely from brief (e.g., 30 days) to indefinite (e.g., termination plus 30 years) to perpetual (e.g., permanent). Use cases for long-term digital records preservation for the energy, utility, pharmaceutical, financial services and insurance, and consumer goods industries are well documented. Legal liability, digital transformation, mergers/acquisitions, regulatory compliance, application decommissioning, cybersecurity, and other mainstream risk management concerns have made defensibility of data transfer and archiving a compelling concern for IT infrastructure and architecture.

Operational efficiency use cases include secure retention of corporate governance materials, intellectual property, and brand assets. Industry use cases include the retention of pharmaceutical research and clinical trial data to comply with global food and drug regulations; the retention of life insurance and pension records for the lifetime of the client and beyond; and retention for life of equipment in the construction, energy, and utility sectors.  A public sector use case is the mandate for recorders of real property documents to make them accessible and reproducible forever.

Most business and enterprise content management applications that are used to manage and store business records do not have adequate capabilities to perform preservation actions. Metadata practices across an enterprise or between business units are often incomplete or inconsistent, hampering the search and re-use of documents over time. 

In vast collections of electronic records, tracking file obsolescence and monitoring media degradation are complex and resource-intensive efforts. Digital preservation systems designed to conform to the OAIS model are purpose-built to address all the core elements of successful long-term digital preservation, combining durable storage, format migrations, and ongoing monitoring. 

Successful and sustained digital preservation involves compliance with business rules, records management policies, and data governance principles. Record producers need guidance and support to determine the most efficient, systematic, and controlled approaches and triggers to ensure preservation actions are taken proactively. Record and information management professionals can help to meet these needs by: 

  • Analyzing and tracking file formats in use (a minimal sketch of this activity follows the list)
  • Identifying systems and repositories where long-term electronic records are stored
  • Assessing the active preservation capabilities of systems and electronic record repositories
  • Analyzing retention schedules to identify and prioritize ‘at risk’ digital files
  • Updating transfer protocols to ensure preservation actions are taken as needed, regardless of whether the records are active, inactive or archival
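
As a modest illustration of the first item, the sketch below (Python; the share path is hypothetical) walks a folder tree and tallies files by extension. Extension counts are only a rough proxy for formats – a production inventory would use signature-based identification such as DROID/PRONOM – but even this level of detail helps prioritize ‘at risk’ files against retention schedules.

    import os
    from collections import Counter
    from pathlib import Path

    def format_inventory(root: str) -> Counter:
        """Tally files under a repository folder by extension.

        Extensions are a rough stand-in for formats; signature-based
        identification (e.g., DROID/PRONOM) is more reliable.
        """
        counts = Counter()
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                counts[Path(name).suffix.lower() or "<no extension>"] += 1
        return counts

    if __name__ == "__main__":
        # Hypothetical path to a shared drive or repository export.
        for ext, total in format_inventory("/data/shared-drive").most_common(20):
            print(f"{ext:15} {total}")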

A Call to Action 

While digital preservation has primarily been the focus of archival communities in the past, the world’s reliance on computer technologies is moving the discipline into the purview of records management and information governance. Like other allied professions, records management must continue to adapt its practices to address emerging challenges associated with electronically stored information while applying principles and proven methods that ensure compliance and mitigate risk.

Ensuring that unique long-term digital information can be easily found, used and trusted in the future is critical to the mission and success of your organization – regardless of sector or industry. Records managers, archivists, and information governance professionals are well positioned to positively impact their organization’s data strategies and preservation practices by focusing on lifecycle requirements and leveraging established practices for protecting the integrity, authenticity, and usability of electronic records. 

Key initiatives that may provide opportunities for ARMA and allied professionals to engage stakeholders and advance preservation capabilities for long-term digital assets include: 

  • Business case development for new or upgraded systems 
  • Planning for legacy application decommissioning 
  • Developing preservation and transfer requirements for technology procurements 
  • Digitization projects (paper to digital and film to digital) 
  • File classification and defensible disposition initiatives 
  • File and systems migrations, e.g., Office 365 
  • Business reorganization, mergers, consolidations and acquisitions 
  • Employee onboarding and training

Digital preservation requires resources, discipline, and commitment.  Through consultation with content creators and users, information governance and records management practitioners can help educate their leadership and peers about the fragility of digital information and promote available preservation tools and community-endorsed practices.

Appendix

Preservica is a privately held company with offices in Boston, Massachusetts (US) and Abingdon, Oxfordshire (UK) that offers SaaS and on-premise solutions which combine essential technical functions of long-term digital preservation into a single integrated platform. Trusted by more than 220 organizations in 14 countries around the world, Preservica’s standards-based software has been designed from the ground up to tackle the unique challenges associated with ensuring that fragile digital information remains accessible and trustworthy over decades. 

Preservica’s active digital preservation solution is architected with extensive workflows and APIs to ingest and manage long-term and permanent digital information transferred from a wide variety of source systems. Files are ingested in their native format and actively migrated to newer formats over time and at scale, ensuring the digital information remains useable and readable. Preservica is storage-agnostic, offering flexibility for where and how data is stored and helping to avoid vendor lock-in. 

Since 2012 Preservica teams have collaborated with practitioners from public sector archives and libraries, cultural heritage institutions, corporate archives, and academic institutions to shape the development of tools and techniques capable of ensuring the authenticity of digital information and records that organizations need or want to retain and access long-term.  In addition to regularly gathering and analyzing requirements from our vibrant user community to inform the company’s product roadmap, Preservica supports research and advocacy with leading professional associations that include: Information Governance Initiative (IGI), Archives and Records Association (ARA), Information and Records Management Society (IRMS), ARMA International, Council of State Archivists (CoSA), the National Association of Government Archives and Records Administrators (NAGARA), Digital Preservation Coalition (DPC), and AIIM.  

www.preservica.com

 

Lori J. Ashley

Industry Market Development Manager

Preservica

 

Lori Ashley is Preservica’s Industry Market Development Manager, where she analyzes cross-industry and sector-specific requirements for long-term and permanent records retention and use. Research into archives and records management needs in corporate, government, and voluntary sector organizations is used to develop digital preservation use cases, support marketing initiatives, and inform Preservica’s product roadmap.

Lori is Preservica’s liaison to the Council of State Archivists (CoSA) and the National Association of Government Archives and Records Administrators (NAGARA). She is a long-time member of AIIM, SAA, and ARMA, and has served in numerous board positions for the ARMA Madison (WI) chapter.

Lori joined Preservica in 2017 after 14 years as an independent management consultant and educator who advised government and commercial organizations on information governance and records management.  She is co-developer with Dr. Charles Dollar of the Digital Preservation Capability Maturity Model (DPCMM) and self-assessment tool (www.DigitalOK.org) which has been used by CoSA’s membership and more than 120 organizations worldwide.  Lori served as Records Coordinator for the Wisconsin Department of Electronic Government. Before her public sector service, she was a business and regulatory strategist for an energy company.

Case Study: Migrating a Billion-Dollar Government Agency to a New Records System

 

By Jas Shukla, Gravity Union 

 

Estimated reading time: 22 minutes, 54 seconds. Contains 4583 words

 

Introduction

Organizations are increasingly rolling out electronic records and content management initiatives in an environment where compliance requirements constantly evolve. For example, organizations must comply with fast-changing regulations such as the EU’s General Data Protection Regulation (GDPR), Canada’s Anti-Spam Law (CASL), the Digital Privacy Act, and other world-wide privacy and data policies.  

There is also an increasing amount of unstructured data that executives and information management professionals are concerned with. In a recent survey of records and information management professionals, when asked “which of the following represent the top records management challenges for your organization,” the number one response, at 40%, was “the volume of unmanaged digital documents outside of RIM control.” File shares, ungoverned collaboration workspaces, mobile devices, and the growing volume of content in software-as-a-service (SaaS) applications are all proliferating.

Yet despite the growing importance of compliance and managing information, about 50% of Electronic Content Management (ECM) projects fail.

ECM projects fail for different reasons. In our experience, we see failures most commonly due to legacy technology that is not supported, lack of leadership, lack of adoption due to a poor user experience, or a poor rollout process such as a ‘big bang’ approach. 

This article takes an in-depth look at a successful ECM project with a provincial government agency and details how that project was rolled out over a period of 18 months. This case study covers: 

  • How to comply with government standards and schedules for digital applications
  • How to engage leadership and department end-users in an incremental rollout 
  • How to migrate large volumes of content, including unstructured content 

This article details best practices and lessons learned to help inform your next ECM project.

Context

In 2018, a BC provincial government agency embarked on a journey to adopt a new Electronic Document and Records Management System (EDRMS). For privacy reasons, we cannot directly name the agency. This section describes the context of the organization, the challenges with the existing EDRMS, and the legislative backdrop in the province. 

About the Provincial Agency and Project Team

This public-sector agency has a workforce of several hundred employees at its head office and thousands of part-time and full-time staff across the province. It generates net income of over $1 billion CAD. The primary users of the EDRMS are approximately 800 people spread across 50 departments at the head office.

The agency project team for the EDRMS consisted of IT team members, a records manager, a project manager, and an implementation partner.

Information Management at the Agency 

The provincial agency had a ten-year-old legacy records management system. The system was experiencing performance issues due to the growth of its database over the years. It was maintained manually by a records manager who periodically identified content as records and archived it.

Further, the building that hosted the records management servers was sold and the servers were scheduled to move. A significant risk was that the application might become inoperable during that move. The existing software that the records management solution was built on was no longer supported by the vendor; as such, the path to recovery in the event of a failure was unclear and represented a major risk to ongoing operations.

To mitigate the operational risk should the records management application fail, the agency decided to move forward with the implementation of a new EDRMS.

Legislative Context 

The BC government and public agencies are mandated to classify information under a legislative framework called the Information Management Act (IMA). The IMA applies to all ministries, to courts in a limited way, and to designated public sector organizations. The IMA requires public sector organizations to hold, transfer, archive and dispose of information in accordance with an information and classification schedule.  

Two key classification schedules are the Administrative Records Classification System (ARCS) and the Operational Records Classification System (ORCS). The ORCS and ARCS information schedule is the file plan for the BC Provincial Government. The goals of the ORCS and ARCS information schedule are to:

  • Ensure records are kept for as long as required
  • Identify records of enduring value for preservation
  • Ensure that other records are routinely destroyed when they are no longer needed

For this project, all records had to follow the ORCS and ARCS information schedules including retention and disposition. 

Project Goals 

The leadership at the public agency wanted a system that would work across all departments, meet legal requirements, and be easy for employees to use.

The project objective was to replace the legacy system with a reliable solution that:

  • Improves overall customer satisfaction through effective collaboration
  • Reduces organizational compliance risk
  • Increases confidence in a trusted single-source-of-truth for documents
  • Increases productivity through enhanced search, retrieval, and performance
  • Complies with the Information Management Act

Solution Overview

The vision for the new EDRMS is for staff to create, collaborate on, and maintain documents and other content while records management lifecycle needs are addressed via rules configured behind the scenes. The EDRMS ensures records compliance because any content required to follow the Information Management Act is managed in the records solution.

Critical features of the EDRMS included: 

  • A “single source of truth” for all major business and support operations with robust version control and content quality 
  • A consistent, standardized user experience for collaboration across all business units 
  • Easy search, retrieval, collaboration, and management and ability to add metadata to content 
  • Behind-the-scenes automated records management 
  • Full audit and forensic capabilities 
  • Hosted on-premises to comply with Canadian government cloud technology regulation

 

Physical and Digital Records Strategic Approach 

The solution was designed so that all records, including physical and digital records, are managed in the EDRMS. The strategy to manage these types of records followed a Classic File Plan approach with a collaborative window for digital files. 

Classic File Plans 

Classic file plans, born out of paper records management, typically contain two stages followed by a disposition action (a small worked sketch of these stages follows Figure 1 below):

  • Active – the stage that the records provide value to the business or would likely be referenced by the business. As an example, in the paper world, a binder of invoices might live on someone’s desk for two years at which point the record would enter the semi-active stage. 
     
  • Semi-Active – the stage after the Active window, covering the remainder of the retention period. Following the above example, after two years the invoices would be archived in a different location (a box in the basement, or an off-site facility). This time frame can be driven by regulatory compliance, business requirements, historical value, etc.
     
  • Disposition Action – the action that happens after the Semi-Active stage is complete. It could be one of Destroy, Archive, Transfer, or Make Permanent – depending on the compliance need. In the invoices example, the records could be destroyed after seven years, when they are no longer needed for tax records or other purposes.

 

Figure 1: Classic File Plan Stages
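
To make the stages concrete, the following sketch (Python; the two-year active window, five-year semi-active window, and destroy action mirror the invoice example above and are illustrative figures, not values from any schedule) computes the semi-active and disposition dates from a trigger date such as the invoice date.

    from dataclasses import dataclass
    from datetime import date

    def add_years(d: date, years: int) -> date:
        """Add whole years, nudging Feb 29 to Feb 28 when needed."""
        try:
            return d.replace(year=d.year + years)
        except ValueError:
            return d.replace(year=d.year + years, day=28)

    @dataclass
    class ClassicFilePlanRule:
        active_years: int       # stage 1: record is in active business use
        semi_active_years: int  # stage 2: remainder of the retention period
        disposition: str        # "Destroy", "Archive", "Transfer" or "Make Permanent"

        def schedule(self, trigger: date) -> dict:
            semi_active_start = add_years(trigger, self.active_years)
            disposition_due = add_years(semi_active_start, self.semi_active_years)
            return {
                "semi_active_start": semi_active_start,
                "disposition_due": disposition_due,
                "disposition_action": self.disposition,
            }

    # Invoice example from the text: 2 years active + 5 more years semi-active,
    # destroyed after 7 years in total (illustrative only).
    invoices = ClassicFilePlanRule(active_years=2, semi_active_years=5, disposition="Destroy")
    print(invoices.schedule(date(2020, 1, 31)))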

Adding a Collaborative Window 

The classic file plan model works for paper records. Paper is effectively immutable – it cannot easily be changed once filed – so it enters this process only after it is finalized.

The classic model also assumes that the document is already finalized and omits the period during which end-users are still working on it. We refer to this period as the Collaboration Window. Systems that follow the classic model do not consider how users collaborate on content before it is finalized, and they assume that end-users will classify and declare records themselves.

Industry best practices have long recommended that a solution should capture records without disrupting the way end-users work. The solution needs to auto-classify content against the file plan and automatically declare records. 

Our updated model includes a Collaboration Window that happens before declaration and automatically calculates the declaration schedule. 

Figure 2: File Plan Stages for Digital Environments

Calculating Declaration

If documents are declared before end-users feel they have had the chance to complete them, they may not want to use the system, leading to poor adoption. If it takes too long to declare a document as a record, the organization is at risk. For example, if signed contracts are not declared immediately in a records management system, critical files could be accidentally deleted with no easy way to recover them.

For this project, the team decided to calculate declaration automatically, and not rely on end-users to act first. Based on experience, the project team chose three main methods for declaration:  

Immediate: Some documents, such as finalized policies and signed contracts, are candidates for immediate declaration. These documents are declared the moment they are added to the EDRMS. However, end-users sometimes make mistakes with metadata – for example, when selecting a department – so we followed a rule that “immediate declaration” provides a 2-3 business day grace period for end-users to correct the document and its metadata.

Time-Based: Auto declaration of other content happens based on the amount of time that has passed since a certain date. These dates could be: 1) Created date – the date that the document was created, 2) Modified Date – the date that the document was last modified, 3) Custom Date Field – any date field that is being used in the system such as Contract Date, Project End Date. Modified Date was the preferred model in most cases, because if the user is still editing the document, then it should not be declared as a record. 

Event-Based: The EDRMS supports tying declaration to an event. A common event that triggered declaration was changing the document status of a document from “draft” to “final.” Other trigger events included when project files were marked as Closed or a project reached its end date. 
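
A minimal sketch of how these three methods might be evaluated is shown below (Python; the field names, the two-business-day grace period, and the one-year quiet period are illustrative assumptions, not the EDRMS’s actual configuration).

    from datetime import date, timedelta
    from typing import Optional

    def add_business_days(start: date, days: int) -> date:
        """Advance a date by N business days (weekends skipped; holidays ignored)."""
        current = start
        while days > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Monday-Friday
                days -= 1
        return current

    def declaration_date(doc: dict) -> Optional[date]:
        """Return when a document becomes a declared record, or None if not yet known.

        doc uses illustrative fields: method, added, modified, status_final_on.
        """
        method = doc["method"]
        if method == "immediate":
            # Declared on ingest, with a short grace period for metadata corrections.
            return add_business_days(doc["added"], 2)
        if method == "time_based":
            # Declared once enough quiet time has passed since the last modification.
            return doc["modified"] + timedelta(days=doc.get("quiet_days", 365))
        if method == "event_based":
            # Declared when a trigger event occurs, e.g. status changed to "final".
            return doc.get("status_final_on")  # None until the event happens
        raise ValueError(f"unknown declaration method: {method}")

    print(declaration_date({"method": "immediate", "added": date(2020, 3, 6)}))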

Lessons Learned from Adapting File Plans for Digital Use

A challenge in applying ARCS and ORCS classification schedules is that they were developed for managing physical records, and don’t always easily apply in a digital environment. 

Of note is the use of SO; the ARCS and ORCS User Guide entry for this designation “explains when a file designated SO should be closed.” In most file plans SO stands for superseded or obsolete, meaning that file retention is calculated from the point at which content is replaced by a newer version (superseded) or is simply no longer relevant (has been made obsolete).

For example, if a corporate policy for overtime is updated, then the previous version of that policy is superseded and may, for example, need to be kept for seven more years before running through a destruction process. Organizations need to keep old versions, especially in legal situations. For example, if an organization is being sued by a former employee for overtime from three years ago, then they need to know what the official overtime policy was back in 2017.

Looking at ORCS and ARCS, many of the record categories have a retention schedule driven by SO. For example, many of the record categories that fall under “6820 – Information Systems Operations” implement a SO type retention schedule as seen below:

Figure 3: ORCS and ARCS File Plan Screen Capture from February 14, 2019

Each record category in the ORCS and ARCS file plan that leverages SO defines what SO means. For example, “6450-80 – IT application/system documentation – final versions” defines SO as:

upon completion of the post-implementation review, or when the project is abandoned, and when no longer required for reference.

The problem here is that SO typically implies that the end-user needs to step in and act, essentially telling the system that a given document is no longer useful. If the file plan were implemented as-is, the agency would need to manually flag millions of documents over time. This is not an efficient way of running documents through a disposition process.

Instead, the process was automated. We worked with the government agency to determine the longest period that content remains relevant for a given record category. For example, 6820-06 (Log files), which covers application, server, website, system, audit, event, and equivalent logs, defines SO to mean “when no longer required.” The organization determined that IT-related log files would not typically be required after three years.

Instead of requiring that end-users go back into all the log files and flag them as SO, a disposition approval process is automatically kicked off on log files after three years whereby the end-user can opt to save the log files for an additional two years, or can run them through a disposition workflow.

One of the benefits of the EDRMS is the flexibility of the retention workflows. They allow multiple paths through the workflow. In this case we can allow end-users to flag documents as SO (as unlikely as that is) but also automatically kick off a disposition approval process after a reasonable amount of time. 

We learned that this is a good compromise that helps automate the compliance process and minimizes the burden of the end-user, while staying within the spirit and intent of the ORCS and ARCS file plan.

Below is a screenshot of an example retention workflow (a simplified sketch of its sequencing follows Figure 4) that:

  • Waits for one year after the last modified date (to provide an adequate collaboration window)
  • Deletes the version history of the document 
  • Declares the item to be a record
  • Waits for the item to be flagged as superseded or obsolete, or automatically moves forward after five years; depending on the path taken, the item then runs through an approval process before being destroyed by the EDRMS.

 

Figure 4: ORCS and ARCS File Plan Workflow
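
Read as a small state machine, the workflow’s sequencing can be sketched as follows (Python; the one-year collaboration window and five-year fallback are the periods named above, and the function models sequencing only – the real workflow is configured in the EDRMS rather than coded by hand).

    from datetime import date, timedelta
    from typing import Optional

    COLLABORATION_WINDOW = timedelta(days=365)   # wait one year after last modification
    SO_FALLBACK = timedelta(days=5 * 365)        # push forward automatically after five years

    def retention_stage(last_modified: date, flagged_so_on: Optional[date], today: date) -> str:
        """Return the workflow stage a document would be in on a given day.

        Mirrors the steps listed above: collaboration window -> declare record ->
        wait for SO flag or five-year fallback -> disposition approval.
        """
        declared_on = last_modified + COLLABORATION_WINDOW
        if today < declared_on:
            return "collaborating (not yet declared a record)"
        # At declaration the version history is trimmed and the item becomes a record.
        so_effective = flagged_so_on or (declared_on + SO_FALLBACK)
        if today < so_effective:
            return "declared record, awaiting SO flag or five-year fallback"
        return "disposition approval in progress (destroy on approval)"

    print(retention_stage(date(2019, 2, 1), None, date(2020, 6, 1)))
    print(retention_stage(date(2014, 2, 1), None, date(2020, 6, 1)))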

Records managers have an important role to play in applying file plans to digital records systems. It cannot be overstated: when migrating to a new EDRMS, it is critical that records managers have an in-depth knowledge of the file plan so that they can translate the schedules to digital environments.

Another consideration for records managers is how much flexibility departments have in asking for workflows specific to their content. In the early days of this project, pilot groups could have small variances in workflows – for example, one department retained project files for three years, and another for four. These variances degrade the performance of the solution as content rules continue to run in the background against a growing volume of content. In later work with departments, we reduced the number of near-identical workflows. As much as possible, we recommend keeping workflows consistent across departments and allowing exceptions only in rare cases. This results in better performance of the solution and easier maintenance for records managers.

After building these workflows, we calculated that automating the destruction of over 2.5 million documents over the next seven years will save the organization more than three thousand hours of effort!

Project Management Process

Large-scale ECM projects are inherently difficult to manage. Often, critical business leaders need to reschedule or place the project on hold. In the meantime, other initiatives within the organization reach out for assistance with a clear need for an ECM solution, but cannot proceed because fundamental solution decisions are not complete. This project followed a lightweight and agile methodology to onboard departments incrementally when schedules permitted.

Lightweight and Agile Methodology 

This project did not follow a traditional large-scale implementation approach where work is done for months in the lead-up to a ‘big bang’ deployment. 

Given the dynamic nature of large-scale ECM projects, we find it best to follow light and agile methodologies. We recommend a version of Scrum, which is defined as “a flexible, holistic product development strategy where a development team works as a unit to reach a common goal.” It challenges assumptions of the “traditional, sequential approach” to product development, and enables teams to self-organize by encouraging physical co-location or close online collaboration of all team members, as well as daily face-to-face communication among all team members and disciplines involved. Scrum is an iterative and incremental agile software development framework for managing product development, which works well for large-scale ECM projects that have a large degree of unknowns.

This methodology led to a pilot approach where a few departments and processes are chosen and implemented first, and later other departments and processes are on-boarded in a phased manner. In this way, lessons learned were incorporated as the project proceeded. 

Pilot Group Rollout Process

A key decision during the EDRMS initiative was to plan how to sequence work with five initial pilot groups. All groups were scored on a risk scale based on size, complexity of content, and capacity and ability to engage. Based on that ranking, the project started with the least risky groups and worked through to more risky groups. The riskier groups tended to involve more people, have more complex content, and have less team availability and capacity to engage.
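
As an illustration only (the agency’s actual criteria and weights are not reproduced here, and the group names below are hypothetical), a simple score over the factors just described might look like this:

    from dataclasses import dataclass

    @dataclass
    class PilotGroup:
        name: str
        size: int                 # 1 (small) .. 5 (large)
        content_complexity: int   # 1 (simple) .. 5 (complex)
        capacity_to_engage: int   # 1 (low availability) .. 5 (high availability)

        @property
        def risk_score(self) -> int:
            # Larger, more complex groups with less capacity to engage score as riskier.
            return self.size + self.content_complexity + (6 - self.capacity_to_engage)

    # Hypothetical groups, for illustration only.
    groups = [
        PilotGroup("Finance", size=4, content_complexity=5, capacity_to_engage=2),
        PilotGroup("Communications", size=2, content_complexity=2, capacity_to_engage=5),
        PilotGroup("Operations", size=5, content_complexity=4, capacity_to_engage=3),
    ]

    # Start with the least risky group and work up, as described above.
    for g in sorted(groups, key=lambda g: g.risk_score):
        print(f"{g.name}: risk {g.risk_score}")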

The benefits of working this way are: 

  • The project team becomes more capable​ over time 
  • For riskier teams, there will be more pre-made solutions​ previously developed
  • Success stories will have time to influence riskier groups

Each pilot group followed implementation activities as shown in Figure 5. 

Figure 5: Implementation Approach Per Department

At a high-level, these were the key activities at each stage: 

  1. Current State Assessment: The goal of this step is to understand the department, how they work, where they work, who they work with, and any issues or pain points. This is captured in a session with department representatives. 
  2. Requirements: In the requirements phase, the most important requirements are determined and prioritized. These are documented for each pilot department to refer to.
  3. Content Discovery: Current content for the department is listed and prioritized in a spreadsheet. This was limited to active content in network file storage, desktops, and other file sharing systems. 
  4. Card Sorting and Mind Mapping: This is a visual, collaborative activity with department end-users to identify sections in the new information architecture. This collaborative process is recommended because it helps people get involved with the design, have a say, and own it in the end. 
  5. Prototype and Review: Create a prototype with key sections for review and feedback from end-users. This prototype is adapted based on the feedback.
  6. UAT and Migration Planning: Based on the volume and type of content, a plan is created for migration. 
  7. Migration and Go Live: End-users are scheduled to manually migrate content, and scripts are prepared for automated migration. After these activities, the pilot site is ready for launch. 
  8. Support and Reflection: After the site is live, weekly support is provided for the department and a reflection meeting is held to discuss what worked well and what did not. 

These activities were repeated for the 5 pilot groups and the remaining 45 departments, and any lessons from the last step were incorporated into future work as applicable. 

Leadership Alignment and Involvement

During the process of rolling out the first department sites, the team observed that some sites were more successful than others. Success in this context means how happy users are with the solution, how well attended training sessions are, and how engaged department users are with the solution. In less successful sites, we noticed that users were not happy with the change in how their documents were managed. 

In discussions of digital transformation best practices, a common best practice is that leadership involvement is critical to the success of the project. For example, Harvard Business Review identified leadership misalignment as a fundamental reason for digital transformation failure because:

“If top managers aren’t on the same page, it makes it difficult for their direct reports to agree on what to prioritize and how to measure progress.”

This rang true during the initial pilot rollouts at the agency as well. During the reflection sessions held at the end of a department rollout, a trend was identified: the more successful departments had leadership involvement throughout the rollout project. Leaders from the more successful departments attended meetings and worked with their direct reports to prioritize training and migration activities. 

After a few less successful rollouts, the project team did a few things to ensure success for future departments: 

  • A check-in with leadership a few weeks before the project team started work on their department site. This check-in ensured the department still planned to prioritize time for the EDRMS, and if anything had changed, the work was scheduled for another window. 
  • Leadership attendance at key meetings was non-negotiable. These meetings included the kickoff, prototype review, and the support and reflection session to discuss what worked well and what did not. The project team found that engagement with these sessions helped leaders understand the importance of the project and the effort required from their direct reports, and enabled them to make prioritization decisions for their department’s work. 

Managing Status Across Pilot Groups 

Working in this iterative way takes focus and diligence in tracking. As the project proceeded, we adapted the communication method to include both digital and physical communication methods. 

A master Kanban board tracked progress across all departments. Kanban is a lean method from Japan to visually manage work and to give team members a view of their progress and overall process. Kanban—which is the Japanese word for “billboard”—was developed by an automobile manufacturer in the 1940s and 1950s.

Typically, columns group work by stage, and rows (or swim lanes) group work by owner or another category. This government project used a Kanban board to group work by stage and by department in swim lanes. This was in a digital form on a project collaboration site, as well as physical form in the agency’s head office as shown in Figure 6.

Figure 6: Master Kanban board for the project

A physical board helped during the rollout process because it was always visible, and people could easily stop by to see which departments were making progress. We learned that even though it takes effort to keep up to date, it pays off during status meetings and drove a bit of ‘competition’ across departments. The board was in an area visible to leaders in the agency and sparked conversations within departments if some were ‘lagging.’ 

This model of tracking project status works especially well when people are co-located. Where that isn’t possible, digital tools can track work on a Kanban board. 
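
For teams using a digital board, the underlying structure is simple: columns for stages and swim lanes for departments. The sketch below (Python; the department names are hypothetical, and the stage names are taken from the implementation activities listed earlier) groups departments under their current stage the way the wall board did.

    from collections import defaultdict

    # Stage names from the implementation approach described above.
    STAGES = ["Current State Assessment", "Requirements", "Content Discovery",
              "Card Sorting and Mind Mapping", "Prototype and Review",
              "UAT and Migration Planning", "Migration and Go Live",
              "Support and Reflection"]

    # Board state: each department (swim lane) sits in exactly one stage column.
    # Department names are hypothetical.
    board = {"Finance": "Prototype and Review",
             "Human Resources": "Requirements",
             "Legal": "Support and Reflection"}

    def board_by_column(board: dict) -> dict:
        """Group departments under their current stage, like columns on the wall board."""
        columns = defaultdict(list)
        for department, stage in board.items():
            columns[stage].append(department)
        return {stage: columns.get(stage, []) for stage in STAGES}

    for stage, departments in board_by_column(board).items():
        print(f"{stage:30} {', '.join(departments) or '-'}")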

Migration and Launch

Migration Process 

Migrating the right amount of content to an information management platform in a timely manner is important for adoption and return on investment. If too little content is migrated, an opportunity to drive adoption is missed. On the other hand, migrating everything may draw too much time from the business and be cost prohibitive.

This project migrated over 4 million records over an 18-month timeframe. This was possible with a mixed model of manual migration and automated migration scripts.

The Role of Manual Migration

Although the project team had a set of scripts to help with migration, and almost all content could be migrated automatically, there are benefits to having end-users perform some of the migration manually:

  • It is an opportunity to reinforce solution training and help users gain mastery of the platform 
  • It acts as a quality check of the new solution because end-users use it in-depth for a period of days to migrate content 
  • It ensures that the user experience makes sense and that end-users are comfortable with the information architecture (i.e. know where the content will be located) 
  • It allows end-users to provide feedback on any issues that are uncovered during migration and to have the project team solve those issues in a timely fashion 

An adequate level of migrated content is needed for the system to be useful to end-users. The project team found that typically one year’s worth of historical content plus all high-value content is needed for a successful migration. End-users within departments generally performed the manual migration of current content from the last month or year.

Automated migration with scripts was leveraged for: 

  • Older content that was declared immediately as a record
  • Volumes of highly templated structures (employee files, vendor and supplier invoices, contracts) 

As the pilot groups proceeded with migration, we prepared the next group by sharing instructions on cleaning data in advance. For example, removing duplicates and redundant content was helpful before putting effort into migration. 
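
One concrete piece of that cleanup is finding exact duplicates before anyone spends time migrating them. The sketch below (Python; the share path is hypothetical) groups files by a content hash – near-duplicates and redundant drafts still require human judgment.

    import hashlib
    import os
    from collections import defaultdict

    def find_exact_duplicates(root: str) -> dict:
        """Group files under root by SHA-256 of their content; groups of 2+ are exact duplicates."""
        by_hash = defaultdict(list)
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                digest = hashlib.sha256()
                with open(path, "rb") as fh:
                    for chunk in iter(lambda: fh.read(1 << 20), b""):
                        digest.update(chunk)
                by_hash[digest.hexdigest()].append(path)
        return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

    # Hypothetical department share to be cleaned before manual migration.
    for digest, paths in find_exact_duplicates(r"\\fileserver\dept-share").items():
        print(digest[:12], *paths, sep="\n  ")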

Migration as a Fun Activity

Manual migration can be daunting and overwhelming. The project team encouraged departments by setting up ‘migration parties’ for end-users within a department. These sessions had food and went a long way toward making migration something people looked forward to.

Recommendations and learnings from the migration parties included: 

  • Invite as many people as possible, including directors and management, to the migration party. A group can make the work go faster, and more people benefit from learning about the solution.
  • Make live and immediate information architecture changes during the session. For example, sometimes during a migration session, end-users identified names of document libraries or views that are confusing or unclear. If a representative from the implementation team is at the session, they can make changes immediately. End-users see an immediate result from their feedback and are more encouraged to continue using the system. 
  • For larger content areas, anticipate that additional time may be required. The project team generally scheduled two two-hour sessions, which was enough time for most departments. Some departments did not finish manual migration during the time allotted; for those that ran out of time, leadership helped teams commit to another session or ensured people completed the migration on their own. Again, it is critical that leadership is involved and aware of migration progress for the additional time to be committed.  

Migration can be time-consuming, but it is critical to the success of any EDRMS project. Make sure you plan for it and be creative for how to make it fun for your own teams. 

Summary and What’s Next 

For this initiative, 4 million files were moved in an 18-month period involving about 800 users over 50 departments. As one director at the agency noted: 

“One thing that is a big win for us is the increased effectiveness and efficiencies we have gained as a result of the project and transition.  It is taking one quarter of the time to administer our documents, find files and explore the Hub.” 

This is a great result for the agency, which now has a reliable system. In addition, the agency gained more trust in the validity and relevance of information managed by the EDRMS through proper versioning and document control. A robust and effective search experience supports staff in locating documents among millions of records in the system. Finally, the new EDRMS is reliable and performant, which encourages use and reduces the risk to operations from outages and frustrating performance issues.

Satisfaction with the system is being measured and has improved over time as well. Most users recommend the system to others and find it easy to learn and search. As the system grows in volume and usage, the team is monitoring performance and making changes as needed. 

Author Biography 

Jas Shukla is a Senior Consultant and leads marketing at Gravity Union. She has over 15 years of experience in user experience and consulting. Previously, Jas worked as a Design Lead at product and consulting firms. She started her career as a Program Manager at Microsoft on the SharePoint team.

Company and Product Descriptions

The EDRMS solution described in this article was built by Gravity Union with two core pieces of software: Microsoft SharePoint and Collabware CLM.

About Gravity Union 

Gravity Union is a compliance-inspired, digital transformation consultancy that empowers organizations to take control of their critical information. Gravity Union works with organizations to plan, execute and maintain Electronic Content Management (ECM) solutions. Gravity Union’s team has experience with every major information management software suite, but focuses on delivering the best tools that the market has to offer. SharePoint, Office 365, Collabware CLM and Collabspace comprise the central offering for creating digital workspaces. Gravity Union has a perfect track record implementing Electronic Document and Records Management Systems across Canada in some of the most highly-regulated organizations in the country, including the Government of Canada, city municipalities, and energy companies.

About Microsoft SharePoint

SharePoint is the industry’s leading platform for content management, collaboration and building line-of-business applications. SharePoint provides robust features for managing documents and collaboration. For this project, key features included version control, metadata management, enterprise search and easy email drag-and-drop from Outlook. It has both on-premises and cloud options for hosting. 

About Collabware CLM 

Collabware CLM provides automated and fully compliant records management for SharePoint. It allows the records manager to manage the lifecycle of content in the SharePoint collaboration portal and archival sites. Collabware CLM allows for the management of the file plan, retention workflow development, security groups, case file management, disposition approval and review, auto-classification, physical records management and more.  

Works Cited