Impact of Tangible Capital Asset (TCA) Accounting on Electronic Recordkeeping Practices

SAGESSE VOLUME VII WINTER 2022 – AN ARMA CANADA PUBLICATION

by Bruce Miller, MBA, IGP

 


 

Introduction

The purpose of this report is to document specific changes in recordkeeping practices that will be required for a municipality that has fully implemented TCA (Tangible Capital Asset) accounting practices. The changes in recordkeeping will occur in the following areas:

  1. The procedures used to create, identify (label), and file records in physical (paper) form will change.
  2. The procedures used to create, identify (declare), and classify electronic records within a modern EDRMS (Electronic Document & Records Management System) will change, specifically:
    1. The underlying retention schedule (file plan): structural changes are required in the retention rules within the schedule.
    2. The EDRMS configuration: the EDRMS must present asset data lists to end users to support proper TCA recordkeeping.

We will assume that all municipalities will incorporate these recordkeeping changes, even if they have not yet fully adopted TCA accounting methods, on the assumption that they will eventually do so.

We assume that TCA accounting practices are to be applied to a system for managing electronic records. For the purpose of this report, we will also assume the EDRMS is RBR (Rules-Based Recordkeeping) capable.

Definitions

Asset An individual, indivisible asset. Also sometimes referred to as a component.

Asset Class A set of assets forming a greater whole. For example, Roads is an asset class, comprising all individual road assets.

CAC Capital Asset Code. A unique code or identifier that identifies a particular asset. Can be alphanumeric or numeric.

Case A set of related records, typically about a Person, Place, Event, or Thing. The set of records is not considered complete until some triggering event has taken place that defines the end of the business activity. For example, a company with 1,000 employees would typically have 1,000 “cases”, one for each employee. Each case contains all of the records for the activity (in this example the employment of the employee), from the beginning of the activity (start of employment) to the end of the activity (termination of employment).

Case Category A file plan category containing related records about a business activity with a defined end date. Disposition is triggered by an event date that defines the end of the activity, such as “End of Useful Life” (a machine) or “Close of All Legal Matters” (a workplace accident). For example, employee records might be eligible for disposition 3 years after termination of employment. All records within a case category reach disposition and are processed as a complete, intact group; they are never separated or processed as individual records. In many organizations, 50% or more of all business records are case records.

Category A node in the hierarchical file plan. Denotes a set of records of related activity, e.g. Travel Requisitions. All categories are linked via a child/parent relationship. Each category is designated either as a Case or Administrative category, and is enumerated or labelled with a unique category ID.

Classify The process whereby a document is assigned a category from the file plan (retention schedule). Classification is often part of the Declaration process and can be achieved by the user (manual), or by the system (RBR). Manual classification can be achieved explicitly (the user selects and assigns a category), or implicitly (by virtue of selecting a storage location such as a folder, that matches the subject of the document, and which bears the appropriate category for that document).

Component The smallest, indivisible portion of a linear asset. It cannot be further broken down into parts.

Declare Manage a document as a record. The document is presumed to meet the criteria of a business record; however, it may not. A declared document is tracked by the EDRMS, is classified against the file plan, and is immutable (users cannot edit or delete it) to preserve its integrity. It can only be deleted via the formal disposition process.

Disposition As distinguished from Deletion. Formal, structured process of determining what happens to records at the end of their retention period. The process is human-initiated, and the decision as to what is destroyed/transferred is ultimately governed by an approved file plan (retention schedule). A records administrator provides oversight of the process. Approval from originating business units is typically sought prior to physical destruction, and a formal audit trail of disposition is maintained. Disposition yields three possible outcomes following the expiration of the retention period:

  1. Destroy 
  2. Transfer to outside agency for permanent archival storage
  3. Unknown. Retain until disposition is known. Some possibilities:
  • Held for legal review
  • In Dispute
  • Disposition simply not yet known or decided

ECM Enterprise Content Management (system). A platform for the management of unstructured documents and data. Examples include Microsoft SharePoint, IBM’s Content Manager, and OpenText’s Content Manager. Most ECM platforms have recordkeeping capabilities.

EDRMS Electronic Document & Records Management System. A business information system in which the records of an organization are created, captured, maintained, and disposed of. Such a system also ensures their preservation for evidential purposes, accurate and efficient updating, timely availability, and control of access to them only by authorized personnel. An EDRMS includes rules and procedures governing the storage, use, maintenance and disposition of records and/or information about records, and the tools and mechanisms used to implement these rules.

An EDRMS delivers specified recordkeeping controls. Most systems can manage electronic and physical records. Many are comprised of general-purpose content management systems that deliver recordkeeping capability. Some are certified compliant with recordkeeping standards such as US DoD 5015.2 or ICA Module 2. An EDRMS can be configured to store records exclusively; however, it will typically store all three of the following categories of items:

  • Declared Records
  • Non-Records
  • Non-Declared (unmanaged) records

EOL End of Life. The date on which the asset reaches the end of its useful life; this date triggers the final retention period for the records. EOL can sometimes represent the disposal date of the asset, where the disposal date exceeds the originally projected asset EOL.

Linear Asset An asset with no defined starting or ending point, and multiple interconnected components, such as a buried pipeline or a system of roadways. A linear asset must therefore be broken into manageable pieces, each of which will be considered an individual asset.

MACL Master Asset Class List. The list of all assets, each asset bearing an identifying code (CAC). This list is presumed to be stored in some corporate database system, such as an ERP (Enterprise Resource Planning) system. Examples of such systems may include SAP, JD Edwards, or Microsoft Dynamics. Any list of assets presented within the EDRMS is presumed to be derived from the MACL, either by duplicating the list, or by presenting the MACL within the EDRMS via a software integration between the EDRMS and the ERP system hosting the MACL.

Project A predetermined list of assets that have received budget approval for work to be carried out on the assets. Typically, a project is assigned a G/L code, and a date. 

RBR (Rules-Based Recordkeeping). An automated method of selecting documents to manage as records by defining rules that the targeted documents must meet. Rules typically match documents based on Content Types and/or metadata field values, then declare the document as a record, and classify it against the file plan.

Retention Schedule Also known as a File Plan. The list of approved retention periods and disposition rules for each business activity or subject area within the organization. Typically a hierarchy of business functions broken into specific activities. Driven by legislative obligation (various laws and regulations that apply to the business), and operational corporate policies. Also identifies which records are vital.

TCA Tangible Capital Asset. An asset.

EDRMS

EDRMS is a blend of two core technologies (along with several optional additional technologies). The first is a modern ECM (Enterprise Content Management) platform (formerly known as document management). This platform forms a digital repository for all electronic records, and provides advanced searching by content and metadata, security control, version management, workflow automation, collaboration features such as multi-author document editing, and much more. The second technology is recordkeeping capability, often delivered as a set of features within the ECM itself or as a third-party product added to the content management platform.

The records retention schedule underpins both technologies. The retention schedule does more than just feed retention rules to the ECM platform; it actually greatly influences the configuration of the ECM itself. This is necessary for the recordkeeping component to do its job properly.

All modern EDRMS systems incorporate RBR (Rules-Based Recordkeeping) to some extent. RBR is an approach to electronic recordkeeping that automates the recordkeeping functions the end user would normally have to carry out. These functions include identifying which documents are records, when to declare documents as records, and how to classify the documents against the retention schedule. A full and proper EDRMS deployment that fully utilizes RBR capability automates all these end user recordkeeping functions. End users have absolutely no role to play in the declaration or classification of any records. They simply operate the system as an ordinary, everyday ECM, without thinking about records management whatsoever. Thanks to RBR, however, documents are being declared as records in the background and properly classified against the retention schedule, even if the user is blissfully unaware of this.

Modern electronic recordkeeping software can carry out retention and disposition in ways not previously available. Because the records are digital, we have more document-level information to deal with, and we can leverage that information to do more granular, more sophisticated, and more flexible retention and disposition. For example, we can apply retention based on the value of documents, and we can apply multiple retention rules to a single category, even different types of retention rules within the same category. The software has these amazing retention and disposition capabilities; however, we have to tell it what we want it to do. And that’s the job of the retention schedule. If we know what the recordkeeping software is capable of in terms of retention and disposition, then we can write a retention schedule to take full advantage of these powerful new capabilities. A retention schedule that leverages these retention and disposition capabilities is referred to as a “software ready” retention schedule.

A software ready retention schedule is written with the assumption that it will be used within an EDRMS and will take full advantage of the advanced retention and disposition capabilities of the software. Any well-written software ready retention schedule can be used with any modern recordkeeping software, regardless of brand.
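To make the idea concrete, a software ready retention rule can be thought of as structured data that the recordkeeping software consumes. The sketch below shows one possible representation in Python; the field names and the example category are illustrative assumptions, not any product's actual schema.

```python
# A minimal sketch of a "software ready" retention schedule entry, represented as data.
# Field names, the example category, and the sample rules are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RetentionRule:
    applies_to: str      # which documents the rule targets, e.g. a document type or value
    trigger: str         # e.g. "document date", "contract end date", "asset EOL"
    years: int           # retention period counted from the trigger
    disposition: str     # "destroy" or "transfer"

@dataclass
class Category:
    category_id: str
    name: str
    kind: str                                          # "case" or "administrative"
    rules: List[RetentionRule] = field(default_factory=list)

# One category can carry more than one retention rule, each of a different type:
contracts = Category("CON-01", "Contracts", "case", rules=[
    RetentionRule("executed contract", "contract end date", 5, "destroy"),
    RetentionRule("routine correspondence", "document date", 2, "destroy"),
])
```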

Figure 1 shows what a modern EDRMS looks like conceptually. There are three “layers” to an EDRMS:

The retention schedule The software ready retention schedule, divided into case categories and administrative categories. On the left side are two administrative categories (operator rounds and employee onboarding). On the right are two case categories (union grievances and safety audits).

ECM structure Often referred to as “information architecture”, the ECM structure consists of all the so-called “libraries”, or places that documents can be stored. Different ECM products have different names for storage locations: libraries, folders, cabinets, etc. ECM structure also includes the metadata, the fields of information permanently stored with each document placed in each storage location. There is more to ECM structure than just libraries and metadata, including versioning, security, and collaboration, but for now we’re only concerned with libraries and metadata.

RBR rules The rules created within the recordkeeping software to automate the recordkeeping processes, namely declaration (which documents are declared as records, and when) and classification (which retention rules in the retention schedule get applied to which locations in the ECM structure).

Done properly, the retention schedule massively impacts the ECM structure. Each category in the retention schedule translates to a library in the ECM structure. This library is where users will store documents for that particular category. Both the category and the library bear exactly the same name. Case categories require that the library be subdivided into “cases”, or containers, one for each case. This allows us to group records of each case together, separate from and independently of all other cases.

At the top of the pyramid lies the recordkeeping software and its RBR rules. This is where you define declaration rules such as “if library = ‘operator rounds’ and approved = ‘yes’ then declare”. Retention rules also get defined here, such as “if library = ‘operator rounds’ then retention = document date + 5 years”. The rules need to know what the library names are, and what metadata they can work with.
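As a concrete illustration, the sketch below expresses those two example rules in Python. The dictionary-based document model and field names are assumptions; real EDRMS products express these rules in their own configuration syntax rather than code.

```python
# A minimal sketch of the two example RBR rules above, assuming a document is a plain
# dictionary of metadata. The field names are illustrative assumptions.
from datetime import date, timedelta

def should_declare(doc: dict) -> bool:
    """Declaration rule: if library = 'operator rounds' and approved = 'yes' then declare."""
    return doc.get("library") == "operator rounds" and doc.get("approved") == "yes"

def retention_end(doc: dict):
    """Retention rule: if library = 'operator rounds' then retain document date + 5 years."""
    if doc.get("library") == "operator rounds":
        return doc["document_date"] + timedelta(days=5 * 365)  # rough "+5 years"
    return None

doc = {"library": "operator rounds", "approved": "yes", "document_date": date(2022, 1, 15)}
if should_declare(doc):
    print("Declare as record; retain until", retention_end(doc))
```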

As you can see, the retention schedule forms the base upon which the ECM is structured. This in turn allows the RBR rules to execute against that structure, as shown in Figure 1.

Figure 1 – A Modern EDRMS

Case Records

The retention schedule must differentiate between case and so-called “administrative” record categories. Each category in the retention schedule therefore is either a case category or an administrative category. In most organizations today, about 60% of all records belong to case categories. The best way to understand the structure of case records is with the help of an example. Suppose you have 1000 contracts in existence at any one time. Each contract has a contractor name, the contract value, an expiration date, a contract type, etc. This data will not change among the documents in any given case. Each contract theoretically could have an expiration date different from those of all other contracts. All contracts would have a single retention rule similar to “keep five years after contract end date, then destroy”. Although there is only one rule applied to all 1000 contracts, that single rule has 1000 different trigger dates, i.e. 1000 different expiry dates. The recordkeeping software must therefore track each of these 1000 dates.
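In software terms, that single rule with many trigger dates might be tracked as sketched below; the contract names and end dates are invented purely for illustration.

```python
# A minimal sketch of one retention rule ("contract end date + 5 years, then destroy")
# producing a different disposition date for each contract case. The names and dates
# are invented for illustration.
from datetime import date

RETENTION_YEARS = 5

contract_end_dates = {
    "Contract 0001": date(2022, 3, 31),
    "Contract 0002": date(2023, 9, 30),
    "Contract 0003": date(2024, 6, 15),
    # ... one entry per contract case, up to 1000
}

def disposition_date(end_date: date) -> date:
    """Apply the single rule: trigger (contract end date) plus the retention period."""
    return end_date.replace(year=end_date.year + RETENTION_YEARS)

for case, end in contract_end_dates.items():
    print(case, "becomes eligible for disposition on", disposition_date(end))
```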

Let’s look at this from the perspective of an EDRMS end user. A user has a document related to a particular contract. The document may be an email suggesting several changes to the draft of the contract. The user must specify which of the 1000 contracts the document is related to. How is this accomplished? The user must have a way to choose from among the 1000 contracts. How this is done can vary among different ECM systems, but the most common approach would be a simple drop-down list of all 1000 contracts, as shown in Figure 2. Each contract has a unique name, and the user must select one of the 1000 contracts. The ECM will have a library known as “contracts”. That library will be further subdivided into 1000 case containers, each bearing the unique name of one of the 1000 contracts. This is a good example of how the retention schedule shapes the ECM structure. The two have to work in concert, and only then can the RBR rules be applied to the records within these libraries.

Figure 2 – Contract Selection

TCA Overview

Tangible capital assets (assets) are non-financial assets having physical substance that:

  • Are held for use in the production or supply of goods and services, for rental to others, for administrative purposes, or for the development, construction, maintenance or repair of other tangible capital assets;
  • Have useful economic lives extending beyond an accounting period;
  • Are used on a continuing basis; and
  • Are not for resale in the ordinary course of operations

The diagram that follows shows typical municipal TCAs:

Figure 3 – Asset Classification

With the asset accounting approach, municipalities now:

  • Identify each asset by class/category
  • Identify a current and ongoing value of that asset
  • Continuously track the current value of each asset by recording the funds and work invested in it each year.

The financial statements must now disclose, for each major category of asset and in total:

  • Costs at the beginning and end of the period;
  • Additions in the period;
  • Disposals in the period;
  • The amount of any write-downs in the period;
  • The amount of amortization of the cost of assets for the period;
  • Accumulated amortization at the beginning and end of the period; and
  • Net carrying amount at the beginning and end of the period.

As shown in the table below, each of the assets has a cost assigned to it each year. Anything the municipality does to that asset that affects its value has to be tracked, so as to show an increase or decrease in its value. This means the Finance people need to associate work activities with each individual asset. This in turn means that the records generated by the activity, which support the work carried out on these assets, must also be associated with each individual asset.

A typical Asset Inventory Sheet is shown in Appendix 2. A Property Record Card is shown in Appendix 3. These are typical of the documents used to record and track assets. These days, asset data, which is usually voluminous in nature, is recorded in modern ERP (Enterprise Resource Planning) systems.

Recordkeeping Implications

The following are the major impacts on recordkeeping as a result of TCA Accounting practices: 

Asset Classification There must be a clear means of identifying (naming) Tangible Capital Assets that recordkeeping is aware of.

TCA Records Identification There must be a means whereby a given record can be related to (associated with) an asset.

Retention Rule Structure The retention schedule rules must be structured in a certain way to accommodate appropriate retention for assets.

Asset List Synchronization The master list of assets must be synchronized with the corresponding list in the EDRMS.

EDRMS Configuration The host EDRMS must be configured with user selection lists and other metadata characteristics in order to support TCA recordkeeping.

Asset Types and Classification

Assets are loosely grouped into Infrastructure and Non-Infrastructure assets, as shown below:

Infrastructure

  • Roads
  • Facilities (buildings)
  • Waste and Storm Water Management (WSW)
  • Water Treatment and Distribution (WTD)
  • Parks and Playgrounds (sometimes referred to as Land Improvements)

Non-Infrastructure

  • Fleet
  • Equipment

Infrastructure assets can be considered to be those assets that are in, attached to, or represent improvements upon, the land (such as parks). Activities carried out upon these assets generate records that document these activities. In an asset setting, these documents (records) must be associated with each asset so the finance people can determine how much money was spent on a particular asset, and what impact that activity had on the asset. Furthermore, certain selected records need to be preserved for the life of that asset in order to support Finance’s claim as to the value of the asset. 

It’s obviously too general simply to say “the asset”. A municipality would have a network of hundreds of miles/km of roads, or several miles/km of buried pipeline. These types of assets are referred to as Linear Assets. Under TCA rules, each asset is broken down into individual components, each of which is indivisible. These components are treated in isolation from each other. It is important to understand how assets are divided up into components. It is a simple breakout of an asset into its component parts. Each municipality will have its own particular approach to the breakout of its assets; there is no single “right way” to define it. We will list some examples of typical breakouts.

Let’s start with Roads. How do we treat a large system of roadways as an asset? We break it down into its component parts, as shown:

Figure 4 – Road Segmentation

Each road is named, in our case Road 1 and Road 2. The road is then divided into Segments, segments 1-4 as shown in our example. Each segment is then broken into individual components. Note for instance that Road 2 Segment 2 (R2S2) has a concrete sidewalk, whereas Road 1 Segment 1 (R1S1) has a pavement sidewalk. TCA accounting needs to know this distinction, as it will place a lower value on the sidewalk for R1S1 compared to the sidewalk for R2S2. In addition, it tells us that repair or maintenance will be required sooner on the sidewalk for R1S1 than on the sidewalk for R2S2. This is the inherent benefit of TCA accounting: by breaking down a large, complex (and often aging) asset into individual components, the municipality can better understand and budget maintenance, and make better decisions based on an accurate valuation of the assets.
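One way to picture this breakdown in data terms is sketched below. The coding scheme ("R2-S2-SW") and component names are assumptions, since each municipality defines its own CAC format.

```python
# A minimal sketch of a linear asset (a road) broken into segments and components,
# each carrying its own CAC. The coding scheme shown here is an assumption.
from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    cac: str           # Capital Asset Code for this indivisible component
    description: str   # e.g. "concrete sidewalk"

@dataclass
class Segment:
    cac: str
    components: List[Component]

@dataclass
class Road:
    name: str
    segments: List[Segment]

road_2 = Road("Road 2", [
    Segment("R2-S2", [
        Component("R2-S2-SW", "concrete sidewalk"),
        Component("R2-S2-PV", "asphalt pavement"),
    ]),
])
# A record of a sidewalk repair on Road 2, Segment 2 would be associated with CAC "R2-S2-SW".
```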

As mentioned earlier, different municipalities will have somewhat different approaches to how they break down an asset, what they name the individual pieces, and how many levels they choose to utilize in this breakdown. Let’s take Facilities as an example. Below are shown four facilities:

Building 1 (a stadium)

Building 2 (a recreation centre)

Building 3 (a fire hall)

Building 4 (an animal shelter)

A facility may be broken down as follows:

Each of the two facilities (buildings) has been broken down into its component parts. Suppose the washrooms in both buildings 1 and 2 are renovated, via a single contract. That means that four washrooms (2 in each building) were renovated. This renovation activity affected 4 individual assets, i.e. the four washrooms. The records generated by the specifications for the work, the contract selection and award, the construction, and payment, all relate to the following four assets:

Building 1 washroom 1
Building 1 washroom 2
Building 2 washroom 1
Building 2 washroom 2

MACL Synchronization

When an EDRMS user declares an asset-related document by placing it into the EDRMS as a record, they must have a means of selecting the appropriate asset, such as:

Vehicle X from the list of vehicles

Pump X from the list of Water Distribution Pumps

Road segment X from the list of road segments

This list must obviously be presented within the EDRMS. It must also be current (up to date) with the MACL. Remember that the MACL “lives” in the ERP system, not the EDRMS. If the MACL is extracted from the ERP system and displayed in the EDRMS via a custom integration between the EDRMS and the ERP system hosting the MACL, the EDRMS is not storing a duplicate of the MACL; it is merely querying the ERP system and presenting the MACL. If, however, the MACL is stored in the EDRMS as a duplicate, the MACL and the duplicate list in the EDRMS will eventually differ as changes are made to the MACL. In this case, there must be a means by which the MACL and the EDRMS can be synchronized, either continuously in real time, or periodically (e.g. every 24 hours). This means that:

  1. When a new CAC is added to the MACL, the EDRMS asset list must be updated.
  2. When Finance changes the EOL of an asset, the Records Manager must be made aware of it so they can change the trigger date in the asset’s retention rule.

In a small municipality with limited IT resources, this synchronization will have to occur manually, i.e. Finance and the Records Manager must keep each other informed of changes as they occur. If the Master Asset Class List and the EDRMS fall out of sync over time, the entire recordkeeping process is no longer TCA-compliant. In a larger municipality with a larger MACL and more IT resources, there are many ways to build custom software integration solutions that will update the retention rules automatically as a result of changes to the MACL.

The MACL is the master list – the authoritative list of assets. Put another way, the retention schedule and the ECM Asset Lists (used for end user selection) are both subservient to the MACL. The retention schedule and the ECM Asset List must somehow keep up with changes in the MACL. A “change” means an addition to, a deletion from, or a modification to an existing MACL entry. This sets up a three-way synchronization challenge as illustrated below:

Whenever a change is made to the MACL, both the Asset List (in the ECM) and the Retention Schedule (asset IDs and EOLs) must be updated accordingly. Worse still, there is a strong interdependence between the retention schedule and the ECM Asset Lists. The RBR rules contain explicit field values for assets. Therefore, the RBR rules can only work if the asset field values are being presented in the ECM.

Every time the MACL changes, these changes need to be communicated to IT so they can update the ECM Asset List, and to Records Management so they can update the retention categories, including any changes in the EOLs. This could represent a great deal of manual data entry. In a larger municipality this could approach a full-time job just for the data entry. The solution is a custom integration between the ERP system and the EDRMS as illustrated below:

This integration could run in real time or periodically (e.g. at midnight each night). The MACL could be hosted in an asset management application such as WorkTech, or in something as simple as a spreadsheet. Each time there is an asset change, the integration software would take the actions shown in the table below:
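A minimal sketch of such a nightly synchronization pass is shown below. The data shapes, function names, and sample assets are assumptions; a real integration would call the ERP's and the EDRMS's own APIs rather than return hard-coded values.

```python
# A minimal sketch of a nightly MACL-to-EDRMS synchronization pass. All data shapes,
# function names, and sample assets are illustrative assumptions.
def fetch_macl_from_erp() -> dict:
    """Placeholder for a query against the ERP-hosted MACL."""
    return {"R2-S2-SW": {"description": "Road 2 Seg 2 sidewalk", "eol": 2045}}

def fetch_edrms_asset_list() -> dict:
    """Placeholder for a query against the asset list presented in the EDRMS."""
    return {"R2-S2-SW": {"description": "Road 2 Seg 2 sidewalk", "eol": 2040},
            "R2-S2-PV": {"description": "Road 2 Seg 2 pavement", "eol": 2035}}

def sync_asset_lists() -> None:
    macl = fetch_macl_from_erp()
    edrms = fetch_edrms_asset_list()

    for cac, asset in macl.items():
        if cac not in edrms:
            print("Add", cac, "to the EDRMS asset list")                       # new CAC in the MACL
        elif asset["eol"] != edrms[cac]["eol"]:
            print("Update retention trigger for", cac, "to EOL", asset["eol"])  # Finance changed the EOL

    for cac in edrms.keys() - macl.keys():
        print("Flag", cac, "for Records Manager review")                        # no longer in the MACL

sync_asset_lists()
```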

Physical Records

How would TCA recordkeeping be handled with physical (paper) records? The same approach used for electronic records would have to be used for paper:

  • Each paper document would have to be filed in a folder with the same labelling as called for in the electronic system. Each label would have to bear a CAC number, drawn from the Master Asset Class List.
  • The records would have to be marked with the appropriate TCAS status. This means TCAS records must be physically segregated from the remaining records in the category.

The paper system would in this case follow the electronic. The electronic system is presenting the choices to the end user, who will have to follow through and label the documents and folders appropriately. With physical records, users will need access to paper copies of the following lists in order to properly classify (file) records:

Retention Schedule

Master Asset Class List

Project List (for project cases)

The retention schedule will generally not change very often; however, the Master Asset Class List and the Project List could change quite frequently. If this is the case, it would be best if users looked up the appropriate values in these lists on the computer, instead of relying on paper copies that could quickly become out of date.

Using TCA

Appendix 1 shows a high-level view of how records are handled in an asset-compliant setting. If the record being declared describes an activity that has affected a specific named asset, then the process shown on the right-hand side of the diagram is applied. We refer to this as an ARM (Asset Repair & Maintenance) record. If the record is not dealing with a repair or maintenance, then there are three possibilities on the left-hand side of the diagram as shown:

Construction of New Asset Building of a new asset or group of assets. This would only apply to assets that have to be built, i.e. roads, facilities, storm drains, and land improvements. It does not apply to assets that have to be acquired, such as equipment or fleet. The user has to select the secondary ACS (Asset Construction), specify the asset class to which the activity pertains, then select a case project. A case project will typically be a capital-funded construction project. All construction projects are cases.

Planning/Management Activities related to the general planning and overall management of the assets, such as long-term plans and forecasts, estimates for future funding, etc. The user has to select the secondary APM (Asset Planning & Management), specify the asset class to which the activity pertains, then select a specific named asset.

Operations Regular operation of the assets, such as snow clearing (roads), window cleaning and janitorial services (buildings/facilities), leasing (land, buildings), inspections, etc. The user has to select the secondary AOP (Asset Operations), specify the asset class to which the activity pertains, then select a specific named asset.
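Taken together, these paths amount to a simple decision, sketched below in Python. The secondary codes follow the text; the function and parameter names are illustrative assumptions.

```python
# A minimal sketch of the top-level classification decision described above.
# Function and parameter names are assumptions; the codes follow the text.
def classify_asset_record(affects_named_asset: bool, activity: str) -> str:
    """Return the secondary classification for an asset-related record."""
    if affects_named_asset:
        return "ARM"   # Asset Repair & Maintenance (right-hand side of the diagram)
    if activity == "construction":
        return "ACS"   # Asset Construction: new asset built under a capital project (case)
    if activity == "planning":
        return "APM"   # Asset Planning & Management
    return "AOP"       # Asset Operations, e.g. snow clearing or janitorial services

print(classify_asset_record(True, "repair"))      # -> ARM
print(classify_asset_record(False, "planning"))   # -> APM
```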

Suppose a record is about a repair on a specific asset – a road segment for example. The repair applies to a specific named asset (the road segment), so the right-hand side of the diagram applies. It is an ARM record. First the user must select the asset class, in this case Roads. Then, from within the asset class of Roads, they have to select a specific (named) asset, for instance Maple Street Segment 2. It’s possible that for some reason, the asset is not available in the selection list. If so, the software will have to notify the Records Manager of the need for this asset, and the user will have to wait and try again later.

Assuming the asset appears in the list, the user selects it. If the user happens to be a designated administrative user, the software can ask explicitly if this record indicates activity that results in a betterment of the asset. This would apply to very few users who are entrusted with the knowledge and motivation to answer this question appropriately. We will assume, however, that most of the time this is a non-privileged user not trained to properly answer the question.

Now the user must select a document type, such as:

Invoice

Drawing, As-Built

Drawing, Final

Specifications

Timesheet

etc.

The Document Type is a mandatory document metadata field. Each of these document types is defined in advance and presented to the user as a limited selection; the user has to identify the appropriate document type. The definition of each document type specifies whether or not the document type represents TCAS documents (this is an internal system attribute or field). By selecting the appropriate document type, therefore, we now know whether the document is a TCAS record or not. If TCAS, the EDRMS will assign a retention period of EOL + X years. If the document is non-TCAS, it will be assigned a retention period of X years.
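The sketch below shows how that document-type lookup might drive the retention assignment. Which document types are flagged as TCAS, and the specific retention values standing in for "X years", are illustrative assumptions only.

```python
# A minimal sketch of retention assignment driven by the selected document type.
# The TCAS flags and the retention values standing in for "X years" are assumptions.
DOCUMENT_TYPES = {
    "Invoice":           {"tcas": False},
    "Drawing, As-Built": {"tcas": True},
    "Drawing, Final":    {"tcas": True},
    "Specifications":    {"tcas": True},
    "Timesheet":         {"tcas": False},
}

def retention_for(document_type: str, asset_eol_year: int) -> str:
    """Return a human-readable retention rule based on the TCAS flag of the document type."""
    if DOCUMENT_TYPES[document_type]["tcas"]:
        return f"retain until asset EOL ({asset_eol_year}) + 10 years"  # EOL + X years
    return "retain for 7 years from the document date"                  # X years

print(retention_for("Drawing, As-Built", 2045))  # TCAS record
print(retention_for("Invoice", 2045))            # non-TCAS record
```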

Suppose however the record is a plan specifying which roads to repair in the coming summer months. This document does not affect a specific (named) asset. It is therefore an APM (Asset Planning & Management) record. The user will go to the APM section of the file plan, then select an asset class, in this case Roads. The record’s classification is “APM-Roads”. Roads is the primary level, Planning & Management is the secondary level. The retention period will be (typically) X years.

A record would follow a similar process if it were about routine operations of an asset. Suppose the document was a plan for road snow clearing. This does not affect the asset itself – it is purely an operational activity. The user would go to the ROADS section of the file plan, then select the activity “Operations”. The record’s classification is “Roads – Operations”. Roads is the primary level, Operations is the secondary level. The retention period will be (typically) X years.

Appendix 1

TCA Usage

 Appendix 2

Asset Inventory Sheet

Appendix 3

Property Record Card

About the Author

Bruce Miller, MBA, IGP is a world leading expert on electronic recordkeeping. He is an independent consultant, an author, and an educator. Widely regarded as the inventor of modern electronic recordkeeping software, he pioneered the world’s first commercial electronic recordkeeping software in 1989. In 1997 he achieved the world’s first e-Records software certification against the US DoD 5015.2-STD standard, and has since presided over several successful software certifications. In 1999 he developed the world’s first e-Records software engine for business software. That year he received ARMA Canada’s National Capital Region’s Ted Ferrier Award of Excellence for his contribution to the field of records management. Bruce’s software was the first technology in the world to be certified against the revised 5015.2 June 2002 standard. In 2002 his company was acquired by IBM, where he served for three years as IBM’s global e-Records Strategy and Business Development Executive. At IBM he was honoured as a Technical Leader, one of only 439 out of 360,000 IBM employees. Mr. Miller is the recipient of the prestigious 2003 Emmett Leahy Award, considered the highest international recognition given to professionals in the field of information and records management. His book “Managing Records in Microsoft SharePoint 2010” was an ARMA best seller, and the second edition was released in October 2015. Bruce holds a Diploma in Electronics Engineering Technology, a Masters in Business Administration from Queen’s University, and is a certified Information Governance Professional. Learn more about Bruce and his consulting practice at https://www.rimtechconsulting.com

The Strange Case of Dr. Digitization and Mr. Film: Preservation of Film Records in a Digital Medium

SAGESSE VOLUME VII WINTER 2022 – AN ARMA CANADA PUBLICATION

by Oscar Alonso Aguilera Garcia

 


 

Abstract

Film strip decay is one of the most significant challenges that archivists must confront in the near future. With the renewed interest in and debate surrounding film preservation in recent years, especially since the adoption of digital material in the film industry, it is now more important than ever for archivists to clarify and consider new methods for preservation while also exploring the new concepts and new meanings that such methods bring to bear. To that end, this paper proposes that film archiving can go far beyond the traditional concept of preserving history by seeking a more enduring system of conservation that could potentially allow films to be maintained not only in various physical formats but also by way of memory.

By tracing the basic concept of film as evidence of the past, and by considering the relationship between truth, reality and preservation, entirely new modalities might be introduced into the archival space. In the end, however, this paper does not seek to offer a single solution to the problems surrounding digitization and film archiving. Rather, it seeks to begin the process of accepting and adopting new technologies or techniques that will bring to bear new alternatives in the field of preservation.

Introduction

Film decay – that is, of the original, physical elements – seems to be the main concern in archival studies for film preservation. Film strips are not made to last indefinitely without being properly stored, and even in optimal conditions, they suffer deterioration over time. In the United States of America, only 20% of the films of the 1910s and 1920s survive in complete form in American archives, and only half of the movies produced before 1950 still exist in their original form (National Film Preservation Foundation, n.d., para. 2). Digitization offers many opportunities to rescue otherwise rare films and even keep them profitable for future generations by making it possible to easily screen them worldwide. That being said, many digitization techniques are still in their infancy, such as the transfer of 35 mm film strips to a 4K format for streaming services or the adaptation of 8 mm or 16 mm strips into new digital formats that can then be mixed with elements that are digital by nature. Nevertheless, it is essential for people working within the field of film preservation to understand and perhaps seek new ways of approaching and integrating this emerging technology. This might not seem, at first glance, to be an exceptionally sensible claim. The digital camera has become such a ubiquitous filmmaking tool in the modern era that it must seem perfectly logical to conclude that preserving film has only become less challenging over time. The film Suicide Squad (Ayer, D., 2016, USA), for example, was shot using 35 mm film strips that were then converted into a digital format for editing and distribution purposes. This is also a common practice for archival film projects, which in recent years have shifted away from converting digital footage to film strips in order to maintain the projects in their native format and avoid the expense involved in a transfer to a physical format. None of this, of course, should be very surprising. By its very nature, the film industry has always evolved alongside and in conversation with innovations and new technologies. And there has long been a strong connection between movies as a medium of truth and the archival concept of evidence.

Movies, in essence, are elements of the past. This perception must necessarily guide the daily practice of analysing, selecting, and preserving records, making them accessible and rendering them intelligible to future users. Until the arrival of digitization, the primary conception of “cinema as evidence” was highly physical, since it essentially described a mechanical way to preserve the past that was not subject to distortion in the same way as other types of art. But while digitization has impacted this attitude only slightly, in this essay we propose that film archiving can go far beyond the traditional concept of preserving history by seeking a more enduring system of preservation that could potentially allow films to be preserved not only through physical formats but also as a memory.

Film Preservation, Digitization and Alternatives 

The creation of film archives and the need for film preservation are subjects that should be treated with the utmost seriousness due to the speed at which films decay and the accompanying possibility of losing these invaluable records. The first influential discussions on motion picture longevity can be traced back to communications in the early twentieth century among the motion picture industry’s primary players worldwide, concerning how best to manage their product throughout the production, distribution, and exhibition processes. (Gracy, 2013b, p. 369) The systems which came about as a result of these conversations certainly represented an essential first step in the archival process, but they tended to extend only as far as the needs of commercialization required, something which remains a constant problem in the film industry.

Specifically, film exchanges were created as central locations where the film collections belonging to the motion picture companies were available for rental to local exhibitors. Most of them included corporate offices and private screening rooms where said exhibitors – i.e. the owners and operators of domestic movie theatres – could preview and choose movies for rental on a commercial basis. But while the creation of these exchanges represented a useful first step along the path towards long-term preservation, the desire for profit exerted its own counteracting pressures. For an industry that relied on regular, repeat customers, for example, the problem of poorly maintained films was a constant concern. (Gracy, 2013b, p. 372) With the advent of digital cinema, of course, these two impulses have largely been reconciled. Digital films can be easily and widely accessed by customers and vendors alike without in any way degrading the cinematic medium itself. And thanks to the possibilities inherent in streaming video, the moving image has become the predominant form of communication in the twenty-first century. Modern life, in many ways, would be incomprehensible without photography, video and cinema, all of which can now be retrieved, produced, controlled, and propagated by anyone with access to the internet. (Forbes, 2009, p. 37) The opportunities these systems of communication have created for interaction between individuals and the safe and durable creation and storage of new meanings and new memories are quite possibly immeasurable.

The digital medium does have inherent problems, however, not the least of which – in common with physical film – is a certain amount of fragility. Just as film strips are subject to degradation and destruction over time, digital film exists in a form that can be corrupted, or lost, or rendered otherwise inaccessible. The first cause of this fragility is that technology moves faster now than it did when film preservation first emerged as a discipline. In less than a decade, current digital film creation and storage methods could easily be rendered obsolete, making it almost impossible to preserve certain artifacts for an extended period of time. Second, digital preservation requires expertise that archivists and filmmakers still do not always possess, since many of the concepts and technologies involved are either brand new or still being developed. And third, the way that films are screened can and has changed over time, leading to serious distortions in how people perceive the final product. (Conrad, 2012, p. 31) Even if we thought to adopt the practice of creating physical film copies as backups, modern filmmaking has adopted digital technology so completely – with computer-generated imagery (CGI) demanding the use of digital storage and manipulation – that it ultimately makes more sense to find new digital options instead of ignoring them. (Conrad, 2012, p. 37) But despite the impasse that this situation might seem to represent, organizations that recognize the value of archiving audio-visual materials are actively working towards the goal of saving and preserving materials throughout the modern digital media landscape. Even in light of certain legal problems, such as copyright issues and the demands of commercialization, digitization and digital materials are being embraced as offering more solutions than problems, in large part because they offer fast access to records to an extremely broad audience.

Not everyone is equally enthusiastic about the increasing ubiquity of digital media, however. One concern that has some archivists siding against digitization is the idea that archives would focus their efforts on creating accessible copies rather than protecting the original materials for preservation purposes. Budgeting, of course, is a necessary aspect of archival operations; decisions must be made about how and where money is spent, and there are legitimate concerns among preservationists about how the apparent value of digitization will likely draw resources away from physical preservation. In time, they feel, this kind of thinking will lead to a standard of practice that favours screening rather than preserving original materials, a shift that will ultimately reduce the quality of available material in an effort to promote wider accessibility. (Gracy, 2013a, p. 369) Granting that digitization does also offer the possibility of screening without using or affecting the physical record, solutions developed going forward must nevertheless also account for lingering issues having to do with conservation, storage, preservation, and duplication.

A key example of one of these issues concerns the datacentres that end up storing the relevant digital records. Recalling the comparison offered above, digital film datacentres share with physical film warehouses the risk of materials being damaged by exposure to the elements or simply degrading over time. Thankfully, new technologies are presently being developed that could potentially increase the lifespan of certain digital storage devices. One example of this is the experiment that Microsoft concluded at the end of September 2020. A two-year test of a sealed container datacentre located on the seafloor off Scotland’s Orkney Islands successfully demonstrated the overall reliability of the technology in question in an environment with reduced corrosion from oxygen and humidity, fewer temperature fluctuations, and a general absence of people who could damage the equipment as a result of their daily interaction with it. (Roach, 2020, para. 12) The success of this kind of experiment opens the door to storing records using equipment that could theoretically last much longer than has previously been the case. This development would seem to solve at least two problems at once. The first is that it protects the physical manifestation of the records in a sealed form for future generations. The second is that it allows for the creation of copies at a comfortable pace and thereby preserves the record in the memories of the individual and society for as long as they desire to screen the relevant video. This kind of thinking, called “cultural memory,” constitutes a distinctly non-physical aspect of the archivist’s discipline whereby material is preserved in part by keeping it alive in the collective minds of a given community (Vierke, 2015, p. 21). In many ways, this way of thinking has only really become possible since digitization became relatively inexpensive.

Previously, preservation of analog moving images and audio required a high initial investment in equipment which still did not always solve some of the problems inherent in the process. Screened copies of films, for example, tend to be of lower quality as a result of continuous use, and film transfer work, while possible, tends to take a prohibitive amount of time (Gracy, K. 2013a, P. 368). Digitization allows films to be cheaply and easily stored and screened, but the technology involved is still in its relative infancy at the moment, and the problems associated with its prolonged use are still being debated. 

That being said, it is necessary to remember that the film industry encountered the same kinds of problems relatively early in its history, since there was virtually no consideration given to film preservation or the value of this new type of physical record until the 1930s at the earliest, several decades after the advent of moving pictures. In this way, we could consider this present stage as the beginning of a new era in the history of film archives and of the film industry itself (Gracy, 2013b, p. 371). The film industry is always going to think in terms of business, of course, and considerations of profit and loss will ultimately determine how and when new technologies are adopted. That said, some amount of thought should still be given to the less material benefits of film preservation. In addition to being products that are intended for exhibition and sale, after all, films and photography are also forms of art and historical documents that are deeply intertwined with the concepts of evidence and memory.

A Closer Look at Film as Evidence

Since the inception of the film industry, it has become natural for people to connect with the idea that moving images and photographs can in some manner preserve time. The first cinematographers sought to record events worldwide and offered a glimpse of the world for an audience eager to devour each new image that emerged. The first decades of this industry’s life accordingly inspired artists and intellectuals to debate the implications of these developments from a diverse array of aesthetic, scientific, and philosophical perspectives. Underlying many of these debates was the impact of these new recording devices upon the conception of memory (Amad, 2010, p. 96). For some researchers, the film reel resembled a sort of time capsule or time machine which would capture a place or an object in a way that could potentially be stored for future reference, marking the indexical, irrefutable, and reproducible trace of past events as they unfolded in duration (Amad, 2010, p. 135). But as the medium of cinema took time to be understood, this conception of its impact on memory and culture also took time to be digested and adopted.

Perhaps the best-known researcher, and the one to whom most film historians and theorists return when attempting to explain what they think cinema actually is, would be Andre Bazin. His essay Ontology of the Photographic Image (1958) has offered generations of scholars an extremely useful analogy between the “mummy complex” and the essence of film. In primary terms, Bazin put forth the idea that photography, and cinema itself, offers us the opportunity to preserve, artificially, anything that is captured through the lenses of the camera; to snatch it from the flow of time, to stow it away neatly, so to speak, in the hold of life (Bazin, 2005, pp. 82-83). This ability that cinema possesses to preserve time accordingly gives the film a quality of credibility that no other art form can claim, and has lent film an inherent quality of truthfulness, since photography enjoys certain advantages in terms of this transference of reality from the thing to its reproduction (Bazin, 2005, p. 94). Bazin followed this initial claim by further arguing that the value of the camera was that it could be considered objective, since for the first time, between the originating object and its reproduction, there intervened only the instrumentality of a non-living agent. For the first time, the world’s image was formed automatically, without man’s creative intervention. The arts, up to that point, were based on the presence of man. Only photography derived an advantage from his absence. (Bazin, 2005, pp. 92-93) This theory is crucial to archival studies since archives are interested in preserving evidence with as much fidelity to the original as possible.

It is, of course, necessary to delimit the concept of evidence for this argument to succeed. For this document’s benefit, we ultimately gravitated toward the theories of English philosopher and political reformer Jeremy Bentham and American lawyer and legal scholar John Henry Wigmore, which suggest that evidence is constituted by the very processes that use evidence to prove a fact or acquire knowledge about a past event. (Meehan, 2006, p. 137) This short explanation seems to align with our analogy of the mummification of the past and the essence of evidence. It also seems to apply in legal terms, as when the recording of an event constitutes proof that the event took place, with the image or record itself a signifier of the relationship between record and event. (Meehan, 2006, p. 139) This notion of evidence as proven fact is a concept that in large part governs the placement of records in archives; specifically, the idea that records prove an event in the past. The past was recorded and is stored so it can become evidence for the future. The archive offers a complete look at the past and, in this way, accepts the truth.

This concept of evidence necessarily relies upon the idea that the archive is a repository of objective truth and that the material stored in the repository is there to preserve the evidence of truth. Archivists employ notions of evidence to refer to the function and value of records, to shape how they treat records, define the role of the archive in society, and provide a particular substance to archival ideas concerning the nature and purpose of the archival endeavour. (Meehan, 2006, p. 128) This kind of thinking should inherently connect to the notion of memory, yet it seems that it is normal to divide these two concepts and even consider them to be in opposition. This thinking has contributed to the current division between evidence and memory and kept archivists from fully considering the possible affinities between the different facets of the archival idea. (Meehan, 2006, p. 131) By digitizing objects, it becomes possible to continue connecting memory to evidence while also preserving the original physical material. Perhaps more importantly, it gives us the chance to see two archival systems, cultural memory and physical archiving, living and working together. In a way, it’s possible to mummify the records and at the same time give them a new life. The opportunities are endless.

Conclusion

The idea of film as evidence can thus be traced back to the advent of cinema, and as a result of the works of theorist Andre Bazin, it has been cemented in the popular imagination that cinema is the evidence of time as preserved through images. Despite this close connection between film as a medium and evidence as a concept, however, preservation has always been and continues to be a significant problem for those creating and studying film in its various forms. Until the advent of digitization, archival studies and cinema studies did not question the basic concepts that govern the creation and storage of records. But while some may perceive the ensuing conversations as a kind of crisis, this is emphatically not the case. The advancement of technologies in digital storage, like the advancement of technologies used in cinema, should not be considered extraordinary but rather a natural step in preserving the cinematic record. The constant evolution of technology in cinema has allowed it to survive and become the predominant form for the representation of reality. Digitization offers archivists the same kind of opportunity, allowing for the screening of films without using or affecting the physical records. The technology currently available for storage is limited, but it will improve and become the preferred method for research in time.

About The Author

Oscar Aguilera Garcia is a Master of Information student at the University of Toronto, where he learned the importance of archival methodology in film preservation and new preservation methods. He is working on a documentary project, in line with his goal of working on film documentaries. Oscar is currently finishing his Master’s degree.

Works Cited

Amad, P. (2010). “Counter-Archive: Film, the Everyday, and Albert Kahn’s Archives de la Planète.” Columbia University Press.

Ayer, D. (2016) Suicide Squad. Warner Brothers: USA

Bazin, A. (2005). What Is Cinema? Translated by Hugh Gray. Forewords By Jean Renoir. Univ. of California Press: California. 

Chassanoff, A. (2013). “Historians and the Use of Primary sources Materials in the Digital Age.” The American Archivist (2013) 76 (2): 458-480

Conrad, S. (2012), “Analog, The Sequel: an Analysis of Current Film Archiving Practice and Hesitance to Embrace Digital Preservation.” Archival Issues 34(1), pp. 27-43.

Forbes, D. (2009), “Film Archives: A Decaying History.” African Research & Documentation 110, pp. 37-43.

Gracy, K. (2013a). “Ambition and Ambivalence: a Study of Professional Attitudes Toward Digital Distribution of Archival Moving Image.” The American Archivist 76 (2): 346-373. DOI: 10.17723/aarc.76.2.t401kx8j64682224

Gracy, K. (2013b). “Moving Image Preservation Work: The Evolution and Integration of Moving Image Preservation Work into Cultural Heritage Institutions.” Information & Culture: A Journal of History 48(3), pp. 368-389.

Meehan, J. (2006). “Towards an Archival Concept of Evidence.” Archivaria 61 (Spring): pp. 127-146.

National Film Preservation Foundation (n.d.) “Preservation.” National Film Preservation Foundation: https://www.filmpreservation.org/preservation-basics [accessed September 27, 2021]

Roach, John (2020, September, 14) “Microsoft finds underwater datacenters are reliable, practical and use energy sustainably.” Microsoft Innovation Stories: https://news.microsoft.com/innovation-stories/project-natick-underwater-datacenter/ [Accessed December 1, 2020]

Vierke, U. (2015), “Archive, Art, and Anarchy: Challenging the Praxis of Collecting and Archiving: From the Topological Archive to the Anarchic Archive.” African Arts 48(2) (Summer), pp. 12-25.

Introduction

SAGESSE VOLUME VII WINTER 2022 – AN ARMA CANADA PUBLICATION

by Barbara Bellamy

 


 

From the Editor

Welcome to the seventh edition of Sagesse: Journal of Canadian Records and Information Management, an ARMA Canada publication. We have several great articles written by some familiar authors and first-time Sagesse authors for you to enjoy. As always, we welcome your feedback.

Sagesse Editorial Team

I would like to thank the Editorial Review Committee for their time, insights, and dedication to the Information Profession and Sagesse. Without them, there would be no Sagesse. 

Christine Ardern

Alexandra (Sandie) Bradley

Pat Burns

Sandra Dunkin

Heather McAra-Tinkler

Stuart Rennie

Uta Fox

I am also excited about the addition of Ann Snyder to the Editorial board. Ann’s experience includes data remediation; data mining/analytics; e-discovery; IG process building/improvements; IG technologies; IG program maturity assessment/gap analysis; long-term digital preservation; IG program training; IG frameworks, programs, committees; policy development and implementation; IG and data transfer during restructuring events; and industry best practices.

I would also like to extend my appreciation to Christy Walters and Jay Jorgensen from the ARMA Canada Board for their support and efforts in securing French translation and maintaining our publication on the ARMA Canada site. And finally, thank you to Simon Ouaziz, a colleague of mine, for reviewing the French translation for content prior to publication.

University Essay Contest

ARMA Canada held its fourth essay contest for graduate students enrolled in graduate information management programs at Canadian universities in 2021. We are pleased to announce that Oscar Alonso Aguilera Garcia from the University of Toronto was the $1,000 recipient with the article “The Strange Case of Dr. Digitization and Mr. Film”. The article discusses the need for archivists and information professionals to consider new methods of film preservation. It proposes that film archiving can go far beyond the traditional concept of preserving history by seeking a more enduring system of conservation that will allow films to be maintained in various physical and digital formats while preserving their integrity.

Congratulations Oscar! 

2022 Sagesse Articles

Bruce Miller discusses the “Impact of Tangible Capital Asset (TCA) Accounting on Electronic Recordkeeping Practices” for municipalities that have implemented TCA. He writes about the need to update the procedures used to create, label and file physical documents. Digital procedures will also need to be updated, including the necessary changes to retention rules.

Mark Grysiuk provides an entertaining look at incident response management in “Say Goodbye to May Long Weekend”. It is a fictional case study about a Canadian organization attacked by hackers right before May long weekend. The Records Manager plays a critical role in guiding management decisions and providing insights into incident response planning.

Good communications are vital to organizations proactively meeting their privacy obligations. In her article “Enhance Communications to Improve Privacy Practices”, Anne-Marie Hayden discusses techniques that can help manage privacy challenges when they arise, as well as techniques to better comply with consent and openness requirements and to improve online privacy policies and notices.

In this next article written by The First Nations Information Governance Centre (FNIGC), “Respecting First Nations Data Sovereignty in Records & Information Management”, Melissa Dane provides an overview of the FNIGC. She defines the concepts of First Nations Data Sovereignty and First Nations data before briefly outlining the First Nations Principles of OCAP®. The paper ends with a discussion of various ways Records and Information Management professionals are implicated in First Nations Data Sovereignty and how they may respect the principles of OCAP® in their work.  

Dans cet article suivant rédigé par The First Nations Information Governance Centre (FNIGC), « Respecter la souveraineté des données des Premières Nations dans la gestion des documents et de l’information », Melissa Dane donne un aperçu du FNIGC. Elle définit les concepts de la souveraineté des données des Premières Nations et des données des Premières Nations avant de décrire brièvement les principes des Premières Nations du OCAP®. Le document se termine par une discussion sur diverses façons dont les professionnels de la gestion des documents et de l’information sont impliqués dans la souveraineté des données des Premières Nations et sur la façon dont ils peuvent respecter les principes du OCAP® dans leur travail.

The thesis of this next article by Jennifer Bodnarchuk, “Information governance vs. data governance: what’s the difference and why does it matter?” is that the distinct differences between data and information do not need to be understood in order to govern data and information. Data and information governance are essential to provide the guiderails of process and structure to protect, preserve, organize, and give appropriate access to the data and information that lead to knowledge and wisdom for organizations.

“Records and Information Management is Vital to System Development and Implementation” by Tod Chernikoff discusses the gap between those who buy or develop information management systems and compliance with records and information management requirements. Records and information management staff must be involved in the Software Development Lifecycle process from the beginning to ensure those systems properly manage records and information across their lifecycle.

« La gestion des dossiers et de l’information est essentielle à l’élaboration et à la mise en œuvre du système » de Tod Chernikoff discute de l’écart entre ceux qui achètent ou développent des systèmes de gestion de l’information et la conformité aux exigences en matière de gestion des dossiers et de l’information. Le personnel de gestion des dossiers et de l’information doit participer au processus du cycle de vie du développement logiciel dès le début pour s’assurer que ces systèmes gèrent correctement les dossiers et les informations tout au long de leur cycle de vie.

Strategic planning is a critical part of any successful program. This paper by Christine Ardern looks at the elements which go into strategic planning. “Do you know where you are going? A look at Strategic Planning” also provides the steps involved and some valuable resources that can be used as reference tools to begin your strategic planning. 

La planification stratégique est un élément essentiel de la réussite de tout programme. Cet article de Christine Ardern examine les éléments qui entrent dans la planification stratégique. «Savez-vous où vous allez? Un regard sur la planification stratégique» fournit également les étapes impliquées et des ressources précieuses qui peuvent être utilisées comme outils de référence pour commencer votre plan stratégique.

Please note the disclaimer at the end of this Introduction, stating that the opinions expressed by the authors in this publication are not the opinions of ARMA Canada or the editorial committee. We are interested in hearing whether or not you agree with this content, or whether you have other thoughts or recommendations about the publication. Please forward them to: sagesse@armacanada.org

If you are interested in providing an article for Sagesse or wish to obtain more information on writing for Sagesse, visit ARMA Canada’s website – www.armacanada.org – and see Sagesse.

Enjoy! 

ARMA Canada’s Sagesse’s Editorial Review Committee

Christine Ardern, CRM, FAI, IGP; Barbara Bellamy, CRM, ARMA Canada Director of Canadian Content; Alexandra (Sandie) Bradley, CRM, FAI; Pat Burns, CRM; Sandra Dunkin, MLIS, CRM, IGP; Heather McAra-Tinkler; Stuart Rennie, JD, MLIS, BA (Hons.), FAI; Ann Snyder, and Uta Fox, CRM, FAI.

Disclaimer

The contents of material published on the ARMA Canada website are for general information purposes only and are not intended to provide legal advice or opinion of any kind.  The contents of this publication should not be relied upon. The contents of this publication should not be seen as a substitute for obtaining competent legal counsel or advice or other professional advice.  If legal advice or counsel or other professional advice is required, the services of a competent professional person should be sought. 

While ARMA Canada has made reasonable efforts to ensure that the contents of this publication are accurate, ARMA Canada does not warrant or guarantee the accuracy, currency or completeness of the contents of this publication.  Opinions of authors of material published on the ARMA Canada website are not an endorsement by ARMA Canada or ARMA International and do not necessarily reflect the opinion or policy of ARMA Canada or ARMA International.

ARMA Canada expressly disclaims all representations, warranties, conditions and endorsements. In no event shall ARMA Canada, its directors, agents, consultants or employees be liable for any loss, damages or costs whatsoever, including (without limiting the generality of the foregoing) any direct, indirect, punitive, special, exemplary or consequential damages arising from, or in connection to, any use of any of the contents of this publication.

Material published on the ARMA Canada website may contain links to other websites. These links are not under the control of ARMA Canada and are provided solely for the convenience of users. ARMA Canada assumes no responsibility or guarantee for the accuracy or legality of material published on these other websites. ARMA Canada does not endorse these other websites or the material published there.

How Projects, Technology and Solutions Were Impacted By COVID-19: The Issues and Solutions We Faced

SAGESSE WINTER 2021 – AN ARMA CANADA PUBLICATION

by Troy Sawyer

 

Back to Sagesse 2021

 

Abstract

This article looks at the impact of COVID-19 on various consulting projects in which I was involved, and at the challenges posed by the shutdown and by clients working from home.

Introduction

As records and information management professionals, we have always worked well offsite with our clients, using tools such as conference calls and remote desktop sharing to develop, implement, execute and deliver everything from policy development to system architecture, in order to support custom development.

2020 started off with a number of great projects. To research this article, I looked back at project timesheets to see where my time was actually spent in the first few months of the year. To preface this, my role varies significantly across the projects we will look at, so this article provides a view that is not specific to my application developer, records consulting, or software implementation and training roles, though it does include an in-depth look at some specific projects. The following is a breakdown of the categories of tasks undertaken within each project, drawn from the review of those early timesheets.

  • Records Management and Software Implementation
    • Content classification
  • Training, Webinars and Documentation
    • We did a multi part webinar series on one of our software tools
  • Legacy Database Migration
    • A few projects regarding legacy databases from old Microsoft Access, spreadsheets and servers.
  • Reporting
    • This mainly included security and auditing.

When COVID-19 first appeared, we had to review all of the projects to see what was involved, where we could focus, and what the next steps would be. Until then, the world (or at least our offices) had been our oyster and things were moving along.

Pre-COVID-19 Project Management Process

Before diving into specific projects, their issues, and ultimately how they were handled, let’s look at the good and the bad of managing projects pre-COVID-19.

Pros  

The approach to managing projects pre-COVID-19 was very structured. Projects had been done for years and followed a set process/template, which varied depending on the project type and client industry. Each one required the creation of a Request for Information/Request for Proposal (RFI/RFP); receiving a purchase order and preparing a contract with the client; signing a non-disclosure agreement (NDA); creating the necessary reports and documentation; and following a series of predefined steps to incorporate project planning, set-up and completion with client sign-off. Given that the processes were well defined, project documentation and planning steps were created quickly, with amendments made according to client needs.

Cons 

While structure has a positive side, it also has a negative side: it can lead to rigid processes that sometimes slow projects down and create obstacles. Strict processes around creating and gathering required project paperwork, along with scheduled meetings, can also slow progress. For example, certain paperwork must be completed between the company and the client before a project can start: NDAs, Statements of Work (SOWs), Purchase Orders (POs), or contracts in general. Often clients require a non-disclosure agreement prior to starting a project, and in order for us to bill a client, we need a purchase order. I am always the worst when it comes to doing work on a project before the NDA or even the PO is complete, because I want to get started. The “legal” paperwork discussed above does not impact my role directly.

In the following sections, we’ll outline our experiences and show how different methods of approaching projects have worked out during this time of COVID-19. 

COVID-19 and mid project adjustments

As projects moved along from the beginning of the year, little did we know what was coming or how much it would affect existing projects that were at many different stages of implementation. Many steps in the process were slowed down because people were working from home, which affected such activities as paperwork being signed. In general, access to resources required for projects was the main factor in slowing projects down. 

Many projects saw an increase in urgency because people were working from home. Working from home created issues around data security and around accessing both physical and electronic information.

Hindsight is always 20/20; if we had known then what we know now, things might have been different. For the remainder of this article, let us look at what did happen and what the issues were.

What Didn’t Change

As a Records Management Consultant and Developer, my day-to-day didn’t change. With our office setup, I continued to work from the office while following the “new normal” guidelines and restrictions. Days were filled with data analytics, software feature development, documentation creation and other aspects of the projects in which we were involved. Many of our clients are outside of our immediate vicinity, and to reduce project costs most of our work has been completed remotely for years, so our day-to-day activities continued. Given our use of desktop and screen-sharing tools such as Zoom and Microsoft Teams, we were able to continue to move work forward using the technology available to us without having to be onsite.

While some things changed, others did not – meeting deadlines, applying Microsoft patches and licensing continued. Current projects had a number of time bombs with the timers ticking. 

Through the next sections I will break down some of the mid project adjustments required, focusing on three project types:

  1. Legacy Database Conversion 
  2. Electronic Document Management Software Upgrade
  3. Corporate Strategy for Records Management

Using each of these projects, I will look at the time bombs, how they were impacted by the global pandemic and how we handled them. The next section describes the challenges of each scenario.

1. Legacy Database Conversion – Resource and Deadline Challenges

For this project, a client had many home-grown Access databases that were being migrated to a cloud implementation of WesternIM’s Records and Information Management Tool – WISPIR (WesternIM’s Information System for Physical Inventory Recording). The project had been approved for about a month, but the actual project work was in the early stages. With only half a dozen people involved, we will look at how mid-project resources were cut back, limited, transitioned to new jobs or given other priorities. How did we succeed where other projects were in jeopardy of failing?

In terms of the legacy database conversion, the perfect storm happened. Not only did client personnel transition to working from home because of COVID-19, but a personal leave was coming for the individual with the required corporate knowledge, compounded by the fact that the project had to be finished by the end of the fiscal year!
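To give a sense of the kind of extraction work a legacy Access conversion involves – an illustrative sketch only, not WesternIM’s WISPIR migration tooling – the following Python example exports every user table from an Access database to CSV so the data can be reviewed, cleaned and mapped before loading into the target system. The database path and output folder are hypothetical, and the approach assumes a Windows machine with the Microsoft Access ODBC driver and the pyodbc library installed.

```python
# Illustrative sketch: export all user tables from a legacy Microsoft Access
# database to CSV files for review and mapping prior to migration.
# Assumes Windows, the Microsoft Access ODBC driver, and pyodbc.
import csv
from pathlib import Path

import pyodbc

ACCESS_DB = r"C:\legacy\records_inventory.accdb"   # hypothetical path
OUT_DIR = Path("export")
OUT_DIR.mkdir(exist_ok=True)

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=" + ACCESS_DB
)
cursor = conn.cursor()

# Enumerate user tables, skipping Access system tables (named "MSys...").
tables = [row.table_name for row in cursor.tables(tableType="TABLE")
          if not row.table_name.startswith("MSys")]

for table in tables:
    cursor.execute(f"SELECT * FROM [{table}]")
    columns = [col[0] for col in cursor.description]
    with open(OUT_DIR / f"{table}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(columns)           # header row
        writer.writerows(cursor.fetchall())

conn.close()
```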

2. Electronic Document Management Software Upgrade- Agility 

Let’s look at a large Electronic Document Management Software (EDRMS) upgrade. Here we had a brewing pot of issues that could only have been handled with an Agile approach. The Agile methodology is something we use in the IT development world to break down a process; it involves more back-and-forth consultation with the client on acceptance of our implementations and allows work to be done, reviewed and adjusted quickly as the project moves from start to finish. This is a skill set that has allowed us to adapt our projects and accommodate large changes to scope and timelines.

The existing situation with the EDRMS upgrade was a recipe for disaster:

  1. Old browser technology support for user functionality was being discontinued
  2. There were requirements to move to a newer browser. 
  3. Microsoft was rolling out the requirement for Active Directory to have secure connections.
  4. Security implementation was not adequate in previous implementation.
  5. Unstructured content was filed through a very stringent strainer.

To focus on a couple of ingredients here, between project management, IT and WesternIM as the consultant, we had to be very agile in order to make sure the brewing pot didn’t sit on the fire for too long and that the result was palatable on both sides.

3. Corporate Strategy – Access to information 

Remote access was not widely available to support all of the organization’s activities. Security, control and even physical information were limiting factors on accessing content. With our Corporate Strategy for Records Management project, it was very clear that accessing information was the number one issue for staff working from home. The lack of remote access to both physical and electronic information created a roadblock for the RM team in supporting the organization’s need to access information.

A new data map was required and processes for requests for information needed to be developed. How do you get from point A to point B without the data map that identifies corporate vital records? As a result of COVID-19, new records were being created around personal or financial information.

A couple of years ago, a software system had been installed that the client was having some difficulties with. Collaboration was already an issue; the tools were not easy to use for the work being done in various departments. Compounding that, licensing was coming due, data centres were being moved and the software was out of maintenance.

In this case, we’ll see how it was a reason to accelerate a strategy.

Collaboration

Before the arrival of COVID-19, the physical workspace was a great asset for allowing people to collaborate. Conference rooms, the water cooler or just peeking over a cubicle made working with others easy. Onsite work was great for getting a better understanding of what really happens. RFPs, requirements documents and contract arrangements are a great start, but real-time collaboration, in our experience, always increases your success rate as well as the quality of the end product.

Over the past few months since the COVID-19 pandemic began, projects have been managed on the fly and ad hoc virtual meetings have become the new norm. As a result, we have seen projects become easier to manage, allowing more work to get done.

In the pre-COVID-19 days it was normal to schedule a meeting for every Tuesday, but working during the pandemic has shown me the light. In the new work environment, it has become a great habit to respond to emails promptly. We are all sitting in fewer meetings, but we should be talking (emailing, texting, video calling) more.

COVID-19 Impacted Project Solutions

With what did happen, let’s revisit the three main projects touched on in the previous section – Mid Project Adjustments. What were the solutions and how did we keep projects on the rails and ultimately get to successful conclusions? 

  1. Legacy Database Conversion 

Success in this one came relatively easily: we were working with great clients. We did miss the window for working with the main corporate knowledge holder before they went on leave; the leave was coming and we all knew it. Early into the project and COVID-19, we brought in reinforcements. Working with the client project lead, we identified an internal resource, which proved beneficial since the new resource was going to be the main corporate knowledge holder on the next project. Bringing that person up to speed meant a little more work on this project up front but put us well ahead in the next.

Yes, we missed the end-of-fiscal-year deadline by weeks. Arrangements were made so that the work could continue after the deadline, but the project was “officially” completed on paper so that it would fall into the previous fiscal year. Even though the deadline was missed for full system rollout, with the next project already lined up with the new reinforcements, the two projects ran well into each other. As we worked through the second project, we were able to take lessons learned from things like naming conventions and data maps on the first project directly into the next. This also worked the other way: as lessons were learned in the second project, we were able to go back to the first and apply improvements, since project work was able to continue after the official wrap-up.

  2. Electronic Document Management Software Upgrade

Like most projects, User Acceptance Testing (UAT) played a big part in getting through successfully. We discovered that the software didn’t have all the functionality the user wanted.  With the initial recipe not quite getting us the end meal that was going to be filling, adjustments to the ingredients were necessary to avoid the disaster we referred to in the recipe earlier. 

While working through UAT with the client, issues of user functionality being unsupported or unavailable were identified, so other methods were brought in. As the consultants, we looked at workarounds and brought a few options to the table that assisted the client in getting what they needed. The client had a number of tools they could leverage to fill in any gaps found. By using other software they already had, undertaking some custom work to make their software work the way they wanted, and even designing a simple manual process, the upgraded system fulfilled their requirements.

In the end, what solved one of the main issues was a patch made available by the software vendor! Apparently, we weren’t the only ones missing the deprecated functionality that was only available on the old browser.

  3. Corporate Strategy for Records Management

For this project, success meant migrating the data. With a high risk that records and documents would become unavailable and potentially be lost due to licensing, cost and the time needed to process paperwork, something had to be done, and quickly.

Given the risk, we managed to get some pre-approved support time, worked many late nights and, with the use of our WISPIR tool, successfully completed a full migration of the records and documents to the new solution. All content was backed up and migrated into the new environment, which was more familiar to the client’s internal IT staff.

Even with all the project red tape because of working in the COVID-19 environment and with strict and rigid processes around required paperwork, sometimes all it comes down to is just hard work.

Present COVID-19 situation 

Getting through the projects during COVID-19 was both difficult and rewarding. Not only did we succeed, we developed new and better project management strategies we otherwise would never have needed. Even if everything goes back to normal tomorrow, we can take the lessons learned and apply them. How we handle deadlines and resources, apply our agile methodology, access and secure information remotely and finally collaborate with our clients have all changed as a result of personnel working from home. 

With remote work as an option going forward, technology tools have become more valuable. Access, stability and security have come to the forefront. Things we often took for granted have now become our essential tools.

This is new territory for us, and it can lend itself to the implementation of bigger and better solutions. As things change, we are seeing the introduction of new ways of working: new routines and processes, home work spaces, new tools such as online conferencing applications, and even just adapting to working and communicating with each other without the physical cues that make up a large part of getting your message across.

Just as we learn from history, we learned from this and made the future of our projects better. With more flexibility, better collaboration with clients and within our team, and increased and secure access to information, at the end of the day, success came from a lot of simple hard work. When the going got tough, we got smarter.

New Projects

With most of the projects we started at the beginning of COVID-19 wrapping up, what is next for new projects? The key points that I believe contributed to our success are:

  • Communication and hard work. 
  • Staying connected by whatever way makes sense for you and your organizations, whether it is using better email communication, or webcams during online conference calls.
  • Replying to that email today, or just making the call if you need an immediate response.
  • Lastly, the more work done, the more that is completed, however you work today. 

As we settle into the new normal, we have found that you should not hesitate to start new projects, even during a pandemic. For a while there, anything new was unlikely to get traction until priorities were re-evaluated against the impact of the pandemic. Even during periods of uncertainty, the deficiencies in our processes and procedures came into focus. Based on the various situations we found ourselves up against, there are several positive outcomes: we’ve learned from what we’ve run into and now understand how such factors as agility, access to information and collaboration helped us succeed during such interesting times.

About the Author

Troy Sawyer is Partner and Lead Developer of Western Information Management (WesternIM). Troy has a diverse background in and out of the Information and Records Management world. He is the architect and lead developer for WesternIM’s software applications, including WISPIR – WesternIM’s Records and Information Management Tool – and WesternIM’s Connector for Outlook, as well as many custom solutions for integration and software customization.

Troy’s background spans physics and technology, chemistry, system development, programming for many platforms and industries, data and system analysis, education and teaching, and energy and environmental air emissions. His project experience ranges from independent contractors to large international shipping companies in both the private and public sectors.

Extraordinary and Pragmatic: Targeted Information Management in Higher Education

SAGESSE WINTER 2021 – AN ARMA CANADA PUBLICATION

by Jay Jorgensen, MLIS CRM

 

Back to Sagesse 2021

 

Abstract

Higher education institutions are a birthplace for innovation and creativity through teaching, learning and research. Building and implementing pragmatic administrative processes can augment innovation and capabilities by enabling effective and efficient resource allocation and decision-making.

This article highlights technology selection, process and information management considerations through case studies of initiatives rapidly implemented at the University of Alberta at various levels – operational, compliance, and strategic – to respond to environmental factors that would reshape the work environment and expectations, both short-term and long-term.

Introduction

Access to reliable information assets – records, information, data, and processes – is critical for effective decision-making. This applies at all levels in an organization, from strategic goals to maintaining and improving business operations. Disruptions from an external environment, financial constraints, and changes to workplace location can be exploited to introduce new or enhanced business processes, strategy, and technology that introduce workplace efficiencies relating to information management. Material improvements to information access and use, sharing and preservation can take place with little to no financial “budget-line” investments, instead leveraging existing subject matter expertise and applying an information management lens to it.

A large Canadian university had to respond quickly and methodically to two simultaneous, extraordinary change factors: a global pandemic, and a significant multi-year reduction to its operating budget and government funding model. The University of Alberta had to increase its capacity to deliver more student and research services and shift its operational and service delivery model from an on-campus to entirely remote environment, while reducing administrative costs in doing so. Employees would have to do things differently, leveraging new tools, processes and technologies to meet operational objectives.

In this article, three case studies highlight the different responses taken to change or enhance organizational information management practice in response to environmental factors. The development of immediate, incremental and pragmatic solutions to achieve business outcomes is described, along with challenges faced in implementation and user adoption. Context is also provided around how a change initiative could, or rather had to, come to fruition out of necessity.

Provincial Post-Secondary Landscape

The Institution

In the Province of Alberta, there are 26 publicly funded post-secondary institutions. These institutions receive government funding to offset operating costs and offer most of Alberta’s post-secondary programs. The University of Alberta in Edmonton, Canada is the largest institution in this category.

The University of Alberta has a student and employment community of over 50,000 people and five physical campuses: four located within Edmonton city limits and one in the municipality of Camrose. It offers a diverse range of graduate and undergraduate degree programs, and together with the University of Calgary it accounts for most of the province’s university research capabilities.

The University of Alberta is one of Canada’s top teaching and research universities. It holds an international reputation for excellence across the humanities, sciences, creative arts, business, engineering and health sciences. Times Higher Education, a London (UK)-based magazine and provider of higher education data for research-led institutions worldwide, publishes its World University Rankings annually, and the University of Alberta placed sixth overall in Canada (131st worldwide) on performance indicators in four key areas: teaching, research, knowledge transfer and international outlook.

Current State, Challenges and Constraints

Provincial Budget

In March 2020, the University of Alberta was tasked with reducing overall operating expenses by more than $120 million CAD (approximately 11%) over a three-year period (2020-2023). In order to achieve both in-flight and planned budget cuts, significant financial changes had to take place, including closing buildings, increasing instructional and non-instructional fees, and raising tuition. 

Additionally, a larger restructuring effort was put into place to improve efficiencies and reduce administrative costs, including the potential reduction or elimination of 1,000 staff positions. This budget reduction was on top of a previous in-flight budget reduction of 6.9% (approximately $44 million CAD) that took place in October 2019 for all university campuses and units.

Worldwide Pandemic

The World Health Organization (WHO) was informed on December 31, 2019 of cases of pneumonia of unknown cause in Wuhan City, China. A new strain of coronavirus, subsequently named the “COVID-19 virus,” was identified as the cause; this strain had not previously been identified in humans.

On March 11, 2020, the rapid increase in the number of cases outside China led the WHO to announce that the outbreak could be characterized as a pandemic. By then more than 118,000 cases had been reported in 114 countries, and over 4200 deaths had already been recorded.

On March 13, 2020 the University of Alberta suspended all in-person classes and exams, eight days after the province had confirmed its first presumptive case of COVID-19, and only 2 days after the WHO declared the outbreak of COVID-19 a global pandemic. The Government of Alberta cancelled all in-person classes, remote learning began March 14, 2020, and by March 22, all possible research and operations were also moved to remote work, with restrictions and full closure applied to nearly all administrative and office-based functions.

The University’s Response

A perfect storm of financial, operational and environmental health factors came together at the same time to severely impact the University’s day to day administration and service delivery. In addition to producing an operational plan in response to the budget, the full institutional Crisis Management Team was activated to handle and coordinate pandemic-related activities.

Several factors immediately came into play:

  • Loss of expertise and knowledge through forthcoming staff layoffs and attrition;
  • Fundamental change in location and way of conducting operational activities; and
  • Tremendous upcoming changes to both administrative structure and service delivery at the University as part of the budget response and restructuring.

The global pandemic and provincial budget had to be addressed simultaneously by the institution. These issues could not be deferred or ignored; they had to be acted upon immediately, across the institution and at individual operating unit levels. The budget and pandemic also had a highly personal effect, as individual employees would directly experience the fallout from both environmental conditions: job losses or reclassifications; changes to reporting structures; and working remotely with fewer or different resources available to continue to provide similar levels of service.

Institutional Obligations

The 2019 Harvard Business Review article 6 Reasons why Higher Education Needs to be Disrupted stated:

“the reality in today’s digital-first world is that we need to teach every generation how to learn, unlearn, and relearn – quickly – so they can transform the future of work, rather than be transformed by it”. 

Harvard Business Review

This statement rings true in this circumstance and applies to both students adapting to online learning and a virtual community; and staff working remotely and accessing and creating information and records virtually. University students and staff had to react and adapt to the given circumstances quickly and repeatedly, with little time, or tolerance for indecisiveness.

Business as Usual

Immediately following the suspension of in-person teaching, learning and administration, the institution still had to carry on with administering programs and services and delivering instruction to students, remotely instead of on campus.  Employees of the University were required to work remotely. This meant challenges and changes would take place to existing business processes as well as expectations:

  • University administration had to be fiscally responsible in enabling these services; and
  • Students also had to adapt to the new, virtual instructional model to obtain credits.

Addressing Information Risk

From an administrative perspective, the same information management challenges remained:

  • Responding to increasingly complex information access requests;
  • Preventing and minimizing the frequency and severity of privacy and security breaches;
  • Assigning and mitigating information risk within areas of responsibility;
  • Implementing meaningful information management improvements quickly; and
  • Identifying and reducing duplicate or overlapping efforts in managing records.

The University also had continuing obligations to create and capture records; to be able to respond to information access requests; and, notably, to continue to provide services within and across broad portfolios including human resources management, fundraising and alumni engagement, procurement, facility operations and management, governance, faculties, research, and others.

It was up to the institution to find ways to continue to deliver services, to meet expectations and service requirements, relying heavily on individual units that made up the University to adjust and adapt first, while still finding ways to maintain services and operations.

Institutional Response

Business Transformation

Creativity and innovation would be key elements in addressing, out of necessity, the new challenges to service delivery, communications, sharing of information, and decision-making. Combining organizational transformation requirements with pandemic response mechanisms meant the timeline for change, adoption and implementation would be accelerated.

U of A For Tomorrow

In 2019, the Government of Alberta announced it would decrease available funding provided to all higher education institutions in the province. In response to this forecasted budget reduction, the University of Alberta created U of A for Tomorrow, a five-year institutional plan to address short and long-term fiscal restraints relating to continued research, teaching and community engagement efforts by the institution. 

The U of A for Tomorrow plan had two major, short-term initiatives for 2020 focused on academic restructuring and service excellence (administrative) transformation:

  • Implementing process improvements and in-flight corrections to operating models;
  • Development and approval of a new operating model that would enable institutional savings of over $120 million in the near term.

Longer-term goals of the plan included increased self-sufficiency (less reliance on government funding), and an increase in global reputational rankings. This was a time where new capabilities would have to be built; new expectations would have to be set; and priorities would have to be drawn on what services had to continue and in what capacity. 

COVID-19

In March 2020 the COVID-19 pandemic was announced, and the University of Alberta had to respond and shift from a connected physical campus to a digital remote work and study environment. This meant a change in expectation for conducting and performing work activities – many activities that were already anticipated in the U of A for Tomorrow proposal. The pandemic accelerated and enabled many of the University of Alberta’s change efforts out of necessity.

Do More with Less, or Do Differently

The University of Alberta is an enormous, billion-dollar higher education operation. Higher education is also an industry that encourages innovation. The University could not ignore the budget, nor the global pandemic. Now was a time in which changes needed to be implemented, and fast. An institutional strategy to reform both administrative and academic structures was being developed at the highest level, and individual units would have to anticipate, react and respond to that mandate.

Unit-based adjustments are often small in nature, involving processes or personnel, and on the surface may not register individually as part of institutional business transformation. When counted together, or aggregated as part of a larger strategy, similar changes become much more noticeable and can demonstrate evidence of change, conformance, and the ability to demonstrate compliance.

Case Study 1: Demonstrating Compliance Capabilities

Department leaders did not have a clear understanding of the risks to information within their areas of responsibility, or of how these risks came about or were measured. Additionally, the institutional mandate on what records management requirements were necessary to protect or preserve information assets was not easily applied across such a federated operating and information infrastructure at the University of Alberta.

The University needed to build a top-down approach to information management, an approach that could be measured and quantified from the bottom up. The proposed solution was an Integrated Information Strategy (Information Strategy) and Information Maturity Framework built off the principles of the institutional Records Management Policy. 

Integrated Information Strategy

In an environment of financial austerity and business transformation, the University Records Office had to find a way to do more, with less. This involved looking at information as a business asset, or an organizational resource. How could this asset be leveraged to allow stakeholders (the Unit) to be able to effectively access, trust and protect its information?

Instead of focusing on creating more standalone or idealistic records management procedures, the University Records Office took a pragmatic approach to identify and assess information management capabilities in Administrative Units at the University of Alberta. 

The Integrated Information Strategy (Information Strategy) incorporated and addressed intersectional information management issues that allowed University Units to identify, accept and mitigate the risks in managing University information. The Information Strategy created a partnership among multiple information management disciplines with shared accountability. The solution was simple, scalable and repeatable, and included consistent messaging.

Information Maturity Framework

The four points below comprise the University of Alberta Information Maturity Framework, produced by the University Records Office:

  • Access to available information and records;
  • Effective management and organization of University information and records;
  • Preservation of information and records; and
  • Mitigation or elimination of information risk.

Combined, the integrated strategy and framework brought together experts from privacy, information security, records, and archives, to collaboratively address the inconsistent application of records management practices across the institution. For instance, it can be difficult to answer a records management question around personal information if not considering privacy; or difficult to answer a system-related privacy issue without also engaging both information security and information technology.

Implementation

As part of the Integrated Information Strategy, an objective picture of information risk within departments needed to be painted at individual unit, portfolio, and institutional levels. To do so, the University Records Office created an engagement and assessment framework (the Information Maturity Framework) that identified and compared Units’ current information management practices against both Integrated Strategy and Records Management Policy expectations and requirements.

The Information Maturity Framework contained three sequential phases that were implemented in every engagement between a Unit and the University Records Office: pre-engagement, support and assessment, and post-engagement. In each phase there were unique activities that contributed to and enabled the overall implementation, while also demonstrating progress through the initiative.

The pre-engagement phase of the Information Maturity Framework was critical to a successful implementation of the Integrated Information Strategy. This is where Unit leadership was consulted and engaged to “open the door” for a records management capability assessment. This involved identifying who in the unit should be involved, for how long, and what kind of work effort would be required to complete the assessment. A charter for engagement was also documented and approved, outlining scope, schedule and anticipated outcomes.

The support and assessment phase is where the University Records Office really got involved with the business unit, building and enhancing information management capabilities. Tactics included delivering a capability assessment workshop; acquiring and reviewing process documentation; and identifying gaps and working to close deficits.

In the closure and post-engagement phase, evidence was gathered and guidance was provided to the Unit through training. A completed assessment report was provided to each group, with suggestions for next steps, along with a scorecard representing the Unit’s capability to comply with the Records Management Policy, using the Framework as an assessment tool. The scorecard and assessment templates were the same for every unit; the complexity of the information within was unit-specific.

Following a playbook, or common engagement approach, allowed the strategy to be measured across different groups against the same criteria, and also allowed a relatively similar effort to be placed on determining records management capabilities.
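As a purely illustrative sketch (not the University Records Office’s actual templates or tooling), a same-criteria comparison could be captured in a simple scorecard structure like the one below. The four dimensions mirror the framework points listed earlier; the 0–3 maturity scale and the example values are assumptions made only for the illustration.

```python
# Hypothetical unit scorecard keyed to the four framework dimensions above.
# The 0-3 maturity scale and example scores are illustrative assumptions.
from dataclasses import dataclass, field
from statistics import mean

DIMENSIONS = [
    "Access to available information and records",
    "Effective management and organization",
    "Preservation of information and records",
    "Mitigation or elimination of information risk",
]

@dataclass
class UnitScorecard:
    unit: str
    scores: dict = field(default_factory=dict)  # dimension -> maturity level 0..3

    def overall(self) -> float:
        """Average maturity across the assessed dimensions."""
        return round(mean(self.scores.values()), 2) if self.scores else 0.0

# Two hypothetical units assessed against the same criteria.
cards = [
    UnitScorecard("Unit A", dict(zip(DIMENSIONS, [3, 2, 3, 2]))),
    UnitScorecard("Unit B", dict(zip(DIMENSIONS, [2, 2, 1, 2]))),
]
for card in cards:
    print(f"{card.unit}: overall maturity {card.overall()}")
```

Because every unit is scored against the same dimensions, results can be compared across units or rolled up to portfolio and institutional levels.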

Observations

The information strategy applied across the institution, and the framework could be adapted for implementation in any circumstance, any organizational structure, and any level of complexity in managing information. Implementing the Integrated Information Strategy validated that unit information and records could be ‘good enough’ to enable compliance with rules, implementation of technology, trustworthy sharing of information, and a variety of other business requirements that involve information and records.

The duration of time spent with a unit often had an inverse relationship with records management effectiveness: the longer the time spent engaging a unit, the more likely it was to procrastinate or not address key issues. Some groups were so keen to come up with their information management rules that they were able to do so in one-hour working sessions. This speaks to the point of ‘failing fast’ and achieving quick wins that demonstrated success and built reputation while establishing unit-based accountability for information risk.

The strategy and framework addressed information management pain points that were commonly experienced by multiple units (e.g., access, protection, security, duplication) and produced technology-agnostic, process-based rules to alleviate those pain points. In most cases, bringing a Unit together to achieve a common understanding of basic records management practices was enough to create consistency and predictability in creating Unit information. Units were able to answer, for themselves, common questions such as:

  • Where are our Unit’s official records?
  • How do we name and organize our committee meeting minutes?
  • What information do we provide in response to an information request?
  • Who can provide me with access to the Google Shared Drive?
  • Is the Unit in another faculty doing this the same way we are?

As records management capabilities continue to grow, the University is already exploring what to do next to bring about efficiencies and effectiveness in managing information and records. The ability now exists to compare results across units and observe how effective and pragmatic records management lends itself to technology implementation as a next step or enhancement.

Case Study 2: AI-Enhanced IM Capabilities

McKinsey & Company, an international management consulting firm, stated in a 2018 executive briefing that businesses need technology improvements to provide value, contribute to economic growth, and make once-unimaginable progress on some of our most difficult societal challenges. The briefing goes on to suggest that workflows and workspaces also need to adapt, to create an opportunity where people work more closely with machines. Instead of viewing automation as a replacement for people, automation needs to be a tool or resource that workers can leverage to increase their productivity and capabilities in the workplace.

In this example, automation opportunities focused on user driven change, designing what was wanted and needed by an operational unit to satisfy external and internal demands for information and customer service. 

Robotic Process Automation (RPA) is the technology that enables the configuration of computer software, or “robots” to emulate human interactions with and within technology systems to execute business processes. RPA robots interpret and trigger responses and communications between systems to perform repetitive tasks quickly, frequently, and accurately. 
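To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of repetitive task an RPA robot might perform – picking up newly scanned documents from an intake folder, renaming them to a convention, and filing them into a dated folder structure. The folder names and naming convention are invented for illustration; this is not the RPA platform or configuration used at the University of Alberta.

```python
# Hypothetical sketch of a repetitive filing task of the kind RPA automates:
# pick up new PDFs from an intake folder, rename them to a date-stamped
# convention, and move them into a year/month folder structure.
# Folder paths and the naming convention are illustrative only.
from datetime import date
from pathlib import Path
import shutil

INTAKE = Path("intake")   # where scanned documents land
FILED = Path("filed")     # destination repository root

def file_documents() -> int:
    """Rename and move each new PDF; return the number processed."""
    INTAKE.mkdir(exist_ok=True)
    today = date.today()
    target = FILED / f"{today:%Y}" / f"{today:%m}"
    target.mkdir(parents=True, exist_ok=True)
    processed = 0
    for pdf in sorted(INTAKE.glob("*.pdf")):
        new_name = f"{today:%Y%m%d}-{processed + 1:04d}-{pdf.stem}.pdf"
        shutil.move(str(pdf), str(target / new_name))
        processed += 1
    return processed

if __name__ == "__main__":
    print(f"Filed {file_documents()} document(s)")
```

A production robot would layer scheduling, exception handling and audit logging on top of a task like this, but the core value is the same: the repetitive steps run quickly, frequently and consistently without tying up staff time.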

Within Advancement Services at the University of Alberta, the Office of Recording Secretary (ORS) – the unit responsible for donation and gift processing, endowment fund establishment and reporting – was a high-volume transaction processing unit with stable processes, operating procedures, and clear lines of accountability.

Challenges and Constraints

Primary objectives of this RPA implementation were to enhance and bring additional efficiencies to already-established processes and procedures for intake, storage, and management of charitable donations and related records within ORS; and to enable cross-platform, automated capture and retention management of records and information spanning creation, use, and preservation.

ORS needed to maintain, or potentially increase throughput of processing donation related records and information, such as balance sheets, tax receipts and journal entries. The opportunity for enhancement was there: 

  • Many transactions within the unit still had some paper-based component to them, whether it be signing or approving documents, issuing and filing receipts, and performing transaction audits or reconciliations;
  • Batch processing was repetitive in nature and volume-dependent; processing was limited to the capacity of staff available to handle and process the batches during normal operating hours.

Another challenge was up-front versus over-time costs for technology or staffing. Could a department justify a budget line-item for technology investment in a time of fiscal constraint? This was a case where solution implementation could be quick, and noticeable, and the financial cost was significantly less than the salary equivalent for additional staff.

Multiple brainstorming sessions were held with Unit stakeholders, along with representatives from Information Services & Technology, the University Records Office, and Advancement Records. At the beginning, it was unclear what was wanted from an automation perspective, but through successive brainstorming sessions and conversations, several key requirements were identified:

  • Create an entirely electronic tax receipting process for charitable donations to replace the current paper-based process. This would leverage technology integration between systems, including the use of Application Programming Interfaces (APIs), to enable automatic filing and retention management of final records. Numerous other charities were already issuing electronic tax receipts for online donations; this was becoming an operational expectation as well as an expectation from donors;
  • Enable more efficient processing of final records for access and preservation in the Unit EDRMS (Electronic Documents and Records Management System), when individual documents could take up to 10 minutes each to manually prepare and upload to the EDRMS (receive the file, rename the file, upload to the EDRMS, add departmental metadata, perform quality assurance, link the EDRMS file to the Unit Customer Relationship Management database); a sketch of automating these per-document steps appears after this list;
  • Establish a common understanding of what records would be available and for how long (how long to retain departmental records, and which department would be responsible).
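As an illustration of the per-document steps noted above, the sketch below automates the rename, upload, metadata and CRM-linking sequence. The EDRMS and CRM endpoints, authentication token, and metadata fields are invented for the example; they do not describe the University of Alberta’s actual systems, APIs or retention rules.

```python
# Hypothetical sketch of automating the per-document EDRMS steps listed above:
# rename the file, upload it to the EDRMS with departmental metadata, and
# record the resulting EDRMS link against the CRM donation record.
# Endpoints, token, and field names are invented for illustration.
from pathlib import Path

import requests

EDRMS_URL = "https://edrms.example.org/api/documents"          # hypothetical
CRM_URL = "https://crm.example.org/api/donations"              # hypothetical
HEADERS = {"Authorization": "Bearer <service-account-token>"}  # placeholder

def process_receipt(pdf: Path, donation_id: str, fiscal_year: int) -> str:
    """Upload one tax receipt and link it to its donation record."""
    # 1. Rename to a predictable naming convention before upload.
    renamed = pdf.with_name(f"{fiscal_year}-{donation_id}-receipt.pdf")
    pdf.rename(renamed)

    # 2. Upload to the EDRMS with departmental metadata.
    with open(renamed, "rb") as f:
        resp = requests.post(
            EDRMS_URL,
            headers=HEADERS,
            files={"file": f},
            data={"category": "Donations - Tax Receipts",
                  "fiscal_year": fiscal_year,
                  "donation_id": donation_id},
            timeout=30,
        )
    resp.raise_for_status()
    edrms_id = resp.json()["id"]

    # 3. Link the EDRMS document back to the CRM donation record.
    requests.patch(f"{CRM_URL}/{donation_id}",
                   headers=HEADERS,
                   json={"receipt_document_id": edrms_id},
                   timeout=30).raise_for_status()
    return edrms_id
```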

Following the brainstorming sessions, the Unit was able to clearly articulate where the bottlenecks or challenges were present in operational processes, and that became the focus for potential automation opportunities.

The initiative was quick to get to technology design, building off the records management fundamentals already confirmed using the Information Maturity Framework. ORS was able to describe its information management processes, including naming conventions, filing structures, metadata, access considerations, and final/official records and repositories.

Opportunity

In this scenario, could an up-front investment in technology automation for predictable business transactions serve to free staff time and increase responsiveness? This initiative was a culmination of several institutional tools and resources, as well as a shift in philosophy by the Unit. Now was the time to consider ways to improve business processes, in light of budget cuts and a global pandemic that fundamentally changed the way information and records were being handled.

Jumping to automation is not easy, however. There had to be subject matter expertise available to represent current processes, to suggest and test new solutions, and to manage the change. This was a unique situation that took advantage of that expertise on the sides of the Unit, Information Services & Technology, and from an information management perspective.

Outcomes

A coordinated work effort between multiple stakeholders (Information Technology Business Transformation, Advancement Records, the University Records Office, and the Office of the Recording Secretary (ORS)) achieved the following outcomes:

  • Additional functionality was explored when the Office of Advancement and Central Information Technology came together to consider creating additional technology functionality to address an information management (and resource) challenge;
  • Key processes described as part of this initiative were automated, though some of the end results were achieved in a manner not expected at the time of project initiation: during the product development and build cycle, the team was able to identify additional system capabilities that could be leveraged from within to meet the unique needs of the Unit;
  • ORS determined its capability and risk tolerance for investing in automation. This spoke to its understanding of processes and technologies, as well as its appetite for additional technology change. ORS was able to decide whether to pursue more technology projects, and whether its current or projected workload (and budget) would allow it to pursue additional or continuous improvement opportunities involving technology and records.

Case Study 3: Process and Technology Efficiencies

Many business processes were disrupted when the University of Alberta mandated a remote work environment in March 2020 in response to the COVID-19 pandemic. Units themselves had to adapt quickly to survive in an entirely remote digital work environment, significantly different from the in-person, location-based work environment that was previously the norm:

  • Congregation and in-person collaboration was halted;
  • Printing forms or other materials was virtually eliminated; and
  • Interoffice mail was no longer an option for sharing information and documents.

Units needed to implement and establish new expectations and technology solutions that would get them through the “current situation”, and have some staying power to make things better both short and long term. 

This is not to say that there were no electronic transactions or approvals in place before the pandemic shutdown; rather, there was a piecemeal, unit-based approach towards acceptance and implementation of signature technology and processes for performing electronic transactions. In many cases electronic forms did exist, but instead of driving digital workflows they were simply electronic copies of paper-based forms (which would still often require printing at one or more stages).

Approvals and workflows at the University were largely disconnected and paper-based: many internal forms required printing, physical completion and signing, and routing to intended recipients. This slow process was tolerated by many groups in an in-person working environment, in spite of technology advancements and increased client expectations to leverage electronic transactions and workflows.

When work was forced into a remote environment, fortunately, both parties to an approval – the originator or signatory as well as the receiver – were stakeholders sharing a common goal. As such, each side had something to gain by enabling, understanding and supporting electronic approvals.

The pandemic unlocked an opportunity for improvement when it came to electronic transactions – all business units were impacted, and the urgency applied equitably, in that conducting approvals and transactions electronically became a requirement, not just a desired state. The risks and urgency of changing processes, however, remained focused within Units.

Unit Necessity

As an example, the Office of Advancement realized quickly that there would be many benefits to using electronic approvals and signatures compared to the existing paper-based processes, including:

  • Cost savings, through reduced printing technology and materials;
  • Time savings, with the ability to route approvals electronically to recipients;
  • Ability to continue business operations in a remote working environment.

Electronic signatures were not entirely new to Advancement, though they were never officially endorsed – a common theme amongst other University administrative departments as well. There were instances where individuals were already using electronic signatures technology to approve or sign documents, and signatures were being generated using a variety of technology tools. These “one-off” or personalized solutions were identified and analyzed for appropriateness, usability, and authenticity: were the transactions valid, complete and acceptable? In many cases, yes – and this would be the building block for developing an internal procedure.

The Office of Advancement was able to establish its own unit-based operational governance model for electronic approvals and signatures. A standard operating practice was published, including the following scope:

  • How electronic approvals would be performed (e.g. applying scanned images of signatures for embedding in electronic forms or documents, typing names in acknowledgement boxes, or, where available, using existing electronic signatures technology);
  • Which unit transactions the electronic approval process would apply to;
  • Which unit transactions would still have to be conducted using physical signatures; and
  • Who within the Unit would support and be responsible for implementation.

As an accountability measure, the Unit also consulted with the University Records Office for guidance around institutional and legislative requirements or constraints for using electronic signatures. It was agreed that the internal operating practice aligned with known requirements, and that the measures put in place met the minimum requirements for implementing electronic signatures and for maintaining the records and information associated with those approvals.

A quick, focused standardization of process and expectation within the Unit enabled a rapid implementation of electronic approvals. This approach met most, if not all, operational requirements and was a welcome change and improvement for many individuals.

Additionally, this exercise to identify and validate internal processes for electronic approvals proved useful when the University also began to explore the same challenges (and opportunities) at an institutional level.

Institutional Interest

Central Information Services & Technology (IST) was also aware of, and impacted by, the need for electronic approvals to varying degrees across the institution. IST saw this operational shift as an opportunity for change and brought together an electronic signatures technology selection pilot project. IST chose specific stakeholders to participate in the pilot. Participant groups were selected based on their familiarity with technology solutions, their business need for electronic signatures, and the variety of potential situations in which to apply the technology:

  • Office of Advancement, as an operational unit with clear business needs to continue to approve and handle transactions, approvals and signatures involving donors and charitable donations in support of University programs.
  • Central Human Resources, as an operational unit with a high number of transactions requiring signatures or approvals, especially relating to personnel, time coding and payroll operations, and hiring and managing approvals.
  • Supply Management Services, as an operational unit with high-volume, financial transactions for purchasing equipment, supplies and services on behalf of the University.
  • Central Information Services & Technology, the University Records Office, and the Information & Privacy Office, as governance groups with information management requirements.

A diverse and technology-savvy pilot group with both common and unique business needs for an electronic approvals solution would be critical to the initiative’s success. Departments needed to be able to make decisions and approvals electronically, especially within their own units and often across multiple units. Information and processes had to be reliable and trusted. To achieve this, coordination and confirmation at the policy, technology and operational levels had to be established and communicated. Common requirements of selecting and implementing an institutionally-endorsed electronic signatures technology included:

  • Development of an information strategy for electronic signatures including cost-benefit analysis;
  • Agreement on the information risks and impacts on business records, within and across units, of using (or not using) electronic signatures technology;
  • Development and implementation of an operational model governing technology procurement, administration, and evaluation.

Once the scope of the transactions was determined, IST coordinated the definition of 13 different use cases that could be categorized by risk or complexity of transaction:

Category and example use cases within the category:

  • Lower Risk: Internal approvals or authorizations, such as vacation requests or timesheet approvals within one department or unit.
  • Medium Risk: Single-signature approvals, such as a donor’s intention to make a pledge to the University, or the authorization to hire an employee; approvals spanning two units.
  • Higher Risk: Multiple-signature and/or “one-over” approvals, such as performance appraisals, appointment letters, funding authorization for professional development, or the establishment of new funds or endowments on behalf of a donor to the University of Alberta; approvals spanning more than two units or levels of approval; approvals requiring external signatures.

Table: Information Risk Categorization by Transaction
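Purely as an illustration (and not part of the pilot itself), the tiers in the table could be approximated in code as a rough rule of thumb based on the number of units, signatures, and external signers involved. The thresholds and the function below are assumptions; the pilot defined its tiers by use case, not by formula.

```python
def risk_category(units_involved: int, signatures: int, external_signatures: bool) -> str:
    """Rough mapping of a transaction onto the pilot's three risk tiers (see table above)."""
    if external_signatures or signatures > 1 or units_involved > 2:
        return "Higher Risk"   # multiple or "one-over" approvals, >2 units, or external signers
    if units_involved == 2:
        return "Medium Risk"   # single-signature approvals spanning two units
    return "Lower Risk"        # internal approvals within one department or unit

# Illustrative calls only; real categorization was done by the pilot groups, not by code.
print(risk_category(units_involved=1, signatures=1, external_signatures=False))  # Lower Risk
print(risk_category(units_involved=2, signatures=1, external_signatures=False))  # Medium Risk
print(risk_category(units_involved=3, signatures=2, external_signatures=True))   # Higher Risk
```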

The use cases were cross-referenced across the participating pilot groups to identify specific transactions to test against two proposed electronic signature technologies. It was found that most use cases and transactions could be tested in all pilot groups; the remaining transactions were unique to one pilot group based on business function.

The definitions and use cases were by no means set in stone; they were created to represent known or likely operational situations requiring approvals with an urgent business need. The use cases were assessed for their ability to integrate with the technology – that is, how well the proposed signature technology solution(s) could achieve the expected outcomes of the use cases – rather than for how closely existing processes could be replicated step by step.

Implementing a standardized electronic signatures technology required bringing together operational requirements, information governance, subject matter expertise, and technology governance. Each brought a unique perspective to the table, and each depended on the others to achieve what would be an institutional success.

Summary

Each of the three case studies presented was an in-the-moment solution to environmental and financial challenges faced by operational units. Common elements of each case study included:

  1. Leveraging a situation to force change or improvement. Changes had to happen, and instead of asking why now, the question changed to why not now? To let an opportunity for improvement pass would be poor judgement and detrimental to future change possibilities;
  2. Enabling change through incremental approaches. Smaller, successive changes and improvements increase a unit’s knowledge of the situation and build confidence in achieving success;
  3. Leading from the business perspective rather than the information management perspective, as effective business solutions have information management practices integrated into them; often the same business solutions exist in different subject areas, with common information management principles that are not immediately recognized as such;
  4. Recognizing that solutions are not perpetual. Continuous improvement should be a business objective, including considering whether the status quo is ‘good enough’. Environment, technology, and expectations are always evolving, and units must be able to evolve and adapt when developing solutions for creating, managing and protecting information and records.

About the Author

Jay Jorgensen, MLIS, CRM, provides information management consulting, advisory and assessment services at the University of Alberta. Jay has over 15 years of diverse experience spanning the healthcare, energy, fundraising and higher education sectors. Jay is the current Marketing & Communications Director for ARMA Canada Region.

References

1 Publicly funded post secondary institutions, Government of Alberta https://www.alberta.ca/publicly-funded-post-secondary-institutions.aspx retrieved October 10, 2020

2 About the University of Alberta. https://www.ualberta.ca/about/index.html retrieved September 28, 2020

3 World University Rankings 2021. Times Higher Education. https://www.timeshighereducation.com/world-university-rankings/2021/world-ranking retrieved October 13, 2020

4 The University Budget. April 1, 2020. https://www.ualberta.ca/vice-president-finance/resource-planning/the-university-budget/index.html retrieved September 19, 2020

5 Coronavirus disease (COVID-19) pandemic. https://www.euro.who.int/en/health-topics/health-emergencies/coronavirus-covid-19/novel-coronavirus-2019-ncov retrieved September 2, 2020

6  About COVID-19. University of Alberta. https://www.ualberta.ca/covid-19/about/index.html retrieved October 10, 2020

7  6 Reasons why Higher Education Needs to be Disrupted. Harvard Business Review. November 19, 2019. https://hbr.org/2019/11/6-reasons-why-higher-education-needs-to-be-disrupted retrieved October 3, 2020.

8 University of Alberta for Tomorrow Vision, Principles and Goals. University of Alberta https://www.ualberta.ca/uofa-tomorrow/goals/index.html retrieved October 1, 2020.

9 Records Management Policy. University of Alberta. March 14, 2014. https://policiesonline.ualberta.ca/PoliciesProcedures/Pages/Information-Management-and-Information-Technology.aspx retrieved August 22, 2020.

10 Information Management Maturity Framework. University Records Office, University of Alberta https://docs.google.com/presentation/d/1VUV89c-_y0OgN3FRmmcOc_1iW5F59qenUODme2Vi4WY/ retrieved October 13, 2020.

11 AI, automation, and the future of work: Ten things to solve for. McKinsey Executive Briefing by James Manyika and Kevin Sneader. June 1, 2018. https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for retrieved September 19, 2020.

12 Robotic Process Automation (RPA). UiPath. https://www.uipath.com/rpa/robotic-process-automation retrieved September 10, 2020.

Leveraging Digital Transformation to Benefit Student Learning – Creating a Digital Student Record in Alberta

SAGESSE WINTER 2021 – AN ARMA CANADA PUBLICATION

by Donna Molloy

 

Abstract

 

The use of a digital student record is transforming recordkeeping practices for the creation, management, and disposition of student records in Alberta. Student records document decisions that support student learning, and their creation is mandated through provincial government legislation. This article identifies what organizations need to consider for digital transformation projects. Specifically, it identifies strategies for Alberta school divisions preparing to upload digital student records into the provincial Department of Education’s digital record repository. Successful implementation of digital student records requires collaboration, resource sharing and staff engagement.

 

Introduction

 

For schools in Alberta, uploading student records into an electronic repository is a transformational change creating both challenges and opportunities. Many school divisions have a long-standing practice of keeping hard copy student records. A student record is made up of content mandated by the provincial government through the Education Act and, specifically, the Student Record Regulation. A student record is created when a student enters school and includes decisions made about the education of the student collected or maintained by a school division, regardless of the manner in which it is maintained or stored. It is a vital record documenting decisions for and about student learning and is usually generated over a 13-year period, with information gathered from more than one location (i.e. primary or elementary school, junior high, then high school). Some examples of content include the student’s registration, a birth certificate, and report cards.

Adding to the complexity of managing student records, some students take programs in more than one location and multiple people add content to the student record. Many school divisions administer student records using paper files, yet over the last decade an increasing amount of content has become digital and is stored outside of a printed record, creating multiple information repositories. The way the student record is maintained varies across the province and sometimes even within school divisions. The retention period for a student record is specified in the Student Record Regulation as a period of time after a student either graduates or leaves the school division.

 

Background

 

More than a decade ago, the provincial Department of Education (the Department) began development of a digital record repository for student records, known as the Provincial Approach to Student Information (PASI). From the outset, PASI was identified as a solution to the lengthy time it takes to transfer a student record from one school division to another when a student moves. PASI is now the endorsed repository for storage of digital student records. By 2018, the Department had communicated that, beginning in the fall of 2020, divisions must manage the transfer of student records by uploading content into PASI.

 

Prior to PASI

 

For school divisions, the prior process for managing transferring student records involved physically moving hard copy files from one site to another. Historically, when a student was about to leave a school (e.g. primary or elementary school), school staff would identify and transfer hard copy files on to the next school (e.g. junior high or high school). School staff sent individual student records through Canada Post when the student moved away, or boxed and delivered files for a particular grade to the new school, for example when all grade six students were leaving elementary school and transferring to junior high. Throughout a school year, a varying percentage of students transfer to other schools: some within the same city but to another division, some elsewhere in Alberta, and others out of province. For some students, particularly students in foster care, the transition from one school to another is often poorly handled because these students move more frequently than others. For students with complex needs, special reports and assessments are sometimes difficult to locate and do not reach the new school in time to be used by the educators who need them to support these students in a new school setting.

 

Over the last decade, Alberta communities experienced forest fires and floods that affected student records; water and fire are severe hazards for school buildings and have resulted in the destruction of paper records.

 

The COVID-19 pandemic also highlights the need for a digital record that can be accessed outside a school building. In the spring of 2020, school buildings closed, preventing easy access to hard copy files. Layoffs and funding uncertainty created resourcing challenges for staff to perform administrative tasks. Throughout the summer of 2020, schools focused on developing and implementing school re-entry plans for ensuring student safety.

 

Digital records introduce a change in the ways staff work. Some school divisions have a greater capacity for a digitization project than others, based on available resources and the volume of content that is already digital. There is limited records management capacity at schools, since school administrative staff have numerous tasks to accomplish on a daily basis and managing records is mixed in with a long list of other priorities. The move to digitize records creates challenges, and the perception of more work, for staff whose primary role is serving the front-line needs of students. Additional funds to secure resources for digitizing records are not available.

 

Opportunities from PASI

 

When PASI was initially developed, the Department of Education communicated how the process for transferring student records would become easier for schools. The focus of communications was “Katie’s story”, which showed how PASI would help a young girl in the foster care system to be welcomed when she moved to a new school. PASI was supported and endorsed within the province by the College of Alberta School Superintendents and by the Association of School Business Officials of Alberta.

 

The Department of Education built the digital repository, identified document types to be uploaded, implemented information security provisions, and created a tool for uploading content into PASI.  Ultimately, PASI brings digital transformation to one of the most critical records produced by schools, the student record, and it supports the transition of these records from one school to another.

 

Some initial communication from the PASI team identified Service Bureaus as a solution for scanning records on behalf of schools, and some schools proceeded to have files scanned by experienced, professional imaging service providers. These providers follow the province’s digitization standards in the Government of Alberta’s Digitization Guideline, which mirror the Canadian standard, Canadian General Standards Board (CGSB) 72.34-2017, Electronic Records as Documentary Evidence. While outsourcing scanning works well for clearing out file cabinets, this transformational change requires that new content be continuously uploaded as new information is received. Staff need to understand the process and upload what is required when records are created. An example of an important record to be added to the student record is a guardianship document, typically provided as a hard copy Court Order from parents. A guardianship document specifies who has access to, and information about, a student, and is needed by school staff, particularly Administrators. As documents are received, school staff need to upload new content to the digital student record.

 

PASI requires everyone to work collaboratively, province-wide, to adopt a change that fundamentally alters the way that a critically important record to support student learning is managed in Alberta.

 

PASI provides an opportunity for school divisions to embrace the value of digital records management functionality, and it also allows for electronic records disposition. With the transition from paper to digital records, it will no longer be necessary for schools to inventory, box, or shred paper files. Without access to school buildings during the pandemic, parents are submitting records electronically more often than pre-COVID; this content needs to be verified and then uploaded into a secure repository, which makes the process more efficient.

 

PASI supports the transparent transition of students from one division to another. When a new student arrives from an Alberta school division, the new school will access the student’s record electronically in PASI through a change in access permissions, and the student is then assigned to the new school. This ends the old process, in which schools contacted the student’s prior school to request the transfer of a record and a hard copy file had to be mailed through Canada Post.

 

A digital student record provides an opportunity to strengthen information security measures with login and security features (e.g. password protection). Paper records often presented security challenges for schools, where they were to be kept locked in cabinets and accessed in a centralized office location, and there was inconsistency across most school divisions. The future is controlled access to student records through role permissions built into the system. PASI has built-in requirements to authenticate records and an audit trail that allows for monitoring who has accessed and viewed documents. This was not possible with paper records.

 

PASI also provides near-real-time updating of content; changes occur without a delay for batch uploading. The Department of Education is also granting limited, read-only access to students, who can look up their marks and generate their own transcripts. This capability is managed by the Department and removes the administrative burden currently on school divisions to disseminate routine information to former students, while supporting students transitioning to post-secondary institutions.

 

Successful implementation of PASI occurs when school divisions work collaboratively to create quality records. Consistent processes are required for good quality student records. Identifying what is to be specifically uploaded, how this can best be achieved, and using standards for quality are all crucial for success.

 

Technology is Transforming Business Processes

 

Before the COVID-19 pandemic, tools to support videoconferencing and online meetings were already in place; with the pandemic the use of these tools became critical. Schools were closed to students in the Spring of 2020. Access to facilities was limited for staff and student safety. Microsoft Teams, Google Meet, Zoom and other platforms were used to communicate between teachers, students, and support staff.  Implementation of new processes during the pandemic required pivoting to these tools immediately. Working from home and the use of technology has helped many to see the value and necessity of digital records management.

 

In the world of the COVID-19 pandemic, school staff and specialized service providers (Psychologists, Occupational Therapists, Physical Therapists, etc.) cannot readily access paper files in file cabinets when they are restricted to working from home; PASI provides access to a student record through the use of technology. When the COVID-19 pandemic will be over is unknown, but access to digital documents supports student learning when educators can review prior assessments, analyze reports on student progress, and determine whether the interventions being used to support student learning are showing results. In a digital age we can be responsive to the needs of staff serving students at multiple sites spread geographically throughout a school division. Ultimately there will be more support for students and their learning.

 

The transformation to digital records has been occurring within school divisions over a number of years as each division invests in its systems. PASI leverages the investments made in Student Information Systems in Alberta, provided by several vendors (e.g. PowerSchool, Maplewood).  Vendors are working with school divisions and developing solutions for uploading content to PASI from these systems. Digital data was often stored separately from paper records in multiple repositories and work is well underway to find a way to upload information from the multiple repositories into PASI.  PASI is now the “source of truth” for the province-mandated content of a student record.

 

Good Information Governance and Leadership

 

Important to the successful implementation of a digital student record program is good leadership, with a vision identifying the goals and what is required. Within each school, the Principal has always had an important role in implementing change. Successful projects at schools include Administrators who lead staff, and success requires that they be involved in projects of this nature.

 

In March 2020, at a Digital Student Records Symposium, a smaller school division, with extremely limited capacity, shared how they successfully scanned and uploaded student records to PASI once everyone in the school division knew the plan and expectations. With leadership and planning, their division completed the transformation to digital student records.

 

An important part of digitizing student records for many school divisions is to create clarity regarding the specific content to be included in the mandated student record. This includes information such as registration records, birth certificates, report cards or progress reports, psychological assessments, court orders related to custody and access, and Individualized Program Plans as required by the Student Record Regulation.  Initial project documentation from the Department of Education consisted of a growing list of document types to guide schools on content and categories to be captured.

 

An important first step is to analyze existing record-keeping practices and ensure that only mandated content is uploaded and stored digitally. In many schools, hard copy student records included content that was not required (such as samples of student work, consent forms, anecdotal notes, etc.), and records were often duplicated in more than one location. School divisions then need to identify where other content that is not part of the mandated student record, i.e. supplemental student information, will be stored.

 

Some larger school divisions were early innovators as they uploaded their records into PASI. They served as pilot organizations, and their work assisted others. Their questions to the Department of Education project group provided clarity about required documents, and their experiences improved processes. These divisions led the way and smoothed out some of the issues. Their insights and lessons learned were shared with others, proving that good information governance and leadership is critical to success.

 

Project Management Essential

 

To add their content to PASI, schools are able to use the PASIprep tool provided by the Department of Education. It is up to each school division to identify how to upload student record content, creating a strategy that makes sense for it. School divisions need to determine how to sort and prepare the files for scanning, scan the hard copy records, upload to PASI, perform quality assurance, and put in place a process for secure disposition of the original source documents. School divisions also need to purchase scanners and identify who will perform the work.

 

In a perfect world, one strategy would work for all; however, schools and school divisions do not operate that way. Maturity levels for records management differ across the divisions. Resources are not equal across the province, and each school division operates by identifying its own priorities. Even within a school division, there are often inconsistencies in how student records are managed. Involving staff in planning supports engagement in the project and ensures that the needs of the school division and school Administrators are addressed.

 

Each school division needs to develop an implementation strategy by identifying the current state and planning for what is required. This means determining resource requirements and how much can be achieved at each school within the school year. The outcome is a customized plan that identifies the phases of the project.

 

The approach for getting into PASI is not a “paint by numbers” template, but requires thoughtful analysis. Multiple approaches will be used by school divisions in Alberta.

 

Incremental Change

 

For school divisions constrained by existing priorities, limited resources, and tight budgets, figuring out how to implement the required digital transformation project is challenging. One way to achieve the goal of uploading all student records into PASI is a phased approach. The first priority is to focus on the current requirement to upload transferring student records.

 

The next priority is to identify an approach to upload the remaining student records. Depending on the particular division and its school practices, different solutions may provide a bigger benefit. Some strategies and considerations include:

 

  • Digitizing and uploading content beginning with the kindergarten and grade 1 students, the earliest grades, and creating a plan to have only a digital student record; essentially a day-forward approach. This is a quick win, as you start from the beginning of a student’s school career and introduce a digital student record from the outset.
  • Implementing digital student records for students in early childhood learning programs. These students often have a significant number of reports and assessments.
  • Seeking out opportunities to support digital transformation for “mobile” students, such as children in the foster care system, or students attending school at more than one site.   Electronic records that are easily accessible support efforts for a smooth transition by multiple individuals working with these students.
  • Creating digital student records to support students learning from home.  A portion of students are now learning at home. These students are great candidates for a digital student record. This is often a diverse group of students covering multiple grades.
  • Digitizing student records for students transitioning to the next grade at another school within the school division, i.e. the internal transfer process. For example, schools may digitize student records for students in the highest grade of an elementary school, which avoids shipping physical files to the junior high when they transfer for the next year.
  • Digitizing and uploading content for high value records such as program planning and assessment records and ensuring that records for interventions such as Individualized Program Plans are uploaded.
  • Ensuring that the needs of Administrators at the schools are addressed to support implementation. Their endorsement and leadership are critical to implementing transformative change at schools. It is helpful to determine if there is consistency across the Division and if there are early innovators that the school can work with as a pilot project.
  • Breaking the project into phases to introduce incremental change. This keeps staff focused on what is required, which reduces change fatigue and the burden of asking too much of staff who already have enough to do.
  • When outsourcing to a Service Bureau, clearly define your requirements so that you achieve a quality outcome. Ensure you know how the work of outsourcing will support your overall goal.  Once existing student records have been digitized, the process for adding new content needs to be determined.
  • Digitizing and uploading content from specialized service providers (e.g. Psychologists, Occupational Therapists, Physical Therapists, etc.) so it is accessible to staff who need the information (e.g. Teachers, Counsellors, Administrators). These staff provide their expertise to multiple school sites and to students who are now learning at home. Many of their records were created electronically, so they only need to be uploaded. Some of this work is performed collaboratively by individuals with specialized expertise who share information through a multi-disciplinary, collaborative approach.
  • Seeking out quick wins such as uploading existing digital content. Some schools already have digital records such as online registration. This is an opportunity to upload born-digital content.
  • Finally, acknowledging that some student records will remain as paper. For example, student records for 2021 graduates don’t really need to be scanned as their school careers are ending. School divisions can continue to use existing processes in parallel with digital student records. Digitizing all student record content is not a requirement.   While this creates different approaches to managing records, some school divisions lack the resources (staff and financing) to fully implement digital student records at this time.

 

Changing Lives

 

Supporting student learning, and identifying the value of interventions such as Individualized Program Plans (IPPs) and other customized learning plans for accommodating a student’s needs, is only possible when the information is in the hands of the educator working with the student. A student’s teachers change from one year to the next, and multiple teachers teach students at the higher grades. Administrators, Counsellors, Psychologists, Occupational Therapists, Physical Therapists, and other service providers all need access to student records to support student learning. Information needs to be available to staff and remain accessible as the student progresses through the school system, from one site to the next, and from one grade to the next.

 

Resource Sharing and Collaboration

 

In order to make the transformation to digital student records successful, several school divisions participated in stakeholder consultations with the Department of Education. School divisions that have successfully implemented digital student records described the collaboration required.

 

These projects are most successful when staff from areas such as Information Technology and Information Management, along with school Registrars, Counsellors, and Administrators, work together to identify requirements and strategies.

 

Resources to support school divisions implementing digital student records, along with case studies, have been created and shared by the Records Management Committee of the Association of School Business Officials of Alberta (ASBOA). The Committee, representing several school divisions, has a history of resource sharing starting with the creation of a model records retention schedule guideline. The Department of Education continues to engage Committee members for the benefit of all school divisions.

 

Training and Change Management

 

Critical to the success of this project is the change management process. Many schools have other initiatives underway, and the sudden closing of schools during the COVID-19 pandemic was unprecedented. It is important for staff to see the value of creating a digital student record in the midst of all the other changes they are facing, and to understand how a digital record supports student learning. School staff have close relationships with the students they see every day; the COVID-19 pandemic altered those relationships and challenged teachers to connect with students remotely. Now it is necessary to make student records accessible to staff when they need them, from wherever they are, while ensuring personal information is protected.

 

Conclusion

 

Digital student records, accessible in an electronic records repository, support student learning when timely access to critical documents is made available to educators who support the student’s learning.  The electronic records repository contributes to information sharing among schools and educators and ensures that the investments being made in interventions for students are showing results. Digital student records reduce inefficiency and risks from boxing, shipping or mailing paper records. They allow for electronic records disposition when retention requirements have been met.  They improve information sharing and provide staff access to information that supports student learning where they are learning, particularly when many are learning at home.

 

PASI was a decade in the making and creates the opportunity to share crucial information that supports the transitions that occur for students. It supports the transition of students coming into new school environments. It benefits students with Individualized Program Plans and assessments that need to be available in a timely manner. It also supports the student’s transition into post-secondary education.

 

Alberta has created a digital student record repository, PASI, which demonstrates the value of digital student records for all Canadian jurisdictions. Alberta and the PASI implementation could serve as an invaluable case study for other provinces that may be struggling to manage paper student records, particularly during the COVID-19 pandemic.

 

Technology has transformed how we work and connect. The COVID-19 pandemic shut down the schools and emptied buildings. The future of sharing information is digital, and the creation of this repository for Alberta school divisions is transforming information sharing. When information provided by specialized service providers (e.g. Psychologists, Occupational Therapists, Physical Therapists) is made accessible to the educators who need it for student learning (e.g. Teachers, Administrators), better outcomes are likely. Students are supported in their learning wherever they are receiving instruction when educators are able to access their student record.

 

About the Author

 

Donna Molloy is the Principal Consultant for Dynamic Leadership, a company that has supported Alberta school divisions in successfully implementing improved Information Governance through the creation and implementation of records retention schedules, the establishment of records management programs, and the development of policy documents, tools and training. Work to support student recordkeeping is an opportunity to find ways to improve the management of one of the most critical records generated by school divisions. In an era of fires, floods, and a pandemic, this work is essential to protecting these vital records.

 

References

 

Information about PASI:

https://extranet.education.alberta.ca/pasidevnet/Docs/Business/start.html

 

High school transcript information:

https://www.alberta.ca/student-information-high-school-transcripts.aspx

 

ASBOA toolkit:

https://asboalberta.ca/page/digitization-guidelines-and-toolkit

 

 

Re-Envisioning the Retention Schedule: How to Develop a Software-Ready Retention Schedule

SAGESSE WINTER 2021 – AN ARMA CANADA PUBLICATION

by Bruce Miller, MBA, IGP

 

view PDF 09-Sagesse-2021 Re-Envisioning the Retention Schedule FR

Back to Sagesse 2021

 

 

Abstract

 

Modern electronic recordkeeping software offers new management capabilities and techniques that were not possible with physical records. These capabilities include multiple retention rules per category, retention assignment based on document value, multiple retention triggers and retention types, retention override, automatic declaration of records, and more. Many of these new capabilities are made possible by the metadata fields assigned to digital documents. To take full advantage of them, the retention schedule must leverage this known, available metadata and call for new metadata in support of retention. The schedule must take full account of the new capabilities and use them where appropriate. A retention schedule that takes full advantage of these new capabilities is called a software-ready retention schedule. Structured differently from a traditional retention schedule, it uses multiple retention rules per category, leverages document metadata, uses multiple retention trigger types specific to digital records, explicitly specifies how individual cases are processed, and has several other functional capabilities.

 

The Need for a New Retention Schedule

 

Many organizations have decided to deploy a modern EDRMS (Electronic Document & Records Management System). The records administrators on these projects will soon learn that the retention schedule is the cornerstone of an effective EDRMS.

 

An EDRMS is a blend of two technologies. The first is a modern ECM (Enterprise Content Management) platform, formerly known as document management. This platform provides a digital repository for all electronic documents and supports advanced search by content and metadata, security control, version management, workflow automation, collaborative tasks such as multi-author document review, and much more. The second technology is the recordkeeping capability, often delivered as a set of features within the ECM platform itself or as a third-party product added to it.

 

In reality, the retention schedule underpins both technologies. The retention schedule does more than bring retention rules into the ECM platform; it profoundly shapes the platform itself. This is necessary for the recordkeeping component to work properly.

 

All modern EDRMS incorporate Rules-Based Recordkeeping (RBR) to some degree. RBR is an approach to electronic recordkeeping that automates the functions the end user would normally have to perform. These functions include determining which documents to keep and when to declare them as records, and how to classify them against the retention schedule. A full, proper EDRMS deployment that makes full use of RBR capability automates all of these recordkeeping functions for the end user. End users play absolutely no role in declaring or classifying documents. They simply use the system as an ordinary ECM platform, without having to think about records management. Thanks to RBR, however, records are declared and classified correctly against the retention schedule, even if the user is entirely unaware of it.

 

Modern electronic recordkeeping software can carry out retention and disposition in ways most records professionals may never even have heard of. Because the records are digital, administrators have far more information available at the document level and can leverage it to achieve retention and disposition that is more precise, sophisticated, and flexible. For example, they can apply retention based on document value; they can apply several retention rules to a single category, and even combine different types of retention rules within the same category. The software has these remarkable retention and disposition capabilities; however, the records administrator must tell it what to do. That is the role of the retention schedule. If we know what the recordkeeping software is capable of doing for retention and disposition, we can then build a retention schedule that takes full advantage of these powerful new capabilities. A retention schedule that leverages these retention and disposition capabilities is called a “software-ready” retention schedule.

 

Traditional retention schedules were developed with no knowledge of the capabilities of modern recordkeeping software. If a traditional schedule is used in a modern EDRMS, the software will not be able to use any of its advanced retention and disposition capabilities. Such a schedule will also greatly limit the ability to make full use of modern RBR automation techniques. A software-ready retention schedule, by contrast, is built on the assumption that it will be used in an EDRMS and will take full advantage of the software’s advanced retention and disposition capabilities. Any well-written software-ready retention schedule can be used with any modern recordkeeping software, regardless of brand.

 

Figure 1 shows an excerpt from a clearly outdated retention schedule. It lists a title, a description, and a very simplistic retention rule for each category. It is, however, a real schedule in use today.

 

Figure 1 – Traditional Retention Schedule

 

Figure 2 below shows how much a modern software-ready retention schedule differs from a traditional one. A traditional schedule is usually just a long list of activities, retention rules, and citations. A modern retention schedule, like the one shown in Figure 2, has three distinct but interrelated components. We will come back to these later.

 

Figure 2 – Software-Ready Retention Schedule

 

In this article, we will explain how the retention schedule plays a crucial role in the overall configuration of a modern EDRMS, and we will highlight the characteristics of a software-ready retention schedule.

 

The Role of the Retention Schedule

 

Figure 3 shows what a modern EDRMS looks like conceptually. An EDRMS has three “layers”:

 

The Retention Schedule This is the software-ready retention schedule. The data will be divided into case categories and administrative categories. On the left are two administrative categories (Operator Rounds, and Employee Onboarding). On the right are two case categories (Union Grievances, and Safety Audits).

 

ECM Structure Often called the “information architecture,” the ECM structure comprises all the “libraries,” or locations where documents can be stored. Different ECM products use different terms for storage locations; they may be called libraries, folders, cabinets, and so on. The ECM structure also includes the metadata, the information fields stored permanently with each document placed in each storage location. The ECM structure is not limited to libraries and metadata; it also encompasses version control, security, collaboration, and more. For now, however, we are concerned only with libraries and metadata.

 

RBR Rules RBR rules are the rules created in the recordkeeping software to automate its processes, namely declaration (which documents are declared, and when) and which retention rules from the retention schedule are applied to which locations in the ECM structure.

 

Done well, the retention schedule has a profound impact on the ECM structure. Each category in the retention schedule becomes a library in the ECM structure, bearing the same name; that library is where users store the documents for that particular category. Case categories require the library to be subdivided into “cases” (or containers). These subdivisions group the documents of one case together while keeping them separate and independent from those of every other case.

 

At the top of the pyramid sit the recordkeeping software and its RBR rules. This is where declaration rules are defined, such as “if library = ‘Operator Rounds’ and Approved = ‘Yes’, then declare.” Retention rules are also defined here, for example “if library = ‘Operator Rounds’, retention equals the document’s effective date + 5 years.” The rules need to know the library names and which metadata they can use.
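As a minimal sketch only, the two example rules above might be expressed in code roughly as follows. The library name, the Approved flag, and the five-year period come from the examples; the class and function names are hypothetical and do not represent any particular EDRMS product’s rule syntax.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Document:
    library: str                                   # ECM storage location, e.g. "Operator Rounds"
    document_date: date                            # the document's effective date
    metadata: dict = field(default_factory=dict)   # document-level metadata fields

def should_declare(doc: Document) -> bool:
    """Declaration rule: if library = 'Operator Rounds' and Approved = 'Yes', then declare."""
    return doc.library == "Operator Rounds" and doc.metadata.get("Approved") == "Yes"

def retention_end_date(doc: Document) -> Optional[date]:
    """Retention rule: if library = 'Operator Rounds', retention equals document date + 5 years."""
    if doc.library != "Operator Rounds":
        return None  # no rule defined for other libraries in this sketch
    # Leap-day edge cases are ignored in this illustration.
    return doc.document_date.replace(year=doc.document_date.year + 5)

doc = Document("Operator Rounds", date(2021, 3, 15), {"Approved": "Yes"})
print(should_declare(doc))       # True -> the document is declared a record
print(retention_end_date(doc))   # 2026-03-15 -> eligible for disposition after this date
```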

 

As we can see, the retention schedule forms the foundation on which the ECM is structured, which in turn allows the RBR rules to operate against that structure, as shown in Figure 3.

 

Figure 3 – A Modern EDRMS

 

Case Records

 

The retention schedule must distinguish between a case category and what we call an “administrative” category. Every category in the retention schedule is therefore either a case category or an administrative category. In most organizations today, roughly 60% of all records belong to case categories. The best way to understand the structure of case records is through an example. Suppose there are 1,000 contracts in force at any given time. Each contract has, among other things, a contractor name, a value, an expiry date, and a contract type. This data will not change across the documents of a given case. In theory, each contract’s expiry date could differ from that of every other contract. All contracts would share a single retention rule that reads something like “retain for five years after the contract end date, then destroy.” Although only one rule applies to all 1,000 contracts, that single rule has 1,000 different trigger dates, i.e. 1,000 different expiry dates. The recordkeeping software must therefore track each of those 1,000 dates.
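To make the mechanics concrete, here is a minimal sketch, under assumed names, of how one case-based retention rule produces a different disposition date for every contract. Only the five-year rule and the expiry-date trigger come from the example above; everything else is illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContractCase:
    name: str            # unique case (container) name
    contractor: str      # case-level metadata that stays the same for all documents in the case
    expiry_date: date    # the trigger date for this particular case

def destruction_date(case: ContractCase, years_after_end: int = 5) -> date:
    """One rule for the whole category: retain for five years after the contract end date."""
    # Leap-day edge cases are ignored in this sketch.
    return case.expiry_date.replace(year=case.expiry_date.year + years_after_end)

contracts = [
    ContractCase("Contract 2021-0042", "Acme Ltd.", date(2023, 6, 30)),
    ContractCase("Contract 2021-0107", "Northern Paving", date(2024, 1, 15)),
]

# Although the rule is identical, each case carries its own trigger date,
# so the software must track a separate disposition date per case.
for c in contracts:
    print(c.name, "eligible for destruction on", destruction_date(c))
```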

Let us look at this from the perspective of an EDRMS end user. A user has a document related to a particular contract; the document might be an email suggesting several changes to the draft contract. The user must indicate which of the 1,000 contracts the document relates to. How is this done? The user needs some means of choosing among the 1,000 contracts. The mechanism varies from one ECM system to another, but the most common approach is a simple drop-down list containing the 1,000 contracts, as shown in Figure 4. Each contract has a unique name, and the user must select one of the 1,000 contracts. The ECM system will have a library called “Contracts,” subdivided into 1,000 case containers, each uniquely named for one of the 1,000 contracts. This is a good example of how the retention schedule shapes the ECM structure. The two must work in concert; only then can the RBR rules be applied to the documents held in those libraries.

 

Figure 4 – Selecting a Contract

 

Structure of the Retention Schedule

 

A modern software-ready retention schedule is stored in a spreadsheet, for two reasons:

 

  1. It is machine-readable. Every element of the retention schedule, including all categories and RBR retention rules, can be read by modern electronic recordkeeping software and imported directly into the ECM platform or the recordkeeping software itself.
  2. It presents better. In a spreadsheet we can group items by business unit or department, apply filters to various columns to examine subsets of the schedule, and use automatic category numbering. Compared with a written document, it is a better environment for developing the schedule, revising it, and presenting it to both machines and people.

 

The particular spreadsheet format you use does not matter (Microsoft Excel, Google Sheets, etc.). The examples in this report use Microsoft Excel. The retention schedule is a workbook made up of several worksheets.

The retention schedule has three important components:

 

Categories A worksheet containing all the categories for each business unit in the organization. Each category is named, numbered, and carries a retention rule. Where a category has more than one retention rule, a single rule is shown and all of the category’s retention rules are listed in the MRR (Multiple Retention Rules) worksheet.

 

Case A worksheet containing details such as the naming convention for each case, for all business units.

 

MRR A worksheet containing the retention rules for every category that has more than one retention rule.
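Because the schedule is machine-readable, the workbook can be loaded programmatically. The sketch below assumes a pandas-based reader, a hypothetical file name, and sheet and column labels matching the headings described in this article; it illustrates the principle and is not the import routine of any particular recordkeeping product.

```python
import pandas as pd  # assumes pandas and openpyxl are installed

SCHEDULE_FILE = "retention_schedule.xlsx"  # hypothetical workbook name

# The three worksheets mirror the components described above.
categories = pd.read_excel(SCHEDULE_FILE, sheet_name="Categories")
cases = pd.read_excel(SCHEDULE_FILE, sheet_name="Case")
mrr = pd.read_excel(SCHEDULE_FILE, sheet_name="MRR")

# Categories that defer to the MRR worksheet carry a value in their MRR number column.
multi_rule = categories[categories["MRR number"].notna()]
print(multi_rule[["No.", "Secondary", "MRR number"]])

# Pull the full rule set for the first such category via its MRR number.
if not multi_rule.empty:
    first = multi_rule.iloc[0]["MRR number"]
    print(mrr[mrr["MRR"] == first])
```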

 

The first worksheet summarizes the primary business functions, as shown in Figure 5 below.

 

Figure 5 – Primary Functions

 

In this worksheet, the Code column heading is a short acronym for each business function. Function is the name of the function. Number is the sequential number assigned to each primary business function. Description is a detailed description of the function. Each row of this worksheet represents a different group of business units within the organization, often called a department or section, and each row corresponds to a worksheet of the same name.

 

Categories

 

Figure 6 shows the worksheet for one of the business functions, in this case the Clerk’s Office.

 

Figure 6 – Business Unit Categories

 

Each row of the worksheet corresponds to a single category. White rows are administrative categories, usually with simple time-based retention rules, and green rows indicate case categories, which are subdivided into cases. Since this report cannot cover every column heading exhaustively, we will highlight only the key headings in Figure 6. The main headings are:

 

Secondaire (secundary) Titre abrégé de la catégorie.

 

No. Numéro séquentiel unique de la catégorie.

 

Description Description détaillée de la catégorie.

 

Numéro MRR (MRR number) Indique qu’il existe plusieurs règles de conservation pour cette catégorie. Les règles apparaissent dans la feuille de calcul MRR. Chaque lot de règles propres à cette catégorie est numéroté de façon unique.

 

BR Conservation de l’entreprise (business retention). Conservation exigée par l’entreprise, et non pas la période de conservation prévue par la loi.

 

Déclencheur (trigger) Il s’agit soit du champ de métadonnées du document, soit du champ de métadonnées du cas utilisé pour déclencher la période de conservation.

 

Type Un des cinq types de conservation (expliqué plus loin dans ce rapport).

 

Unité (unit) Mesure de l’unité de temps, généralement les années.

 

Mesure d’élimination (disp. action) Mesure d’élimination. Qu’adviendra-t-il des documents à la fin de leur cycle de vie? Habituellement, ils seront supprimés, conservés de façon permanente, examinés ou transférés.

 

Cas

 

La figure 7 ci-dessous montre la feuille de calcul utilisée pour définir les détails (structure) de tous les cas.

 

Figure 7 – Structure des cas

 

L’objectif de cette feuille de calcul est de préciser la convention d’appellation pour chaque cas de chaque catégorie désignée comme une catégorie de cas. Chaque cas appartenant à une catégorie doit porter un nom différent de tous les autres cas de la même catégorie. Certains systèmes de GCE sont fortement limités en ce qui concerne la longueur du nom des contenants. Un contenant est ce que le système de GCE utilise pour regrouper les documents liés entre eux. Dans certains systèmes de GCE, il s’agit d’un dossier, d’une armoire, d’un ensemble de documents, etc. Nous le désignerons par le terme générique de « contenant ». Nous définirons une convention d’appellation des cas en trois parties. Chaque partie indiquera un nom, si cette partie est obligatoire ou facultative (O/F), ainsi que le nombre maximal de caractères permis pour cette partie du nom. Les titres de colonne sont les suivants :

 

Nom de la catégorie (category name) Nom (titre) de la catégorie.

 

Exemples de cas (case examples) Exemples fictifs de la façon dont le nom apparaîtrait pour chaque cas.

 

PRI Principale fonction opérationnelle (primary business function) dont relève la catégorie.

 

Nom (name) Nom de la partie. L’administrateur du système attribue au contenant un nom approprié qui correspond au cas particulier, mais cette colonne indique en quoi consiste le nom.

 

O/F (M/O) Soit obligatoire (O), soit facultatif (F).

 

MAX Nombre maximal autorisé de caractères.
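Pour illustrer le propos, voici une esquisse Python minimale de la validation d’une convention d’appellation en trois parties (nom, O/F, MAX); la convention, les noms des parties et les valeurs montrées sont des hypothèses de travail, à adapter à la feuille « Cas » réelle.

```python
# Esquisse hypothétique : validation d'un nom de contenant de cas en trois
# parties, chacune obligatoire (O) ou facultative (F) et limitée à un nombre
# maximal de caractères. La convention ci-dessous est fictive.
PARTIES = [
    # (nom de la partie, obligatoire?, nombre maximal de caractères)
    ("numéro de contrat", True, 10),
    ("nom du fournisseur", True, 30),
    ("description courte", False, 20),
]

def construire_nom_de_cas(valeurs):
    """Assemble le nom du contenant ou lève une erreur si la convention
    n'est pas respectée."""
    morceaux = []
    for (nom, obligatoire, maximum), valeur in zip(PARTIES, valeurs):
        if not valeur:
            if obligatoire:
                raise ValueError(f"La partie « {nom} » est obligatoire.")
            continue
        if len(valeur) > maximum:
            raise ValueError(f"La partie « {nom} » dépasse {maximum} caractères.")
        morceaux.append(valeur)
    return " - ".join(morceaux)

print(construire_nom_de_cas(["C-2022-041", "Aménagement Tremblay", ""]))
```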

 

Conservations à règles multiples

 

Cette feuille de calcul contient une ligne pour chaque composant d’une règle de conservation dans chaque catégorie qui spécifie plus d’une règle de conservation. Ces règles peuvent être directement lues par machine dans la plupart des logiciels modernes de tenue de documents. Cette feuille de calcul peut également être facilement manipulée de sorte que les titres et l’ordre des colonnes apparaissent dans l’ordre particulier requis par le logiciel de tenue de documents. Les titres de colonne sont les suivants :

 

MRR Numéro séquentiel unique qui identifie le lot de règles propres à une catégorie donnée. Chaque ligne portera le même numéro pour toutes les composantes de règles d’une catégorie donnée.

 

PRI Principale fonction opérationnelle dont relève la catégorie.

 

Champ de document Champ de métadonnées de document qui déclenche la règle de conservation.

 

Valeur, document Valeur du champ de métadonnées du document qui est nécessaire pour déclencher la règle.

 

Champ de cas Champ de métadonnées de cas qui déclenche la règle de conservation.

 

Valeur, cas Valeur du champ de métadonnées de cas nécessaire pour déclencher la règle.

 

Nom Nom du déclencheur externe qui active la règle de conservation. Habituellement à partir d’une source externe comme une base de données d’entreprise.

 

Valeur Valeur du déclencheur externe nécessaire pour déclencher la règle.

 

REL Relié. Un opérateur booléen qui relie cette composante de règle à la composante de règle suivante. Par exemple : ET, OU, SAUF (AND, OR, NOT).

 

Type Type de règle de conservation. Les types de règles de conservation sont énumérés plus loin dans ce rapport.

 

Période Période de conservation.

 

Unité Unité de temps, habituellement en années.

 

Élimination Mesure effectuée à la fin du cycle de vie, p. ex. transfert, permanent, etc.

 

Caractéristiques du calendrier de conservation

 

Nous examinerons ici les cinq caractéristiques structurelles de base d’un calendrier de conservation adapté aux logiciels. Ces caractéristiques sont les suivantes :

 

Règles de conservation multiples Capacité d’avoir plusieurs règles de conservation, et plusieurs types de règles de conservation, pour une catégorie donnée du calendrier.

 

Conservation fondée sur la valeur Capacité de fonder les périodes de conservation sur la valeur de certains documents précis au sein de la catégorie.

 

Documents publiés Méthode de traitement des documents ayant une période de conservation indéterminée.

 

Dérogation aux règles de conservation (Retention Override, ou ROR) Capacité d’un utilisateur final d’outrepasser une règle de conservation assignée.

 

Modification continue Moyen de traiter les documents qui sont constamment révisés et mis à jour.

 

Règles de conservation multiples

 

Dans l’ensemble, les calendriers de conservation traditionnels ne permettent qu’un seul traitement de conservation pour chaque catégorie. Ce traitement de conservation, ou règle, peut être fondé sur le temps, comme dans « supprimer après cinq ans », ou sur le cas, comme dans « supprimer deux ans après la fin de l’enquête ». Selon la première règle, chaque document peut être détruit lorsqu’il atteint l’âge de cinq ans. L’élimination est effectuée document par document. Dans la deuxième règle, tous les documents d’un cas donné sont admissibles à l’élimination deux ans après la fin du cas, c’est-à-dire lorsque l’enquête est terminée. Dans ces deux cas, une seule règle de conservation s’applique à tous les documents de cette catégorie particulière.

 

Un logiciel moderne de tenue de documents électroniques nous permet toutefois d’appliquer non seulement plusieurs règles de conservation pour une catégorie donnée, mais aussi différents types de règles au sein d’une même catégorie. Chaque type de règle de conservation renvoie à une approche différente utilisée pour calculer l’admissibilité à l’élimination. À l’intérieur du logiciel, le type de conservation fait appel à un algorithme différent qui détermine comment la conservation est calculée. D’autres logiciels offrent une sélection différente de types de conservation. Certains offrent plus de types de conservation que d’autres. De plus, un type de conservation donné dans un produit peut avoir une fonction similaire à celle d’un autre produit, mais son nom sera différent. Le tableau suivant montre les cinq types de conservation les plus courants que l’on retrouve dans la plupart des logiciels :

 

Type Utilisation
T Fondé sur le temps (fondé sur l’âge du document)
D Fondé sur le document (fondé sur la propriété « champ de métadonnées » d’un document)
E Fondé sur les événements (pour les documents de cas ou les événements définis externes)
R Basé sur les relations (pour Remplacement)
O Modification (écraser). Document auquel on apporte continuellement des ajouts, écrasant ainsi les modifications antérieures, p. ex. une liste de suivi ou une base de données. Ne doit jamais être immuable; ne sera jamais supprimé.
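À titre indicatif, l’esquisse Python ci-dessous illustre comment les types T, D, E et O pourraient se traduire en calculs d’admissibilité à l’élimination; les noms de champs sont fictifs et le type R (remplacement) n’est pas illustré ici.

```python
# Esquisse hypothétique : calcul de la date d'admissibilité à l'élimination
# selon le type de conservation. Les noms de champs sont fictifs.
from datetime import date

def ajouter_annees(d, annees):
    """Ajoute un nombre d'années à une date (le 29 février est ramené au 28)."""
    try:
        return d.replace(year=d.year + annees)
    except ValueError:
        return d.replace(year=d.year + annees, day=28)

def date_admissibilite(document, regle):
    """Retourne la date d'admissibilité à l'élimination, ou None pour le
    type O (document en modification continue, jamais supprimé)."""
    t = regle["type"]
    if t == "O":
        return None
    if t == "T":                      # fondé sur l'âge du document
        depart = document["date_du_document"]
    elif t == "D":                    # fondé sur un champ de date du document
        depart = document[regle["champ_declencheur"]]
    elif t == "E":                    # fondé sur une date d'événement externe
        depart = document["date_evenement"]
    else:
        raise ValueError(f"Type de conservation non pris en charge : {t}")
    return ajouter_annees(depart, regle["periode"])

doc = {"date_du_document": date(2020, 3, 15),
       "date_expiration": date(2023, 6, 30),
       "date_evenement": date(2024, 1, 10)}
print(date_admissibilite(doc, {"type": "D", "periode": 8,
                               "champ_declencheur": "date_expiration"}))
```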

 

Il existe de nombreuses situations réelles qui exigent de multiples règles de conservation dans une catégorie donnée. Voici quelques exemples courants :

 

  1. Les copies signées d’une entente doivent être conservées beaucoup plus longtemps que les ébauches et les documents justificatifs ou accessoires liés à l’entente.
  2. La loi précise qu’une période de conservation différente s’applique si un document se rapporte à une personne en deçà d’un certain âge.
  3. Dans le cadre de projets d’ingénierie, chaque type de document du projet a une durée de vie et une valeur différente aux fins de conservation.
  4. Les documents approuvés doivent être conservés plus longtemps que ceux qui n’ont pas été approuvés.
  5. Les procès-verbaux et les ordres du jour des réunions officielles sont habituellement conservés de façon permanente, tandis que les autres documents liés à ces réunions peuvent être éliminés.
  6. La période de conservation de certains documents peut varier selon le résultat du processus opérationnel. Par exemple, les documents relatifs à l’acquisition d’une entreprise précisent que certains documents relatifs à la diligence raisonnable doivent être détruits immédiatement si l’acquisition échoue, mais si l’acquisition est réalisée avec succès, ils doivent être conservés pendant un nombre déterminé d’années.
  7. Politique. Les documents relatifs à la politique peuvent être éliminés après quelques années, alors que la politique officielle « publiée » reste en vigueur indéfiniment jusqu’à ce qu’elle soit remplacée.

 

Dans tout calendrier de conservation moderne adapté aux logiciels, il arrive fréquemment que jusqu’à 80 % de toutes les catégories du calendrier comportent des règles de conservation multiples. Examinons un exemple réel d’une catégorie du calendrier de conservation qui exige plusieurs règles de conservation. Sous la rubrique « Ressources humaines », nous trouvons une activité (catégorie) appelée « Titres de compétence, employé et apprenti ». Cette activité est utilisée pour stocker tous les documents liés aux titres de compétence dont ont besoin les employés et les apprentis, par exemple pour la conduite de véhicules munis de freins à air, la manipulation de matières dangereuses, la lutte contre les incendies ou les services médicaux d’urgence. Il existe trois règles de conservation pour ces titres de compétence, fondées sur les diverses lois applicables suivantes :

 

  1. Si matières dangereuses = oui, conservation = date d’expiration du titre de compétence + 50 ans, puis détruire
  2. Si unité fonctionnelle = lutte contre les incendies (fire) ou services médicaux d’urgence (EMS), conservation = date d’expiration du titre de compétence + 8 ans, puis éliminer
  3. Si matières dangereuses = non ET unité fonctionnelle ≠ lutte contre les incendies ou services médicaux d’urgence, conservation = 5 ans, puis détruire

 

Examinons ce que ces trois règles signifient vraiment. La première règle stipule que si le titre de compétence a trait à des matières dangereuses, les documents qui le concernent doivent être conservés pendant 50 ans, puis détruits. La deuxième règle stipule que si le document appartient à l’unité fonctionnelle lutte contre les incendies ou à l’unité fonctionnelle services médicaux d’urgence, les documents qui y sont liés doivent être conservés pendant huit ans après l’expiration du titre de compétence, puis détruits, et ce, quel que soit le type de titre de compétence. La troisième règle semble assez compliquée et, techniquement, elle l’est quelque peu, mais sa signification est fondamentalement simple. La troisième règle énonce simplement que tous les autres titres de compétence doivent être conservés pendant cinq ans, puis détruits. Cette règle s’appliquerait à tous les titres de compétence qui ne sont pas liés aux matières dangereuses et qui ne font pas partie des unités fonctionnelles lutte contre les incendies ou services médicaux d’urgence.

 

Le logiciel de tenue de documents doit disposer d’un moyen de savoir quelle règle s’applique aux documents de cette catégorie. Il s’appuiera sur les métadonnées pour apprendre ce qu’il doit savoir. Nous avons besoin d’un champ de métadonnées de document appelé « matières dangereuses ». La valeur par défaut sera NON. Toutefois, si l’utilisateur inscrit OUI dans ce champ, cela déclenche la règle 1 pour ce document. Nous avons besoin d’un deuxième champ de métadonnées appelé « unité fonctionnelle ». Si ce champ contient soit « lutte contre les incendies », soit « services médicaux d’urgence », la règle 2 s’appliquera à ce document. La règle 3 s’appliquera à tous les documents restants de cette catégorie.
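Voici, sous forme d’esquisse Python volontairement simplifiée, la logique de sélection de la règle applicable à partir de ces deux champs de métadonnées; les noms et les valeurs utilisés sont hypothétiques.

```python
# Esquisse hypothétique des trois règles de la catégorie « Titres de
# compétence, employé et apprenti », dérivées des champs « matières
# dangereuses » et « unité fonctionnelle ».
def regle_applicable(matieres_dangereuses, unite_fonctionnelle):
    """Retourne (numéro de la règle, période de conservation en années)."""
    if matieres_dangereuses == "OUI":
        return 1, 50
    if unite_fonctionnelle in ("lutte contre les incendies",
                               "services médicaux d'urgence"):
        return 2, 8
    return 3, 5

print(regle_applicable("NON", "travaux publics"))   # -> (3, 5)
print(regle_applicable("OUI", "travaux publics"))   # -> (1, 50)
```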

 

Il s’agit d’un excellent exemple de la façon dont le calendrier de conservation oriente la structure de GCE. Le calendrier de conservation précise les trois variantes des traitements de conservation nécessaires pour cette catégorie. Il précise explicitement les champs de métadonnées nécessaires dans la structure de GCE. Tant que ces champs de métadonnées existent et que les utilisateurs les utilisent, les règles de conservation seront appliquées correctement. Évidemment, ces trois champs doivent être obligatoires, car les règles de conservation de la RBR dépendent des valeurs de ces champs pour fonctionner.

 

La figure 8 montre comment ces trois règles de conservation sont exprimées dans le calendrier de conservation. Le calendrier de conservation est une feuille de calcul composée de plusieurs colonnes de gauche à droite.

 

Figure 8 – Règles de conservation multiples dans une seule catégorie

 

La colonne « secondaire » (secondary) indique le titre de la catégorie. La colonne « numéro MRR » (MRR number) indique que cette catégorie comporte plusieurs règles de conservation. Le numéro MRR 100.1 renvoie aux détails des règles. Par ailleurs, la colonne BR, ou conservation de l’entreprise, indique 5 (ans). Il s’agit de la règle par défaut de cinq ans suivie de la destruction, comme l’exige la règle 3. Toutefois, le titre de la colonne MRR indique la règle numéro 100.1, qui renvoie à l’ensemble complet des règles pour cette catégorie. Examinons les détails des règles de conservation pour cette catégorie. Voir la figure 9 ci-dessous.

 

Figure 9 – Détails des règles

 

Cette feuille de calcul peut contenir des centaines, voire des milliers de règles. Toutefois, dans cette catégorie, il y a exactement huit rangées qui forment les trois règles de conservation uniques pour cette catégorie, soit les rangées 79 à 86 inclusivement. Chaque logiciel de tenue de documents électroniques possède des capacités et des limites différentes en matière de règles de conservation multiples. De plus, chaque produit a une approche et une nomenclature légèrement différentes quant à la façon dont les règles sont exprimées et documentées. L’exemple que nous voyons à la figure 9 est une expression neutre des trois règles qui devraient s’appliquer à la plupart des logiciels modernes de tenue de documents. Il faudrait probablement les modifier pour les adapter à un logiciel particulier.

 

À la ligne 79, nous définissons la première règle. La règle est déclenchée par le champ de document « matières dangereuses » (hazardous materials) et la valeur doit être « oui » (yes). Le type de règle de conservation est T (basé sur le temps), la période de conservation est de 50 ans et la mesure d’élimination est « supprimer » (delete). La règle 2 est un peu plus compliquée. Les lignes 80 et 81 sont consacrées aux situations dans lesquelles l’unité fonctionnelle est « lutte contre les incendies ». Les lignes 82 et 83 sont consacrées aux mêmes situations, mais dans lesquelles l’unité fonctionnelle est « services médicaux d’urgence ». À la ligne 80, sous le titre de colonne REL (Relié), nous entrons l’opérateur booléen ET (AND). Cela signifie simplement que la condition à la ligne 80 et la condition à la ligne 81 doivent toutes deux être satisfaites pour que cette mesure soit exécutée. À la ligne 81, nous précisons qu’il doit y avoir une date dans le champ de métadonnées « date d’expiration du titre de compétence » (credential expiration date). Par conséquent, si l’unité fonctionnelle est « lutte contre les incendies » et qu’il y a une date d’expiration, le document sera conservé pendant huit ans après la date indiquée dans le champ « date d’expiration du titre de compétence ». Veuillez noter que le type de conservation est D, ce qui indique au logiciel de déclencher la période de conservation à partir de la date indiquée dans le champ de date intitulé « date d’expiration du titre de compétence ». Les rangées 82 et 83 remplissent la même fonction, mais pour l’unité fonctionnelle appelée « services médicaux d’urgence » (EMS). Les rangées 80 à 83 sont toutes nécessaires pour la règle de conservation 2.

 

Les rangées 84 à 86 constituent la règle de conservation 3. La ligne 84 précise que le champ « unité fonctionnelle » (business unit) ne doit pas contenir l’expression « lutte contre les incendies » (fire). À la ligne 85, nous précisons que le champ « unité fonctionnelle » ne doit pas contenir l’expression « services médicaux d’urgence ». À la ligne 86, nous précisons que le champ « matières dangereuses » (hazardous materials) doit contenir la valeur « NON » (NO). Une fois ces trois critères satisfaits, le document sera conservé pendant cinq ans, puis supprimé.
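L’esquisse Python suivante montre, de façon hypothétique, comment les composantes des rangées 84 à 86, reliées par l’opérateur ET, pourraient être représentées et évaluées; la représentation des rangées et les noms de champs sont fictifs, et seuls les opérateurs « égal », « différent » et ET sont pris en charge ici.

```python
# Esquisse hypothétique : composantes de la règle 3 (rangées 84 à 86),
# reliées par l'opérateur ET. Noms de champs et représentation fictifs.
COMPOSANTES_REGLE_3 = [
    # (champ, opérateur, valeur, REL vers la composante suivante)
    ("unité fonctionnelle", "différent", "lutte contre les incendies", "ET"),
    ("unité fonctionnelle", "différent", "services médicaux d'urgence", "ET"),
    ("matières dangereuses", "égal", "NON", None),
]

def composante_satisfaite(metadonnees, champ, operateur, valeur):
    if operateur == "égal":
        return metadonnees.get(champ) == valeur
    if operateur == "différent":
        return metadonnees.get(champ) != valeur
    raise ValueError(f"Opérateur non pris en charge : {operateur}")

def regle_satisfaite(metadonnees, composantes):
    """Toutes les composantes reliées par ET doivent être satisfaites."""
    return all(composante_satisfaite(metadonnees, champ, operateur, valeur)
               for champ, operateur, valeur, _rel in composantes)

doc = {"unité fonctionnelle": "travaux publics", "matières dangereuses": "NON"}
print(regle_satisfaite(doc, COMPOSANTES_REGLE_3))   # -> True
```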

 

Cet exemple était délibérément compliqué, mais il montre comment nous pouvons établir des règles de conservation très sophistiquées et complexes. Les logiciels modernes de tenue de documents électroniques sont plus que capables de traiter ces règles complexes; toutefois, il faut indiquer de manière explicite au logiciel exactement ce qu’il doit faire. Cela nécessitera l’utilisation de métadonnées dans les règles, et il est impératif que le calendrier de conservation précise les métadonnées nécessaires pour appliquer les règles. Ces métadonnées doivent ensuite être intégrées au système de GCE. Ce n’est que lorsque les métadonnées ont été construites que la règle peut fonctionner. Au cours de la vie du système de GCE, il est impératif que ces champs de métadonnées ne soient pas perturbés, renommés, supprimés ou modifiés de quelque façon que ce soit. Si des changements sont apportés à ces métadonnées à n’importe quel moment, ils doivent être communiqués au professionnel de la GDI afin que la règle de conservation puisse être modifiée en conséquence, sinon la règle cessera tout simplement de fonctionner.

 

Conservation fondée sur la valeur

 

Avec les logiciels de tenue de documents d’aujourd’hui, nous pouvons attribuer des périodes de conservation fondées sur la valeur des documents au sein d’une catégorie. Nous pouvons attribuer des périodes de conservation plus longues aux documents de plus grande valeur, et des périodes de conservation plus courtes aux documents de moindre valeur. Pour ce faire, nous nous appuyons encore une fois sur les métadonnées des documents au sein de la structure de GCE. Nous aurons besoin d’un champ de métadonnées pour différencier les documents de grande valeur de ceux de moindre valeur. Il existe de nombreuses façons d’y parvenir en utilisant un seul ou plusieurs champs de métadonnées, en fonction de l’activité en question. Toutefois, pour le moment, nous utiliserons une technique très répandue dans un certain nombre d’organisations. Supposons que nous avons une activité (catégorie) pour « projets d’immobilisations » (capital projects). Il s’agit de grands projets d’ingénierie à forte intensité de capital, comme la construction de routes, de ponts ou de bâtiments. Chaque projet est un cas au sein de la catégorie. Chaque cas conservera tous les documents liés à ce projet particulier jusqu’à la fin de sa durée de vie (soit la date de fin du projet). Il va sans dire qu’il pourrait y avoir des milliers, voire des dizaines de milliers de documents pour chaque projet. Nous pouvons définir un champ de métadonnées qui nous indiquera la nature de chaque document. La nature ou le sujet du document peut nous indiquer sa valeur propre aux fins de l’attribution d’une période de conservation. Un bon exemple serait un champ de métadonnées nommé « Type de document, projets d’immobilisations » (Document Type, Capital Projects). Ce champ serait obligatoire dans la bibliothèque du système de GCE afin que chaque document contienne une valeur dans ce champ. Il y aurait une liste déroulante des types de documents semblable à celle illustrée ci-dessous :

 

Types de documents, projets d’immobilisations
Type de document Déclencheur Délai de conservation (années)
Gestion de projet TDD 5
Contractuel/juridique EOL 5
Planification et logistique EOP 5
Rapports, ébauches TDD 2
Rapports finaux S.O. P
Charte/autorisation S.O. P
Procès-verbal/ordre du jour de la réunion S.O. P
Fiche technique EOL 5
Dessins, ébauche EOP 5
Dessins conformes à l’exécution S.O. P
Réglementation et conformité EOP 5
Permis et licences EOP 10
Lié aux entrepreneurs EOP 2
Lié aux approbations TDD 25
Lié au budget EOP 5
Autre EOP 2

Les déclencheurs de conservation sont les suivants :

TDD Date réelle du document (true document date)

EOL Fin de vie utile de l’actif (end of life)

EOP Fin du projet (end of project)

 

Les utilisateurs finaux sont obligés de choisir l’une des 16 valeurs possibles pour ce champ obligatoire. Normalement, les utilisateurs ne verront pas le déclencheur ou le délai de conservation lorsqu’ils sélectionnent le type de document. Ils le pourraient, mais la plupart des utilisateurs ne s’intéressent simplement pas aux périodes de conservation. Voici quelques exemples de la façon dont la règle de conservation a été dérivée de la sélection du type de document :

 

Fiche technique (spécifications techniques) Ces documents seront conservés cinq ans après la fin de la durée de vie utile de l’actif en construction. S’il s’agit d’un pont, les spécifications techniques doivent être conservées à portée de main pendant toute la durée de vie utile.

 

Gestion de projet Ces documents comprennent des éléments tels que les calendriers, les graphiques de Gantt et d’autres documents liés à la gestion du projet. La valeur diminue rapidement après leur utilisation, de sorte que la période de conservation est la date du document (date réelle du document) +5 ans, puis le document est détruit.

 

Lié au budget Les documents liés au budget doivent être conservés pendant cinq ans après la fin du projet. Il n’est pas nécessaire que ces documents soient conservés pendant toute la durée de vie utile de l’actif en construction.

 

Tous les types de documents n’ont pas nécessairement besoin d’un traitement de conservation différent des autres types de documents. Veuillez noter que les deux types de documents « lié à l’entrepreneur » et « rapports, ébauches » ont chacun le même traitement de conservation. Dans de nombreux SGEDD modernes, le type de document est utilisé pour aider les utilisateurs finaux à chercher et à extraire des documents selon leur type. Cela est particulièrement utile lorsque le volume de documents est élevé, c.-à-d. des milliers ou même des dizaines de milliers de documents. Le champ du type de document facilite la recherche du document auquel on s’intéresse. Nous pouvons en profiter pour attribuer des périodes de conservation appropriées à chaque type de document.
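L’esquisse Python ci-dessous illustre, à titre hypothétique, la correspondance « type de document → (déclencheur, période) » décrite ci-dessus; seules quelques lignes de la liste sont reprises, et la valeur par défaut correspond au type « Autre ».

```python
# Esquisse hypothétique : correspondance « type de document -> (déclencheur,
# période) » pour quelques types seulement; "P" signifie permanent.
REGLES_PAR_TYPE = {
    "Gestion de projet":   ("TDD", 5),
    "Rapports, ébauches":  ("TDD", 2),
    "Rapports finaux":     (None, "P"),
    "Fiche technique":     ("EOL", 5),
    "Permis et licences":  ("EOP", 10),
    "Lié au budget":       ("EOP", 5),
}

def traitement(type_de_document):
    """Retourne une description du traitement de conservation du type donné."""
    declencheur, periode = REGLES_PAR_TYPE.get(type_de_document, ("EOP", 2))  # défaut : « Autre »
    if periode == "P":
        return "conservation permanente"
    return f"{periode} ans après {declencheur}"

print(traitement("Fiche technique"))   # -> 5 ans après EOL
print(traitement("Rapports finaux"))   # -> conservation permanente
```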

 

La figure 10 montre comment ces règles de conservation seront saisies dans le calendrier de conservation lui-même, dans la feuille de calcul MRR.

 

Figure 10 – Règles de conservation selon le type de document

 

Veuillez noter qu’il existe deux types de conservation différents parmi les 16 règles de conservation. Les règles des rangées 101, 104, 114 et 116 de la feuille de calcul utilisent chacune un type de conservation T (conservation fondée sur le temps). Les autres règles utilisent le type de conservation E (conservation fondée sur les événements), sauf pour les trois règles des rangées 105, 106 et 107, qui exigent une conservation permanente. Le type de conservation E précise que la date de déclenchement est une date d’événement quelconque. À la rangée 108 de la feuille de calcul, la date d’événement correspond à la fin de vie utile (EOL) de l’actif. Toutefois, à la rangée 109, la date de l’événement correspond à la fin du projet.

 

Cette approche de conservation fondée sur la valeur est généralement utile lorsqu’on dispose d’un très grand nombre de documents dans une activité (catégorie) donnée. Cette approche offre deux avantages distincts :

 

  1. Meilleure récupération des documents. Les utilisateurs peuvent chercher des documents en fonction du type.
  2. Meilleure granularité de la conservation. Les documents présentant une faible valeur permanente sont détruits tôt, et les documents de valeur plus élevée et plus permanente sont conservés plus longtemps.

 

Encore une fois, il est important de souligner l’importance des métadonnées dans le SGEDD. Cette technique ne serait pas possible sans des métadonnées bien définies, en l’occurrence le champ « type de document ». Des métadonnées bien définies et soigneusement examinées sont essentielles à la réussite de tout projet de GCE, et elles sont tout aussi importantes pour l’automatisation de la tenue de documents.

 

Documents publiés

 

La période de conservation de certains types de documents est « indéfinie ». Cela signifie habituellement que le document doit être conservé jusqu’à ce qu’il ait été remplacé par une nouvelle version. Le document est conservé pendant une période indéterminée jusqu’à ce qu’il soit remplacé par cette nouvelle version. Voici quelques exemples :

 

Politiques Une politique, comme une politique sur l’utilisation des courriels, est en vigueur jusqu’à ce qu’elle soit remplacée par une nouvelle version.

 

Procédures opérationnelles normalisées Les procédures opérationnelles normalisées sont souvent documentées pour des éléments comme les exercices d’alarme incendie, les entrées dans des espaces clos, les processus d’essai diagnostique, les procédures d’exploitation et d’essai d’une usine, etc. Ces procédures demeurent en vigueur et doivent être suivies jusqu’à ce qu’elles soient remplacées par une nouvelle version.

 

Matériel de formation Du matériel de formation a été élaboré pour un cours de formation particulier. Ce matériel est utilisé pour donner le cours aussi souvent que nécessaire. Éventuellement, ce matériel de formation sera remplacé par une nouvelle version. La période de conservation du matériel de formation original est indéterminée, c’est-à-dire jusqu’à ce qu’il soit remplacé par une version plus récente.

 

Plans De nombreux plans sont en vigueur jusqu’à ce qu’ils soient remplacés par des versions plus récentes, comme les plans opérationnels annuels, les plans d’urgence, les stratégies d’entreprise, etc. Les plans sont parfois remplacés selon un cycle prévu, par exemple tous les ans ou tous les cinq ans. Toutefois, dans bien des cas, un plan est en vigueur jusqu’à ce qu’il soit remplacé par une nouvelle version, et il est impossible de prévoir à quel moment cette nouvelle version entrera à son tour en vigueur.

 

Nous qualifions ces documents de « documents publiés ». Un document publié est simplement un document qui est « en vigueur » jusqu’à ce qu’il soit remplacé. Le document est « en application ». Nous ne devons pas détruire ces documents tant qu’ils sont en vigueur. Une fois qu’ils auront été remplacés, nous pourrons appliquer la conservation. Après la date de leur remplacement, nous pourrons les supprimer. Le terme « publié » est simplement un terme pratique; il n’est pas nécessaire d’utiliser ce mot en particulier. Dans une catégorie donnée où un document publié est en cours d’élaboration, il y aura beaucoup d’autres documents en plus du document publié lui-même. Supposons que le document publié sur lequel on travaille est une politique. On retrouvera de nombreuses ébauches de la politique. Il y aura aussi de nombreux courriels contenant des directives, des instructions et des commentaires concernant l’élaboration de la politique. Il y aura de nombreux documents de référence, entre autres des documents financiers, des séances d’information juridiques et des documents justificatifs ou auxiliaires. Parmi tous les documents de cette catégorie, il n’y a qu’à la politique proprement dite (qui a été mise en application) que nous devons appliquer le processus de remplacement. Nous pouvons appliquer une règle de conservation différente aux documents restants. Les documents restants ne seront pas conservés indéfiniment. Ils peuvent être éliminés à une période fixe ou un certain temps après l’entrée en vigueur du document publié. Quoi qu’il en soit, nous devons trouver une façon de distinguer les documents publiés de ceux qui ne le sont pas. Pour ce faire, nous utilisons un champ de métadonnées appelé PUBLIÉ (O/N).

 

Pour traiter le remplacement dans un SGEDD moderne, nous utilisons une combinaison des quatre champs de métadonnées suivants :

 

Version Version du document en question. Les versions peuvent prendre plusieurs formes, comme un numéro séquentiel, une date ou même une saison (été, automne, etc.).

 

Date de remplacement Date à laquelle un document a été remplacé par une nouvelle version.

 

Date d’entrée en vigueur Date à laquelle une nouvelle version d’un document remplacé est entrée en vigueur.

 

Publié Document dont la période de conservation est indéterminée (jusqu’à ce qu’il soit remplacé). Cette caractéristique permet de distinguer un document appartenant à la catégorie des documents qui nécessitent un remplacement de ceux qui ne requièrent pas ce traitement de conservation (ébauches, commentaires, documents justificatifs et accessoires).

 

Le processus de remplacement est illustré à la figure 11.

 

Figure 11 – Processus de remplacement

 

La version 1 a été publiée ou est entrée en vigueur le 10 janvier 2018. Le 11 juin 2018, cependant, la version 2 a été approuvée et est entrée en vigueur. Par conséquent, la date remplacée de la version 1 est devenue le 11 juin 2018, et la date d’entrée en vigueur de la version 2 était également le 11 juin 2018. Le 10 décembre 2018, la version 2 a été remplacée par la version 3, qui est entrée en vigueur le 10 décembre 2018; la version 3 n’a pas de date de remplacement, car elle n’a pas encore été remplacée par une nouvelle version. Chacun de ces trois documents comporterait la valeur OUI (YES) dans le champ de métadonnées PUBLIÉ (PUBLISHED). Tous les autres documents justificatifs et accessoires liés au document publié comporteraient la valeur NON (NO) dans le champ de métadonnées PUBLIÉ (PUBLISHED).

 

Supposons que nous ayons une catégorie appelée « Politiques, entreprise » qui présente les règles de conservation suivantes :

 

Si publié = oui, conservation = date remplacée + 5 ans

 

Si publié = non, conservation = 2 ans

 

La figure 12 montre comment nous saisirions ces deux règles dans le calendrier de conservation de la feuille de calcul MRR.

 

Figure 12 – Détails des règles de conservation

 

À la ligne 147 de la feuille de calcul, nous indiquons que la valeur OUI doit être dans le champ « publié ». Nous entrons l’opérateur booléen ET (AND) dans la colonne REL (Relié) pour indiquer qu’une deuxième condition doit être remplie. À la ligne 148 de la feuille de calcul, nous spécifions qu’il doit y avoir une date dans le champ « date remplacée ». Le document sera détruit cinq ans après la date inscrite dans le champ « date remplacée ». Veuillez noter que le type de conservation est D (déclencheur de conservation dans un champ de métadonnées « date » du document). À la ligne 149 de la feuille de calcul, nous traitons tous les documents restants, c.-à-d. ceux qui ne sont pas publiés. Ici, nous conservons simplement ces documents pendant deux ans, puis nous les détruisons. Le type de conservation T indique au logiciel de détruire les documents deux ans après la date réelle du document.
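À titre d’illustration, voici une esquisse Python hypothétique de l’application de ces deux règles à partir des champs « publié » et « date remplacée »; les noms de champs et les dates d’exemple sont fictifs.

```python
# Esquisse hypothétique : règles de la catégorie « Politiques, entreprise »
# appliquées à partir des champs « publié » et « date remplacée ».
from datetime import date

def ajouter_annees(d, annees):
    try:
        return d.replace(year=d.year + annees)
    except ValueError:                    # 29 février -> 28 février
        return d.replace(year=d.year + annees, day=28)

def date_admissibilite(publie, date_remplacee, date_du_document):
    if publie == "OUI":
        if date_remplacee is None:
            return None                   # toujours en vigueur : aucune élimination
        return ajouter_annees(date_remplacee, 5)
    return ajouter_annees(date_du_document, 2)

# Version 1 de la politique, remplacée le 11 juin 2018.
print(date_admissibilite("OUI", date(2018, 6, 11), date(2018, 1, 10)))
# Ébauche non publiée datée du 3 mars 2018.
print(date_admissibilite("NON", None, date(2018, 3, 3)))
```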

 

Dérogation aux règles de conservation

 

De temps à autre, dans certaines catégories, le responsable opérationnel demandera de déroger au calendrier de conservation et de conserver un document pendant une période plus longue. Nous désignons cette prolongation comme une « dérogation » au calendrier de conservation. Les raisons pour lesquelles un utilisateur opérationnel souhaite une telle dérogation varient considérablement; en voici quelques exemples courants.

 

  1. Valeur de référence. Un document particulier peut avoir une valeur inhabituellement longue (persistante) pour référence future. Il peut s’agir d’un rare précédent jurisprudentiel. Il peut s’agir d’une spécification technique ou d’une photographie d’une pièce d’équipement extrêmement rare qui est désuète depuis longtemps, mais toujours en service, et le document peut devoir être conservé tant que l’équipement sera encore en service.
  2. Valeur de protection. Un document peut consigner quelque chose qui pourrait être utilisé à l’avenir pour protéger l’organisation contre des poursuites judiciaires ou servir à la défendre en cas de contestation judiciaire ou réglementaire à l’avenir. Ce document peut servir de preuve que le responsable de l’entreprise estime devoir être conservé bien au-delà de la période normale de conservation, « juste au cas où ».
  3. Valeur juridique. Certaines lois obligent une organisation à conserver les documents pertinents s’il existe un « risque raisonnablement prévisible » de poursuites judiciaires. Vous pourriez penser que ce ou ces documents pourraient être importants en cas de poursuite judiciaire future contre votre organisation.
  4. Valeur historique. Une catégorie donnée ne contient habituellement aucun document ayant une valeur historique. Mais pour quelque raison que ce soit, il arrive de temps à autre qu’un document soit considéré comme ayant une importance historique, même si cela n’était pas prévu. Par exemple, une photographie de la cérémonie d’inauguration des travaux d’une nouvelle installation peut être incluse dans les documents du projet de construction; toutefois, la photo peut être déclarée historique. Par conséquent, vous pourriez souhaiter conserver cette photo particulière en permanence.

 

Pour permettre à un utilisateur final de modifier une période de conservation, vous avez besoin d’un mécanisme lui permettant de désigner un document qui possède une valeur de conservation plus élevée. Il s’agirait d’un autre champ de métadonnées. Le champ utilisé pour cette dérogation serait communément appelé CRITIQUE (O/N) ou quelque chose de semblable. Le nom du champ n’a pas d’importance; vous pouvez lui donner le nom que vous souhaitez, pourvu que l’utilisateur comprenne son utilité. Nous définissons ensuite deux règles de conservation distinctes pour cette catégorie : l’une où CRITIQUE = NON et l’autre où CRITIQUE = OUI, comme indiqué ci-dessous :

 

Si critique = oui, conservation = 25 ans

 

Si critique = non, conservation = 5 ans

 

La figure 13 montre comment nous inscririons ces règles dans le calendrier de conservation.

 

Figure 13 – Dérogation aux règles de conservation

 

Dans la rangée 162 de la feuille de calcul, nous avons une règle de conservation simple fondée sur le temps où la valeur du champ « critique » = OUI. Les documents qui satisfont à cette règle seront détruits 25 ans après la date réelle du document. Dans la rangée 163 de la feuille de calcul, nous spécifions une règle de conservation de cinq ans où « critique » = NON.

 

Voici quelques éléments importants à prendre en considération lors de la mise en œuvre des dérogations aux règles de conservation :

 

  1. Chaque catégorie peut avoir une période de conservation différente pour la dérogation. Par exemple, une catégorie « audits financiers » peut présenter une dérogation de 25 ans, tandis qu’une catégorie « collections muséales » peut prévoir une dérogation donnant lieu à une conservation permanente.
  2. Cette capacité peut faire l’objet d’abus. Certains utilisateurs peuvent avoir tendance à trop l’utiliser, et ce, sur trop de documents. Le seul moyen d’éviter cette situation est d’éduquer vos utilisateurs et de surveiller l’utilisation de la dérogation. Nous recommandons de produire régulièrement, par exemple mensuellement, un rapport sur l’ensemble du SGEDD afin de déterminer à quelle fréquence la dérogation a été appliquée, dans quelles catégories et par quels utilisateurs; une esquisse d’un tel rapport est présentée à la suite de la présente liste. Surveillez-la fréquemment pour vous assurer qu’elle est utilisée de façon raisonnable, et non pas de façon abusive.
  3. Cette dérogation peut être combinée à d’autres règles de conservation dans une catégorie donnée. L’exemple ci-dessous montre comment la dérogation peut être appliquée à une catégorie comportant des règles de remplacement :

 

  1. Si publié = oui, conservation = date remplacée + 5 ans
  2. Si publié = non, conservation = 2 ans
  3. Si critique = oui, conservation = 25 ans
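Voici, à titre purement hypothétique, l’esquisse Python d’un tel rapport de surveillance des dérogations : on y compte, par catégorie et par utilisateur, les documents où CRITIQUE = OUI. La liste de documents est fictive; dans la pratique, elle proviendrait d’une extraction du SGEDD.

```python
# Esquisse hypothétique d'un rapport mensuel de surveillance des dérogations.
from collections import Counter

# Dans la pratique, cette liste proviendrait d'une extraction du SGEDD.
documents = [
    {"catégorie": "Audits financiers", "utilisateur": "mlavoie", "critique": "OUI"},
    {"catégorie": "Audits financiers", "utilisateur": "mlavoie", "critique": "NON"},
    {"catégorie": "Projets d'immobilisations", "utilisateur": "jtrembl", "critique": "OUI"},
    {"catégorie": "Audits financiers", "utilisateur": "jtrembl", "critique": "OUI"},
]

derogations = Counter(
    (doc["catégorie"], doc["utilisateur"])
    for doc in documents if doc["critique"] == "OUI"
)

for (categorie, utilisateur), nombre in derogations.most_common():
    print(f"{categorie} / {utilisateur} : {nombre} dérogation(s)")
```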

 

Modification continue

 

Contrairement aux documents physiques traditionnels, les documents électroniques peuvent être modifiés de façon continue au fil du temps. Il existe trois méthodes distinctes pour modifier un document numérique :

 

Sauvegarde sous un nom de fichier différent Chaque fois que l’on modifie le document, on le sauvegarde sous un nom de fichier différent. Cela crée un document distinct chaque fois que l’on modifie le document. Chaque document est différent et porte un nom de fichier propre. Techniquement et légalement, chaque modification constitue un document différent. Chacun de ces documents peut être déclaré et géré indépendamment des autres.

 

Sauvegarde sous le même nom de fichier On modifie le document et le sauvegarde sans changer son nom. Cette opération remplace la version précédente du document par une nouvelle version qui contient les modifications. Il n’y a pas de suivi des modifications. Aucune version n’indique à quelle fréquence le document a été modifié ni quelles sont les différences entre les versions. Sur les plans juridique et technique, il s’agit d’un seul et même document dont le contenu a changé au fil du temps. C’est ce que nous appelons un document en « modification continue ». Il est continuellement modifié. La fréquence à laquelle il est modifié n’a aucune importance; il ne faut donc pas être rebuté par le qualificatif « continu ». Les modifications sont effectuées en continu dans la mesure où les données sont constamment écrasées pendant toute la durée de vie du document.

 

Sauvegarder et incrémenter la version Dans tout système de GCE moderne, il existe une option pour activer la gestion des versions. Chaque fois qu’on sauvegarde le document, le système incrémente automatiquement le numéro de version par un. Lorsqu’on sauvegarde un document pour la première fois, il se voit automatiquement attribuer le numéro de version « 1 ». Lors de la prochaine sauvegarde, il se verra attribuer le numéro de version « 2 », et ainsi de suite. Cela permet de revenir en arrière et de voir toutes les modifications apportées au document. Sur les plans juridique et technique, chaque version constitue un document qui peut être géré indépendamment des autres versions. Selon certains, la série de versions constitue un seul document. Quoi qu’il en soit, dans la tenue de documents moderne, les versions et les modifications apportées à ces versions devraient être conservées conformément au principe de conservation des documents.

 

Ici, nous ne tenons compte que de la deuxième des trois méthodes susmentionnées, c’est-à-dire la sauvegarde sous le même nom de fichier. Nous appelons cette opération la « modification continue ». Voici quelques exemples :

 

  1. Registre de suivi. Feuille de calcul utilisée pour faire le suivi des présences des étudiants, des appels téléphoniques, des changements apportés aux projets, etc. La feuille de calcul est mise à jour périodiquement (chaque jour, chaque semaine, chaque mois) ou au besoin. Chaque fois que la feuille de calcul est mise à jour, elle est sauvegardée sans que son nom soit changé.
  2. Bases de données. Il est possible d’utiliser une base de données pour faire le suivi des actifs, des congés des employés ou d’autres informations. Ces bases de données peuvent inclure Microsoft Access, Oracle ou même un document Microsoft Notepad. La base de données est mise à jour périodiquement et toutes les données sont stockées dans une « base de données ». Cette base de données peut être constituée d’un seul fichier ou d’un ensemble de fichiers connexes considéré comme un document. Le nom du ou des fichiers de la base de données ne change jamais, et le contenu est constamment écrasé à mesure que de nouvelles données sont ajoutées ou modifiées.
  3. Bloc-notes. Un bloc-notes peut être un document ordinaire, comme un document en format Microsoft Word utilisé pour consigner les notes d’opérateurs, les notes de police ou tout autre document mis à jour périodiquement et continuellement. Microsoft propose une application logicielle novatrice appelée OneNote spécialement conçue pour consigner des notes en continu dans un seul document. En fait, OneNote est une base de données de documents non structurés.

 

Ces documents sont souvent importants. Cependant, étant donné qu’ils sont continuellement modifiés et sauvegardés (c.-à-d. que les données sont continuellement écrasées), nous ne pouvons ni les rendre immuables (les verrouiller et empêcher leur suppression ou leur modification) ni les supprimer. Alors, comment pouvons-nous les gérer au sein d’un calendrier de conservation? N’importe quelle catégorie donnée peut compter un ou plusieurs documents de ce type. Par exemple, une catégorie comme « rendement et suivi des ventes » peut contenir des documents liés aux quotas et aux objectifs de vente d’une équipe. Ces documents peuvent comprendre un registre de suivi, c’est-à-dire une feuille de calcul qui enregistre et suit les données agrégées de toute l’équipe des ventes au fil du temps. Comme ce journal de suivi est continuellement modifié, nous ne pouvons ni le rendre immuable ni le supprimer. Essentiellement, nous devons l’ignorer et ne pas y toucher. Dans cet exemple, nous définissons deux règles de conservation comme suit :

 

Si modification continue = oui, conservation = ignorer

 

Si modification continue = non, conservation = 5 ans

 

Dans le système de GCE, chaque document de cette catégorie doit posséder un champ de métadonnées obligatoire nommé « modification continue » (continuous overwrite). La valeur par défaut serait NON. Pour chaque registre de suivi stocké dans cette bibliothèque de GCE (catégorie), l’utilisateur doit indiquer « modification continue » = OUI. Lorsque « modification continue » = OUI, le logiciel ne tient pas compte du document, ne le verrouille pas et n’y applique aucune suppression. Pour tous les documents où « modification continue » = NON, le document sera conservé pendant cinq ans, puis détruit.
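L’esquisse Python ci-dessous illustre, de façon hypothétique, ce double traitement : les documents où « modification continue » = OUI sont ignorés (type O), les autres sont supprimés cinq ans après leur date réelle (type T); les noms de champs et les exemples sont fictifs.

```python
# Esquisse hypothétique : les documents en modification continue (OUI) sont
# ignorés; les autres sont supprimés cinq ans après leur date réelle.
from datetime import date

def planifier_elimination(document):
    if document["modification_continue"] == "OUI":
        return None                       # type O : ni verrouillage ni suppression
    d = document["date_du_document"]
    return d.replace(year=d.year + 5)     # type T : date réelle + 5 ans

registre = {"nom": "suivi_des_ventes.xlsx", "modification_continue": "OUI",
            "date_du_document": date(2021, 1, 4)}
rapport = {"nom": "rapport_T1.docx", "modification_continue": "NON",
           "date_du_document": date(2021, 4, 2)}
print(planifier_elimination(registre))   # -> None (document ignoré)
print(planifier_elimination(rapport))    # -> 2026-04-02
```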

 

La figure 14 montre comment saisir ces informations dans la feuille de calcul.

 

Figure 14 – Modification continue

 

Dans la rangée 194 de la feuille de calcul, nous spécifions que pour tous les documents dont le champ de métadonnées « modification continue » = OUI, nous utiliserons le type de conservation O (ignorer, aucune suppression). Dans la rangée 195 de la feuille de calcul, nous spécifions que pour tous les documents dont le champ de métadonnées « modification continue » = NON, nous utilisons le type de conservation T (fondé sur le temps) et nous supprimons ces documents cinq ans après leur date réelle.

 

Dans certains cas, les registres de suivi et les bases de données similaires sont « transférés » périodiquement. Supposons qu’un registre de suivi est utilisé pour suivre le rendement des ventes au cours d’une année civile donnée. Une feuille de calcul est mise à jour de façon continue tout au long de l’année. À la fin de l’année, la feuille de calcul de l’année en question est abandonnée, et une copie portant un nom différent est produite pour l’année suivante. Cette nouvelle feuille de calcul est ensuite mise à jour continuellement tout au long de la deuxième année. Cela signifie que la mise à jour de chaque registre cesse à la fin de chaque année civile. Nous pouvons alors appliquer la conservation à ces registres. Supposons que nous ayons une catégorie contenant des registres de suivi qui ont été transférés à la fin de chaque année. Par exemple, si nous appliquons une période de conservation par défaut normale de cinq ans, cela suffirait pour recueillir les registres de suivi. Les registres de suivi seraient conservés pendant cinq ans après la fin de chaque année civile, puis ils seraient détruits. En règle générale, si la période de transfert est inférieure à la période de conservation par défaut, le traitement par modification continue n’est pas nécessaire.

 

Sommaire

 

Une nouvelle approche du calendrier de conservation est essentielle au déploiement d’un SGEDD moderne. Le calendrier doit prendre la forme d’une feuille de calcul et contenir des descriptions détaillées des catégories, des cas explicites et une convention d’appellation pour les catégories de cas, ainsi que des règles de conservation mathématiquement correctes pour toutes les catégories qui exigent plusieurs règles de conservation. Cela permet de tirer parti des capacités complètes du logiciel de SGEDD.

 

Le calendrier de conservation ne se limite pas à préciser les règles de conservation : il constitue la structure sous-jacente de la plateforme de GCE. Une fois le SGEDD entièrement déployé, la structure de GCE, le calendrier de conservation et les règles d’automatisation de la RBR fonctionnent de concert comme une seule unité interconnectée. Tout changement apporté à l’une ou l’autre de ces trois composantes doit être soigneusement coordonné afin que les règles de tenue de documents fondée sur des règles ne soient pas enfreintes. Cela signifie que le professionnel de la GDI doit :

 

  1. Remanier considérablement le calendrier de conservation.
  2. Exercer une forte influence sur la structure de GCE.
  3. Concevoir et déployer des règles de déclaration et de conservation automatisées des documents.
  4. Surveiller de manière continue l’ensemble du système et s’assurer que les changements sont communiqués et pris en compte aux trois niveaux du système.

 

À propos de l’auteur

 

Bruce Miller, MBA, IGP est un expert de réputation mondiale en tenue de documents électroniques. Consultant indépendant, auteur et éducateur, il est à la source du premier logiciel de tenue de documents électroniques au monde. Il a été directeur mondial de la stratégie et du développement des affaires d’IBM en matière de documents électroniques. M. Miller compte parmi les 439 employés d’IBM (sur les 360 000 que compte l’entreprise) qui se sont vu décerner le titre de leader technique. Il a reçu le prestigieux prix Emmett Leahy, la plus haute reconnaissance internationale accordée aux professionnels du domaine de la gestion de l’information. Son livre Managing Records in Microsoft SharePoint est l’un des ouvrages les plus vendus de l’ARMA. M. Miller est titulaire d’un diplôme en technologie du génie électronique, d’une maîtrise en administration des affaires (MBA) et est un professionnel certifié en gouvernance de l’information.

 

Re-Envisioning the Retention Schedule: How to Build a Software-Ready Retention Schedule

SAGESSE WINTER 2021 – AN ARMA CANADA PUBLICATION

by Bruce Miller, MBA, IGP

 


Back to Sagesse 2021

Abstract

Modern electronic recordkeeping software provides new capabilities and techniques for managing digital records that were not previously possible with physical records. These capabilities include such things as multiple retention rules per category, retention assignment based on document value, multiple retention triggers and types, retention override, automatic declaration of records, and more. Many of these new capabilities are possible due to the presence of metadata fields assigned to digital records. To fully utilize these new capabilities, the retention schedule must leverage known available document metadata, and call for new document metadata in support of retention. The schedule must be fully aware of the new capabilities and utilize them where appropriate. A retention schedule that fully leverages these new capabilities is referred to as a software-ready retention schedule.  It is structured differently from a traditional retention schedule, allows multiple retention rules per category, leverages document metadata, uses multiple types of retention triggers unique to digital records, and explicitly specifies how case-based records are handled, as well as a number of other functional capabilities.

The Need for a New Retention Schedule

Many organizations have decided to deploy a modern EDRMS (Electronic Document and Records Management System). The records administrators of such projects will soon learn that the retention schedule is the key building block underpinning a successful EDRMS.

EDRMS is a blend of two technologies. The first is a modern ECM (Enterprise Content Management) platform (which used to be known as document management). This platform forms a digital repository for all electronic records, and provides for advanced searching by content and metadata, security control, version management, workflow automation, and collaboration such as multi-author document editing, and much more. The second technology is recordkeeping capability, often delivered as a set of features within the ECM itself or as a third-party product added to the content management platform.

In reality, the retention schedule underpins both technologies. The retention schedule does more than just feed retention rules to the ECM platform; it actually greatly influences the configuration of the ECM itself. This is necessary for the recordkeeping component to do its job properly.

All modern EDRMS systems incorporate RBR (Rules-Based Recordkeeping) to some extent. RBR is an approach to electronic recordkeeping that automates the recordkeeping functions the end user would normally have to carry out. These functions include identifying which documents are records, when to declare documents as records, and how to classify the documents against the retention schedule. A full and proper EDRMS deployment that fully utilizes RBR capability automates all these end user recordkeeping functions. End users have absolutely no role to play in the declaration or classification of any records. They simply operate the system as an ordinary everyday ECM, without thinking about records management whatsoever. Thanks to RBR however, in the background documents are being declared as records and are being properly classified against the retention schedule, even if the user is blissfully unaware of this.

Modern electronic recordkeeping software can carry out retention and disposition in ways most records professionals may not have even heard of. Because the records are digital, administrators have more document-level information to deal with and can leverage that information to do more granular, more sophisticated, and more flexible retention and disposition. For example, they can apply retention based on the value of documents, apply multiple retention rules to a single category, and even mix different types of retention rules within the same category. The software has these amazing retention and disposition capabilities; however, the records administrator must tell it what they want it to do. And that’s the job of the retention schedule. If we know what the recordkeeping software is capable of in terms of retention and disposition, then we can write a retention schedule to take full advantage of these powerful new capabilities. A retention schedule that leverages these retention and disposition capabilities is referred to as a “software-ready” retention schedule.

Traditional retention schedules were written without any knowledge of the capabilities of modern recordkeeping software. If you use a traditional schedule within a modern EDRMS, the software won’t be able to utilize any of the advanced retention and disposition capabilities it delivers. Furthermore, it will severely curtail the ability to fully utilize modern RBR automation techniques. A software-ready retention schedule, however, is written with the assumption that it will be used within an EDRMS, and will take full advantage of the advanced retention and disposition capabilities of the software. Any well-written software-ready retention schedule can be used with any modern recordkeeping software, regardless of brand.

Figure 1 shows an excerpt from an admittedly dated retention schedule. It shows the title, description, and a very simplistic retention rule for each category. But it is real, and it is in use today.

Figure 1 – Traditional Retention Schedule

Figure 2 below shows just how different a modern software-ready retention schedule looks in comparison to a traditional schedule. A traditional schedule typically is just a long list of activities, retention rules, and citations. The modern retention schedule shown in figure 2, however, has three different but interrelated components. More on this later.

Figure 2 – A Software-Ready Retention Schedule

In this paper, we will explain how the retention schedule plays a pivotal role in the overall configuration of a modern EDRMS, and highlight the characteristics of a software ready retention schedule.

The Role of the Retention Schedule

Figure 3 shows what a modern EDRMS looks like conceptually. There are three “layers” to an EDRMS:

The retention schedule The software ready retention schedule. This will be divided into case categories and administrative categories. On the left side are two administrative categories (operator rounds, and employee onboarding). On the right are two case categories (union grievances, and safety audits).

ECM structure Often referred to as “information architecture”, the ECM structure consists of all the so-called “libraries”, or places that documents can be stored. Different ECM products have different names for storage locations. Storage locations can be called libraries, folders, cabinets, etc. ECM structure also consists of the metadata, fields of information permanently stored with each document placed in each storage location. There is more to ECM structure than just libraries and metadata, including such things as versioning, security and collaboration, etc. But for now, we’re only concerned with libraries and metadata.

RBR rules RBR rules refer to the rules created within the recordkeeping software to automate the recordkeeping processes, namely declaration (which documents are declared as records and when), and which retention rules in the retention schedule get applied to which locations in the ECM structure.

Done properly, the retention schedule massively impacts the ECM structure. Each category in the retention schedule translates to one library in the ECM structure, and this library is where users will store documents for that particular category. Both the category and the library bear exactly the same name. Case categories require that the library be subdivided into “cases”, or containers, one for each case. This allows us to group records of each case together, separate from and independently of all other cases.

At the top of the pyramid lies the recordkeeping software and its RBR rules. This is where you define declaration rules such as “if library = “operator rounds” and approved = “yes” then declare”. Retention rules also get defined here, such as “if library = “operator rounds” retention equals true document date +5 years”. The rules need to know what the library names are, and what metadata they can work with.
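As a minimal, hypothetical sketch (the library and metadata names below are illustrative only, not taken from any particular product), the following Python snippet shows how such a declaration rule might be expressed and evaluated:

```python
# Hypothetical sketch of an RBR-style declaration rule: declare a document
# as a record when it sits in the "operator rounds" library and is approved.
# The library and metadata names are illustrative only.
def should_declare(document):
    """Return True when the declaration rule fires for this document."""
    return (document["library"] == "operator rounds"
            and document["approved"] == "yes")

doc = {"library": "operator rounds", "approved": "yes",
       "true_document_date": "2021-05-14"}
print(should_declare(doc))   # -> True
```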

As you can see, the retention schedule forms the base upon which the ECM is structured. This in turn allows the RBR rules to execute against that structure, as shown in figure 3.

Figure 3 – A Modern EDRMS

Case Records

The retention schedule must differentiate between case and so-called “administrative” record categories. Each category in the retention schedule therefore is either a case category or an administrative category. In most organizations today, about 60% of all records belong to case categories. The best way to understand case records structure is with the help of an example. Suppose you have 1000 contracts in existence at any one time. Each contract has a contractor name, the contract value, an expiration date, a contract type, etc. This data will not change among all the documents in any given case. Each contract theoretically could have an expiration date different from those of all other contracts. All contracts would have a single retention rule similar to “keep five years after contract end date, and then destroy”. Although there is only one single rule applied to all 1000 contracts, that single rule has 1000 different trigger dates, i.e. 1000 different expiry dates. The recordkeeping software must therefore track each of these 1000 dates.

Let’s look at this from the perspective of an EDRMS end user. A user has a document related to a particular contract. The document may be an email suggesting several changes to the draft of the contract. The user must specify which of the 1000 contracts the document is related to. How is this accomplished? The user must have a way to choose from among the 1000 contracts. How this is done can vary among different ECM systems but the most common would be a simple drop-down list of all 1000 contracts, as shown in figure 4. Each contract has a unique name, and the user must select one of the 1000 contracts. The ECM will have a library known as “contracts”. That library will be further subdivided into 1000 case containers, each bearing a unique name of one of the 1000 contracts. This is a good example of how the retention schedule shapes the ECM structure. The two have to work in concert, and only then can the RBR rules be applied to the records within these libraries.

Figure 4 – Contract Selection

Retention Schedule Structure

A modern software-ready retention schedule is recorded in a spreadsheet. There are 2 reasons for this:

  1. It is machine-readable. All the elements of the retention schedule, including all categories and RBR retention rules, can be read by modern electronic recordkeeping software and imported directly into the ECM and/or the electronic recordkeeping software itself.
  2. Better presentation. In a spreadsheet we can group things by business unit or by department, apply filters to various columns to examine subsets of the schedule, and use automatic numbering to number categories. Compared to a written document, it is a better environment for developing the schedule, revising it, and presenting it both to machines and to humans.

It does not matter which proprietary spreadsheet format you use (Microsoft Excel, Google Sheets, etc.). The examples we use in this report utilize Microsoft Excel. The retention schedule is a workbook consisting of multiple worksheets.

The retention schedule consists of 3 major components:

Categories A worksheet containing all the categories for each business unit within the organization. Each category is named, numbered, and has a retention rule. Where there is more than one retention rule for the category, only one retention rule is shown and all of the category’s retention rules are listed in the MRR worksheet.

Cases A worksheet containing the details such as naming nomenclature for each case across all business units.

MRR A worksheet containing retention rules for each category that has more than one retention rule.
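
Because the workbook is machine-readable, the three components can be imported programmatically. The short Python sketch below uses the pandas library and assumes a hypothetical file name (retention_schedule.xlsx) and worksheet names matching the three components above; in practice each business unit may have its own categories worksheet, as described below, and the column headings follow those described later in this report.

  import pandas as pd

  workbook = "retention_schedule.xlsx"  # hypothetical file name

  categories = pd.read_excel(workbook, sheet_name="Categories")  # one row per category
  cases      = pd.read_excel(workbook, sheet_name="Cases")       # case naming conventions
  mrr        = pd.read_excel(workbook, sheet_name="MRR")         # multi-rule retention details

  # Example: list every category that carries multiple retention rules.
  multi_rule = categories[categories["MRR number"].notna()]
  print(multi_rule[["No.", "Secondary", "MRR number"]])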

The first worksheet is a summary of primary business functions, as shown in figure 5 below.

Figure 5 – Primaries

In this worksheet, the Code column contains a short acronym for each business function. Function is the name of the function. Number is a sequential number assigned to each primary business function. Description is a detailed description of the function. Each row in this worksheet constitutes a different business unit grouping within the organization, often called a department or section, and each row has a corresponding worksheet of the same name.

Categories

Figure 6 shows a worksheet for one of the business functions, in this case the clerks department.

Figure 6 – Business Unit Categories

Each spreadsheet row is a single category. White rows are administrative categories, usually with simple time-based retention rules, and green rows indicate case categories that are subdivided into cases. This report does not allow for a comprehensive treatment of all column headings, so we will highlight only the key headings from figure 6. The primary headings are as follows:

Secondary Short title of the category.

No. Sequential unique number of the category.

Description Detailed description of the category.

MRR number Indicates that there are multiple retention rules for this category. The rules appear in the MRR worksheet. Each batch of rules unique to that category is uniquely numbered.

BR Business retention. Retention needed by the business, not the retention period specified by legislation.

Trigger Either the document metadata field or the case metadata field used to trigger the retention period.

Type One of the 5 retention types (explained later in this report).

Unit Unit of time measure, typically years.

Disp. Action Disposition action. What will happen to the records at the end of their lifecycle. Typically delete, keep permanently, review, transfer.

Cases

Figure 7 below shows the worksheet used to define the details (structure) of all cases.

Figure 7 – Case Structure

The purpose of this worksheet is to specify the naming convention for each case within each category that is designated as a case category. Each case within a category must be named uniquely among all other cases in that same category. Some ECM systems have severe limitations on the length of a container name. A container is what the ECM uses to group together documents that are related; in some ECM systems this is called a folder, a cabinet, a document set, etc. We will refer to these with the generic name "container". We specify a case naming convention of 3 parts. Each part will specify a name, whether it is mandatory or optional (M/O), and a maximum allowable number of characters for that part of the name. The column headings are as follows:

Category name The name (title) of the category.

Case examples Fictional examples of how the naming would appear for each case.

PRI The primary business function underneath which the category falls.

Name The name of that part. The system administrator names the container with a suitable name that matches the particular case, but this column shows what the name is supposed to consist of.

M/O Either mandatory (M) or optional (O).

MAX Maximum allowable number of characters.

Multi-Rule Retentions

This worksheet contains one row for each component of a retention rule, for each category that specifies more than one retention rule. These rules can be read by machine directly into most modern recordkeeping software. This spreadsheet can also be easily manipulated so that the column headings and the order of the columns match what the particular recordkeeping software requires. Column headings are as follows:

MRR A unique sequential number that identifies the batch of rules unique to a given category. Each row will have the same number for all rule components in a given category.

PRI The primary business function that the category falls under.

Document field The document metadata field that triggers the retention rule.

Value, document The value of the document metadata field necessary to trigger the rule.

Case field The case metadata field that triggers the retention rule.

Value, case The value of the case metadata field necessary to trigger the rule.

Name The external trigger name that triggers the retention rule. Typically from an external source such as a corporate database.

Value The value of the external trigger necessary to trigger the rule.

REL Related. A Boolean logic operator that relates this rule component to the following rule’s component. Examples are AND, OR, NOT, etc.

Type Retention rule type. Retention rule types are itemized later in this report.

Period Retention period.

Unit Unit of time measure, usually in years.

Disposition Action carried out at the end of the lifecycle, e.g. transfer, permanent, etc.
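
As a neutral sketch, not tied to any product's import format, one row of the MRR worksheet could be represented in Python as follows, with the column headings above as field names; the example row values are hypothetical.

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class MRRRow:
      mrr: str                              # batch number shared by all rule components of a category
      pri: str                              # primary business function
      document_field: Optional[str] = None  # document metadata field that triggers the rule
      value_document: Optional[str] = None  # value required in that document field
      case_field: Optional[str] = None      # case metadata field that triggers the rule
      value_case: Optional[str] = None      # value required in that case field
      name: Optional[str] = None            # external trigger name (e.g. a corporate database)
      value: Optional[str] = None           # value of the external trigger
      rel: Optional[str] = None             # Boolean operator linking this row to the next (AND, OR, NOT)
      rule_type: str = "T"                  # retention rule type (T, D, E, R, O)
      period: Optional[int] = None          # retention period
      unit: str = "years"                   # unit of time measure
      disposition: str = "delete"           # action at end of lifecycle

  # Hypothetical example row: a hazardous-materials condition with a 50-year retention period.
  row = MRRRow(mrr="100.1", pri="HR", document_field="hazardous materials",
               value_document="yes", rule_type="T", period=50)
  print(row)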

Retention Schedule Characteristics

Here we will examine the five core structural characteristics of a software ready retention schedule. They are:

Multiple Retention Rules The ability to have multiple retention rules, and multiple types of retention rules, for any given category in the schedule.

Value-Based Retention The capability to base retention periods on the value of specified documents within the category.

Published Documents A method of handling documents with an indeterminate retention period.

Retention Over-Ride (ROR) The ability for an end user to override an assigned retention rule.

Continuous Over-Write A means of dealing with records that are being continuously revised and updated.

Multiple Retention Rules

Traditional retention schedules by and large allow only a single retention treatment for each category. That retention treatment, or rule, can be time-based, as in "delete after 5 years", or case-based, as in "delete two years after end of investigation". With the first rule, each document qualifies for destruction as it reaches five years of age; disposition is accomplished on a document-by-document basis. With the second rule, all the records in any one case qualify for disposition two years after the case has ended, i.e. the investigation is over. In both instances, there is one single retention rule that applies to all the records in that particular category.

Modern electronic recordkeeping software however allows us to support not only multiple retention rules for any one category, but also different types of rules within a category. Each retention rule type refers to a different approach used to calculate the eligibility for disposition. Inside the software, the retention type invokes a different algorithm that determines how the retention is calculated. Different software products offer a different selection of retention types. Some offer more retention types than others. And a given retention type in one product may be similar in function to that of another product, but will be named differently. The following table shows the five common retention types found across most software products:

Type Usage
T Time Based (based on document’s AGE)
D Document-based (based on a document metadata field property)
E Event Based (For case records, or external defined events)
R Relationship-based (for Supersedence)
O Over-Write. A document that is continually added to, over-writing prior changes, e.g. a tracking list or database. Must never be made immutable, and will never be deleted.
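
For illustration only, the five neutral type codes above can be modelled as a simple enumeration; product-specific names will differ.

  from enum import Enum

  class RetentionType(Enum):
      T = "Time-based: calculated from the document's age (true document date)"
      D = "Document-based: calculated from a document metadata field"
      E = "Event-based: calculated from a case closure or externally defined event"
      R = "Relationship-based: used for supersedence"
      O = "Over-write: continuously over-written; never made immutable, never deleted"

  print(RetentionType.E.value)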

There are plenty of real-life situations that call for multiple retention rules within a given category. Below are some common examples:

  1. Executed copies of agreements must be kept much longer than drafts and supporting or ancillary documents related to the contract.
  2. Legislation may specify that a different retention period applies if a document refers to a person below a certain age.
  3. In engineering projects, each type of document within the project has a different lifetime and value for retention purposes.
  4. Approved documents are to be kept for longer than those that have not been approved.
  5. Minutes and agenda of formal meetings are typically kept permanently, whereas the remaining records related to that meeting can be discarded.
  6. The retention period of certain records can vary depending on the outcome of the business process. For instance, records related to the acquisition of a company specify that certain due diligence records are to be destroyed immediately if the acquisition fails to close, but if the acquisition is completed successfully, they are to be retained for X years.
  7. Policy. Documents related to the development of a policy can be discarded after a few years, whereas the official "published" policy that was put into effect is retained indefinitely until superseded.

In any modern software ready retention schedule, it is common to have multiple retention rules applicable to as many as 80% of all categories in the schedule. Let’s take a look at a real-life example of a category in the retention schedule that requires multiple retention rules. Under human resources we have an activity (category) called “Credentials, Employee and Apprentice”. This is used to store all records related to the credentials needed by employees and apprentices, such as for operating vehicles with air brakes, handling hazardous materials, firefighting, or emergency medical services. There are three retention rules for such credentials, based on various applicable legislation as shown:

  1. If Hazardous Materials = Yes, retention = Credential Expiration Date + 50 years, then destroy
  2. If Business Unit = Fire or EMS, retention = Credential Expiration Date + 8 years, then destroy
  3. If Hazardous Materials = No AND Business Unit is not Fire or EMS, retention = 5 years, then destroy

Let's look at what these three rules really mean. The first rule states that if the credential is for hazardous materials, the records relating to that credential must be kept for 50 years, then destroyed. Rule two states that if the record belongs to the business unit Fire or the business unit EMS, regardless of what type of credential it is, the records relating to these credentials must be kept for eight years after the expiry of the credential, then destroyed. Rule three looks rather complicated, and technically it is, but its meaning is simple: all other credentials are to be kept for 5 years, then destroyed. This applies to all credentials that are not for hazardous materials and are not within the business unit Fire or the business unit EMS.

The recordkeeping software must have a way to know which rule to apply to which records within this category. It will rely on metadata to tell it what it needs to know. We need a document metadata field called “hazardous materials”. The default value will be NO. If, however the user enters YES in this field, that triggers rule 1 for that document. We need a second metadata field called ”business unit”. If this field contains either “fire” or “EMS”, rule 2 will be applied to that document. Rule 3 will be applied to all remaining documents in that category.
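
A minimal Python sketch of this metadata-driven logic follows; the field names are hypothetical, and rule 3 is assumed to run from the true document date, consistent with its simple time-based retention.

  from datetime import date
  from typing import Tuple

  def credential_disposition(doc: dict) -> Tuple[date, str]:
      """Return the disposition date and the rule applied for a document in the
      'Credentials, Employee and Apprentice' category."""
      expiry = doc.get("credential_expiration_date")
      if doc.get("hazardous_materials") == "YES":
          # Rule 1: hazardous materials credentials, expiry + 50 years
          return expiry.replace(year=expiry.year + 50), "Rule 1"
      if doc.get("business_unit") in ("Fire", "EMS"):
          # Rule 2: Fire or EMS credentials, expiry + 8 years
          return expiry.replace(year=expiry.year + 8), "Rule 2"
      # Rule 3: all other credentials, 5 years then destroy
      tdd = doc["true_document_date"]
      return tdd.replace(year=tdd.year + 5), "Rule 3"

  doc = {"hazardous_materials": "NO", "business_unit": "EMS",
         "credential_expiration_date": date(2024, 5, 1),
         "true_document_date": date(2020, 2, 1)}
  print(credential_disposition(doc))  # Rule 2 applies: disposition in 2032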

This is an excellent example of how the retention schedule drives ECM structure. The retention schedule specifies the three different variations of retention treatment necessary for this category, and it explicitly specifies the metadata fields needed in the ECM structure. As long as those metadata fields exist, and users use them, the retention rules will be applied correctly. Obviously these metadata fields must be mandatory, as the RBR retention rules depend on the values in these fields to do their work.

Figure 8 shows how these three retention rules are expressed in the retention schedule. The retention schedule is a spreadsheet consisting of several columns from left to right.

Figure 8 – Three Retention rules in a single category

The column "secondary" shows the title of the category. The column "MRR number" indicates that this category has multiple retention rules; MRR number 100.1 will show the details of the rules. Meanwhile, the column BR, or business retention, shows 5 (years). This is the default rule of five years then destroy, as needed for rule 3. However, the MRR column shows rule number 100.1, which refers to the entire set of rules for this category. Let's take a look at the details of the retention rules for this category; refer to figure 9 below.

Figure 9 – Rule Details

There could be hundreds or even thousands of rules in this worksheet. In this category however there are exactly 8 rows which act together to form the three unique retention rules for this category, rows 79 through 86 inclusive. Each electronic recordkeeping software product has differing capabilities and limitations for multiple retention rules. Furthermore, each product has a slightly different approach and nomenclature to how the rules are expressed and documented. The example we see in figure 9 is a neutral expression of the three rules that should apply to most modern recordkeeping software products. They would likely have to be adjusted to suit any one particular software product.

In row 79 we define the first rule. The rule is triggered by the document field "hazardous materials", and the value in the field must be "yes". The retention rule type is T (time-based), the retention period is 50 years, and the disposition action is delete. Rule 2 is a little more complicated. In rows 80 and 81 we take care of the situation where the business unit is "fire". In rows 82 and 83 we take care of the same situation, but where the business unit is "EMS". In row 80, under the column heading REL (Related), we enter the Boolean operator AND. This simply means that the condition in row 80 and the condition in row 81 must both be met in order for the action to take place. In row 81, we specify that there must be a date in the metadata field "credential expiration date". Hence if the business unit is "fire" and there is an expiration date, the document will be kept for eight years after the date specified in the field "credential expiration date". Note the retention type is D, which tells the software to trigger the retention period on the date field known as "credential expiration date". Rows 82 and 83 accomplish the same thing but for the business unit called "EMS". Rows 80 to 83 together are necessary for our retention rule 2.

Rows 84 to 86 form our retention rule 3. Row 84 specifies that the field “business unit” must not contain the word “fire”. In row 85 we specify the field “business unit” must not contain the word “EMS”. In row 86 we specify that the field “hazardous materials” must contain the value “NO”. Once these 3 criteria have been satisfied the document will be kept for 5 years then deleted.

This has been a deliberately complicated example, but it shows how we can make very sophisticated and complex retention rules. Modern electronic recordkeeping software is more than capable of handling these complex rules; however we have to explicitly tell the software exactly what to do. This will require the use of metadata within the rules, and it is imperative that the retention schedule specify the metadata needed to execute the rules. This metadata must then be built into the ECM. Only when the metadata has been constructed can the rule possibly work. Over the lifetime of the ECM it is imperative that these metadata fields not be disturbed, renamed, removed, or altered in any way. If changes are made to this metadata at any time, the changes must be communicated to the RIM professional so the retention rule can be adjusted accordingly, otherwise the rule will simply stop working.

Value-Based Retention 

With today’s recordkeeping software we can assign retention periods based on the value of records within a category. We can assign longer retention periods to documents of higher value, and shorter retention periods to documents of lesser value. To do this we again rely on document metadata within the ECM structure. We will need a metadata field to differentiate documents of high value from those of lower value. There are many ways to do this that can involve a single metadata field or multiple metadata fields depending on the particular activity. For now however we will use a very common technique found in a number of organizations. Let’s suppose we have an activity (category) for “capital projects”. These are large capital-intensive engineering projects such as building roads, bridges, or buildings. Each project is a case within the category. Each case will store all the records related to that particular project through to the end of its life (which would be the project end date). Needless to say, there could be thousands, even tens of thousands of documents for each project. We can define a metadata field that can tell us the nature of each document. The nature, or subject of the document in turn can tell us the inherent value of the document for the purposes of assigning a retention period. A good example would be a document metadata field called “Document Type, Capital Projects”. This would be a mandatory field in the ECM library, so that every single document must have a value in this field. There would be a dropdown list of document types similar to that shown below:

Document Types, Capital Projects
Document Type Trigger Retention Period (Yrs)
Project Management TDD 5
Contractual/Legal EOL 5
Planning and Logistics EOP 5
Reports, Draft TDD 2
Reports, Final N/A P
Charter/Authorization N/A P
Meeting Minutes/Agenda N/A P
Technical Specifications EOL 5
Drawings, Draft EOP 5
Drawings, As-Built N/A P
Regulatory and Compliance EOP 5
Permits & Licenses EOP 10
Contractor-related EOP 2
Approval-related TDD 25
Budget Related EOP 5
Other EOP 2

Retention triggers are as follows:

TDD True document date

EOL End of life (of asset)

EOP End of project

End users are forced to pick one of the 16 possible values for this mandatory field. Users would not normally see the trigger or the retention period when they select the document type. There is no reason why they couldn’t, but most users simply don’t have an interest in the retention periods. Below are some examples of how the retention rule was derived from the selection of the document type:

Technical specifications These records will be kept for 5 years after the end of life of the asset being constructed. If the asset is a bridge, the technical specifications must be kept on hand for the entire life of the bridge.

Project management These records would include things such as project schedules, Gantt charts, and other documents related to the management of the project. The value diminishes quickly after they have been used, hence the retention period is the date of the document (true document date) +5 years, then destroy.

Budget related Records related to the budget are to be kept for 5 years following the end of the project. These records are not needed to be held for the life of the asset under construction.

Not all document types necessarily need a retention treatment different from other document types. Note above that the two document types "Contractor-related" and "Reports, Draft" each have the same retention period (2 years). In many modern EDRMS systems the document type is used to help end users search and retrieve documents by their type. This is particularly useful where there are high volumes of documents, i.e. thousands or even tens of thousands of documents; the document type field makes it easier to find the document of interest. We can take advantage of that to assign appropriate retention periods to each document type.
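
To show how such a list can drive retention, here is a minimal Python sketch that mirrors the table above; "P" stands for permanent, and the function name is hypothetical.

  # (trigger, retention period in years); "P" = keep permanently
  DOC_TYPE_RULES = {
      "Project Management":        ("TDD", 5),
      "Contractual/Legal":         ("EOL", 5),
      "Planning and Logistics":    ("EOP", 5),
      "Reports, Draft":            ("TDD", 2),
      "Reports, Final":            (None, "P"),
      "Charter/Authorization":     (None, "P"),
      "Meeting Minutes/Agenda":    (None, "P"),
      "Technical Specifications":  ("EOL", 5),
      "Drawings, Draft":           ("EOP", 5),
      "Drawings, As-Built":        (None, "P"),
      "Regulatory and Compliance": ("EOP", 5),
      "Permits & Licenses":        ("EOP", 10),
      "Contractor-related":        ("EOP", 2),
      "Approval-related":          ("TDD", 25),
      "Budget Related":            ("EOP", 5),
      "Other":                     ("EOP", 2),
  }

  def retention_for(document_type: str) -> str:
      trigger, period = DOC_TYPE_RULES[document_type]
      if period == "P":
          return "Keep permanently"
      return f"Destroy {period} years after {trigger}"

  print(retention_for("Technical Specifications"))  # Destroy 5 years after EOL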

Figure 10 shows how these retention rules will be entered into the retention schedule itself, in the MRR (multiple retention rule) worksheet.

Figure 10 – Document Type Retention Rules

Note there are two different retention types among the 16 retention rules. The rules in each of the spreadsheet rows 101, 104, 114, and 116 each use retention type T (time-based retention). The remaining rules use retention type E (event-based retention), except for the 3 rules in spreadsheet rows 105, 106 and 107 which call for permanent retention. Retention type E specifies that the trigger date is some event date. On spreadsheet row 108 the event date is the end of life (EOL) of the asset. In row 109 however the event date is the end of project (EOP).

This approach to value-based retention is generally useful when you have a very high quantity of records within a given activity (category). This approach offers 2 distinct benefits:

  1. Improved document retrievability. Users can search for documents based on the type of document.
  2. Better retention granularity. Documents of low continuing value are destroyed early, and documents of higher, more persistent value are kept longer.

Once again, it’s important to point out the criticality of metadata in the EDRMS. This technique would not be possible without properly defined metadata, in this case the “document type” field. Well-defined and carefully considered metadata is a key to the success of any ECM project, and is equally important for recordkeeping automation.

Published Documents

Certain types of documents have an effective retention period of “indefinite”. This usually means that the document is to be kept until it has been superseded by a newer version. The document is kept for an indefinite period of time until it is replaced with that newer version. Some examples may include:

Policies A policy, such as an email usage policy, is in effect until replaced by a newer version of the policy.

Standard Operating Procedures Standard operating procedures are often documented for things such as fire alarm drills, confined space entries, diagnostic test processes, plant operating and testing procedures, etc. Such procedures remain in effect, and must be followed, until they have been replaced with a newer version.

Training materials Training materials are developed for a particular training course. These materials are used to deliver the course as often as necessary. Eventually, they will be replaced with a newer version. The retention period for the original training materials is indefinite, until superseded by a newer version.

Plans Many plans are in effect until replaced with newer versions, such as annual operational plans, emergency plans, corporate strategies, etc. Sometimes plans are replaced on a scheduled cycle such as annual or every 5 years. However, in many cases a plan is in effect until it is replaced with a newer version, and it is impossible to predict when that newer version will arrive.

We refer to such documents as published documents. A published document is simply one that is "in effect", "in play", or "in force" until superseded. We must not destroy these documents while they are in effect; once they have been superseded, retention can be applied, and they can be deleted a set period after the superseded date. The term "published" is simply a convenient moniker; it is not necessary to use that particular word. Within any given category where a published document is being developed, there will be many more documents than just the published document itself. Suppose the published document being worked on is a policy. There will be many drafts of the policy, many emails with directives, instructions, and comments related to its development, and many reference documents, including financial documents, legal briefings, and any manner of supporting or ancillary documents. Of all the records in that category, we only need to apply the supersedence process to the actual policy itself (the version that was put into effect). The remaining documents do not have an indefinite retention; we can apply a different retention rule to them, disposing of them after a fixed period of time, or a certain period of time after the published document has been put into effect. Either way, we need a way to distinguish the published documents from those that are not. To do this we use a document metadata field called PUBLISHED (Y/N).

To handle supersedence in a modern EDRMS, we use a combination of the following four metadata fields:

Version The version of the document in question. Versions can be one of many forms, such as a sequential number, date, or even a season (summer, fall, etc.).

Superseded date The date that a document was superseded by a newer version.

Effective date The date that a version of the document was put into effect.

Published Indicates (Y/N) whether the document has an indefinite retention period (until superseded). This distinguishes a document in the category that requires supersedence retention treatment from those that do not (drafts, commentary, supporting or ancillary documents).

The supersedence process is shown in figure 11.

Figure 11 – Supersedence Process

Version 1 was published, or came into effect, on January 10, 2018. On June 11, 2018, however, version 2 was approved and took effect on that date. Hence the superseded date of version 1 became June 11, 2018, and the effective date of version 2 was also June 11, 2018. On December 10, 2018, version 2 was superseded by version 3, which became effective on December 10, 2018. Version 3 has no superseded date as it has yet to be superseded by a newer version. Each of these 3 documents would have the value YES in the metadata field PUBLISHED. All other supporting and ancillary documents related to the published document would have the value NO in the metadata field PUBLISHED.
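
A minimal sketch of this supersedence bookkeeping, using hypothetical class and field names: when a new version takes effect, its effective date is written into the superseded date of the previously effective version.

  from datetime import date
  from typing import List, Optional

  class PublishedVersion:
      def __init__(self, version: int, effective: date):
          self.version = version
          self.effective_date = effective
          self.superseded_date: Optional[date] = None  # empty until a newer version takes effect

  versions: List[PublishedVersion] = []

  def put_into_effect(version: int, effective: date) -> None:
      # The incoming version supersedes the currently effective one on the same date.
      if versions:
          versions[-1].superseded_date = effective
      versions.append(PublishedVersion(version, effective))

  put_into_effect(1, date(2018, 1, 10))
  put_into_effect(2, date(2018, 6, 11))   # version 1 superseded June 11, 2018
  put_into_effect(3, date(2018, 12, 10))  # version 2 superseded December 10, 2018; version 3 still in effect

  for v in versions:
      print(f"Version {v.version}: effective {v.effective_date}, superseded {v.superseded_date}")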

Suppose we have a category called “Policies, Corporate” with the following retention rules:

If Published = Yes, retention = Date Superseded + 5 years

If Published = No, retention = 2 years

Figure 12 shows how we would enter these two rules into the retention schedule in the MRR (Multi Retention Rules) worksheet.

Figure 12 – Retention Rule Details

In spreadsheet row 147 we indicate that the value YES must be in the field “published”. We enter ”AND” in the REL (RELATED) column to show that there is a 2nd condition that must be met. In spreadsheet row 148 we specify that there must be a date in the field “superseded date”. The document will be destroyed 5 years after the date in the “superseded date” field. Note the retention type is D (trigger retention on a document metadata date field). In spreadsheet row 149 we deal with all the remaining documents, i.e. those that are not published. Here we simply keep these documents for 2 years, then destroy. The retention type T tells the software to destroy the documents 2 years following the true document date.

Retention Over-Ride

Every now and then, in some categories, the business owner will request the ability to override the retention schedule and keep a document for a longer period of time. We refer to such an extension as a retention schedule "override". The rationale varies greatly, but the following are some common reasons a business user may wish to override the retention schedule:

  1. Reference value. A particular document may have an unusually long (persistent) value for future reference. It may be a rare legal precedent. It may be a technical specification or photograph of an extremely rare piece of equipment that is long out of date but still in service, and the document may need to be preserved as long as the equipment is still in service.
  2. Protective value. A document may record something that could be used in the future to protect the organization from legal action, or serve to defend it in the event of any legal or regulatory challenge in the future. It may serve as evidence that the business owner feels should be kept well beyond the regular retention period, “Just in case”.
  3. Legal value. Some legislation obligates an organization to keep relevant documents if there is a “reasonably foreseeable prospect” of legal action. You may suspect that this document or documents would be important in the event of future legal action against your organization.
  4. Historical value. Records within a given category do not ordinarily contain anything of historical value, but every now and then a document may be deemed historically significant even though it was never expected to be. For example, a photograph of the sod-turning ceremony during the construction of a new facility may be included with the construction project documents; the photo may be declared historical, in which case you would wish to keep that particular photo permanently.

To allow an end user to override a retention period, you need a mechanism by which they can designate a document with higher retention value. This would be yet another metadata field, commonly called CRITICAL (Y/N) or something similar. The name of the field does not matter; it can be called anything you wish, as long as the user understands its purpose. We then define two separate retention rules for this category, one where CRITICAL = YES and one where CRITICAL = NO, as shown below:

If Critical = Yes, retention = 25 years

If Critical = No, retention = 5 years

Figure 13 shows how we would enter these rules into the retention schedule.

Figure 13 – Retention Rule Over-ride

In spreadsheet row 162 we have a simple time-based retention rule where the value of the field “critical” = YES. Documents meeting that rule will be destroyed 25 years after the true document date. In spreadsheet row 163 we specify a retention rule of 5 years where “critical” = NO.

Below are some important considerations when implementing retention rule overrides:

  1. Each category can have a different retention period for the override. For example, a category “financial audits” might have an override of 25 years, whereas a category “museum collections” might have an override of permanent.
  2. This capability can be subject to abuse. Some users may be inclined to use it too much, on too many documents. The only defence against this is to educate your users and to monitor usage of the override. We recommend that on a regular basis, perhaps monthly, you run a report against the entire EDRMS to determine how often, in what categories, and by which users the override was applied (a rough sketch of such a report follows this list). Monitor it frequently to ensure it is being used in a healthy fashion and not being abused.
  3. This override can be combined with other retention rules in any given category. The example below shows how the override can be applied to a category with supersedence rules:
  1. If Published = Yes, retention = Date Superseded + 5 years
  2. If Published = No, retention = 2 years
  3. If Critical = Yes, retention = 25 years
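
The Python sketch below illustrates the kind of monthly override usage report suggested in point 2 above; the document metadata export and the field names ("category", "user", "critical", "declared") are hypothetical.

  from collections import Counter
  from datetime import date

  # Hypothetical export of document metadata from the EDRMS
  documents = [
      {"category": "Financial Audits",   "user": "jsmith", "critical": "YES", "declared": date(2022, 3, 4)},
      {"category": "Financial Audits",   "user": "jsmith", "critical": "NO",  "declared": date(2022, 3, 9)},
      {"category": "Museum Collections", "user": "akhan",  "critical": "YES", "declared": date(2022, 3, 17)},
  ]

  def override_report(docs, year: int, month: int) -> Counter:
      """Count CRITICAL = YES overrides by (category, user) for one month."""
      counts = Counter()
      for d in docs:
          if d["critical"] == "YES" and (d["declared"].year, d["declared"].month) == (year, month):
              counts[(d["category"], d["user"])] += 1
      return counts

  for (category, user), n in override_report(documents, 2022, 3).items():
      print(f"{category}: {user} applied the override {n} time(s)")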

Continuous Overwrite

Unlike traditional physical documents, electronic documents can be modified on an ongoing basis over time. There are 3 distinctly different methods to modify a digital document:

Save with a different filename Every time you modify the document, you save it with a different filename. This creates a separate document for each modification, each bearing its own filename. Technically and legally, each change constitutes a different record, and each of these separate records can be declared and managed independently of the others.

Save with the same filename You make a change to the document and save it without changing its name. This overwrites the previous version of the document with a new version that contains the changes. There is no record of the changes made to the document. There is no version that tells you how often it was changed or what the differences are between the versions. It is legally and technically one record, the content of which has changed over time. This is what we refer to as a true “continuous overwrite” document. It is continuously being overwritten. The frequency with which it is modified is of no consequence, so don’t be put off by the “continuous” terminology. It is continuous insofar as all the changes are continuously being overwritten throughout the lifetime of the document.

Save and increment the version In any modern ECM system, you can optionally turn on version management. Each time you save the document the system automatically increments the version number by one. The first time you save a document it will automatically have version 1. The next time you save it, it will have version 2, and so on. This allows you to go back in time and see every change made to the document. Legally and technically, each version is a record and can be managed as a record independently of all other versions. Some would say that the entire version series is a single record. Either way, in modern recordkeeping the versions and the changes to those versions should be preserved in accordance with the principle of records preservation.

Here we are concerned only with the 2nd of the 3 methods shown above – saving with the same filename. We refer to this as “continuous overwrite”. Some examples may include:

  1. Tracking logs. A spreadsheet used to track attendance of students, phone calls, project changes, etc. The spreadsheet is updated periodically (daily, weekly, monthly) or on an as needed basis. Each time the spreadsheet is updated it is saved without changing its name.
  2. Databases. You may have a database to track things such as assets, employee leave, or other information. Such databases might be kept in Microsoft Access, Oracle, or even a Microsoft Notepad document. The database is updated periodically and all the data is stored in the "database", which can consist of a single file or a set of related files that we consider to be a record. The name of that database file (or files) never changes, and it is continuously overwritten as new data is added or modified.
  3. Notebooks. A notebook can be an ordinary document such as a Microsoft Word document which is used to record operator notes, police notes, or anything else which is periodically and continuously updated. Microsoft has an innovative software application called OneNote which is specifically designed to record notes on a continuous basis in a single document. In essence, OneNote is a database of unstructured documents.

Such documents often constitute important records. But because they are being continuously modified and saved (i.e. continuously overwritten), we can neither make them immutable (lock them down and prevent deletion or modification) nor delete them. So how do we deal with them in a retention schedule? Any given category can have one or more such documents. For example, a category such as "sales performance and tracking" might contain records of the sales quotas and targets of a sales team. Mixed within these records could be a tracking log, the spreadsheet that records and tracks aggregate data of the entire sales team over time. Because this tracking log is being continuously overwritten, we cannot make it immutable or delete it. In essence, we have to ignore it and leave it be. In this example we would define two retention rules as follows:

If Continuous Overwrite = Yes, retention = Ignore

If Continuous Overwrite = No, retention = 5 years

In the ECM, each document in this category must have a mandatory metadata field called “continuous overwrite”. The default value would be NO. For each tracking log stored in this ECM library (Category), the user must specify “continuous overwrite” = YES. Where “continuous overwrite” = YES, the software will ignore the document and not lock it down or apply any deletion to it. For all documents where “continuous overwrite” = NO, the document will be kept for 5 years then destroyed.
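
As a minimal sketch of these two rules, assuming the mandatory field is named "continuous_overwrite" with a default of NO and a hypothetical "true_document_date" field:

  from datetime import date
  from typing import Optional

  def retention_decision(doc: dict) -> Optional[date]:
      """Return the deletion date, or None where the document is to be ignored (type O)."""
      if doc.get("continuous_overwrite", "NO") == "YES":
          return None  # type O: never made immutable, never deleted
      tdd: date = doc["true_document_date"]
      return tdd.replace(year=tdd.year + 5)  # type T: delete 5 years after the true document date

  tracking_log = {"continuous_overwrite": "YES", "true_document_date": date(2022, 1, 5)}
  quota_memo   = {"continuous_overwrite": "NO",  "true_document_date": date(2022, 1, 5)}
  print(retention_decision(tracking_log))  # None, so leave it be
  print(retention_decision(quota_memo))    # 2027-01-05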

Figure 14 shows how we would enter that into the spreadsheet.

Figure 14 – Continuous Overwrite

In spreadsheet row 194 we specify that for all documents where document metadata field “continuous overwrite” = YES, we will use retention type O (ignore, no deletion). In spreadsheet row 195 we specify that for all documents where document metadata field “continuous overwrite” = NO, we use retention type T (time-based) and delete these documents 5 years after their true document date.

In some cases, tracking logs and similar databases are "rolled over" on a periodic basis. Suppose a tracking log is used to track sales performance in a given calendar year. A spreadsheet is updated continuously throughout the year. At the end of the year, the spreadsheet for that year is left behind and a copy is made with a different name for the following year. This new spreadsheet is then continuously updated throughout the second year. This means that updating of each log ceases at the end of every calendar year, and we can then apply retention to those logs. Suppose we had a category that contained tracking logs that were rolled over at the end of each year. If we applied a normal default retention period of 5 years, for example, that would be sufficient to capture the tracking logs: they would be retained for 5 years after the end of each calendar year, then destroyed. As a general rule of thumb, if the rollover period is less than the default retention period, the continuous overwrite treatment is not necessary.

Summary

A fresh approach to the retention schedule is essential if you are about to deploy a modern EDRMS. The schedule must be in a spreadsheet format and contain detailed category descriptions, explicit cases and naming conventions for case categories, and mathematically correct retention rules for all categories that require multiple retention rules. This allows you to leverage the full capabilities of the EDRMS software.

The retention schedule does more than simply specify retention rules: it forms the basic underlying structure of the ECM platform. Once the EDRMS is fully deployed, the ECM structure, the retention schedule, and the RBR automation rules all work together as an interconnected unit. Any change to any of these three components must be carefully coordinated so that the RBR rules do not break. This means the RIM professional must:

  1. Substantially re-work the retention schedule
  2. Heavily influence the ECM structure
  3. Design and deploy automated records declaration and retention rules
  4. Continuously monitor the overall system and ensure any changes are communicated and reflected in all three levels of the system

About the Author

Bruce Miller, MBA, IGP is a world leading expert on electronic recordkeeping. He is an independent consultant, an author, and an educator. He pioneered the world’s first electronic recordkeeping software. He served as IBM’s global e-Records Strategy and Business Development Executive. At IBM he was honored as a Technical Leader, one of 439 out of 360,000 IBM employees. Mr. Miller is the recipient of the prestigious Emmett Leahy Award, the highest international recognition given to professionals in the field of information management. His book “Managing Records in Microsoft SharePoint” was an ARMA best seller. Bruce holds a Diploma in Electronics Engineering Technology, an MBA, and is a certified Information Governance Professional.

Medical Information Management for COVID-19 or Other Medical/Emergency Procedures

Sagesse Winter 2021 – An ARMA Canada Publication

 

by Alexandra (Sandie) Bradley, CRM, FAI

 

Back to Sagesse 2021

 

ABSTRACT

 

This paper will examine why one’s personal medical information is a critical record, why record keeping techniques are necessary to access and preserve this information, and how this information assists with successful medical treatment. Observations are based on the author’s personal journey through the Canadian health care environment.  Beyond treatment, a person should also consider what information to have immediately available in case of a sudden emergency.  These records should be maintained for immediate access in case a person is stricken or must evacuate their home quickly.

 

INTRODUCTION

 

A sudden emergency, an accident, or the diagnosis and care for a critical medical issue such as COVID-19 or other significant illness will catapult an individual into the medical environment of testing, diagnosis, treatment and continuing care.  Each procedure will involve records creation and maintenance by the care providers, in a variety of organizations.  Records managers, as professionals, assume responsibility for the records of organizations.  However, where our individual health is concerned, we are the subject of the various records that will accumulate about our issue.  For our use and protection, we are faced with collecting and maintaining personal health information as individuals, no matter where else this information may reside.  This paper will examine why this information is one’s most critical record, why such personal record keeping is necessary and how our information assists with successful treatment. Observations are based on a personal journey through the Canadian health care environment.  Finally, in light of current emergencies like hurricanes and wildfires, it is essential to maintain key health records in case we are stricken or have to leave our homes quickly.

 

Welcome to Canadian Medicine

 

Most people enjoy living a healthy and predictable life.  However, there may come a time when, through an accident or diagnosis of illness, the healthy person suddenly becomes the “patient”, and embarks on a pathway to healing, one that may involve a wide number of treatment providers and agencies.  Thankfully, our universal health care system in Canada means that individuals are not deprived of access for treatment.  However, across the country, various provincial jurisdictions govern the provision of health services, and within each province, diverse professional service providers may have a role to play.  The information that is documented and maintained is subject to each jurisdiction’s rules and responsibilities, varying from practice standards of professional groups to laws governing access and privacy.  Despite progress to unify these diverse systems, health information may still be fragmented and not always universally accessible to the health care providers.

 

Personal health information

 

The terminology identifying medical records has evolved to the collective term “personal health information” (1).  The retention and management of this information is governed by diverse organizational requirements, and generally is maintained for the use of the creator, not the subject of the information (2). 

 

As this paper will demonstrate, one patient will likely be seen by several doctors, may have many tests, and while the results may be collected or aggregated by the agencies administering the treatments, there is still no single source of truth or one “master” medical file on the patient.  The cumulative personal health information collected by the patient at each step becomes their key to understanding their diagnosis and treatment, and enables accurate communication between the patient and the various medical staff involved.  Furthermore, in an emergency, the patient’s ability to provide immediate and accurate medical history will speed up the provision of the necessary care.

 

This conclusion is based on my personal experience with a serious illness.  The paper discusses the management of personal health information from both the personal experience and the records manager’s perspective of collecting, organizing and managing information.

 

My case study

 

Records collection

 

Whether a person identifies symptoms or is suddenly stricken, whether the entrée to treatment is through a physician’s office, or through an emergency facility, the treatment process generally starts with a series of tests for diagnosis.  A friend warned me that an immediate reaction to medical news is shock, and that having a second person present helps with hearing and noting the information provided about treatment. (She called the shock “medicine brain”, and based on my experience, she was correct.  A second person heard the information I missed.)

 

A notebook was also recommended, to keep track of conversations, notations about medications provided, and reminders for activities, follow up meetings, etc. (Notetaking on my smart phone was not convenient, often not permitted because of proximity to medical equipment or difficult without access to WIFI.)  My husband was able to attend all meetings, and his observations assisted with my notetaking. 

 

Record collecting began after the first meeting with the family doctor, when we both recalled what the doctor said and noted the discussion.  The notebook became “the Blue Book” which evolved past note-taking to include handouts and other materials.  When the collection of information grew beyond five pages of documents, the Blue Book was added to the Blue Briefcase, to hold the accumulated material which had evolved into a personal case record and diary of all that happened in the journey.  This information was retained in chronological order, and corresponded to the dated notes in the Blue Book.  At each stage of my treatment, the notes provided me with quick recall and summation of treatment to discuss with the attending health staff.

 

Initial diagnosis and testing

 

My diagnosis arose from a sister’s medical event.  At a regular check-up, I made a casual comment in the family physician’s office.  On the basis of my sister’s issues, the family doctor ordered a series of routine tests.  One of them, a blood test, detected anomalies, and the doctor ordered more testing.  

 

As discussed above, notes of the conversation with the family physician were made in the Blue Book.  Doctor’s patient notes and medical laboratory records were created.  While doctors’ records are still not universally available in our province, a patient can sign up for access to the laboratory records and create copies of their laboratory results (3).  These copies are vital for future discussions (such as when complications arise).  

 

A referral from the family physician to an internist was also given, and at his office, medical history (from the Blue Book) was provided.  There were several sets of tests ordered. The dates and results of these tests were recorded in the Blue Book as they were completed, and were also shared by the internist’s office to the family doctor, who discussed them with me.  

 

Another referral from the family physician was made to a gastroenterologist, for a colonoscopy, properly called a “CT Scan of Abdomen and Pelvis with Contrast”, which took place in the local hospital.  Preparation for the test included detailed instructions and medication. On arrival for the test, a nurse provided a form requesting personal and family medical history, use of medications, allergies, and previous medical tests taken.

 

Now, in addition to this third physician’s records, there were pharmacy (prescription) records, test procedures, the hospital test record, and discharge instructions (4) following the test.  The discharge instructions provided a brief description of the test results, which were soon followed by another record, the laboratory results of the colonoscopy.  These results were forwarded automatically to the family physician, and a copy was provided to me.  All documents were collected into the Blue Book.  Note:  My husband was also present after the colonoscopy and heard the instructions from the gastroenterologist to the surgeon.  (Sedated, I heard nothing.)  The next day, the family physician provided confirmation of the diagnosis and the prompt need for surgery.  As part of the discussion, the family physician provided a printed copy of the analysis of the colonoscopy, which was noted in the Blue Book collection, and added to the Blue Briefcase  collection of documents, now including three sets of physicians’ tests and notes. I am uncertain if the provision of copies to patients is a standard procedure or just the practice of our family physician.  If such documents are not provided, I recommend asking for copies to be provided. 

 

Treatment

 

The next step in the process was surgery.  

 

At his office, the surgeon (Physician #4) confirmed a date for the procedure.  Again, there was a review of medical history, from the well-thumbed Blue Book.  His staff provided more information about the surgical process, including more handouts for the Blue Briefcase: “Dialogue for Patient Fasting Guidelines”, “Bowel Preparation Instructions – Surgery” (5) and a checklist of the various departments of the hospital to be cleared before surgery.

 

Appointments were made for these pre-surgical consultations at the hospital, first with the anaesthesiologist who reviewed my hospital testing record, and confirmed physical information, and a surgical nurse to brief me on the pre- and post- surgery requirements.  Again, I was required to provide my medical history (accumulating in the Blue Book), family medical history, medication use and allergies.  The anaesthesiologist indicated that the surgery would be 1.5 hours in duration, and that, prior to the surgery, no medications or supplements are permitted.  The surgical nurse provided two booklets: instructions (6) for the surgery preparation:  showering, hair washing, no use of deodorant or makeup prior to arrival at the hospital on the day of surgery.  She also confirmed the requirements of the post-surgical recovery outlined in the second booklet (7):  deep breathing, use of sugarless gum, coughing and stretching to ensure no complications from the surgical procedure. She instructed me to bring the second Enhanced Recovery after Surgery (ERAS) booklet to the hospital when I came in for surgery.  I answered questions about a representation agreement and final preparations if a full code was called. Again, all was noted in the Blue Book and the surgical instructions filed into the Blue Briefcase.  The last stop was at the hospital laboratory where blood work was done with results forwarded to the surgeon prior to the operating date.

 

Surgery took place two weeks later. Tissue samples were sent for biopsy, with the results forwarded to my family physician and the surgeon.

 

During the hospital stay, physicians, nurses and physiotherapists checked my progress, charting information into an integrated electronic record keeping system (8). All interactions with me were noted into the system, either by the nurses on a mobile computer that was brought to the bedside, or in the various locations of the hospital, including the laboratory, pharmacy or imaging department (9). I took no notes during this time.  

 

Five days later, I was discharged.  More information was provided by hospital staff, beyond the ERAS booklet, to cover diet (10).  A daily injection of medication was prescribed, to ensure no blood clotting after surgery. All conversations were noted in the Blue Book.

 

Emergency follow up 

 

Ten days later I was back at the Emergency Department with sharp pains and difficulty breathing.  Blue Book notes provided my recent medical history to the attending doctor.  After initial examinations by the emergency staff, including referral to my recent surgical history in the hospital’s electronic system, Physician #5 sent me for scans to determine the cause of my pain.  The scans revealed an infection as well as thrombosis in my lungs. I was readmitted to the hospital.  The infection site was drained and I was isolated for three days, until further tests revealed that the infection was not contagious.  All of this treatment was noted into the hospital electronic system, and forwarded to both my family physician and surgeon.  Two days later, I was discharged again, with medication to treat the infection as well as the thrombosis (a three month treatment of drugs).  Once again, I updated notes in the Blue Book to cover new medications and treatments.

 

A visit to my family physician was necessary, to prescribe the complete course of medications for the two conditions.  She also provided me a copy of the report by the BC Cancer Agency, which indicated that the surgery had successfully contained the tumour. This report was added to the Blue Briefcase.  She also referred me to a haematologist, (Physician #6) to confirm what further treatment might be required from the thrombosis.  

 

Ten days later, I visited the surgeon for the surgical follow up.  Despite the complications described here, he pronounced the surgery a success, and indicated that he would see me in a year for further examination.  He also referred me to an oncologist (Physician#7) for review of the tumour and any further treatment.  Ten days later, in our meeting, the oncologist stated that there was no need for further treatment. This was duly noted in the Blue Book.

 

The last medical visit was the haematologist, who reviewed my medical history from my Blue Book notes and the electronic hospital records.  He indicated that after the three month period of medication was completed, I was cured, and should expect no further complications, unless a similar set of circumstances arose in future. There were final notes (I thought) in the Blue Book, and much dancing and celebrating by me.

 

Recovery

 

Once hospitalization was finished, I started on the path to “back to normal” health.  The two challenges post-surgery were to regain strength and resume physical activity.  Clearly, improved muscle tone and increased energy levels were needed after the two month interval of surgery and recovery.  Normally I would swim or walk or play golf.    At our local recreation centre’s medical physiotherapy unit, I obtained an appointment with a specialist in post-surgical recovery.  She assessed my condition, reviewing my Blue Book notes and the documents associated with the surgery and post complications.  Through the next three months, she led me through a series of strengthening and balancing exercises, providing me with an illustrated set of instructions.  These were well-thumbed, copied and added to the Blue Briefcase.  I had graduated to the Weight Room for independent exercise, just as the facility closed because of the COVID 19 pandemic. However, a regime of walking and now golf have replaced the inside activities.

 

Finally, the last and continuing post-surgical issue is diet.  In the booklet provided upon discharge, the instructions were to follow a moderate diet for 6 to 8 weeks and then resume “normal” eating.  However, the notes from the Blue Book indicate that the physicians can assume, but don’t guarantee, when “normal” returns.    Six months post-surgery and after trial and error with different types of foods, digestion was definitely not back to normal.  Through a friend, I heard about Inspire Health (11), a cancer support agency approved by the BC Ministry of Health.  I registered with them, providing permission for them to access and review my hospital, laboratory and Cancer Agency medical history.  Inspire Health provides referrals to individual specialists, and I obtained the services of a dietitian.  Resulting from the closures caused by COVID, Inspire Health adopted a model of remote learning.  The dietitian booked a Zoom conference and my health history and subsequent diet issues were discussed.  The Blue Book notes were essential to provide context to the issues. After the first meeting online, we booked a series of Zoom calls. I was asked to maintain a food diary, and to submit it two days prior to each Zoom call.  

 

She immediately placed me back on the restricted diet provided when I was discharged from hospital. In addition, she identified food allergies and reactions to specific food types, which I was instructed to remove from my diet, as well as supplements and electrolytes to add to my diet. Each Zoom call resulted in six to eight pages of notes in the Blue Book, as well as copies of handouts about aspects of diet that related to my particular circumstances (12). She requested that further blood tests be undertaken, so I contacted my family physician, providing her with all of the information I had been given via my Blue Book notes, and she ordered the tests. Results indicated that I needed more iron and vitamins, which the physician ordered. These were noted in the Blue Book.

 

In all, there were four consultations with the dietitian, each resulting in recommendations for diet and supplements.  I shared the results of her recommendations with my family physician, who is now monitoring my results for iron deficiency.

 

A follow up visit to the internist who had been part of my diagnosis was the last step in the immediate care.  He concurred with the recommendations about diet supplements, and requested another follow up in six months to ensure progress.

 

Continuing care

 

This case will not be completed for at least five years, when the surgeon will pronounce my treatment complete. At year one past surgery, a scan and colonoscopy are required to ensure no return of cancer cells. If none are found, the tests will be repeated at three years post-surgery and again at five years. Each of the tests and scans will generate a set of additional records and notes, which are filed in chronological order within the Blue Briefcase and Book.

 

Benefits of the personal recordkeeping

 

The case study illustrates the varied sources and diverse types of records that accumulate as a patient is diagnosed and treated for a medical condition. Despite the evolving state of centralized electronic record keeping in the medical environment, the systems remain uncoordinated and unlinked, particularly the physicians’ records and those of outside service providers such as the physiotherapist and dietitian, whose records are created within their office environments. (Depending on the doctor’s relationship with the hospital, there may be access from the doctor’s office to the hospital information, but this access will not be universal.) Consequently, when required to describe their medical condition and history, a patient must have a comprehensive and reliable information source at their fingertips. In my case, the Blue Book and Briefcase started as reminders for me and evolved into the story of my health, which I was required to recite with each specialist consultation.

 

The need for this personal record keeping may be short-term, for the duration of the immediate treatment, or long-term, depending on the medical condition.  In my circumstances, because I will be treated for my condition for five years, this set of records will become a vital and ongoing active record until such time as I am declared “cured”.  Subsequently, this record keeping will become an historic record of my treatment, with a summary available at my fingertips should I require medical treatment for any reason in the future.  As previously noted, the physicians’ records may be retained for 16 years past my last contact or treatment, but for me, these become a permanently valuable set, and will be retained for my lifetime. 

 

In case of emergency – make your own information available

 

An emergency situation requires a different, less detailed, but equally important set of personal medical information. If a patient is suddenly stricken and must be taken into emergency treatment, a succinct summary of medical information will be essential for immediate treatment and of assistance to the first responders. Such information should be readily available to an emergency medical technician, ambulance attendant or other individual responding to the sudden accident, collapse, or other immediate event (13). Sometimes called the “ICE” file, this vital record should include the information that a paramedic or first responder will require to treat the patient and provide to the emergency staff at the hospital.

 

The original idea for the ICE file was based on an observation by a paramedic in the United Kingdom that most people have their smartphone with them at all times, so it was logical to encourage people to carry their essential information on their smartphone and in an unlocked file for ready access.  Various types of phones have software and procedures available so the phone will remain secure, yet the first responders will be able to access the information they need (14).  

 

Across Canada, various agencies have identified and provide template forms for the individual to complete and have available should an emergency arise when the individual may not be in a position to provide the information. As an example, the West Toronto Support Services (15) organization provides a two-page form which includes a section for relevant medical history, medications, medical allergies, and special considerations, such as dialysis or extensive medical history. This form is brief enough that it can be folded and carried in a wallet or purse. A second, more detailed form is provided by Patient Pathways (16), which is described as “one of the most thorough available and takes into consideration any Advanced Care Planning documents that outline your preferences for future healthcare.” This eight-page document covers full identification as well as important circumstances (e.g. deafness), life-threatening allergies, mobility and sensory issues, medical conditions and recent surgeries, prescription medication record, non-prescription medications, medical emergency contacts, current physicians, and personal and household contacts. Both agencies recommend that the completed forms, one for each family member, be stored in a clear plastic bag and attached to the front or inside of your refrigerator, where paramedics are trained to look. It is also recommended that these forms be updated annually, or when medical circumstances change.

 

While organizations may vary in their recommendations for the content of this information and where to keep it, first responders generally agree on what they will need: information on whether there is a life-threatening condition, whether there is a condition that could appear threatening but is not, and documentation (signed by your physician) that you are DNR (have indicated Do Not Resuscitate) if you have chosen this approach (17).

 

The “go bag”

 

Another type of emergency medical information you may require is for the evacuation or “go bag”: information that should be prepared in advance and ready when and if you must leave your home on short notice. With the recent wildfires, storms, floods and other disasters, emergency planning authorities are encouraging all citizens to have a carry-all or backpack, prepared in advance and filled with the essential items needed to live in an emergency shelter or other facility.

 

Along with other household information, emergency planning organizations are encouraging families to collect family health information (18). As an example, within its Home Emergency Plan kit template, Prepared BC includes a page for family health information, including names and medical card numbers, lists of medications, medical equipment and other health information, and family physicians. There is also a tip about keeping extra amounts of up-to-date prescriptions in the emergency kit, along with extra glasses and contact lenses, a first aid kit and basic COVID-19 disinfecting supplies. In popular media (19), articles on the emergency needs of families also stress that medical information and needed supplies of medicines should be a top priority for the emergency bag or pack.

 

Conclusion

 

As a healthy person, I had no significant medical history before my diagnosis, other than giving birth to my children and receiving immunizations and dental care. However, as I collected the information during my adventure (20) of cancer treatment, my professional training alerted me to collect, organize and retain the medical information that accumulated. My case record, at my fingertips, will have value for the duration of my treatment and, afterwards, will contribute to my medical history. In five years, upon conclusion of my treatment, I can foresee a file closing and culling of the documents to retain the essential information: laboratory reports, doctors’ notes and medications. I will include a summary in the “in case of emergency” and “to go” information that I will retain within the house. Hopefully, there will be no need for another adventure. If a hospital stay is required in future, the comprehensive medical information of the hospital treatment is in the CST system and available to the doctors and staff who will treat me. However, until the physicians’ office records are also online, some type of bridge record keeping will be required. Another notebook, perhaps.

 

Every person’s medical adventure is unique, and may include fewer or more steps than the one described in this paper.  However, the ability to document and track the treatment provides assistance to the health care professionals and enables the patient to contribute to his or her immediate care.

Endnotes

 

1 British Columbia. E-Health (Personal Health Information Access and Protection of Privacy) Act. SBC 2008, Ch. 38, Part 1 Definitions and Interpretation. “Personal health information means recorded information about an identifiable individual that is related to the individual’s health or the provision of health services.”

2 College of Physicians and Surgeons of British Columbia Practice Standard Medical Records, Data Stewardship and Confidentiality of Personal Health Information. Section 3-6 (2) “Records are required to be maintained for a minimum period of 16 years from either the date of last entry or from the age of majority, whichever is later, except as otherwise required by law.” www.cpsbc.ca/files/pdf/PSG-medical records.

3 In British Columbia, this service is called “my ehealth”. www.myehealth.ca

4 Vancouver Coastal Health, Providence Health Care. Discharge Instructions Following Colonoscopy/Sigmoidoscopy/Polypectomy/Gastroscopy/Endoscopic Ultrasound, Vancouver Coastal Health, March 2017.

5 Office of Dr. Eiman Zargaran, North Vancouver, BC., August, 2019.

6 Vancouver Coastal Health. Preparing for your Surgery: Information for Patients having an Operation. Vancouver Coastal Health, February 2016

7 Vancouver Coastal Health. Enhanced Recovery after Surgery (ERAS) Colon Surgery. Vancouver Coastal Health, January 2018. 

8 Clinical + Systems Transformation. On April 28, 2018, Lions Gate Hospital and Squamish General Hospital were the first of approximately 40 acute care sites across Vancouver Coastal Health, the Provincial Health Services Authority and Providence Health Care to implement a multi-year project including a new shared computer system to replace aging systems. https://cstproject.ca

9 Ibid. “What’s changing for clinical care: Mark’s Patient Journey” illustrates the various components of the electronic health system as a patient arrives at a hospital for care. The video matches my experience in the hospital setting.

10 “Your first 6-8 weeks after surgery”, Lions Gate Hospital, n.d..

11 InspireHealth Supportive Cancer Care. https://www.inspirehealth.ca

12 “Healthy Eating for your Condition”. “HealthLink BC is a government-funded telehealth service launched in 2001, which provides non-emergency health information to the residents of British Columbia, Canada through combined telephone, internet, mobile app, and print resources.” www.healthlinkbc.ca

13 Rod Brouhard. “Where to Keep Medical Information for Emergencies.” Emergency response organizations have varying recommendations, ranging from a bracelet or device on a person’s body to various locations in the home, in a wallet or purse or on a cell phone. https://www.verywellhealth.com/where-should-I-leave-medical-information-1298503?print. Accessed 10/1/2020.

14 TechGuy Labs. Put Emergency Medical Information on Your Smartphone. https://techguylabs.com/blog/put-emergency-medical-information-your-smartphone. Identifies processes for setting up through the Apple Health app on iPhones, or through free third-party apps available for iPhone and Android phones.

15 Toronto Paramedic Services. Information Sheet, West Toronto Support Services, n.d. https://wtss.org/news/in-case-of-emergency-form

16 Patient Pathways. In Case of Emergency Form. Guidelines and Forms, 2019. http://patientpathways.ca

17 Op. cit. Brouhard.

18 ww2.gov.bc.ca/gov/content/safety/emergency-prepared-response-recovery/preparedbc/guides&resources. This template covers the full gamut of emergency information including vital identification, property and other records. 

19 Shilton, A.C. “Prepare a ‘Go Bag’ Today.” The New York Times, September 20, 2020. P. D10

20 In his seminar Re-Envisioning the Retention Schedule, ARMA Canada and RIMTech Consulting, September 17, 2020, Bruce Miller referred to case records as individual “adventures”. Considering my contact with the various medical services for my treatment, I agree that the term is a perfect description of a personal medical case file.

 

About the Author

 

Alexandra (Sandie) Bradley has been a records and information manager for over 45 years, and a member of the ARMA Vancouver Chapter for 38 years.  Through her chapter, regional and International roles within ARMA, she has been a mentor and teacher, researcher, writer, and advocate for our profession.  She is a librarian and Certified Records Manager and was made a Fellow of ARMA International (Number 47) in 2012. She is currently a member of the Sagesse Editorial Board and focusses on research and writing. 

 

References

 

British Columbia.  E-Health (Personal Health Information Access and Protection of Privacy) Act.  SBC 2008, Ch. 38.  www.bclaws.ca/civix/document/id/complete/statreg/08038_01

 

British Columbia.  HealthLink BC.  https://www.healthlinkbc.ca.  In addition to resources for non-emergency health matters, this site is the definitive source for information about COVID-19 in British Columbia.

 

British Columbia.  Prepared BC. Fill-in-the-blanks Home Emergency Plan.  2019.  ww2.gov.bc.ca/preparedbc/preparedbc-guides/

 

Brouhard, Rod.  “Where to Keep Medical Information for Emergencies”.  Very Well Health, April, 2020.  https://www.verywellhealth.com/where-should-I-leave-medical-information-1298503?print

 

Clinical + Systems Transformation (CST).  https://cstproject.ca

 

College of Physicians and Surgeons of British Columbia, Vancouver, B.C.  Practice Standard:  Medical Records, Data Stewardship and Confidentiality of Personal Health Information.  Version 3.0, September 2014, rev. June 21, 2019. www.cpsbc.ca/files/pdf/PSG

 

InspireHealth Supportive Cancer Care.  https://www.inspirehealth.ca

 

Miller, Bruce.  Re-Envisioning the Retention Schedule.  RIMTech Consulting and ARMA Canada, September 17, 2020.

 

Patient Pathways.  In Case of Emergency Form: Guidelines and Form. 2019.  http://patientpathways.ca/plan-ahead/in-case-of-emergency/

 

Shilton, A.C.  “Prepare a ‘Go Bag’ Today.”  The New York Times, September 20, 2020.  P. D10

 

TechGuy Labs.  Put Emergency Medical Information on Your Smartphone.  https://techguylabs.com/blog/put-emergency-medical-information-your-smartphone.  Accessed 9/22/2020

 

Toronto Paramedic Services.  Information Sheet; In Case of Emergency.  West Toronto Support Services. n.d.  https://wtss.org/news/in-case-of-emergency/

 

Vancouver Coastal Health Authority.  http://www.vch.ca/for-health-professionals/library.  Various publications provided by physicians and hospital staff.

Starting the Digital Transformation Journey in a Pandemic

SAGESSE WINTER 2021 – AN ARMA CANADA PUBLICATION

By Sandra Bates and Amitabh Srivastav, IGP, CIP, PMP

Back to Sagesse 2021

Abstract

Digital transformation (DT) is a key driver and a critical factor for organizational success in the current digital business environment. The reality of the world-wide pandemic significantly affected normal business and social activities and has vaulted DT to the forefront of management’s priorities. The pandemic has forced management in businesses and governments alike to digitally transform in order to continue functioning and providing products and services. Although DT is not a new initiative, it has changed from an innovative journey to a business necessity, leaving management lost and struggling as to how to proceed. This article will examine the business drivers for DT by linking them to key concepts and identifying the DT initiative’s scope. Next, the article will discuss preparing the business case via Use Cases and examining change management risks and mitigation strategies.

Introduction

Since before the invention of the abacus or the compass, people have been using technology to improve the way they live and work in society.  For centuries, transformation in society due to new technology was slow, and the benefits were often easily realized. Digital technology has evolved primarily over the past 70 years; however, in the past 30 years it has developed at a rate never before experienced, affecting almost every industry. People once looked at digital technology and said, “wouldn’t it be cool if we could…”; then the technology began developing at a rate that left people apprehensive but needing to change along with it. Now they can realize some of those imaginings, those crazy ideas. People no longer need to imagine, because digital technologies can make them a reality.

COVID-19 vaulted many organizations and government agencies into a world where digital technology and processes were forced upon them.  Now organizations have a “taste” of what digital technologies can do, and there is no turning back.  Organizations have introduced many digital processes, yet may still be thinking in manual ways and not realizing the full potential of the digital capabilities that have been thrust upon them.

Hence the need for digital transformation (DT). DT is not solely about taking manual processes and using technology to replicate that process but rather it is about digitizing a process and using that technology to enhance or realize additional benefits.  Some examples of additional benefits are improved analytics, better self-serve capabilities, and effective collaboration.

For information professionals, DT is about recognizing the organization’s information as an asset and using it to its full potential.

Defining Digital Transformation

Digital transformation is the integration of digital technology into all areas of a business, fundamentally changing how you operate and deliver value to customers. It’s also a cultural change that requires organizations to continually challenge the status quo, experiment, and get comfortable with failure. (The Enterprisers Project, 2020)

Throughout this article, the authors will address DT from the information professional’s perspective, looking at terminology and best practices for engaging in DT initiatives.

Digital Transformation Drivers

Looking at the technology trends, legislation, regulations and stakeholder expectations gives an idea of some of the key drivers of DT.

Technology Trends: Artificial Intelligence (AI) 

The past few years have seen a drastic improvement in, and uptake of, Big Data (BD) and Machine Learning (ML). This is largely due to the vast amounts of data that are continuously collected. BD and ML offer solutions to use this data and give it value as an asset. Analytics and robotic process automation (RPA) reduce human effort, offer better, faster business decisions and improve customer experiences.

Knowledge and information gained from AI are changing the way organizations do business. The ability to improve the customer experience will determine the success or failure of organizations. Ultimately, the ability to digitally transform your organization is necessary to stay competitive and fiscally profitable. This is achieved by utilizing AI to understand and make decisions based on customer behaviours, values and needs. Not only can AI solutions provide the knowledge and information to accomplish this, but they can also do so more effectively and efficiently than people.

Legislation and Regulations

The Government of Canada has also recognized the value of DT. In understanding the value of DT, the government also recognized the risks associated with it. To protect the privacy of Canadians and safeguard against the misuse of their data, Canada’s Digital Charter was introduced. The objective of the Charter is to keep Canada competitive by realizing the economic and societal benefits of DT, while safeguarding against the privacy and security threats that can accompany a digital society.

(www.ic.gc.ca, accessed Oct 2020)

Canada is invested in this strategy.  In addition to empowering digital businesses, investing in DT can increase economic growth and accessibility. By 2030, it is expected that every Canadian business or home, regardless of geographical location, will have internet access. Currently, however, the reliable digital access that is expected in more populous areas is unavailable to 16% of the population. That means there are over six million Canadians who cannot promote their businesses, file their taxes online, apply for jobs online, shop, or connect with family via social media.

Privacy

Another primary driver for DT in Canada is information privacy. Canada’s privacy-by-design principles create an environment where the privacy of personal information is embedded into technology solutions. This is especially important with cloud-based solutions. When seeking solutions, it is essential to understand that information privacy and security protections outside of Canada are not always equal to Canada’s mandate.

In Canada, legislation such as the Personal Information Protection and Electronic Documents Act (PIPEDA) guides the approach to governance regarding the protection of the privacy of information by Canadian-based businesses.

In addition to the federal legislation, most provinces also have mandates, such as the Freedom of Information and Protection of Privacy Act (FOIP) and the Local Authority Freedom of Information and Protection of Privacy Act (LA FOIP) in Saskatchewan, which guide the handling of personal information by government agencies as well as access to information requests. The ability to provide information in response to access to information requests within legislated timelines often poses a challenge due to manual processes.  DT initiatives have the potential to improve responses to requests as well as make information available through initiatives such as Open Government, where the federal government is making more information accessible to the public through proactive publishing.

Although Canada’s PIPEDA may prove to be stronger than the privacy laws of many countries, businesses that wish to house information in Europe or conduct business with European agencies must also comply with the General Data Protection Regulation (GDPR), whose “right to be forgotten” requires that all personal information of an individual be erased upon request of the individual.

Organizations implementing technology solutions must understand the need for privacy and security measures. In Canada, the privacy laws are more robust than in some countries, and understanding where data residency issues may be a factor is essential for cloud solutions.  This means knowing where your data resides physically and understanding the local laws in those countries that may affect the privacy and security of the information. It is important to privacy protection that data residency issues are understood and addressed.

Concepts and Definitions

When the organization’s senior management team decides to embrace DT and move forward with the initiative, the team may have varying ideas about DT, the key concepts, the scope of the initiative, the end product, etc.  The first question is – what is DT? As noted above, DT is very broad, encompassing technology and non-technology concepts.  Understanding these key concepts can help the organization define its DT journey, its business case, and increase the probability of its success.  Some of the key concepts are:

Cloud enablement: is about a “cloud-first mindset” to leverage capabilities and tools, and deploy services that are outside the firewall.  This implies a cloud computing model that ARMA defines as, “A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” (Glossary of Records Management and Information Governance Terms, ARMA International TR 22-2016, p 9)

Intelligent capture: focuses on opportunities to convert physical information into digital formats across multiple channels.  Intelligent capture can also leverage capabilities of RPA using artificial intelligence via machine learning and machine teaching.
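To make the capture step concrete, here is a minimal sketch (an illustration only, not part of the original article) of converting a scanned page to text and pulling out one piece of structured data. It assumes the open-source Tesseract OCR engine is installed, along with the pytesseract and Pillow Python packages; the file name is hypothetical.

```python
import re

from PIL import Image
import pytesseract  # thin wrapper around the Tesseract OCR engine


def capture_page(image_path: str) -> dict:
    """OCR a scanned page and extract a simple date field, if one is present."""
    text = pytesseract.image_to_string(Image.open(image_path))

    # Look for an ISO-style date (YYYY-MM-DD) anywhere in the recognized text.
    match = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    return {
        "full_text": text,
        "detected_date": match.group(1) if match else None,
    }


if __name__ == "__main__":
    # "scanned_invoice.png" is a hypothetical example file.
    print(capture_page("scanned_invoice.png")["detected_date"])
```

A production capture platform would add channel handling (email, fax, mobile), confidence scoring, and routing of the extracted data into downstream systems.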

Repository-neutral content: refers to storage locations that are independent of the underlying systems and technology creating the content, so the content is available to diverse business users.  Integrated collaboration leverages diverse technology platforms and disparate repositories to allow teams to save, search, and share content.  This can include knowledge management, data analytics, data management, etc.

Information governance (IG): is a very critical, if not the most important concept for a successful DT initiative.  It is acknowledged that IG, “… helps organizations achieve business objectives, facilitates compliance with external requirements, and minimizes risk posed by sub-standard information handling practices.” (Glossary of Records Management and Information Governance Terms, ARMA International TR 22-2016, p 28)

Content services: is recognized as another key and critical concept because it facilitates the capability to deliver content and / or services on demand to any device, anywhere, at any time, independent of the source of the content.  In many cases, this is the end goal of a successful DT initiative. Because the content has business value, IG, compliance, security, and privacy are critical considerations when delivering the content to internal users and external customers as a new service or a digital product.  More importantly, the success or failure of a new online service or digital product can determine the success or failure of a DT initiative.  The DT initiative needs to define the product scope of the initiative.  PMI defines product scope as “The features and functions that characterize a product, service, or result.” (PMBoK 6th ed., p 715)  Therefore, the business case needs to clearly articulate the product, service, or result of the DT initiative – i.e. “What is the end-state or end goal?” In fact, the business case should identify the benefits and outcomes from the perspective of different stakeholders – i.e. “What does it mean to me?” or “How will it benefit me?”

Auto-classification: is the systematic identification and classification of content into categories according to a taxonomy representing logical structures such as functions, activities, procedures, methods, etc.  Auto-classification can leverage RPA using AI via machine learning and machine teaching to analyze the explosive growth of digital content and categorize it, including redundant, obsolete, and trivial (ROT) content.  “According to data compiled by Visual Capitalist, a single internet minute holds more than 400,000 hours of video streamed on Netflix, 500 hours of video uploaded by users on YouTube and nearly 42 million messages shared via WhatsApp.” (https://www.statista.com/chart/17518/data-created-in-an-internet-minute/, accessed Oct 2020).
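As an illustration only (not from the original article), the following minimal Python sketch shows the basic idea behind auto-classification using the scikit-learn library. The category labels and sample documents are hypothetical; a real deployment would train on the organization’s own retention schedule and a much larger body of content.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny labelled training sample: document text paired with a file plan category.
documents = [
    "travel requisition for conference attendance",
    "expense claim for airfare and hotel",
    "employment offer letter and signed contract",
    "performance review for probationary employee",
]
categories = ["Travel", "Travel", "Human Resources", "Human Resources"]

# Vectorize the text and fit a simple probabilistic classifier.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(documents, categories)

# Classify a new, unseen document.
new_document = ["hotel booking confirmation for staff travel"]
print(classifier.predict(new_document)[0])  # With this tiny sample, likely "Travel"
```

The same pattern, trained on labelled examples from each category in the file plan, is how a machine-learning classifier can assign incoming content to retention categories or flag ROT for review.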

Customer experience: is the collection of experiences, emotions, expectations, impressions, etc. resulting from interactions with online services and digital products across all platforms and delivery channels, from an organization’s website and mobile apps to chat and call centres.  Customer experience is closely associated with content services, because from an existential perspective a “thumbs up or down” on social media can materially affect the success or failure of new content services, as mentioned above.

Product Scope

While the above concepts help define a DT initiative, the organization still needs to define the product scope of the DT initiative – in other words, what will content services deliver to its stakeholders, and how?  When the organization accepts a “cloud-first” strategy, the cloud computing service model includes Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).  The nomenclature includes other services such as Content as a Service (CaaS), Managed Content as a Service (MCaaS), Data as a Service (DaaS), etc. More recent technology trends include Blockchain as a Service (BaaS) and Artificial Intelligence as a Service (AIaaS) from third-party cloud-based providers, amongst other services and products.

CaaS / MCaaS implies a “digital mindset” too, for the product scope to deliver services and products.  This means digitizing business processes for the content throughout the information lifecycle, from “cradle to grave.”  The scope for content services can be just as broad as the DT initiative itself.  So, the second question is: what is the product scope?  The scope can include the following key areas:

Information architecture: overlaps many areas of design, but some key ones are navigation, user experience, user interface, security model, taxonomy, etc.

Artificial Intelligence: can perform data analytics on content from many sources, such as big data, various types of sensors (i.e. the Internet of Things (IoT)), wearable devices, social media, etc.  The analysis can focus on trend analysis, predictive analytics, modelling, etc.  AI can also identify trending topics, curate content from disparate repositories, and deliver it based on user-defined rules.

Document management (DM): traditional document management was “save, search and share,” but now DM is more collaborative with real-time co-authoring on any device, from anywhere, anytime (assuming authorized access).

Records management (RM): has traditionally focused on physical records management, but now the greater emphasis is on digital content such as documents, presentations, reports, websites, social media posts, chats, email, video conferences, multi-media (images, video, and audio), etc.  Access to the records should be on any device, from anywhere, anytime (again, assuming authorized access).  Records management can include archival storage for long-term preservation.

Knowledge management (KM): is now becoming critical due to the exponential growth in content creation.  It includes digital content that is not a record.  Knowledge management can help derive insight, drive innovation, improve organizational performance, reduce operational risks, increase market share, etc.  Furthermore, “… not knowing what your organization knows is definitely a recipe for rework, stagnation, and inefficiencies.”  (The Official CIP Study Guide, AIIM, 2019, p 49).  Lew Platt, HP’s former CEO once famously stated, “If only HP knew what HP knows, we would be three times more productive.”

Search experience: goes to the heart of finding content using search technology and rendering search results from disparate repositories.  The search technology needs to crawl the content on a regular basis, and update a searchable index.  The search technology also needs to process queries to locate content matching the search criteria, and process the search results to sort them based on filters.  Finally, the search technology needs to format the results and render them for the user’s device.
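The crawl, index, query, and render steps described above can be illustrated with a very small in-memory inverted index. This is a sketch only, with hypothetical repositories and documents; production search engines add relevance ranking, security trimming, and incremental crawling.

```python
from collections import defaultdict

# "Crawled" content from two hypothetical repositories.
crawled_content = {
    ("SharePoint", "doc-001"): "records retention schedule for finance",
    ("FileShare", "doc-002"): "retention rules for engineering drawings",
    ("SharePoint", "doc-003"): "holiday party photo album",
}

# Build the searchable index: term -> set of (repository, document id).
index = defaultdict(set)
for location, text in crawled_content.items():
    for term in text.lower().split():
        index[term].add(location)


def search(query: str) -> list:
    """Return the documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return []
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]
    return sorted(results)


print(search("retention"))          # hits in both repositories
print(search("retention finance"))  # narrows to the finance document
```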

e-Discovery: is closely related to the search experience, but the focus is on discovering content in order to respond to litigation, compliance, investigation, and information requests.

Digital asset management (DAM): is a mindset shift to thinking of, treating, and managing information as business assets, applying a value to the information assets, and perhaps monetizing them.  This includes the digital content mentioned above, but with a greater emphasis on multi-media such as podcasts, video, digital images, and movies.  Digital assets can also include architectural and design documents, intellectual property, logos, trademarks, copyrights, etc.

Digital rights management (DRM): If the digital asset has business value, then the organization has to manage and control access to the asset.  DRM is a “… form of managing digital content to limit access from a specific device and / or prevent unauthorized copying or conversion …” (Glossary of Records Management and Information Governance Terms, ARMA International TR 22-2016, p 15)

Archival services: is really a combination of KM, DAM and DRM for organizations to develop and deliver new content services and products to internal business users, customers, and other stakeholders.  In some cases, it is also an opportunity for organizations to monetize their archived audio, video, and other types of content libraries.

Data management: has traditionally focused on structured data in warehouses and data marts (and now data lakes) for business intelligence, key performance indicators, decision modeling, and other analytics. Now data management should be included within KM because of the large volume of unstructured data, or vice versa – i.e. KM should include data management.  In fact, data structures such as data lakes store structured (e.g. financial, customer information, etc.) and unstructured (e.g. email, multi-media, social media, etc.) data / content.  Managing this data and content to derive knowledge and actionable insight is both data and knowledge management.

Business Case for a DT initiative

Success for DT in any organization relies heavily on making a good business case. A business case is often required several times throughout the DT work you are looking to accomplish. A well-developed business case demonstrates the value of the initiative to your organization and provides the rationale required by your executive to support the initiative, whether it is approval to proceed with developing a framework for a particular line of business or the implementation of a technology solution to advance DT in your organization.

Executives are responsible for the success of the organization and require sufficient information to support and approve funding for operational activities. The decision to support or provide funding is strongly influenced by the business case. Ensure that the reason, problem or current state that requires addressing is clearly defined, along with the benefits, risks, costs and impact the initiative will have for the organization.  It is important to be clear, concise and accurate in the statements provided.

Use Cases for the Business Case

How does one determine and decide the scope of the DT initiative when preparing the business case?  One approach is to identify and use personas for business users and determine their needs.  Creating personas can be time consuming and complex, but they can be the foundation for good requirements and user experience.  Common personas are Executive, Director, Senior Manager, HR Manager, Marketing Manager, Financial Analyst, Administrator, Records Manager, etc.  Specific industries would have specialized personas, e.g. hospitality would have Hotel Manager, Restaurant Manager; life sciences might have a Compliance Manager, Patient Care Manager, etc.

Developing Use Cases requires identifying the appropriate ones in order to define the scope of the DT initiative.  This is a four-step process for selecting the Use Cases and prioritizing them:

1. Select and identify many representative Use Cases in order to have a good sample of personas.
2. When identifying and assessing the Use Cases, consider strategic, financial, compliance, and operational risks.
3. Determine whether each Use Case has business value or not, and whether it is worth the risk or not.
4. Finally, “filter out” and prioritize about 10 to 15 Use Cases for the business case.

Note that the size of the organization and the focus of the DT initiative will influence the final number of Use Cases prioritized in the business case; a simple scoring sketch follows.
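The sketch below (an illustration, not from the original article) shows one simple way to score and shortlist Use Cases. The use cases, scores, and weighting are hypothetical; in practice they would come from stakeholder workshops and the risk categories listed above.

```python
candidate_use_cases = [
    # (name, business value 0-10, risk 0-10) - hypothetical examples
    ("Self-serve records request portal", 9, 4),
    ("Auto-classification of shared drives", 8, 6),
    ("Digitize legacy microfilm", 5, 7),
    ("Chatbot for HR policy questions", 6, 3),
]


def priority(use_case):
    """Rank by value relative to risk: higher value and lower risk float up."""
    _, value, risk = use_case
    return value - 0.5 * risk  # hypothetical weighting


shortlist = sorted(candidate_use_cases, key=priority, reverse=True)

# Keep the top N for the business case (10 to 15 in the article; 3 here for brevity).
for name, value, risk in shortlist[:3]:
    print(f"{name}: value={value}, risk={risk}")
```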

Governance, Compliance and Risk Management

Governance in any organization is essential to ensure the appropriate framework and procedures are in place to give effect to the direction the organization has decreed as its official strategy. Likewise, to ensure the accountability and efficiency of the DT strategy in the organization, a DT governance framework is crucial to the success of DT initiatives.

Compliance must also be a consideration when engaging in strategic planning and establishing oversight. The ability to understand and function within the mandate is imperative. The governance strategy must be achievable as well as understood, and information must be clear and readily accessible. Once implemented, a good practice is to have an audit process in place to assure compliance, monitor performance management, and determine corrective actions and future improvement needs.

As with all initiatives, there is risk associated with DT.  The first step in risk management is identifying the risks associated with embarking on the DT initiative, as well as understanding the risks associated with not proceeding. Threats to the organization are identified as anything that can cause disruption in services, financial loss or damage to reputation.

The need for risk mitigation should be identified in the business continuity plan. Understanding the risks associated with operations may be paramount to the success of the business. Risk mitigation must account for any factor, including pandemics, that can cause a disruption to business as usual. Dating back to the Spanish Flu of 1918, Canada has had many flu pandemics (www.thecanadianencyclopedia.ca, Oct 2020), three of which have occurred since the SARS outbreak in 2003. Given that pandemics have posed a greater risk in recent years, the necessity of including mitigations for a pandemic event equals or surpasses the need for mitigations for natural disasters, depending on your geographical location.

Change Management and Stakeholder Risks

Change is ubiquitous and permeates every organization, and the cliché is truer today than ever because technological change is so rapid.  Consequently, digital technologies are forcing organizations to embark on DT initiatives that fundamentally change how they operate.

Organizational Change Management (OCM) is often overlooked in technology implementations, which is unfortunate as it has the power to “move the needle” from failure to success. OCM is about initiating a “mindset shift” to change the organization’s culture. Recall the phrase originated by Peter Drucker, and made famous by Mark Fields, President at Ford, “Culture eats strategy for breakfast, lunch and dinner.” (https://www.supplychain247.com/article/organizational_culture_eats_strategy_for_breakfast_lunch_and_dinner/legacy_supply_chain_services#:~:text=Organizational%20culture%20eats%20strategy%20for%20breakfast%2C%20lunch%20and,of%20each%20other%20to%20create%20true%20organizational%20transformation, accessed Oct 2020)  In other words – “culture is everything!”

OCM is managing the human impact of organizational change.  The Information Governance Body of Knowledge (IGBOK) states, “OCM … is a framework that describes … ‘changes to processes, job roles, organizational structures and type and uses of technology.’” (Information Governance Body of Knowledge, 1st ed., ARMA International, p 120).  In the context of a DT initiative, an OCM strategy is essential for a successful initiative.  OCM will involve challenging the “way things are done” with respect to people, process and technology.  This means initiating conversations not only around technology, but also around how teams collaborate today, and how they can collaborate more effectively and efficiently tomorrow.  Digitalizing business processes is more than just applying new technology.  Digitizing must focus on understanding the growing and changing business drivers, improving those processes, applying governance, and educating business users in “new ways of working” – in other words, changing the culture.  The challenge for OCM is how to make the culture work with the DT initiative rather than against it. To that end, it is imperative to first understand the organization’s desire for change, so that culture becomes an enabler of change.

In a DT initiative, the organization needs to examine the OCM and stakeholder risk along two dimensions – cultural mindset and using new processes with new technology.

Firstly, if the culture is rigid, i.e. “stuck in the past”, then changing the mindset will be challenging.  In general, this is evident in large organizations with heavy bureaucracy that continue to “do things as they were done.”  This risk can lead to diminishing organizational performance and even reduce the chance of survival.  Only when the pain of the status quo threatens the ability to survive is transformative change forced onto the organization.  On the other hand, organizations with an entrepreneurial culture are all about change and “doing things differently and better.”  This is evident especially in start-up organizations, because they need to continuously adapt and transform to survive.  As an example, this risk can affect intelligent capture, integrated collaboration, IG, customer experience, etc.

Secondly, if the culture is rigid, then organizations are prone to use legacy processes and tools.  Unfortunately, the imperative to develop new processes that will use new technology is absent.  Furthermore, when organizations implement new processes and technology, the business users view the change as a “threat” to their skills, their job, and their daily business routine.  This risk can introduce conflict in the workplace between those who feel threatened and management, and even co-workers who embrace the new processes and technology.  As an example, this risk can affect cloud enablement, repository-neutral content, content services, auto-classification, etc.

Below is a quadrant grid that illustrates these risk relationships.  Along the x-axis is the organization’s cultural mindset to change, from “Rigid” to “Adaptable”.  Along the y-axis is the organization’s ability to accept new processes and technology tools.  Quadrant I represents the highest probability of failure for the DT initiative, while Quadrant IV represents the highest probability of success – i.e. only one of the four quadrants favours success.

Quadrants II and III represent high probability of failure, because only one of the two dimensions has moved from a “Low (0)” towards “High (10).”  The middle of the grid represents medium probability of success.  The organization recognizes it must change and is willing to do so, but there is significant resistance.  This most likely corresponds to the organization changing from legacy processes to new processes using new tools, but the organization is facing resistance, as just mentioned. 
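As an illustrative sketch only (not from the original article), the quadrant model can be expressed as a simple function that maps two 0–10 scores onto the grid. The thresholds and the assignment of Quadrants II and III are assumptions made for the example, and the “middle of the grid” medium case is omitted for brevity.

```python
def dt_quadrant(culture_score: float, process_tool_score: float) -> str:
    """Map two 0-10 scores (cultural adaptability, acceptance of new
    processes and tools) onto the four quadrants of the risk grid."""
    adaptable = culture_score >= 5      # x-axis: Rigid (low) to Adaptable (high)
    accepting = process_tool_score >= 5  # y-axis: legacy (low) to new tools (high)

    if not adaptable and not accepting:
        return "Quadrant I: highest probability of failure"
    if adaptable and accepting:
        return "Quadrant IV: highest probability of success"
    if adaptable and not accepting:
        return "Quadrant II: high probability of failure (tools lag culture)"
    return "Quadrant III: high probability of failure (culture lags tools)"


# Example: a fairly adaptable culture that is still resisting new tools.
print(dt_quadrant(7, 4))
```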

The DT initiative will not follow a straight line from Quadrant I to IV, with business users eagerly adapting and willing to change such that the initiative’s risks reduce steadily over time.  Instead, the initiative will zigzag from decreasing risk to increasing risk, and back to decreasing risk.  Consequently, the risk management response will change according to the attributes that the organization is transforming.  For example, the DT initiative’s effort to improve cloud enablement might face less resistance because the IT department is eager to adopt the new cloud-based tools.  However, IG related to developing new processes, and training to use auto-classification tools in the cloud, may face resistance from the business users.  These could hinder the DT initiative, so the risk management response needs to address these two attributes in order to get the DT initiative “back on track.”

Summary

DT is a very complex, broad, diverse and yet advantageous undertaking. It is an inevitable chapter in the success of business in a technology-driven world. When beginning the DT journey, it is important to understand that DT is the new normal for business and society, and is not “a” project. Instead, DT is a culmination of innovative solutions and constant change that will provide advantages and is necessary to help organizations remain competitive. Additionally, in the information industry, DT adds value to the information assets and data holdings of an organization, allowing its information assets and data to be used to make better business decisions.

Research and experience are the keys to success with DT initiatives. It is important to remember that DT is relatively new, and this is the time to explore, experiment and be comfortable with the understanding that the organization will have successes and challenges along the way.

About the Authors

Amitabh Srivastav is an IG/IM transformation strategist. He provides CxO/VP-level consulting advice on digital transformation and focuses on content convergence, process automation, change and risk management, and governance and compliance. He writes and speaks at industry conferences about digital transformation and participates in the development of standards and certification exams.

Sandra Bates is passionate about digital transformation and a recognized professional in the security, information and privacy industries. Her passion is evident through her work within the Executive Government of Saskatchewan. She is an engaged “industry” volunteer as President-Elect of the ARMA Saskatchewan Chapter, while being actively involved in AIIM Digital Transformation groups.