Data-Driven Content Management
Many of you who are involved in packaging and consumer branding will be at least loosely familiar with the term content management, or possibly the acronym ECM (enterprise content management). These information systems house, and often control, a wide variety of information relating to the inner workings of a business or, as I'll explain, to the "outward-facing" worlds of packaging, marketing, and Web presence. However, the acronym could also be written ECm, because there is frequently a great deal of enterprise content without much management.
The principal reason for this is that for many ECM vendors, it's easier to deploy expensive mass storage, interfaces, and documentation to the pharmaceutical market than to implement the detailed processes and corporate cultural change needed to make an ECM work in a regulated, audited, and, most importantly, human environment.
These challenges grow in the real world of pharma IT, where change itself can be daunting and where accurate, reliable data is no longer simply an ideal but mission critical. Successful ECM installations therefore carefully integrate internal IT skills with effective implementation strategies and sustainable third-party software offerings to strike the balances needed to support such a system.
As the industry clears some of these hurdles, we do see several true leaders in the market successfully deploying managed Web content, managed marketing materials, print-on-demand, and the regulatory-compliant processes needed to handle this content. So let's turn our focus to a relatively new frontier in content management, namely managing the actual consumer-readable and physician-readable copy (text) that appears on the primary or secondary package and on the label, leaflet, or other public-facing print vehicle.
Even today, the vast majority of this textual data is moved through internal corporate approval processes and the packaging supply chain in a document-driven workflow. MS Word, Excel, or other documents are sent as email attachments or possibly checked in and out of a SharePoint site to get to their reviewers, translators, and constituents.
This can be cumbersome, confusing, error-prone, and at the very least highly inefficient. Premedia operators and others in the print-preparation supply chain are often found cutting and pasting text from these documents straight into the artwork, typically Adobe Illustrator for packaging. There is no true audit trail on a cut-and-paste; when I demonstrated this process at a conference on pharmaceutical labeling, you could hear an audible gasp. So much care, albeit document-based, to get the data to that point, only to finish with a process that's tedious and error-prone.
Because of this, massive amounts of QC/QA are applied to artwork to strain out any errors that might have been introduced. Automated and human QC is applied so liberally that there is an informal "200 percent QC" standard (reading everything twice). This is possibly comforting, but it's also very expensive and slow; far better not to introduce the error to begin with.
If we take a step back from the documents, we often find that much of this data actually originated in a controlled system. Good examples are the Supplement Facts section, the Cautions, the Ingredients, the Distribution Statement, and other regulated information. If that data could be exported as data (not as documents) and then directly "injected" into the artwork without typing, pasting, or otherwise handling it, the risk of errors would decrease substantially, and the remaining QC could naturally focus on the fewer areas that are not automated or that were actually intended to change.
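To make the idea of "injection" concrete, here is a minimal sketch of populating artwork text placeholders directly from approved data instead of pasting it by hand. All field names, copy strings, and the template format are invented for illustration; a real system would export from the controlled source and feed the artwork application directly.

```python
# Hypothetical example: approved copy exported as data, then "injected"
# into artwork placeholders. No hands touch the text in between.

approved_copy = {
    "distribution_statement": "Distributed by Example Pharma, Anytown, USA.",
    "ingredients": "Vitamin C (ascorbic acid), gelatin, purified water.",
    "cautions": "Keep out of reach of children.",
}

# The artwork template holds named placeholders where controlled copy lands.
artwork_template = (
    "INGREDIENTS: {ingredients}\n"
    "CAUTION: {cautions}\n"
    "{distribution_statement}"
)

def inject(template: str, copy: dict) -> str:
    """Populate every placeholder from the approved data export."""
    return template.format(**copy)
```

Because the substitution is mechanical, QC can concentrate on the template itself and on any fields that were deliberately changed.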
Another advantage of this data-driven approach is more subtle, but equally powerful. Assume that the data is in a database and not a document: now we can use the power of the database to relate different copy elements to different packages in a more efficient and dynamic manner.
Let's look at how a distribution statement and an ingredient list would each flow. In the current workflow, every product typically has its own "copy deck," a list of copy associated with a package. The distribution statement, even if it is used on 1,000 packages, is contained in each and every copy deck (some companies might group a few products together in a copy deck, but the process is still inefficient).
In the database-driven model, this same statement would be securely written, approved, and, where multiple languages are involved, translated once. From there the system would "relate" that copy element to whichever packages rightfully contain it. It would not be reviewed again and again or edited by various parties.
The ingredient list would be handled the same way but would be "related" to fewer packages, namely those with that exact list of ingredients. In this way the database can present the copy deck at any point in the workflow by presenting all (and only) the elements related to the package in question. On approval, this deck could be exported for automatic flow into the artwork. Thus the ingredient list and the distribution statement are both applied, but at different levels, based on how each relates to the artwork.
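The relational model described above can be sketched in a few lines. The element IDs, SKUs, and copy strings here are hypothetical, not any vendor's schema; the point is that each element lives once and is merely related to the packages that use it.

```python
# Hypothetical sketch of the database-driven model: each copy element is
# written, approved, and translated once, then related to packages.

copy_elements = {
    "DIST-001": "Distributed by Example Pharma, Anytown, USA.",
    "ING-042": "Vitamin C (ascorbic acid), gelatin, purified water.",
    "CAU-007": "Keep out of reach of children.",
}

# Relations: the distribution statement is shared broadly; the ingredient
# list is related only to packages with exactly those ingredients.
relations = {
    "SKU-1001": ["DIST-001", "ING-042", "CAU-007"],
    "SKU-1002": ["DIST-001", "CAU-007"],  # same statement, different formula
}

def copy_deck(sku: str) -> list:
    """Assemble all (and only) the elements related to one package."""
    return [copy_elements[eid] for eid in relations[sku]]
```

Editing "DIST-001" once updates every package that relates to it, which is exactly why the statement no longer needs to be reviewed deck by deck.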
In the non-pharma market, Schawk has been using this process for several years and has created thousands of packages with a markedly lower rate of content errors. On the artwork-creation side, we also pick up a good deal of speed. Rather than formatting each piece of copy for color, size, font, and placement, we build the artwork by pre-specifying all of these attributes in a template. When the type is brought in, its placement, along with that of all other items in the artwork, is fully automated. Though the template takes some time to create and qualify, its use, often across multiple SKUs, brings excellent returns in speed and error reduction.
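A pre-qualified template of the kind described above might look like the following sketch. The slot names, attribute fields, and coordinate scheme are assumptions for illustration; a production template would live in the artwork application itself.

```python
# Hypothetical template: formatting attributes (font, size, color,
# position) are specified once per element slot, so incoming copy
# needs no per-package manual formatting.

template_spec = {
    "ingredients": {"font": "Helvetica", "size_pt": 7,
                    "color": "black", "position": (10, 120)},
    "distribution_statement": {"font": "Helvetica", "size_pt": 6,
                               "color": "black", "position": (10, 150)},
}

def place_copy(spec: dict, copy: dict) -> list:
    """Merge each incoming copy string with its pre-specified attributes."""
    return [{"text": copy[slot], **attrs}
            for slot, attrs in spec.items()
            if slot in copy]
```

Once the template is qualified, every SKU that uses it gets identical, automated formatting, which is where the speed and error reduction come from.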