References and Discussions


Information technology leaders and educators often misuse the terms framework, methodology and model when explaining concepts with examples and differing terminology; in much the same manner, users contact the service desk to report incidents, not problems, though the words are often mistakenly interchanged. Terminology is important: even words that seem related are not closely synonymous. The effort to clarify often does the opposite, as was the case recently in a discussion about the correct, or perhaps most accurate, use of the words framework, methodology and model within technical documents. The three terms have distinct meanings, which may depend upon the context in which they are used and who is defining them.

ITIL, PMBOK, CISSP CBK, COBIT, and CMMI are examples of frameworks. The ITIL publications present a recommended framework that can serve as a basis for the unique needs of an organization. A framework is a set of established, written good practices that are used during strategy and design. Frameworks are usually general and theoretical, and they require customization to a specific environment. A framework exists in a textbook or some other document and may be used as a reference and starting point by a practitioner. Frameworks are impractical to implement as provided.

IT service management in practice (and as implemented in an organization) consists of a set of methodologies, which usually includes processes such as change management, configuration management and capacity management as they are executed in an environment. A methodology is thus a system or set of systems for producing an outcome or output, used during operation or action. A de facto methodology exists to accomplish a task whether or not it is documented or standardized (as a process, procedure, work detail, SOP, etc.), and it may or may not be the most efficient or effective one.

A flow chart is a graphical representation of a methodology, as is an instruction manual or anything else that offers step-by-step instructions.

A model is a prototype, proposed procedure, or something else that can be used to test or simulate. It may be real or logical. In this explanation, a model could be a methodology that is being tested or simulated.

The ITIL incident management process is explored below to further illustrate and differentiate the three terms.

The framework for an incident management process is documented in the ITIL publications, with the majority of the best practices discussed in the Service Operation volume. The set of procedures that has been implemented to guide the operation of an organization is the methodology (the methods) that is established. In the absence of a uniform methodology, the technique and approach to providing service will vary, as will the results, outcomes, outputs, quality and customer satisfaction. During the early stages of process improvement, multiple approaches are often considered and tested. Each approach may be considered a model, with one selected, refined and further tested prior to standardization and deployment.

A direct analogy can be drawn between the use of frameworks, methodologies and models in information technology and the development of automation, assembly lines and interchangeable parts in manufacturing during the 1800s. Standardization and repeatability in factories create efficiency, high output of predictable results and increased quality: thousands of virtually identical units crafted within controlled tolerances. This is the ideal state for information technology processes: a routine, orchestrated approach; a repeatable experience that meets or exceeds a user's expectations; predictable results; and the ability to assure quality.


A procedure is a granular set of instructions necessary to perform the actual tasks that processes begin to define. A procedure normally establishes the roles and responsibilities in more detail than a process. A procedure is the first descriptive document that specifies how to accomplish a task, and it is the initial documented reference point. Because of the detailed nature of procedures, the necessity for multiple exit/stop points, and the reality that procedures follow a less binary, logic-based flow than processes, graphical languages such as the Unified Modeling Language (UML) can capture detailed instructions and interactions in concise diagrams.

The minimum requirements to properly write a procedure are:

1) Who performs the task, and in ITIL/ITSM implementations, who is informed when a task is completed, whether or not that task was successful. Who is a role, not a specific person. RACI charts often accompany procedures that are graphically depicted.
2) What specific steps are performed.
3) When the steps are performed.
4) How the steps are performed.
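
As an illustration only, the four requirements can be captured in a simple structured record. The sketch below uses Python; the field names and values are hypothetical assumptions, not terms defined by ITIL.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four minimum elements of a procedure step.
# Field names are illustrative assumptions, not an ITIL-defined schema.
@dataclass
class ProcedureStep:
    who: str    # the role (never a named person) performing the task
    what: str   # the specific step performed
    when: str   # the schedule or trigger for the step
    how: str    # the scripted instructions to follow

@dataclass
class Procedure:
    name: str
    notify_on_completion: str   # role informed when a task completes
    steps: list[ProcedureStep] = field(default_factory=list)

# Example: one step of the email incident described below.
email_fix = Procedure(
    name="Restore email client send capability",
    notify_on_completion="Incident Manager",
)
email_fix.steps.append(ProcedureStep(
    who="Tier 1 service representative",
    what="Verify the client's outgoing mail settings",
    when="During standard support hours",
    how="Follow the scripted instructions for email incidents",
))
```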

If a user calls the service desk and reports that their email client will not send email, a tier 1 service representative (who) would identify the cause of the incident (what), work through a scripted set of instructions (how) and, assuming the incident occurs during standard support hours (when), resolve the incident.

Procedures are not meant to capture the minute details of the work to be performed; those details are documented in a specialized Work Detail document.

Many service desks do not provide 24x7x365 service, instead having core service hours and different procedures for after-hours support, emergency service, VIP support or other special situations. These are examples of how multiple procedures can support one process.

Procedures can range from simple to complex, and the method used to document a procedure depends upon its overall complexity. A simple procedure could be documented with two tables: one detailing the roles and responsibilities, and a second giving a high-level description of the steps that need to be taken to complete the procedure.

In a more complex procedure, a graphical language such as Unified Modeling Language (UML) can be used to accurately describe the procedure. Below is a sample procedure of a patient entering a hospital emergency room, depicted in UML:

[Figure: Hospital ER Example]



Documentation is incomplete unless it includes certain suggested base information that should accompany a procedure through the approval process. This information includes:

1) The parent process of the procedure.
2) Dates, including approval, effective and expiration.
3) Approver, commonly the change manager in ITIL implementations.
4) Revision history.
5) A brief description.
6) The procedure owner(s).
7) Roles and responsibilities.
8) Related documents, such as a parent policy or child procedure(s).
9) The procedure diagram, or a roles and responsibilities chart and a listing of steps in text.
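
A minimal sketch of this base information as a structured record may make the checklist concrete; the field names below are assumptions chosen for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of the base information that accompanies a
# procedure through approval; field names are illustrative only.
@dataclass
class ProcedureDocument:
    parent_process: str
    approval_date: date
    effective_date: date
    expiration_date: date
    approver: str                  # commonly the change manager in ITIL shops
    revision_history: list[str]
    description: str
    owners: list[str]
    roles_and_responsibilities: dict[str, str]
    related_documents: list[str]   # parent policy, child procedure(s), etc.
    diagram_reference: str         # diagram, or roles chart plus step list
```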


A process is a high-level description of the activities required to accomplish a specific task, develop a product or complete a service, and it describes the interrelationships between the departments or functional areas that take part in one pass through the overall lifecycle. Processes are usually described in graphical languages, such as flow charts. For the purposes of this entry, the term process refers to IT (ITIL/ITSM) processes that underpin business processes. The distinction is made because managers who own business processes may not own IT processes, which are usually governed by the ITIL change management process. IT processes may also change with technological innovation, even though the supported business process may not be impacted by the technology.

The minimum requirements to properly write a process are:

1) The inputs to and outputs (or outcomes) from the process, and the trigger(s) that initiate the process.
2) Who is responsible (not normally accountable) and who is communicated to (if applicable), for performing the action. “Who” is rarely a specific person; it is commonly defined by a role or set of roles.
3) What major tasks or functions are performed; not specifically how they are performed.
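
To make the three requirements concrete, the sketch below expresses the computer-purchase process shown later as plain data: a trigger, inputs, outputs and "swim lanes" mapping each responsible role to its major tasks (the what, not the how). The role and task names are invented for illustration.

```python
# Hypothetical sketch of a process definition: trigger, inputs, outputs,
# and swim lanes that assign major tasks to roles without specifying how.
purchase_computer_process = {
    "trigger": "New employee start date confirmed",
    "inputs": ["Approved requisition", "Standard hardware catalog"],
    "outputs": ["Configured computer delivered to the employee"],
    "swim_lanes": {
        "Hiring manager": ["Submit requisition"],
        "Procurement": ["Issue purchase order", "Receive hardware"],
        "IT operations": ["Apply standard image", "Deliver and verify"],
    },
}
```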

A graphical depiction of a process can be drawn in any modeling language, or text may be used when appropriate. When documenting a process with explicitly defined responsibility, a standard logic-based flow chart is drawn with boundaries, or "swim lanes".

An example of a process for purchasing a computer for a new employee with explicitly defined responsibility is shown below:

[Figure: Process for Purchasing a Computer]



While an illustration is often a starting point for documenting a process and the first artifact inspected during reviews, documentation is incomplete without the following suggested base accompanying information:

1) Dates, including approval, effective and expiration.
2) Approver, commonly the change manager in ITIL implementations.
3) Revision history.
4) A brief description.
5) The process owner(s).
6) Related documents, such as a parent policy or child procedure(s).
7) The process diagram itself.



ITIL was developed by the British government in the 1980s as a framework to standardize the way that information technology departments operate. ITIL practitioners use a set of best practices to enhance efficiency, improve the experience and satisfaction of the end users of their systems, and maximize the value delivered to the organization.

ITIL establishes a common organizational structure for information technology departments that is separated into four groups, called functions: IT Operations Management, Technical Management, Applications Management and the Service Desk (routinely called the help desk in many organizations). The offerings of the IT department, which are decided upon by the business, are referred to as services and are documented in a catalog. Services in themselves become valuable when agreed-upon service levels are documented and adhered to in service level agreements.

Services are conceived, developed, implemented, supported and improved through the five stages of the ITIL lifecycle – Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement, respectively. Throughout the lifecycle, there are various defined roles, processes and best practices that focus IT professionals on supporting the mission of the business. ITIL is customer oriented, not technology focused. During an ITIL implementation, the dissimilar structures found in the remnants of MIS groups that formed as data processing was revolutionized over the past two decades are transformed into standardized, effective bodies.

A comparison can be made to the way fast food restaurants operate; call it the McDonaldization of computer departments. If you walk into a McDonald's or another fast food restaurant – perhaps the pinnacle of service management – you know exactly what to do. You walk up to the cashier (the service desk) and place an order; there is a catalog (the food they offer, with prices clearly displayed) for you to choose from; and there are policies, processes, procedures and work detail that are strictly adhered to so that you receive what you ordered in a consistent manner, regardless of which location you happen to be at.

What would happen if they decided to shut the grill down for cleaning during peak lunch or dinner hours? What if you saw their newest menu offering advertised on television, only to find that the cashier had no idea what you were talking about, or that the staff had no idea how to prepare it? Or if menu items were routinely unavailable or did not taste the way that you expected them to? The results are obvious: you would be an unhappy customer, and the business would lose money on your transaction, and potentially you as a future revenue source.

The standardized framework and methods that are deployed to each restaurant (their capabilities) enable them to maintain uniformity even with a low-skilled, transient workforce (their resources).

Unlike the fast food industry, IT departments are composed of highly educated, skilled professionals, and minimizing attrition is a critical factor for success, since the loss of an employee includes the loss of their accumulated systems knowledge and experience. Nevertheless, for a moment, apply the same thought pattern to IT – policy, process, procedure and work detail, standardized and methodical. When a user's service has been disrupted or they need something, they call a central point of contact, the service desk, and the technician attempts to quickly resolve their difficulty and restore service. In the case of a fulfillment, such as a need for a toner cartridge, a password reset, access to a resource or instructions on how to accomplish a task, the technician services their needs by following a set of documented, pre-approved and authorized procedures.

Behind the scenes, there is a set of operating procedures and work detail instructions that make sure that computer systems are running according to agreed upon service levels, that changes are strictly controlled, that documentation is developed and maintained, and unless absolutely necessary, changes are applied at the most opportune time to the business with minimal disruption to services.

Whether a customer is having a meal prepared for them or using a sophisticated information system, their objective is actually quite similar. The customers want the services without the ownership and risks of the infrastructure that delivered that service to them. And if they are not satisfied with the service that they receive, they will find it elsewhere; in these examples, a different restaurant, or a service provider outside of the organization.


At the heart of most service management implementations are metrics. Metrics are a critical success factor in gauging accomplishment or failure, the satisfaction of customers, and improvement or decline from an established baseline. A quote attributed to Peter Drucker, "If you can't measure it you can't manage it; if you can't manage it you can't improve it", underpins the importance of metrics. For metrics to succeed, two factors cannot be overlooked: first, having the correct metrics, and second, assuring that the metrics do not encourage undesirable behaviors.

The proper metrics measure the factors that are important to success. A service desk, for example, might be measured by textbook performance metrics such as the percentage of calls resolved during the initial phone conversation, the average time taken to close an incident and the number of incidents that required escalation, whether hierarchical or functional, in a given time period. Though these metrics may seem sound, a service manager who sets targeted, single-dimensional goals may find that customer service suffers, along with other unexpected consequences.

If success is measured by an increased closure rate during the initial phone call or a reduction in the number of incidents that require escalation, there is a disincentive for service desk employees to escalate calls. The result could be customer dissatisfaction, as a service desk employee who is discouraged from escalating calls may unnecessarily continue attempting to restore service beyond the point that they are capable of. Likewise, when measurements focus too strongly on elapsed time, there is an incentive to spend as little time as possible on each incident, and customers may perceive poor quality service due to a perceived lack of attention. If a service desk employee perceives that they have spent too much time on one call, they may attempt to rush future calls to reduce their overall average.

Human nature will cause employees to adapt so that they will succeed. If the telephony system is not integrated with the service desk software application, there is certainly a risk of, and an incentive for, inaccurate data entry, or simply not creating an incident for any call that takes an excessive amount of time to close. Best practice models recommend that incident records be opened during the early stages of the call, because the most precise information results from data entry as the call progresses, not from entering the information after the fact. Thus a service desk employee could easily undermine time-based metrics by manipulating time stamps or omitting records. Neither of these behaviors is desirable. Metrics must not only be achievable, they must encourage behaviors that align with organizational service goals.
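
One way to blunt single-metric gaming is to compute and report several metrics side by side. The sketch below is illustrative only; the incident records are invented, and a real implementation would draw them from the service desk tool.

```python
# Illustrative only: computing several service desk metrics together so
# that optimizing any one of them in isolation becomes visible.
incidents = [
    {"resolved_first_call": True,  "minutes_open": 12, "escalated": False},
    {"resolved_first_call": False, "minutes_open": 95, "escalated": True},
    {"resolved_first_call": True,  "minutes_open": 7,  "escalated": False},
    {"resolved_first_call": True,  "minutes_open": 21, "escalated": False},
]

n = len(incidents)
first_call_rate = sum(i["resolved_first_call"] for i in incidents) / n
avg_minutes     = sum(i["minutes_open"] for i in incidents) / n
escalation_rate = sum(i["escalated"] for i in incidents) / n

# A falling escalation rate paired with rising handle time suggests calls
# are being held beyond the representative's capability; reporting the
# three together exposes the trade-off.
print(f"First-call resolution: {first_call_rate:.0%}")
print(f"Average time open:     {avg_minutes:.0f} min")
print(f"Escalation rate:       {escalation_rate:.0%}")
```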

Firms that sell products and services have struggled with creating metrics and incentives that encourage ethical behavior. If a salesperson is commissioned based upon the profitability of a product or service, they are apt to sell the higher-profit item for short-term economic gain, even if it is not the correct solution for the customer. Since service management takes a sales and marketing approach to the customers of information technology, metrics and the resulting behaviors are of considerable importance in all processes. Though IT professionals within a firm are not paid a commission on the solutions they recommend, there is usually an incentive for them to propose solutions within their comfort zone. Many organizations have a low tolerance for or punish mistakes, or have instituted metrics that measure success rather than added value.

The practice of Management by Objectives (MBO), popularized by Peter Drucker, has been challenged in many organizations for this reason. It is not surprising that one of the most notable detractors of MBO was Dr. W. Edwards Deming, whose key principles are fundamental to many modern IT frameworks, including ITIL.

Metrics are the most effective when they are aligned with an organization’s mission, which can only be achieved when the correct behaviors are encouraged. This requires a multidimensional approach to the design and implementation of measurement systems during service operation, and attention to human factors including motivation and organizational culture during all stages of a service’s lifecycle.


Services are predefined, orchestrated activities that provide value to customers. As industry transitioned from products to services, customers became more focused on the output or outcome that they desired rather than the manner in which it is provided. People do not want ½ inch drill bits; they want ½ inch holes. The uniformity of products also began to diverge, and has continued to, especially in technology components. The first clone of the IBM PC was strikingly similar to the real thing, inside and out. Today, two computers that look the same could differ substantially in the way they are serviced. While the keyboard, screen and software may all look identical, the make-up of the internal components, the casing and the location of connectors could differ greatly.

In early mainframe installations, binders with standard operating procedures (SOPs) filled bookshelves in data centers. If service was necessary, detailed instructions with diagrams documented the exact location of parts and connectors, and the specific part numbers for identical replacements. The same SOPs could be found at every customer site; if an engineer worked on a mainframe at one corporation and was either re-assigned or hired by another corporation, there was little to no learning curve. Everything was standard.

Though international standards have assured the interoperability that modern architectures depend upon, there is only minimal standardization in the components that make up today's systems. This complexity created the necessity to expand the governing documents from a single SOP into the four principal documents that are cited in technical publications: policy, process, procedure and work detail.

Policy:

A policy is a set of business rules that are consistent with the strategic and tactical directives of senior management or policy makers. A policy establishes what is required, who has authority and why it is necessary. Simply stated, policies are rules.

Process:

A process is a high-level description of the activities required to accomplish a specific outcome, deliver a service or develop a product, and of the interrelationships between the customers and the service and product providers that will achieve or deny the end goal. Simply stated, processes define how to get something done and who is responsible.

Procedure:

A procedure is a detailed description of the activities, with assumptions made to eliminate minute details. A procedure establishes who performs the tasks, what specific steps are performed, when the steps are performed, where the steps are performed and how they are performed. Simply stated, a procedure is what to do.

Work Detail:

Work detail captures the minute details of tasks that are performed. Many factors, including risk, frequency of work, repeatability and quality tolerances contribute to the level of detail captured. An instruction manual is a good example of work detail. Simply stated, work detail is exactly what to do with exactly what you have.

An SOP can be written when the procedure and the work detail are identical, something that rarely occurs today.

Another example of an SOP is a set of instructions for changing the tire of an automobile. The method and tools are virtually the same on every passenger car manufactured; there may be slight differences in the shape of the wrench or the number of lug nuts, but once a person has learned how to change a tire on one car, there is a high degree of confidence that they would be able to change the tire on almost any passenger car. This holds true across virtually all manufacturers, brands, models, trim lines and geographic locations.

In the case of a routine task in information technology, such as replacing a defective hard drive in a computer, the procedure is the same: power off the computer, open the case, remove the old hard drive, install the new hard drive, close the case, power on the computer (a bit of a simplification but the basics). The work detail depends upon the type, brand and a host of other factors including location of the hard drive, the technology of the hard drive, the type of screws or connectors used, and the form factor of the computer. Details for replacing a hard disk drive in a notebook computer differ greatly from the details of a desktop system or server.

It is important to properly understand the role and value of each of the four governing documents, and the necessity of developing and updating them as service management matures. The chart below summarizes the four basic documents, their roles, their lifecycles and who typically has the authority to change them.

[Chart: Policy, Process, Procedure and Work Detail]


It surprises many new ITSM practitioners that the problem management process is not required to close all problems in its queue. Many problems may never be addressed or closed. The underlying assumption that problems are treated in much the same manner as incidents is incorrect. For clarity, an incident is a disruption of service or the potential for a disruption of service. An incident is closed when the service has been restored, which can be accomplished by a workaround. If a user cannot print to one of three printers in their work area, and the two other printers can accomplish the same task, then changing the user's default printer to a printer that works closes the incident. In three words: Incident Management = Fix it Fast.

ITIL defines a problem as the root cause of one or more incidents; in this case, the broken printer is the problem, and the use of a different printer is the workaround. In most organizations there would be a protocol for having printers repaired and printers would not be addressed in the problem management process; it could indeed be handled by the request fulfillment process. As a simplified example, we’ll move forward with how problem management might handle a non-functioning printer.

Problem management would analyze the costs of repairing the printer and, in a mature organization, would also weigh the value of the asset to the organization and the costs of not repairing it. Ownership of excessive printers encourages people to print and increases the complexity of the architecture, not only by the one printer, but also by all of the computers that are connected to it and all of the applications that can print to it.

Problem management would present the costs, risks and alternatives to the business, and a recommendation for how to proceed. The analysis might conclude that:

1) There are enough printers in the work-area to handle the load on most days; the normal utilization of printers in the work area is less than 25%.
2) In the period following the printer failure, paper and toner consumption actually decreased. An unexpected result of the failure was that user behavior changed with no known impact on productivity.
3) The cost to repair the printer will exceed the price of purchasing a new printer of equal or greater utility.
4) The analysis found that two employees in the work area print very long documents on a specific, different day each month. To accomplish this task, the print jobs could be scheduled to run at night when system usage is low, without affecting the productivity of the users.

The recommendation to the business based upon the analysis could be to remove the printer from service and not replace it. In this case, the problem record would be closed and removed from the queue once the printer has been removed from service.

Software problems may never be repaired, either. There is a common example given in many ITIL classes about an older version of a word processor that could not print a document that was greater than 300 pages in length all at once. This may be a myth but it is a great illustration. The workaround was to print the document in chunks less than 300 pages at a time. Problem management would probably return an analysis to the business that concluded:

1) Very few users (less than 1% of the user base) print documents of that size.
2) All of the service desk representatives are aware of the issue, as are most of the users.
3) The workaround of printing documents in smaller chunks is acceptable and has become common practice.
4) There are minimal risks in continuing to utilize the workaround; modifying the application code presents a higher level of risk.
5) The cost to correct the software would be -some dollar amount-, would take -some amount of time-, require -some amount of people-, and would provide an unacceptable financial return on investment to justify the work.

Given this scenario, it is unlikely the business would take any action. The problem record would remain open for as long as the piece of software was in use and the workaround would continue to be used when a document over 300 pages needed to be printed.

While every incident that is reported to the service desk must be closed in a timely manner under the governance of a service level agreement, problem management's queue may include records that remain open indefinitely. This is because the business, not IT, prioritizes the queue based upon its strategic and tactical objectives. Every problem should have a workaround identified, because a permanent solution to "fix" the problem may never be applied. Workarounds are often quick and temporary "fixes": removing the component from service, using an alternate resource or, for that matter, one of the most common workarounds, restarting the piece of equipment.


Quality assurance (QA) and quality control (QC) are often incorrectly used interchangeably to describe an organization's efforts to address defects, loss of service and user satisfaction. Though related, the two terms differ greatly: quality assurance is focused on prevention, while quality control focuses on identification. QA is incorporated into processes, while QC is measured against the output, outcome or product of the process.

All information technology sub-disciplines address quality as integral to the successful delivery of services. QA is proactive, planned and predictive, whereas QC is reactive and actual. A comparison and examples of QA and QC are shown in the table below:

[Table: Quality Assurance vs. Quality Control]



ITIL builds QA into the strategy and design stages, and into the wisdom obtained from the continual service improvement (CSI) lifecycle stage. The incident and problem management processes can be viewed from both QA and QC perspectives. Incident management's objective, to restore service as quickly as possible, is clearly a "control", while escalation to problem management, the process that conducts root cause analysis and develops permanent solutions, is proactive in preventing future incidents. The incident management process has both QA and QC elements: a customer survey and service level agreements are QA, while the survey results and service level compliance percentages are QC.

QA and QC are frequently eliminated from project schedules when inexperienced project managers fail to baseline a project schedule. The schedule baseline, which by definition must not be changed during the execution of the project, is the QA reference point. When project schedules are revised as a project proceeds, the underlying planned approach cannot be easily measured against the actual, impairing QC efforts. Failure to baseline is usually the result of insufficient training on project scheduling tools, including Microsoft Project.

IT security is commonly divided into two distinct efforts, information assurance (IA) and information security (IS); QA and QC, respectively. IA creates the policy, processes and standards that must be achieved prior to operation, whereas IS (at the network or security operations center in larger organizations) handles real-time detection, from "friendly" sources (planned, known tests and scans) to all forms of internal and external attacks.

Statistical tools and mathematics are used by practitioners during both assurance and control. The difference is that statistical models allow QA to establish parameters for measuring whether or not processes operate with a low risk of defects (statistical process control, SPC). For QC, statistical analysis is applied to outcomes, outputs, finished products and completed services (statistical quality control, SQC).
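
A minimal sketch of the distinction, assuming invented daily defect counts: QA (SPC) establishes control limits in advance, and QC (SQC) judges each day's output against them.

```python
import statistics

# Invented daily defect counts, for illustration only.
samples = [4, 6, 5, 7, 5, 6, 4, 8, 5, 6]

mean  = statistics.mean(samples)
sigma = statistics.stdev(samples)

# QA (SPC): 3-sigma limits computed in advance define when the *process*
# needs attention, before defective output reaches a customer.
ucl = mean + 3 * sigma
lcl = max(0.0, mean - 3 * sigma)

# QC (SQC): each day's *output* is judged against those limits.
for day, defects in enumerate(samples, start=1):
    in_control = lcl <= defects <= ucl
    print(f"Day {day}: {defects} defects, in control: {in_control}")
```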


Whether an ITSM practitioner, project manager, security analyst or any other professional who must measure the performance of a process, the de facto method involves developing and using some form of metrics. Metrics are typically divided into two categories, quantitative and qualitative: quantitative metrics measure in mathematical terms, such as the number of defects or percentage availability, while qualitative metrics measure something much less tangible, such as perceived quality or customer satisfaction.

Managers like quantitative measurements. If a service level agreement (SLA) stipulates 99.9% uptime, it is relatively straightforward to calculate the actual uptime mathematically. Qualitative measurements can be gathered through surveys or, in many cases, simple observation, but are usually less precise. The weakness of using any single metric is a one-dimensional view of the results of a specific process. A single metric, or even multiple quantitative metrics, may measure the output of a process without accurately measuring the outcome. Thus a manager may believe that a process is in control, and it may be from a quantitative aspect (the output), when the process is actually failing to satisfy customers (the outcome).
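
The quantitative side really is straightforward arithmetic, as a short worked example shows; the outage figure below is invented for illustration.

```python
# Translating a 99.9% uptime SLA into allowed downtime, and measuring
# actual uptime the same way in reverse.
sla = 0.999
minutes_per_month = 30 * 24 * 60             # 43,200 minutes in a 30-day month

allowed_downtime = (1 - sla) * minutes_per_month
print(f"99.9% uptime allows {allowed_downtime:.0f} minutes of downtime "
      f"per 30-day month")                    # about 43 minutes

observed_outage_minutes = 25                  # hypothetical outage
actual_uptime = 1 - observed_outage_minutes / minutes_per_month
print(f"Observed uptime: {actual_uptime:.4%}")   # 99.9421%
```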

Take for example a customer who checks into a hotel for the night. The room is perfectly clean, the bed is comfortable, the room is well stocked with beverages and they compliment the waiter in the hotel restaurant for an excellent meal. All of the guests of the hotel that night reported similar experiences, so the quantitative metrics report an in-control process. The only negative event that occurred was a 10 second blast of the fire alarm at 2:30 AM, which awakened all of the guests. Assuming 8 hours of sleep in a given night, or 28,800 seconds, 10 seconds equates to roughly 0.03% noise, or 99.97% quiet, which from a quantitative perspective is quite good.

The reality is the outcome, which would be a high percentage of dissatisfied guests, a qualitative metric. The lesson is that unless both quantitative and qualitative metrics are evaluated, a process should not be assumed in control. Simply stated, the outcome is not solely dependent on the output.

Apply the hotel example to information technology organizations: high availability of services cannot be the sole measure of success, as it may not equate to customer satisfaction. Even if 99% availability is accomplished, users will perceive the service as unsatisfactory if the 1% of unplanned downtime occurs during critical business periods. Many business services do not require anywhere near 99% uptime and can tolerate significant downtime outside core business hours. A 2 hour unplanned outage during a period of very low usage, say 2:00 AM, may have no qualitative effect on services even though a breach of a quantitative SLA likely occurred.

ITIL guidance holds that understanding patterns of business activity (PBA) allows service managers to negotiate both the quantitative and qualitative components of service level agreements (SLAs). The 2011 update of the ITIL framework expands upon the notion that information technology service management is often single-dimensionally compared to manufacturing lines, which are measured in terms of output and minimizing defects. The comparison breaks down because a customer and the services provided have to be viewed more holistically. Customer perception and satisfaction cannot be purely quantitatively measured, though certain human behaviors, including exaggeration, can be at least partially mitigated with the correct data, i.e., quantitative measurements.

In the hotel guest example, a mandatory alarm test that resulted in a 10 second siren might be acceptable at 11:00 AM at a business hotel. Thus, to properly measure the performance of services, SLAs must include both quantitative and qualitative metrics. IT professionals must focus their efforts on the satisfaction of users with the outcomes of the services that are provided, a multi-dimensional indicator, not simply numerically measured outputs. This is potentially the next level in the evolution of value measurement in information technology services.


One of the earliest considerations during the adoption of ITIL by a corporation is establishing a starting point from among the more than two dozen processes. Several textbook implementation strategies recommend customer-facing processes, such as change management, incident management or service level management. While approaches in which end-user satisfaction with services is base-lined, measured and evaluated against improvement metrics and targets have merit, they take extended time periods to translate into financial terms: real costs saved or avoided, competitive advantages, or long-term strategic objectives.

Though not a common starting point, Service Asset and Configuration Management (SACM) offers an interesting entry point when immediate financial results are demanded by corporate management, or when an aggressive return on investment target must be defended prior to the acquisition of a service management software system. Information technology professionals are often reluctant (and sometimes reckless) when it comes to tracking technology assets, software and related support contracts; and information technology users rarely understand either the full costs of ownership or the incremental costs of adding functionality to baseline configurations.

Service management professionals understand the underpinning concept of "doing more with less" that is inherent in ITIL. A comprehensive database that includes a record for every hardware component and software license, with the functionality to cross-reference assets to service contracts and to analyze the utilization of software, is a capability that is difficult to achieve. If these capabilities are achieved, real cost savings, and the financial justifications for investments in expanding service management initiatives, can not only be measured, they will be championed.

Several examples are cited during ITIL fundamentals training courses. A common cost issue discussed in financial management sections is the failure of organizations to cancel maintenance contracts on hardware that has been retired from service, with printers an obvious concern. Notebook computers, which tend to have a shorter useable life and are prone to high theft rates, and specialty devices such as plotters (or, for that matter, any technology device that has a network connection) are further examples. ITIL best practices call for maintenance contracts to be canceled immediately after a device is retired from service, which is difficult, if not impossible, to achieve without good record keeping.
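
Assuming asset and contract records of roughly the shape below (the layouts are invented, not a product schema), the cross-check is a simple join: flag maintenance contracts that remain active on retired assets.

```python
# Hypothetical SACM records; real data would come from the asset database.
assets = [
    {"id": "PRN-0042", "type": "printer",  "status": "retired"},
    {"id": "NBK-1107", "type": "notebook", "status": "in service"},
    {"id": "PLT-0003", "type": "plotter",  "status": "retired"},
]
contracts = [
    {"asset_id": "PRN-0042", "vendor": "AcmeCare", "active": True},
    {"asset_id": "NBK-1107", "vendor": "AcmeCare", "active": True},
    {"asset_id": "PLT-0003", "vendor": "PlotServ", "active": False},
]

# Flag active maintenance contracts attached to retired assets.
retired = {a["id"] for a in assets if a["status"] == "retired"}
wasted = [c for c in contracts if c["active"] and c["asset_id"] in retired]

for c in wasted:
    print(f"Cancel {c['vendor']} contract on retired asset {c['asset_id']}")
```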

The corporate standard software image, which is often decided based upon assumptions and policy rather than an understanding of utilization, is another area where a robust software tool can create an opportunity for cost reduction or avoidance. Hard data that describes the use of software could defend a decision to deploy a viewer (or reader) rather than full-featured versions of costly software or upgrades. Cost saving possibilities through a thorough inventory and understanding of software hold greater potential than those for hardware devices, since organizations typically know much less about installed software, which is invisible to physical inventory and dynamic in nature. Beyond the costs, software is often governed by policy, and it can be difficult to properly monitor where, how many times and by whom a license is actually installed, creating both legal and security risks.

While service desk and knowledge base components of service management systems are typically the focal point of software evaluation processes, organizations would be well advised to include SACM features as part of their functional and operational requirements and scrutinize these capabilities prior to purchase. Discovery features that can automatically detect and record both the existence of an asset and its utilization patterns can be especially useful. The financial savings and efficiencies that accompany a strong SACM competency could easily justify the expense of the software tool itself, and provide the foundation for successful implementation of other ITIL processes.


ITIL defines the Service Catalog as the portion of the Service Portfolio that contains services that are either active or approved for delivery to customers or prospects. The approval process would include a review of numerous things, including alignment with firm strategy and capabilities, along with financial aspects including the price to be charged and the acceptable profit margin, if applicable.

Many companies struggle with the creation of their service catalogs, in effect over-complicating a document that captures two basic things: (1) a concise list of the services that are offered, and (2) a concise list of the services that were historically offered. Concise is the keyword in achieving a properly authored service catalog.

If a given company owns and operates multiple data centers with hundreds of servers that run a variety of operating systems, including versions of Windows, Unix and Macintosh platforms, and offers customers features including high availability, backup and disaster recovery, choice of hardware platform, infrastructure (domain, identity and security management) and differing levels of bandwidth, the company service catalog could reflect one service: Application Hosting. All other sub-offerings should be considered features, which would be combined into different service packages based upon either user profiles or customized needs, and priced based upon market demand.

Likewise, though a software company may specialize in Java, JavaScript, C++ and PHP, and may write customized line-of-business applications, operating system enhancements, web sites and monitoring systems, the service catalog offering could simply be Software Development.

Following these examples, a company that offers over one hundred different educational courses in classroom, on-line and/or pre-recorded media formats offers one service: Education. A regional business that sells micro-computer network components and employs professionals who maintain the computer systems of small companies that cannot afford an in-house IT specialist may only offer a pair of services: Systems Administration and Hardware Repair. There are few service-based companies that offer more than a handful of services.

Many firms and internal IT organizations struggle with this fact, as organization members responsible for the differing products (or workgroups) attempt to solidify the value of their offerings by having them recognized as unique services, when many are not. ITIL describes these offerings, for the most part, as processes, packages, features or attributes. Regardless of what hardware platform and operating system are used, a customer purchases application hosting; and regardless of what language is used, a customer purchases software development. In both cases, the customer may never even ask for the technical details of how their service was provided.

To further illustrate, the service catalog may be thought of as an "elevator speech" that gives the non-technical consumer minimal information about what is being offered in the marketplace; it is easy to understand, explained in as few words as possible, and holds no ambiguity. Many offerings can be described in one or two words: network architecture, telecommunications, voicemail integration, disaster recovery, training, hardware disposal and security management are all very acceptable, to-the-point service catalog offerings.

A service catalog can usually be developed over the course of a few weeks in a few short meetings. It is important to keep in mind that the user of the service catalog is your customer, not the internal organization. Keeping it simple is the challenge; the realization that your firm offers few or even a single service does not diminish its value. To the contrary, it makes it easier for employees to understand the vision, purpose and strategies of an organization. Just consider the simplicity and marketing value if every one of your employees can clearly state, in no uncertain terms, whom they work for, what services their company provides and the price charged.


Many technical professionals struggle with properly understanding the role of the service desk when they are first introduced to ITIL. The misconception arises when they initially equate the service desk function to the technical support desks they frequently contact at software and hardware manufacturers; though the standard tier 1, tier 2 and tier 3 (and perhaps beyond) conventions are used to describe the stages of technical support and there are other similarities, the roles of the service desk and technical support are quite different.

The Service Desk is the customer facing single point of contact for users that rely on the services that an information technology group provides. As the advocate for the end user, the mission of the Service Desk is to restore service as quickly as possible; not to investigate the root cause and to work on a permanent fix, which is the responsibility of Problem Management. Technical support is an organization within a product manufacturer that helps engineers and technicians identify the causes of failures and suggests solutions based upon expert knowledge of a product, whether hardware, software or a combination of both.

To clarify the distinction, consider an incident where a single end user calls the Service Desk to report that they are not able to send or receive email. After some conversation, the service desk representative determines that there is a network connectivity issue at the end user’s location, and they instruct the user to restart the wireless Internet router appliance that the local telecom provided to them. Assume for a moment that this resolves the user's complaint; the service desk can then close the incident and work is completed without further analysis. If this is a common occurrence, the service desk should publish the symptoms and resolution in a frequently asked questions or similar forum.

Now consider a scenario where there is an unplanned outage of email services due to component failures in the data center. The service desk receives numerous phone calls from users who are unable to send and receive email. An incident is opened for each of the phone calls, and the service owner for email engages technical operations to restore service. An engineer discovers that the email server has frozen and reboots the server in accordance with established emergency procedures, which immediately restores service. The service desk can then contact the users, verify that their service has been restored, and close the incidents. Though the root cause of the failure may not have been established and a permanent solution may not be in place, the service desk has fulfilled its responsibility to restore service.

Now that service has been restored, the service owner wants to proactively assure that this type of failure is avoided. Working with problem management, a problem record is established and, assuming that a software tool is in place that has incident and problem management systems, the closed incidents are linked to the problem record. Technical operations contacts the technical support desk at their hardware or software vendor, or both, and works with them to establish the root cause and propose a permanent solution. Depending upon the specifics of the permanent solution that is agreed upon, the business’s change management processes would be invoked, potentially leading to the problem record being closed.

There are three important distinctions between the Service Desk and the technical support desks that software and hardware vendors provide:

1) The service desk supports services; technical support supports products.
2) The service desk’s customer is the end user; technical support’s customer is typically an engineer or technician.
3) The service desk focuses on restoring services as quickly as possible; technical support works to discover root causes of failures and recommend permanent solutions.

It is important to understand that though the ITIL framework is flexible enough that it may provide guidance to product manufacturers, this is not ITIL’s intent. The genesis of ITIL is as a source of best practices for the operation of information technology departments. The mindset of ITIL is that IT must provide measurable value to the business in the form of services that offer both utility and warranty; thus the technology, products and manufacturer support organizations that are used to accomplish services must remain invisible to the end user.


There is a document that you will want to ask for, read and understand, and if it does not exist or cannot be easily accessed by each and every employee, that is a blatant failure that should be brought to C-level attention: the acceptable use policy. An acceptable use policy goes beyond the fragment of verbiage at the login screen warning users that they are not actually "welcome"; they are voluntarily logging into a computer system owned by a company and put in place to do the work of that company. And unlike the warning screen, which probably will not change all that often, an acceptable use policy must be kept up to date, since new technology enters the electronic marketplace each day.

IT security is responsible for writing an organization's acceptable use policy. Every user must read and comprehend it, thus it must be short, direct and easy to understand. From a user's perspective, an acceptable use policy spells out what they may and may not do when they are using corporate computers, including but not limited to:

1) Internet usage and content that may be viewed, interchanged or downloaded.
2) Limits on personal use of corporate email.
3) Restrictions on installing software including software that is freely downloadable.
4) Restrictions on transportable devices, such as USB media drives.
5) Expectation of privacy, if any.
6) The consequences of violating the policy, which are often severe.

Keep in mind that an acceptable use policy normally does not explicitly give permission to do anything; it simply sets out what the corporation's tolerance levels are from the information security officer's standpoint. Corporate security may not object to the use of social networking sites, but your immediate supervisor may place sterner restrictions to assure productivity is not encumbered. Ultimately, then, it is between the employee and the supervisor to discuss the gap between the acceptable use policy and what "acceptable use" is unacceptable to them.

Internet usage and content seem easy to understand; however, the rapid movement toward cloud and service based offerings blurs the line between accessing a website and using an application, especially for storage. It is becoming increasingly difficult to contain corporate data within the corporate network; it no longer takes a removable media device, as high bandwidth and on-line storage (such as Dropbox) enable users to easily move files, often without detection. Furthermore, synchronization software, whether to mirror files, calendars, contacts or any other repository, is readily available and often easily installable by end users. Staying with the user's perspective, a written policy cannot by itself inhibit the ability to use data interchange products; inhibiting them often requires technical capabilities that budget-constrained IT departments may not purchase. In simpler terms, a user may be able to access a cloud based storage area, synchronize their calendar and address book or install third party messaging software; but "able to" does not imply "allowed to".

Most corporations and some government agencies do not restrict access to personal web based email, and even where they do, many employees carry personal smartphones and can receive email separately from the corporate system. The time has long since passed when even receiving a personal phone call at work was taboo; the modern workday has personal priorities mingled into it despite managers' attempts to restrict them. As a rule, one should never send a personal email from a corporate email account for any reason whatsoever. While the likelihood of every bit and byte being evaluated is low (tracked is another issue), an end user will often be held accountable for both the receive and send functions if an incident occurs. If a non-employee friend writes or sends inappropriate content to a corporate account, the employee could face the consequences of violating the policy.

So if the first rule is to never send a personal email from a corporate account, the second is to use measured caution when opening a personal email through a web client while at work. Assume anything displayed on a corporate system is no longer private.


A foundational topic in introductory statistics courses is the two error varieties, Type I and Type II errors:

1) A Type I error occurs when a condition is false but tests true (a false positive).
2) A Type II error occurs when a condition is true but tests false (a false negative).

It is a relatively straightforward concept that is incredibly easy to confuse, with serious implications in the execution of security practices.

A Type I error (denoted alpha) is often compared to a false positive. If a medical test reports a patient positive for a disease or condition, and the patient does not actually have the condition, a false positive has occurred. The implication in security systems is a failure in authentication, when an authorized person attempts access and is denied, sometimes measured as the false rejection rate (FRR). While a false positive on a medical test or the denial of authorization to a person who should authenticate is a nuisance, it is rarely a danger except in extreme cases.

A Type II error (denoted beta) is likewise compared to a false negative. If a guilty person is found innocent and evades incarceration, a Type II error has occurred. In a secure environment, an unauthorized person gaining access to a system, secured area or classified information constitutes a Type II error. Similarly, if an airline passenger passes through airport security with a restricted item or weapon, a Type II error has occurred; this is often measured as the false acceptance rate (FAR). To a security professional, a Type II error is considerably more severe than a Type I error.

In practice, the outcome of a Type I error (or false positive) in an information system could lead to a service desk call and an incident being opened. If an authentication device were to fail, multiple incidents would be opened and the problem management process would be triggered to investigate and resolve the root cause of the user credential failures. A Type II error, perhaps caused by the improper assignment of directory permissions to groups or by inherited credentials, usually goes undetected.

If the two error rates for a given system are plotted on a graph, they will intersect at a point that defines the crossover error rate, or CER.

[Figure: Crossover Error Rate]


This point represents the overall accuracy of a given system and an acceptable balance between Type I and Type II errors in a given system. Higher sensitivity will yield more Type I (false positive) errors and can result in user dissatisfaction. This must be balanced against the risks and impacts of potential security breaches, and is usually negotiated as part of a security policy with senior management.
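
A sketch of how the crossover point can be found, assuming invented stand-in error-rate curves rather than measured device data: sweep the sensitivity threshold and report where the FRR (Type I) and FAR (Type II) curves meet.

```python
# Invented error-rate curves; a real system would use measured rates.
def frr(threshold: float) -> float:
    """False rejection rate (Type I) rises as sensitivity increases."""
    return threshold ** 2

def far(threshold: float) -> float:
    """False acceptance rate (Type II) falls as sensitivity increases."""
    return (1 - threshold) ** 2

# Sweep the threshold and find where the two curves are closest.
best_t, smallest_gap = 0.0, float("inf")
for step in range(101):
    t = step / 100
    gap = abs(frr(t) - far(t))
    if gap < smallest_gap:
        best_t, smallest_gap = t, gap

print(f"Crossover at threshold {best_t:.2f}, CER ~ {frr(best_t):.3f}")
```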


There is an adage that possession is nine-tenths of the law, and a second commonly known saying that a man's (or, to be modern, a person's) home is their castle. These simple and timeless proverbs have implications for today's security professionals and business owners alike, in a concept known as "security through empowerment": the observation that people are more likely to take care of and safeguard anything that they own, or have a vested interest in the safety of.

As many corporations embrace bring your own device (BYOD) policies, they are discovering that fewer notebook computers, mobile phones and similar electronic devices are being lost or stolen. Likewise, data suggests that fewer personal automobiles used for business purposes with mileage reimbursement are damaged through misuse or accident than corporate owned vehicles. As many travelers are aware, rental cars are often abused, and are to be avoided on used car lots for this exact reason. The use of personal property for business purposes, whether by information technology professionals, remote employees or even law enforcement officers, not only extends the average life of items, but also makes good business sense.

Theft of personal and corporate property is rising as workforces become more transient, short-term contract employees are hired instead of full-time, permanent employees, and functions including office cleaning, food service and document shredding are outsourced. Though an employee's desk and immediate work area do provide a sense of ownership, cabinets, drawers and file cabinets often do not lock, or have had their keys long misplaced. The problem is amplified in "hotel" offices, where shared work areas eliminate any sense of implied ownership. The proven concept that people take better care of and safeguard things that they own should not be ignored as part of corporate security and office policies.

Employees who have a vested interest in the profitability of a given company are much more likely to safeguard corporate data, guard trade secrets and discuss the nuances of their jobs in vague terms rather than divulging proprietary information. Likewise, the non-compete and secrecy agreements that employees often sign can be difficult to enforce, and breaches can be difficult to prove. For these and many more reasons, empowering employees and creating an environment where all employees make the best interests of the business their priority is good, if not excellent, business practice.

The theory may be difficult to put into practice; startup businesses can offer stock options or other incentive plans, while all businesses may find establishing and paying cash bonuses difficult during economic downturns. Financial incentives are only a small part of the equation, as it is well known to human resources authorities that monetary rewards quickly lose their power. In a recent Inc. magazine article, compensation ranked 10th behind (in order) purpose, goals, responsibility, autonomy, flexibility, attention, innovation, open-mindedness and transparency in employees' top desires of their employers. Companies need to find ways to use these nine more desirable means to incentivize their employees, to build loyalty, and to provide a sense of ownership and empowerment, which will ultimately lead to safer and more secure workplaces.