
Reducing Weapons Grade Materials from the World Stockpiles

Posted by admin on October 29, 2009 in Uncategorized


With the end of the Cold War, the United States and the former Soviet Union began dismantling thousands of nuclear weapons. This dismantlement resulted in large quantities of surplus weapons-usable highly enriched uranium and plutonium. This creates a huge problem – plutonium is one of the most expensive materials on earth, by far the most dangerous in the wrong hands, and, short of sending it into the sun, impossible to simply get rid of!

To reduce the threat of terrorists or rogue nations obtaining nuclear weapon materials, the United States and Russia agreed to dispose of 68 metric tons of surplus weapon-grade plutonium. By one source’s calculation, that is equivalent to over 7,000 warheads! Disposal is accomplished by converting it to mixed oxide (MOX) fuel for use in existing nuclear reactors. Once the MOX fuel is used in the nuclear reactors, it is no longer usable for nuclear weapons.

Mixed oxide fuel contains a mixture of approximately 95 percent uranium and 5 percent plutonium. The low-enriched nuclear fuel normally used in U.S. commercial power plants contains only uranium.

The Department of Energy signed a contract with a consortium comprised of Duke Energy, COGEMA, and Stone & Webster (DCS) to:

  • Design and operate a Mixed Oxide Fuel Fabrication Facility (MFFF)
  • Design the commercial MOX fuel

The Russian Federation agreed to use the DCS design of the U.S. facility in implementing the Russian Federation’s disposition program. Time became a critical factor because any delays in the US program would also delay the Russian Federation’s program.

Although the facility was to be based on the successful Melox and La Hague facilities in France, there were enough differences in material compositions and US federal requirements that much of the process had to be reconsidered.

The MFFF is an enormous investment: a 500,000-square-foot facility with over 150,000 cubic yards of concrete, 31,900 tons of reinforcing steel, 3,366,000 linear feet of power and control cable, and 70 miles of piping. Getting the process and facility correct was critical.

After months of preliminary design, and recognizing that static calculations would not be adequate, DCS turned to ProcessModel Consulting Services to build a simulation model of the proposed process. During a three-week period, ProcessModel worked closely with DCS engineers to create a model that would represent real-world production capabilities.

The model provided incredible insight into the operation of a facility with completely new characteristics. Many of the pre-simulation calculations were inadequate in predicting the capacity and throughput of the facility. These static calculations did not work because of internal dependencies, changeovers and natural cycles of the system.


The simulation model showed that machines previously thought to have ample capacity and be under-utilized would become the bottleneck. Typical operations research calculations did not take into account that the MFFF system requires higher capacity for short periods of time, while at other times processing is delayed waiting for components of the mixture to be produced. These delays rendered the bottleneck machine idle, further reducing its effective capacity. Several machines exhibited the same bottlenecking behavior, which further complicated the manual throughput calculations. This system is impossible to analyze with manual calculations. It is like jiggling one of many spoons in a bowl of Jell-O: when one spoon is jiggled, they all jiggle. This facility behaved the same way. Every part of the system is connected, and adjusting one part affects every other part.
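The dynamic is easy to demonstrate. Here is a minimal, hypothetical sketch in Python (not the MFFF model or its data) showing how a machine with a comfortable static utilization still queues badly when upstream dependencies release work in batches:

```python
# Single machine, FIFO queue, deterministic 1.0h service time.
def finish_times(arrivals, service=1.0):
    free, done = 0.0, []
    for t in arrivals:                  # arrivals must be sorted by time
        start = max(t, free)            # wait for the machine to free up
        free = start + service
        done.append(free)
    return done

horizon, n = 100.0, 80                  # 80 jobs in 100h -> 80% "static" utilization

even = [i * horizon / n for i in range(n)]                # evenly spaced demand
bursty = [c * 12.5 for c in range(8) for _ in range(10)]  # 10 jobs released at once
                                                          # every 12.5h (batch cycle)
for name, arr in (("even", even), ("bursty", bursty)):
    done = finish_times(arr)
    waits = [d - a - 1.0 for d, a in zip(done, arr)]
    print(f"{name:6s} avg wait {sum(waits)/n:4.2f}h, max wait {max(waits):4.2f}h")
# even   avg wait 0.00h, max wait 0.00h
# bursty avg wait 4.50h, max wait 9.00h
```

The static utilization figure is identical in both cases; only the arrival pattern changes, yet the batch-fed machine develops hours of waiting.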

The model also helped to establish internal scheduling. Three materials are produced through a series of steps that cross a common set of machines. Machine changeovers are time-consuming, yet final production requires all three products. While the balance was almost impossible to predict before the model, the schedule requirements became clear after the model was validated.

The model was indispensable in teaching engineers about the system’s operation and the effects one area had on the others. Scott P. Baird, president of ProcessModel, said, “The insight gained from simulating a system prior to actual creation can save millions of dollars and months of rework. The DCS project is typical of our customers’ projects. When the tool is used in capable hands, the very nature of the project changes. You can see things that are invisible to everyone else. It allows you to have insight and understand what will make the system function at peak performance.”

This is one more example of how manufacturing facilities can be optimized through the use of simulation. In addition, ProcessModel has been used on projects by General Electric, 3M, NASA, Motorola and thousands of others.

If you need a jump start to your project, contact ProcessModel. For more information about ProcessModel Consulting Services, click here.

To speak to a consultant about ProcessModel Consulting Services
call (801) 356-7165


Come on, be honest — wouldn’t you really like to Tell Management Where to Go!

Posted by admin on October 29, 2009 in General Info


Irrespective of what type of company you work in, management is seeking a better way of identifying the best projects — projects that will make a difference in the bottom line of the company. This manual shows how: how to capture your processes, how to find the problem areas and how to improve them. You will finally have a method to tell management where to go in company processes to find the greatest benefit.

No matter how proficient a company is, performing a single task does not provide value to the customer. Only when all of the tasks required to produce a product or provide a service are performed correctly is the output of maximum value to the customer. Almost all processes involve work being performed by resources from a number of functional departments in an enterprise.

Companies need to change their focus from improving the way individual tasks are performed, to improving how the tasks all fit together to provide value to the customer: in other words, companies need to improve their processes. This does not mean that companies stop trying to optimize the performance of individual tasks. But, in many companies there are greater opportunities for benefits to be achieved by looking at the overall process.

“Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It’s where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to superefficiency.”

The suggestions built into this project guide are based on twenty years of experience working with project teams on process improvement. During those years the results ranged from extremely successful to thoroughly disappointing. The two most important factors separating the excellent results from the others involved respect: respect for the methodology and respect for the people with firsthand experience.

The information contained in this procedural manual represents a compendium of thought regarding Process Management and Improvement. Every project is different and as you follow this guide there will be times when you will want to do things differently than suggested here. This is only a guide. Go ahead! Use your best judgment! But let me offer one overriding caution. Don’t skip over the involvement of people who do the work. They are your link to reality.


Five “General” Steps to Process Improvement

To say that there is only one path to Process Improvement would be a vast misstatement. In truth, over the past few years, numerous business authors have offered their ideas regarding the correct steps a business planner or manager should follow to achieve the best results. A careful search of current literature, for example, would reveal programs that take as few as three or as many as fifteen steps. Although the difference between plans generally lies in the comprehensiveness of each program, almost all have five “global” steps in common. They are:


Identify – The process of first defining and then selecting a process to be improved. This step, although seemingly simple in scope, may be the most difficult of the five. It typically begins, not with actually mapping a process, but with gathering managerial support for the entire process improvement program, establishing requisite change teams, and defining not only expectations but standards of support throughout an organization. It ends when process owners and managers alike agree on the identity of a process and the fact that a process problem exists and is capable of being documented and changed for the better.

Analyze – This step contains the essence of the improvement process itself. It generally begins with the development of process flow charts and diagrams and ends with a documented understanding of what an existing process is actually achieving, its value and what could be achieved by changing the process. It is here that process measurement metrics are also defined and recommended solutions to the process problems first identified in step one are initially stated.

Re-Design – Once an existing process is understood, evaluated and documented, re-design can take place. In most cases, this step is the most ambitious. It can incorporate anything from developing future process flow diagrams and models to selecting best practices to be followed as benchmarks in the establishment of new process flows. Most often, this step is also that point in the improvement process where alternatives are not only recognized but agreed on by the managerial team.

Implement – Possibly the hardest, most critical step on the road to process improvement; implementing changes in process flows, measurement metrics, and managerial responsibility can be traumatic. Without this step, however, nothing accomplished in the first three steps matters. Despite its criticality, the process of implementation is really fairly straightforward. It begins with the development of an implementation plan and ends when all planned changes have taken place.

Evaluate – The last but not “final” step in the improvement process, this step is continuous. It begins with the design of an evaluation plan but never ends. Rather, once the plan is accepted and implemented, it is continually updated and the process it is evaluating is continually changed and improved because of evaluative findings.

Lessons Learned

Principle 1: Top Management must be supportive of and engaged in – Process Improvement – efforts to remove barriers and drive success. A 1996 GAO study showed that the number one reason for the failure of improvement programs was lack of management support. What has changed since 1996 to make you believe that the same wouldn’t be true today?

Principle 2: An Organization’s Culture must be receptive to – Process Improvement – goals and principles! In this regard, there are two applicable “truths” that arise from the science of Organizational Design. The first is that cultural change must be led by those inside the enterprise. The second is that change is not a start-and-stop process. It’s continuous and ongoing. Even more critical is the understanding that change is progressive. What alters one department, activity or process in a value chain one day may not affect another until days or even months later.

Principle 3: Major improvements and savings are realized by focusing on the business from a process rather than a functional perspective. This is the primary reason you’re reading this guide. In a nutshell, whereas some savings may be achieved by dealing with single activities in a value chain, the greatest savings are only realized when an entire process is evaluated and altered for improvement.


Principle 4: Processes should be selected for – Improvement – based on a clear notion of customer needs, anticipated benefits and potential for success! There’s a cleverly hidden message here. See if you can find it. It’s simply this. The largest gains may not always come from repairing those processes that seem to be in the greatest need of repair. So, think about the risk-benefit tradeoff before you tackle a process just because it looks needy. Look instead at the benefit to the customer before you decide which way to go.

Principle 5: Process owners should manage – Process Improvement – projects with teams that are cross functional, maintain a proper scope, focus on customer metrics, and enforce implementation deadlines! Here’s another hidden message. It’s that the biggest enemy of Process Improvement success is a set of vague, poorly defined parameters. PI projects must be distinctly framed, defined and mapped to succeed.

Where do we start?

Remember that we said there were five “general” steps to Process Improvement? Well, that’s still true. To work in the field, though, it’s important to step down from the general-theoretical level of analysis and management and establish a more detailed map of the entire process improvement game field. That way, you can not only identify where you are but see where you’re headed anywhere along the path. In essence, it adds to the framework of the five general steps to achieve a more productive and valuable set of milestones on the road to better management. Okay, here’s what the new set of steps and sub-steps looks like:

Step 1: Identify

  1. Who wants the project? Who’s the stakeholder? Customer?
  2. What kind of project is it?

Document? – Document and evaluate a process for improvement

Improve? – Document and Improve

Reform? – Major PI project

Develop? – Create a new process

Re-Look? – Maintenance of an existing model

  3. Do you have the support of everyone affected?
  4. Has a Project Definition Agreement or problem statement been prepared? Does it adequately define the project?
  5. Has the Process in question been defined?
  6. Does the Process span more than a single organizational element?
  7. Does the process occupy a significant place in the value chain? Has its value been defined?
  8. Is there a Process owner? Has one been designated? Is the owner high enough in the organization to span all of the organizational elements included in the process?
  9. Has an “Improvement Team” been appointed? Are all of the requisite interests and skills represented on the team?
  10. Is the Process ready to be mapped?

Key Components and Concepts of the “Identify” Step

The first step in the PI process leads to a firm foundation for the remaining steps. It defines not only what is to be done, but how much effort and how many resources will ultimately be committed to the project. Accordingly, it’s imperative that practitioners have an in-depth understanding of some of the more elusive concepts and sub-steps found in this realm. To simplify the manual, items have been divided into “Discussion Points (DP).” Here are the DPs for the Identify Step.


First Meeting

Start projects by meeting with the project requestor to accomplish the following:

  • Determine the type of project
  • Establish the authority of the project
  • Designate Team Members
  • Prepare and sign off a Project Definition Agreement (see below)
  • Create a Project Announcement

Project Types. Make certain you understand the project type. The reason should be obvious: a Re-Look project won’t require nearly as many resources as a full-fledged Process Improvement Change Project. Here’s a summary of the project types and what you can expect from each:

  • Document – Model the Existing Process
    Map and model an existing work process, establish the Value Stream and identify the non-value-adding activities. Review the process with a review team (people who do the work) to assure that it is accurate. These projects clarify processes and build process model libraries that are available for clarification, training and future projects.

  • Improve – Model and Improve
    Model the existing process and make changes that do not require major development efforts. A process expert team is formed to identify and test changes.
  • Reform – An In Depth Study and Reformation
    Model an existing work process and thoroughly overhaul it with an improvement team (of people who do the work). The team will strive for the best possible improvements. You will likely model the process several times before arriving at the final goal.
  • Develop – Create a Process from Scratch
    This Project is generally initiated to define a previously undefined process. An Improvement Team is selected with expertise covering all affected areas, and a model is built to simulate the future outcome.
  • Re-Look – Periodic Update of a Process
    A Re-Look Project is initiated when circumstances suggest that an existing process might benefit from periodic “tweaking” to make sure the process is performing in accordance with previously established guidelines and expectations. A Re-Look Project may be scheduled to be automatically initiated on a recurring basis depending on the business climate, or initiated whenever management feels it necessary.


Project Authority & Scope.

The person who requests a project should have authority that encompasses the entire project. If the project affects areas outside of his or her authority, it’s important that you obtain the approval of someone who does have proper authority.


Project Definition Agreement (PDA).

Needless to say, communication is vital in all circumstances where change may result from any action taken by management, an Improvement Team or a Project Sponsor. Accordingly, it is strongly recommended that the PDA that follows be employed to firmly document the parameters of all Process Improvement Projects initiated at any level for any reason. Skipping this step exposes a project to dangers that are not immediately apparent. For example, the real purpose of the project may be missed entirely, the project may “creep,” or a host of other avoidable problems may arise. Use the Project Definition Agreement to provide a clear view of what is beneath the surface.


Selecting a Project

Just because someone initiates a project doesn’t mean that it should be done. In fact, according to a 1995 study by Holland and Kumar (Business Horizons, May–June 1995), a high percentage of Process Reengineering Project failures were the result of selecting the wrong process to reengineer. Even if the numbers for Process Improvement projects aren’t identical, they are probably close enough to matter when resources are committed to a bad project. Here are some rules of thumb (ROT) to follow in the selection and approval of Improvement Projects:


  1. Select only Process Improvement Projects that deal with Processes that respond to customer needs.
  2. Select Process Improvement Projects based on the comparative benefits to be achieved.
  3. Select Process Improvement Projects that have the highest probability of success.

Selecting Improvement Team Members

An hour’s worth of discussion about who should and shouldn’t be on an Improvement Team wouldn’t be nearly as valuable as a simple checklist about team composition. Here’s yours:

  1. Select team members from diverse areas of the process under consideration and make sure they represent all of the skill and activity sets found within the process.
  2. Be leery of volunteers or members who come highly recommended because they’re not busy right now.
  3. Try to get the pros, the employees who’ve been around for a while and know the process under study inside and out. It’ll save you incredible amounts of time and effort when it comes time to define things.
  4. Try to have an odd number of team members.

The Process Owner’s Role

The concept of a Process Owner is relatively new and doesn’t fit very comfortably into most managers’ ideas of management structure. That’s because we’re more accustomed to the idea of a hierarchical arrangement than we are to an arrangement that crosses activity and organizational boundaries. Still, with regard to Process Improvement, the Process Owner has several significant responsibilities. They are:

  1. Overall process design
  2. Setting performance targets
  3. Budgeting and distribution of operating monies


Project Announcement

You don’t have to announce every project. A Re-Look, for example, may be nothing more than a quick examination of facts and circumstances. It’s your call. The purpose of the Project Announcement, though, is to give the manager who requested the project a chance to explain the project to the people in the areas that will be affected. The effect of this meeting can’t be overstated. Following this strategy, you will cut through layers of red tape.

Here are the items that are normally covered in a Project Announcement:

  • Why the project is being undertaken.
  • The designation of team members and other persons in leadership positions.
  • An explanation of individual project roles and responsibilities.
  • A statement of management’s position regarding the project and a request for support and cooperation from all participants.

The “Analyze” Step

Key Components and Concepts of the “Analyze” Step

The second step in the PI process builds on the foundation laid in the previous step and forms the basis for all steps to follow. In fact, this step contains the essence of the improvement process itself. It generally begins with the development of process flow charts and diagrams and ends with a documented understanding of what an existing process is actually achieving, its value, and what might be achieved by changing the process. It is here that process measurement metrics are also defined and recommended solutions to the process problems first identified in step one are initially stated. Here are the sub-steps for the Analyze Step and the Discussion Points that follow.

Step 2: Analyze

  1. Map/flowchart the Process.
  2. Identify the Value Stream represented by the Process under study.
  3. Model the value stream.
  4. Identify activities that represent wasted time, duplication of effort, bottlenecks, etc.
  5. Define measurement metrics.
  6. Using the metrics defined above, evaluate and discuss the problems initially identified in Step 1.
  7. Present the simulated process to the Improvement Team for comments, revision, and approval.

Capturing the Essence of Process Flow

One of the most frequently asked questions is, “Is there one best way to capture the true nature, direction and character of a process?” The answer is, “You bet!” In fact, it’s a short series of steps that not only captures the essence of a process most accurately and completely but ends only when a fully animated, dynamic model of the process is ready for analysis. Following these steps not only saves time but dramatically improves the accuracy of the Process Improvement itself! Here are the steps to follow.

1. “Marching Around” – Discovering the Real Flow – It’s rare to find an immediately available, true and correct description of a process under study. Most organizations have a procedural guide that defines what’s supposed to be done and by whom, but that’s about it. What’s worse, when a process spans more than a single organizational element, it’s rare to find a worker or manager who’s fully familiar with the way a process performs beyond his own realm of responsibility. Because there may be a vast and undiscovered difference between the way an organization’s procedural guide says a process is supposed to work and the way it actually works, it’s a good idea to see for yourself. One of the best and most useful methods of validating the essence of a process is walking from one end of the process to the other, taking quick notes as you go. We call this method of establishing an accurate, initial flow picture “Marching the Process.” The value of this step lies in the firsthand knowledge the Improvement Team gathers of where deviations may lie between what is expected and what is actually going on in a process, and of what issues may lie at the core of the process that make it work the way it does.

Do

March the Process, making quick notes of the process flow. Urge the process expert to start with the most common flow. Collect the duration of the activity, splits, combinations and resources used. Collect what is happening at each activity.

As the modeler Marches the Process, it is useful to collect all data entry forms used. If a form is electronic, do a screen print and highlight the entries made. Make a note on both the Process Collection Form and the data entry form so the form can be linked to the activity step. Similarly, if other tools, devices or aids are needed, collect samples or make note of the requirement.

Process duration is best collected by using three points to estimate the duration of an activity. Ask the question “How long does it usually take to perform this operation?” Then ask “What is the shortest amount of time it has taken to accomplish this task?” Finally ask “What is the longest amount of time it has taken to accomplish this task?” These three times can now be arranged into a triangular distribution that will be used in ProcessModel. The distribution will be written T(min, most likely, max). That time value can be placed in the time field of the general tab.
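For illustration, here is how such a three-point estimate behaves, sketched in Python (using the standard library’s random.triangular) rather than ProcessModel’s own notation, with hypothetical values T(6, 10, 25) minutes:

```python
import random

# Hypothetical three-point estimate gathered while Marching the Process:
# shortest 6 min, usual 10 min, longest 25 min -> T(6, 10, 25).
low, most_likely, high = 6.0, 10.0, 25.0

# Draw simulated task durations from the triangular distribution.
samples = [random.triangular(low, high, most_likely) for _ in range(10_000)]

print(f"sample mean: {sum(samples)/len(samples):.2f} min")  # theory: (6+10+25)/3 = 13.67
print(f"observed range: {min(samples):.1f} to {max(samples):.1f} min")
```

Note how the long tail pulls the mean well above the “most likely” answer of 10 minutes, which is exactly why a single-point estimate understates real workloads.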


2. “Interactive Mapping” – Creating the Flow Chart
– Use the information collected from Marching to develop a flow chart of the process. Having walked through the process, the modeler can now create the model faster and more accurately with the help of the process expert. The modeler will understand the terms, conditions and context of the information needed to build the map.

  1. Arrange the most common activities across the top of the page. This is called the “Happy Path.” Use the diamond (Decision) shape to handle questions or exceptions.
  2. Build the “Flow” (the activity steps) first. Animate the model with default time entries to show to the process team to confirm accuracy.


3. “Turning the Switch” – Modeling the Flow Chart – As great – and often revealing – as an initial flow chart may be, it’s not the end product. In fact, making even short-term decisions about the character of a process, and the changes it may require to perform better, based on a flow diagram alone can be disastrous. The reason lies at the core of all human systems. It’s called variation. Simply put, it’s extremely difficult to predict the overall performance characteristics of even the smallest systems when even slight amounts of variation are present in the way individual tasks or activities are completed. To examine the flow of work, material, ideas, communication, and the value that each represents or contributes to the overall process requires the use of a tool that can accommodate variation and reflect its impact on the metrics selected to measure process performance and improvement. ProcessModel is just such a tool. So, the final step to obtaining a true reflection of the current operational characteristics of a process is to “turn the switch” and watch the process perform!

Information needed to model:

  1. Process duration – as a starting point, use the three-point estimates gathered while Marching the Process to build a triangular distribution, T(min, most likely, max), and place that value in the time field of the general tab.
  2. Mimic the arrival pattern.
  3. Develop special logic.
  4. Model resources last.

The concept of value

With regard to value, there are actually three types of tasks and activities found in any process. They are:

Value-Added Tasks (VA) – Those tasks that either …

… add something to the product or service that a customer would pay for

… represent a competitive advantage for the company or

… add a desired function, form or feature to the service.

Required, Non-Value-Added Tasks (RNVA) – Those tasks that add no value to the product or service but are present because they do one of the following …

… are required by law or regulation

… are required by business necessity

… would jeopardize the process if they were removed.

Non-Value Added, Waste tasks (NVAW) – Those tasks that are neither required nor add value to a product or service. Some commonly encountered waste tasks are …

  • Rework
  • Expediting
  • Multiple signatures
  • Counting
  • Handling
  • Inspecting
  • Setup
  • Downtime
  • Transporting
  • Moving
  • Delaying
  • Storing

The Value Stream

The value stream consists of one or more processes that produce product or service value for the customer but consume resources in doing so. The value stream is generally the first place analysts look to either measure a process’s output or measure the impact of a process change on an organization.


Ten Ways to Immediately Improve a Process

Results from interviews with leaders in major corporations pointed to the fact that finding things to improve was not a problem, but finding the right things (the combination of projects) that would make a financial impact was arduous. Identifying those one or two key things that will impact the outcome of the system is difficult. ProcessModel provides a tool that automatically identifies the areas of the process that will provide the greatest impact if fixed.


The red bars in the resulting chart indicate waste in the system. Eliminate the areas with the largest red bars and the system will produce more with a reduced cycle time. You will always be working on things that will affect the outcome of the entire system.

There are myriad ways to improve the flow of anything through its own process and thereby create an environment of overall operational improvement. Here are a few of the most popular methods:

  • Reduce errors
  • Reduce duplication or fragmentation of tasks
  • Combine similar activities
  • Reduce handling
  • Move sequential tasks and activities closer together
  • Eliminate unused data
  • Standardize forms, operations and instructions
  • Remove artificial delays
  • Automate

Measurement Metrics

Clearly, when you’re analyzing a process, you have to have something to measure. If you’re not measuring something directly related to the performance of the process, then you’re not evaluating the process properly. What’s worse, if you have nothing to measure, then you’ll have no way to assess the impact of any changes that might be made to the process in the name of improvement. That said, there are really three different levels of measurement available to the analyst that serve to reflect the condition of the process under study as well as the company as a whole.

Operational Level Measurement Metrics – These low-level metrics measure the outcome of day-to-day operations. Although they are the most common metrics available to the analyst, a few simple rules should be followed in their selection. First, only select measures that are logical, relevant and sound for the process under study. Secondly, try to select measures that are not only easy to understand but relate to real-world processes. Finally, try to use measures that are commonly found in similar businesses, are a natural byproduct of the process itself and have been used successfully in the past. Some of these metrics are listed below, with a small worked example after the list:

  1. Throughput
  2. Cycle Time
  3. Production Cost
  4. Defect Rate
  5. Production Rate
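As a minimal illustration of the first two metrics (with hypothetical numbers), both fall straight out of entry and exit timestamps:

```python
# Hypothetical completion log: (hour entered, hour completed) for five items.
items = [(0.0, 4.1), (1.0, 5.0), (2.5, 7.2), (3.0, 8.9), (4.0, 9.5)]

cycle_times = [done - start for start, done in items]   # time in process per item
window = 9.5                                            # hours covered by the log

print(f"avg cycle time: {sum(cycle_times)/len(items):.2f} h")  # 4.84 h
print(f"throughput: {len(items)/window:.2f} items/h")          # 0.53 items/h
```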

The Importance of Communication

Communication is probably the single most important part of Process Improvement. Every step of the way, it’s vital that meetings be held with the stakeholders, Improvement Team members and managers to ensure that everyone knows what’s happening, what impressions are forming and what plans are evolving as a result. At each meeting, the Process Owner or the leader of the Improvement Team should briefly review activities, findings and circumstances as they currently exist, so that everyone will know about and feel free to question activities as they progress. In fact, to achieve the best results, a concentrated effort should be made to make certain that everyone is not only informed but fully knowledgeable of:

  1. The current state of the metrics selected to evaluate the process under study
  2. The problem focus the Improvement Team is dealing with.
  3. The project’s organization and workflow.
  4. The Process under study, if not in great detail then at least from end to end so that everyone is ready to formulate needed decisions.


The “Redesign” Step

Key Components and Concepts of the “Redesign” Step

The third step in the PI process builds on the foundation laid in the previous step and forms the basis for all steps to follow. In fact, once an existing process is understood, evaluated and documented, re-design can take place. In most cases, this step is the most ambitious. It can incorporate anything from developing future process flow diagrams and models to selecting best practices to be followed as benchmarks in the establishment of new process flows. Most often, this step is also that point in the improvement process where alternatives are not only recognized but agreed on by the managerial team. Here are the sub-steps for the Redesign Step and the Discussion Points that follow.

Step 3: Redesign

  1. Identify and chart an ideal “to be” model of what the future process should look like.
  2. Develop and build a computer model of the new process.
  3. Using the same metrics employed in the Analyze step, compare new and old model performance under identical conditions and assumptions.
  4. Identify best practices found in similar processes and incorporate those practices in the new model.
  5. Identify wide sweeping alternatives to the process as a whole.
  6. Again using the metrics defined above, evaluate and discuss the reaction of the new model to the problems initially identified in Step 1.
  7. Present the new or ideal model to management for comments and approval.

Redesign, a “Mopping up” process

The redesign of a process doesn’t happen all at once. Even small processes can contain surprises which, once encountered, can befuddle the most capable analyst. So, redesign is an iterative process. That means that redesign takes place over a time horizon during which members of the Improvement Team and others are encouraged to suggest ways in which a process should be changed to improve its metrics. They are then presented with a parade of progressive models, one by one, that reflect the suggested changes, until everyone agrees that nothing more can be gained by additional changes.

To ensure that this iterative process occurs as smoothly as possible, the following steps are suggested to take the Team from the old process to the new in as little time as possible.

  1. Each team member should be given a list of the common methods used to improve process flows, a list of the defined metrics that are being used to measure process evolution and improvement, and a summary of all of the assumptions, findings and outcomes of the different process improvement steps.
  2. If modeling tools are being used to assess changes in processes, meetings should be scheduled as soon as possible following model runs so that no impetus is lost in the redesign process.
  3. An active comparison chart of the metrics selected for evaluation should be maintained by the Improvement Team so that team members can see what progress is being made in the improvement process.
  4. A final minimum variation goal (MVG) should be established to signal when little more is to be gained by making additional changes to the process under study. When the variation level is reached, the improvement process should be considered concluded and the team should make immediate plans to move to Implementation. An MVG can be nothing more than a statement that says: once ten different trials have been completed and no more than a 0.5% improvement in a given metric has been achieved, no further trials or changes will be considered. (A small sketch of such a stopping rule follows this list.)
  5. All redesign efforts conclude with a presentation of results and recommendations first to the team itself and then to management for review and approval.
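Here is a minimal sketch of the stopping rule from item 4, in Python; the ten-trial window and 0.5% threshold come from the text, while the metric values are hypothetical:

```python
def mvg_reached(history, window=10, min_gain=0.005):
    """True when the best of the last `window` trials improved on the
    trial just before the window by less than `min_gain` (relative)."""
    if len(history) <= window:
        return False
    baseline = history[-window - 1]
    return (max(history[-window:]) - baseline) / baseline < min_gain

# Hypothetical throughput by redesign trial: big early gains, then a plateau.
throughput = [100, 110, 118, 120.0, 120.1, 120.1, 120.2, 120.2,
              120.3, 120.3, 120.4, 120.4, 120.4, 120.4]
print(mvg_reached(throughput))  # True -> conclude redesign, move to Implementation
```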

The value of optimization!


It’s not at all uncommon for analysts to relax their vigilance once an analytical goal has been reached. Under no circumstances should a Process Improvement Team follow suit! Rather, it’s vital that not only is a model of a suggested process produced but that it is evaluated from several different perspectives. To that end, the final “To Be” model should be subjected to as many of the following modeling situations as possible and the measurement metrics evaluated for unique or unusual behavior before any process is submitted for implementation.

  1. A full range of test scenarios developed from existing cases.
  2. A full range of test scenarios based on expected “extreme” but realistic situations.
  3. Multiple runs of a set of routine or standard scenarios to detect the potential for unusual albeit rare process behavior.


Evaluating Outcomes – The “Golden Rules”

The true value of a revised process isn’t always apparent. In fact, small gains in financial savings may appear bigger than they are once taken out of the business context in which they were derived. Accordingly, here are a few recommendations that might prevent you from drawing ill-fitting conclusions about the value of a process change.

  1. Always evaluate the impact of process change on a given metric in the context of other process and project information. No measurement result is good or bad by itself; rather, it only indicates the possibility of a problem, not the cause.
  2. Measurements should be collected as a natural part of the process itself, not as an artificial construct.
  3. When it comes to metrics, stick to a level of detail that’s sufficient to identify and analyze problems.
  4. Be systematic in your approach to measuring outcomes. If you’re not, you could have a great deal of trouble determining just where cause and effect come into play given changes in a process.

Presenting to Management

The secrets to presentation success are very simple and require little explanation.

Because a poor presentation can mean the difference between securing management approval for process change and not, it might be helpful to make note of some of the ways to look like a champion:

  1. Lead with a summary of what has been accomplished before getting into the practical details. In presentation parlance, “Tell them what you’re going to tell them – concisely, quickly and honestly.”
  2. Follow your summary with your recommendations, again concisely but not quickly. Consider having each member of the team whose area is affected by the recommended changes make the respective recommendations.
  3. If the presentation is to secure approval, have a summary sheet ready for each recommendation, with a signature block at the bottom for the approving authority’s approval or disapproval.
  4. Get the management team involved in your presentation by using the animation of the simulation to show the principles of the change. If this is done in the proper order it will have a dramatic effect on your success.
    • Introduce how it works.
    • Show an area of particular interest.
    • Show the scope of the work that was accomplished to support the recommendations.
  5. This single step sets you apart from every other presentation that the management team will see in the course of the year. They will see how you arrived at your conclusions and feel confident in your recommendations.
  6. Always be prepared to offer further justification for any recommended solution. Offer your best and most significant reasons first and then follow with additional justification if asked.
  7. Make sure the decision maker receives a full set of any facts, reports or graphics you intend to present so that they can review them at their leisure later. Send the information to their office, or present it to the manager only after your presentation.
  8. Always summarize what has transpired at the end of the presentation. Again, be concise and quick.

If you receive approval in the meeting, sit down and shut up. I have watched several presenters talk their way out of project approvals, because they didn’t stop when they had made the sale. Enjoy the moment – now the real work begins.

The “Implement” Step

Key Components and Concepts of the “Implement” Step

The fourth step in the PI process builds on the foundation laid in the previous step and forms the basis for all steps to follow. Possibly the hardest, most critical step on the road to process improvement, implementing changes in process flows, measurement metrics, and managerial responsibility can be traumatic. Without this step, however, nothing accomplished in the first three steps matters. Despite its criticality, the process of implementation is really fairly straightforward. It begins with the development of an implementation plan and ends when all planned changes have taken place. Here are the sub-steps for the Implement Step and the Discussion Points that follow.

Step 4: Implement

  1. Evaluate the new process model for phased or sudden-death implementation.

  2. Develop an implementation plan.
  3. Appoint an Implementation Coordinator.
  4. Evaluate the cultural impact of the implementation of the new process and develop a comprehensive change program to accommodate expected responses.
  5. Get everyone’s fingerprints on the knife!
  6. Present the final plan to management for comment and approval.

Fingerprints on the Knife

As is the case with all aspects of management, cooperation at all levels is essential if implementation of a new Process Plan is expected to go smoothly. To that end, it’s vital that everyone involved be apprised of what’s about to happen, how long it will take, the benefits of implementation, and how aspects of the current process are about to change. Informed regularly and honestly, people will lose the fears normally associated with change, and a greater degree of cooperation will ensue. In short, an informed population is more likely to take faster ownership of a new idea than a population simply forced to accept circumstances.

Implementation Catch Points

Two basic elements need to be considered when planning the implementation of process change. As simple as that may sound, these two elements are responsible for more problems in implementing process improvements than anything else. They are:

Scale

While a single-activity process can be changed in a few minutes, a process consisting of hundreds or even thousands of activities could take a year or more. That means that part of a process may easily be working under different leadership, doing vastly different tasks, while another part continues as usual. Planning for and managing a large-scale process change requires the use of computerized flow-charting and project management software. Settling for anything less risks losing not only momentum for change but also the accurate revision of the process itself.


Complexity

Where scale represents size, complexity speaks to the presence of myriad tasks that may or may not be similar in function or even related in form or outcome. Accordingly, it’s vital that implementation plans include supporting activities for all sub-processes contained within a major process. It’s equally vital that consideration be given to the differing requirements mandated by the types of activities being implemented. The following is a partial list of requirements that should be planned for various activities within a given process:

a. Equipment needed

b. Training needed

c. Policies and Procedures needed

d. Facilities/work spaces needed

e. Forms needed

f. Computer Programming needed.

Sudden Death vs. Phased Implementation

Occasionally, a term comes along that describes a situation or circumstance with such accuracy that no further explanation is necessary. “Sudden Death” is certainly one of those. Taken from the computer programming industry, it’s generally used to describe a situation where a computer program is placed into immediate production (use) without backup, parallel safety systems or redundant alternate programs operating in the background. In short, a programmer who uses sudden death implementation is risking everything on the chance that a program just might not work, ergo the term sudden death. Despite its ominous definition, sudden death implementation of a process improvement plan doesn’t have to imply mayhem and horror. Well, not if certain rules are followed. Here’s what they are:

  1. The greater the number of organizations a process spans, the greater the argument for Phased Implementation of planned changes.
  2. All processes that occur within or span a single department are candidates for sudden death implementation of process change.
  3. The decision line between Phased Implementation and Sudden Death generally occurs around the ten-activity mark. In other words, a process with ten or fewer activities can usually sustain the shock of complete revision without meltdown, as long as a comprehensive implementation plan is created and followed. (A small sketch of these rules of thumb follows the list.)
  4. Possibly the two most important aspects of any implementation plan, regardless of style, are the establishment of deadlines for designated changes and the assignment of specific responsibility for those changes.
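Taken together, rules 1 through 3 amount to a simple decision helper. The Python sketch below is purely illustrative (the ten-activity threshold and single-department test come from the rules above, not from any standard):

```python
def implementation_style(activity_count: int, departments_spanned: int) -> str:
    """Rules of thumb from the text: small, single-department processes
    can take a sudden-death cutover; everything else gets phased in."""
    if departments_spanned <= 1 and activity_count <= 10:
        return "sudden death"
    return "phased"

print(implementation_style(8, 1))    # sudden death
print(implementation_style(40, 3))   # phased
```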

Implementation Plan Contents – Making the New Process come to Life!

The implementation plan for a single-activity process change can be as simple as an email announcement that establishes the time and place the change will occur. That’s rare though. Normally, even the simplest processes contain sufficient complexity and scale to warrant a detailed plan with supporting documentation to ensure the timely and accurate change of a current process. In fact, to be completely prepared, the Implementation Coordinator should make certain that the plan itself contains each of the following:

  1. A copy of the initial flow chart depicting the old Process.
  2. A copy of the final flow chart depicting the new “To Be” Process.
  3. A Gantt chart showing each of the activities within the process undergoing change and when the activities are scheduled to begin and end.
  4. Where needed, a description of the work to be done to add/change/alter each activity in the Gantt chart.
  5. A list, by name, of personnel with assigned responsibilities for each of the planned changes.
  6. A list of material and support requirements necessary to ensure the timely revision of each activity.
  7. A copy of the approval letter for the planned changes signed by the approving authority.

The “Evaluate” Step

Key Components and Concepts of the “Evaluate” Step

The fifth step in the PI process builds on the foundation laid in the previous step and forms the basis for a new cycle of continuous Process Improvement through vigilance. The last but not “final” step in the improvement process, this step is continuous. It begins with the design of an evaluation plan but never ends. Rather, once the plan is accepted and implemented, it is continually updated and the process it is evaluating is continually changed and improved. Here are the sub-steps for the Evaluate Step and the Discussion Points that follow.

Step 5: Evaluate

  1. Develop an evaluation plan to ensure periodic process review.
  2. Establish plan goals and “triggers” that would signal the need for further Improvement methods once a cycle review is completed.
  3. Build a set of “situational standards” that would signal the need for an out-of-cycle process review.
  4. Prepare recommendations for future process improvements and changes not possible due to technological or financial constraints.
  5. Prepare an after action report and store all supporting documentation in a safe place.

The purpose of an Evaluation Plan – Monitoring!

An Evaluation Plan actually serves a dual purpose. Because a model can’t possibly represent all aspects of reality, there’s no way to be absolutely certain that a Process Improvement plan will actually improve a process to the degree planned. You can get pretty close, but you can’t be perfect in your forecast. In fact, there’s long-standing evidence that simply paying attention to a business process will often cause it to change to a degree that exceeds expectations, without a manager actually altering the way things are done. So, things need to be monitored. The Evaluation Plan serves first as an immediate “sensor” available to measure the performance of the new process, both as it’s being implemented and when it’s finally fully in place and operating normally. It also serves as an ongoing sensor established to detect, over time, changes in either process performance or situational parameters that might affect process outcomes.


Evaluation Plan Contents

The contents of the Evaluation Plan should be determined based on the needs of the organization and the purpose of the process under evaluation. In the simplest sense and to achieve the purposes listed above, it should contain the following:

  1. A copy of the Process Flow Plan
  2. A list of the metrics selected to evaluate process performance, including:
    1. The expected statistical parameters associated with each metric
    2. A description of what each metric means and how it is to be measured.
    3. A sampling plan showing how statistical samples are to be taken.
  3. A CD containing a copy of the model used to project expected performance metrics.
  4. A schedule for future evaluations, showing when evaluations will take place, who has responsibility for collecting the data and to whom the resulting information will be sent.
  5. A list of “triggers” or metric values that might signal the need for further evaluation of the process. Such triggers may be internal or external to the process and may be as simple as the occurrence of an extreme metric value. They may also be situational in nature and may include such things as the discontinuation of some portion of a product line.

Summary

It wouldn’t be prudent to end this Process Improvement Guidebook without a summary that takes advantage of what you’ve probably learned about Process Improvement itself.

Let’s go back to the beginning and take another look at things from a different angle.

Did you notice how neatly everything fit into the Five “General” Steps to Process Improvement – Identify, Analyze, Re-design, Implement and Evaluate? That’s because every process has to have a skeleton to hang its activities on. Without that framework, a lot of peripheral information needed to make things more understandable might be lost. So, each general step contains a lot more information than you might need to achieve a rapid, accurate, assessment of a process. That’s where Process Improvement itself takes over.

To fully appreciate and get the most use from any process – and Process Improvement IS a process! – you have to strip away the varnish and look at the “value-added” steps themselves. In other words, you have to find those steps that contribute directly to the final product, the outcome that you, the customer, are really interested in. Sometimes that’s most easily done using a memory-jogging, cleverly designed list of key points.

Did you notice that several of the key sub-steps begin with the same letter, and that that letter is “M”? These words or phrases are the value-contributing steps in the overall Process Improvement process itself. We call them the “MI-6” – the Mighty Important Six! They are…

March it

Map it

Model it

Mop it up

Make it

Monitor it

Find Out More Information


Don’t leave Money on the Table — Optimize

Posted by admin on October 9, 2009 in Optimization

www.processmodel.com

“Great presentation! Are you sure you’ve found the best solution for our business?”

These words, or questions of a similar nature, are heard time and time again in meetings where simulation has been presented. Most of the time, systems are so complex that finding the “right solution” is easier said than done. To find the exact combination of conditions that will give you the best possible system performance, you need to examine multiple scenarios. Every situation can require some modification of your simulation model. The sheer number of parameters and combinations can create thousands or even hundreds of thousands of possible experiments. You’re left facing an impractical task, and potential improvements are never realized due to a lack of time for experimentation.

With ProcessModel, the task just became a whole lot easier. ProcessModel has an optimizer that automates the process of creating, running and analyzing experiments. You give ProcessModel a goal or an objective, and the software will adjust the parameters of the model to meet that goal. For example, you set a goal of reaching the highest throughput with the minimum number of resources and the lowest WIP. ProcessModel runs experiments, changes the parameters, compares the output and shows you the best settings of parameters to achieve your goal…while you work on another project!
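Conceptually, the optimizer is automating something like the toy search below. This Python sketch is purely illustrative: a plain exhaustive search over made-up capacity numbers, not SimRunner’s actual algorithm or API, but it shows the shape of the job — score every parameter combination against a goal and keep the best.

```python
import itertools

def run_model(workers_a, workers_b):
    """Stand-in for one simulation run (hypothetical capacities)."""
    throughput = min(workers_a * 3.0, workers_b * 1.0)  # slowest stage limits flow
    wip = 2.0 * abs(workers_a * 3.0 - workers_b * 1.0)  # imbalance builds WIP
    return throughput, wip

def objective(throughput, wip, total_workers):
    # Goal from the text: high throughput, few resources, low WIP.
    return throughput - 0.5 * total_workers - 0.25 * wip

candidates = itertools.product(range(1, 6), range(1, 16))  # staffing levels to try
best = max(candidates, key=lambda p: objective(*run_model(*p), sum(p)))
print("best staffing found:", best)   # (5, 15) under these made-up numbers
```

A real optimizer is smarter than this brute-force loop, but the inputs and outputs are the same: a goal, a set of adjustable parameters, and the best settings found.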

When to Use Process Optimization

When should I use optimization? Well, I run optimization when the number of choices exceeds the number of experiments I am willing to run to find the answer. For me, if the number of experiments will exceed 15, then I will set up an optimization. It only takes a few minutes to set up, and I can do other things while the computer is running experiments. Where there could be a large number of experiments, I might let the optimizer run while I go to lunch or home for the night. When you return, the optimizer will have identified the best solution. It just doesn’t get any better than that!

How will I know how many experiments might be needed?

It depends on the number of factors and the number of choices for each of those factors. To figure out the number of experiments required, multiply the number of possible options for one factor by the number of possible options for the next factor, and so on. For example, if there are 3 job functions (factors) and the personnel in each of those job functions can vary over 5 levels (e.g., from 16 to 20), then the number of possibilities would be 125.

Think of it like this: you have 5 choices for the first factor. For each of those choices, there are 5 choices for the second factor, making 5 * 5 = 25. For each of those 25 combinations, there are 5 choices for the third factor, and so on. Thus, we get 125. It is easy to see how easily the number of experiments can get completely out of hand. If you have only 6 factors, with five choices each, then the number of experiments would be almost 16,000! If two more factors are added, the number of possible experiments exceeds 390,000.
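The arithmetic is simply the number of levels raised to the power of the number of factors, as a quick check confirms:

```python
# Experiments required for a full grid: levels ** factors.
levels = 5
for factors in (3, 6, 8):
    print(f"{factors} factors x {levels} levels -> {levels ** factors:,} experiments")
# 3 factors x 5 levels -> 125 experiments
# 6 factors x 5 levels -> 15,625 experiments
# 8 factors x 5 levels -> 390,625 experiments
```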

What about design of experiments?

Good question. First, in a complex system, design of experiments is used to limit the number of possible trials to save time and money. That means some experiments won’t be run because of an arbitrary decision not to try those combinations. Rather than artificially eliminate possibilities, why not let the computer eliminate possible trials based on the response achieved? That is what simulation optimization does. It does a smart design of experiments, and evolves along the way. It is kind of like the natural selection of the tsetse fly: the weak ones die off and the strong ones mate and proliferate. In an optimization I performed, there were over 400,000 possible experiments. The optimization found a great solution after running 270 experiments! That’s intelligent, automated design of experiments.

Second, classical design of experiments wasn’t really built to handle a range of possibilities for each factor (e.g., 15 to 20). If a range of possibilities is used, the number of experiments grows dramatically, and we are back to analyzing large numbers of experiments to figure out which is best. That is a poor choice, because you still do the work of analyzing the results. In a complex system this could mean hundreds of hours of poring over data to find the “best” result.

Optimization techniques included in Basics 2 Training Class – Sign up Now

How To Do Optimization – It’s really not that hard

  1. Prepare a validated model
  2. Insert scenario parameters
  3. Run the model — Launch SimRunner
  4. Set the goal of the optimization
  5. Select the parameters to adjust
  6. Optimize
  7. Plug optimization values into the model

Step 1 — Prepare a validated model

Since this article is not about validating models, I am not going to spend much time on validating and verifying your model. That said, don’t optimize unless the model is validated. It is kind of like kissing the wrong girl: the outcome is uncertain and the probability of failure is high.

Some of the simple things that you can do include:

  • Watch the animation to determine that the model performs all intended functions.
  • Make certain that the data going into the model is correct
    • Test all distributions (see the Model Object Miscellaneous – verify distribution values)
    • Export model data to Excel and check each time value, time unit, capacity, queue size, move time, etc. I use a red pen to check off each entry
  • Compare the overall output to real data from company systems. This is one of the vital reasons for doing a model of the as-is system…you can gain confidence in the predictive capability of the model before making changes.
  • Compare critical parts of the model with known or estimated data.

Step 2 – Insert Scenario Parameters into your Model

Now that you have a validated model to work with, create scenario parameters for each aspect of the model that you want to change. To help the learning process, we are going to use a very simple model to illustrate the steps used. The same procedure will apply to complex models.

The model

In this simple model, items arrive every 20 minutes. The first process takes 1 hour to complete, while the second process takes 3 hours. The goal of this model will be to maximize throughput with the minimum resources. You could easily calculate in your head that 3 workers are needed at the first process and 9 at the second. Obviously, you wouldn’t simulate a model you already knew the answer for, but I think you will find this simple example illustrative and helpful to your understanding.
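As a quick aside, here is a minimal Python sketch of that mental staffing math — workers needed equals processing time divided by the inter-arrival time, rounded up. The activity names are just illustrative labels:

```python
import math

# Back-of-the-envelope staffing check for the example above.
interarrival = 20                                   # minutes between arrivals
process_times = {"Process": 60, "Process2": 180}    # minutes per item

for name, minutes in process_times.items():
    workers = math.ceil(minutes / interarrival)
    print(name, workers)                            # -> Process 3, Process2 9
```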

First, define the scenario parameters for the elements that will be changed in the model. Scenario parameters are like control levers used to set the number of resources (or any other parameter you want to change). When used with the optimizer, they are automatically adjusted through a range that you specify.

The Scenarios dialog is found on the Simulation menu.

We will be adding a scenario parameter for each of the workers. I always start scenario parameter names with “s_” so that I can spot them in my model. I immediately know that the parameter is for scenarios rather than a variable or attribute.

I am defining 2 scenario parameters. One scenario parameter will be used for each resource type.

The Default Value acts as a starting point for the optimizer. Although we already know the answer, I am going to pretend that I don’t and set the default values to 2 and 8 respectively.

The scenario parameter then needs to be placed in the dialog where the adjustments will be made. Since the quantity of resources will be changed, we will go to the Resource dialog. Delete the contents of the quantity field and right-mouse-click in the same field. Use the keywords selection to select the appropriate scenario parameter.

Since the Worker is assigned to the activity titled Process, we need to make certain that the capacity of the activity is large enough to handle the assigned workers. Without changing the Capacity of the Activity, it is as if we have a lot of resources but no workspace for them to perform their tasks. To make this change, delete the contents of the capacity field and right-mouse-click in the same field. Use the keywords selection to select the appropriate scenario parameter.

Repeat the procedure for Worker2 and the activity where that resource is assigned.

Step 3 — Run the model and Launch SimRunner

With the scenario parameters in place in the model, run the model and save the output.

This is important because saving the output creates a “map” of the things that can be compared by the optimizer. This “map” will allow us to set a target for which the optimizer can then hunt.

Launch SimRunner from the Tools menu.

Step 4 – Set the goal of the optimization

The objective function is an expression used to quantitatively evaluate a simulation model’s performance. In other words, you are going to create a target that SimRunner will aim for.

By measuring various performance characteristics and taking into consideration how you weigh them, SimRunner can measure how well your system operates. However, SimRunner knows only what you tell it via the objective function. For instance, if your objective function measures only one variable, Total_Throughput, SimRunner will attempt to optimize that variable. If you do not include an objective function term to tell it that you also want to minimize the total number of workers used, SimRunner will assume that you don’t care how many workers you use. Since the objective function can include multiple terms, be sure to include all of the response statistics about which you are concerned.

In our example we are going to maximize the items processed while minimizing the number of resources.

In the Response Category field, select Entity. The statistics for Entities that can be used in the objective are now shown in the Response Statistic field. Double-click on “Item – Qty Processed” to move it into the Objective Function (to make it part of the targeting formula). Weighting could be added to adjust the relative importance of each response statistic.

We are not going to be making changes to the weighting factors in this example, but a brief explanation will help to show how it will be used. Weights serve as a means of load balancing for statistics that might bias the objective function. Since most simulation models produce a variety of large and small values, it is often necessary to weight these values to ensure that the objective function does not unintentionally favor any particular statistic. For example, suppose that a run of your model returns a throughput of .72 and an average WIP of 4.56. If you maximize throughput and minimize WIP by applying the same weight to both (W1=W2), you will bias the objective function in favor of WIP:

Maximize[(W1)*(Throughput)] = .72

Minimize[(W2)*(WIP)] = 4.56

In this case, since you want to ensure that both statistics carry equal weight in the objective function, you apply a weight of 6.33 (W1 = 6.33) to throughput and 1.0 (W2 = 1.0) to WIP so that both contribute equally.

Maximize[(W1)*(Throughput)] = 4.56

Minimize[(W2)*(WIP)] = 4.56

In situations where it is necessary to favor one statistic over another, balancing the statistics first will make it easier to control the amount of bias you apply. For example, if you apply a weight of 12.67 (W1 = 12.67) to throughput and 1.0 (W2 = 1.0) to WIP, the objective function will consider throughput to be twice as important as WIP.
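For the curious, the balancing arithmetic is easy to reproduce. Here is a small Python sketch of the idea (my own illustration, not SimRunner code), using the throughput and WIP values from the example:

```python
# Scale one statistic so it contributes as much as a reference statistic;
# a bias factor then makes it count proportionally more.
def balanced_weight(reference_value, statistic_value, bias=1.0):
    return bias * reference_value / statistic_value

throughput, wip = 0.72, 4.56
w1 = balanced_weight(wip, throughput)            # ~6.33
w2 = 1.0
print(round(w1 * throughput, 2), w2 * wip)       # both 4.56

w1_double = balanced_weight(wip, throughput, bias=2.0)
print(round(w1_double, 2))                       # ~12.67: throughput counts 2x
```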

The other two statistics that need to be part of the Objective Function are the resource counts. In the Response Category field, select Resource. Double-click on “Worker – Units” and “Worker2 – Units.” Both of these statistics need to be minimized. To minimize the number of Workers, click on the line in the objective function that needs modification. Select the Min (minimize) radio button and select the Update button, as shown below.

Do the same for Worker2.

Step 5 – Select the parameters to adjust

Now that the “Target” has been identified, set the range for which scenario parameters are allowed to change. All Scenario Parameters defined in your model are displayed in SimRunner as Macros. In this article, macros will consistently be referred to as Scenario Parameters. SimRunner will use these Scenario Parameters to improve the value of your objective function.

The reason for setting the range is twofold: first, by limiting the range, the number of experiments is reduced allowing the targeting system to find a result faster; second, the system being optimized may have physical limitations that cannot be exceeded. An example physical limitation would be space constraints for the number of machines. The physical layout may restrict the number of machines to three. To allow the number of machines to vary from 1 to 5 would be useless unless building changes were going to be allowed as part of the project. In the more conservative case, the range of the changes would be restricted from 1 to 3.

Select the Define Inputs button and select the Scenario Parameters (Macros) to be adjusted in this optimization.

Select each Scenario Parameter in turn (in the Macros Selected as Input Factors field), set a Lower Bound and Upper Bound, then select the Update button.

Set the bounds as shown above.

There are a lot of capabilities that won’t be covered in this tutorial. As a matter of fact, the whole area of pre-analyzing the model to determine the warm-up length and number of replications will be covered in a future newsletter.

Step 6 – Optimize

With a few final settings the optimizer is ready to run. Select the Optimize button on the top toolbar.

Optimization profile — SimRunner provides three optimization profiles: Aggressive, Moderate, and Cautious. The optimization profile reflects the number of possible solutions SimRunner will examine. For example, the Cautious profile tells SimRunner to consider the highest number of possible solutions—to cautiously and more thoroughly conduct its search for the optimum. As you move from Aggressive to Cautious, you will most often get better results because SimRunner examines more solutions—but not always. Depending on the difficulty of the problem, different profiles may produce solutions that are equally good. If you are pressed for time or use relatively few input factors, the Aggressive profile may be appropriate.

Convergence percentage — With every experiment, SimRunner tracks the objective function’s value for each solution. By recording the best and the average results produced, SimRunner is able to monitor the progress of the algorithm. Once the best and the average are at or near the same value, the results converge and the optimization stops. The convergence percentage controls how close the best and the average must be to each other before the optimization stops. A high percentage value will stop the search early, while a very small percentage value will run the optimization until the points converge.
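SimRunner’s internal stopping rule isn’t spelled out here, so treat the following Python sketch only as an illustration of the idea — stop once the best and average objective values sit within the convergence percentage of each other:

```python
# Illustrative convergence test (not SimRunner's actual code): the search
# stops when the population's best and average objective values are within
# a set percentage of each other.
def has_converged(best, average, convergence_pct):
    return abs(best - average) <= convergence_pct / 100.0 * abs(best)

print(has_converged(best=96.0, average=95.5, convergence_pct=1.0))  # True
print(has_converged(best=96.0, average=80.0, convergence_pct=1.0))  # False
```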

What more is there to learn?

The remaining options are very straightforward, so I won’t spend time on those fields.

The next step is to hit the Optimize button and then the Run button.

SimRunner tries to maximize the objective function. It just so happens that this model found the best result on experiment number 1. That won’t happen often; we had a simple problem with a limited number of experiments, and the algorithm guessed correctly. In this simple example it is easy to see how the objective function value was reached. The objective function was given 108 points (one for each entity that exited the model) and lost a point for each of the workers (minus 3 for Worker and minus 9 for Worker2), leaving the final objective function at 96. Note that experiment number 8 had the same number of Items processed, but lost one additional point because of an additional worker.
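You can verify that score with trivial arithmetic:

```python
# Reproducing experiment 1's score by hand: one point per entity that
# exited the model, minus one point per worker unit in the objective.
qty_processed = 108
worker_units, worker2_units = 3, 9
objective = qty_processed - worker_units - worker2_units
print(objective)   # 96 (experiment 8 scored one point lower: one more worker)
```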

Step 7 – Plug the Best Results into ProcessModel

The optimization should not be the final run of the model. Take the time to try the results in the simulation model, turn on the animation and watch the simulation run. This provides additional assurance that the model is working correctly with the new values. It also provides an opportunity to identify other areas for improvement not included in the objective function.

SimRunner intelligently and reliably seeks the optimal solution to your problem based on the feedback from your simulation model by applying some of the most advanced search techniques available today.

The SimRunner optimization method is based upon Evolutionary Algorithms. Evolutionary Algorithms are a class of direct search techniques based on concepts from the theory of evolution. The algorithms mimic the underlying evolutionary process in that entities adapt to their environment in order to survive. Evolutionary Algorithms manipulate a population of solutions to a problem in such a way that poor solutions fade away and good solutions continually evolve in their search for the optimum. Search techniques based on this concept have proven to be very robust and have solved a wide variety of difficult problems. They are extremely useful because they provide you with not only a single, optimized solution, but with many good alternatives.
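To make the idea concrete, here is a deliberately tiny evolutionary search written in Python. It is an illustration of the concept only — not SimRunner’s actual algorithm — and the fitness function stands in for a real simulation run, using hypothetical per-worker capacities chosen to echo the earlier example:

```python
import random

# Each "solution" is a pair (workers, workers2); fitness stands in for
# a simulation run, rewarding throughput and penalizing staff count.
def fitness(solution):
    workers, workers2 = solution
    throughput = min(workers * 36, workers2 * 12)   # hypothetical capacities
    return throughput - workers - workers2

def evolve(bounds, population_size=10, generations=20):
    pop = [tuple(random.randint(lo, hi) for lo, hi in bounds)
           for _ in range(population_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:population_size // 2]      # poor solutions fade away
        children = []
        while len(children) < population_size - len(survivors):
            a, b = random.sample(survivors, 2)      # good solutions recombine
            child = tuple(random.choice(pair) for pair in zip(a, b))
            if random.random() < 0.2:               # occasional mutation explores
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child = child[:i] + (random.randint(lo, hi),) + child[i + 1:]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve([(1, 5), (5, 12)]))   # tends toward (3, 9), scoring 96
```

Run repeatedly, the population drifts toward (3, 9) — the same staffing the optimizer found above.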

Sign up for the optimization training

Haven’t gone to training yet? See the Course Schedule for Basics 1 Training


Value Stream Mapping Will Delay Production, Drain Your Cash and Wreak Havoc in These 5 Parts of Your Business

Posted by admin on October 8, 2009 in General Info

www.processmodel.com

See the five process problems experienced by most companies, why they can’t be solved with traditional Lean tools, and how you can tackle them quickly without even breaking a sweat. Read on…

Don’t get us wrong. This information isn’t anti-lean; it is pro-lean. Lean techniques have proven their worth time and time again, and almost every technique you have learned in lean should still be used. If you can solve the problem using simple diagrams, don’t read any further. But if you haven’t noticed, some process problems are difficult to solve because the traditional tools just aren’t adequate. Here are some examples:

  • When several processes use the same resource
  • When product mix changes frequently
  • When there is high variability in processing times
  • When route selection varies
  • When objects go back through a process several times

This isn’t the entire list of issues, but in each case a value stream map can’t paint a clear picture of what is happening (explained below). Another tool is needed to see and understand the behavior of a complex system. ProcessModel simulation software allows you to create a simple working model of a complex process so that you can see how it runs and understand its behaviors. You can actually watch it run, observing bottlenecks, delays, and other types of waste. You can make changes to the model and “simulate” the behavior of the changed process. ProcessModel doesn’t replace your current tools, but it allows you to increase the effectiveness of your lean training.

Would you like to try ProcessModel? Click here to receive your personal trial version.

Need more information? Read on to learn what processes can best be solved with simulation.

Resources shared by different processes

Manufacturing, service, and health care systems often face a common problem: expensive resources must be shared by several production lines or used by multiple departments. At Boston Medical Center, the Emergency, Inpatient, and Outpatient departments all share multi-million dollar CT scanning equipment. The equipment is expensive, requires highly trained personnel, and has a fairly large footprint, so it can’t be moved. The usage of the system does not justify any one of the groups having their own CT scanners. When using Value Stream Mapping (VSM) to create a picture of how the system looks and behaves, which department do you use — Emergency, Inpatient, or Outpatient? If changes are made to the value stream of one of the processes without the others, the result is often disastrous. Many shared resource problems in a wide variety of industries go unsolved because the issues are multi-dimensional. Because of this complex nature, the problem is difficult to solve using traditional VSM methods.

Product mix changes frequently

In service systems and in the assembly of expensive items, it is common to run more than one type of “product” on the same line at the same time. This means that several products, each with unique assembly times, assembly requirements, and routing, will be present on the same line at the same time. Furthermore, the product mix may change from day to day and during the day. Creating a clear picture of the problem becomes difficult with traditional VSM methods. If a snapshot is created of how things look right now, in a few hours the picture would be completely different. A change to processing times causes buildup to occur in different areas. Traditional VSM becomes ineffective because many pictures would be needed to capture the system. And which of those pictures would be used to make a change?

High variability in processing times

In many service and health care systems, variability is a tougher problem to solve than in manufacturing systems, where the inputs can readily be controlled. With traditional tools the variability is changed to a simple average. Using averages to make decisions on a complex system is like using a chainsaw to perform surgery…the results are never going to be right.

High variability in routing selection

When high variability in routing selection is first mentioned, it often conjures up pictures of a manufacturing job shop. Every order has the potential to move across the machines in a unique sequence. This high variability is often described as having “no process.” That description is simply inaccurate. Every single order has a tightly defined process; it is just difficult to describe with a single picture from an overview perspective. This problem is not limited to manufacturing job shops. Health care has a variety of these problems, including the laboratory and the emergency department. Many general service processes exhibit the same behaviors, including inbound call centers, insurance underwriting, etc. Again, traditional VSM methods can’t effectively create a representative picture of this type of system, because a single picture would provide little insight into the complex problem.

Rework

A lean practitioner will cringe at the very word “rework” because it means that the system has a built-in quality problem. In the “lean world,” allowing the system to have built-in rework is costly, time consuming, and just “wrong.” It goes against the very nature of lean teaching. That being said, reduction of rework may take time. In one company we observed that it took over one year to correct a design which caused manufacturing failures, and then incorporate those changes into products. In the meantime, the company still needed to meet production requirements until the engineering changes rippled through the system. In many systems the problem of rework is resolved over time. The problem with rework is that it pushes work back into the existing processes, and calculating the effect of this random rework is difficult. If the rework is relatively small, a VSM can ignore the problem and provide a clean picture of the system. If rework is high, the variability of the process doesn’t allow VSM to paint a clear picture of the problem or show the result of changes to the system over time.

Each of the systems discussed above presents a problem that moves beyond the capabilities of traditional lean tools. Unlike many of the problems for which VSM can be used effectively, the effect of a change is hard to determine in each of the cases listed above.

That is why ProcessModel simulation software was created. ProcessModel simulation software provides you with an easy way to expand your toolbox to handle problems not suited for traditional value stream mapping.

If you would like to experience a new tool to help you solve complex process problems, just click here to receive your free personal training version (this isn’t a trial, it’s a non-expiring, usable version of the software).


Restaurant Savors Rich Rewards with Process Optimization

Posted by admin on October 8, 2009 in Food Services, Optimization

www.processmodel.com

_______________________________________________________________________________________________

It’s Friday night at the Restaurant. As the owner of this buzzing social phenomenon, you smile at the sight of bustling waiters and the line of patrons at the door just waiting to experience the fine dining your restaurant is famous for. For the moment everything seems ideal…who doesn’t want their restaurant to be the Friday night hot-spot? But then, out of the corner of your eye, you see a couple from the back of the line check their watches and reluctantly walk away. You quickly point them out to the host and ask if he has spoken to them. He says, “No, but the wait is at least two hours…they probably just don’t have that kind of time.” Looking around, you notice the irony of many empty chairs. Even more disconcerting, you see several other groups approaching your door, but also turning away.

If this experience sounds familiar to you, you may also be grappling with the seemingly formidable problem of inefficient throughput.

How much money is walking out your door?

This question has been a sobering reality for popular restaurant owners everywhere…until now.

Executive administrators at the Restaurant have shown revolutionary vision in their enlistment of the latest techniques to reduce waiting time, better utilize the facility and dramatically increase revenue.

Their foresight combined with the visual capabilities of ProcessModel has provided the company with the potential to make this lackluster experience a distant memory for Restaurant customers in the future.  Here’s how…

Problem

Administrators at the Restaurant knew that the notorious problem of inefficient restaurant service can be attributed to either of two major culprits: either the serving process itself is inefficient, or there are unused chairs despite the line outside the door. Administrators at the Restaurant employed ProcessModel analysis to address the second of these problems. They then pursued an optimized table mix which would offer maximum throughput using minimum space.

To illustrate this need for an optimized table mix, suppose the Restaurant has fifty parties of two waiting to be seated, but only 10 two-top tables. The additional two-person parties will likely be seated at four-top or six-top tables. When a party of ten walks in, there will likely be no suitable place for them, despite the dozens of empty seats. In other words, if there are 300 available seats, but only 200 seats which meet each party’s needs, there are 100 potential spaces for customers which are simply being wasted.
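A few lines of Python make the waste visible. The table counts below (other than the ten two-tops mentioned above) are hypothetical:

```python
# Seats only "count" when the table size matches the party occupying it.
tables = {2: 10, 4: 40, 6: 20}                  # table size -> number of tables
total_seats = sum(size * count for size, count in tables.items())

parties_of_two = 50
seated_at_two_tops = min(parties_of_two, tables[2])
overflow = parties_of_two - seated_at_two_tops  # parties pushed to four-tops
wasted_seats = overflow * (4 - 2)               # two stranded chairs per party

print(total_seats)    # 300 seats in the room
print(wasted_seats)   # 80 chairs stranded by the mismatched table mix
```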

In order to create a solution which can effectively address this problem, administrators at the Restaurant first had to acquire several essential pieces of information.

First, what is the party mix? In other words, on average, how many parties of two, four, or ten are in attendance during peak hours?

Second, when is each of the parties most likely to arrive?

Third, what is the existing table mix? (How many two-top, four-top, and six-top tables are in the restaurant now?)

And finally, how many locations are available for large parties?

Once this essential information was gathered, administrators used ProcessModel consulting services and software to create a solution.

Solution

These key bits of information about the Restaurant’s processes enabled consultants at ProcessModel to create a computerized simulation of the flow of the restaurant’s customers through the existing system. The simulation allowed the consultants to observe current processes, predict bottlenecks, and ultimately determine optimized table mixes. They then reported on the predicted dilemmas and possible solutions in order to assist the Restaurant in creating a more streamlined overall process. By instituting an optimized table mix, the Restaurant is now able to achieve maximum throughput.

It is important to note that the optimized table mixes are created specifically for use during peak hours.  During times when the restaurant is not at maximum capacity, it is less important to use an optimized table mix as there is no shortage of available seating, regardless of each party’s size.

Results

The outcome of the model created for the Restaurant has been astounding.  The optimized table mixes have had a profound effect in the locations where they have been adopted.

In these locations, the optimized table mix has shown a fifteen percent increase in revenue. How would you like to increase your company’s revenue without adding people, changing the size of the facility, or adding marketing budget?

In addition, the optimized table mix resulted in a thirty percent increase in seat utilization.  This increase was facilitated by the model on a detailed level, as it allowed the Restaurant to recognize that their need for two-top and four-top tables far exceeded their need for larger tables.  Through the use of optimized table mixes, the Restaurant has dramatically increased seat utilization while actually decreasing the total number of seats!

Future Applications

The Restaurant’s ground-breaking applications of optimized table mixes have the potential to revolutionize the restaurant industry. Few restaurant owners would shy away from the opportunity for an immediate fifteen percent increase in revenue with relatively modest additional investment.

Beyond the obvious immediate benefits of implementing the optimized table mixes, there is the immeasurable secondary benefit of providing customers with a consistently exceptional dining experience.  Unlike many innovations which increase revenue at the cost of customer service, the Restaurant’s optimized table mixes are mutually beneficial to restaurant and customer alike.  What customer wouldn’t support a system which ensures they receive consistently prompt and efficient service?

So the next time you’re stuck in an endless restaurant line, consider the timely seating and exceptional dining available at your local Restaurant.  Thanks to the landmark applications of the Restaurant’s optimized table mixes, you can start spending most of your ‘eating out time’ actually eating, instead of waiting to get a table.

Want to become an expert in optimization? SIGN UP NOW!!!


Justifying Simulation for Process Improvement

Posted by admin on October 8, 2009 in General Info

www.processmodel.com

Accurate depiction of reality

Anyone can perform an analysis manually. However, as the complexity of the analysis rises, so does the need to employ a computer-based tool. While spreadsheets can perform many complex calculations and help determine the operational status of most systems, their use of average numbers to represent arrivals, activity times, and resource unavailability is like using a spoon to dig a canal. Simulation provides the equipment for complex projects.

Using simulation, you can include randomness through properly identified probability distributions taken directly from study data. For example, while the time needed to perform an assembly may average 10 minutes, special orders take as many as 45 minutes to complete. Simulation allows interdependence through arrival and service events and tracks them individually. For example, while order arrivals may place items in two locations, the worker can handle only one item at a time—spreadsheet calculations assume the operator to be available simultaneously at both locations.
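To see why the distinction matters, consider this small Python sketch (a hypothetical job mix, not study data): the average looks like a tidy 10 minutes, yet the occasional 45-minute special order is what actually drives the queues:

```python
import random

# A hypothetical assembly that averages about 10 minutes, yet one job in
# ten is a 45-minute special order -- exactly the variation a flat
# average hides.
def assembly_time():
    return 45.0 if random.random() < 0.10 else 6.1   # mean ~= 10 minutes

samples = [assembly_time() for _ in range(100_000)]
mean = sum(samples) / len(samples)
longest = max(samples)
print(f"mean {mean:.1f} min, worst case {longest:.0f} min")
```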

Advanced optimization techniques

Optimization techniques such as linear, goal, and dynamic programming are valuable when you want to maximize or minimize a single element (e.g., cost, utilization, revenue, or wait time). Unfortunately, these techniques limit you to only one element, often at the expense of secondary goals, and do not allow randomness in the input data (requiring you to use average process times and arrival rates)—this produces misleading results. Simulation optimization allows you to examine multiple elements simultaneously and track system performance with respect to activity time, arrival and exit rates, costs and revenues, and system utilization. Optimizing multiple elements provides you with the information you need to make accurate decisions and to apply more effective solutions to the entire operation.

The ProcessModel optimization module is a built-in capability that allows you to perform optimization on simulation models. The optimization module accepts parameters over which you have control and could change (e.g., the number of operators and priorities of events), and allows you to define objective functions to minimize or maximize specific model elements (through weighting factors assigned to each element). Once you identify and define these items, the optimization module performs a series of tests through multiple scenarios to seek the optimal solution. The output data details the optimized result and reports on key factors in both text and graphic forms.

Insightful system evaluations

Simulation tracks events as they occur and gathers all time-related data for reporting purposes. The information available about system operations is more complete with simulation than with other techniques. With static analysis techniques such as queuing theory and spreadsheets, you know the average wait time and number of items in a queue but there is no way to further examine the data. With simulation, you know the wait time, number of items, minimum and maximum values, confidence interval, data distribution, and the time plot of values. It is more valuable to know that the number of items in a queue exceeds 10 only 5% of the time than to know that 2 is the average number waiting.
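That extra visibility is easy to picture. Given a series of queue-length observations from a simulation run, a few lines of Python report the tail behavior alongside the average (the sample data here is made up for illustration):

```python
# The same average queue length can hide very different risk: report the
# average alongside the fraction of observations where the queue exceeds 10.
def queue_summary(queue_lengths):
    avg = sum(queue_lengths) / len(queue_lengths)
    over_10 = sum(1 for q in queue_lengths if q > 10) / len(queue_lengths)
    return avg, over_10

lengths = [0, 1, 2, 1, 0, 3, 12, 15, 2, 1, 0, 2, 1, 0, 0, 1, 2, 0, 11, 0]
avg, over = queue_summary(lengths)
print(f"average {avg:.1f}, over 10 for {over:.0%} of observations")
```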

Static analysis techniques allow you to use only average parameters. Such limitations can mislead you with estimates that suggest an over- or under-capacity situation. For example, spreadsheets assume that production orders move unconstrained when, in fact, an operator must facilitate the move. This can yield an inaccurate capacity estimate.

Scheduling capabilities

Simulation allows you to experiment with a system and see how it behaves with particular configurations of inputs, resource arrangements, routing flow rules, downtimes, and shift schedules. With the basic model elements in place, you can use simulation to test alternative production schedules through multiple scenarios and to perform many other scheduling functions.

To its credit, simulation allows real-life occurrences such as randomness and interdependence—in contrast to pure scheduling packages that allow activities to proceed unabated according to a specific rate.

Animation

Animation is an extremely powerful aspect of simulation. Feedback from animation assures that the model performs correctly (e.g., if the animation shows no arrivals at an order processing area during mid-day, the arrival data may contain inaccuracies), helps identify bottlenecks, and assists in isolating which system elements you could modify to achieve better results. Animation is also an excellent presentation and training tool. Simulation animations sell new ideas easily and effectively, and demonstrate the effects on an entire system of performing duties timely and accurately.

Does simulation pay for itself?

Like other projects, simulation projects require you to balance expenditures against benefits.

Expenditures

  • Software acquisition
  • Training or startup time
  • Labor required to capture information, build and analyze the model, and develop the future process

Hard-dollar savings

  • Lower capital expenditure, increased utilization of existing facility, reduction of net cost
  • Proper employee assignments prevent unnecessary hiring
  • Accurate and insightful facility planning eliminates unnecessary rework costs

Soft-dollar savings

  • Facility rearrangement or reassignment of duties increases productivity
  • Reduced wait time improves customer satisfaction
  • Accurate system depiction ensures valid decision-making information
  • Training costs of new hires are reduced

Labor savings

  • Rapid accurate development of models establishes time and cost data quickly
  • Increased understanding of the actual process improves employee education
  • Coordinated simulation projects improve teamwork and communication
  • Increased system understanding – “How I relate to the rest of the system”

Intangible benefits

With simulation you can identify problems that have been invisible in the past, allowing savings not achievable by previous methods. Your savings will largely depend on the selection criteria for projects, but a properly selected project will yield many times the investment. One simulation user identified and tested an additional 10% saving from suggestions in a management meeting. Wow, wouldn’t you like to be able to test the validity of suggestions on the spot? For numerous case studies, visit our blog at http://processimprovementdaily.com/


Medical Device Manufacturer Increases Production – Avoids Costly Surgery

Posted by admin on October 8, 2009 in Manufacturing

www.processmodel.com

“Yeah, yeah. We’ve heard that before.” Is that the response you receive from your superiors when you attempt to present a new way of saving the company time and money?

Often upper management personnel get excited about presentations that sound great, but experience disappointment when the actual plans are implemented and then fall short of expectations. They eventually become skeptical of new ideas.

This Company has taken a step beyond the normal presentation and approval process by using simulation to test ideas before they are implemented. Everyone involved in the process can see the results of the proposed changes.

The Company, one of the world’s largest medical technology companies, manufactures and sells a wide range of medical supplies, devices, and diagnostic systems. Annual revenues top $3 billion. Employing approximately 19,000 people, the Company enjoys a worldwide presence in over 135 locations in more than 40 countries. When they make a decision, there are a lot of key people involved.

The Company decided it needed to increase production of one of its medical devices. The plant manufactures pipets that are used for fluid transfer in many different lab environments.

The Problem

The company was faced with a significant demand for additional product (a 15% increase). This created a backorder situation for the plant, as its capacity was not able to meet customer demand.

Management felt that the best option was to purchase additional equipment to create another 5ml pipet line at a cost of greater than $1 million. Implementation time for the new line was between 12 and 18 months. A project team was chartered to evaluate the situation, present solutions, and implement the most appropriate options. Their goal was to meet production demand with minimal costs.

The Solution

Detailed production information was gathered and a ProcessModel simulation model was created. The team identified the extrusion area as the bottleneck and created different model scenarios to determine the most appropriate actions to improve capacity.

The team made recommendations to increase capacity in extrusion by sharing production with another line. Improvements were made to downstream operations to increase machine uptime and improve product flow. Conveyors and ergonomic workstations were installed to reduce ergonomic risk factors.

The Results

Using ProcessModel, we loaded input data gathered from production sheets and video analysis. The project team reviewed the model and all agreed that it was a good representation of reality. Through simulation modeling it was determined that additional capacity was required at the beginning of the product run, which is where the “bottleneck” occurred. The team recommended sharing production with another line to increase capacity at extrusion (the beginning of the product run).

The model demonstrated that a new production line would not be required for 2 years if the increased capacity and other small line improvements were implemented. Average shift production for different model scenarios was determined.

During the course of the model study, downstream operations were discovered to be imbalanced. Improvements were made to downstream operations to increase machine uptime and improve product flow. Conveyors and ergonomic workstations were installed to reduce safety risk factors. The Company project team learned through simulation modeling that small improvements can lead to big results. They also agreed that benchmarking production using a simulation model can help avoid costly decisions.

The project increased production by 13.85%, which allows a sales revenue increase of $2.1 million per year, and delayed $1 million of capital spending.

Project implementation costs were $180,000, with the largest portion of the cost directed to safety risk factor reduction in major production areas. The Company’s goal is to “become the organization most known for eliminating unnecessary suffering and death from disease, and in doing so, become one of the best performing companies in the world.”


Taking the Trauma Out of Change

Posted by admin on October 8, 2009 in Medical

www.processmodel.com

The Hospital is the only ACS-verified Level I trauma center in its area. The Hospital is a major transplant referral center, performing heart, lung, kidney, liver, and pancreas transplants. It is also the primary teaching affiliate of the adjoining college. With nearly 450 beds and its Level I trauma center, the Hospital has almost 50,000 visits to the Emergency Department per year.

Even with the size of the Hospital, a staggering 12% of the emergencies that would require ICU services had to be “turned away” because of insufficient capacity. Picture, if you will, an emergency scenario where a critical accident has occurred. The ambulance has arrived, prepares to move the patient, and notifies staff of an incoming critical patient. Nearly 12% of the time, the Hospital has to direct the ambulance to seek another facility. Even though the Hospital is the best trauma center in the area, the Hospital knows they will not be able to admit the patient to the ICU.

“Our primary concern was our inability to provide quality healthcare in emergency situations. In addition, this inability to handle the ICU requirements represents an enormous loss of revenue, but this fact is dwarfed by the black eye received each time the Hospital directs an ambulance to seek an alternate facility when the Hospital is the desired choice of both the patient and medical professionals. This problem was constantly in front of the Hospital administration and the doctors, begging for resolution.”

A Six Sigma Process Improvement / Data Analyst was assigned to analyze the problem and find a solution to reduce the number of ambulance diversions away from the Hospital. He created a simulation model which included the ICU unit, surgical capability, and scheduling from both emergency arrivals (ambulances) and elective surgeries. The model helped to uncover some critical information. The most startling was that doctors had complete control over elective surgery scheduling without insight into how the rest of the hospital was affected by their decisions. The model showed that current patterns for elective surgery would stack up randomly on certain days, causing the ICU to fill in order to meet the demand from the surgeries. On other days the ICU would be unaffected by elective surgeries. With the ICU randomly loaded, the Hospital was unable to handle the demands of emergency patients needing ICU services — as a result, ambulances were turned away 12% of the time.

Using his simulation, the ProcessModel user was able to show doctors and hospital administration the cause of the diversions. He conducted experiments with the model and identified an acceptable level of elective surgeries per day while still accommodating the needs of emergency ICU care. “ProcessModel was an invaluable tool to sidestep all of the anecdotal suggestions and use quantitative methods to discover the problem and suggest solutions. We came to a definite number of elective surgeries that would still allow us to meet other requirements on the ICU.” The amazing part of this story is that the Hospital will still perform the same number of elective surgeries as before the study, but by limiting the elective surgeries on any given day they can also handle the emergency cases that require the ICU.

Both doctors and administrators agreed with the solutions and have started to implement the changes. The increase of revenue to the Hospital, from being able to take additional emergency cases is in the millions of dollars, but the greatest benefit to the Hospital comes from the improvement in quality healthcare to the patients — which in turn enhances public relations and the Hospital’s reputation.
In addition to using ProcessModel to identify ways to reduce emergency diversions, the Hospital has also used it to establish a target for maximum patient length of stay in the Emergency Department and to optimize staffing levels and the number of procedures performed in the Gastroenterology lab. A ProcessModel simulation is also being developed to reduce patient waiting time in the Eye Institute.


Huge Savings in Sales Process Reflected

Posted by admin on October 7, 2009 in Sales

www.processmodel.com

The Company is a $16 billion diversified technology organization with leading positions in consumer and office; display and graphics; electronics and telecommunications; health care; industrial; safety, security and protection services; transportation; and other businesses. The company has operations in more than 60 countries and serves customers in nearly 200 countries.

For the past two years, the Company has been using Six Sigma methodology to pursue continuous quality improvement. Six Sigma is clearly focused on customer-driven expectations and requires a thorough understanding of products and processes. In the Company’s case this also includes the selling process, as we are one of the first companies using Six Sigma to generate sales growth.

As a Company Six Sigma Black Belt, my first project was focused on reducing the cycle time of our Reflective Materials selling process. This involved shortening the time between the initial sales visit and the completion of a written, customer-approved specification incorporating reflective material in the customer’s safety garments.

Reflective materials are widely used on garments to enhance visibility and safety in the workplace. The Company works closely with customers to develop product specifications, but the selling process also involves garment manufacturers. As a result, it was frequently taking as long as eight months to complete the sales cycle, which included writing the specifications, prototyping garments, and conducting field trials.

The goal of our Six Sigma project team was to reduce that sales cycle to five months by developing procedures and tools that would accelerate the specification writing process and allow us to write more of them. This would have a significant impact on our sales. The sales team knew shortening the sales cycle was a high priority for the division, and they were fully engaged in providing us with the data we needed.

After mapping out the sales process, the project team identified six key areas where improvement was needed in order to reduce cycle time.

We needed to run a Design of Experiments on the process, but we had one challenge: our deadline was in six months, and the current sales process was taking eight months. With ProcessModel we were able to run the Design of Experiments on the computer model and test our solution without disrupting the current system and without any direct costs. We were able to test possible solutions in just minutes and see how changes, or a combination of changes, would affect the process. Simulation gave us the ability to change various parameters in the sales cycle to enable us to accomplish our goal.

We used ProcessModel to simulate our existing process, which starts with our sales representative, but also involves our technical representatives and channel partners.

The simulation helped us to identify where delays were encountered in the process and to quickly and safely see the results of changing various parameters. As a result, our team developed a number of recommendations to eliminate those delays.


ProcessModel has helped the Company to reach the goal of the project by reducing the sales cycle from almost eight months to less than five months. At this time, we are often exceeding our initial goals by completing the cycle in a matter of weeks.

Simulation gave us the ability to complete the project in the allotted time and identify the best solution for improving the process. By applying this technology to the Design of Experiments portion of the project, we were able to implement the best solution. There is no question that the data generated from ProcessModel gave us the confidence to move ahead with our planned recommendations to reach our goal, and I will certainly consider using the software to do Design of Experiments on future projects.


NAVY Launches Training Changes with Simulation

Posted by admin on October 7, 2009 in Military

www.processmodel.com

The central objective includes more quickly and accurately forecasting and justifying manpower, personnel, and training requirements to ensure the training readiness of the Navy in the most efficient and effective manner. A three-phased strategy was developed to accomplish this effort.

Phase one involved the design, development, and delivery of a process model prototype spanning initial entry training through initial qualification training. This shows the capability of simulation tools and technology, as well as the application of specific steps in applying these tools. The Company selected ProcessModel simulation modeling to help determine ways to save time and money pertaining to Naval student training.

They chose a particular scenario to help demonstrate the flow of activity. They showed how an Avionics Technician receives training for the first 12-18 months of his career. The model focused on the technician moving through specific training courses, and the many variables involved in the technician moving between training courses and the Navy’s operational forces (or Fleet). Composed of several organizations, the Integrated Product Team (IPT) provided the necessary information to create the simulation model of the Navy training production flow, which will eventually span the entire Navy.

This simulation model provides a number of performance measures to help the Chief of Naval Education and Training (CNET) evaluate production flow improvements. These performance measures fall into two major areas: the time a Navy student is at the schoolhouse but Awaiting Instruction (AI) or Awaiting Transfer (AT), and the number of students enrolled in specific courses.

AI measures the time a student is waiting (beyond the standard accepted time) at the schoolhouse to begin formal instruction. AT measures the time a student is waiting (beyond the standard accepted time) at the schoolhouse to transfer to a new location.

Problem

Although the amount of waiting time for one student may be relatively small, the total from all students can add up to a very large number. This creates a “domino” effect because it steals time from the mission at the Fleet, which—in turn—can impact readiness. The additional time spent in AI and AT also costs the Navy additional dollars in lodging, per diem, and administrative support.

The seat utilization of specific courses is a critical measurement of cost and effective use of available resources. It measures the number of filled seats in a particular course. If the seats of a course are only partially filled, the per capita cost of training those students is higher than in a class with all seats filled. Regardless of how many seats are filled, the course still carries the fixed costs of required instructors, facilities, equipment, etc.

However, it is the combination of these two performance measures that must be balanced to ensure the best training service to the Navy while keeping training costs to a minimum. For example, sometimes a student gets classified as AI because there are no available seats in a course and they must wait for the next course to begin. If we only focused on decreasing AI time, we might decide to hold classes more often so students would not have to wait so long. However, in implementing this solution, we may find the seat utilization of the course drops considerably, and the training costs increase proportionately. When both performance measures are considered together, the increase in training costs may substantially outweigh the decrease in AI cost, and the proposed solution would not reduce overall costs for the Navy.
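A back-of-the-envelope sketch shows how the two measures trade off. All of the figures in this Python illustration are hypothetical — the point is simply that the cheaper policy only emerges when AI cost and seat utilization are costed together:

```python
# Compare two scheduling policies on combined cost and seat utilization.
def policy_cost(students, seats, sessions, cost_per_session,
                ai_days_per_student, cost_per_ai_day):
    course_cost = sessions * cost_per_session
    ai_cost = students * ai_days_per_student * cost_per_ai_day
    utilization = students / (seats * sessions)
    return course_cost + ai_cost, utilization

# Few sessions: seats stay full but students wait longer.
print(policy_cost(120, 24, 5, 50_000, 20, 100))    # (490000, 1.0)
# Doubled sessions: waits shrink, but half-empty classes cost more overall.
print(policy_cost(120, 24, 10, 50_000, 5, 100))    # (560000, 0.5)
```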

Solution

The Company chose ProcessModel simulation modeling to help evaluate these kinds of questions. In addition, the model is designed to incorporate cost factors, training time, facility and instructor constraints, and many other real-world variables.

Results

The data provided by the simulation model revealed a wealth of information—not only on AI, AT, and seat utilization, but also on many other critical aspects, such as total time to train and cost to train.

The analysis supplied by the simulation model could not have been accurately captured in a non-animated (static) tool. For example, the schoolhouse can predict the influx of new recruits from Navy recruiting projections. However, the influx of Navy personnel from the Fleet is much more dynamic and harder to predict. The influx of students also varies greatly from month to month, and unforeseen budget constraints can quickly impact the flow of students into the schoolhouse. Through simulation, CNET can analyze the impact of these many variables and evaluate courses of action to mitigate the impact.

Future Applications

Using a simulation tool, CNET is able to capture the many dynamics impacting day-to-day operations and more accurately forecast the impact of changes to the training production flow. Through the use of IPTs, the simulation model provides a composite picture across the entire Navy and incorporates the right combination of performance measures to evaluate Navy benefit. And as the effort continues through the remaining phases, CNET will have a decision support system to evaluate all aspects of the Navy training continuum and any proposed changes to the current way of doing business.

Copyright © 2009-2014 ProcessModel Inc. All rights reserved.