Monday, January 2, 2017

CONTINUOUSIMPROVEMENTPAL RUINS EVERYTHING (You've been lied to: It is not safer to fly)


Have you seen the TruTV television show “Adam Ruins Everything,” in which Adam Conover dismantles our perceptions of everything from green technology to the real origins of Christmas?

Here is my attempt to do just that with a statement we are all led to believe, namely:

It is safer to fly than to drive

You have heard the old saying: “You are more likely to get killed driving to the airport than flying on the plane.” People will then go on and on about how much safer it is to fly than to drive.

But is it really safer to fly than to drive?  Not really!

Fact: Hour for hour, you are nearly three times as likely to die on a commercial flight as you are in a car.

How can that be…….are the experts wrong?

I believe it is because the experts are using a flawed metric, namely miles.

When you look at the statistics online, almost all articles use the metric of “miles.”

For example, there are “X” number of deaths per million miles for a car and there are “Y” number of deaths per million miles for a commercial flight.

Passenger Mile?
 
First, the concept of a passenger mile, although standard in the travel industry, can skew the numbers (in aviation's favor) to the extreme.  For example, if a flight with 500 passengers travels from Los Angeles to New York (approximately 2,400 miles), statisticians would say that flight logged 1.2 million passenger miles.  With around 34,000 flights a day, that adds up to tens of billions of passenger miles a day. Certainly the big commercial airliners in the U.S. travel millions of actual miles every day…but not BILLIONS.

This creates a very large denominator in the equation (number of deaths / passenger miles), and the larger the denominator, the smaller the resulting risk appears.


Also, due to the jet stream, the time spent in the air between two destinations depends on the direction you fly, even though the distance on the ground is the same.

For example, a flight from the west coast to the east coast, despite covering the same miles, will be quicker (reducing your chance of dying in a plane because you are in it for a shorter period of time), while that same flight from east to west will take longer (increasing your chance of dying in a plane because you are in it longer). In a car, driving 100 miles at 50 miles per hour takes 2 hours, while driving that same 100 miles at 40 miles per hour takes 2.5 hours. Using the per-mile logic, the risk of getting in an accident is exactly the same in both cases, whereas I argue that the 2.5-hour trip gives you an extra half hour for something to go wrong.


Also, someone can certainly die in a car that has not traveled any miles, or in an airplane before it has flown any distance.

Most importantly:

  • Pilot training time (which has a significant impact on safety) is logged in flight hours…not miles 
  • Airplane maintenance schedules (which have a significant impact on safety) are based on flight hours…not miles


Why shouldn’t our metric be something both modes of travel have in common?

To eliminate as much variation as possible, I will use the most consistent metric available: time. The total time spent in an airplane and the total time spent in a car….regardless of the actual miles traveled.


Automobile results

There are an average of 220 million cars on the road on any given day in the United States, each driven for an average of 1.5 hours.  Multiplying these two numbers gives the total number of “car hours” per day in the United States: 330 million car hours per day.

Annually there are about 34,000 car deaths, which makes the daily average about 93 deaths per day. To determine the number of car deaths per car hour, simply divide 93 car deaths by 330 million car hours.  The result is 0.00000028 deaths per car hour.  This also represents the probability that you will die in a car accident in one hour of driving, or odds of roughly 3,533,568 to 1.

Please note: This is not the same as deaths per hour…..it is deaths per car hour.  To determine the number of deaths per hour, you would divide 93 daily deaths by 24 hours, which is 3.8 deaths per hour somewhere in the United States.

Commercial Airline results

There are an average of 34,000 commercial flights in the United States every day, with an average flight time of, let’s say, 2.5 hours….some are more and some are less, but 2.5 hours is a conservative number (about the same as flying from Phoenix to Portland). In actuality, years can pass with no major incident and then, unfortunately, one accident will result in many deaths…but if you average the previous 10 years or so, you arrive at a number anywhere from 25 to 35 deaths per year. For our purposes, I use 25 airline deaths per year to be conservative and skew the numbers toward "safer".

Multiplying 34,000 commercial flights by 2.5 hours gives the total number of “flight hours” per day in the United States: 85,000 flight hours per day.  Annually there are about 25 commercial flight deaths, which makes the daily average about 0.0685 deaths per day. To determine the number of commercial-flight deaths per flight hour, simply divide 0.0685 deaths by 85,000 flight hours.  The result is 0.00000081 deaths per flight hour.  This also represents the probability that you will die on a commercial flight in one hour of flying, or odds of roughly 1,234,566 to 1.

Please note: This is not the same as deaths per hour…..it is deaths per flight hour.  To determine the number of deaths per hour, you would divide 0.0685 daily deaths by 24 hours, which is about 0.0028 deaths per hour.
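For anyone who wants to check the arithmetic, here is a small Python sketch of the calculation above. The script and its variable names are mine, purely for illustration, and it uses the same rounded estimates quoted in this post:

```python
# Rough per-hour fatality comparison using the rounded estimates from this post.

# Cars
cars_on_road = 220_000_000      # cars on U.S. roads on a given day
hours_per_car = 1.5             # average hours each car is driven per day
car_deaths_per_year = 34_000

car_hours_per_day = cars_on_road * hours_per_car             # 330 million
deaths_per_car_hour = (car_deaths_per_year / 365) / car_hours_per_day

# Commercial flights
flights_per_day = 34_000
hours_per_flight = 2.5          # assumed average flight time
flight_deaths_per_year = 25     # conservative 10-year average

flight_hours_per_day = flights_per_day * hours_per_flight    # 85,000
deaths_per_flight_hour = (flight_deaths_per_year / 365) / flight_hours_per_day

print(f"Odds against dying in one car hour:    {1 / deaths_per_car_hour:,.0f} to 1")
print(f"Odds against dying in one flight hour: {1 / deaths_per_flight_hour:,.0f} to 1")
print(f"Per-hour risk ratio (flying / driving): {deaths_per_flight_hour / deaths_per_car_hour:.2f}")
```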

Do you see it…..the odds of dying in any one car hour are 3,533,568 to 1, while the odds of dying in any one flight hour are 1,234,566 to 1!

The chance of being killed in any one hour on a plane is nearly three times the chance of being killed in any one hour in a car.

The perception that planes are safer, in my opinion, stems from the fact that there are far fewer airplanes flying than there are cars driving. With a whopping 220 million cars on the road daily and only 34,000 commercial flights, it’s no surprise there are more car-related accidents…

But what would it look like if the number of flight hours equaled the number of car hours per day…that is to say, what if commercial aviation logged 330 million flight hours a day (roughly 132 million of our 2.5-hour flights) in the United States…

There would be over 97,000 commercial airline deaths a year…that is about 260 deaths each day…..basically a commercial jetliner crashing almost daily.
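That thought experiment is a single multiplication, assuming the per-flight-hour rate derived above would hold at that scale (a big assumption, admittedly):

```python
deaths_per_flight_hour = (25 / 365) / 85_000   # from the flight-hour section above
target_flight_hours = 330_000_000              # match the daily car hours
flights_needed = target_flight_hours / 2.5     # ~132 million 2.5-hour flights per day

deaths_per_day = deaths_per_flight_hour * target_flight_hours
print(f"{flights_needed:,.0f} flights/day -> {deaths_per_day:.0f} deaths/day, "
      f"{deaths_per_day * 365:,.0f} deaths/year")
```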

It’s all about how you present it….rhetoric, if you will, used to persuade others to believe something, or to infer that because one scenario is safer, all scenarios are safer.

For example, if you said the following:

”You are more likely to die in a car crash while driving from Los Angeles to New York than flying from Los Angeles to New York.”

Then yes, that would be correct……WHY…..because it takes roughly eight times as long to drive from Los Angeles to New York as it does to fly, which increases your chance of dying in a car crash along the way.

In reality, both driving and flying are extremely safe.  The actual odds of dying on a 2.5-hour plane ride are still very small (about 1 in 500,000 in my example).

To give you some perspective on how remote this is....the chances of a person being struck by lightning in their lifetime are about 1 in 3,000.

You could spend your entire life riding on an airplane (and never getting off) and still not ever crash…..but to say that flying is safer than driving is not accurate.

Still don’t believe it? Take a look at any top-ten “Most Dangerous Jobs” list and you will see that airline pilots consistently rank as more dangerous than professional drivers.

All of my referenced statistics were obtained from legitimate sources found on the internet.

I encourage you to play around with my numbers. Be more conservative….or less conservative with my averages and see that this is the case.


Thursday, January 15, 2015

I Love Lucy Chocolate Scene: A Scientific Analysis Using Computer Simulation




We have all seen the I Love Lucy episode titled “Job Switching,” where Lucy and Ethel take a job at a chocolate factory while Ricky and Fred play homemaker. There is a scene where Lucy and Ethel have been given the job of wrapping chocolates…except too many chocolates come out and hilarity ensues.

The main reason this video is funny is quite simple:

Their cycle time is greater than the TAKT time…Now that is funny stuff!!!

OK….that doesn’t sound funny….but it is.  Click below to watch.


Think you have challenges and need help? Contact me at Astrozuggs@gmail.com 

This clip has time and time again been used by Management gurus and Lean practitioners to illustrate many different concepts including:

• Management methods
• Lack of visibility (by Management)
• Flow
• Push vs. Pull processes
• Work stress
• Waste
• Quality
• Variability in processes

When presenting Lean, I myself usually show this video as an example of push processes and the waste that is created by this extreme and visual example.

No Scientific analysis exists!!!

One thing that I haven’t seen is a scientific evaluation around this clip. Concepts such as TAKT time, cycle time, and other key performance indicators that can be determined have only been briefly mentioned in other articles.  No real serious deep dive exists.

Questions such as:

“What was the TAKT time?”
“What were Lucy and Ethel’s cycle times to wrap chocolates?”
“How many workers would it take to be successful?”
“How many chocolates would have passed without being wrapped?”

All of these questions I hope to answer.

Here is how I did this, step by step.

1st step: Study video to determine Lucy and Ethel’s process.

To accurately determine their processes, I observed the video and then created a process map of what I observed.  First, I determined the very first step, which was “chocolates arrive via the conveyer belt.” I then determined the last step in the process, namely “Chocolates exit via the conveyer belt.”

Now knowing the very first thing and the very last thing, I was able to fill in the gaps (See process map below.)




2nd Step: Time Study.

How fast were chocolates coming in?

Approximately 6 chocolates came out during the first 10 seconds, and the rate then increased until about 14 chocolates were arriving every 10 seconds. This information allowed me to determine the TAKT times (What is TAKT time?) for these periods using the calculation “seconds / number of chocolates”: 1.67 seconds per chocolate to start, dropping to 0.71 seconds per chocolate. It is also interesting to note that the conveyer belt initially moved chocolates down the entire line in about 14 seconds but then sped up to about 7 seconds.  This will be important later.
Precise times were taken of both Lucy’s and Ethel’s cycle times to wrap chocolates.
For Lucy, during the first 10 seconds, the time from when she grabbed a piece of chocolate, wrapped it, placed it back on the conveyer belt, and grabbed the next piece was a minimum of 3 seconds and a maximum of 4 seconds. For Ethel, it was a bit slower: a minimum of 3 seconds and a maximum of 5 seconds.
There were indeed examples of faster cycle times, but those examples were fraught with waste and quality issues, as you saw in the video.  One major assumption of my analysis is that a cycle time of (minimum 3 seconds, most likely 4 seconds, maximum 5 seconds) was a sustainable pace for Lucy and Ethel over an 8-hour shift.
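As a quick check of those observations, the TAKT arithmetic is just the observation window divided by the number of chocolates seen in it. Here is a tiny Python sketch (the counts are simply my observations from the clip):

```python
def takt_seconds(window_seconds, chocolates_observed):
    """TAKT time = available time / demand during that time."""
    return window_seconds / chocolates_observed

print(round(takt_seconds(10, 6), 2))    # 1.67 s per chocolate at the starting pace
print(round(takt_seconds(10, 14), 2))   # 0.71 s per chocolate once the belt speeds up

# Observed wrap cycle times (grab, wrap, place back, reach for the next piece):
lucy_cycle_seconds = (3, 4)    # min, max
ethel_cycle_seconds = (3, 5)   # min, max
```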

3rd step: Develop Discrete Computer Simulation of the “I Love Lucy Chocolate Scene”:

I set up simulations for the slow pace, faster pace and the fastest pace at which chocolates were arriving to Lucy and Ethel to see what would have happened if they were allowed to work an 8 hour shift.

Original scenario set up in Simulation (Lucy and Ethel at the slower pace)

Think you would like me to do simulation for you? Contact me at Astrozuggs@gmail.com 





Using Extend simulation software, I began by programming chocolates to arrive exponentially distributed with an average of 1.67 seconds between arrivals (about 36 a minute). The chocolates are then grabbed by either Lucy or Ethel, whose wrapping time I gave a triangular distribution: minimum 3 seconds, most likely 4 seconds, maximum 5 seconds.

I also programmed the simulation to “renege” a chocolate out of the system if it has been on the conveyer belt for more than 14 seconds, which represents a chocolate making it past both Lucy and Ethel without being wrapped.  The simulation runtime was set to an 8-hour shift.
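My model was built in Extend (now ExtendSim), but the same logic can be sketched in Python with the open-source SimPy library. Note that this is only an approximation of my model: the renege here is triggered by how long a chocolate waits for Lucy or Ethel rather than by a modeled conveyer belt, and all of the names are mine:

```python
import random
import simpy

SHIFT_SECONDS = 8 * 3600   # one 8-hour shift
ARRIVAL_MEAN = 1.67        # average seconds between chocolates (slow pace)
BELT_SECONDS = 14          # time before an unwrapped chocolate escapes the line
WRAP_MIN, WRAP_MODE, WRAP_MAX = 3, 4, 5   # triangular wrap-time distribution

counts = {"wrapped": 0, "missed": 0}

def chocolate(env, workers):
    with workers.request() as req:
        # Wait for Lucy or Ethel, but only as long as the belt allows.
        result = yield req | env.timeout(BELT_SECONDS)
        if req in result:
            yield env.timeout(random.triangular(WRAP_MIN, WRAP_MAX, WRAP_MODE))
            counts["wrapped"] += 1
        else:
            counts["missed"] += 1   # it got past both of them unwrapped

def belt(env, workers):
    while True:
        yield env.timeout(random.expovariate(1 / ARRIVAL_MEAN))
        env.process(chocolate(env, workers))

env = simpy.Environment()
workers = simpy.Resource(env, capacity=2)   # Lucy and Ethel
env.process(belt(env, workers))
env.run(until=SHIFT_SECONDS)
print(counts)
```

Re-running the same sketch with ARRIVAL_MEAN = 0.71 and BELT_SECONDS = 7 approximates the faster-pace scenario described below, and raising the resource capacity approximates the staffing scenarios later in this post.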

Analysis of Original scenario (Lucy and Ethel at the slower pace)

One surprising realization after running this first model is that even at the original “slow” pace, Lucy and Ethel would not be able to keep up for very long. 

This is because the simulation results reveal that 17,087 chocolates went down the line and Lucy and Ethel only managed to successfully wrap 14,207 of them!!!
This means that they missed about 2,875 chocolates!

Utilization* (i.e. the time they were actually physically working)
 
Utilization was hovering right at 98%.  This is an extremely high and unrealistic utilization number.  Typically, from my experience, utilization should be anywhere from 80% - 85%. 
Once you start pushing over 85% consistently, workers get stressed, mistakes happen, and so forth.
Conversely, the further you drop below 80%, the more idle time your workers experience, which may not be an effective use of their time.  This is all variable and there are exceptions depending on your industry, but let’s stick with around 80% to 85% as what we would like to shoot for in terms of utilization.
In healthcare, we often talk about utilization when we speak to Daily Patient Census, Operating Room Utilization, Nurse or Tech productivity numbers (which are calculated based off of patient volumes and work time)….we are really just trying to understand how our workers, machines, and rooms are being utilized.

Original scenario set up in Simulation (Lucy and Ethel at the faster pace)

It should be noted that I used Camtasia studio software to record the simulation, which slowed down the video significantly. All videos appear very slow.



I programmed chocolates to arrive exponentially distributed with an average of 0.71 seconds between arrivals (about 85 a minute); the chocolates are then grabbed by either Lucy or Ethel where, again, I gave the wrapping time a triangular distribution of minimum 3 seconds, most likely 4 seconds, and maximum 5 seconds.
It is interesting to note that the speed of the conveyer belt increased in this faster original scenario: it now takes only 7 seconds for a chocolate to travel across the room.  Because of this, I programmed the simulation to “renege” a chocolate out of the system if it has been on the conveyer belt for more than 7 seconds, which represents a chocolate making it past both Lucy and Ethel without being wrapped.
Again, the simulation runtime was set for an 8 hour shift. 

Analysis of Original scenario (Lucy and Ethel at the faster pace)

The simulation results indicate that 40,697 chocolates went down the line and Lucy and Ethel only managed to successfully wrap 14,414 of them!
Yikes…..this means that they missed about 26,273 chocolates!!!

Utilization

Utilization was over 99%, which again…you couldn’t possibly sustain in the real world.

Original scenario set up in Simulation (Lucy and Ethel at the fastest pace)


Toward the end of the scene, the manager comes out and says, “Speed it up a little!!!” 
If you thought it was going fast before, think again!!!  Chocolates now arrive exponentially distributed with an average of 0.167 seconds between arrivals (about 360 chocolates a minute!!!).  At this point it was almost pointless to run the simulation (but I did), because the results look nearly identical (with respect to Lucy and Ethel’s utilization percentage and the number they actually wrapped) to the previous simulation, except with far more chocolates missed than wrapped.  I sure would have liked to peek behind the wall to see what was going on behind the scenes!!!

What to do with this information?

Let’s try to determine the appropriate staffing at the slower pace.

I will do a brief calculation using average TAKT and cycle times to arrive at the appropriate staffing, and then check it against the simulation.

Dividing the average cycle time by the TAKT time (4 seconds per chocolate per person to wrap / 1.67 seconds per chocolate coming in) gives 2.4 workers needed to meet the TAKT time. But does this tell you how many chocolates will be missed, or what the staff utilization will be?
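The back-of-the-envelope staffing number is simply the cycle time divided by the TAKT time. Here is a small, purely illustrative Python helper applied to all three paces discussed in this post:

```python
import math

def workers_to_meet_takt(cycle_time_s, takt_time_s):
    return cycle_time_s / takt_time_s

for takt in (1.67, 0.71, 0.167):           # slow, faster, and fastest arrival paces
    raw = workers_to_meet_takt(4, takt)    # 4 s is the most-likely wrap time
    print(f"takt {takt:>5} s -> {raw:.1f} workers (round up to {math.ceil(raw)})")
```

This reproduces the 2.4, 5.6, and 23.95 figures used in the scenarios below; as the simulations show, though, rounding up this calculation is only a starting point.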

Scenario: Adding one more worker:

Using the simulation, I ran a scenario where everything remained the same except I added one more worker.

The results indicated that 17,084 chocolates were created, our 3 workers’ utilization was at 80%, and only 13 chocolates did not get wrapped…not bad.  I would be pretty comfortable with this result and would hire and train only 1 additional person.

Let’s try to determine what the appropriate staffing should be at the faster pace.

It may prove more difficult to accurately determine staffing levels at the faster pace…let’s give it a shot.

Again, dividing cycle time by TAKT time (4 seconds per chocolate per person to wrap / 0.71 seconds per chocolate coming in) gives 5.6 workers needed to meet the TAKT time.

Let’s go ahead and round up and say the number of workers we need is 6.  Let’s then throw this number of workers into the simulation.

Using the simulation, I ran a scenario where chocolates were arriving every .71 seconds with 6 workers and the conveyer belt was moving chocolates through in 7 seconds.

The results indicated that 40,155 chocolates were created, our 6 workers’ utilization was at 91% (significantly high), and 729 chocolates did not get wrapped.  This is pretty bad: basically, almost 2 out of every 100 chocolates were not wrapped.

Let’s try it with 7 workers

The results indicated that 40,767 chocolates were created, our 7 workers’ utilization was at 80%, and only 38 chocolates did not get wrapped.  This is much better.  I would be comfortable with this result and hire and train 7 workers.

In this case the calculation results of 5.6 vs the simulation answer of 7 workers was off by 1.4 workers, which is about a 20% difference. This illustrates how useful even a simple simulation can be.  What if instead of 7 workers, we were deciding to hire hundreds of workers!!!

Let’s try to determine what the appropriate staffing should be at the super-fast pace.

Dividing cycle time by TAKT time (4 seconds per chocolate per person to wrap / 0.167 seconds per chocolate coming in) gives 23.95 workers needed to meet the TAKT time.
So hopefully you have started to realize that we need to bump up our estimate from what the calculation gives us.  Let’s go ahead and make the number of workers 25, and then throw that number into the simulation.

Using the simulation, I ran a scenario where chocolates were arriving every .167 seconds (recall this is about 360 a minute) with 25 workers and the conveyer belt moving chocolates through in 7 seconds.

The results indicated that 172,742 chocolates were created, our 25 workers’ utilization was at 96% (unrealistically high), and 126 chocolates did not get wrapped.  I would be very concerned that the utilization is too high.

After a few more staffing scenarios, I landed on 28 workers with a utilization of 87%.



Do you see another problem here?......Hopefully you realized that you couldn’t possibly fit 28 workers in that area….Time to build out a new space or get a new facility? This is a topic for a different day.

In summary, I hoped to put some real scientific analysis around this clip and answer some important questions in a fun way.  I also hoped to illustrate that TAKT time calculations are a good starting point, but please do not run real-time PDSA cycles to settle on the number of resources (machines and workers) you should have…it is costly and painful.  Why not use simulation to do further analysis and reduce the risk?  It only took me about an hour to observe the video and no more than 15 minutes to build this simulation from start to finish.

In this example, I used Discrete Event Computer Simulation.  Depending on your circumstances you can certainly leverage prior knowledge or experience to help guide your decisions or perhaps building a “mock-up” of your current operations and simulating it that way.  Performing a table top exercise may be appropriate in certain instances. Whatever you do….give it some thought!!!


Think your organization or business needs help? Contact me at Astrozuggs@gmail.com 

*For our purposes…utilization (in the simulation) is defined as the actual working time with nothing else included. For example if you are working for an hour and every 2 minutes you take 3 seconds to wipe the sweat from your forehead then you are not “working” for 1.5 minutes out of that hour and are being utilized 97.5% of the time.

Thursday, August 7, 2014

ACT Kids Health Fair (Observation and Analysis)



The annual ACT Kids Health Fair serves at-risk children who are eligible for metropolitan Phoenix Head Start programs but lack appropriate medical clearances. This all-volunteer event addresses the full spectrum of health requirements: transportation to and from the children's neighborhoods, all appropriate medical screenings and immunizations, establishing and updating medical records, and arranging emergency or continuing care as needed. Over 20,000 children have been screened to date.¹

2013 was the first year that the ACT Kids Health Fair used technology to track patients and families as they flowed through the fair.  Learning how to implement this technology and use the information it provides will help make the navigation of families through future ACT Kids Health Fairs more efficient.

Anonymous wrist bands to track flow

An anonymous wristband with a unique ID was given to each participant entering the fair.  Volunteers with hand-held scanners scanned the wristband at check-in, at various points inside the fair, and one last time as participants exited. Each scan documented a “time-stamp” of the exact time the wristband was scanned, and the time-stamp information was then put into a database with the ID number as the unique key.

I performed time studies and observations during the ACT Kids Health Fair on September 28, 2013, and then analyzed the time-stamp data.

Analysis of Time-Stamp data
 
Figure #1

An analysis (Figure 1) of the white-band time stamps, which included children and parents, revealed that wristband scanning was inconsistent (as evidenced by missing check-in and check-out times).  For example, you can see (Figure 1) that only 63% of white bands had an associated check-in time documented. For stations inside the fair with a very low percentage of scans (e.g., Dental, Hearing), it is also likely that not every child visited every station.
Data connecting a wristband to a particular screening, or a simple count of how many children visited each station, could be used to confirm this.



Figure 2
The number of patients and family members arriving by hour (Figure 2) was determined by using the “check-in” time stamps to establish the distribution of check-ins by hour.  Because we now know that many check-in times were not captured, I used this check-in pattern to distribute all 2,083 white bands into their respective hours. It was later determined that the wristband tracking had some glitches at the beginning of the day: despite the fact that patients and families were participating in the fair before 9:00am, the tracking data starts at 9:00am.  It is interesting to note that arrivals decrease steadily until about 2:00pm (one hour before the fair ends), when they increase again. Fair leaders had suspected this was the case in previous years; they can now confirm the end-of-day increase and have already incorporated it into their resource planning for the day of the event. This also served to further demonstrate that my analysis of the data was valid.




Figure 3

How long patients and families stayed in the fair (Figure 3) was determined by subtracting the check-in time from the check-out time of each white band. The sample only included white bands that had both check-in and check-out data (the sample size was 734, which gives approximately +/- 3%).
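A note on the roughly ±3%: one plausible reading (my assumption, not stated in the original analysis) is the standard 95% margin of error for a sample of 734 out of the 2,083 wristbands, with a finite-population correction:

```python
import math

N, n = 2083, 734                     # wristbands issued, bands with both time stamps
p, z = 0.5, 1.96                     # worst-case proportion, 95% confidence

fpc = math.sqrt((N - n) / (N - 1))   # finite-population correction
margin = z * math.sqrt(p * (1 - p) / n) * fpc
print(f"{margin:.1%}")               # about 2.9%, i.e. roughly +/- 3%
```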




Figure 4
Using the wristband data to determine the volume of patients (Figure 4) arriving in each hour and how long patients stayed in the fair, 5 discrete event computer simulations were performed to determine the approximate number of patients and family members physically in the fair in 15-minute intervals.  This graph represents the averages of the 5 simulation runs.  The maximum number of people during any of the simulation runs was 753.  Recall that patients were in the fair before 9:00am, but the tracking did not start until 9:00.  Accounting for this would have the effect of stretching the shape of Figure 4 to the left and slightly reducing the number of people in the fair at any one time (because the same number of people would be spread out over more time), without dramatically changing the overall distribution of the graph.
Direct Observations

I also made random observations of every area in the ACT Kids Health Fair over the course of the day, documenting (69 observations in total) how long each station took to see a child (also known as cycle time). Most stations had 2 or more observations (with different clinicians being observed), except for Lead Screening, which had only 1 observation. Families were also observed: approximately 56.25% of white bands were children. Using this assumption, 56.25% of the 2,083 white bands (1,172) were children participating in the ACT Kids Health Fair.
TAKT time and work balance were then determined.

Figure 5



TAKT time is the pace at which patients must move through any one station (including the area’s check-in and check-out steps for areas such as dental, vision, etc.) for the fair to complete 1,172 patients in 7½ hours*. 450 minutes to see 1,172 children is approximately one child every 23 seconds (27,000 seconds / 1,172 people ≈ 23 seconds). Figure 5 shows each station on the horizontal axis, with the dark bars showing how long each station takes to process one patient. The red line, at 23 seconds, shows how fast each process must be to keep up with the TAKT time. The idea is that you can add more stations until you reach the point where someone in your area is leaving every 23 seconds.
Knowing the TAKT time can help determine how many stations we may need to meet demand. For example, if we want all 1,172 children to get dental screenings (a dental screening takes approximately 250 seconds) in a 7½-hour period, we know from our TAKT time that a child must be screened every 23 seconds. To accommodate this demand, we would need approximately 11 dental screeners (i.e., 11 screeners × 23 seconds = 253 seconds, just over the 250-second screening time).
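The same arithmetic as a short sketch, with the numbers taken straight from the paragraphs above:

```python
import math

children = 1172
available_seconds = 7.5 * 3600          # 27,000 seconds of fair time
takt = available_seconds / children     # ~23 seconds per child

dental_cycle_seconds = 250              # observed dental screening cycle time
screeners_needed = math.ceil(dental_cycle_seconds / takt)
print(f"TAKT ~{takt:.0f} s per child, dental screeners needed: {screeners_needed}")   # ~11
```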

I put this together with information that I thought may be interesting and helpful. It was a pleasure to be able to have the opportunity to help make future ACT Kids Health Fair Events more successful.
¹ ACT Kids Health Fair - Saturday, September 27, 2014. (n.d.). Retrieved January 7, 2014, from http://www.actkidshealthfair.org/cAbout.php?cid=9

*7 ½ hours = 450 minutes = 27,000 seconds



Tuesday, December 17, 2013

3 key learnings from the 2013 Institute for Healthcare Improvement (IHI)

I was fortunate enough to attend the Institute for Healthcare Improvement (IHI) Conference this year in Orlando, Florida. I participated in many workshops, learning labs, and minicourses, and I have decided to give a brief summary of a few learnings from my time there.

I. Simulation
There are simulations for discrete events such as physical processes, but computer simulation can also model real-world phenomena such as the spread of information, and even “change” behavior and how it affects an organization.

This workshop focused on how to take national, research-based metrics such as the rate of change readiness, adoption, and adaptability for organizations (among other metrics) and create a simulation that helps the user determine the most effective strategies for implementing programs such as readmission reduction.

The workshop facilitators also spent a good amount of time speaking to the validity of the data used as input for these models, and to techniques for arriving at reasonably reliable conclusions using Fermi decomposition, an estimation technique. You can then PDCA the simulation model as you learn more about how your particular organization behaves.


II. Run Charts vs. Red, Green, Yellow

Many organizations use Red, Yellow, and Green to indicate the status of measures. This style of displaying metrics is limiting and can cause confusion. This can stifle crucial conversations and lead leaders to the wrong conclusions.

Instead, the presenters argued, simple control charts can illustrate the interdependencies of processes, show when your processes are out of control (even when the points sit within the control limits), and allow leaders to draw valid conclusions. This is something that the classic Red, Yellow, and Green dashboards cannot convey well.

To show the difference, a handout exercise was performed during the workshop where the same metrics were presented, both in standard dashboard style with the use of color coding (Red, Yellow, and Green) and then again using Shewhart charts (control charts with upper and lower bound limits).

The presenters covered the 5 rules for detecting special cause in your processes using control charts (a rough sketch for checking them in code follows the list):

Example Control Chart

1. A single point outside the control limits.
2. A run of eight or more points in a row above or below the mean.
3. Six consecutive points increasing or decreasing.
4. Two out of three consecutive points near a control limit.
5. Fifteen consecutive points close to the mean.
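These rules are mechanical enough to check in code. Here is a rough Python sketch; the thresholds for “near a control limit” (beyond two sigma) and “close to the mean” (within one sigma) follow common Shewhart conventions and are my interpretation, not something the presenters specified:

```python
def special_cause_signals(points, mean, sigma):
    """Return which of the 5 special-cause rules fire for a series of points."""
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    signals = set()

    # Rule 1: a single point outside the control limits.
    if any(p > ucl or p < lcl for p in points):
        signals.add(1)

    for i in range(len(points)):
        # Rule 2: eight or more points in a row on one side of the mean.
        w8 = points[i:i + 8]
        if len(w8) == 8 and (all(p > mean for p in w8) or all(p < mean for p in w8)):
            signals.add(2)
        # Rule 3: six consecutive points steadily increasing or decreasing.
        w6 = points[i:i + 6]
        if len(w6) == 6 and (all(a < b for a, b in zip(w6, w6[1:]))
                             or all(a > b for a, b in zip(w6, w6[1:]))):
            signals.add(3)
        # Rule 4: two out of three consecutive points beyond two sigma.
        w3 = points[i:i + 3]
        if len(w3) == 3 and sum(abs(p - mean) > 2 * sigma for p in w3) >= 2:
            signals.add(4)
        # Rule 5: fifteen consecutive points within one sigma of the mean.
        w15 = points[i:i + 15]
        if len(w15) == 15 and all(abs(p - mean) < sigma for p in w15):
            signals.add(5)

    return signals
```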



III. Real-time location systems (RTLS)
The presenters described RTLS as: Wireless technology that permits the tracking of moveable medical equipment, patients, and staff.

There are many reasons why an organization may want to use this technology, including: easier tracking of equipment that may be recalled, reduced excess purchasing, addressing shortages enterprise-wide, better equipment planning, and temperature monitoring of sensitive equipment.

Another application, not mentioned in this workshop, is the ability to move to a real-time simulation forecasting model. If a system can feed data into the model immediately (or close to it), then a department or hospital may be able to forecast problems days in advance (much like weather forecasting).

Think you need help? Please contact me at Astrozuggs@gmail.com with any questions you may have about this or any other Continuous Improvement Question.

Also, please see my other Continuousimprovementpal blogs

Saturday, September 7, 2013

Why Simulation is better than spreadsheets

Why is Discrete Event Simulation better than spreadsheets?

One important reason Discrete Event Simulation (DES) is a more powerful analysis tool than spreadsheets is its ability to model resources and variability, which makes it far better at representing real-world scenarios.


What is Simulation?

Discrete Event Simulation (DES), often referred to simply as "simulation," describes software designed to model complex processes as discrete events, that is, as distinct process steps that take into consideration the dimension of time (I know I’m getting kind of nerdy here, but I don’t know how else to describe it).

Simulation software also typically includes the capability to create "animations" of what you are simulating. For example, if you are simulating an emergency room in a hospital, you would see an "animation," which may look similar to a video game, showing patients, nurses, and doctors walking around. For all of my simulations, I use Extend 7 (now known as ExtendSim).

Here are some questions that simulation can reliably help answer:

If my business increases volumes, at what point do I need to purchase more machines, or hire more people?

How much more annual revenue can I expect if I reduce my customers perceived waiting time by 3, 5, or 10 minutes?

How large should we make our waiting room for our urgent care clinic?


Think you have challenges and need help? Contact me at Astrozuggs@gmail.com

Carwash Example:

Let’s look at an example similar to one we used at my previous employer when we did simulation work for healthcare; I have changed it to a carwash example.

Cars arrive at a carwash on average every ten minutes, use the vacuum for ten minutes, then the wash for ten minutes, and finally dry and detail for ten minutes.

In a spreadsheet analysis, you might simply enter the averages into cells: cars arrive every ten minutes, each step in the process takes exactly 10 minutes, and each car leaves 30 minutes after it arrived (Figure 1).

In a simulation analysis, however, the vacuum, the wash bay, and the dry/detail area are treated as resources that can only serve one car at a time, which allows cars to queue until a resource becomes available again. Introducing variability into the processes is what causes cars to wait in line. In the example below, I programmed variability by giving each of the carwash stations a random distribution, still with the same average as before but plus or minus two minutes (i.e., 8-12 minutes instead of always exactly 10 minutes).

I then programmed customers to leave if they wait longer than 15 minutes (Figure 2), which can represent customers becoming frustrated or even the fact that your carwash may not have the physical space to hold more than a 15 minute line (i.e. customers cannot even pull in to your establishment from the street.)
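For readers who want to experiment without dedicated simulation software, the same carwash logic can be sketched in Python with the open-source SimPy library. This is an approximation of the model described above: the post does not specify the exact distributions, so I have assumed exponential interarrival times, uniform 8-12 minute service times, and a renege that applies only while waiting for the vacuum:

```python
import random
import simpy

SHIFT_MINUTES = 10 * 60      # a ten-hour day, in minutes
counts = {"completed": 0, "left": 0}

def car(env, vacuum, wash, dry):
    with vacuum.request() as req:
        # Leave if the wait for the first station exceeds 15 minutes.
        result = yield req | env.timeout(15)
        if req not in result:
            counts["left"] += 1
            return
        yield env.timeout(random.uniform(8, 12))   # vacuum
    with wash.request() as req:
        yield req
        yield env.timeout(random.uniform(8, 12))   # wash
    with dry.request() as req:
        yield req
        yield env.timeout(random.uniform(8, 12))   # dry and detail
    counts["completed"] += 1

def arrivals(env, vacuum, wash, dry):
    while True:
        yield env.timeout(random.expovariate(1 / 10))   # a car roughly every 10 minutes
        env.process(car(env, vacuum, wash, dry))

env = simpy.Environment()
vacuum, wash, dry = (simpy.Resource(env, capacity=1) for _ in range(3))
env.process(arrivals(env, vacuum, wash, dry))
env.run(until=SHIFT_MINUTES)
print(counts)
```

Removing the 15-minute renege in this sketch is one way to approximate the Figure 3 scenario, where the queue for the vacuum simply keeps growing.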

Results for Spreadsheet:

Figure 1 spreadsheet analysis


Using a spreadsheet analysis (Figure 1), you can complete 58 cars in a ten hour shift (i.e. at the end of 10 hours there are still 3 cars in the process, which are not yet vacuumed, washed, or dry and detailed).
Results for Simulation:

Because the simulation allows for waiting in line, only 39 cars were completely finished and 15 customers left after 15 minutes of waiting (Figure 2).




 Figure 2: Extendsim Simulation showing 15 customers leave without having been  served. Wait-times for the vacuum are still over 8 minutes
                  
It is interesting to note that when I ran the simulation with customers not leaving after 15 minutes (Figure 3), the wait time to use the vacuum shot up to over 76 minutes (would anyone really wait 76 minutes to use the vacuum?).

Figure 3 Extendsim simulation showing average wait times of over 76 minutes when customers do not leave after 15 minutes

 You can see that there exists a notable difference between the spreadsheet analysis and the simulation analysis. In our small car wash example, the spreadsheet analysis suggests that the carwash is capable of handling almost 60 customers a day. If you consider a carwash that can make anywhere from $5.00 to $10.00 per customer then the annual revenue could be as much as $160,000 for our little carwash.

The simulation analysis suggests the carwash can realistically only handle about 40 customers a day with an additional 15 customers leaving or not being satisfied. Considering many customers would be given free carwashes to make up for the wait, we can realistically expect to see at most $95,000 in annual revenue.

This example was very simple with only 3 processes. Simulation becomes even a more powerful tool as complexity increases.

In one instance, I helped build a simulation (along with several other engineers) for a large portion of a hospital. Several months after our project was complete, the head engineer received a frantic call from one of the hospital administrators: they were experiencing longer wait times and patients queuing up in several areas of the hospital, and they could not determine why. One of the company’s associates grabbed his laptop (with the simulation on it), flew to California to make observations, then adjusted and re-ran the simulation. He was able to perform a root-cause analysis on the simulation results, which also applied to the real-world scenario the hospital was experiencing, and create a strategy to mitigate the issue.

The simple carwash example illustrates the benefits a properly built simulation can provide.

Additional benefits of DES include (but are not limited to):

1. The ability to change process patterns to a granular level (i.e. you can dynamically change the volume of cars entering your system by hour, day, or month of year to capture seasonality)

2. The capability to perform scenarios such as increasing customer demand to determine at what point your business is at capacity (ex. Many healthcare systems use DES to determine when to build additional OR’s, Inpatient rooms, or what the effect of changing one process has on another part of the hospital)

3. The ability to use animations helps others (who may not be technical) clearly understand what the simulation is doing and what the outcomes of the simulation are (as opposed to viewing a presentation with graphs and numbers.)

See a more in depth list of Simulation benefits vs. spreadsheets at ProModel.

Conclusion:

I do not want to imply that spreadsheet analysis is not useful. Spreadsheet analysis can be a powerful tool to provide good answers and is fairly easy to perform (I use spreadsheets all the time.)

However, using simulation software forces you to think about your processes and how they behave, and it will reveal things about your business that you would never have seen or even considered.

Simulation does have some drawbacks, which include:

1. Simulation software is expensive. More affordable simulation software can still cost thousands of dollars.

2. You need lots of good data. You need to pull data any way you can and be able to clean it up for analysis prior to including it in your simulation. Often you will be required to observe the process yourself, noting times, how things work, and documenting anomalies, which will later be built into the simulation.

3. Some knowledge of computer programming and logic is required.

4. Knowledge of probability and statistics is necessary to be sure you have created an accurate model. When reporting results, you need an understanding of statistics to determine whether your model’s outcomes are statistically significant.

I hope this helped you understand simulation, ways it can be used, and why it can be more powerful than a spreadsheet analysis.

Please contact me at Astrozuggs@gmail.com with any questions you may have about this or any other Continuous Improvement Question.

Also, please see my other Continuousimprovementpal blogs

 

Sunday, August 11, 2013

4 Easy Steps To Improve Cash Flow for a B2B Service Company


A major business expense often overlooked is poor or sub-optimal cash flow. When it takes a long time to get back what you have already paid out, operating expenses increase and profit margin is lowered. By improving cash flow, your company decreases its costs and increases its profits. This blog post will explain specifically how you can also generate new revenue and improve customer satisfaction as well – all at the same time!

Guest blog post by Chris Walker - Vice President at Starkweather Roofing (Phoenix, Arizona)

B2B Service Company
 
Starkweather Roofing services and installs primarily commercial and industrial roof systems. We offer residential roofing services as well, but the majority of our customers include commercial building owners, property managers, facility managers, and general contractors. We do not sell products and do not have a high volume of transactions with end consumers, categorizing us as a B2B (business-to-business) service company.

A Personal Example of Poor Cash Flow
Upon rejoining the company in February 2013, we quickly identified that, for a variety of reasons, invoices for commercial roof maintenance and emergency roof leak repair services took on average 45 days to be created. In other words, we had already paid for wages and materials a month-and-a-half before we even generated a request for payment. As a result, past due receivables, collections efforts, and write-offs were higher than necessary. Although service accounts for only ~10% of total revenues, cash flow was still clearly an opportunity for improvement worth pursuing. Thus, one of my first continuous improvement efforts focused squarely on increasing cash flow in our service department.

Once an invoice is created, it takes an additional two to five business days for the postal service to deliver it to the customer. Depending on how mail is distributed upon receipt, their review and approval processes, as well as agreed upon payment terms and other policies and procedures of the company receiving our invoice, it could take another month or more from that point until we receive payment… assuming there are no discrepancies.

Invoice disputes create poor customer relations and require valuable resource time from both our accounting and our service department staff to resolve. When we have to send significantly past due invoices to collections, this further decreases our profits as the collections agency charges a fee and in some cases negotiates a settlement amount to resolve the debt. In the end, at times we lose money on a transaction or are forced to write-off the invoice entirely.
On average we weren’t being reimbursed for our actual outgoing expenses on the services we provided until two to four (or more) months after we incurred them. Multiply this over hundreds of transactions per month, include collections and write-offs, and you can see how cash flow could become an issue.

How We Fixed Our Cash Flow Problem

Through process mapping, data analysis, and interviews with our staff and customers, the root causes of our cash flow problem were identified as timing and communication. We found that the longer it took for a customer to receive our bill, and the less clear it was what the bill was for, the longer it took for us to get paid. Now that we knew what was causing the problem, we could go about identifying solutions to minimize it.

Using an approach known as value stream mapping, the key stakeholders and I documented the process step-by-step, beginning with completion of the repairs through receipt of final payment. We documented the pain points throughout the process and grouped them into categories, flagged where process variations existed, identified what data was available, and then began to brainstorm ideas for incremental improvement. In order to fix our cash flow problem we had to make timely, accurate, and pre-aligned invoicing of our roof repair service work orders a priority and resource the process accordingly.

One major problem was timing (taking 45 days to create a bill). By the time the customer received the bill they sometimes had forgotten they ever requested the service for that particular building to begin with, and needed additional time to reconcile their records with our invoice. Another contributing factor was the lack of an adequate explanation of our bill. Other than the address or building name of the roof we serviced, often our invoices were vague and some customers didn’t quite know what exactly they were being asked to pay for. The customers we interviewed agreed that if we could fix these two issues (timing and transparency), payments could be made much faster.

Our Improved Invoicing Process

We now dedicate a percentage of one person’s time to immediately reviewing all completed roof repair work orders for accuracy (ensuring we account for all time and materials actually incurred) and ensuring there is a thorough, easy-to-understand explanation of the work performed. During this review we also double-check to see if the roof may be under warranty, or if this may have been a repeat problem. The double-check process step eliminates a large number of disputes over repairs that likely should not have been billed in the first place (requiring credits later in the year). Our new policy is to finish this review within 2 business days of the completion of the repair work.

Starkweather Roofing’s service technicians take before and after pictures of repairs, but we had rarely shared them with customers previously (unless asked). Because our customers rarely if ever actually see the roofs of the buildings they own or manage, we now accompany the photos with the invoice not only to demonstrate that the work was done, but so they can also see what exactly it is that caused their problem(s), get an idea of their roof’s overall condition, and view the quality and craftsmanship they are paying for. We have an online customer portal for customers to report leaks, monitor progress, view work history, and download invoices, warranty information, and more anytime from anywhere they have an internet connection. Our new policy is to always make these pictures visible in that online portal, and to do so as soon as the work order review is complete.

The invoice itself is generated immediately after the work order is validated and the pictures have been made visible in the customer portal. This end-to-end batch processing approach to invoicing saves resource effort and greatly reduces the entire process duration at the same time. We now produce a clearly articulated, accurate invoice for a billable repair that accounts for all expenses and includes before and after pictures within just 2 days of completion of the work.
Before we mail the invoice, we email a digital courtesy copy directly to the requesting customer and give them 24 hours to review and respond with any questions or concerns. Our customers now have the opportunity to evaluate the work we performed, the associated charges, and to inform us of any questions, clarification requests, or disputes they may have before we mail a copy to their accounting department for payment. If there are any problems, we are able to quickly resolve the issue and align 100% on the charges. Often times they even copy the accounts payable contact on their email response to confirm approval of the charges for payment to help the remainder of the process go smoothly.

We now mail (and in a growing number of cases, email directly to the accounts payable contact to save further on time and the costs of printing, paper, envelopes, and postage) invoices just 3 business days after completion of the work and accrual of expenses, and we are extremely confident in receiving timely payment on 99% of them. Our past due receivables and the associated collections efforts and expenses have been virtually eliminated. More often than not, we are now paid for our services in half the time it previously took us just to create the initial bill – a dramatic improvement over the previous average. Cash flow in our service department is no longer an issue – on to the next improvement opportunity!

Additional Benefits Realized by Improving Cash Flow
Customer satisfaction has dramatically improved as a result. Our customers greatly appreciate the timely confirmation that their roofing problems have been resolved as well as their improved ability to understand and actually see the work performed, and being given an opportunity to review the bill before it is sent for payment. A once mundane and often painful aspect of any business relationship has evolved into a pleasant, eagerly anticipated interaction… and another key differentiator of Starkweather Roofing from the competition.

These email exchanges also provide an opportunity to interact positively with our customers on a regular basis and to gently remind and encourage them to consider our other service offerings, such as preventive roof cleaning and maintenance, roof restoration, re-roofing, and new roof construction, without having to further invest in traditional interruption marketing methods. During the work order review process we make note of the number of work orders for that particular building over the previous 12 months. If there have been a certain number of non-repeat issues on a roof in the previous year, we specifically ask in the invoice courtesy review email whether the customer would like us to evaluate that roof further and provide recommendations and budget figures for maintenance, restoration, and/or re-roofing. This natural, value-add dialogue makes our business development efforts much easier, costs significantly less, and our conversion rate on proposals initiated under these circumstances is much higher than normal.
How To Improve Cash Flow in Four Easy Steps

You can implement the same improvement principles in your company that Starkweather Roofing did to not only improve cash flow but at the same time reduce expenses, increase profit margin, generate new revenue, and improve customer satisfaction.

1. Regardless of when it will actually be sent, make it a priority to generate an accurate customer invoice in a timely manner. The longer it takes to create the bill the more likely it will contain inaccuracies, which lead to increased disputes and elongated time to receive payment.

2. Clearly explain what exactly the bill is for. Itemize and be as transparent as you can. We include the specific building name, address, and suite number(s), as well as tenant name(s) where applicable in addition to the description of work performed and subtotals for labor, materials, and service charge. We also make available the request date, date of service, and before and after pictures. We leave no doubt what exactly the invoice is for and how we arrived at the charges.

3. Provide the customer a brief window of opportunity to preview the invoice and specifically ask them to respond with any questions or concerns. Give them a short timeframe to do so before it is sent out for payment. Proactively resolving disputes requires less time and effort than doing so reactively, and significantly decreases your outstanding receivables, collections, and write-offs.

4. Take advantage of this natural opportunity to interact with your customers to educate them on the other products or services your company offers. Do this as seamlessly and non-sales like as possible. Where it is appropriate, we mention the number of times in past year that particular building had a roofing issue, what they’ve spent so far in total, an approximate cost of preventive maintenance, and specifically ask their permission to provide them with a scope of work and price for a less expensive alternative that proactively eliminates the opportunity for future leaks (if possible), and provide options for restoration or re-roofing where necessary.

These principles are applicable not only for B2B service companies, but also B2C (business-to-customer) and product-centric companies where the final payment amount isn’t predetermined. Just think if plumbers, mechanics, consultants, landscapers, or any number of other companies you do business with approached invoicing in this manner.

If you have any questions or would like to offer your ideas or best practices for improving cash flow, please leave a comment below or email me at Chris@StarkweatherRoof.com