International Journal of Exploring Emerging Trends in Engineering (IJEETE) Vol. 02, Issue 04, JUL-AUG, 2015 Pg. 189-201 WWW.IJEETE.COM

A STUDY ON METHODOLOGIES AND TOOLS USED IN SIX SIGMA

1 Deepak, 2 Dr. Rohit Garg, 3 Somvir Arya

1 M.Tech Research Scholar, IIET, Kinana, Jind, Haryana; 2 Director, GNIOT, NOIDA; 3 Head of Department of Mechanical Engineering, IIET, Kinana, Jind, Haryana

ABSTRACT: Six Sigma is a formal methodology for measuring, analyzing, improving and then controlling, or "locking in", processes. This statistical approach reduces the occurrence of defects from a three sigma level of about 66,800 defects per million opportunities to a Six Sigma level of fewer than 3.4 defects per million. Six Sigma is a statistics-based, comprehensive methodology that aims at nothing less than perfection in every company product and process. It is a highly disciplined approach focused on developing and delivering near-perfect products and services consistently. The purpose of Six Sigma is to reduce variation, achieving standard deviations small enough that almost all products or services meet or exceed customer expectations. It is a flexible and comprehensive system for achieving, sustaining and maximizing business success, uniquely driven by a close understanding of customer needs, disciplined use of facts and statistical analysis, and diligent attention to managing, improving and reinventing business processes.

Keywords: Six Sigma, quality tools.

I. INTRODUCTION
Six Sigma is a disciplined, data-driven methodology for eliminating defects in any process. Six Sigma tools and methodology address the overall cost of quality, both its tangible and intangible parts, seeking to minimize it while at the same time raising the overall quality level and thereby contributing to business success and profitability. The success of Six Sigma is measured in financial terms, in defects per million opportunities, in customer satisfaction, in the performance of internal work processes and in suppliers' performance. In essence, Six Sigma is a structured way of solving problems in an existing


process based on analysis of real process data, i.e. facts. Six Sigma is a rigorous and systematic methodology that utilizes statistical analysis and information (management by facts) to measure and improve a company's operational performance, practices and systems, identifying and preventing defects in manufacturing and service-related processes in order to anticipate and exceed the expectations of all stakeholders (Tonner, 2003). Six Sigma is a toolkit and program for improving quality in manufacturing processes, a methodology that aims to reduce variation in a process (Prewitt, 2003). Six Sigma, a set of techniques and tools for process improvement, was developed by Motorola in 1986. It addresses major root causes and delivers the desired results, both in terms of improvement and of time span, yielding productivity, profitability and quality improvements (Chandna and Chandra, 2009). Six Sigma has been adopted by many industries because of its proven benefits in increased profitability and reduced cost, especially for medium-scale industries. The manufacturing sector leads in implementing Six Sigma with a 69% share, followed by information technology (IT) industries (Desai, 2008). Sigma (σ) is a Greek letter that represents the standard deviation of a sample population in statistics. When measuring process capability, the number of standard deviations between the process mean and the nearest specification limit is expressed in sigma units. The greater the sigma value, the more standard deviations fit between the mean and the nearest specification limit.
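As a concrete illustration of the relationship just described, the short sketch below computes the sigma level, the expected defects per million opportunities, and the capability indices Cp and Cpk that reappear in section 3.1.2. It is not from the paper: the sample measurements, specification limits and variable names are assumed for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sample of a measured quality characteristic and assumed
# specification limits; none of these numbers come from the paper.
x = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00, 10.04, 9.96])
lsl, usl = 9.90, 10.10          # lower / upper specification limits

mean = x.mean()
s = x.std(ddof=1)               # sample standard deviation (sigma)

# Sigma level: how many standard deviations fit between the mean
# and the nearest specification limit.
z = min(usl - mean, mean - lsl) / s

# Capability indices discussed later in the Measure phase (section 3.1.2).
cp = (usl - lsl) / (6 * s)
cpk = z / 3

# Expected defects per million opportunities for a normally distributed process.
# (The textbook 3.4 DPMO figure for Six Sigma additionally assumes a
# long-term 1.5-sigma shift of the process mean.)
p_out = norm.sf((usl - mean) / s) + norm.cdf((lsl - mean) / s)
print(f"sigma level = {z:.2f}, Cp = {cp:.2f}, Cpk = {cpk:.2f}, DPMO = {p_out * 1e6:.0f}")
```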


"One Sigma" represents a very high degree of variability (roughly 7 "mistakes" out of 10 opportunities), while "Six Sigma" represents a very low degree of variability (3.4 "mistakes" out of one million opportunities). This translates into 99.99966% perfection.

II. SIX SIGMA METHODOLOGIES
The main focus of Six Sigma is to improve all key processes of a manufacturing setup, treating quality as a function of those processes, and to reduce the rejection rate. Six Sigma mainly uses two methodologies: Define, Measure, Analyze, Improve and Control, usually known as DMAIC, and Define, Measure, Analyze, Design and Verify, known as DMADV. DMADV is used for creating new processes that produce products with a minimum defect rate. Both methodologies are based on W. Edwards Deming's Plan-Do-Check-Act cycle. Some other methodologies used during Six Sigma implementation are:
• CDOC (Conceptualize, Design, Optimize and Control)
• DCCDI (Define, Customer, Concept, Design and Implement)
• DMADOV (Define, Measure, Analyze, Design, Optimize and Verify)
• DMEDI (Define, Measure, Explore, Develop and Implement)
• DCDOV (Define, Concept, Design, Optimize and Verify)
• IIDOV (Invent, Innovate, Develop, Optimize and Validate)
• IDOV (Identify, Design, Optimize and Validate)

III. THE SIX SIGMA DMAIC METHODOLOGY
The best known and most widely used Six Sigma methodology is DMAIC. Most companies begin implementing Six Sigma using DMAIC and later add the DFSS (Design for Six Sigma, also known as DMADV, or Define, Measure, Analyze, Design, Verify) methodologies when the organizational culture and experience level permit.


Six Sigma does not rest on a completely new foundation. Its roots as a measurement standard can be traced back to Carl Friedrich Gauss (1777-1855), who introduced the concept of the normal curve. DMAIC can be thought of as a roadmap for problem solving and product/process improvement. The purpose of the Define phase is to clarify the goals and value of a project; the Define phase and the beginning of the Measure phase are mostly qualitative, although quantitative data from process evaluations are sometimes used, and the problem to be solved is formulated from people's experience. The Six Sigma methodology itself is built from concepts introduced by W. Edwards Deming's PDCA, or Plan-Do-Check-Act, cycle, which describes the basic logic of data-based process improvement (Pande et al., 2000). The Six Sigma DMAIC (Define, Measure, Analyze, Improve and Control) methodology is based on Deming's PDCA idea. DMAIC is considered a newer approach to Six Sigma and is sometimes referred to as the "Breakthrough Approach" developed by Mikel Harry and Richard Schroeder (2000) (Gupta, 2004).
3.1.1 DEFINE
The Define stage of the Six Sigma methodology and philosophy is the beginning of a Six Sigma project. Its purpose is to identify potential projects, to define and select a project, and to set up the project team. Gryna (2001) specified five general steps of the Define stage, summarized as:
1. Identify Potential Projects: This step includes the screening and selection of projects; the focus is on opportunities that will increase customer satisfaction and reduce the cost of poor quality (COPQ).
2. Evaluate Projects: The evaluation of projects includes a review that goes from an analysis of scope and benefit to an assessment of factors that help set priorities.
3. Select a Project: The initial project should be a successful one, because a successful project provides evidence to the project team that the process works and helps build momentum for future endeavours.


4. Prepare a Problem Statement and a Mission Statement for the Project: A mission statement is based on the problem statement but provides direction to the project team; establishing a problem statement brings the problem to the forefront while keeping the planned outcome in view.
5. Select and Launch the Project Team: Generally, a project team has a sponsor, a leader, a recorder, team members and a facilitator. Developing a charter that defines what the team will do and how the team will function is an option that may help in this step.
The Define phase essentially sets the tone for the entire design project: the design problem is defined by management, and projects consistent with the overall business strategy are nominated and selected based on benefits (De Feo et al., 2002). The Pareto principle is one way to assess potential projects. It states that a few contributors are responsible for the bulk of the cost; these vital few contributors need to be identified so that quality improvement resources can be concentrated in those areas.
3.1.2 MEASURE
The Measure phase identifies key process characteristics and product parameters and measures the current process capability. This phase also concentrates on key customers and their critical needs. The steps in this stage, as outlined by Gryna (2001), include:
1. Verify the Project Need and Measure the Baseline Performance: This helps justify the time spent on the project and helps overcome resistance to accepting and implementing a remedy. It is a good idea to confirm the size of the problem in numbers, because this gives a clear view of the problem to be dealt with.
2. Document the Process: Tools such as process flow diagrams or process maps are useful in this stage. Documenting the process allows others to see the problems being addressed.
3. Plan the Data Collection: This stage involves quantification of the symptoms, an outline of the symptoms and the formulation of theories.
4. Validate the Measurement System: Variation comes in many different ways, from the process

itself or even from the measurement system. Validating the measurement system can involve accuracy, repeatability, reproducibility, linearity and stability.
5. Measure the Process Capability: Knowing the initial process capability helps define the work to be done in the Analyze and Improve phases to achieve capability at the six sigma level. Process capability refers to the inherent ability of a process to meet the specification limits for a product. In the planning aspects of operations it is very important that the processes be able to meet the specifications, and a good reason for quantifying process capability is to be able to compute the ability of the process to hold product specifications. Using process capability measurements is one way of ensuring that the process can meet specifications. Planners try to select processes whose six sigma process capability lies well within the specification width; a measure of this relationship is the capability ratio, Cp. Because the process average is often not at the midpoint of the specification range, it is also useful to have a capability index that reflects both the variation and the location of the process average; such an index is Cpk. The higher the Cp, the lower the amount of product outside the specification limits; if the average equals the midpoint of the specification range, then Cpk equals Cp. Most capability indices assume that the quality characteristic is normally distributed (Gryna, 2001).
3.1.3 ANALYZE
This phase of the Six Sigma paradigm analyzes past and current performance data to identify the causes of variation and of process performance. Its main purpose is to select a high-level design from several alternatives and to develop detailed requirements against which the design will be optimized (De Feo, 2002). The steps, again as stated by Gryna (2001), include:
1. Collect and Analyze the Data
2. Develop and Test Theories on Sources of Variation and Cause-and-Effect Relationships
A large part of the Analyze phase is testing theories about management-controllable problems. Doing so requires the use of


facts, rather than opinions, to reach conclusions about the causes of a quality problem. The factual approach not only determines the true cause but also helps gain agreement on the true cause among all of the parties involved (Gryna, 2001). One way to test the theories that have been developed is to collect new data: data must be collected on the newly developed processes to see how well they perform compared with the processes before. Some measurements that can be taken include (Gryna, 2001):
• Measurement following non-controlled operations
• Measurement at intermediate stages of a single operation
• Measurement of additional or related properties of the product or process
• Study of worker methods
In analyzing the errors of processes and procedures there will no doubt be some errors attributable to the way things are done, and there are also human errors that management must contend with. However, not all errors can be blamed on the processes or even on the machines being used. In general, four types of error can be attributed to workers: inadvertent errors, conscious errors, technique errors and communication errors (Gryna, 2001).
3.1.4 IMPROVE
In this stage the team must be ready to move back and forth between far-reaching ideas and the details of executing a plan (Pande et al., 2002). This phase of Six Sigma essentially designs a remedy, proves its effectiveness and prepares an implementation plan. The steps as outlined by Gryna (2001) include:
1. Design the Remedy: This step identifies customers, defines their needs and proves the effectiveness of the remedy. The remedy designed must fulfil the original project mission, particularly with respect to meeting customer needs.
2. Prove the Effectiveness of the Remedy: There are two main steps that can be taken to prove the remedy: a final evaluation under real-world conditions, and a preliminary evaluation of the remedy under conditions that


simulate the real world. Before any remedy is accepted, it must be proven.
3. Evaluate Alternative Remedies: The remedy selected should improve on the original problem and should optimize both company costs and customer costs. Review the candidate remedies, and assess which would have the largest impact and which of these are viable.
4. If Necessary, Design Formal Experiments to Optimize Process Performance: The design of experiments can include production experiments, evaluation of suspected dominant variables, exploratory experiments to determine dominant variables, response surface experiments and simulation.
5. Deal with Resistance to Change: Resistance to change is very common in this type of implementation; one way to deal with it is to educate the people involved in the change.
6. Transfer the Remedy to Operations: This stage includes changes in staffing and responsibilities and may involve additional equipment, materials and supplies along with extensive training. Transferring the remedy to operations may also include revisions to operating standards and procedures.
A useful tool in the Improve phase is evolutionary operation (EVOP). EVOP introduces small changes into variables according to a planned pattern; the changes are small enough to avoid a detour from the status quo but large enough to gradually establish which variables are important. EVOP is based on the concept that every manufactured lot contains information about the effects of process variables on a quality characteristic (Gryna, 2001). Careful design of experiments makes it easier to achieve conformance to quality in the future. The Six Sigma approach uses Design of Experiments (DOE) as an important part of its processes. Experiments can have numerous objectives, and the best strategy depends on the objective. Using DOE is like setting a concrete plan for conducting the experiment; DOE allows the important variables that affect quality to be established.
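As a hedged illustration of the DOE idea above (not part of the original study), the sketch below builds a 2^3 full factorial design and estimates the main effect of each factor from an invented response column; the factor names and numbers are hypothetical.

```python
from itertools import product

factors = ["sand_moisture", "binder_content", "pouring_temperature"]  # assumed factors
runs = [dict(zip(factors, levels)) for levels in product([-1, +1], repeat=len(factors))]

# Invented response (e.g. % defective castings) for the eight runs, in run order.
response = [12.1, 10.4, 11.8, 9.9, 8.7, 7.2, 8.9, 6.8]

# Main effect of a factor = mean response at the high level minus mean at the low level.
for f in factors:
    high = [y for run, y in zip(runs, response) if run[f] == +1]
    low = [y for run, y in zip(runs, response) if run[f] == -1]
    print(f"{f:20s} main effect: {sum(high) / len(high) - sum(low) / len(low):+.2f}")
```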


3.1.5 CONTROL
The Control phase is the last phase of the Six Sigma methodology; it is where activities are designed and implemented to hold the gains of improvement. Statistical Process Control (SPC), a technique for applying statistical analysis to measure, monitor and control processes, whose major component is the use of control charting methods (Wortman et al., 2001), can be used in this phase. The use of control charts has many benefits: when a control chart shows that a process is within specification limits and in control, it is often possible to eliminate the costs relating to inspection. The Control phase refers to the process used to meet standards consistently. The steps according to Gryna (2001) are:
1. Document the Improved Process and Design Controls: Control during operations is carried out through a feedback loop: measurement of actual performance, comparison with the standard of performance, and action on the difference.
2. Validate the Measurement System: This step may include new measurement devices, the collection of new data and additional training for process personnel. After the measurement system for the improved process is set up, it must be evaluated and made capable.
3. Determine the Final Process Capability: The process changes implemented should be irreversible. Essentially, this step ensures that the process capability gained can be held under normal operating conditions.
4. Implement and Monitor the Process Controls: In this step, all of the remedies are implemented into operations; the steps mentioned above are used to monitor process and product performance. Implementing and monitoring the improved process is the final step in a quality improvement project.
According to Gryna (2001), the control process is in the nature of a feedback loop and involves a sequence of steps: choose the control subject, establish measurement of performance, establish standards of performance, measure actual performance, compare measured performance to the standards, and take action on the difference.
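The control charting mentioned above can be illustrated with a small sketch, assuming rational subgroups of size five and the standard X-bar/R chart constants; the subgroup measurements are invented, not taken from the paper.

```python
import numpy as np

# Invented subgroup measurements (4 subgroups of 5 parts each).
subgroups = np.array([
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.8, 10.0, 10.2, 9.9, 10.0],
    [10.1, 10.0, 9.9, 10.1, 10.0],
])

A2, D3, D4 = 0.577, 0.0, 2.114        # control-chart constants for subgroup size n = 5

xbar = subgroups.mean(axis=1)                          # subgroup averages
r = subgroups.max(axis=1) - subgroups.min(axis=1)      # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()

print(f"X-bar chart: LCL = {xbarbar - A2 * rbar:.3f}, CL = {xbarbar:.3f}, UCL = {xbarbar + A2 * rbar:.3f}")
print(f"R chart:     LCL = {D3 * rbar:.3f}, CL = {rbar:.3f}, UCL = {D4 * rbar:.3f}")
```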

Pande et al. (2002) state that the main purpose of the Control phase is quite simple: "once the improvement's been made and results documented, continue to measure the performance of the process routinely, adjusting its operation when the data clearly indicates you should do so or when the customer's requirements change."

IV. COMMONLY USED QUALITY CONTROL (QC) TOOLS IN SIX SIGMA
A significant number of quality assurance and quality management tools are available, and selecting an appropriate tool is not always an easy task. Seven basic quality tools used in Six Sigma methodologies are:
1. Flow chart
2. Pareto diagram
3. Check sheet
4. Control chart
5. Histogram
6. Scatter plot
7. Cause-and-effect diagram
1. Cause and Effect Matrix: A cause-and-effect matrix (sometimes called a C&E matrix for short) helps discover which factors affect the outcomes of a Six Sigma initiative. It provides a way of mapping out how value is transmitted from the input factors of the system (the Xs) to the process or product outputs (the Ys). With these relationships visible and quantified, the most influential factors contributing to value can readily be discovered. The cause-and-effect matrix is a viable tool that provides a large amount of information. The Key Process Output Variables (KPOV) are scored according to their importance, while the Key Process Input Variables (KPIV) are scored in terms of their relationship to the key outputs. In the matrix, an importance rating is assigned to each output parameter and every listed input parameter is correlated with every output parameter; a total value for each input is then obtained by multiplying each relationship rating by the importance of the corresponding output and summing across the row. The KPIV are listed on the left-hand side, while the KPOV are listed along the top of the diagram. In some cases,


the KPIV from one process are the KPOV for the next process; moisture content and operator unawareness are examples. The results of the cause-and-effect matrix are further analyzed with the Pareto diagram. The Pareto diagram (also known as the 20/80 principle) helps prioritize the different categories taken into account for further analysis, such as Failure Mode and Effect Analysis (FMEA) (A. Kumaravadivel, U. Natarajan, 2011). The KPOV are rank ordered according to the number of points from the cause-and-effect matrix. The table below relates the key process input variables and key process output variables for the foundry process.

[Table: Cause-and-effect matrix for the foundry process. The KPOV columns, each with a customer importance rating on a 1-5 scale, cover casting defects such as mould shift, blow holes, porosity and pin holes, warpage, swells, cuts and washes, misrun, cold shut, hot tears, crack, shrinkage, slag inclusion, core shift, cycle time, sand inclusion, scabs and dirt. The KPIV rows list thirty-six foundry causes, for example unskilled operator, improper handling of the core on the line, core box not mounted properly on the machine, poor sand quality, broken blow candle, variation in wash viscosity, gas leakage, poor crank core design, FIFO not maintained, moisture content and operator unawareness. Each cell holds a relationship score of 0, 1, 3 or 5, and the weighted row and column totals identify the dominant causes and the most affected outputs.]
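The scoring just described can be sketched in a few lines of code. The KPIV and KPOV names below echo entries from the foundry table, but the importance weights and relationship ratings are illustrative, not the paper's actual scores.

```python
# Customer importance of each output (KPOV), on a 1-5 scale (illustrative values).
kpov_importance = {"Blow holes": 5, "Porosity and pin holes": 5, "Sand inclusion": 5}

# Relationship of each input (KPIV) to each output: 0 none, 1 weak, 3 moderate, 5 strong.
kpiv_scores = {
    "Sand quality poor":  {"Blow holes": 5, "Porosity and pin holes": 3, "Sand inclusion": 5},
    "Unskilled operator": {"Blow holes": 3, "Porosity and pin holes": 3, "Sand inclusion": 1},
    "Moisture content":   {"Blow holes": 5, "Porosity and pin holes": 5, "Sand inclusion": 0},
}

# Weighted total per KPIV = sum over KPOVs of (importance x relationship score).
totals = {
    kpiv: sum(kpov_importance[k] * score for k, score in scores.items())
    for kpiv, scores in kpiv_scores.items()
}
for kpiv, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{kpiv:20s} {total}")
```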


2. Flow diagram


Flow diagram is a collective term for a diagram representing a flow or a set of dynamic relationships in a system. The term is also used as a synonym of the flowchart, and sometimes as its counterpart. When it comes to conveying how information flows through systems (and how that data is transformed in the process), data flow diagrams (DFDs) are the method of choice over technical descriptions for three principal reasons:
1. DFDs are easier for technical and nontechnical audiences to understand
2. DFDs can provide a high-level system overview, complete with boundaries and connections to other systems
3. DFDs can provide a detailed representation of system components
DFDs help system designers and others during the initial analysis stages to visualize a current system or one that may be necessary to meet new requirements. Systems analysts prefer working with DFDs, particularly when they require a clear understanding of the boundary between existing and postulated systems. DFDs represent the following:
1. External devices sending and receiving data
2. Processes that change that data
3. The data flows themselves
4. Data storage locations
DFDs consist of four basic components that illustrate how data flows in a system: entity, process, data store and data flow.
Entity: An entity is the source or destination of data. A source in a DFD represents an entity outside the context of the system. Entities either provide data to the system (a source) or receive data from it (a sink). Entities are often represented as rectangles (a diagonal line across the right-hand corner means that the entity is represented somewhere else in the DFD). Entities are also referred to as agents, terminators or sources/sinks.
Process: The process is the manipulation or work that transforms data: performing computations, making decisions (logic flow) or directing data flows based on business rules. In other words, a process receives input and generates some output. Process names (simple verbs and data-flow names, such as

"Submit Payment" or "Get Invoice") usually describe the transformation, which can be performed by people or machines. Processes can be drawn as circles or as segmented rectangles on a DFD, and include a process name and a process number.
Data Store: A data store is where a process stores data between processes for later retrieval by that same process or another one. Files and tables are considered data stores. Data store names (plural) are simple but meaningful, such as "customers," "orders," and "products." Data stores are usually drawn as a rectangle with the right-hand side missing, labeled with the name of the data storage area it represents, though different notations do exist.
Data Flow: Data flow is the movement of data between the entity, the process and the data store. Data flow portrays the interface between the components of the DFD. The flow of data in a DFD is named to reflect the nature of the data used (these names should also be unique within a specific DFD). Data flow is represented by an arrow annotated with the data name.
Some Guidelines on Valid and Non-Valid Data Flows: Before embarking on developing your own data flow diagram, there are some general guidelines you should be aware of. Data stores are storage areas and are static or passive; having data flow directly from one data store to another therefore does not make sense, because neither could initiate the communication. Data stores maintain data in an internal format, while entities represent people or systems external to them; because data from entities may not be syntactically correct or consistent, it is not a good idea to have a data flow directly between a data store and an entity, regardless of direction. Data flow between two entities would be difficult to handle because it would be impossible for the system to know about any communication between them; the only type of communication that can be modeled is that which the system is expected to know or react to. Processes on DFDs have no memory, so it would not make sense to show data flows between two asynchronous processes (between two processes that


may or may not be active simultaneously), because they may respond to different external events. Therefore, data flow should only occur in the following scenarios:
• Between a process and an entity (in either direction)
• Between a process and a data store (in either direction)
• Between two processes that can only run simultaneously
Here are a few other guidelines on developing DFDs:
• Data that travel together should be in the same data flow
• Data should be sent only to the processes that need the data
• A data store within a DFD usually needs to have an input data flow
• Watch for black holes: a process with only input data flows
• Watch for miracles: a process with only output data flows
• Watch for gray holes: insufficient inputs to produce the needed output
• A process with a single input or output may or may not be partitioned enough
• Never label a process with an IF-THEN statement
• Never show time dependency directly on a DFD (a process begins to perform tasks as soon as it receives the necessary input data flows)
An example of a data flow diagram is the foundry process flow diagram shown below.

[Figure: Foundry process flow diagram]
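The valid-flow rules listed above largely reduce to one check: at least one end of every data flow must be a process. The sketch below is assumed, not from the paper; the node names follow the examples in the text, and the finer point about asynchronous processes is deliberately ignored.

```python
ENTITY, PROCESS, STORE = "entity", "process", "data store"

# Example nodes; "Submit Payment", "Customer" and "orders" echo names used above.
nodes = {"Customer": ENTITY, "Submit Payment": PROCESS, "orders": STORE}

def flow_is_valid(src: str, dst: str) -> bool:
    """A data flow is valid only if a process sits at one end or the other."""
    return PROCESS in (nodes[src], nodes[dst])

print(flow_is_valid("Customer", "Submit Payment"))   # True: entity -> process
print(flow_is_valid("Submit Payment", "orders"))     # True: process -> data store
print(flow_is_valid("Customer", "orders"))           # False: entity -> data store
```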

3. PARETO CHART
A Pareto chart is a bar chart used to separate the "vital few" from the "trivial many." These charts are based on the Pareto principle, which states that 20 percent of the problems have 80 percent of the impact; that 20 percent of the problems are the "vital few" and the remaining problems are the "trivial many." The PARETO procedure creates Pareto charts, which display the relative frequency of quality-related problems in a process or operation. The frequencies are represented by bars ordered in decreasing magnitude, so a Pareto chart can be used to decide which subset of problems should be solved first or which problem areas deserve the most attention. Pareto charts provide a tool for visualizing the Pareto principle, which states that a small subset of problems tends to occur much more frequently than the remaining problems. In Japanese industry, the Pareto chart is one of the "seven basic QC tools" heavily used by workers and engineers. Ishikawa (1976) discusses how to construct and interpret a Pareto diagram; examples of Pareto diagrams are also given by Kume (1985). The PARETO procedure can be used to:
• construct Pareto charts from unsorted raw data (for instance, a set of quality problems that have not been classified into categories) or from a set of distinct categories and corresponding frequencies
• construct Pareto charts based on the percentage of occurrence of each problem, the frequency (number of occurrences), or a weighted frequency (such as frequency weighted by the cost of each problem)
• add a curve indicating the cumulative percentage across categories
• construct side-by-side Pareto charts or stacked Pareto charts
• construct comparative Pareto charts that enable comparison of the Pareto frequencies across levels of one or two classification variables, for example the frequencies of problems encountered with three different machines on five consecutive days
• highlight the "vital few" and the "useful many" categories by using different colours for bars corresponding to the n most frequently



occurring categories or the m least frequently occurring categories
• highlight special categories by using different colours for specific bars
• create charts using either a high-resolution graphics device or a line printer
• annotate charts created on graphics devices
• save charts created on graphics devices in a graphics catalogue for subsequent replay
• display sample sizes on Pareto charts
• display frequencies above the bars
• define the characters used for features on plots produced on line printers
• save information associated with the categories (such as the frequencies) in an output data set
• restrict the number of categories displayed to the n most frequently occurring categories

Both the chart and the principle are named after Vilfredo Pareto (1848-1923), an Italian economist and sociologist, whose first work, Cours d'Économie Politique (1895-1897), applied what is now termed the Pareto distribution to the study of income size. Juran originally referred to the remaining categories as the "trivial many"; however, because all problems merit attention, the term "useful many" is preferable. Refer to Burr (1990).

[Figure: Stage-wise core rejection data for 100 rejected cores, plotted as a Pareto chart. Rejections by stage: core making (Stage 3) 34, core supply to line (Stage 9) 29, core assembly (Stage 7) 20, core transportation (Stage 4) 12, core washing and drying (Stages 5 and 6) 5; cumulative contribution 34%, 63%, 83%, 95% and 100%. Axes: rejection of cores (number) and percent contribution (%).]
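A sketch of how the chart summarized above could be reproduced with matplotlib follows; the rejection counts are those recovered from the figure, while the plotting details are only one reasonable choice.

```python
import matplotlib.pyplot as plt

stages = ["Core making", "Core supply to line", "Core assembly",
          "Core transportation", "Core washing and drying"]
rejections = [34, 29, 20, 12, 5]                       # rejected cores out of 100

total = sum(rejections)
cumulative = [sum(rejections[:i + 1]) / total * 100 for i in range(len(rejections))]

fig, ax1 = plt.subplots()
ax1.bar(stages, rejections)
ax1.set_ylabel("Rejection of cores (no.)")
ax1.tick_params(axis="x", rotation=30)

ax2 = ax1.twinx()                                      # secondary axis for cumulative %
ax2.plot(stages, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("Cumulative contribution (%)")
ax2.set_ylim(0, 100)

plt.tight_layout()
plt.show()
```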

Conclusion: 63% of the cores were rejected at core making and at core supply to the line.

V. FISHBONE DIAGRAM
The Fishbone diagram (also called the Ishikawa diagram) is a tool for identifying the root causes of quality problems. It is named after Kaoru Ishikawa, the Japanese quality control statistician who pioneered the use of this chart in the 1960s (Juran, 1999).

The Fishbone diagram is an analysis tool that provides a systematic way of looking at effects and the causes that create or contribute to those effects. Because of the function of the Fishbone diagram, it may be referred to as a cause-and-effect diagram (Watson, 2004).


The fishbone (Ishikawa) diagram essentially presents, in a suggestive way, the correlations between an event (the effect) and its multiple contributing causes. The structure provided by the diagram helps team members think in a very systematic way. Among the benefits of constructing a fishbone diagram are that it helps determine the root causes of a problem or quality characteristic using a structured approach, encourages group participation and utilizes group knowledge of the process, and identifies areas where data should be collected for further study (Basic Tools for Process Improvement, 2009). The design of the diagram looks much like the skeleton of a fish. The representation can be simple, with slanted line segments leaning on a horizontal axis to suggest the distribution of the multiple causes and sub-causes that produce the effect, but it can also be completed with qualitative and quantitative appreciations, with names and coding of the risks that characterize the causes and sub-causes, with elements showing their succession, and with other ways of treating risk. The diagram can also be used to determine the risks of the causes and sub-causes of

the effect, but also its global risk (Ciocoiu, 2008). Usually, the analysis continues after the fishbone diagram with other representations and with methods for establishing treatment priorities. A lopsided diagram can indicate an over-focus on one area, a lack of knowledge in other areas, or simply that the causes are concentrated in the denser area; a sparse diagram may indicate a lack of general understanding of the problem or just a problem with few possible causes (Straker). The arrangement of the causes and sub-causes on the diagram must meet some relevance, membership or timeline criteria, but they can be put in any preferred order or even at random (Ciocoiu, 2008). After the diagram is accepted, which must be recorded in a decisional document (decision, minute, agreement, etc.), the risk analysis of the elements in the diagram follows, and then a plan is established for the treatment of the components (causes) and of the global risk of the characterized event (the effect).
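As a minimal sketch (assumed, not the paper's diagram shown in the figure below), the fishbone structure can be captured as nested categories in code; the causes echo items from the cause-and-effect matrix, while the 4M category grouping is illustrative.

```python
fishbone = {
    "effect": "High core rejection",
    "causes": {
        "Man":      ["Unskilled operator", "Operator unawareness"],
        "Machine":  ["Core box not mounted properly", "Broken blow candle"],
        "Material": ["Poor sand quality", "High moisture content"],
        "Method":   ["Improper handling of core on line", "FIFO not maintained"],
    },
}

print(f"Effect: {fishbone['effect']}")
for category, causes in fishbone["causes"].items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - {cause}")
```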

[Figure: Fishbone diagram for the foundry industry]


VI. ADVANTAGES AND DISADVANTAGES
ADVANTAGES
• Fishbone diagrams permit a thoughtful analysis that avoids overlooking any possible root causes for a need.
• The fishbone technique is easy to implement and creates an easy-to-understand visual representation of the causes, the categories of causes and the need.
• A fishbone diagram keeps the group focused on the "big picture" of the possible causes or factors influencing the problem or need.
• Even after the need has been addressed, the fishbone diagram shows areas of weakness that, once exposed, can be rectified before they cause more sustained difficulties.
DISADVANTAGES
• The simplicity of a fishbone diagram can be both its strength and its weakness. As a weakness, its simplicity may make it difficult to represent the truly interrelated nature of problems and causes in very complex situations.
• Unless there is an extremely large space on which to draw and develop the fishbone diagram, the cause-and-effect relationships may not be explored in as much detail as desired (WBI Evaluation Group, 2007, Needs Assessment Knowledge Base).
CONCLUSION
Six Sigma is a disciplined, data-driven methodology for eliminating defects in any process. Its tools and methodology address the overall cost of quality, both its tangible and intangible parts, seeking to minimize it while at the same time raising the overall quality level and thereby contributing to business success and profitability. The field is itself very broad. Six Sigma is a step-by-step business improvement strategy used to drive out waste, improve profitability, improve the efficiency and effectiveness of all operations, and reduce quality costs, so that operations meet or even exceed customers' needs and expectations. Six Sigma is a toolkit and program for improving quality in manufacturing processes, a methodology that aims to reduce variation in a process. A Six Sigma DMAIC methodology can be used to understand the root causes and the management of

recalls and to analyze the costs in a consumer products supply chain. There are many variables in a supply chain, so it is essential for manufacturers to have procedures in place to prevent failures that result in a product recall.

REFERENCES
1. Tonner, C. (2003), "Six Sigma", iSixSigma. Available at: http://www.isixsigma.com/dictionary/Six_Sigma-85.htm
2. Prewitt, E. (2003), "Six Sigma Comes to IT: Once Confined to Manufacturing Groups, the Quality Improvement Program called Six Sigma is now being used to Clean Up IT's Act", CIO, Vol. 16 No. 21, pp. 87-92.
3. Chandna, P. and Chandra, A. (2009), "Quality tools to reduce crank shaft forging defects: an industrial case study", Journal of Industrial and Systems Engineering, Vol. 3 No. 1, pp. 27-37.
4. Desai, D.A. (2008), "Improving productivity and profitability through Six Sigma: experience of a small-scale jobbing industry", International Journal of Productivity and Quality Management, Vol. 3 No. 3, pp. 290-310.
5. Pande, P. et al. (2002), The Six Sigma Way Team Fieldbook: An Implementation Guide for Process Improvement Teams, McGraw-Hill Professional, New York, NY.
6. Wortman, B. et al. (2001), The Certified Six Sigma Black Belt Primer, First Edition, McGraw-Hill, New York, NY.
7. Basic Tools for Process Improvement (1995, May 3). Retrieved December 20, 2009, from Balanced Scorecard Institute: http://www.balancedscorecard.org/Portals/0/PDF/c-ediag.pdf
8. Ciocoiu, C. N. (2008), Managementul riscului. Teorii, practici, metodologii, ASE, Bucharest.
9. Ilie, G. (2009), De la management la guvernare prin risc, UTI Press & Detectiv, Bucharest.
10. Juran, J. M. (1999), Juran's Quality Handbook, 5th edition, McGraw-Hill.


11. Straker, D. (n.d.), "Cause-Effect Diagram". Retrieved January 10, 2010, from Quality Tools: http://syque.com/quality_tools/toolbook/causeeffect/cause-effect.htm
12. Watson, G. (2004), "The Legacy of Ishikawa", Quality Progress, 37(4), pp. 54-47.
13. WBI Evaluation Group (2007), Needs Assessment Knowledge Base.
14. Gryna, F. (2001), Quality Planning and Analysis: From Product Development Through Use, 4th edition, McGraw-Hill, New York.
