PACKT has just posted a sampling of four recipes that I curated from the book. I think it is a fun selection. Here I’ve written a little bit about my rationale for choosing the recipes that I did. Enjoy.

From Chapter 2, Data Preparation – Select, I’ve chosen Using the Feature Selection node creatively to remove, or decapitate, perfect predictors. It happens to be one of mine. It is not difficult, but it uses a key feature in an unexpected way.

From Chapter 6, Selecting and Building a Model, Next-best-offer for large data sets is our representative of the ‘pushing the limits’ category. Most of the documentation on this subject uses a different approach that, while workable on smaller data sets, is not scalable. We were fortunate to have Scott Mutchler contribute this recipe in addition to his fine work in Chapter 8.

From Chapter 7, Modeling – Assessment, Evaluation, Deployment, and Monitoring, Correcting a confusion matrix for an imbalanced target variable by incorporating priors, by Dean Abbott, is a great example of the unexpected. The Balance node is not the only way to deal with an out-of-balance target. Also from Chapter 7, I’ve chosen Combining generated filters. This short recipe definitely evokes the reaction of “I didn’t know you could do that!” It was provided by Tom Khabaza.

Depending on how you count, it has been a two-year process. It was that long ago that I asked Tom Khabaza if he would consider taking on the challenge of an Introductory Guide to SPSS Modeler (aka Clementine). We had a number of spirited discussions via Skype and a flurry of email exchanges, but we were both so busy that we barely had time to have planning meetings, much less write in between them. I don’t know if our busy consulting schedules made us the best candidates or the worst candidates for undertaking the first third-party book on the subject.

Some weeks after starting our quest, I got a LinkedIn message from an acquisitions editor at PACKT – a well-established publisher of technical books in the UK. She wondered if I would consider writing a book about Modeler. I replied the same day. In fact, I replied in minutes, because I had been working online. She had a very different idea for a book, however. She recommended a ‘Cookbook’: a large number of small problem-solving ‘recipes’. Tom and I felt there was still a need for an introductory book. (Look for it in Q1 of 2014.) Nonetheless, we were intrigued. Encouraged by the publisher, we got to work again, but in a different direction. Believing, naively, that more authors would make it easier, I recruited one more, then two more, and eventually a third additional author. I can now tell you that having five authors does not make it easier. However, it does make it better. I am very proud of the results.

We cover a wide variety of topics, but all the recipes focus on the practical, step-by-step application of ‘tricks’ or non-obvious solutions to common problems. From the Preface: “Business Understanding, while critical, is not conducive to a recipe based format. It is such an important topic, however, that it is covered in a prose appendix. Data Preparation receives the most attention with 4 chapters. Modeling is covered, in depth, in its own chapter. Since Evaluation and Deployment often use Modeler in combination with other tools, we include somewhat fewer recipes, but that does not diminish its importance. The final chapter, Modeler Scripting, is not named after a CRISP-DM phase or task, but is included at the end because its recipes are the most advanced.”

Perhaps our book is a bit more philosophical than most analysis or coding books. Certainly, the recipes are 90% of the material, but we absolutely insisted on the Business Understanding section: “Business objectives are the origin of every data mining solution. This may seem obvious, for how can there be a solution without an objective? Yet this statement defines the field of data mining; everything we do in data mining is informed by, and oriented towards, an objective in the business or domain in which we are operating. For this reason, defining the business objectives for a data mining project is the key first step from which everything else follows.” Weighing in at 20 pages, it is a substantial addition to a substantial eight-chapter book with dozens of recipes, multiple data sets, and accompanying Modeler streams.

I am also terribly proud of my coauthors. We have a kind of mutual admiration society going. I am pleased that they agreed to coauthor with me. They, I suspect, were glad that they didn’t have to play the administrative role that I ended up with. In the end, we produced a project where each one of us learned a great deal from the others. Our final ‘coauthor’, Colin Shearer, was kind enough to write a Foreword for us: “The first lines of code for Clementine were written on New Year’s Eve 1992, at my parents’ house, on a DEC Station 3100 I’d taken home for the holidays.”

Colin has been a part of the story of Modeler from the very beginning, so we were terribly pleased to have him support us in this effort. All six of us have run into each other repeatedly over the years. The worldwide Modeler community was a very small one 15 years ago, when most of us were learning Modeler. (Tom has a bit of a lead on the rest of us.) With IBM’s acquisition of SPSS Inc. some years ago, the community has grown rapidly. From the Foreword: “The authors of this book are among the very best of these exponents, gurus who, in their brilliant and imaginative use of the tool, have pushed back the boundaries of applied analytics. By reading this book, you are learning from practitioners who have helped define the state of the art.”

The book is being released in November, just a few weeks away. More information on the book, including a prerelease purchase opportunity, can be found on the PACKT website.

More information on the authors can be found here:

Scott Mutchler and I are the managers of the Advanced Analytics Team at QueBIT.

Dean Abbott is President of Abbott Analytics.

Meta Brown blogs at MetaBrown.com

More information about Tom Khabaza can be found at Khabaza.com

KDD 2013

After many years of trying to align my calendar and travel schedule, I have finally made it: I am at KDD 2013 in Chicago.

As I have always feared, it is very academic in nature – lots of graduate student papers and the like. There is not a whole lot of focus on application here. Nonetheless, I think it is important to monitor what our friends in the Computer Sciences are up to. So far I have been to a Big Data Camp and a workshop focusing on Healthcare. I have been constantly reminded of the vast gap between my clients – software end users – and the academic researchers. The distance between them is matched by the gap between the software users and their colleagues, colleagues who don’t care terribly much about the software but must understand the solution. I feel like a fragile bridge between these very different worlds. I won’t be able to justify coming every year, but I needed to experience this first hand.



My calendar has finally allowed me to attend the Ohio State Center for Public Health Practice's summer program. They had one full-length weekend course. I have just completed the first day of David Hosmer's Survival Analysis class. The class follows the content of his text (coauthored with Stanley Lemeshow and Susanne May).

The class is a bit intense, to be honest: more than 200 slides per day, clocking in at almost 8 hours of content. There are breaks, of course, but class started at 8:30 and ended just a few minutes before 5. Since most people reading my blog would be in industry and coming off a full work week, be forewarned. Having issued the warning, however, I learned a great deal. I've taught chapter-length treatments of this subject in SPSS Inc.'s old three-day Advanced Stats class. That 90 minutes of material clearly had to leave plenty of detail out. Even at a full two days, Dr. Hosmer has to leave plenty of material out of the discussion. Some of the highlights of the experience included learning more about options in Stata and SAS, and when not to trust defaults – topics that just didn't fit my presentation on the subject.

I expect to post again when I've had a chance to reduce some of my lessons learned to writing. In the meantime, make a note to check out the 2014 program! It is held around this time each year.

IBM has just released a new SPSS-brand product. I have numerous friends in the SPSS community, and I have been a frequent beta tester, but I didn't know about this release in advance. It does resemble something that I saw demonstrated at last year's IOD. What to make of this product? It is web-based and looks pretty slick: Analytic Catalyst. There is also a video on YouTube. I like the visuals, and I agree that it looks easy to use. I'm anxious to try it, and might recommend it in certain client situations.

Never forgetting that the lion's share of a Data Mining project's labor is spent on Data Prep, and since I've never been on a project that didn't need Data Prep, I think that a tool like this is most useful after a successful Data Mining project is complete. For instance, I worked like crazy on a recent churn project, but after the project the marketing manager had to explore high-churn segments to come up with intervention strategies. This could be used for that purpose, or perhaps 'repurposed' for it, since the video seems to indicate that the tool is meant for the early stages of a project.

My reaction, not a concern exactly, is about the premise. It seems to assume that the problem is business users tapping directly into Big Data to explore it, searching for 'insight'. I don't think most organizations need more insight. I think they need more deployed solutions: solutions that have been validated and inserted into the day-to-day running of the business. My two cents.

As of today, I have joined QueBIT Consulting as VP and General Manager of the Advanced Analytics team. I will have the exciting task of building a world-class team of SPSS experts. Joining the team with me will be Scott Mutchler of Big Sky Analytics.

Here is today's press release.

Statistical hypothesis testing does an OK job of keeping us from falsely proving the presence of effects, but it does a mediocre job (or worse) at disproving them. There are a lot of reasons for this, poor training among them, but it is largely systemic. I spent my Thanksgiving morning watching the “Vanishing of the Bees,” and my mind kept drifting to thoughts of Type II error. I know. I can grasp the obvious … maybe I need a break.

I don’t have the biological expertise to evaluate, in detail, the research on either side of the fascinating Colony Collapse Disorder debate, but I am always suspicious of negative findings of any kind unless I can read the research. In the case of this documentary, the claim (a claim that is perhaps biased) is that the pesticides were determined to be safe after a fairly large dose was administered to adult bees, and the adult bees did not die during the research period. Was that enough? I can’t speak to the biology/ecology research, but it got me thinking about Type II.

We know well the magnitude of the risk we face in committing Type I error, and it is trained into us to the point of obsession. When meeting analysts wearing this obsession on their sleeve, reminding everyone who will listen, leveling their wrath on marketing researchers who dare to use exploratory techniques, I am often tempted to ask about controlling for Type II. I am often underwhelmed with the reply. There are just so many things that can go wrong when you get a non-significant result. Although I wrote about something similar in my most recent post, I am compelled to reduce my thoughts to writing again:

1) The effect can be too small for the sample size. Ironically, the problem is usually the opposite: often researchers don’t have enough data even though the effect is reasonably big. In this case, I was persuaded by the documentary’s argument that bee “birth defects” would be a serious effect. Maybe short-term adult death was too crude an endpoint; a more subtle effect would require more data. (A small simulation sketch after this list makes the arithmetic concrete.)

2) The effect can be delayed. My own work doesn’t involve bees, but what about the effect of marketing? Do we always know when a promotion will kick in? Are we still experiencing the effects of last quarter’s campaign? Does that cloud our ability to measure the current campaign? Might the effects overlap?

3) The effect could be hidden in an untested interaction (AKA your model is too simple). The bee documentary proposed an easy-to-grasp hypothesis – that the pesticide accumulates over time in the adult bee. Maybe a proximity * time interaction? We may never know, but was the sample size sufficient to test for interactions, or was the Power Analysis done assuming only main effects? Since they were studying bee autopsies, the sample size was probably small. I don’t know the going rate for a bee autopsy, but they are probably a bit expensive, since the expertise would seem rare.

4) Or it’s hidden in a tested interaction (AKA your model is too complex). I had a traumatic experience years ago when a friend asked me what “negative degrees of freedom” were. Since she was not able to produce a satisfactory answer to a query regarding her hypothesized interactions, her dissertation committee required her to “do all of them”. Enough said. It was horrible.

5) The effect might simply be, and what could be more obvious, not hypothesized. This, we might agree, is the real issue regarding the adult bee death hypothesis. It may not have been the real problem at all.
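Since I promised a sketch: here is a toy Monte Carlo illustration in Python of how often a perfectly real, but modest, effect fails to reach p < .05 at two different sample sizes. It has nothing to do with the actual bee research; the effect size, the sample sizes, and the use of a t-test are all assumptions chosen only to make the Type II point.

```python
# Simulate many studies in which a real but modest effect exists, and count
# how often a t-test at alpha = .05 fails to detect it (a Type II error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.3       # a real difference of 0.3 standard deviations
alpha = 0.05
n_simulations = 5000

for n_per_group in (20, 200):
    misses = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(control, treated)
        if p_value >= alpha:            # real effect, but "not significant"
            misses += 1
    print(f"n={n_per_group:>3} per group: missed the real effect "
          f"{misses / n_simulations:.0%} of the time")
```

With the smaller sample, the very real effect is missed most of the time; with the larger one, far less often.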

Statistics doesn’t help you find answers. Not really. It only helps you prove a hypothesis. When you are lucky, you might be able to disprove one. Often, we have to simply “fail to prove”. In any case, I recommend the documentary. Now that I’ve been able to vent a bit about Type II, I should watch it again and focus more of my attention on the bees.

When you get a statistical result, it is too easy to jump immediately to the conclusion that the finding “is statistically significant” or “is not statistically significant.” While that is literally true, since we use those words to describe results below .05 and above .05, it does not imply that there are only two conclusions to draw about our finding. Have we ruled out the possible ways that our statistical result might be tricking us?

Things to think about if it is below .05

Real: You might have a Real Finding on your hands. Congrats. Consider the other possibilities first, but then start thinking about who needs to know about your finding.

Small Effect: Your finding is Real, but it is of no practical consequence. Did you definitively prove a result with an effect so small that there is no real-world application of what you have found? Did you prove that a drug lowers cholesterol at the .001 level, but by an amount so small that no Doctor or patient will care? Is your finding of a large enough magnitude to prompt action or to get attention?

Poor Sample: Your data does not represent the population. There is nothing you can do about that at this point, but are you sure you have a good sample? Did you start with a ‘Sampling Frame’ that accurately reflects the population? What was your response rate on this particular variable? Would the finding hold up if you had more complete data? Have you checked to see whether respondent versus non-respondent status on this ‘significant’ variable is correlated with any other variable you have? Maybe you have a census, or you are Data Mining – are you sure you should be focused on p values?


Rare Event: You have encountered that 5% thing. It is going to happen. The good news is that we know how often it is going to happen. If you are like everyone else, you are probably operating at 95% confidence, and each test, by definition, has a 5% chance of coming in below .05 from random forces alone. So you have a dozen findings – which ones are real? Was choosing 95% confidence a deliberate and thoughtful decision? Have you ensured that Type I error will be rare? If you have a modest sample size, did you choose a level of confidence that gave you enough Statistical Power (see below)? If you are doing lots of tests (perhaps Multiple Comparisons), did you take this into account, or did you use 95% confidence out of habit?
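For the dozen-findings problem, here is a minimal sketch in Python of what adjusting for multiple comparisons can look like, using statsmodels. The twelve p values are invented purely for illustration, and the choice of Bonferroni and Benjamini-Hochberg is mine, not a recommendation for any particular study.

```python
# Adjust a batch of p-values for the fact that many tests were run.
from statsmodels.stats.multitest import multipletests

raw_p = [0.003, 0.012, 0.021, 0.034, 0.041, 0.048,
         0.052, 0.060, 0.140, 0.280, 0.510, 0.730]

# Bonferroni is the most conservative; Benjamini-Hochberg ("fdr_bh") controls
# the false discovery rate and is a common middle ground.
for method in ("bonferroni", "fdr_bh"):
    reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(f"{method}: {int(sum(reject))} of {len(raw_p)} findings survive adjustment")
```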

Too Liberal: You have violated an assumption which has made your result Liberal. Your p value only appears to be below .05. For instance, did you use the usual Pearson Chi-Sq when the Continuity Correction would have been better? Maybe Pearson was .045, Likelihood Ratio was .049, and Continuity Correction was .051. Did you choose wisely? Did you use an Independent Samples T-Test when a non-parametric test would have been better? Having good Stats books around can help, because they will often tell you that a particular assumption violation tends to produce Liberal results. You could always consider a Monte Carlo simulation or an Exact Test, and make this problem go away. (An interesting ponderable: are we within a generation of abandoning distributional assumptions as ordinarily outfitted computers get more powerful?)
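To make the Pearson versus Likelihood Ratio versus Continuity Correction point concrete, here is a small sketch in Python using scipy on an invented 2x2 table. The counts are made up; the point is simply that these closely related tests can return p values that sit on opposite sides of .05, which is exactly when the choice matters.

```python
# Several tests of independence on the same (invented) 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[18, 32],
                  [31, 24]])

_, p_pearson, _, _ = chi2_contingency(table, correction=False)   # Pearson Chi-Sq
_, p_yates, _, _   = chi2_contingency(table, correction=True)    # Continuity Correction
_, p_lr, _, _      = chi2_contingency(table, correction=False,
                                      lambda_="log-likelihood")  # Likelihood Ratio
_, p_exact         = fisher_exact(table)                         # Exact Test

print(f"Pearson: {p_pearson:.3f}  Likelihood Ratio: {p_lr:.3f}  "
      f"Continuity Corrected: {p_yates:.3f}  Exact: {p_exact:.3f}")
```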

Things to think about if it is above .05

Negative Finding: You might have disproven your hypothesis. (I know that, strictly speaking, you have ‘failed to reject’ your ‘Null Hypothesis’, but does anyone talk that way outside of a classroom?) Congrats might be in order. Consider the other possibilities and then start thinking about who needs to know about your negative finding. If it is the real thing, a negative finding could be valuable. Be careful, however, before you shout that the literature was wrong. Make sure it is a bona fide finding.

Power: You may simply have lacked enough data. Did you do a Power Analysis before you began? Was your sample size commensurate with your number of Independent Variables? Did you begin with a reasonable amount of data, but then attempt every interaction term under the sun? Did you thoughtlessly include effects like 5-way interactions without measuring the impact on your ability to detect true effects? If you aren’t sure what a Power Analysis is, it is best that you describe your negative results using phrases like “We failed to prove X”, not “We were able to prove that the claim of X, believed to be true for years, was disproved by our study (N=17)”. You can also Google Jacob Cohen’s wonderful “Things I Have Learned (So Far)” to learn more about Power Analysis. I mention it in my Resources section, and it has influenced my thinking for years. Its influence is certainly present in this post.
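If you have never run one, here is roughly what an a priori Power Analysis looks like in code: a sketch in Python using statsmodels, where the Cohen's d values and the 80% power target are conventional assumptions rather than magic numbers.

```python
# A priori power analysis for a two-group comparison (independent samples t-test).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8):   # Cohen's small, medium, large
    n = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
    print(f"d={effect_size}: about {n:.0f} cases per group for 80% power")

# The reverse question: with only 17 cases per group, how much power did we
# ever have to detect a small effect?
power = analysis.solve_power(effect_size=0.2, nobs1=17, alpha=0.05)
print(f"n=17 per group, d=0.2: power is only about {power:.0%}")
```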

Poor Sample: Your data is not representative of the population. This one can get your p value to move, incorrectly, in either direction.

Too Conservative: You have violated an assumption which has made your result Conservative. Your p value only appears to be above .05. Did you use an adjusted test in an instance when no adjustment was needed? Did you use Scheffe for Multiple Comparisons, but aren’t quite sure how to justify your choice? Most assumption violations make our tests lean Liberal, coming in too low, but the opposite can occur.

 

This list has served me well for a long time. Always best to report your findings thoughtfully. Statistics, at first, seems like a system of Rule Following. It is more subtle than that. It is about extracting meaning, and then persuading an audience with data. Without an audience, there would be no point. They deserve to know how certain (or uncertain) we are.

I will be speaking in Kuala Lumpur, Malaysia next week on the subject of Data Mining. I will be discussing Data Mining in general, and then participants will get a chance to try it using the resources provided by the excellent, tool-neutral Elder, Miner, and Nisbet book. I believe the event is at capacity, but there are already tentative plans to try this format again in January 2012, also in Kuala Lumpur. The event organizer is in charge of the details, but if you are interested in finding out more about the January four-day event, please email me.

Essential Elements of Data Mining

This is my attempt to clarify what Data Mining is and what it isn’t. According to Wikipedia, “In philosophy, essentialism is the view that, for any specific kind of entity, there is a set of characteristics or properties all of which any entity of that kind must possess.” I do not seek the Platonic form of Data Mining, but I do seek clarity where it is often lacking. There is much confusion surrounding how Data Mining is distinct from related areas like Statistics and Business Intelligence. My primary goal is to clarify the characteristics that a project must have to be a Data Mining project. By implication, Statistical Analysis (hypothesis testing), Business Intelligence reporting, Exploratory Data Analysis, etc., do not have all of these defining properties. They are highly valuable, but have their own unique characteristics. I have come up with ten. It is quite appropriate to emphasize the first and the last. They are the bookends of the list, and they capture the heart of the matter.

1) A Question
2) History
3) A Flat File
4) Computers
5) Knowledge of the Domain
6) A lot of Time
7) Nothing to Prove
8) Proof that you are Right
9) Surprise
10) Something to Gain

1) A Question: Data Mining is not an unfocused search for anything interesting. It is a method for answering a specific question, meeting a particular need. Getting new customers is not the same as keeping the customers you already have. Of course, they are similar, but different in both big and subtle ways. The bottom line is that every decision that you make about the data that you select and assemble flows from the business question.

2) History: Data Mining is not primarily about the present tense, which distinguishes it from Business Intelligence reporting. It is about using the past to predict the future. How far into the past? Well, if your customers sign a 12-month contract, then the history probably reaches back more than 12 months. It must be old enough to contain a cohort of customers that have started and ended a process that is ongoing. Did they renew? Did they churn? You need a group of records for which the outcome of the process is known historically. This outcome status usually takes the form of a Target or Dependent Variable. It is the cornerstone of the data set that one must create, and it is the key to virtually all Data Mining projects.
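To make the Target concrete, here is a hypothetical sketch in pandas of defining it from a historical cohort. The column names, dates, and cutoff are invented for illustration; in practice this happens in whatever data preparation tool the project uses.

```python
# Define the Target variable from a cohort whose outcome is already known.
import pandas as pd

contracts = pd.DataFrame({
    "customer_id":  [101, 102, 103, 104],
    "contract_end": pd.to_datetime(["2012-03-31", "2012-05-15",
                                    "2012-07-01", "2013-02-28"]),
    "renewed":      [True, False, True, False],
})

# Only customers whose renew-or-churn outcome is historical belong in the cohort.
cutoff = pd.Timestamp("2012-12-31")
cohort = contracts[contracts["contract_end"] <= cutoff].copy()

# The Target (Dependent) variable: 1 = churned, 0 = renewed.
cohort["churned"] = (~cohort["renewed"]).astype(int)
print(cohort[["customer_id", "churned"]])
```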


3) A Flat File: Data Miners are not in the Dark Ages. They work with relational databases on a daily basis, but the algorithms that are used are designed to run on flat files. Software vendors are proud to tout “in-database modeling,” and it is exciting for its speed, but you still have to build a flat file that has all of your records and characteristics in one table. The Data Miner and author Gordon Linoff calls this a “customer signature.” I rather prefer the idea of a customer “footprint” as it always involves an accumulation of facts over time. The resulting flat file will be unique to the project, specifically built to allow the particular questions of the Data Mining project to be answered.
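Here is a hypothetical pandas sketch of what building that footprint can look like: transactional history rolled up into one row per customer. The column names and aggregations are invented; a real project would choose them to answer the business question.

```python
# Roll transactional history up into a one-row-per-customer flat file.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 103, 103],
    "amount":      [25.0, 40.0, 15.5, 60.0, 12.0, 33.0],
    "tx_date":     pd.to_datetime(["2012-01-05", "2012-03-17", "2012-02-02",
                                   "2012-01-20", "2012-04-11", "2012-06-30"]),
})

flat_file = transactions.groupby("customer_id").agg(
    n_transactions=("amount", "size"),
    total_spend=("amount", "sum"),
    avg_spend=("amount", "mean"),
    first_purchase=("tx_date", "min"),
    last_purchase=("tx_date", "max"),
).reset_index()

# One row per customer, ready to be joined to the Target variable.
print(flat_file)
```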

4) Computers: Data Mining data sets are not always huge. Sometimes they are in the low thousands, and sometimes a carefully selected sample of a few percent of your data is plenty to find patterns. So, despite all the talk of Big Data, the size of the data file is not really a limiting factor on today’s machines. Statistics software packages were capable of running a plain vanilla regression on larger data sets decades ago. The real thing that separates Data Mining from R. A. Fisher and his barley data set is that Data Mining algorithms are highly iterative. Considerable computing power is needed to find the best predictors and try them in all possible combinations using a myriad of different strategies. Data Mining is not simply Statistics on Big Data. Data Mining algorithms were created in a post computing environment to solve post computing problems. They are qualitatively different from traditional statistical techniques in fundamental and important ways, and even when traditional techniques are used they are used in the service of substantively different purposes.

5) Knowledge of the Domain: A sales rep once told me a story, probably apocryphal, about the early days of the Data Mining software I use. A banking client wanted to put the software to the test, so the client said: “Here are some unlabeled variables. We are going to keep the meaning of them secret. Tell us which are the best predictors of variable X. If you answer ‘correctly’, we will buy.” What a horrible idea! The Data Mining algorithms play an important role in guiding the model building process, but only the human partner in the process can be the final arbiter of what best meets the need of the business problem. There must be business context, and if the nature of the data requires it, that context might involve Doctors, Engineers, Call Center Managers, Insurance Auditors or a host of other specialists.

6) A Lot of Time: Data Mining projects take time, a lot of time. They take many weeks, and perhaps quite a few months. If someone asks a Data Miner if they can have something preliminary in a week, they are thinking about something other than Data Mining. Maybe they really mean generating a report, but they don’t mean Data Mining. Problem definition takes time because it involves a lot of people, assembled together, hashing out priorities, figuring out who is in charge of what. With that much collaboration, the project lead can’t easily make up lost time by burning the midnight oil. Data Preparation takes much of the time. Perhaps you assume that you will be Mining the unaltered contents of your Data Warehouse. It was created to support BI Reporting, not to support Data Mining, so that is not going to happen. Finally, when you’ve got something interesting, you have to reconvene a lot of people again, and you aren’t done until you have deployed something, making it part of the decision management engines of the business. (See Element 10.)

7) Nothing to Prove: If you are verifying an outcome, certain that you are right, having carefully chosen predictors in advance, simply curious how well the model fits, you aren’t doing Data Mining. Perhaps you are merely exploring the data in advance, biding your time, waiting until your deadline approaches and then using hypothesis testing to congratulate yourself on how well your model fits the data that you explored. This is, of course, the worst possible combination of Statistics and Data Mining imaginable, and it violates the most basic assumptions of hypothesis testing. Neither of these approaches is Data Mining.

8) Proof that you are Right: Data Mining, by its very nature, does not have a priori hypotheses, but it does need proof. A contradiction? The most fundamental requirement of Data Mining is that the same data that was used to uncover the pattern must never be used to prove that the pattern applies to future data. The standard way of doing this is to divide one’s data randomly into two portions, building the model on the Train data set and verifying the model on the Test data set. In this is found the essence of Data Mining, because it gives one the freedom to explore the Train data set, uncovering its mysteries, while awaiting the eventual judgement of the Test data set.
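For readers who want to see the discipline in code, here is a minimal, tool-neutral sketch in Python using scikit-learn and a synthetic data set (in Modeler itself, a Partition node plays this role). The data, the decision tree, and the 70/30 split are all illustrative assumptions.

```python
# Build on the Train partition, judge on the held-out Test partition.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=7)

# Patterns are found on Train; Test is touched only once, for the final verdict.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=7, stratify=y)

model = DecisionTreeClassifier(max_depth=4, random_state=7).fit(X_train, y_train)
print("Accuracy on the held-out Test partition:",
      round(accuracy_score(y_test, model.predict(X_test)), 3))
```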

9) Surprise: A common mistake in Data Mining is being too frugal with predictors, leaving out this or that variable because “everyone knows” that it is not a key driver. Not wise. Even if this is true, it discounts the insight that an unanticipated interaction might provide. Even if true, it is a needless precaution because Data Mining algorithms are designed to be resilient to large numbers of related predictors. This is not to say that feature selection is not important – it is a key skill – but rather that Data Miners must be cautious when removing variables. Each of those variables costs the business money to record, and the insights they might offer have monetary value as well. Doing variable reduction well in Data Mining stands in striking contrast to doing variable reduction well in Statistics.

10) Something to Gain: It might be somewhat controversial, but I think not overly so, to establish an equivalence: Data Mining Equals Deployment. Without deployment, you may have done something valuable, perhaps even accompanied by demonstrable ROI, but you have fallen short. You may have reached a milestone. You may even have met the specific requirements of your assignment, but it isn’t really Data Mining until it is deployed. The whole idea of Data Mining is taking a carefully crafted snapshot, a chunk of history, establishing a set of Best Practices, and inserting them into the flow of Decision Making of the business.

The issue of clarifying what Data Mining is (and what to call it) comes up often in conversation among Data Miners, so I hope the community of data analysts will find this a worthy enterprise. I intend to present this list to new Data Miners when I meet them in a tool-neutral setting. Please do provide your feedback. Would you add to the list? Do you think there are any properties listed here that are not required to call a project Data Mining?